We would like to inform you that regular maintenance of the Managed Kubernetes service is due to be carried out.
During the maintenance, connections to your cluster's kube-apiserver might occasionally become slow, possibly also requiring a retry.
Kubernetes resources could also be affected, meaning you might not be able to make changes to your clusters or node pools, either via the DCD or the API.
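If your own tooling talks to the cluster's kube-apiserver during the maintenance window, a small client-side retry is usually enough to bridge the occasional slow or dropped connection mentioned above. The snippet below is only an illustrative sketch, assuming the official kubernetes Python client and a local kubeconfig; it is not part of the Managed Kubernetes product.

```python
# Minimal retry sketch for transient kube-apiserver slowness (illustrative only).
# Assumes the official "kubernetes" Python client and a kubeconfig on this machine.
import time

import urllib3
from kubernetes import client, config
from kubernetes.client.rest import ApiException


def with_retries(call, attempts=4, base_delay=1.0):
    """Run an API call, retrying on timeouts and transient 5xx responses."""
    for attempt in range(1, attempts + 1):
        try:
            return call()
        except ApiException as exc:
            # Retry only server-side errors; 4xx responses are not transient.
            if exc.status < 500 or attempt == attempts:
                raise
        except urllib3.exceptions.HTTPError:
            # Covers read timeouts and dropped connections during maintenance.
            if attempt == attempts:
                raise
        time.sleep(base_delay * 2 ** (attempt - 1))  # exponential backoff


config.load_kube_config()
v1 = client.CoreV1Api()
namespaces = with_retries(lambda: v1.list_namespace(_request_timeout=10))
print([ns.metadata.name for ns in namespaces.items])
```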
Posted on Nov 03, 2025 - 15:08 UTC
This is a routine maintenance during which we update the compute nodes in our FRA and FRA2 data centers.
What are we doing:
- We are rolling out several stability and performance improvements
- We are deploying several bug fixes
Expected impact: For each processed compute node, we expect a period of packet loss of up to 10 seconds. In exceptional cases, you might experience packet loss for up to 1 minute. The availability of your services hosted on the nodes will be affected as described above. If you have services distributed across several nodes in the affected data centers, you may experience multiple instances of packet loss.
Because of the high number of compute nodes in FRA, we will spread the maintenance in this data center over two days. We will announce each maintenance window separately.
Posted on Nov 04, 2025 - 18:32 UTC
We would like to inform you that regular maintenance of the Cloud Panel is due to be carried out. During this time, access to the Cloud Panel will be limited, while access to the vSphere Clients, NSX Managers, and the management VPN will not be impacted. Availability and accessibility of your Private Cloud resources will remain unaffected.
Posted on Nov 07, 2025 - 13:58 UTC
This is a routine maintenance during which we update the compute nodes in our FRA and TXL data centers.
What are we doing:
- We are rolling out several stability and performance improvements
- We are deploying several bug fixes
Expected impact: For each processed compute node, we expect a period of packet loss of up to 10 seconds. In exceptional cases, you might experience packet loss for up to 1 minute. The availability of your services hosted on the nodes will be affected as described above. If you have services distributed across several nodes in the affected data centers, you may experience multiple instances of packet loss.
Because of the high number of compute nodes in TXL, we will spread the maintenance in this data center over two days. We will announce each maintenance window separately.
Posted on Nov 04, 2025 - 18:40 UTC
This is a routine maintenance during which we update the compute nodes in our TXL data center.
What are we doing:
- We are rolling out several stability and performance improvements
- We are deploying several bug fixes
Expected impact: For each processed compute node, we expect a period of packet loss of up to 10 seconds. In exceptional cases, you might experience packet loss for up to 1 minute. The availability of your services hosted on the nodes will be affected as described above. If you have services distributed across several nodes in the affected data centers, you may experience multiple instances of packet loss.
Posted on Nov 04, 2025 - 18:43 UTC
Completed -
The scheduled maintenance has been completed.
Nov 6, 00:00 UTC
In progress -
Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 5, 21:00 UTC
Update -
We will be undergoing scheduled maintenance during this time.
Nov 3, 13:22 UTC
Update -
We would like to inform you that we have rescheduled the maintenance from Monday, 03.11, 18:00 - 21:00 UTC to Wednesday, 05.11, 21:00 - 01:00 UTC.
During the maintenance period you may experience temporary connection issues of less than 10 seconds in duration (per bucket) when accessing or editing data in your user-owned buckets. In some cases, HTTP 5XX errors may occur within these 10 seconds.
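Because each interruption is expected to last under 10 seconds per bucket, client-side retries should normally bridge it. The following is a minimal sketch, assuming boto3 against an S3-compatible endpoint; the endpoint URL, bucket, and key below are placeholders, not official values.

```python
# Sketch: let the SDK absorb brief 5XX responses during the maintenance window.
# Credentials are resolved from the environment as usual; endpoint, bucket and
# key are placeholders.
import boto3
from botocore.config import Config

s3 = boto3.client(
    "s3",
    endpoint_url="https://<your-object-storage-endpoint>",  # placeholder
    config=Config(
        retries={"max_attempts": 8, "mode": "standard"},  # retries 5XX/timeouts with backoff
        connect_timeout=5,
        read_timeout=15,
    ),
)

# A read that would normally fail on a transient 5XX is retried transparently.
response = s3.get_object(Bucket="example-bucket", Key="example-key")
print(response["ContentLength"])
```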
Oct 31, 17:56 UTC
Scheduled -
We would like to inform you that regular maintenance of our user-owned Object Storage services is due to be carried out.
During the maintenance period, you may experience temporary connection issues of less than 10 seconds when accessing or editing data in your user-owned Buckets. In some cases, HTTP 5XX errors may occur.
Oct 30, 16:13 UTC
Completed -
The scheduled maintenance has been completed.
Nov 5, 07:30 UTC
In progress -
Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 5, 06:00 UTC
Scheduled -
We would like to inform you that regular maintenance of the Cloud Panel is due to be carried out. During this time, access to the Cloud Panel will be limited, while access to the vSphere Clients, NSX Managers, and the management VPN will not be impacted. Availability and accessibility of your Private Cloud resources will remain unaffected.
Oct 31, 15:06 UTC
Completed -
The scheduled maintenance has been completed.
Nov 5, 07:15 UTC
In progress -
Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 5, 06:30 UTC
Scheduled -
We would like to inform you that our provisioning engine is due for regular maintenance.
While maintenance is underway, provisioning requests may take longer to process. In exceptional circumstances, you might experience limited availability of the API and DCD.
All other services are expected to continue to operate normally. The availability and accessibility of your virtual datacenter resources will not be affected.
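If you provision resources via the API while the maintenance is underway, the longer processing times are best absorbed by polling the request status with a generous timeout rather than assuming immediate completion. The snippet below is a generic sketch; the status URL and the "status" field are illustrative placeholders, not the documented API contract.

```python
# Sketch: poll a provisioning request until it finishes, with a generous timeout
# for the maintenance window. The status URL and response fields are hypothetical.
import time

import requests


def wait_for_request(status_url, token, timeout_s=1800, poll_s=10):
    """Poll a request-status URL until it reports DONE or FAILED."""
    deadline = time.monotonic() + timeout_s
    headers = {"Authorization": f"Bearer {token}"}
    while time.monotonic() < deadline:
        resp = requests.get(status_url, headers=headers, timeout=30)
        if resp.status_code < 500:  # treat 5xx as transient and keep polling
            resp.raise_for_status()
            status = resp.json().get("status")  # hypothetical field name
            if status == "DONE":
                return True
            if status == "FAILED":
                raise RuntimeError(f"provisioning request failed: {resp.text}")
        time.sleep(poll_s)
    raise TimeoutError("provisioning request did not finish in time")
```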
Oct 31, 10:46 UTC
Resolved -
We have identified an issue related to the synchronization behavior of our authentication platform that was caused by maintenance performed on the underlying managed Kubernetes system. This issue is now resolved. A permanent solution that ensures proper synchronization behavior during and after maintenance has been identified and will be implemented during the course of next week.
Nov 1, 10:05 UTC
Investigating -
We are currently investigating intermittent login issues on the DCD. Users may see the following message when attempting to log in: “Your login attempt timed out. Login will start from the beginning.” We will update this status page regularly as we continue our investigation.
Nov 1, 09:36 UTC
We have prepared the following preliminary Root Cause Analysis:
What happened: On 29 October 2025 at 12:30 UTC an unintended route advertisement occurred within the IONOS Cloud network at site FR7 (metro region DE/FRA), causing traffic destined for external networks to be black‑holed for 27 minutes.
This was possible because two events coincided:
- A Border Gateway Protocol (BGP) session flap on a compute server triggered the unintended advertisement.
- A bug in the upgrade algorithm of the latest software-defined-networking (SDN) package caused the route-suppression rule in the node to be omitted.
What we are doing to prevent this from happening again:
- Harden compute nodes to avoid recurrence (done).
- Extend the QA and rollout tests for the SDN software (ETA: 3rd November 2025).
- Fix the upgrade algorithm in the current SDN package (ETA: end of December 2025).
- Harden existing route filters on the route-reflector layer to reject any default-route advertisement from hypervisors (ETA: 7th November 2025); a conceptual sketch follows below.
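For context on the last point: conceptually, the hardened filters should reject any default route (0.0.0.0/0 or ::/0) learned from a hypervisor-facing BGP session. The sketch below is illustrative Python only, not the actual route-reflector or SDN code.

```python
# Conceptual sketch of the planned route-filter hardening: reject any
# default-route advertisement learned from a hypervisor-facing BGP session.
# Illustrative only; the real filters live in the route-reflector configuration.
import ipaddress

DEFAULT_ROUTES = {ipaddress.ip_network("0.0.0.0/0"), ipaddress.ip_network("::/0")}


def accept_advertisement(prefix: str, peer_role: str) -> bool:
    """Return True if an advertised prefix from a peer should be accepted."""
    network = ipaddress.ip_network(prefix, strict=False)
    if peer_role == "hypervisor" and network in DEFAULT_ROUTES:
        return False  # never learn a default route from a compute node
    return True


assert accept_advertisement("10.42.0.0/24", "hypervisor") is True
assert accept_advertisement("0.0.0.0/0", "hypervisor") is False
```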
Oct 29, 14:48 UTC
Monitoring -
Network traffic has been restored to normal operations. The teams are monitoring.
Oct 29, 13:48 UTC
Identified -
The cause of the network outage has been identified and mitigated. Networking teams are working to normalize network traffic; most customers should be back to normal operations.
Oct 29, 13:26 UTC
Investigating -
We are writing to inform you that we have been experiencing connection issues and substantial delays in packet delivery.
Network technicians began working on the issue immediately after detection and will isolate and resolve it as quickly as possible. However, some degradation in connection quality affecting individual virtual resources is possible in the meantime.
We will inform you as soon as the functionality has been restored.
Oct 29, 12:58 UTC