All Systems Operational

Cloud Support Operational
Location DE/FKB Operational
Compute Operational
Storage Operational
Network Operational
Managed Kubernetes Operational
Supporting Services Operational
Provisioning Operational
Private Cloud Operational
Logging Service Operational
Location DE/FRA Operational
Compute Operational
Cubes Operational
Storage Operational
Network Operational
Managed Kubernetes Operational
Supporting Services Operational
Provisioning Operational
Object Storage Operational
Logging Service Operational
Location DE/FRA/2 Operational
Compute Operational
Cubes Operational
Storage Operational
Network Operational
Managed Kubernetes Operational
Supporting Services Operational
Provisioning Operational
Object Storage Operational
Logging Service Operational
Location DE/TXL Operational
Compute Operational
Cubes Operational
Storage Operational
Network Operational
Managed Kubernetes Operational
Supporting Services Operational
Provisioning Operational
Object Storage Operational
Private Cloud Operational
Logging Service Operational
Location ES/VIT Operational
Compute Operational
Cubes Operational
Storage Operational
Network Operational
Managed Kubernetes Operational
Supporting Services Operational
Provisioning Operational
Object Storage Operational
Logging Service Operational
Location FR/PAR Operational
Compute Operational
Storage Operational
Cubes Operational
Managed Kubernetes Operational
Provisioning Operational
Supporting Services Operational
Network Operational
Logging Service Operational
Location GB/BHX Operational
Compute Operational
Cubes Operational
Storage Operational
Network Operational
Managed Kubernetes Operational
Supporting Services Operational
Provisioning Operational
Logging Service Operational
Location GB/GLO Operational
Private Cloud Operational
Location GB/LHR Operational
Compute Operational
Cubes Operational
Storage Operational
Network Operational
Managed Kubernetes Operational
Supporting Services Operational
Provisioning Operational
Logging Service Operational
Location US/EWR Operational
Compute Operational
Storage Operational
Network Operational
Managed Kubernetes Operational
Supporting Services Operational
Provisioning Operational
Logging Service Operational
Location US/LAS Operational
Compute Operational
Storage Operational
Network Operational
Managed Kubernetes Operational
Supporting Services Operational
Provisioning Operational
Logging Service Operational
Location US/MCI Operational
Compute Operational
Storage Operational
Network Operational
Managed Kubernetes Operational
Provisioning Operational
Supporting Services Operational
Logging Service Operational
APIs and Frontends Operational
Data Center Designer (DCD) Operational
Cloud API Operational
Billing API Operational
Reseller API Operational
Activity Log API Operational
Global Services Operational
Backup Service Operational
CloudDNS Operational
Database as a Service (DBaaS) Operational
Monitoring as a Service (MaaS) Operational
Content Delivery Network (CDN) Operational
AI Model Hub Operational
IAM Operational
Container Registry Operational
Status key: Operational · Degraded Performance · Partial Outage · Major Outage · Maintenance

Scheduled Maintenance

Emergency Maintenance: TXL - Switch Replacement Mar 18, 2026 20:00-22:00 UTC

We need to replace a faulty switch to ensure redundancy and stability of the network.
There is no impact expected for customers.

Posted on Mar 18, 2026 - 16:05 UTC

SDN Version Upgrade Mar 19, 2026 10:00-13:00 UTC

We will be performing a scheduled upgrade of the network management software (SDN) for our server infrastructure. This maintenance will bring our systems to the latest stable version to ensure optimal performance.

This maintenance might cause an interruption of network connectivity for up to 1 (one) minute per processed host server. If your virtual machines or services are distributed across multiple host servers, you might experience this disruption more than once.

Posted on Mar 18, 2026 - 15:44 UTC

Maintenance of the Backup service Mar 26, 2026 18:30-22:30 UTC

We would like to inform you that regular maintenance on our IONOS cloud backup service is due to be carried out.

The backup console will not be available during the maintenance, and scheduled backups will be delayed.

Posted on Mar 10, 2026 - 07:41 UTC

Maintenance of the Backup service Mar 30, 2026 18:00-22:00 UTC

We would like to inform you that regular maintenance on our IONOS cloud backup service is due to be carried out.

The backup console will not be available during the maintenance, and scheduled backups will be delayed.

Posted on Mar 10, 2026 - 07:58 UTC
Mar 18, 2026
Completed - The scheduled maintenance has been completed.
Mar 18, 07:30 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Mar 18, 06:00 UTC
Scheduled - We would like to inform you that regular maintenance of the Cloud Panel is due to be carried out.
During this time, access to the Cloud Panel will be limited while access to the vSphere Clients, NSX Managers and the management VPN themselves will not be impacted.
Availability and accessibility of your Private Cloud resources will remain unaffected.

Mar 14, 08:03 UTC
Mar 17, 2026
Resolved - This incident is now marked resolved, as all affected control planes have returned to and stayed in a stable state.
A Root Cause Analysis (RCA) is underway and will be published here once finalized.

Mar 17, 17:04 UTC
Monitoring - Final changes have been applied to all affected clusters, resolving the issue. We are now monitoring the progress.
Mar 16, 19:59 UTC
Update - The changes applied to some control plane clusters have had positive effects. The team is continuing the rollout to other affected clusters.
Mar 16, 19:34 UTC
Update - We are marking DBaaS as recovered.
Our Kubernetes Team is currently working on stabilizing the Kubernetes Control Plane, focusing on mitigating the recurring load spikes that affect its stability.

Mar 16, 13:56 UTC
Update - We are marking the Container Registry Service as recovered.
Mar 16, 12:57 UTC
Update - We are closing the incident for the AI Model Hub. All metrics have recovered and the service should be up and running again normally.
Mar 16, 12:22 UTC
Update - We are adding the Container Registry as an affected Service. Customers may currently experience issues pulling and pushing images from the Registry.
Mar 16, 11:57 UTC
Update - Our Kubernetes Team has deployed a fix for the affected AI Model Hub Database Services. We currently see metrics improving and are monitoring the situation closely.
Mar 16, 11:37 UTC
Update - We are expanding the scope of this incident to include DBaaS and AI Model Hub. We have observed an increased error count originating from PostgresDB on Kubernetes. Additionally, to improve transparency, the previously reported separate incident regarding the AI Model Hub (https://status.ionos.cloud/incidents/rmgs845klm32) is being merged into this primary incident.
Mar 16, 11:05 UTC
Identified - The team has identified the root cause as a resource constraint within the etcd database. Mitigation efforts are currently underway.
Mar 16, 10:04 UTC
Investigating - Some customers may experience connection problems to the control plane and degraded Kubernetes functionality.
Our teams are investigating and working on a resolution.

Mar 16, 08:24 UTC
Mar 16, 2026
Resolved - To improve visibility and streamline communication, we are merging this incident into the MK8s incident (tracked here: https://status.ionos.cloud/incidents/h9x5s66m4r28). Consequently, we will close this specific entry and provide all future updates via the referenced incident link.
Mar 16, 11:08 UTC
Identified - The team has traced the root cause to the ongoing Kubernetes incident (https://status.ionos.cloud/incidents/h9x5s66m4r28). Both teams are currently working to restore service.
Mar 16, 10:01 UTC
Investigating - Our AI Model Hub Team is currently investigating increased error rates in the Embeddings functionality in the AI Model Hub.
Mar 16, 08:28 UTC
Mar 15, 2026

No incidents reported.

Mar 14, 2026

No incidents reported.

Mar 13, 2026

No incidents reported.

Mar 12, 2026
Completed - The scheduled maintenance has been completed.
Mar 12, 22:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Mar 12, 21:00 UTC
Scheduled - We are implementing urgent changes to enhance the stability and performance of the TXL network. In rare instances, customers may experience brief, intermittent connectivity interruptions.
Mar 12, 14:49 UTC
Completed - The scheduled maintenance has been completed.
Mar 12, 20:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Mar 12, 19:00 UTC
Scheduled - We are implementing urgent changes to enhance the stability and performance of the PAR network. We do not expect customer facing service degradation.
Mar 12, 14:46 UTC
Completed - The scheduled maintenance has been completed.
Mar 12, 13:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Mar 12, 12:00 UTC
Scheduled - We are implementing urgent changes to enhance the stability and performance of the LAS network. In rare instances, customers may experience brief, intermittent connectivity interruptions.
Mar 12, 11:47 UTC
Mar 11, 2026
Postmortem - Read details
Mar 13, 10:42 UTC
Resolved - We are marking this incident as resolved. Our Network Team will publish an RCA here, once it is compiled.
Mar 11, 21:19 UTC
Monitoring - We are placing the incident in a monitoring state. Our Network Team is closely monitoring the cluster and working to restore full redundancy.
Mar 11, 19:27 UTC
Update - The deployed change has had a positive effect. We are downgrading the impact level while continuing to monitor the cluster closely.
Mar 11, 18:54 UTC
Identified - In response to monitoring alerts, our network team deployed a change to stabilize the network in the affected cluster. We will post another update at 19:00 UTC—or as soon as new information becomes available.
Mar 11, 18:31 UTC
Investigating - We are currently investigating network connectivity issues in our TXL datacenter.
Mar 11, 18:13 UTC
Resolved - We are marking this incident as resolved. The incident was caused by capacity constraints following a hardware failure. While capacity has been restored, we still see some usage‑specific constraints with the Llama 3.1 405B Instruct model. Our AI ModelHub team will deploy optimizations to the model to increase performance and reliability. We recommend that users still experiencing issues with the model check GPT‑OSS 120B as a potential (temporary) replacement.
Mar 11, 19:20 UTC
Monitoring - Our AI Model Hub Team has mitigated the incident. While the underlying root cause is not yet fully established or resolved, the model service should be stable. We are monitoring the situation while the investigation is ongoing.
Mar 9, 18:53 UTC
Identified - The team has identified the root cause: hardware degradation affecting this model's hosting environment is causing backend instability. We are currently implementing a fix.
Mar 9, 11:52 UTC
Investigating - Our Model Hub Team is currently working on resolving errors related to an instance running the Llama 405B model.
Mar 9, 08:26 UTC
Mar 10, 2026

No incidents reported.

Mar 9, 2026
Completed - The scheduled maintenance has been completed.
Mar 9, 23:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Mar 9, 21:00 UTC
Scheduled - We will be performing a scheduled maintenance activity on the TXL network fabric.
We do not expect any customer facing service degradation.

Feb 28, 15:33 UTC
Mar 8, 2026

No incidents reported.

Mar 7, 2026

No incidents reported.

Mar 6, 2026

No incidents reported.

Mar 5, 2026
Completed - The scheduled maintenance has been completed.
Mar 5, 23:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Mar 5, 21:00 UTC
Scheduled - To improve network stability in one cluster, we are rolling out a configuration update related to route advertisement mechanisms.
No customer impact is expected.

Mar 5, 18:07 UTC
Completed - The scheduled maintenance has been completed.
Mar 5, 21:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Mar 5, 18:00 UTC
Scheduled - We are updating software packages in our network stack. We do not expect customer facing service degradation.
Feb 28, 15:43 UTC
Mar 4, 2026
Resolved - We are marking this incident as resolved because no further issues were found in the setup. A Root‑Cause Analysis (RCA) will be published once the team has completed its analysis.
Mar 4, 16:39 UTC
Monitoring - The Kubernetes Team has deployed a mitigation for the issue, which involved rolling back a component of the K8s control plane to a previous version. We are currently monitoring the service recovery.
Mar 4, 12:22 UTC
Identified - Our Container Registry team has identified an issue in the underlying Kubernetes cluster serving a subset of images. The team is currently working on applying a fix for the issue.
Mar 4, 10:23 UTC
Investigating - We are currently investigating an increased error count on the IONOS Container Registry. Customers might be unable to pull images currently.
Mar 4, 09:23 UTC
Completed - We decided to postpone this FKB maintenance to a later date (to be defined).
Mar 4, 14:30 UTC
Scheduled - We would like to inform you that an exceptional maintenance of the network is due to be carried out in our location FKB.
We need to replace faulty hardware that is covered by redundancy. We do not expect any impact on network or storage connectivity.

The maintenance activity may cause brief latency but no connectivity loss.
All other services are expected to operate normally. Availability and accessibility of your virtual data center resources will remain unaffected.

Mar 3, 17:07 UTC
Postmortem - Read details
Mar 5, 10:56 UTC
Resolved - We are marking this incident as resolved. The issue was caused by a faulty DCD release, which was rolled back by the responsible team.
Mar 4, 11:14 UTC
Monitoring - Our DCD Front‑end team has identified an issue introduced by a recent release. A rollback was performed, and customers who were unable to edit their VDCs and were confronted with an “In Progress/Pending” status should now be unblocked.

We are actively monitoring the situation.

Mar 4, 09:54 UTC
Investigating - We are actively investigating reports of frontend problems when attempting to edit VDCs through the DCD UI.
Mar 4, 08:51 UTC