Identified - As documented in the updates to the recent Storage Incident in our Karlsruhe DC, several Virtual Data Centers (VDCs) are currently restricted from issuing new provisioning jobs while redundancy is being restored for affected VMs.
Original Incident: https://status.ionos.cloud/incidents/p6pjqxzgkh1g

With this announcement, we want to raise awareness of this current limitation.

Impact
Scope: Only customers and VDCs impacted by the original storage incident are affected by these provisioning limitations.
Symptoms: Users may encounter error code VDC-5-12 when attempting to create or modify resources (see the example below).
Existing Workloads: All currently running VMs remain operational.
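
For customers who automate provisioning, the following is a minimal sketch of how such a rejection could be detected so that jobs are deferred rather than retried in a tight loop. It assumes a Cloud API v6 volume-creation call and an error body with a "messages" list carrying "errorCode" fields; these shapes, the IONOS_TOKEN environment variable, and the placeholder IDs are illustrative assumptions, so verify them against the responses your account actually receives.

    # Illustrative sketch only: detect the VDC-5-12 provisioning restriction and defer the job.
    # The endpoint, error-body shape ("messages" / "errorCode") and token handling are assumptions.
    import os
    import requests

    API = "https://api.ionos.com/cloudapi/v6"
    TOKEN = os.environ["IONOS_TOKEN"]        # assumed environment variable holding a Cloud API token
    DATACENTER_ID = "<your-vdc-uuid>"        # placeholder

    def create_volume(name, size_gb):
        resp = requests.post(
            f"{API}/datacenters/{DATACENTER_ID}/volumes",
            headers={"Authorization": f"Bearer {TOKEN}"},
            json={"properties": {"name": name, "size": size_gb, "type": "HDD"}},
            timeout=30,
        )
        if resp.status_code >= 400:
            body = resp.json() if "json" in resp.headers.get("Content-Type", "") else {}
            codes = [m.get("errorCode", "") for m in body.get("messages", [])]
            if any("VDC-5-12" in code for code in codes):
                # Provisioning is restricted while redundancy is restored:
                # defer the job instead of retrying immediately.
                print("Provisioning currently restricted (VDC-5-12); deferring job.")
                return None
            resp.raise_for_status()
        return resp.json()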

We will close this announcement once redundancy, as well as provisioning functionality, is fully restored.

May 13, 2026 - 15:33 UTC
Monitoring - Our team has recovered the remaining VMs. It’s important to note that storage redundancy is not yet fully restored. Due to necessary data synchronization processes and the volume of data involved, full recovery will take several additional hours.

Important notice for affected customers:
- Load: To ensure stability of the recovered VMs, please try to avoid excessive storage load where possible.
- Provisioning: Provisioning jobs against affected VMs will not be possible until redundancy is fully restored. Consequently, making state changes will not be possible during this time.
- VM States: During the incident, the state of affected VMs may have changed; some may currently be in "shutoff" state.
- Manual Intervention: To avoid unintended impact on customers for whom "shutoff" is the desired state, we cannot automatically restart all VMs without approval.

Due to these constraints, we kindly ask affected customers to carefully check their resources and request state changes via support ticket (support@cloud.ionos.com) where necessary. This is only needed until redundancy is restored.
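
For customers with many resources, here is a minimal sketch of one way to spot VMs that are currently in the "shutoff" state via the Cloud API, so that the desired state changes can then be requested through the support ticket above. The endpoint, the "vmState" property, the IONOS_TOKEN environment variable, and the placeholder data center ID are assumptions for illustration; please verify them against the current Cloud API documentation and your own responses.

    # Illustrative sketch only: list servers in one VDC and report those in "SHUTOFF" state.
    # Endpoint path and field names ("items", "properties", "vmState") are assumptions.
    import os
    import requests

    API = "https://api.ionos.com/cloudapi/v6"
    TOKEN = os.environ["IONOS_TOKEN"]        # assumed environment variable holding a Cloud API token
    DATACENTER_ID = "<your-vdc-uuid>"        # placeholder

    resp = requests.get(
        f"{API}/datacenters/{DATACENTER_ID}/servers",
        params={"depth": 1},
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()

    for server in resp.json().get("items", []):
        props = server.get("properties", {})
        if props.get("vmState") == "SHUTOFF":
            print(f"{server['id']}  {props.get('name', '')}  is currently SHUTOFF")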

We are moving this incident to Monitoring status while work to restore full redundancy continues. We will provide an update here once that work is complete. A detailed Root Cause Analysis (RCA) will be published subsequently, explaining the cause of the incident, the measures taken to restore service, and the steps implemented to prevent recurrence.

We thank our customers and partners for their patience and cooperation in ensuring the full recovery of affected services and workloads.

May 12, 2026 - 23:06 UTC
Update - We have successfully recovered additional VMs. However, because full storage redundancy has not yet been reached for these systems, Storage Provisioning Jobs will currently fail. We ask affected customers to refrain from making configuration changes or placing excessive load on these servers until redundancy is fully restored.
May 12, 2026 - 21:08 UTC
Update - The team has succeeded in restoring the first VMs during the initial run. We are currently working on rolling out the restoration process further and will update on the progress.
May 12, 2026 - 20:12 UTC
Update - Our development team is working on re-establishing connectivity sessions of the affected VMs so that the recovery effort can continue. The first runs are currently starting.
May 12, 2026 - 20:00 UTC
Update - The team has encountered a blocker while restarting the remaining VMs due to connectivity issues between the replaced hosts and the storage servers. This has significantly increased the complexity of the recovery effort. We are currently collaborating with a specialized development team to bring the remaining systems back online. We will provide the next update at 20:00 UTC or before if there are meaningful updates.
May 12, 2026 - 19:23 UTC
Update - We have made progress regarding the recovery, and are currently in the process of restarting affected Virtual Machines.
May 12, 2026 - 16:01 UTC
Update - The spares have been made available. They are currently being brought online so that recovery can continue. We estimate that we will have another update on the progress within the hour.
May 12, 2026 - 13:53 UTC
Update - We have found the relevant hardware to be defective; it has to be replaced in the data center first.
Once the physical connection is restored, we need to synchronize and update the storage servers to bring the VMs back online.
We are working as fast as we can to get this fixed; however, it might take several hours.

May 12, 2026 - 12:49 UTC
Identified - Our storage team is currently working on a firmware restore and upgrade on the affected storage servers and will then attempt to bring them back into service.
May 12, 2026 - 11:23 UTC
Investigating - Unfortunately, we have run into further issues with our storage systems in Karlsruhe, so the team is continuing repair work.
Affected VMs (and VDCs) are currently unavailable and will have to be restarted once the storage is back online.
We will keep you updated as soon as we have new information.

May 12, 2026 - 11:04 UTC
Monitoring - All VMs, storage volumes, and VDCs are back online and available.
We are continuing to monitor the situation.

May 12, 2026 - 08:59 UTC
Identified - Our developers have restored storage availability; we are now restarting the affected VMs to bring them back online.
May 12, 2026 - 08:02 UTC
Investigating - We are currently experiencing restrictions regarding our Storage service in one cluster in our Karlsruhe region. We are working to restore regular operation of our services as quickly as possible.
May 12, 2026 - 06:15 UTC
Cloud Support: Operational
Location DE/FKB: Degraded Performance
  Compute: Operational
  Storage: Degraded Performance
  Network: Operational
  Managed Kubernetes: Operational
  Supporting Services: Operational
  Provisioning: Degraded Performance
  Private Cloud: Operational
  Logging Service: Operational
Location DE/FRA: Operational
  Compute: Operational
  Cubes: Operational
  Storage: Operational
  Network: Operational
  Managed Kubernetes: Operational
  Supporting Services: Operational
  Provisioning: Operational
  Object Storage: Operational
  Logging Service: Operational
Location DE/FRA/2: Operational
  Compute: Operational
  Cubes: Operational
  Storage: Operational
  Network: Operational
  Managed Kubernetes: Operational
  Supporting Services: Operational
  Provisioning: Operational
  Object Storage: Operational
  Logging Service: Operational
Location DE/TXL: Operational
  Compute: Operational
  Cubes: Operational
  Storage: Operational
  Network: Operational
  Managed Kubernetes: Operational
  Supporting Services: Operational
  Provisioning: Operational
  Object Storage: Operational
  Private Cloud: Operational
  Logging Service: Operational
Location ES/VIT: Operational
  Compute: Operational
  Cubes: Operational
  Storage: Operational
  Network: Operational
  Managed Kubernetes: Operational
  Supporting Services: Operational
  Provisioning: Operational
  Object Storage: Operational
  Logging Service: Operational
Location FR/PAR: Operational
  Compute: Operational
  Storage: Operational
  Cubes: Operational
  Managed Kubernetes: Operational
  Provisioning: Operational
  Supporting Services: Operational
  Network: Operational
  Logging Service: Operational
Location GB/BHX: Operational
  Compute: Operational
  Cubes: Operational
  Storage: Operational
  Network: Operational
  Managed Kubernetes: Operational
  Supporting Services: Operational
  Provisioning: Operational
  Logging Service: Operational
Location GB/GLO: Operational
  Private Cloud: Operational
Location GB/LHR: Operational
  Compute: Operational
  Cubes: Operational
  Storage: Operational
  Network: Operational
  Managed Kubernetes: Operational
  Supporting Services: Operational
  Provisioning: Operational
  Logging Service: Operational
Location US/EWR: Operational
  Compute: Operational
  Storage: Operational
  Network: Operational
  Managed Kubernetes: Operational
  Supporting Services: Operational
  Provisioning: Operational
  Logging Service: Operational
Location US/LAS: Operational
  Compute: Operational
  Storage: Operational
  Network: Operational
  Managed Kubernetes: Operational
  Supporting Services: Operational
  Provisioning: Operational
  Logging Service: Operational
Location US/MCI: Operational
  Compute: Operational
  Storage: Operational
  Network: Operational
  Managed Kubernetes: Operational
  Provisioning: Operational
  Supporting Services: Operational
  Logging Service: Operational
APIs and Frontends: Operational
  Data Center Designer (DCD): Operational
  Cloud API: Operational
  Billing API: Operational
  Reseller API: Operational
  Activity Log API: Operational
Global Services: Operational
  Backup Service: Operational
  CloudDNS: Operational
  Database as a Service (DBaaS): Operational
  Monitoring as a Service (MaaS): Operational
  Content Delivery Network (CDN): Operational
  AI Model Hub: Operational
  IAM: Operational
  Container Registry: Operational
Accounts and Billing: Operational

Scheduled Maintenance

Planned infrastructure upgrade — Monitoring & Logging Services (MCI, LAS, EWR, BHX, VIT) May 19, 2026 14:00-19:00 UTC

As part of our ongoing platform improvements, we will be performing an infrastructure upgrade to enhance the reliability, security, and performance of the Monitoring and Logging services in the listed regions.

Customer impact: A short interruption to ingestion endpoints for Monitoring and Logging may occur during the maintenance window. Previously ingested data is retained and remains accessible after the maintenance completes. No customer action is required.

Affected services: Logging Service
Location: Location ES/VIT, Location GB/BHX, Location US/EWR, Location US/LAS, Location US/MCI

Posted on May 13, 2026 - 16:04 UTC

IAM Cluster Migration May 27, 2026 16:30-17:30 UTC

On Wednesday, 27 May 2026, between 18:30 and 19:30 CEST (16:30–17:30 UTC), we will perform scheduled maintenance to move our sign-in platform to upgraded, geo-redundant infrastructure. During this window, signing in to the Cloud Console (DCD), Reseller Portal, Partner Portal, Partner Applications, CRM, and single sign-on (SSO) flows may be briefly interrupted, and some users may need to sign in again after the maintenance. Already-authenticated API traffic and existing automations are not affected. No password change or account action is required. We have prepared and tested this change carefully and apologize for any inconvenience this short interruption may cause.

Affected services: Data Center Designer (DCD)
Location: APIs and Frontends

Posted on May 13, 2026 - 10:19 UTC

MongoDB Playground: 30-day cluster lifecycle policy takes effect Jun 1, 2026 10:00-12:00 UTC

Starting May 31, 2026, MongoDB Playground clusters will be automatically and permanently deleted 30 days after their creation date, as outlined in our customer documentation.
https://docs.ionos.com/cloud/databases/mongodb/overview

What is changing
* Existing clusters created more than 30 days before May 31, 2026 will be deleted on May 31, 2026.
* Existing clusters created less than 30 days before May 31, 2026 will be deleted 30 days after their creation date.
* New clusters created on or after May 31, 2026 will be deleted 30 days after creation.

Playground clusters do not include automated backups, so any data not exported before deletion will be unrecoverable.

Action required
* If you want to keep your data, please export it before your cluster's deletion date using mongodump (a minimal example follows the links below). See MongoDB Backup & Recovery.
* Need your cluster for longer than 30 days? MongoDB Business Edition removes the 30-day limit and adds automated daily backups, higher connection limits, and production-ready performance.

* MongoDB Documentation: https://docs.ionos.com/cloud/databases/mongodb
* MongoDB Backup & recovery overview: https://docs.ionos.com/cloud/databases/mongodb/overview/backup-and-recovery
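
As an illustration of the export step above, here is a minimal sketch that wraps a plain mongodump call. The connection string, the output directory, and the assumption that the MongoDB Database Tools are installed locally are placeholders for illustration; take the actual connection string for your cluster from the DCD or the DBaaS API and follow the linked backup documentation as the authoritative procedure.

    # Illustrative sketch only: export a Playground cluster with mongodump before its deletion date.
    # The connection string is a placeholder; mongodump/mongorestore must be installed locally.
    import subprocess
    from datetime import date

    MONGODB_URI = "mongodb+srv://<user>:<password>@<your-cluster-host>/"   # placeholder
    OUT_DIR = f"playground-dump-{date.today().isoformat()}"

    subprocess.run(
        ["mongodump", "--uri", MONGODB_URI, "--out", OUT_DIR],
        check=True,   # fail loudly if the dump does not complete
    )
    print(f"Dump written to {OUT_DIR}; restore later with: mongorestore {OUT_DIR}")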

Affected services: Database as a Service (DBaaS)
Location: Global Services

Posted on May 11, 2026 - 16:24 UTC
May 14, 2026

No incidents reported today.

May 13, 2026
Resolved - This incident has been resolved, and all affected customers have been contacted directly via email.
May 13, 06:56 UTC
Identified - We can confirm that these emails were sent out in error.

No customer action is required, and we will send an informational update to all affected customers as soon as possible.

May 12, 14:38 UTC
Investigating - We are aware that, starting this morning, multiple customers have received an automated email from our Accounting Team advising that their VAT (Umsatzsteuer) ID numbers could not be verified.

We are currently investigating this issue, and expect to send out follow-up emails to the affected customers once we know more.

May 12, 08:50 UTC
Completed - The scheduled maintenance has been completed.
May 13, 06:30 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
May 13, 05:00 UTC
Scheduled - We would like to inform you that regular maintenance of the Cloud Panel is due to be carried out.
During this time, access to the Cloud Panel will be limited, while access to the vSphere Clients, NSX Managers, and the management VPN will not be impacted.
Availability and accessibility of your Private Cloud resources will remain unaffected.

May 8, 16:40 UTC
May 12, 2026

Unresolved incident: Availability of storage service partially limited in Karlsruhe.

May 11, 2026
Resolved - This incident has been resolved.
May 11, 21:16 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
May 11, 19:05 UTC
Investigating - Some customers may experience connection problems to the control plane and degraded Kubernetes functionality.
May 11, 17:53 UTC
Resolved - We are marking this incident as resolved. An RCA will be published here once it has been compiled.
May 11, 08:30 UTC
Monitoring - The rollback was completed successfully. The team is currently checking the affected resources for potential residual inconsistencies. We are moving this incident to monitoring status.
May 8, 20:42 UTC
Update - We are currently rolling out the correction. We expect this to take several minutes and will update this page once the rollout is completed.
May 8, 18:42 UTC
Update - The team is now preparing to safely roll back the identified changes on affected resources.
May 8, 17:13 UTC
Update - The team continues to audit affected resources to verify that all remaining inadvertent configuration changes are reverted.
May 8, 16:29 UTC
Identified - Our Kubernetes engineering team has identified the root cause of the issue. The incident stemmed from a defect in a configuration filtering mechanism. We are currently implementing mitigation steps and will provide further updates on our progress here.
May 8, 15:52 UTC
Investigating - Some customers may experience connection problems to the control plane and degraded Kubernetes functionality.
Our engineers are working on a fix.

May 8, 15:15 UTC
May 10, 2026

No incidents reported.

May 9, 2026

No incidents reported.

May 8, 2026
May 7, 2026
Completed - The scheduled maintenance has been completed.
May 7, 10:05 UTC
Scheduled - As part of our commitment to delivering industry-standard services, the two native IONOS AI Model Hub API endpoints, /models and /predictions, which were deprecated three months ago, have now been disabled.

The OpenAI-compatible API is not affected — only the native IONOS API endpoints have been removed, specifically:
- https://inference.de-txl.ionos.com/models
- https://inference.de-txl.ionos.com/models/{modelId}/predictions

Our API specifications have been updated to reflect their removal. For more information, please refer to https://api.ionos.com/docs/inference-openai/v1 or the previous email announcements.
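
For reference, here is a minimal sketch of talking to the OpenAI-compatible API instead of the removed native endpoints, using the standard openai Python client. The base URL, the token environment variable, and the model name are placeholders, not confirmed values; the linked API specification and the previous email announcements remain the authoritative source for the actual endpoint and authentication details.

    # Illustrative sketch only: use the OpenAI-compatible API instead of the removed native
    # /models and /predictions endpoints. Base URL, token variable, and model name are placeholders.
    import os
    from openai import OpenAI

    client = OpenAI(
        base_url="https://<openai-compatible-endpoint>/v1",   # placeholder, see the API specification
        api_key=os.environ["IONOS_API_TOKEN"],                # assumed environment variable
    )

    # Replacement for the removed native "list models" endpoint:
    for model in client.models.list():
        print(model.id)

    # Replacement for the removed native "predictions" endpoint, via chat completions:
    reply = client.chat.completions.create(
        model="<model-name-from-the-list-above>",             # placeholder
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(reply.choices[0].message.content)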

May 7, 09:52 UTC
May 6, 2026
Completed - The scheduled maintenance has been completed.
May 6, 21:59 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
May 6, 19:00 UTC
Scheduled - We will be performing important maintenance on the gateways that connect virtual machines and storage in our DE/TXL (Berlin) location.
The work updates the gateways to a new stable software version, ensuring continued security support and bringing the platform onto a modern operating system baseline.

The maintenance is scheduled from 19:00 to 21:59 UTC on 6 May 2026 (21:00 to 23:59 CEST, Berlin time).

The maintenance is carried out in two sequential phases across redundant systems so that services remain available throughout the window. Existing virtual machines and volumes are expected to continue running normally.
However, creating new volumes or new virtual machines in the affected location may be briefly delayed or fail during the maintenance, and should be retried once the work is complete.

Our engineering team will actively monitor the systems throughout the entire maintenance window and is prepared to respond immediately to any unexpected issues.
No customer action is required.

We will update this notice if the window changes or if any unexpected impact occurs.


Affected services: Compute, Network, Provisioning
Location: Location DE/TXL

May 6, 18:58 UTC
Resolved - Object Storage access key creation has been fixed; new keys can be created and used.
May 6, 06:00 UTC
Monitoring - Object Storage access key creation is working again; you can create new access keys now.
We are continuing to monitor the behaviour.

May 5, 14:36 UTC
Investigating - We are currently investigating a general problem with Object Storage access key creation.
At the moment, all new key creations are failing, so new keys cannot be used.
Existing keys are not affected and continue to work as usual.

Please be aware that the DCD frontend does not always reflect the key state correctly; to retrieve the correct key status, please use the API.
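
As an illustration, here is a minimal sketch of reading access-key status via the API rather than the DCD. The user-management S3-key endpoint, the "active" property, the IONOS_TOKEN environment variable, and the placeholder user ID are assumptions written for illustration; please confirm the exact path and field names in the current Cloud API documentation.

    # Illustrative sketch only: read Object Storage (S3) access-key status via the Cloud API
    # instead of relying on the DCD frontend. Endpoint and field names are assumptions.
    import os
    import requests

    API = "https://api.ionos.com/cloudapi/v6"
    TOKEN = os.environ["IONOS_TOKEN"]      # assumed environment variable holding a Cloud API token
    USER_ID = "<your-user-uuid>"           # placeholder

    resp = requests.get(
        f"{API}/um/users/{USER_ID}/s3keys",
        params={"depth": 1},
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()

    for key in resp.json().get("items", []):
        status = "active" if key.get("properties", {}).get("active") else "inactive"
        print(key["id"], status)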

May 5, 07:47 UTC
Resolved - The incident is over; all K8s control planes were back and operational yesterday.
No further issues have occurred.

May 6, 05:58 UTC
Monitoring - The team identified the issue and applied a fix; the control planes are back now.
We are continuing to monitor the situation.

May 5, 17:06 UTC
Investigating - We are currently facing outages of a large number of K8s control planes, mainly in Frankfurt.
This could impact K8s workload changes for several customers.
Our teams are working on a fix.

May 5, 16:06 UTC
Resolved - The incident is over; provisioning is working normally again.
We are working on the RCA and will publish it here as soon as possible.

May 6, 05:55 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
May 4, 15:33 UTC
Investigating - As part of our ongoing provisioning resolution work, we had to set the Managed K8s service to "read-only".
This means that no cluster, node pool, or autoscaling actions are possible at the moment; running workloads are not affected.
We will let you know as soon as the K8s system is back.

May 4, 13:04 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
May 4, 12:00 UTC
Update - We are continuing to work on a fix for this issue.
May 4, 11:42 UTC
Update - Provisioning was enabled again, however we are still working on the overall resolution.
May 4, 11:26 UTC
Update - We temporarily deactivated provisioning in order to fix the underlying issue.
May 4, 10:18 UTC
Update - We are continuing to work on a fix for this issue.
May 4, 09:53 UTC
Update - We are continuing to work on a fix for this issue.
May 4, 09:22 UTC
Update - We are continuing to work on a fix for this issue.
May 4, 08:45 UTC
Identified - Currently, there is an increased processing time for provisioning orders initiated via Data Center Designer or API.

Customers may experience timeouts or errors in some provisioning-related operations.

Occasionally, connections to the Data Center Designer may be interrupted.

May 4, 04:10 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
May 3, 21:22 UTC
Identified - Currently, there is an increased processing time for provisioning orders that are initiated via Data Center Designer or API.

Occasionally, connections to the Data Center Designer may be interrupted.

Availability and accessibility of your virtual data center resources will remain unaffected.
We will inform you as soon as the functionality has been restored.

May 3, 18:53 UTC
May 5, 2026
Completed - The scheduled maintenance has been completed.
May 5, 22:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
May 5, 19:00 UTC
Scheduled - We will perform an update of firewall rules on servers in Berlin. This process is generally impact-free; however, it is possible that some packets might be dropped while the rules are being applied. Any such issues generally recover immediately, so we do not expect a prolonged outage.

Affected services: Network
Location: Location DE/TXL

Apr 30, 08:07 UTC
May 4, 2026
Completed - The scheduled maintenance has been completed.
May 4, 21:59 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
May 4, 17:00 UTC
Scheduled - We will be performing scheduled maintenance on the gateways that connect virtual machines and storage in our DE/TXL (Berlin) location. The work updates the gateways to a new stable software version, ensuring continued security support and bringing the platform onto a modern operating system baseline.

The maintenance is carried out in two sequential phases across redundant systems so that services remain available throughout the window. Existing virtual machines and volumes are expected to continue running normally. However, creating new volumes or new virtual machines in the affected location may be briefly delayed or fail during the maintenance, and should be retried once the work is complete.

Our engineering team will actively monitor the systems throughout the entire maintenance window and is prepared to respond immediately to any unexpected issues.

We will update this notice if the window changes or if any unexpected impact occurs.

Affected services: Network, Provisioning, Storage
Location: Location DE/TXL

Apr 30, 09:34 UTC
Completed - The scheduled maintenance has been completed.
May 4, 08:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
May 1, 08:00 UTC
Scheduled - As communicated in an email notification earlier this year, we are finalizing the retirement of the legacy SYNCHRONOUS replication mode. To ensure continued stability, all remaining PostgreSQL clusters will be automatically migrated during their first scheduled maintenance window after May 4, 2026.

What is changing?
- Automatic Migration: Your cluster will be transitioned to ASYNCHRONOUS mode.
- Impact: You will experience a brief restart during the maintenance window. While you may see improved write performance, please note that ASYNCHRONOUS mode prioritizes speed over strict data acknowledgment.

Action Required for Strong Consistency
If your workload requires a zero-data-loss guarantee, you must manually upgrade to STRICTLY_SYNCHRONOUS via the API no later than May 1, 2026 (a minimal API sketch follows the steps below):
- Scale Replicas: Increase your cluster to 3 or more replicas.
- Update API: Set the synchronization mode to STRICTLY_SYNCHRONOUS.
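
Here is a minimal sketch of what the two steps above might look like against the DBaaS PostgreSQL API, combined into a single PATCH request. The endpoint path, the property names ("instances", "synchronizationMode"), and the token environment variable are assumptions for illustration; confirm them against the current DBaaS API reference before applying anything to a production cluster.

    # Illustrative sketch only: scale a PostgreSQL cluster to 3 replicas and switch it to
    # STRICTLY_SYNCHRONOUS in one PATCH. Endpoint path and property names are assumptions.
    import os
    import requests

    DBAAS_API = "https://api.ionos.com/databases/postgresql"
    TOKEN = os.environ["IONOS_TOKEN"]      # assumed environment variable holding an API token
    CLUSTER_ID = "<your-cluster-uuid>"     # placeholder

    resp = requests.patch(
        f"{DBAAS_API}/clusters/{CLUSTER_ID}",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={
            "properties": {
                "instances": 3,                                  # STRICTLY_SYNCHRONOUS needs 3+ replicas
                "synchronizationMode": "STRICTLY_SYNCHRONOUS",
            }
        },
        timeout=30,
    )
    resp.raise_for_status()
    print("Cluster update accepted:", resp.json().get("id", CLUSTER_ID))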

Affected services: Database as a Service (DBaaS)
Location: Global Services

Apr 21, 16:58 UTC
May 3, 2026
May 2, 2026

No incidents reported.

May 1, 2026
Apr 30, 2026

No incidents reported.