Monitoring - Our team has recovered the remaining VMs. Please note, however, that storage redundancy is not yet fully restored: due to the required data synchronization and the volume of data involved, full restoration will take several more hours.
Important notice for affected customers:
- Load: To ensure the stability of the recovered VMs, please avoid excessive storage load where possible.
- Provisioning: Provisioning jobs against affected VMs will fail until redundancy is fully restored; consequently, state changes are not possible during this time.
- VM States: During the incident, the state of affected VMs may have changed; some may currently be in "shutoff" state.
- Manual Intervention: To avoid unintended impact on customers for whom "shutoff" is the desired state, we will not automatically restart all VMs without approval.
Due to these constraints, we kindly ask affected customers to carefully check their resources and, where necessary, request state changes via support ticket (support@cloud.ionos.com); a sketch of how to check VM states programmatically follows below. This is only needed until redundancy is restored.
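For customers who manage resources programmatically, the state check can be scripted. The following is a minimal sketch in Python, assuming the public Cloud API v6 endpoint and HTTP Basic authentication; the credentials and data center UUID are placeholders, and checking via the DCD or the ionosctl CLI works equally well.

    # Minimal sketch: list VM power states in one data center via the Cloud API v6.
    # Assumptions: endpoint https://api.ionos.com/cloudapi/v6, HTTP Basic auth,
    # and servers exposing properties.vmState (e.g. "RUNNING" or "SHUTOFF").
    import requests

    API = "https://api.ionos.com/cloudapi/v6"
    AUTH = ("user@example.com", "password")                 # placeholder credentials
    DATACENTER_ID = "00000000-0000-0000-0000-000000000000"  # placeholder VDC UUID

    resp = requests.get(
        f"{API}/datacenters/{DATACENTER_ID}/servers",
        params={"depth": 1},   # depth=1 so each item includes its properties
        auth=AUTH,
        timeout=30,
    )
    resp.raise_for_status()

    for server in resp.json().get("items", []):
        props = server.get("properties", {})
        # VMs unexpectedly left in "SHUTOFF" should be reported via support
        # ticket rather than restarted through provisioning during this window.
        print(f'{props.get("name")}: {props.get("vmState")}')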
We are moving this incident to Monitoring status while work to restore full redundancy continues. We will provide an update here once that work is complete. A detailed Root Cause Analysis (RCA) will be published subsequently, explaining the cause of the incident, the measures taken to restore service, and the steps implemented to prevent recurrence.
We thank our customers and partners for their patience and cooperation in ensuring the full recovery of affected services and workloads.
May 12, 2026 - 23:06 UTC
Update - We have successfully recovered additional VMs. However, because full storage redundancy has not yet been reached for these systems, Storage Provisioning Jobs will currently fail. We ask affected customers to refrain from making configuration changes or placing excessive load on these servers until redundancy is fully restored.
May 12, 2026 - 21:08 UTC
Update - The team has successfully restored the first VMs in the initial recovery run. We are now rolling out the restoration process more broadly and will provide updates on progress.
May 12, 2026 - 20:12 UTC
Update - Our development team is working on re-establishing connectivity sessions of the affected VMs so that the recovery effort can continue. The first runs are currently starting.
May 12, 2026 - 20:00 UTC
Update - The team has encountered a blocker while restarting the remaining VMs due to connectivity issues between the replaced hosts and the storage servers. This has significantly increased the complexity of the recovery effort. We are collaborating with a specialized development team to bring the remaining systems back online and will provide the next update at 20:00 UTC, or sooner if there is meaningful progress.
May 12, 2026 - 19:23 UTC
Update - We have made progress with the recovery and are currently restarting the affected Virtual Machines.
May 12, 2026 - 16:01 UTC
Update - The spare hardware has been made available and is currently being brought online so that recovery can continue. We expect to have another progress update within the hour.
May 12, 2026 - 13:53 UTC
Update - We have identified defective hardware that must first be replaced in the data center.
Once the physical connection is restored, the storage servers need to be synchronized and updated before the VMs can come back online.
We are working to resolve this as quickly as possible; however, it may take several hours.
May 12, 2026 - 12:49 UTC
Identified - Our storage team is currently working on a firmware restore and upgrade on the affected storage servers and will then attempt to bring them back into service.
May 12, 2026 - 11:23 UTC
Investigating - Unfortunately, we have run into further issues with our storage systems in Karlsruhe, and the team is continuing its repair work.
Affected VMs (and VDCs) are unavailable and will have to be restarted once the storage is back online.
We will share any new information as soon as possible.
May 12, 2026 - 11:04 UTC
Monitoring - All VMs, storages, and VDCs are back online and available.
We continue to monitor the situation.
May 12, 2026 - 08:59 UTC
Identified - Our developers have restored storage availability; we are now restarting the affected VMs to bring them back online.
May 12, 2026 - 08:02 UTC
Investigating - We are currently experiencing disruptions to our Storage service in one cluster in our Karlsruhe region. We are working to restore regular operation of our services as quickly as possible.
May 12, 2026 - 06:15 UTC