A major power outage in the AWS me-central-1 (Middle East) Region on March 1, 2026, resulted from an unusual physical incident in which external objects struck a data center, triggering sparks and a fire.
The event caused significant disruptions to Amazon Elastic Compute Cloud (EC2) services, networking APIs, and resource availability in a single Availability Zone (mec1-az2).
According to AWS incident reports, the fire department mandated a complete shutdown of power to the facility, including backup generators, while it managed the situation. The resulting power loss took offline EC2 instances, Amazon Elastic Block Store (EBS) volumes, and Amazon Relational Database Service (RDS) databases within the affected zone.
Timeline of the Incident
The disruption began around 4:30 AM PST, with AWS officially investigating connectivity and power issues by 4:51 AM PST. By 6:09 AM PST, AWS confirmed the localized power failure in mec1-az2.
The company initiated traffic weighting strategies to route requests away from the damaged facility, shifting loads to the unaffected Availability Zones within the region.
AWS engineers found that the outage severely impacted EC2 networking APIs. Customers reported widespread throttling errors and failures when calling critical networking operations, including AllocateAddress, AssociateAddress, DescribeRouteTables, and DescribeNetworkInterfaces.
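Errors like these look to client code like ordinary API throttling, which callers typically absorb with retries and exponential backoff. The sketch below, using the Python boto3 SDK, shows that general pattern; the backoff parameters and the choice of DescribeNetworkInterfaces as the example call are our own illustration, not anything AWS published for this incident.

```python
import time

import boto3
from botocore.exceptions import ClientError

ec2 = boto3.client("ec2", region_name="me-central-1")


def call_with_backoff(fn, max_attempts=5, base_delay=1.0, **kwargs):
    """Retry an EC2 API call that is being throttled (RequestLimitExceeded)."""
    for attempt in range(max_attempts):
        try:
            return fn(**kwargs)
        except ClientError as err:
            code = err.response["Error"]["Code"]
            if code != "RequestLimitExceeded" or attempt == max_attempts - 1:
                raise  # not a throttle, or out of attempts
            time.sleep(base_delay * (2 ** attempt))  # exponential backoff


# Example: list network interfaces, riding through throttling errors.
response = call_with_backoff(ec2.describe_network_interfaces)
print(len(response["NetworkInterfaces"]), "network interfaces visible")
```

In practice, botocore's built-in retry modes (for example `standard` or `adaptive`) provide similar behavior without a hand-rolled loop.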
Throughout the afternoon, AWS deployed multiple configuration changes to mitigate the API failures. By 2:28 PM PST, the AllocateAddress API began showing positive signs of recovery.
However, the AssociateAddress API proved more challenging, leaving customers unable to reassign Elastic IP addresses from downed resources to active ones in healthy zones.
Mitigation and Partial Recovery
At 6:01 PM PST, AWS confirmed that AssociateAddress API requests were succeeding again. The engineering team deployed a critical update allowing customers to forcefully disassociate Elastic IP addresses from resources trapped in the powerless data center.
This mitigation enabled organizations to restore connectivity by associating their existing IP addresses with newly launched resources in unaffected Availability Zones.
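In SDK terms, that recovery path is a single call. Here is a minimal boto3 sketch; both identifiers are placeholders rather than values from the incident, and `AllowReassociation=True` is the standard flag that permits taking over an address still recorded against another resource.

```python
import boto3

ec2 = boto3.client("ec2", region_name="me-central-1")

ALLOCATION_ID = "eipalloc-0123456789abcdef0"  # placeholder Elastic IP allocation ID
REPLACEMENT_INSTANCE = "i-0fedcba9876543210"  # placeholder instance in a healthy AZ

# Take over the Elastic IP, even though it is still associated with the
# unreachable instance in the powered-down zone.
ec2.associate_address(
    AllocationId=ALLOCATION_ID,
    InstanceId=REPLACEMENT_INSTANCE,
    AllowReassociation=True,
)
```

Once the association lands, traffic destined for the existing address reaches the replacement instance with no DNS changes required.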
Despite the progress in restoring API functionality, the underlying physical infrastructure remained offline. AWS stated they were still awaiting clearance from local authorities to safely restore power to the damaged facility.
“We are still awaiting permission to turn the power back on, and once we have, we will ensure we restore power and connectivity safely,” an AWS representative stated in the earlier 9:41 AM PST update.
The incident highlights the importance of multi-Availability Zone architectures. AWS emphasized that customers running redundant applications across multiple zones were largely insulated from the outage.
For organizations requiring immediate recovery of affected workloads, AWS recommends launching replacement resources in unaffected zones or utilizing alternative AWS Regions, restoring data from their most recent EBS snapshots or backups.
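A minimal boto3 sketch of that restore path follows; every identifier here (the snapshot, AMI, Availability Zone, instance type, and device name) is a placeholder to be substituted with your own values.

```python
import boto3

ec2 = boto3.client("ec2", region_name="me-central-1")

SNAPSHOT_ID = "snap-0123456789abcdef0"  # most recent snapshot of the lost volume
HEALTHY_AZ = "me-central-1a"            # an unaffected Availability Zone

# 1. Recreate the volume from the snapshot in a healthy AZ.
volume = ec2.create_volume(SnapshotId=SNAPSHOT_ID, AvailabilityZone=HEALTHY_AZ)

# 2. Launch a replacement instance pinned to the same AZ.
instance = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t3.medium",
    MinCount=1,
    MaxCount=1,
    Placement={"AvailabilityZone": HEALTHY_AZ},
)["Instances"][0]

# 3. Wait until both are ready, then attach the restored volume.
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
ec2.get_waiter("instance_running").wait(InstanceIds=[instance["InstanceId"]])
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId=instance["InstanceId"],
    Device="/dev/sdf",
)
```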
Due to the influx of traffic shifted from the downed zone, AWS noted that customers might experience longer provisioning times or need to retry launches of specific instance types in the healthy me-central-1 Availability Zones.
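A launch loop that tolerates a temporary capacity shortfall might look like the following sketch. `InsufficientInstanceCapacity` is the error code EC2 returns when a zone cannot currently host the requested instance type; the AMI, instance type, and retry cadence are assumptions for illustration.

```python
import time

import boto3
from botocore.exceptions import ClientError

ec2 = boto3.client("ec2", region_name="me-central-1")


def launch_with_retries(max_attempts=6, delay=30):
    """Launch an instance, retrying while the zone is short on capacity."""
    for _ in range(max_attempts):
        try:
            return ec2.run_instances(
                ImageId="ami-0123456789abcdef0",  # placeholder AMI
                InstanceType="m5.large",
                MinCount=1,
                MaxCount=1,
            )
        except ClientError as err:
            if err.response["Error"]["Code"] != "InsufficientInstanceCapacity":
                raise  # a different failure; don't keep retrying
            time.sleep(delay)  # wait for capacity to free up, then retry
    raise RuntimeError("no capacity after retries")
```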
As of the final update at 6:01 PM PST, AWS did not have an estimated time for physical power restoration at the mec1-az2 facility. The company continues to advise customers to operate out of alternate Availability Zones or Regions where applicable while recovery efforts are ongoing.