Cloudflare admits internal configuration error caused 1.1.1.1 DNS resolver outage
On July 14, 2025, a significant outage affected Cloudflare's 1.1.1.1 DNS resolver service, causing widespread disruption to internet connectivity. The root cause was an internal configuration error related to BGP (Border Gateway Protocol) route advertisements, not a cyberattack or BGP hijack as initially suspected.
**Cause and Sequence of Events:**
The incident began at 21:48 UTC, when Cloudflare made a configuration change to a pre-production Data Localization Suite (DLS) service. The update inadvertently triggered a global refresh of the network configuration, attaching the production 1.1.1.1 resolver prefixes to the non-production DLS environment. As a result, Cloudflare's data centers worldwide withdrew the BGP routes for those prefixes from the global routing table, making the resolver's IP addresses unreachable.
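A prefix withdrawal of this kind is visible from outside Cloudflare's network. As a minimal sketch, the snippet below queries RIPEstat's public routing-status API (a real RIPE NCC service; the exact JSON field names are assumptions based on its current schema) to report how many route collectors still see each affected prefix. During the outage, this visibility would have collapsed toward zero.

```python
# Sketch: check global BGP visibility of a prefix via RIPEstat's public
# routing-status endpoint. Field names ("data", "visibility") are assumed
# from the current schema, so .get() is used defensively throughout.
import json
import urllib.parse
import urllib.request

def prefix_visibility(prefix: str) -> dict:
    url = ("https://stat.ripe.net/data/routing-status/data.json?resource="
           + urllib.parse.quote(prefix, safe=""))
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp).get("data", {})
    # "visibility" counts how many RIS route collectors currently see the prefix
    return data.get("visibility", {})

for prefix in ("1.1.1.0/24", "1.0.0.0/24", "2606:4700:4700::/48"):
    print(prefix, prefix_visibility(prefix))
```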
The affected ranges included 1.1.1.0/24, 1.0.0.0/24, and IPv6 prefixes such as 2606:4700:4700::/48. DNS query volume over UDP, TCP, and DNS-over-TLS dropped sharply, while DNS-over-HTTPS traffic remained largely stable because DoH clients reach the resolver through the domain name cloudflare-dns.com rather than the affected IP addresses directly.
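The asymmetry between plain DNS and DoH is easy to demonstrate. The sketch below sends a hand-built DNS query for example.com over UDP directly to 1.1.1.1, then asks the same question through Cloudflare's documented DoH JSON endpoint at cloudflare-dns.com/dns-query. During the outage, the first path would time out while the second kept working, since the hostname resolves to addresses outside the withdrawn ranges.

```python
# Sketch: contrast plain DNS over UDP to 1.1.1.1 with DoH via the hostname.
import json
import socket
import struct
import urllib.request

def udp_dns_query(server: str, name: str, timeout: float = 2.0) -> bytes:
    # Minimal DNS query: 12-byte header, then QNAME / QTYPE=A / QCLASS=IN.
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
    query = header + qname + struct.pack(">HH", 1, 1)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(query, (server, 53))
        reply, _ = s.recvfrom(512)
    return reply

def doh_json_query(name: str) -> dict:
    # Cloudflare's DoH JSON API; reached via hostname, not a fixed resolver IP.
    req = urllib.request.Request(
        f"https://cloudflare-dns.com/dns-query?name={name}&type=A",
        headers={"accept": "application/dns-json"},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.load(resp)

try:
    print("UDP to 1.1.1.1:", len(udp_dns_query("1.1.1.1", "example.com")), "bytes")
except OSError as exc:  # this path timed out during the outage
    print("UDP to 1.1.1.1 failed:", exc)
print("DoH answer:", doh_json_query("example.com").get("Answer"))
```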
By 21:52 UTC, DNS traffic started dropping, and by 22:01 UTC, Cloudflare detected the incident and publicly disclosed it. The misconfiguration was reverted at 22:20 UTC, and Cloudflare began re-advertising the withdrawn prefixes. Full restoration of service in all locations was achieved by 22:54 UTC, resulting in a 62-minute total outage.
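Cloudflare's detection came from internal telemetry, but as a toy illustration of how an external monitor would catch the same 21:52 drop, the probe below (thresholds and interval are arbitrary) attempts TCP connections to 1.1.1.1:53 and alerts after consecutive failures.

```python
# Toy availability probe: alert once the resolver fails several checks in a row.
import socket
import time

def resolver_reachable(ip: str = "1.1.1.1", port: int = 53,
                       timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

failures = 0
for _ in range(10):        # bounded for the example; a real monitor loops forever
    if resolver_reachable():
        failures = 0
    else:
        failures += 1
        if failures >= 3:  # ~90 s of sustained failure at a 30 s interval
            print("ALERT: 1.1.1.1 unreachable over TCP/53")
    time.sleep(30)
```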
**Additional Insights:**
Initially, the incident looked like a BGP hijack: Cloudflare's legitimate route advertisements disappeared, and some dormant routes briefly became visible, mimicking a hijacking scenario. The investigation, however, established that the root cause was an internal error in legacy systems that manage BGP advertisements, not a malicious attack. The outage highlighted hidden fragility in internet infrastructure, particularly how legacy configurations and untested edge cases can cascade into large-scale service disruptions.
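From the outside, the two failure modes can be told apart by who, if anyone, originates the route. The sketch below uses RIPEstat's prefix-overview endpoint (field names assumed from its current schema) together with Cloudflare's well-known origin AS13335: a withdrawal leaves the prefix unannounced, while a hijack would show a foreign origin AS.

```python
# Sketch: quick external triage of "hijack vs. withdrawal" for a prefix.
import json
import urllib.parse
import urllib.request

CLOUDFLARE_ASN = 13335  # Cloudflare's origin AS

def triage(prefix: str) -> str:
    url = ("https://stat.ripe.net/data/prefix-overview/data.json?resource="
           + urllib.parse.quote(prefix, safe=""))
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp).get("data", {})
    origins = [a.get("asn") for a in data.get("asns", [])]
    if not origins:
        return "not announced (consistent with a withdrawal)"
    if CLOUDFLARE_ASN in origins:
        return f"announced by expected origin AS{CLOUDFLARE_ASN}"
    return f"announced by unexpected origin(s) {origins} (possible hijack)"

print(triage("1.1.1.0/24"))
```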
**Summary Timeline:**
| Time (UTC) | Event Description |
|------------|-------------------|
| 21:48 | Configuration change to pre-production DLS service triggers global network refresh |
| 21:52 | DNS traffic starts dropping |
| 22:01 | Incident detected and publicly disclosed |
| 22:20 | Configuration reverted; BGP prefixes re-advertised |
| 22:54 | Full service restoration |
This outage disrupted millions of users worldwide and demonstrates the critical importance of careful network configuration management in large-scale internet services. Because the faulty change was a pre-production DLS update rather than any attack, the episode is above all a reminder that pre-production configuration must be strictly isolated from production routing, and that legacy configuration systems deserve the same scrutiny as new ones if similar disruptions are to be prevented.