Key takeaways
- Global DNS outage: Cloudflare's 1.1.1.1 resolver failed worldwide for 62 minutes on July 14, 2025, due to a configuration error in their service topology.
- Root cause: A dormant misconfiguration from June 6 linked 1.1.1.1 to a non-production service. When activated, it withdrew critical IP prefixes globally.
- Traffic impact: UDP/TCP/DoT queries dropped sharply, but DNS-over-HTTPS (DoH) via cloudflare-dns.com stayed stable thanks to separate IPs.
- Unrelated hijack: Tata Communications (AS4755) advertised 1.1.1.0/24 during the outage, worsening routing issues for some users.
- Resolution: Cloudflare restored services by 22:54 UTC after reverting configurations and manually re-announcing routes.
Why 1.1.1.1 matters for the internet
You might not think much about DNS resolvers, but they’re like the phonebooks of the internet. Cloudflare’s 1.1.1.1 launched back in 2018 as a faster, privacy-focused alternative to ISP-provided DNS. It quickly became one of the most used resolvers globally, handling billions of queries daily. The service uses anycast routing to direct traffic to the nearest data center, which usually means quick responses and reliability. But on July 14, that same design amplified a failure across every continent. For users relying solely on 1.1.1.1, the internet basically stopped working: websites wouldn’t load, apps froze, and confusion spread. A lot of folks didn’t realize how dependent they’d become on this single service until it vanished.
Timeline of the outage: When everything went dark
Here’s how the incident unfolded, minute by minute:
- 21:48 UTC: A config change for Cloudflare’s Data Localization Suite (DLS) triggered a global refresh. This activated the dormant error from June 6.
- 21:52 UTC: 1.1.1.1 prefixes began withdrawing from BGP tables. DNS traffic plummeted within minutes.
- 21:54 UTC: Tata Communications (AS4755) started advertising 1.1.1.0/24—an unrelated hijack now visible due to Cloudflare’s withdrawal.
- 22:01 UTC: Internal alerts fired. Incident declared.
- 22:20 UTC: Fix deployed after reverting configurations.
- 22:54 UTC: Full service restoration after routes stabilized.
Table: Affected IP ranges during the outage
This 62-minute disruption showed how a small config error can cascade into global chaos. Engineers initially missed the June 6 mistake because it didn’t cause immediate problems—no alerts, no complaints. But when that second change hit, it all unraveled fast.
Technical breakdown: What actually broke
The core issue was a service topology misconfiguration. Cloudflare uses internal systems to map which IPs should be advertised where, especially for services like their Data Localization Suite (DLS) that restrict traffic to specific regions. On June 6, a config update accidentally tied 1.1.1.1’s prefixes to a non-production DLS service. Since that service wasn’t live yet, no one noticed.
Then, on July 14, an engineer attached a test location to that same DLS service. This triggered a global refresh of routing policies. Because of the earlier error, 1.1.1.1’s topology got reduced to one offline data center. Routers worldwide immediately withdrew announcements for its IP ranges. Traffic couldn’t reach Cloudflare’s DNS servers at all.
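Cloudflare hasn't published the exact schema of that legacy topology system, so the following is a purely hypothetical Python sketch (all service and PoP names are invented) of the general failure mode: the set of sites announcing a prefix is derived from whichever service the prefix is bound to, so a wrong binding quietly changes where, or whether, the prefix gets announced.

```python
# Hypothetical illustration only; names are invented and do not reflect
# Cloudflare's internal tooling. The point: prefix announcements are derived
# from a service-to-location binding, so a bad binding silently shrinks the
# announcement set.

SERVICE_OF_PREFIX = {
    "1.1.1.0/24": "dls-preprod",   # the dormant June 6 mis-binding
    "1.0.0.0/24": "dls-preprod",
}

LOCATIONS_OF_SERVICE = {
    "public-resolver": {"every-anycast-pop"},   # what the resolver should use
    "dls-preprod": {"offline-test-pop"},        # a single, not-yet-live test site
}

ONLINE_POPS = {"every-anycast-pop"}             # the test site serves no traffic


def announcing_pops(prefix: str) -> set[str]:
    """PoPs that will actually announce the prefix after a topology refresh."""
    service = SERVICE_OF_PREFIX[prefix]
    return LOCATIONS_OF_SERVICE[service] & ONLINE_POPS


# The July 14 refresh recomputed this for every prefix; for 1.1.1.0/24 the
# result is empty, i.e. the prefix is withdrawn from BGP everywhere.
print(announcing_pops("1.1.1.0/24"))  # set()
```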
The legacy system managing these topologies lacked safeguards like canary deployments or staged rollouts. A peer-reviewed change still went global in one shot—no gradual testing, no kill switches. Cloudflare’s newer topology system avoids hardcoded IP lists, but migrating between systems created fragility. They’ve since acknowledged this "error-prone" approach needs retiring.
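Cloudflare hasn't detailed how staged rollouts work in the replacement system, but the general pattern is simple: apply a change to a small cohort first, wait for health signals, and stop before it goes global if anything looks wrong. A minimal sketch of that gate, with invented cohort names and a placeholder health check:

```python
# Sketch of a staged (canary) rollout gate; cohorts, timings, and the health
# check are placeholders, not Cloudflare's actual tooling.
import time

ROLLOUT_STAGES = [
    ["canary-pop"],                   # one site first
    ["region-emea", "region-apac"],   # then a slice of the fleet
    ["global"],                       # only then everywhere
]


def cohort_healthy(cohort: str) -> bool:
    # Placeholder: in practice, check resolver query success rate, BGP
    # announcement counts, etc. for this cohort.
    return True


def roll_out(apply_change, roll_back) -> bool:
    for stage in ROLLOUT_STAGES:
        for cohort in stage:
            apply_change(cohort)
        time.sleep(300)  # let metrics accumulate before judging this stage
        if not all(cohort_healthy(c) for c in stage):
            roll_back()   # kill switch: the change never reaches the next stage
            return False
    return True


# Example usage with no-op callbacks:
# roll_out(lambda c: print("applying to", c), lambda: print("rolling back"))
```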
Why detection took 9 minutes: Monitoring gaps
Cloudflare’s internal alerts didn’t fire until 22:01 UTC—9 minutes after traffic nosedived. Why the delay? A few reasons stand out:
- No immediate metric drops: The BGP withdrawal caused routing failure, not server crashes. Queries didn’t fail; they never arrived. Monitoring systems tuned for server errors missed this.
- Alert thresholds: Teams avoid overly sensitive alerts to prevent false alarms. As one Hacker News comment noted, operators often wait 5+ minutes before escalating to avoid "alert fatigue".
- Legacy dependencies: Health checks relied on systems that themselves needed DNS resolution, creating blind spots during outages.
This lag highlights a tricky balance: catching failures fast without drowning teams in noise. Cloudflare’s post-mortem implies tighter BGP monitoring might help, but they haven’t detailed specific fixes yet.
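One way to close the "queries never arrived" blind spot described above is a black-box probe that runs outside the serving infrastructure and sends real queries to the public resolver IPs, alerting on timeouts rather than on server-side errors. A rough sketch using the dnspython library (the thresholds are arbitrary; this is the general idea, not a description of Cloudflare's monitoring):

```python
# Black-box resolver probe: run from vantage points outside the resolver's own
# network and alert on timeouts. Thresholds here are arbitrary examples.
import dns.exception
import dns.resolver  # pip install dnspython


def probe(resolver_ip: str, name: str = "example.com", timeout: float = 2.0) -> bool:
    """Return True if the resolver answered an A query within the timeout."""
    r = dns.resolver.Resolver(configure=False)
    r.nameservers = [resolver_ip]
    try:
        r.resolve(name, "A", lifetime=timeout)
        return True
    except (dns.exception.Timeout, dns.resolver.NoNameservers):
        return False


# During the outage, queries to 1.1.1.1 simply never arrived, so a burst of
# timeouts here would have fired even though no server reported an error.
failures = sum(1 for _ in range(5) if not probe("1.1.1.1"))
if failures >= 4:
    print("ALERT: 1.1.1.1 unreachable from this vantage point")
```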
The BGP hijack that wasn’t: Tata’s role
As Cloudflare’s routes vanished, something weird happened: Tata Communications (AS4755) started advertising 1.1.1.0/24. ThousandEyes observed this hijack propagating through some networks, worsening connectivity for users whose queries got routed to Tata.
Crucially, this wasn’t malicious. Tata likely advertised 1.1.1.0/24 due to old internal configurations—that prefix was used for testing long before Cloudflare claimed it. Once Cloudflare re-announced their routes, Tata withdrew the hijacked prefix. But for ~25 minutes, it added chaos. This incident underscores how fragile BGP can be when major routes vanish unexpectedly.
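Anyone can sanity-check who is originating a prefix using public route collectors. Here is a small sketch against the RIPEstat Data API (the endpoint and field names come from RIPEstat's public documentation; verify them against the live response). During the incident window, a check like this would have shown 1.1.1.0/24 either withdrawn or originated by AS4755 rather than Cloudflare's AS13335.

```python
# Check global BGP visibility of a prefix via the RIPEstat Data API.
# Endpoint and field names are per RIPEstat's public docs; treat the exact
# response structure as an assumption and inspect it before relying on it.
import json
import urllib.request

PREFIX = "1.1.1.0/24"
URL = f"https://stat.ripe.net/data/routing-status/data.json?resource={PREFIX}"

with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)["data"]

print("announced:", data.get("announced"))    # is the prefix visible at all?
print("visibility:", data.get("visibility"))  # how many route collectors see it
print("last seen:", data.get("last_seen"))    # includes the last observed origin ASN
```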
Impact analysis: Who felt the outage?
The outage hit hardest for users and apps relying exclusively on 1.1.1.1. But patterns emerged in the data:
- Protocol differences:
  - UDP/TCP/DoT traffic dropped ~90% (these use IPs like 1.1.1.1 directly).
  - DoH (DNS-over-HTTPS) via cloudflare-dns.com stayed near normal; its IPs weren’t tied to the faulty topology.
- Backup resolver users: People using 1.1.1.1 with 1.0.0.1 or third-party DNS (e.g., 8.8.8.8) saw minimal disruption. Failovers kicked in.
- Regional variances: Reports spiked in North America, Europe, and Asia. Cloudflare Radar confirmed global impact.
Table: Traffic recovery post-fix
Ironically, the outage proved Cloudflare’s DoH resilience. By decoupling DNS from raw IPs, it avoided single points of failure. As one user noted, "DoH was working" when traditional DNS failed.
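For comparison, here's roughly what a DoH lookup against cloudflare-dns.com looks like using Cloudflare's documented JSON API. The hostname resolves to addresses outside the withdrawn ranges, which is the separation that kept this path alive (assuming the client could still resolve cloudflare-dns.com through its system resolver or a cached answer):

```python
# Minimal DoH lookup using cloudflare-dns.com's documented JSON API.
# cloudflare-dns.com resolves to addresses outside the withdrawn 1.1.1.1 ranges,
# which is why this path stayed up during the incident.
import json
import urllib.request

req = urllib.request.Request(
    "https://cloudflare-dns.com/dns-query?name=example.com&type=A",
    headers={"accept": "application/dns-json"},
)
with urllib.request.urlopen(req, timeout=5) as resp:
    answer = json.load(resp)

for record in answer.get("Answer", []):
    print(record["data"])  # A records for example.com
```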
Lessons for the internet’s infrastructure
This outage wasn’t a cyberattack or hardware failure—it came down to process and system design flaws. Key takeaways for engineers:
- Staged rollouts save lives: Had Cloudflare used canary deployments for config changes, they’d have caught the error in one region first. Their new topology system supports this, but legacy tech didn’t.
- Validate dormant configs: "No impact" isn’t "safe." Systems must flag unused configurations that could activate later.
- Enforce resolver redundancy: Clients should always use multiple DNS resolvers (e.g., 1.1.1.1 + 8.8.8.8). Single-provider setups risk total outages; see the sketch after this list.
- Monitor routing layer: Services need BGP/advertisement visibility, not just server health.
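On the redundancy point, here's a minimal client-side sketch using the dnspython library: try resolvers from independent providers in order, so one provider's outage doesn't take your application down with it.

```python
# Client-side resolver redundancy: fall back to an unrelated provider on failure.
import dns.exception
import dns.resolver  # pip install dnspython

RESOLVERS = ["1.1.1.1", "8.8.8.8", "9.9.9.9"]  # Cloudflare, Google, Quad9


def resolve_with_fallback(name: str, rtype: str = "A") -> list[str]:
    for ip in RESOLVERS:
        r = dns.resolver.Resolver(configure=False)
        r.nameservers = [ip]
        try:
            return [rr.to_text() for rr in r.resolve(name, rtype, lifetime=2.0)]
        except (dns.exception.Timeout, dns.resolver.NoNameservers):
            continue  # this provider is unreachable; try the next one
    raise RuntimeError(f"all configured resolvers failed for {name}")


print(resolve_with_fallback("example.com"))
```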
Cloudflare has pledged to accelerate retiring legacy systems. But as they noted, "This was a humbling event." For the rest of us, it’s a reminder: even giants stumble, and backups matter.
FAQs about the Cloudflare 1.1.1.1 outage
Q: Could using 1.0.0.1 as a backup have helped?
A: Yes, but not completely. 1.0.0.1 shares infrastructure with 1.1.1.1, so both failed. Ideal backups use unrelated resolvers like Google’s 8.8.8.8 or Quad9.
Q: Why did DNS-over-HTTPS (DoH) keep working?
A: DoH uses domain names (e.g., cloudflare-dns.com), not raw IPs. Those domains resolved via unaffected infrastructure. Always prefer DoH/DoT domains over IPs for resilience.
Q: Was this a BGP hijack?
A: Partially, but not by Cloudflare. Tata’s route advertisement was a side effect of Cloudflare’s withdrawal—not the cause. It amplified issues for some users though.
Q: How often does Cloudflare go down?
A: Rarely. In the last 30 days, 1.1.1.1 had 99.09% uptime vs. 99.99% for Google’s 8.8.8.8. This was an exception, not routine.
Q: Did the outage affect other Cloudflare services?
A: Mostly no. Core CDN, security, and dashboard services use different IPs and weren’t withdrawn. The 1.1.1.1 resolver was the primary casualty.