Incident Summary
Before we get into the protocol-level details, here is the attack at a glance. The October 21, 2016 assault on Dyn remains one of the most consequential DDoS events ever recorded — not because of its raw bandwidth (larger attacks have followed), but because of its target selection. By hitting a single managed DNS provider, the attackers took down the name resolution layer for hundreds of major internet services simultaneously.
| Parameter | Detail |
|---|---|
| Date | October 21, 2016 (Friday) |
| Target | Dyn Managed DNS infrastructure (authoritative nameservers) |
| Attack Waves | 3 distinct waves over ~11 hours |
| Estimated Peak Bandwidth | ~1.2 Tbps aggregate |
| Estimated Bot Count | ~100,000 compromised IoT devices |
| Primary Vector | DNS query flood (well-formed queries for Dyn-hosted domains) |
| Secondary Vector | TCP SYN flood against DNS resolvers on port 53 |
| Affected Services | Twitter, Netflix, Reddit, Spotify, GitHub, PayPal, Airbnb, Etsy, SoundCloud, The New York Times, CNN, HBO, Visa, and others |
| Total Duration | ~11 hours (11:10 UTC to ~22:00 UTC) |
| Botnet | Mirai (source code released September 30, 2016 — 21 days prior) |
| Bot Composition | IP cameras (Dahua, Hikvision), DVRs (XiongMai), home routers |
Three-Wave Attack Timeline
The attack unfolded in three distinct waves, each escalating in scope and forcing Dyn's engineering team to fight on progressively wider fronts. The timing was deliberate: each new wave arrived after Dyn had partially recovered from the previous one, maximizing disruption and exhausting incident response teams.
Wave 1 — East Coast resolvers targeted. The first wave hit Dyn's data centers on the US East Coast. DNS query floods targeting authoritative nameservers for Dyn-hosted domains began arriving from tens of thousands of source IPs simultaneously. Traffic was concentrated on Dyn's anycast prefixes serving the eastern United States. Users attempting to reach Twitter, Reddit, and Spotify from the East Coast received SERVFAIL or timed out entirely. West Coast and European users were initially unaffected because their recursive resolvers were hitting different anycast nodes. Dyn's NOC identified the attack within minutes but the sheer volume of legitimate-looking DNS queries made surgical filtering extremely difficult.
Wave 1 mitigated. After approximately two hours, Dyn's engineering team rerouted traffic, deployed additional scrubbing capacity, and stabilized service. Most affected sites came back online. Post-wave analysis revealed the queries were well-formed DNS requests for real domains hosted on Dyn — not garbage packets — making them nearly indistinguishable from legitimate traffic at first glance.
Wave 2 — Global resolvers targeted. Two and a half hours after the first wave subsided, the second wave began. This time the attack was broader, targeting Dyn's resolver infrastructure globally rather than just East Coast nodes. The bot count appeared to increase — later analysis suggested additional Mirai botnets had joined, possibly operated by different actors using the same publicly available source code. The attack combined DNS query floods with TCP SYN floods against port 53, attempting to exhaust Dyn's connection tables. The second wave caused the most widespread outages, with sites becoming unreachable from nearly every geographic region.
Wave 2 partially mitigated. Dyn began implementing more aggressive filtering, including rate-limiting queries from IP addresses exhibiting anomalous query patterns. Upstream providers assisted with BGP-based traffic diversion. Service began recovering, though intermittent failures persisted.
Wave 3 — Retry and retreat. A third wave was launched but Dyn's mitigations, now battle-hardened from two prior engagements, held. The attack traffic was largely absorbed by the scrubbing infrastructure that had been deployed during waves 1 and 2. By approximately 22:00 UTC, the attack ceased entirely. Total elapsed time from first packet to last: roughly 11 hours.
How Mirai Built Its Army
The Mirai botnet that powered the Dyn attack did not exploit any zero-day vulnerabilities. It did not use buffer overflows, ROP chains, or sophisticated exploitation techniques. It used Telnet and a list of 62 default username/password pairs.
The recruitment process was brutally simple. Every infected device ran a scanning module that generated random IPv4 addresses and attempted TCP connections on ports 23 and 2323 (Telnet). When a connection succeeded, the bot tried each credential pair from a hardcoded table — combinations like root:xc3511, admin:admin, root:vizxv, root:888888, root:default, and admin:password. These were default credentials shipped on IP cameras manufactured by XiongMai and Dahua, DVRs, and various consumer routers.
When a credential pair worked, the bot sent the device's IP address and credentials back to a reporting server. A separate loader component then logged into the device, determined its CPU architecture (ARM, MIPS, x86, PowerPC, SuperH, or SPARC — Mirai supported all six), and uploaded the appropriate binary. The binary killed any competing malware processes (including other Mirai instances), blocked future Telnet connections to prevent reinfection by rivals, and connected to the C2 server to await attack commands.
Mirai's source code was released on Hack Forums by a pseudonymous user called "Anna-senpai" on September 30, 2016 — exactly 21 days before the Dyn attack. This meant multiple independent operators could compile and deploy their own Mirai botnets simultaneously. The Dyn attack may have involved coordination between multiple botnet operators, which would explain the escalating wave pattern.
The scanning rate was extraordinary. Each infected device scanned at approximately 250 IPs per second. With 100,000 devices scanning, the botnet was collectively probing 25 million IPs per second — enough to scan the entire IPv4 address space in under three minutes. Devices with default Telnet credentials were typically recruited within minutes of being connected to the internet.
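A quick back-of-the-envelope check of those figures — the constants come straight from the numbers above:

```python
# Back-of-the-envelope check of Mirai's collective scan rate.
# Figures from the text: ~100,000 bots, ~250 probes/sec each.
bots = 100_000
probes_per_bot = 250          # TCP SYNs to ports 23/2323 per second, per device
ipv4_space = 2**32            # ~4.29 billion addresses

aggregate_pps = bots * probes_per_bot          # collective probe rate
sweep_seconds = ipv4_space / aggregate_pps     # time to cover IPv4 once

print(f"Aggregate scan rate: {aggregate_pps:,} IPs/sec")
print(f"Full IPv4 sweep: ~{sweep_seconds / 60:.1f} minutes")
```

At 25 million probes per second, one pass over the entire IPv4 space takes under three minutes — which is why newly connected devices with default credentials were recruited almost immediately.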
The 62-Credential Table
The credential list in Mirai's source code (scanner.c) targeted specific manufacturer defaults. The top entries by hit rate were:
```
# Extracted from Mirai source (scanner.c) — credential pairs
# Format: username:password — target device type
root:xc3511        # XiongMai DVRs and IP cameras
root:vizxv         # Dahua IP cameras
root:admin         # Various routers
admin:admin        # Ubiquitous default
root:888888        # Dahua DVRs
root:xmhdipc       # XiongMai IP cameras
root:default       # Various embedded devices
root:juantech      # Juan Technologies IP cameras
root:123456        # Common weak password
root:54321         # Common weak password
support:support    # ZTE routers
root:root          # Generic Linux devices
root:12345         # Netgear routers
user:user          # Various consumer devices
admin:password     # Generic default
admin:7ujMko0admin # Various DVR/NVR brands
```
These were not randomly chosen. The Mirai authors specifically surveyed which embedded Linux devices were most prevalent on the public internet and compiled credentials accordingly. The result: a hit rate of approximately 1 in 2,000 scanned IPs, which at the scan rate above yielded thousands of new recruits per hour.
The DNS Query Flood Technique
The genius — and danger — of the Dyn attack lay in its choice of vector. Rather than sending obviously malformed traffic or amplified UDP payloads, the Mirai botnet sent legitimate DNS queries for domains actually hosted on Dyn's infrastructure.
Each bot constructed well-formed DNS query packets with the following characteristics:
- Valid query structure: Standard DNS header with QR=0 (query), OPCODE=0 (standard query), RD=1 (recursion desired), QDCOUNT=1
- Real domain names: Queries for domains like `twitter.com`, `netflix.com`, `reddit.com`, and `spotify.com` — all hosted on Dyn's nameservers
- Randomized subdomains: Queries included randomly generated subdomains (e.g., `a8f3k2x.twitter.com`) to defeat DNS caching at every layer — recursive resolvers, ISP caches, and Dyn's own caching infrastructure all had to process each query as a fresh lookup
- Standard query types: Mix of A, AAAA, and ANY record requests
- Legitimate source ports: Ephemeral ports in the normal range (32768-65535), not the fixed source ports typical of crude flooding tools
This is what made the attack so difficult to mitigate. A DNS query for r7x9m2.twitter.com looks, at the packet level, almost identical to a legitimate DNS query for www.twitter.com. Both are valid DNS protocol messages. Both target a domain legitimately hosted on Dyn. The only distinguishing feature is that the subdomain does not exist — but a DNS server must still process the query to determine that, returning an NXDOMAIN response.
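To make the packet structure concrete, here is a sketch of a minimal, well-formed DNS query in stdlib Python. The subdomain generation and domain choice are illustrative, not taken from Mirai's source:

```python
import random
import string
import struct

def build_query(domain: str, qtype: int = 1) -> bytes:
    """Build a minimal, well-formed DNS query (RFC 1035 wire format)."""
    txid = random.randint(0, 0xFFFF)          # random 16-bit transaction ID
    flags = 0x0100                            # QR=0, OPCODE=0, RD=1
    header = struct.pack(">HHHHHH", txid, flags, 1, 0, 0, 0)  # QDCOUNT=1
    # QNAME: each label is length-prefixed, terminated by a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode() for label in domain.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)  # QTYPE=A, QCLASS=IN
    return header + question

# A random 6-char subdomain guarantees a cache miss at every resolver layer.
sub = "".join(random.choices(string.ascii_lowercase + string.digits, k=6))
pkt = build_query(f"{sub}.twitter.com")
print(len(pkt), "bytes on the wire (before UDP/IP headers)")
```

Nothing in this packet is malformed — which is exactly the point. Every field a filter could inspect looks like ordinary resolver traffic.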
```
# What the attack queries looked like in tcpdump:
# (reconstructed from post-incident reports)
tcpdump -nn -i eth0 'port 53' -c 10

# 11:10:01.234 IP 201.18.42.7.48291 > 208.78.70.9.53: 12345+ A? k8m2x1.twitter.com. (36)
# 11:10:01.234 IP 177.95.110.4.52140 > 208.78.70.9.53: 23456+ AAAA? p4n7w.reddit.com. (34)
# 11:10:01.235 IP 115.238.89.12.39201 > 208.78.70.9.53: 34567+ A? v2j6q.netflix.com. (35)
# 11:10:01.235 IP 85.54.178.221.44018 > 208.78.70.9.53: 45678+ ANY? m3r8d.spotify.com. (35)
# 11:10:01.235 IP 189.112.45.8.51777 > 208.78.70.9.53: 56789+ A? x9b4n.paypal.com. (34)
# ... hundreds of thousands more per second
```
The randomized subdomain technique (sometimes called a "water torture" attack or "pseudo-random subdomain" attack) is the critical detail. Without it, recursive resolvers worldwide would cache the response for twitter.com according to the TTL and never forward the query to Dyn again. By generating a unique subdomain for every query, each one was a guaranteed cache miss that had to traverse the full resolution chain from recursive resolver to Dyn's authoritative nameservers.
TCP SYN Flood as Secondary Vector
Alongside the DNS query floods (primarily over UDP), the botnet also launched TCP SYN floods against Dyn's nameservers on port 53. DNS supports both UDP and TCP transport. Many modern resolvers use TCP for queries exceeding 512 bytes or when the server sets the TC (truncation) flag. By flooding TCP port 53 with SYN packets, the attackers attempted to exhaust Dyn's connection tables, preventing legitimate TCP-based DNS queries from completing the three-way handshake.
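The arithmetic behind connection-table exhaustion is simple: each half-open connection occupies a table entry until the handshake times out. The capacity and timeout below are illustrative assumptions, not Dyn's actual limits:

```python
# Why SYN floods exhaust state: a half-open (SYN-RECV) entry occupies the
# backlog until the handshake times out. Illustrative numbers only.
backlog_entries = 65_536        # connection-table capacity (assumed)
syn_recv_timeout = 60           # seconds a half-open entry lingers, with retries

# Steady state: a table of size N whose entries live T seconds overflows
# once the inbound SYN rate exceeds N / T.
max_sustainable_syn_rate = backlog_entries / syn_recv_timeout
print(f"Table overflows above ~{max_sustainable_syn_rate:,.0f} SYN/sec")
```

Roughly a thousand spoofed SYNs per second — trivial for a botnet of this size — is enough to keep a table of that size permanently full, locking out legitimate TCP-based DNS queries.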
Why DNS Was the Perfect Target
The Dyn attack exploited a structural vulnerability in how the internet works. DNS is the translation layer between human-readable domain names and IP addresses. When DNS fails, every service that depends on it fails — even if those services themselves are running perfectly.
Single Point of Failure at Scale
In 2016, many of the internet's largest services used Dyn as their sole managed DNS provider. Twitter's NS records pointed exclusively to Dyn nameservers (ns1.p34.dynect.net through ns4.p34.dynect.net). Netflix, Reddit, and others had similar configurations. When Dyn's authoritative nameservers became unresponsive, there was no fallback. Recursive resolvers around the world tried to resolve twitter.com, received no answer from Dyn's nameservers, and returned SERVFAIL to the end user.
```
# What users saw during the attack:
$ dig twitter.com @8.8.8.8

; <<>> DiG 9.10.3 <<>> twitter.com @8.8.8.8
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 41823
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1

;; QUESTION SECTION:
;twitter.com.            IN    A

;; Query time: 5023 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Fri Oct 21 11:45:12 EDT 2016

# Compare to normal resolution:
$ dig twitter.com @8.8.8.8

;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 58291
;; ANSWER SECTION:
twitter.com.        293    IN    A    104.244.42.65
```
DNS TTL: A Double-Edged Sword
DNS caching should, in theory, insulate users from temporary authoritative server failures. If a recursive resolver cached twitter.com = 104.244.42.65 with a 300-second TTL, users on that resolver would continue to reach Twitter for up to five minutes even if Dyn went completely offline. However, several factors undermined this protection:
- Short TTLs: Many Dyn customers used TTLs of 60 seconds or less (common for services using DNS-based load balancing or failover). These cached entries expired quickly, forcing constant re-resolution.
- Negative caching triggered by the attack itself: The flood of `NXDOMAIN` responses for randomized subdomains caused some recursive resolvers to negatively cache results under the parent domain, making the problem worse.
- Client-side DNS caching is inconsistent: Browsers, operating systems, and stub resolvers all maintain their own caches with varying TTL behavior. Some honor TTLs strictly; others flush on network changes.
- Recursive resolver load: Major recursive resolvers (Google Public DNS, OpenDNS, ISP resolvers) saw their own query volumes spike as cached entries expired and every client retry generated a new upstream query to Dyn. Some recursive resolvers became overloaded themselves, creating a cascading failure.
Recursive vs. Authoritative Impact
It is important to understand the distinction between recursive and authoritative DNS servers in the context of this attack. Dyn operated authoritative nameservers — the servers that hold the actual DNS records for their customers' domains. When an end user's recursive resolver (like 8.8.8.8 or 1.1.1.1) needed to resolve twitter.com, it queried Dyn's authoritative servers. The attack targeted the authoritative layer, but the impact cascaded to recursive resolvers because they could not get answers to relay to their clients. Recursive resolvers responded with SERVFAIL, and applications interpreted this as "the website is down" even though the web servers were running normally.
Detect DNS floods before they take you down
Flowtriq monitors per-second packet rates on every protocol. DNS query floods trigger alerts in under 2 seconds — before your resolvers start timing out. 7-day free trial.
Start Free Trial →

Protocol-Level Analysis
Reconstructing what the attack looked like at Dyn's edge requires combining information from Dyn's post-incident report, the Flashpoint analysis of the Mirai botnet, and the packet structures defined in the Mirai source code.
DNS Query Packet Structure
Each attack packet was a standard UDP datagram containing a DNS query. The packet structure was:
```
# Attack packet anatomy (DNS query over UDP)
# Total size: ~70-80 bytes per packet

+-- Ethernet Header (14 bytes) --+
| Src MAC | Dst MAC | Type: IPv4 |
+-- IPv4 Header (20 bytes) ------+
| Src IP: [real bot IP]          |  # NOT spoofed — Mirai uses real source IPs
| Dst IP: [Dyn NS IP]            |  # e.g., 208.78.70.9
| Protocol: UDP (17)             |
| TTL: 64                        |  # Default Linux TTL — IoT devices run Linux
+-- UDP Header (8 bytes) --------+
| Src Port: [ephemeral]          |  # 32768-65535, randomized per query
| Dst Port: 53                   |
| Length: ~50 bytes              |
+-- DNS Query (~40-50 bytes) ----+
| Transaction ID: [random]       |  # 16-bit random ID
| Flags: 0x0100 (RD=1)           |  # Standard query, recursion desired
| Questions: 1                   |
| QNAME: [random].twitter.com    |  # Random 5-8 char subdomain
| QTYPE: A (1) or AAAA (28)      |
| QCLASS: IN (1)                 |
+--------------------------------+
```
The critical detail: Mirai does not spoof source IPs for DNS attacks. Unlike amplification attacks that rely on spoofed sources to redirect reflected traffic, Mirai sends queries from real bot IPs. This meant Dyn could see the actual source addresses, but with 100,000 unique IPs each sending hundreds of queries per second, building a blocklist fast enough was impractical. By the time you blocked one IP, a hundred new ones had appeared.
Traffic Rates at the Edge
With ~100,000 bots each sending approximately 10,000 queries per second (well within the capability of even a consumer-grade DVR), the total query rate reached approximately 1 billion DNS queries per second at peak. Each query was 70-80 bytes on the wire, producing roughly 600-800 Gbps of inbound traffic. Combined with the TCP SYN flood component and the response traffic (Dyn's servers were generating NXDOMAIN replies for every query), the total traffic volume approached 1.2 Tbps bidirectionally.
The PPS (packets per second) rate was the more devastating metric. DNS queries are small packets. At 70 bytes each and 1 billion queries per second, Dyn's edge routers and load balancers were processing packet rates that exceeded their forwarding capacity — even when bandwidth was not saturated. This is a common pattern in DNS floods: the attack exhausts PPS budgets before it exhausts bandwidth budgets.
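Those figures can be sanity-checked in a few lines; the per-bot query rate and packet size are the estimates quoted above, not measured values:

```python
# Rough reconstruction of the peak traffic estimates quoted above.
bots = 100_000
qps_per_bot = 10_000               # estimated queries/sec per bot
bytes_per_query = 75               # midpoint of the ~70-80 byte range

total_qps = bots * qps_per_bot                 # aggregate query rate
inbound_bps = total_qps * bytes_per_query * 8  # inbound bits/sec

print(f"Aggregate query rate: {total_qps:,.0f} qps")
print(f"Inbound bandwidth:   ~{inbound_bps / 1e12:.2f} Tbps")
```

One billion small packets per second yields only ~0.6 Tbps inbound — the response traffic and SYN flood account for the rest of the ~1.2 Tbps estimate. The PPS number, not the bandwidth, is what breaks router forwarding budgets.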
Spoofed vs. Non-Spoofed Traffic Mix
Post-incident analysis revealed that the majority of DNS query flood traffic used real (non-spoofed) source IPs. However, the TCP SYN flood component did use spoofed sources to prevent SYN-ACK responses from reaching real hosts (which would have sent RST packets back). This created a mixed traffic profile:
- UDP DNS queries: Non-spoofed. Real IoT device IPs. Geographic distribution matching IoT deployment density (heavy in Brazil, Vietnam, China, Turkey).
- TCP SYN floods: Spoofed source IPs. Randomized across the IPv4 space. Intended to exhaust connection tracking state on Dyn's infrastructure.
Detection Signatures
How would you detect an attack like this on your own DNS infrastructure? The key indicators are query volume anomalies, NXDOMAIN ratio spikes, and subdomain entropy analysis.
tcpdump Filters for DNS Flood Detection
```
# Capture DNS queries at high volume and count unique source IPs
tcpdump -nn -i eth0 'udp dst port 53' -c 10000 -w /tmp/dns_flood.pcap

# Analyze: count queries per source IP
tcpdump -nn -r /tmp/dns_flood.pcap 'udp dst port 53' | \
  awk '{print $3}' | cut -d. -f1-4 | sort | uniq -c | sort -rn | head -20
# Normal traffic: top source sends ~50-100 queries in 10k sample
# DNS flood: thousands of sources each sending 50+ queries

# Monitor NXDOMAIN response ratio in real time
tcpdump -nn -i eth0 'udp src port 53' -l | \
  awk '/NXDomain/{nx++} {total++} total%1000==0{printf "NXDOMAIN ratio: %.1f%% (%d/%d)\n", nx/total*100, nx, total}'
# Normal NXDOMAIN ratio: 5-15%
# During random subdomain attack: 80-95%
```
Query Pattern Analysis
```
# Detect random subdomain attacks by measuring subdomain entropy
# High entropy (random characters) = likely attack traffic
tcpdump -nn -i eth0 'udp dst port 53' -l | \
  grep -oP '(?<=A\? |AAAA\? )[^ ]+' | \
  awk -F. '{print $1}' | \
  while read sub; do
    echo "$sub" | fold -w1 | sort -u | wc -l
  done | awk '{sum+=$1; n++} n%100==0{printf "Avg unique chars in subdomain: %.1f (>6 = suspicious)\n", sum/n}'
# Legitimate subdomains: www, mail, api, cdn — low character diversity
# Attack subdomains: k8m2x1, p4n7w, v2j6q — high character diversity
```
iptables Rate-Limiting for DNS
```
# Rate-limit inbound DNS queries per source IP
# Allow 50 queries/second burst, 20 queries/second sustained
iptables -A INPUT -p udp --dport 53 \
  -m hashlimit --hashlimit-above 20/sec --hashlimit-burst 50 \
  --hashlimit-mode srcip --hashlimit-name dns_limit \
  --hashlimit-htable-expire 10000 \
  -j DROP

# Rate-limit TCP SYN to port 53 (prevents SYN flood component)
iptables -A INPUT -p tcp --dport 53 --syn \
  -m hashlimit --hashlimit-above 10/sec --hashlimit-burst 20 \
  --hashlimit-mode srcip --hashlimit-name dns_syn_limit \
  --hashlimit-htable-expire 10000 \
  -j DROP

# Log excessive DNS query sources for analysis
iptables -A INPUT -p udp --dport 53 \
  -m hashlimit --hashlimit-above 100/sec --hashlimit-burst 200 \
  --hashlimit-mode srcip --hashlimit-name dns_log \
  -m limit --limit 5/min \
  -j LOG --log-prefix "DNS_FLOOD: " --log-level 4
```
Per-source rate limiting helps against non-spoofed floods but is ineffective against spoofed traffic. For spoofed DNS queries, you need response rate limiting (RRL) on the DNS server itself — a feature supported by BIND (since 9.9.4), Unbound, Knot DNS, and PowerDNS.
Lessons and Aftermath
The Dyn attack triggered immediate and lasting changes across the internet infrastructure industry. Some were technical. Others were regulatory. All were overdue.
DNS Redundancy: Multi-Provider Is No Longer Optional
The most immediate lesson was that relying on a single DNS provider is a single point of failure. Within weeks of the attack, major services began configuring secondary DNS providers. Twitter added NS records pointing to additional providers alongside Dyn. Netflix implemented multi-provider DNS with automated failover. The practice of listing nameservers from two or more independent DNS providers in a domain's NS records became standard for any service that could not tolerate DNS-layer outages.
```
# Before the attack — single provider:
$ dig NS twitter.com

twitter.com.    172800    IN    NS    ns1.p34.dynect.net.
twitter.com.    172800    IN    NS    ns2.p34.dynect.net.
twitter.com.    172800    IN    NS    ns3.p34.dynect.net.
twitter.com.    172800    IN    NS    ns4.p34.dynect.net.

# After — multi-provider DNS:
$ dig NS twitter.com

twitter.com.    172800    IN    NS    ns1.p34.dynect.net.
twitter.com.    172800    IN    NS    ns2.p34.dynect.net.
twitter.com.    172800    IN    NS    a.r53-64.awsdns-07.com.
twitter.com.    172800    IN    NS    b.r53-65.awsdns-08.co.uk.
```
IoT Security Legislation
The Dyn attack was a catalyst for IoT security regulation. California's SB-327 (effective January 2020) became the first US state law requiring IoT manufacturers to ship devices with unique passwords rather than universal defaults. The UK's Product Security and Telecommunications Infrastructure Act (2022) imposed similar requirements. The EU's Cyber Resilience Act (2024) went further, mandating ongoing security updates for connected devices throughout their lifecycle. None of these laws existed when Mirai was built. All of them trace their legislative urgency to October 21, 2016.
BCP38 and Source Address Validation
The TCP SYN flood component of the Dyn attack relied on IP source address spoofing. BCP38 (RFC 2827), published in 2000, describes ingress filtering that ISPs should implement to prevent their customers from sending packets with spoofed source addresses. Sixteen years after its publication, adoption was still incomplete. The Dyn attack renewed industry pressure on ISPs to deploy BCP38 filtering. The MANRS (Mutually Agreed Norms for Routing Security) initiative gained significant momentum in the following years, with major carriers committing to anti-spoofing measures.
Response Rate Limiting (RRL)
DNS server operators accelerated adoption of RRL — a mechanism that limits the rate of identical or near-identical responses to the same client. When a DNS server detects that it is sending an unusually high number of NXDOMAIN responses to queries for random subdomains of the same parent domain, RRL truncates or drops responses. This does not stop the inbound flood, but it prevents the DNS server from exhausting its own resources generating replies and prevents the server from being used as a reflector.
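In BIND 9 (9.9.4 and later), RRL is configured with a `rate-limit` block inside `options`. The values below are illustrative starting points, not tuned recommendations:

```
// named.conf fragment — illustrative values, tune against your zone's traffic
options {
    rate-limit {
        responses-per-second 10;  // cap identical responses per client/subnet
        nxdomains-per-second 5;   // tighter cap on NXDOMAIN (random-subdomain floods)
        errors-per-second 5;
        window 15;                // seconds over which rates are averaged
        slip 2;                   // every 2nd dropped reply sent truncated instead,
                                  // so legitimate clients can retry over TCP
    };
};
```

The `slip` setting is the key design choice: by occasionally answering with a truncated response instead of silence, legitimate (non-spoofed) clients fall back to TCP and still resolve, while reflected traffic to spoofed victims stays suppressed.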
How Flowtriq Would Detect This Pattern
Flowtriq's per-second PPS monitoring would flag a Dyn-style attack almost immediately. The detection logic works at multiple layers:
- PPS anomaly detection: A sudden spike from a baseline of, say, 5,000 DNS queries/second to 500,000+ would trigger an alert within 2 seconds. The agent continuously samples packet rates and fires when the rate exceeds the learned baseline by a configurable multiplier.
- Protocol classification: Flowtriq classifies traffic by protocol. A flood concentrated on UDP port 53 would be automatically categorized as a DNS flood, distinguishing it from generic UDP floods or SYN floods.
- PCAP capture: At the moment of detection, Flowtriq captures a PCAP sample of the attack traffic. This sample would reveal the random subdomain pattern, the geographic distribution of source IPs, and the packet structure — giving the NOC team everything they need to craft precise mitigation rules or request upstream filtering.
- Multi-channel alerting: Alerts fire simultaneously to Discord, Slack, email, SMS, PagerDuty, OpsGenie, or webhook — ensuring the on-call engineer sees the attack regardless of the communication channel they are monitoring.
The fundamental advantage of per-second detection is time. The Dyn attack went undetected for several minutes. With per-second PPS monitoring, the anomaly would be flagged before the first recursive resolver timed out — giving defenders a window to act before users notice anything wrong.
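The baseline-multiplier logic described above can be sketched in a few lines. This is a minimal illustration of the general technique (an EWMA baseline with a threshold multiplier), not Flowtriq's actual detection code; all parameter values are assumptions:

```python
# Minimal sketch of baseline-multiplier PPS anomaly detection.
# Illustrative only — parameters (alpha, multiplier) are assumptions.
def make_detector(alpha: float = 0.05, multiplier: float = 10.0):
    """Return a closure that flags per-second PPS samples vs. an EWMA baseline."""
    baseline = None

    def observe(pps: float) -> bool:
        nonlocal baseline
        if baseline is None:
            baseline = pps            # seed the baseline with the first sample
            return False
        alert = pps > baseline * multiplier
        if not alert:                 # only learn from traffic deemed normal,
            baseline = (1 - alpha) * baseline + alpha * pps
        return alert                  # so the flood can't drag the baseline up

    return observe

detect = make_detector()
normal = [5_000, 5_200, 4_900, 5_100] * 5      # ~5k qps steady state
assert not any(detect(s) for s in normal)
print("flood sample flagged:", detect(500_000))  # Dyn-scale spike
```

Freezing the baseline during an alert is the important detail: a detector that keeps averaging attack traffic into its baseline will declare the flood "normal" within minutes.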
Key takeaway: The Dyn attack did not break any records for raw bandwidth. What made it devastating was target selection — hitting the DNS layer meant a relatively modest botnet could take down services that individually had the capacity to absorb far larger attacks. The lesson for defenders: monitor every layer of your dependency chain, not just your own uplink. If your DNS provider goes down, your servers are irrelevant.
Protect your infrastructure with Flowtriq
Per-second DDoS detection, automatic attack classification, PCAP forensics, and instant multi-channel alerts. $9.99/node/month.
Start your free 7-day trial →