
Incident Summary

Parameter             Detail
Date                  February 28, 2018 — 17:21 UTC
Peak Bandwidth        1.35 Tbps
Peak PPS              126.9 million packets per second
Duration              ~20 minutes (active attack window)
Attack Vector         Memcached UDP amplification (port 11211)
Amplification Factor  Up to 51,000x
Target                GitHub.com (192.30.252.0/22)
Mitigation Provider   Akamai Prolexic
Time to Mitigation    ~8 minutes (traffic rerouted by 17:30 UTC)
Downtime              ~5 minutes of intermittent unavailability

This was not a botnet attack. No malware was involved. No compromised hosts were coordinated by a command-and-control server. The attacker exploited a fundamental design flaw in the memcached protocol: it responds to unauthenticated UDP requests with arbitrarily large payloads, and it never verifies whether the source IP address in the request is legitimate.

Attack Timeline

17:21 UTC

First anomaly detected. GitHub's internal monitoring systems registered a sudden spike in inbound UDP traffic. Within seconds, inbound bandwidth exceeded normal levels by several orders of magnitude. The traffic consisted entirely of UDP packets with source port 11211 arriving from thousands of distinct IP addresses worldwide. GitHub's network operations team was alerted automatically.

17:26 UTC

Traffic rerouted to Akamai Prolexic. GitHub's incident response team made the decision to route traffic through Akamai's Prolexic DDoS mitigation service. BGP announcements were updated to redirect GitHub's IP prefixes through Akamai's scrubbing centers. This is a standard failover procedure that GitHub had pre-configured with Akamai as part of their DDoS response plan. The BGP propagation began immediately.

17:30 UTC

Attack peaks at 1.35 Tbps. The attack reached its maximum volume — 1.35 terabits per second of inbound traffic, with a packet rate of 126.9 million packets per second. Akamai's scrubbing infrastructure absorbed the attack traffic and forwarded only legitimate requests to GitHub's origin servers. Akamai later confirmed this was the largest attack they had ever mitigated at the time.

17:31 UTC

Second wave. A brief second wave of attack traffic arrived, peaking around 400 Gbps. Akamai's scrubbing infrastructure handled this without additional intervention. The attack pattern suggested the attacker was testing whether a different set of memcached reflectors could bypass the mitigation.

~17:40 UTC

Attack subsides. Inbound attack traffic dropped to near zero, roughly 20 minutes after the first packet arrived. GitHub confirmed full service restoration and began their internal post-mortem process.

"Between 17:21 and 17:30 UTC on February 28th, we identified and mitigated a significant volumetric DDoS attack." — GitHub Engineering Blog, March 1, 2018

How Memcached Amplification Works

Memcached is a general-purpose distributed memory caching system designed to speed up dynamic web applications by caching data in RAM. It was created by Brad Fitzpatrick in 2003 for LiveJournal. It was never designed to be exposed to the public internet. It has no authentication mechanism. It has no access control. It listens on TCP and UDP port 11211, and by default (prior to version 1.5.6), both protocols were enabled.

The amplification attack exploits three properties of the memcached UDP protocol:

  1. No authentication. Memcached accepts and responds to any request on UDP port 11211 without requiring a handshake, token, or credential of any kind. Unlike TCP, UDP has no connection establishment phase that would require a three-way handshake to verify the sender's identity.
  2. Asymmetric response sizes. A get command requesting a cached key is typically 15-50 bytes. The response containing the cached value can be up to 1 MB (the default maximum value size in memcached). A single stats command (6 bytes plus headers) returns kilobytes of server statistics. Pre-populated keys using set commands allow attackers to store large payloads on exposed memcached instances before launching the amplification attack.
  3. IP spoofing via UDP. Because UDP is connectionless, an attacker can forge the source IP address in the request packet. The memcached server sends its response to the spoofed source address — which is the victim's IP. The attacker never receives the response. The victim receives an avalanche of unsolicited memcached data.

The mechanics step by step

The attacker first scans the internet for memcached instances listening on UDP port 11211. At the time of the GitHub attack, Shodan and Censys indexes showed approximately 100,000 exposed memcached servers worldwide. Many were running on cloud instances, VPS hosts, and misconfigured development servers.
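Defenders can run the same kind of sweep against their own address space. A minimal sketch using nmap (the 10.0.0.0/24 range is a placeholder; scan only networks you are authorized to test):

```shell
# Probe your OWN range for hosts answering on UDP 11211
nmap -sU -p 11211 --open 10.0.0.0/24

# Grepable output makes it easy to extract the responding hosts
nmap -sU -p 11211 --open -oG - 10.0.0.0/24 | \
  awk '/11211\/open/ {print $2}'
```

Any host this prints is a candidate reflector and should be firewalled or rebound to a private interface.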

Before the attack, the attacker (or an automated tool) connects to these servers and stores large payloads using the set command:

# Attacker pre-loads large values onto exposed memcached servers
# This is done via direct TCP connection (no spoofing needed here)
echo -e "set attack_payload 0 0 750000\r\n$(python3 -c 'print("A"*750000)')\r\n" | nc 203.0.113.45 11211

During the attack, the attacker sends a small UDP packet with a spoofed source IP (set to GitHub's address) to each exposed memcached server:

# The actual attack packet (conceptual representation)
# Source IP: 192.30.253.113 (GitHub — spoofed)
# Destination IP: 203.0.113.45 (memcached reflector)
# Source Port: 80 (arbitrary)
# Destination Port: 11211
# Payload: "\x00\x00\x00\x00\x00\x01\x00\x00get attack_payload\r\n"
# Total request size: ~36 bytes
# Response size: ~750,000 bytes
# Amplification factor: ~20,833x

The memcached server receives this 36-byte request, looks up the key attack_payload, and sends the 750,000-byte response to the spoofed source address — GitHub. Multiply this across tens of thousands of reflectors, and the result is 1.35 Tbps of traffic arriving at a single target from legitimate servers around the world.
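The aggregate arithmetic is worth seeing explicitly. A back-of-the-envelope sketch (the reflector count and per-reflector request rate are illustrative assumptions, not measured values from the attack):

```shell
# Each reflector returns one 750 KB response per spoofed 36-byte request.
# Assume 10,000 reflectors, each triggered 30 times per second.
reflectors=10000
response_bytes=750000
requests_per_sec=30

bits_per_sec=$(( reflectors * response_bytes * requests_per_sec * 8 ))
echo "victim receives: $bits_per_sec bits/sec"   # 1800000000000 (~1.8 Tbps)

# Meanwhile the attacker's own outbound cost is tiny:
attacker_bps=$(( reflectors * 36 * requests_per_sec * 8 ))
echo "attacker sends:  $attacker_bps bits/sec"   # 86400000 (~86 Mbps)
```

Under these assumptions, roughly 86 Mbps of spoofed requests becomes 1.8 Tbps at the victim — a single well-connected machine is enough.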

Amplification factor: 10,000x to 51,000x

The theoretical maximum amplification factor depends on the size of the cached value relative to the request. Cloudflare and Arbor Networks independently measured amplification factors in real-world attacks:

  • A stats request (6 bytes payload + UDP headers) returning ~3 KB of statistics produces roughly a 500x amplification.
  • A get request for a 100 KB cached value produces roughly a 5,000x amplification.
  • A get request for a 750 KB cached value produces roughly a 20,000x amplification.
  • Using multi-get requests for multiple pre-loaded keys, researchers demonstrated amplification factors exceeding 51,000x — the highest ever recorded for any amplification vector.
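The factors in that list fall straight out of the response-to-request byte ratio. A quick sketch (request sizes are approximations consistent with the figures above):

```shell
# Amplification factor = response bytes / request bytes
amp() { echo $(( $1 / $2 )); }

amp 3000 6       # stats: ~3 KB reply, 6-byte request       -> 500
amp 102400 20    # get of a 100 KB value, ~20-byte request  -> 5120
amp 750000 36    # get of a 750 KB value, 36-byte request   -> 20833
```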

For comparison, DNS amplification typically achieves 28-54x. NTP amplification achieves 556x. SSDP achieves roughly 30x. Memcached's 51,000x amplification factor was — and remains — in a category of its own.

Detect amplification attacks in under 2 seconds

Flowtriq monitors PPS, BPS, and protocol distribution every second. Memcached amplification, DNS reflection, NTP monlist — classified automatically with PCAP evidence.

Start your free 7-day trial →

Protocol-Level Breakdown

Understanding what this attack looks like on the wire is essential for building effective detection. Memcached's UDP transport adds a framing header that is absent from the TCP variant: each UDP datagram carries an 8-byte memcached header before the response data:

# Memcached UDP response header (8 bytes)
# Bytes 0-1: Request ID
# Bytes 2-3: Sequence number (for multi-datagram responses)
# Bytes 4-5: Total number of datagrams in this response
# Bytes 6-7: Reserved (0x0000)

# In attack traffic, these headers are visible:
00 00  # Request ID: 0
00 01  # Sequence number: 1 (datagrams are numbered from 0)
00 0f  # Total datagrams: 15
00 00  # Reserved

# Followed by the actual memcached response:
56 41 4c 55 45 20  # "VALUE " (ASCII)
61 74 74 61 63 6b  # "attack" (ASCII)
...

Because memcached responses frequently exceed the UDP maximum datagram size (65,535 bytes minus headers) and certainly exceed the Ethernet MTU of 1500 bytes, the IP layer fragments them. A single memcached response of 750 KB produces approximately 536 IP fragments at a standard 1400-byte payload per fragment. This fragmentation pattern is a critical detection signal.
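The fragment count is simple ceiling division; a quick check of the arithmetic:

```shell
response_bytes=750000
frag_payload=1400    # typical per-fragment payload at a 1500-byte MTU

# Round up: a partial final fragment still counts
fragments=$(( (response_bytes + frag_payload - 1) / frag_payload ))
echo "$fragments fragments"   # 536
```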

IP spoofing: why UDP makes this possible

TCP-based amplification attacks are generally not feasible because TCP requires a three-way handshake (SYN, SYN-ACK, ACK) before data exchange. The server sends the SYN-ACK to the spoofed address, and since the victim never initiated the connection, it responds with RST or drops the packet. The handshake never completes, and no data is exchanged.

UDP has no such handshake. A single spoofed packet is sufficient to trigger a response. The memcached server has no way to verify that the source address in the IP header is legitimate. This is by design — UDP was built for speed, not verification. BCP38 (RFC 2827) recommends that ISPs implement ingress filtering to prevent IP spoofing, but adoption remains incomplete. Estimates suggest that roughly 25-30% of autonomous systems on the internet still allow outbound spoofed packets, which is more than enough to power massive amplification attacks.

Detecting memcached reflection with tcpdump

The following tcpdump filters capture the key signatures of a memcached amplification attack in progress:

# Capture all inbound UDP traffic from source port 11211
# This is the most direct indicator of memcached reflection
$ tcpdump -nn -i eth0 'udp src port 11211' -c 100

# Capture with packet content to verify memcached headers
# Look for "VALUE" or "STAT" in the payload
$ tcpdump -nn -X -i eth0 'udp src port 11211 and len > 1000' -c 20

# Monitor IP fragmentation rates (fragments indicate large responses)
$ tcpdump -nn -i eth0 '(ip[6:2] & 0x1fff) != 0' -c 1000

# Capture the initial fragment (has UDP header) and subsequent fragments
# Initial: fragment offset = 0, MF flag set
# Subsequent: fragment offset > 0
$ tcpdump -nn -i eth0 'udp src port 11211 or ((ip[6:2] & 0x1fff) != 0)' -c 500

# Count unique source IPs sending memcached responses (many reflectors)
$ tcpdump -nn -i eth0 'udp src port 11211' -c 10000 2>/dev/null | \
  awk '{print $3}' | cut -d. -f1-4 | sort -u | wc -l

Why This Attack Was Different

The GitHub attack did not set the record merely by scale. It represented a qualitative shift in how DDoS attacks work. Every previous record-setting attack relied on botnets — large networks of compromised devices coordinated to generate traffic.

Attack    Year  Peak       Vector                         Botnet Required
Spamhaus  2013  300 Gbps   DNS amplification              Partial (small botnet + open resolvers)
BBC       2015  602 Gbps   Multi-vector (NTP, DNS, SSDP)  Yes
Dyn DNS   2016  ~1.2 Tbps  Mirai botnet (IoT devices)     Yes (~100,000 devices)
OVH       2016  ~1.1 Tbps  Mirai botnet (IoT devices)     Yes (~145,000 devices)
GitHub    2018  1.35 Tbps  Memcached amplification        No

The Mirai attack on Dyn in October 2016 required the attacker to build and maintain a botnet of approximately 100,000 compromised IoT devices — cameras, DVRs, routers. That infrastructure took months to build, required persistent command-and-control servers, and was vulnerable to takedown efforts.

The GitHub attack required none of that. The attacker needed only a single machine with the ability to send spoofed UDP packets and a list of open memcached servers (readily available via Shodan). No malware. No botnets. No C2 infrastructure. The "botnet" was the internet itself — tens of thousands of legitimate but misconfigured memcached instances that amplified the attacker's traffic by a factor of tens of thousands.

This is the fundamental reason the GitHub attack changed the DDoS landscape. It proved that a single attacker with modest resources could generate over a terabit per second of attack traffic by exploiting protocol-level design flaws in widely deployed software. The barrier to launching a record-breaking attack dropped from "build a botnet" to "run a script."

The Internet's Response

The security community's response was swift and coordinated — arguably one of the fastest collective responses to a new attack vector in internet history.

Within 24 hours:

  • Cloudflare published a detailed technical analysis of the memcached amplification vector, including amplification factor measurements and mitigation recommendations. Their blog post included packet captures and protocol analysis that became the reference material for network operators worldwide.
  • Akamai released their incident report confirming the 1.35 Tbps peak and detailing how Prolexic handled the scrubbing.
  • Arbor Networks (now NETSCOUT) published their own analysis through their ATLAS threat intelligence platform, confirming they had observed memcached amplification probes in the weeks leading up to the GitHub attack.

Within 48 hours:

  • US-CERT updated its standing alert on UDP-based amplification attacks (TA14-017A) to cover memcached and provide mitigation guidance.
  • Multiple ISPs and transit providers began filtering UDP port 11211 at their network borders. Some implemented rate limiting; others blocked the port entirely for traffic crossing their backbone.
  • Major cloud providers (AWS, GCP, Azure) implemented or strengthened egress filtering for UDP port 11211, preventing memcached instances running on their platforms from being used as reflectors.

Within one week:

  • The memcached project released version 1.5.6, which disabled UDP support by default (equivalent to -U 0); administrators now had to explicitly re-enable UDP with -U 11211.
  • Shodan scans showed the number of exposed memcached servers dropping from approximately 100,000 to under 50,000 as operators patched or firewalled their installations.
  • Several DDoS-for-hire services (booters/stressers) began advertising "memcached amplification" as a new attack option, demonstrating how quickly weaponization follows public disclosure.

Five days after the GitHub attack, on March 5, 2018, Arbor Networks reported an even larger memcached amplification attack — 1.7 Tbps — targeting an unnamed US-based service provider. This attack was mitigated by the provider's upstream transit and never resulted in significant downtime, but it confirmed that the GitHub incident was not an isolated event. The memcached amplification vector was being actively exploited at scale.

Detection Signatures

If you are operating network infrastructure, these are the specific patterns that indicate a memcached amplification attack is underway or imminent.

PPS and BPS characteristics

Memcached amplification produces a distinctive ratio between packets per second and bits per second. Because the response packets are large (typically 1400 bytes due to IP fragmentation at the MTU boundary), the bytes-per-packet ratio during an attack is much higher than during normal traffic. Normal web traffic averages 400-600 bytes per packet. During memcached amplification, the average climbs to 1200-1400 bytes per packet. A sudden shift in this ratio is a reliable early indicator.
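That bytes-per-packet ratio can be watched with nothing more than the kernel's interface counters. A minimal sketch (assumes Linux and an interface named eth0; the 1100-byte threshold is an illustrative cutoff, not a tuned value):

```shell
#!/bin/sh
# Sample inbound byte and packet counters twice, one second apart,
# and compute the average bytes-per-packet over the interval.
iface="eth0"

sample() { awk -v i="$iface:" '$1 == i {print $2, $3}' /proc/net/dev; }

set -- $(sample); b1=$1; p1=$2
sleep 1
set -- $(sample); b2=$1; p2=$2

pkts=$(( p2 - p1 ))
[ "$pkts" -gt 0 ] || exit 0
bpp=$(( (b2 - b1) / pkts ))
echo "avg inbound bytes/packet: $bpp"

# Illustrative threshold: sustained >1100 B/pkt suggests amplification
[ "$bpp" -gt 1100 ] && echo "ALERT: possible amplification traffic"
```

A real deployment would run this in a loop and compare against a learned baseline rather than a fixed number, but the signal itself is this cheap to compute.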

Flow record analysis

In NetFlow or sFlow records, memcached amplification appears as thousands of flows with source port 11211, a single destination IP (the victim), and high byte counts per flow. The source IPs are globally distributed — you will see reflectors from every continent. No two consecutive flows share the same source IP, but they all target the same destination.
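If you export NetFlow, the same pattern is queryable after the fact. A sketch using nfdump (assumes flows are collected under /var/cache/nfcapd; the directory and the victim IP 198.51.100.10 are placeholders for your environment):

```shell
# Top reflector source IPs by bytes, filtered to the victim destination
nfdump -R /var/cache/nfcapd \
  'proto udp and src port 11211 and dst ip 198.51.100.10' \
  -s srcip/bytes -n 25

# Count distinct reflector IPs across the matching flows
# (CSV column 4 is the source address in nfdump's default layout)
nfdump -R /var/cache/nfcapd -o csv \
  'proto udp and src port 11211 and dst ip 198.51.100.10' | \
  awk -F, 'NR > 1 && $4 ~ /^[0-9]/ {print $4}' | sort -u | wc -l
```

Thousands of distinct sources against one destination, all with source port 11211, is the reflection signature described above.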

iptables detection and mitigation rules

# Log and count memcached amplification packets (detection)
# The limit match keeps the LOG rule from flooding syslog during an attack
iptables -A INPUT -p udp --sport 11211 -m limit --limit 10/min -j LOG \
  --log-prefix "MEMCACHED-AMP: " --log-level 4
iptables -A INPUT -p udp --sport 11211 -j DROP

# Rate-limit as an alternative to hard drop (allows some legitimate traffic)
iptables -A INPUT -p udp --sport 11211 -m limit \
  --limit 10/sec --limit-burst 20 -j ACCEPT
iptables -A INPUT -p udp --sport 11211 -j DROP

# Also block fragmented packets that may be part of amplified responses
# (subsequent fragments lack UDP headers, so port-based rules miss them)
iptables -A INPUT -f -m limit --limit 100/sec --limit-burst 200 -j ACCEPT
iptables -A INPUT -f -j DROP

# nftables equivalent (modern Linux)
nft add rule inet filter input udp sport 11211 counter log prefix \
  \"memcached-amp: \" drop

Check if your memcached is exposed

This one-liner tests whether a memcached instance is accessible on UDP port 11211 from the public internet. Run it against your own servers to verify they are not acting as open reflectors:

# Check if memcached is responding on UDP 11211
# Replace YOUR_SERVER_IP with the IP to test
echo -en "\x00\x00\x00\x00\x00\x01\x00\x00stats\r\n" | \
  nc -q1 -u YOUR_SERVER_IP 11211

# If you get a response containing "STAT pid" or similar output,
# your memcached is exposed and can be used as a reflector.

# To verify from the server itself:
ss -ulnp | grep 11211
# If this shows 0.0.0.0:11211 or :::11211, memcached is listening
# on all interfaces and is likely exposed.

Lessons for Infrastructure Teams

1. Default-open UDP services are time bombs

Memcached's default configuration before version 1.5.6 — listening on all interfaces, UDP enabled, no authentication — is a textbook example of insecure defaults. The memcached developers designed the software for trusted local networks and never anticipated it being deployed on internet-facing servers. But deploy it on the internet people did, because the default configuration worked and nothing forced them to restrict access.

This pattern repeats across the industry. SNMP, NTP's monlist, SSDP, CHARGEN, DNS open resolvers — every significant amplification vector traces back to a UDP service with insecure defaults deployed on the public internet. If you operate any UDP service, audit whether it is reachable from the internet. If it does not need to be, bind it to localhost or a private interface.
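For memcached itself, the fix is two flags. A sketch of a hardened invocation (paths and service names vary by distribution; /etc/memcached.conf is the Debian/Ubuntu convention):

```shell
# Bind to loopback only and disable UDP entirely
# (UDP is off by default since 1.5.6; loopback binding is an extra restriction)
memcached -d -m 64 -p 11211 -l 127.0.0.1 -U 0

# Debian/Ubuntu: persist the same settings in /etc/memcached.conf
#   -l 127.0.0.1
#   -U 0

# Verify nothing is listening on UDP 11211 afterwards
ss -ulnp | grep 11211 || echo "OK: no UDP listener on 11211"
```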

2. Sub-second detection is not optional

GitHub detected the anomaly within seconds and initiated mitigation within 5 minutes. Most organizations are not GitHub. They do not have a pre-configured relationship with Akamai Prolexic. They do not have automated BGP failover to a scrubbing service. For most targets, the time between "attack starts" and "link is saturated" is measured in seconds, not minutes.

The GitHub attack reached 1.35 Tbps within 9 minutes of the first packet. On a 10 Gbps link, that volume of traffic would saturate the connection in under one second. If your monitoring polls at 60-second intervals, you are blind for 59 of those seconds. You find out about the attack when your customers tell you the site is down.

Flowtriq samples /proc/net/dev counters every second and maintains dynamic baselines using exponentially weighted moving averages. When inbound BPS or PPS exceeds learned thresholds, the alert fires in under 2 seconds — not the next time your SNMP poller runs.

3. Amplification attacks bypass traditional defenses

Traditional rate-limiting and IP-based blocking are ineffective against amplification attacks because the traffic comes from legitimate servers. You cannot blocklist 50,000 memcached servers belonging to cloud providers, hosting companies, and universities without blocking legitimate users from those same networks. The only effective response is upstream scrubbing (like Akamai Prolexic) or ISP-level filtering (like BGP blackhole routing).

But you need to detect the attack before you can mitigate it. And you need to detect it fast — before your link fills completely, because once the link is saturated, you may not be able to reach your upstream provider to request mitigation. This is why automated detection and automated mitigation triggers are critical. Flowtriq's webhook integration can fire an API call to your scrubbing service the moment an attack is classified, removing the human from the time-critical path.

4. BCP38 is still not universally deployed

Every amplification attack relies on IP spoofing. BCP38 (RFC 2827), published in 2000, describes ingress filtering that prevents spoofed packets from leaving a network. If every ISP implemented BCP38, memcached amplification — and every other reflection/amplification attack — would be impossible. Eighteen years after its publication, at the time of the GitHub attack, an estimated 25-30% of autonomous systems still did not implement ingress filtering. The Spoofer project run by CAIDA continues to track this, and progress remains slow.

This is a collective-action problem. No individual network benefits from deploying BCP38 — it protects other networks from spoofed traffic originating from yours. Until regulators or market forces incentivize universal deployment, amplification attacks will remain viable.

5. PCAP evidence accelerates response

When GitHub contacted Akamai, they had clear packet-level evidence of the attack vector. This is not always the case. Many organizations detect "high bandwidth" but cannot tell their upstream provider what kind of traffic it is, which ports are involved, or what protocol is being abused. Without this information, the upstream provider has to analyze the traffic themselves before they can build scrubbing rules, adding minutes to the mitigation timeline.

Having PCAP evidence ready when you make the call — or better yet, attached to an automated webhook — compresses the response time from minutes to seconds. Flowtriq maintains a rolling pre-attack PCAP buffer specifically for this purpose: when an attack is detected, the capture includes packets from before detection fired, giving you the complete picture from the attack's first packet.

How Flowtriq Detects This Pattern

Flowtriq's detection engine would catch a GitHub-style memcached amplification attack through multiple independent signals firing simultaneously:

  • BPS anomaly: Inbound bytes per second exceeds the dynamic baseline. At 1.35 Tbps, this triggers instantly on any link.
  • PPS anomaly: 126.9 Mpps is orders of magnitude above normal for any single server or cluster.
  • Protocol shift: UDP jumps from its normal percentage to 99%+ of inbound traffic.
  • IOC match: Source port 11211 is a known memcached amplification indicator. Flowtriq's pattern library flags this automatically.
  • Fragment anomaly: IP fragmentation rate spikes dramatically due to oversized memcached responses.
  • Source diversity: Thousands of unique source IPs converging on a single destination — a signature pattern of reflection attacks.

All six signals would fire within the first second. The attack would be classified as "memcached amplification" with high confidence. Alerts would be dispatched to all configured channels — Discord, Slack, PagerDuty, OpsGenie, email, SMS, webhooks — within 2-3 seconds of the first attack packet. If firewall rules are configured, Flowtriq can trigger upstream scrubbing via webhook before a human even reads the alert.

The 8-minute gap: GitHub took approximately 8 minutes from detection to full mitigation through Akamai. With Flowtriq's automated webhook-triggered mitigation, that gap shrinks to seconds. The difference between 8 minutes and 8 seconds of exposure can mean the difference between a blip on your status page and a full outage.

Final Notes

The February 2018 GitHub attack was a watershed moment in DDoS history. It proved that amplification attacks had reached a scale where even the largest, most well-defended services on the internet could be temporarily knocked offline. It demonstrated that a single attacker with no botnet infrastructure could generate traffic volumes that previously required months of preparation and tens of thousands of compromised devices.

But it also showed that preparation works. GitHub had a response plan. They had a pre-configured relationship with a scrubbing provider. They had monitoring that detected the anomaly in seconds. They were offline for 5 minutes out of a 20-minute attack. That is not a failure — that is a success story. The failure would have been discovering the attack when customers started tweeting.

The question for your infrastructure is simple: if 1.35 Tbps of memcached amplification traffic hit your network right now, how long would it take you to know? And once you knew, how long would it take you to do something about it?

Flowtriq plans start at $9.99/mo per node with memcached amplification detection included. Start your free 7-day trial and find out how fast your detection can be.
