This is not an OSI model tutorial. If you are reading a Flowtriq blog post, you already know what a packet is. What follows is a focused examination of TCP, UDP, and BGP from the angle that matters most for infrastructure operations: where these protocols have structural properties that attackers exploit, what those attacks look like in practice, and what you can do about them at the kernel and network level.
TCP: The Handshake Is the Vulnerability
The State Machine You Are Actually Defending
TCP is a stateful protocol. Every connection progresses through a defined sequence of states: SYN_SENT → SYN_RECEIVED → ESTABLISHED → various FIN_WAIT and TIME_WAIT states on teardown. The server maintains state for every connection in this sequence, storing it in kernel memory. This is by design — it is what makes TCP reliable — and it is the fundamental attack surface for connection-based DDoS.
Why SYN Floods Work
A SYN flood exploits the asymmetry in TCP's three-way handshake. When your server receives a SYN, it allocates a socket, sends a SYN-ACK, and transitions to SYN_RECV state, where it waits for the client's ACK to complete the handshake. This half-open connection consumes memory and occupies a slot in the SYN backlog queue.
The backlog queue is finite. The default tcp_max_syn_backlog on many Linux distributions is 128 to 512 entries. An attacker sending 10,000 SYN packets per second from spoofed source IPs exhausts this queue almost instantly. New legitimate connection attempts receive no response — the queue is full — and your service becomes unreachable even though the server itself is not under CPU pressure.
# Check current SYN backlog settings:
sysctl net.ipv4.tcp_max_syn_backlog
sysctl net.ipv4.tcp_syncookies

# Harden against SYN floods:
# Enable SYN cookies — eliminates the backlog queue for new connections
sysctl -w net.ipv4.tcp_syncookies=1
# Increase the backlog queue size
sysctl -w net.ipv4.tcp_max_syn_backlog=4096
# Reduce SYN-ACK retries (faster cleanup of half-open connections)
sysctl -w net.ipv4.tcp_synack_retries=2
SYN cookies solve the backlog exhaustion problem by eliminating the need to store per-connection state during the handshake. When SYN cookies are enabled, the server encodes the connection parameters into the ISN (initial sequence number) of the SYN-ACK and discards the connection state. Only when the client's ACK arrives — with the encoded ISN — is state allocated. Spoofed SYN floods, which never complete the handshake, consume no server memory at all.
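The encode-then-verify trick can be sketched in a few lines. This is a conceptual illustration, not the kernel's actual algorithm — the secret, field widths, and hash choice here are invented for clarity; Linux uses its own layout and random per-boot keys:

```python
import hashlib
import socket
import struct

SECRET = b"per-boot-secret"  # illustrative; the kernel uses random per-boot keys

def _hash24(saddr, daddr, sport, dport, counter):
    """Keyed 24-bit hash binding the cookie to the connection 4-tuple."""
    msg = struct.pack("!4sH4sHI", socket.inet_aton(saddr), sport,
                      socket.inet_aton(daddr), dport, counter)
    return int.from_bytes(hashlib.sha256(SECRET + msg).digest()[:3], "big")

def make_cookie(saddr, daddr, sport, dport, counter, mss_index):
    """Pack a time counter, an MSS table index, and the hash into a 32-bit ISN."""
    return ((counter % 32) << 27          # 5 bits: slow-moving time counter
            | (mss_index & 0x7) << 24     # 3 bits: index into a small MSS table
            | _hash24(saddr, daddr, sport, dport, counter))

def check_cookie(cookie, saddr, daddr, sport, dport, current_counter):
    """Validate the cookie echoed back in the client's ACK with zero stored
    state. The current and previous counter windows are both accepted, so a
    cookie survives one counter tick."""
    mss_index = (cookie >> 24) & 0x7
    for counter in (current_counter, current_counter - 1):
        if make_cookie(saddr, daddr, sport, dport, counter, mss_index) == cookie:
            return mss_index  # valid: allocate connection state only now
    return None  # forged or expired: drop silently, nothing to clean up
```

The key property is in `check_cookie`: a spoofed SYN never produces a valid ACK, so the server does no allocation and no cleanup for flood traffic.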
TIME_WAIT Buildup
The TIME_WAIT state is the other TCP vulnerability that causes operational problems. After a connection closes, the socket remains in TIME_WAIT for 2 * MSL (Maximum Segment Lifetime), typically 60 seconds on Linux. During this period, the port pair cannot be reused. Under high connection rates — either from attack traffic or legitimate load — TIME_WAIT sockets can accumulate to tens of thousands, exhausting the ephemeral port range and preventing new connections.
# Count TIME_WAIT connections:
ss -s | grep timewait
# Or more precisely:
ss -n state time-wait | wc -l

# Enable TIME_WAIT socket reuse for outgoing connections (safe for most configurations):
sysctl -w net.ipv4.tcp_tw_reuse=1
# Reduce the FIN timeout for faster cleanup:
sysctl -w net.ipv4.tcp_fin_timeout=15
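The exhaustion arithmetic is worth making explicit. Assuming the common Linux defaults (an ephemeral range of 32768-60999 and a 60-second TIME_WAIT), the sustainable connection rate to a single destination is small:

```python
# New connections per second to a single destination (IP, port) pair that
# a host can sustain before TIME_WAIT pins every ephemeral source port,
# assuming Linux defaults: ip_local_port_range = 32768 60999, 60 s TIME_WAIT.
ephemeral_ports = 60999 - 32768 + 1      # 28232 usable source ports
time_wait_seconds = 60

max_rate = ephemeral_ports / time_wait_seconds   # roughly 470/s
print(f"~{max_rate:.0f} new connections/second per destination")
```

Anything above that rate — a reverse proxy hammering one backend, or a connection-rate attack — starts failing with address-in-use errors, which is exactly what `tcp_tw_reuse` and a wider port range are buying you.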
Flowtriq detects SYN floods by monitoring the rate of change in SYN_RECV connections per second, not just the absolute count. A surge from 50 to 8,000 half-open connections in 3 seconds is a reliable attack signature regardless of whether SYN cookies are enabled — the PPS signature is visible in the TcpExt: TCPSynRetrans counter in /proc/net/netstat and the Tcp: PassiveOpens counter in /proc/net/snmp.
UDP: Connectionless by Design, Amplifiable by Nature
What "Connectionless" Means in Practice
UDP has no handshake, no state machine, no delivery guarantee, and no built-in flow control. A UDP datagram is fired and forgotten. The receiver has no way to tell the sender to slow down; the sender has no way to know if the datagram arrived. This makes UDP ideal for latency-sensitive applications (DNS, game traffic, VoIP, streaming) where the overhead of TCP's reliability mechanisms would hurt more than it helps.
It also makes UDP an ideal amplification vector, because there is no handshake for the amplifier to verify. An attacker can send a small UDP request to a public service with the source IP spoofed to be the victim's address. The service responds — potentially with a much larger response — to the victim's IP. The victim receives traffic they never requested, from IP addresses that appear to be legitimate services, at whatever amplification factor the protocol allows.
Amplification Factors and Which Protocols Are Abused
Common UDP amplification services and their approximate bandwidth amplification factors (response size / request size):
- DNS (UDP 53): 28-54x — ANY queries to open resolvers return large responses
- NTP (UDP 123): 556x peak — the monlist command returns up to 600 peers per request
- Memcached (UDP 11211): 10,000-51,000x — the highest amplification factor known; largely mitigated by disabling UDP in Memcached, but misconfigured instances remain
- SSDP (UDP 1900): 30-75x — UPnP devices respond to M-SEARCH queries with large XML payloads
- CharGen (UDP 19): ~359x — the server responds to any datagram with a stream of arbitrary character data; rarely seen in the wild but still reachable on legacy systems
- CLDAP (UDP 389): 56-70x — connectionless LDAP, primarily on Windows domain controllers
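To put those factors in perspective, a back-of-envelope sketch — the attacker bandwidth is an arbitrary example, and the factors are taken from the ranges above:

```python
def victim_bandwidth_gbps(attacker_mbps: float, amplification: float) -> float:
    """Traffic arriving at the victim, in Gbit/s, given the attacker's
    spoofed-request bandwidth (Mbit/s) and the protocol's bandwidth
    amplification factor (response bytes / request bytes)."""
    return attacker_mbps * amplification / 1000

# What a single 100 Mbit/s uplink of spoofed requests becomes downstream:
for name, factor in [("DNS", 54), ("NTP monlist", 556), ("Memcached", 51000)]:
    print(f"{name}: {victim_bandwidth_gbps(100, factor):,.1f} Gbit/s at the victim")
```

A 100 Mbit/s attacker budget turns into roughly 5 Gbit/s via DNS, 55 Gbit/s via NTP monlist, and multiple Tbit/s via exposed Memcached — which is why the attacker's own bandwidth is rarely the limiting factor.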
ICMP Unreachables and Why They Matter
When your server receives a UDP datagram on a port with no listening socket, the kernel sends back an ICMP Port Unreachable message. Under UDP flood conditions, this generates a flood of outbound ICMP traffic that consumes your uplink and can trigger rate limiting by upstream providers. In severe cases, the kernel's ICMP rate limiter kicks in and starts dropping ICMP unreachable responses — meaning your legitimate ICMP traffic (traceroute, MTU path discovery) stops working as collateral damage.
The correct response to a UDP flood is to drop the traffic at the earliest point before ICMP processing — either via an upstream ACL, a BPF filter with tc, or at minimum an iptables/nftables rule that drops without generating ICMP responses (-j DROP rather than -j REJECT). The kernel's ICMP rate limit is configurable via net.ipv4.icmp_ratelimit, but reducing it is treating a symptom rather than the cause.
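A minimal nftables sketch of the drop-without-ICMP approach — the ports, rates, and chain name here are illustrative choices, not recommendations; tune them to what your host actually serves:

```shell
# Illustrative nftables rules (run as root). A DROP verdict here never
# generates an ICMP Port Unreachable, unlike REJECT.
nft add table inet ddos
nft add chain inet ddos input '{ type filter hook input priority -150 ; policy accept ; }'

# Drop NTP reflections (source port 123) that exceed a rate budget.
# Caution: a blanket drop on source port 53 would also kill responses
# to your own outbound DNS queries — rate-limit instead of dropping flat.
nft add rule inet ddos input udp sport 123 limit rate over 100/second drop

# Drop UDP aimed at a port nothing listens on, silently (no ICMP):
nft add rule inet ddos input udp dport 11211 drop
```

Because the chain hooks at a negative priority, the drops happen before conntrack and ICMP generation, which is the whole point: the kernel spends almost nothing per flood packet.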
Flowtriq's UDP detection: UDP flood signatures include source port distribution analysis (amplification attacks have a single or narrow source port matching the abused service), UDP packet size distribution (amplification responses are large; floods can be small), and ingress-to-egress ratio anomalies that indicate the server is generating ICMP unreachable bursts.
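The source-port-distribution heuristic is simple to sketch. This is an illustrative reconstruction of the idea, not Flowtriq's implementation — the 0.8 concentration threshold and the sample format are invented for the example:

```python
from collections import Counter

def dominant_source_port(packets, threshold=0.8):
    """Flag a traffic sample whose UDP source ports concentrate on one value.

    `packets` is a list of (src_ip, src_port, length) samples. Amplification
    reflections arrive from the abused service's well-known port (53, 123,
    11211, ...), so one source port dominating the sample is a strong signal.
    Returns the dominant port, or None if no port crosses the threshold.
    """
    if not packets:
        return None
    ports = Counter(src_port for _, src_port, _ in packets)
    port, hits = ports.most_common(1)[0]
    return port if hits / len(packets) >= threshold else None

# 95% of sampled packets from source port 123 with NTP-sized payloads:
sample = [("198.51.100.9", 123, 468)] * 95 + [("203.0.113.4", 51514, 80)] * 5
print(dominant_source_port(sample))
```

On this sample the function returns 123 — combined with the large, uniform packet sizes, that is the NTP-reflection signature described above.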
BGP: The Routing Layer You Rarely Think About Until It Fails
What BGP Actually Does
BGP (Border Gateway Protocol) is the routing protocol that determines how traffic moves between autonomous systems (ASes) on the internet. When you trace a packet from your server to a client in Tokyo, it traverses somewhere between 5 and 20 ASes, each making forwarding decisions based on BGP routes. BGP is a path-vector protocol: each AS announces which IP prefixes it can reach, and its neighbors propagate those announcements outward with the AS path prepended.
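The path-vector mechanics — prepend your own ASN, propagate, reject paths containing yourself — can be shown with a toy model. The topology and ASNs below are invented for illustration, and real BGP path selection involves far more than path length:

```python
from collections import deque

def propagate(origin_as, peers):
    """Toy path-vector propagation over an AS adjacency map `peers`.

    Each AS learns the prefix with the AS path it arrived on, prepends
    itself when re-announcing, and discards announcements whose path
    already contains its own ASN (BGP's loop prevention). Returns the
    shortest AS path each AS ends up holding back toward the origin.
    """
    best = {origin_as: [origin_as]}
    queue = deque([origin_as])
    while queue:
        asn = queue.popleft()
        path = best[asn]
        for neighbor in peers.get(asn, []):
            if neighbor in path:
                continue  # own ASN already in path: loop, discard
            candidate = [neighbor] + path
            if neighbor not in best or len(candidate) < len(best[neighbor]):
                best[neighbor] = candidate
                queue.append(neighbor)
    return best

# Five ASes; 65001 originates the prefix:
peers = {65001: [65002], 65002: [65001, 65003, 65004],
         65003: [65002, 65005], 65004: [65002, 65005],
         65005: [65003, 65004]}
routes = propagate(65001, peers)
print(routes[65005])  # AS path from 65005 back to the origin
```

Note that 65005 keeps only one of its two equal-length paths — which mirrors why a single withdrawn link can force reconvergence across every AS that carried the route.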
There is no central authority for BGP. It is a distributed system that converges on a consistent routing state through peer relationships and route propagation. This is what makes the internet resilient — and what makes it vulnerable to route hijacking, route leaks, and the amplification of outages through reconvergence events.
Why BGP Route Flaps Cause Outages
A BGP route flap occurs when a prefix is repeatedly withdrawn and re-announced in a short period — typically caused by a physical link going up and down, or by a misconfigured router. When a prefix flaps, the change must propagate to every peer that carries the route. At internet scale, this means thousands of route updates across hundreds of ASes, consuming BGP processing resources globally and causing temporary routing instability where traffic black-holes or takes suboptimal paths.
BGP route flap damping is the standard mitigation: a route that flaps repeatedly has its announcement suppressed for an exponentially increasing penalty period. While this prevents flap propagation, it also means a flapping route can become completely unreachable from large portions of the internet even after the underlying cause is resolved, until the penalty decay timer expires.
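The penalty-and-decay mechanism can be sketched directly. The parameters below are illustrative RFC 2439-style values — actual vendor defaults differ:

```python
# Illustrative RFC 2439-style damping parameters; vendor defaults vary.
PENALTY_PER_FLAP = 1000
SUPPRESS_LIMIT = 2000   # suppress announcements above this penalty
REUSE_LIMIT = 750       # release suppression once decay drops below this
HALF_LIFE = 900.0       # seconds for the penalty to halve

class DampedRoute:
    """Track one prefix's flap penalty with exponential decay and
    suppress/reuse hysteresis."""
    def __init__(self):
        self.penalty = 0.0
        self.last_update = 0.0
        self._is_suppressed = False

    def _decay(self, now):
        self.penalty *= 0.5 ** ((now - self.last_update) / HALF_LIFE)
        self.last_update = now

    def flap(self, now):
        """One withdraw/re-announce cycle adds a fixed penalty."""
        self._decay(now)
        self.penalty += PENALTY_PER_FLAP

    def suppressed(self, now):
        self._decay(now)
        if self.penalty >= SUPPRESS_LIMIT:
            self._is_suppressed = True
        elif self.penalty < REUSE_LIMIT:
            self._is_suppressed = False
        return self._is_suppressed

route = DampedRoute()
for t in (0, 30, 60):          # three flaps inside one minute
    route.flap(t)
print(route.suppressed(61))    # True — penalty is ~2930, above the limit
print(route.suppressed(2700))  # False — decayed below the reuse limit
```

The hysteresis gap between the suppress and reuse limits is the operational sting: even after the link stabilizes at t=60, the route stays suppressed until the penalty decays all the way below 750, which here takes on the order of half an hour.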
RTBH: BGP as a DDoS Mitigation Tool
Remotely Triggered Black Hole (RTBH) routing uses BGP to distribute a null-route for an attacked IP address upstream — to your ISP or transit provider's edge routers. Traffic destined for the attacked IP is dropped at the provider's edge, before it enters your network, protecting your bandwidth and infrastructure from the volumetric impact.
The mechanics: you announce a host route (/32 or /128) for the attacked IP with a BGP community tag that your provider has pre-agreed to treat as a blackhole trigger. The provider installs a static route pointing the prefix to Null0 and distributes it to their upstream peers. Traffic is dropped at the earliest feasible point in the network.
# Example: announcing an RTBH route via ExaBGP
# (assuming your provider uses community 65535:666 for blackhole)
# In your ExaBGP configuration:
neighbor 192.0.2.1 {
    router-id 198.51.100.1;
    local-address 198.51.100.1;  # ExaBGP needs an explicit local address
    local-as 65001;
    peer-as 65000;
    static {
        # Blackhole 203.0.113.50/32
        route 203.0.113.50/32 next-hop 192.0.2.254 community [65535:666];
    }
}
RTBH protects your network at the cost of making the attacked IP completely unreachable. It is not mitigation — it is a controlled sacrifice. Weigh this against the alternative of letting the attack saturate your uplink and affect all co-hosted services.
BGP Community Strings for Upstream Mitigation Requests
Many ISPs and transit providers support BGP community strings that allow customers to influence routing behavior without calling the NOC. Relevant communities for DDoS response typically include:
- Blackhole communities: Drop traffic to the tagged prefix at the provider edge (e.g., 65535:666 is the IANA-reserved standard, but providers use their own ASN-based communities)
- No-export communities: Prevent the tagged route from being announced beyond the provider's directly connected peers — useful for limiting propagation of a more-specific route during an attack
- Scrubbing communities: Divert traffic to the provider's scrubbing infrastructure, where attack traffic is filtered and clean traffic is tunneled back to your router via GRE or MPLS
- Rate-limit communities: Some providers support per-prefix rate limiting triggered by community strings — a less blunt instrument than a full blackhole
The specific community values are provider-specific and documented in their NOC handbooks or LOA templates. Request this documentation before an attack happens — during an active incident is not the time to be reading provider documentation.
How Flowtriq Ties Into This Layer
Flowtriq operates at the host level, below the BGP layer — it monitors what traffic is arriving at your server's network interface, not what is happening in your provider's routing infrastructure. This means Flowtriq detects attacks that make it past any upstream BGP-based mitigation you have in place, and it provides the ground-truth data (attack type, source port distribution, PPS, bandwidth) you need to make the right decision about whether to request RTBH, engage scrubbing, or handle it at the host level.
The incident details Flowtriq generates — including optional PCAP captures — are exactly what a provider NOC needs to provision an effective upstream mitigation. Saying "I am under a DNS amplification attack from source port 53 at 800 Mbps to my IP 203.0.113.50" gets a faster, more targeted response than "my server is slow."
Protect your infrastructure with Flowtriq
Per-second DDoS detection, automatic attack classification, PCAP forensics, and instant multi-channel alerts. $9.99/node/month.
Start your free 7-day trial →