Bad mental models are expensive. When an engineer believes something incorrect about how networks behave, they make wrong decisions: they buy the wrong mitigation, they dismiss real alerts, and they spend hours investigating symptoms while the actual problem goes unaddressed. The following myths are not strawmen — they are positions held by working engineers at real companies, sometimes for years, until something breaks in exactly the way the myth predicted it would not.
Myth 1: High Bandwidth Means No DDoS Problem
The myth: "We have a 10 Gbps uplink — we can absorb any attack."
The reality: Bandwidth is the wrong unit. Packets per second (PPS) is what kills servers. A 10 Gbps link can carry roughly 14.8 million packets per second at minimum Ethernet frame size (64 bytes). But the Linux kernel's networking stack, netfilter, and connection tracking subsystems are CPU-bound, not bandwidth-bound. A SYN flood sending 5 million small packets per second will exhaust CPU and connection tracking state on a typical server long before it saturates a 10 Gbps link: utilization might show only about 2.6 Gbps, barely a quarter of the link's capacity, while the server is completely unresponsive.
Always monitor PPS alongside bandwidth. A PPS spike with flat bandwidth is a reliable attack indicator that pure bandwidth monitoring will miss entirely.
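A minimal sketch of that monitoring, reading the kernel's per-interface counters twice and reporting PPS next to Mbps. The interface name "eth0" and the one-second interval are assumptions; substitute your own NIC.

```shell
#!/bin/sh
# Sketch: report PPS alongside bandwidth from /sys/class/net counters.
# "eth0" is an example interface name -- adjust for your host.
IFACE="${1:-eth0}"

# args: pkts1 bytes1 pkts2 bytes2 interval_secs -> "PPS MBPS"
rate_calc() {
  awk -v p1="$1" -v b1="$2" -v p2="$3" -v b2="$4" -v t="$5" \
    'BEGIN { printf "%d %.1f\n", (p2 - p1) / t, (b2 - b1) * 8 / t / 1000000 }'
}

if [ -d "/sys/class/net/$IFACE" ]; then
  P1=$(cat "/sys/class/net/$IFACE/statistics/rx_packets")
  B1=$(cat "/sys/class/net/$IFACE/statistics/rx_bytes")
  sleep 1
  P2=$(cat "/sys/class/net/$IFACE/statistics/rx_packets")
  B2=$(cat "/sys/class/net/$IFACE/statistics/rx_bytes")
  # High PPS with modest Mbps (tiny average packet size) is the
  # signature that pure bandwidth graphs miss.
  rate_calc "$P1" "$B1" "$P2" "$B2" 1
fi
```

Fed the 5 Mpps / 64-byte flood from above, `rate_calc 0 0 5000000 320000000 1` reports 5 million PPS at only 2560 Mbps.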
Myth 2: If the Site Loads, It Is Not Under Attack
The myth: "Our uptime check is green, so we are fine."
The reality: Sub-threshold attacks are designed specifically to avoid triggering uptime monitors. A 60-second uptime check can return 200 OK while your server is under a sustained low-rate SYN flood that is causing 8% packet loss for real users. TCP retransmits hide the loss from your uptime check's single connection, but users making 40 simultaneous requests per page load are experiencing compounding degradation.
Uptime checks measure binary availability. Attack impact is a spectrum. A server whose connection table is saturating can fail roughly 1 in 5 new connections while still passing every uptime check, because the check's single connection usually gets through. Worse, the failed connections stall and silently retry rather than fail fast, creating a worse user experience than an outage would.
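One way to see this degradation that uptime checks miss is the TCP retransmit ratio, which climbs under sustained low-rate loss. A sketch reading the kernel's counters from /proc/net/snmp; the idea that a sustained ratio in the percent range warrants investigation is a heuristic, not a tuned threshold.

```shell
#!/bin/sh
# Sketch: estimate the TCP retransmit ratio, which rises under
# sub-threshold packet loss even while uptime checks stay green.

# args: OutSegs RetransSegs -> retransmit percentage
retrans_pct() {
  awk -v out="$1" -v re="$2" \
    'BEGIN { printf "%.2f\n", (out > 0) ? 100 * re / out : 0 }'
}

if [ -r /proc/net/snmp ]; then
  # Resolve columns by header name rather than fixed position:
  # the first "Tcp:" line is the header, the second holds values.
  set -- $(awk '/^Tcp:/ {
      if (!hdr) { for (i = 1; i <= NF; i++) col[$i] = i; hdr = 1 }
      else print $(col["OutSegs"]), $(col["RetransSegs"])
    }' /proc/net/snmp)
  echo "retransmit ratio: $(retrans_pct "$1" "$2")%"
fi
```

Note these counters are cumulative since boot; for live monitoring, sample twice and compute the ratio over the deltas.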
Myth 3: Cloud Providers Handle DDoS Automatically
The myth: "We are on AWS/GCP/Azure — DDoS is handled."
The reality: AWS Shield Standard (included free) protects against SYN floods, UDP reflection attacks, and other common volumetric attacks at the network perimeter. It does not protect against application-layer (Layer 7) attacks, sophisticated multi-vector attacks, or volumetric attacks targeting your specific EC2 instance's public IP rather than a load balancer. Shield Advanced costs $3,000/month with a one-year commitment.
More importantly, even Shield Standard does not alert you when you are being attacked. You will see the effect — increased latency, dropped connections — but without instrumentation on the instance itself, you have no visibility into what is happening or whether the cloud provider's mitigation is actually triggering. Running Flowtriq on your instances gives you ground-truth visibility that cloud provider dashboards simply do not provide.
See what your cloud provider is not showing you
Flowtriq detects attacks like this in under 2 seconds, classifies them automatically, and alerts your team instantly. 7-day free trial.
Start Free Trial →

Myth 4: A Firewall Is Enough Protection
The myth: "We have iptables/UFW/a hardware firewall — we are protected."
The reality: A stateful firewall is often the first component to fail under a connection flood. The Linux nf_conntrack table — the kernel's connection tracking mechanism used by iptables and nftables — has a fixed maximum size. On a default configuration, this is typically 65,536 entries. A SYN flood or connection flood that exceeds this limit causes the kernel to start dropping all new connections, legitimate and malicious alike, and logs the event as nf_conntrack: table full, dropping packet.
# Check your current conntrack table utilization:
cat /proc/sys/net/netfilter/nf_conntrack_count
cat /proc/sys/net/netfilter/nf_conntrack_max

# If count/max > 0.7, you are at risk of table exhaustion under attack.
# Raise the limit and resize the hash table to match (roughly max/4):
sysctl -w net.netfilter.nf_conntrack_max=262144
sysctl -w net.netfilter.nf_conntrack_buckets=65536
# On older kernels nf_conntrack_buckets is read-only; write the
# module parameter /sys/module/nf_conntrack/parameters/hashsize instead.
A firewall is a necessary tool, not a sufficient one. It handles filtering; it does not handle detection, classification, or volumetric mitigation upstream of the host.
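To make the "necessary, not sufficient" point concrete: a firewall can do useful per-source rate limiting, as in this sketch using the iptables hashlimit match. The thresholds (50 SYNs/second, burst 100) are illustrative values, not tuned recommendations; requires root and the xt_hashlimit module.

```shell
# Sketch: drop SYNs from any single source exceeding an illustrative
# rate. This caps abusive clients but does nothing against a flood
# that spoofs or rotates source addresses -- detection and upstream
# mitigation are still needed.
iptables -A INPUT -p tcp --syn \
  -m hashlimit --hashlimit-name syn-limit \
  --hashlimit-mode srcip \
  --hashlimit-above 50/second --hashlimit-burst 100 \
  -j DROP
```

Note that this rule itself consumes conntrack and hashlimit table state, which is exactly the resource a connection flood attacks, reinforcing why filtering alone is not protection.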
Myth 5: DDoS Is Only for Big Targets
The myth: "We are too small to be a target."
The reality: The majority of DDoS attacks are automated and target-agnostic. Botnet operators run continuous scans of IPv4 space and attack anything that responds: for hire (booter services start at $5/hour), as collateral damage from wide-area UDP amplification sweeps, or as part of extortion campaigns that hit thousands of small targets simultaneously. Cloudflare's 2024 DDoS threat report found that 72% of attacked targets were small and medium businesses.
Game hosting, small SaaS products, and solo-operated VPS nodes are attacked constantly. The assumption of security through obscurity fails the first time a port scanner finds your IP.
Myth 6: More RAM Fixes Slowdowns
The myth: "The server is slow — add more memory."
The reality: Network-related slowdowns are rarely memory-constrained. They are almost always CPU-constrained (interrupt processing, packet handling, connection tracking) or buffer-constrained (kernel socket buffers, TCP receive windows). Adding RAM to a server that is being overwhelmed by packet processing does nothing — the bottleneck is in the kernel's network stack, not the heap.
The correct first step is to check softirq CPU time, which reveals how much CPU is being spent on network interrupt processing. If the si value in top's CPU summary line (or the %soft column in mpstat) is elevated, you have a packet processing bottleneck. The fix is kernel buffer tuning, RSS (Receive Side Scaling) configuration, or upstream rate limiting, not adding DRAM.
# Check softirq load (mpstat is in the sysstat package):
mpstat -P ALL 1 5
# Watch the '%soft' column. Sustained values above ~5% indicate heavy
# network interrupt load; one hot core suggests RSS is not spreading IRQs.

# Check kernel buffer settings:
sysctl net.core.rmem_max net.core.wmem_max net.core.netdev_max_backlog
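The softirq share of CPU time can also be computed directly from /proc/stat when sysstat tools are not installed. A sketch, assuming a Linux host; the one-second sampling window is arbitrary.

```shell
#!/bin/sh
# Sketch: softirq share of total CPU time from two /proc/stat samples.
# This is the same signal mpstat reports as %soft.

# args: softirq1 total1 softirq2 total2 -> percentage over the window
softirq_pct() {
  awk -v s1="$1" -v t1="$2" -v s2="$3" -v t2="$4" \
    'BEGIN { d = t2 - t1; printf "%.1f\n", (d > 0) ? 100 * (s2 - s1) / d : 0 }'
}

sample() {
  # Aggregate "cpu " line: field 8 is softirq jiffies; sum every
  # time field for the total.
  awk '/^cpu / { total = 0; for (i = 2; i <= NF; i++) total += $i;
                 print $8, total }' /proc/stat
}

if [ -r /proc/stat ]; then
  set -- $(sample); S1=$1; T1=$2
  sleep 1
  set -- $(sample); S2=$1; T2=$2
  echo "softirq: $(softirq_pct "$S1" "$T1" "$S2" "$T2")% of CPU time"
fi
```

A system-wide figure can hide a single saturated core; per-CPU lines (cpu0, cpu1, ...) in /proc/stat can be sampled the same way to spot uneven interrupt distribution.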
Myth 7: High Ping Always Means DDoS
The myth: "Latency spiked — we must be under attack."
The reality: Latency spikes have many causes, the majority of which are not attacks. BGP route flaps can add 50-200ms as traffic takes a suboptimal path for 30-90 seconds during reconvergence. Upstream congestion at a transit provider, especially at IX peering points during peak hours, regularly causes latency increases indistinguishable from attack signatures. Physical link errors causing retransmits produce latency spikes with no attack at all.
The diagnostic question is: what does the traffic volume look like? A latency spike with flat or slightly elevated PPS is almost certainly routing or congestion. A latency spike with a sharp PPS increase is more likely an attack. Chasing a routing issue with DDoS mitigation tooling wastes time and can make the situation worse.
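That diagnostic question can be encoded as a toy triage rule: compare current PPS against a baseline before reaching for mitigation tooling. The 3x ratio threshold here is purely illustrative, not a tuned value, and real triage should also look at flow composition and traceroute output.

```shell
#!/bin/sh
# Toy triage for a latency spike: sharp PPS increase suggests attack,
# flat PPS suggests routing or congestion. The 3x cutoff is an
# illustrative assumption, not a recommended threshold.

# arg: current_pps / baseline_pps ratio -> likely cause
triage() {
  awk -v r="$1" 'BEGIN {
    if (r >= 3.0) print "likely attack: inspect flow composition"
    else print "likely routing/congestion: check traceroute and BGP"
  }'
}
```

For example, `triage 1.1` points toward routing, while `triage 8` points toward an attack, matching the paragraph's rule of thumb.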
Myth 8: BGP Blackhole Is the Right First Response
The myth: "Just blackhole the attacked IP — problem solved."
The reality: BGP RTBH (Remotely Triggered Black Hole) drops all traffic destined for the attacked prefix — including legitimate traffic. It is an effective way to protect the rest of your network, but it achieves this by making the attacked service completely unreachable. For an IP serving production traffic, a blackhole is not mitigation — it is a self-inflicted outage.
RTBH makes sense as a last resort when an attack is volumetric enough to threaten the surrounding infrastructure, and when the attacked service can afford to be unreachable. It is not a first response. The correct first response is upstream scrubbing or host-level rate limiting while you assess the attack characteristics. RTBH comes into play when those fail or when the attack volume exceeds what scrubbing can handle.
The common thread: Most of these myths share the same root cause — treating network problems as binary (working/not working) rather than as a spectrum of degradation. Real networks degrade gradually, attack traffic mixes with legitimate traffic, and most incidents fall somewhere between "totally fine" and "completely down." Instrumentation that operates in that middle ground is the only way to break these myths in practice.
Protect your infrastructure with Flowtriq
Per-second DDoS detection, automatic attack classification, PCAP forensics, and instant multi-channel alerts. $9.99/node/month.
Start your free 7-day trial →