
The Files You Already Have

Every Linux kernel since 2.2 exposes network statistics through the /proc filesystem. No additional packages, no agents, no kernel modules — the counters are there the moment the system boots. Two files are most relevant during a DDoS incident:

  • /proc/net/dev — per-interface cumulative packet and byte counters since last boot
  • /proc/net/snmp — protocol-level counters for IP, ICMP, TCP, and UDP (the extended TCP statistics in TcpExt live in the sibling file /proc/net/netstat)

The counters in both files are monotonically increasing integers. To get a rate (packets per second), you read a counter twice with a known interval between reads and divide the difference by the interval — with a one-second interval, the difference is the rate. That is exactly how every monitoring tool from iftop to Prometheus's node_exporter works internally, and without any of that tooling, you can replicate it in a few lines of shell.

Reading /proc/net/dev During an Attack

Here is what /proc/net/dev looks like on a quiet server versus during a 47k PPS UDP flood:

# /proc/net/dev format (columns: interface | RX bytes pkts errs drop ... | TX ...)
cat /proc/net/dev
# Inter-|   Receive                                                |  Transmit
#  face |bytes    packets errs drop fifo frame compressed multicast|bytes    packets ...
#     lo: 42847291  312047    0    0    0     0          0         0  ...
#   eth0: 9284719483 12847291    0    0    0     0          0         0  ...
#
# 47k PPS attack — same file 1 second later:
#   eth0: 9352471283 12894391    0 3200    0     0          0         0  ...
#         ^ bytes grew ~67MB    ^ pkts grew 47,100  ^ drops appeared

Notice the drop counter. Under a UDP flood where the kernel cannot drain packets fast enough, the interface's drop counter starts climbing (depending on the driver, ring-buffer overruns may be reported in the fifo column instead). If you see drops growing alongside PPS, packets are being lost at the NIC ring-buffer level, before they ever reach the rest of the kernel network stack. This is a sign the CPU cannot keep up with the interrupt rate.
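A corroborating signal lives in /proc/net/softnet_stat, which has one row per CPU: the second hex column counts packets dropped because that CPU's softnet backlog (net.core.netdev_max_backlog) overflowed. A minimal reader, assuming the standard column layout:

```shell
# /proc/net/softnet_stat: one row per CPU, hex columns;
# col 1 = packets processed, col 2 = dropped because the per-CPU backlog overflowed
awk '{ printf "cpu%d processed=0x%s dropped=0x%s\n", NR-1, $1, $2 }' /proc/net/softnet_stat
```

If the second column is moving on exactly one CPU, that is the same single-core interrupt bottleneck discussed later in this post.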

A Simple PPS Monitor in Shell

#!/bin/bash
# pps-monitor.sh — zero-dependency PPS monitor using /proc/net/dev
IFACE=${1:-eth0}

# Strip the "iface:" prefix before splitting fields, so the field numbering
# stays stable even when the kernel prints the first counter flush against
# the colon. After the strip: $2 = RX packets, $4 = RX drops.
rx_field() { awk -v i="$IFACE" -v f="$1" 'sub("^ *" i ":", ""){print $f}' /proc/net/dev; }

PREV=$(rx_field 2)
while true; do
  sleep 1
  CURR=$(rx_field 2)
  echo "$(date +%H:%M:%S)  RX PPS: $((CURR - PREV))  cumulative drops: $(rx_field 4)"
  PREV=$CURR
done

Run this as bash pps-monitor.sh eth0. The output updates every second and shows the current receive packet rate and cumulative drop count. During a 47k PPS attack, you would see output like:

09:14:22  RX PPS: 47103  cumulative drops: 0
09:14:23  RX PPS: 47891  cumulative drops: 0
09:14:24  RX PPS: 48204  cumulative drops: 1847
09:14:25  RX PPS: 46998  cumulative drops: 9203
09:14:26  RX PPS: 45112  cumulative drops: 21884

The drops appearing at the third sample indicate the ring buffer filled. The PPS rate holding roughly steady while drops climb means the NIC is still receiving at the same rate, but the kernel has stopped processing all of it.
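A cumulative drop count can also be a stale leftover from an earlier event; what matters is whether the counter is moving right now. Each /proc/net/dev column is exposed as a standalone sysfs file too, which makes a one-shot drop-rate check trivial (lo is used here as a safe stand-in; substitute your real interface):

```shell
# Per-second delta of the interface RX drop counter via sysfs
IFACE=${1:-lo}
d1=$(cat /sys/class/net/$IFACE/statistics/rx_dropped)
sleep 1
d2=$(cat /sys/class/net/$IFACE/statistics/rx_dropped)
echo "rx drops/sec: $((d2 - d1))"
```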

Skip the shell scripts — get per-second alerts automatically

Flowtriq detects attacks like this in under 2 seconds, classifies them automatically, and alerts your team instantly. 7-day free trial.

Start Free Trial →

/proc/net/snmp: The Counters That Tell the Story

While /proc/net/dev tells you the packet rate, /proc/net/snmp tells you what kind of packets they are and what the kernel is doing with them. The file has header rows followed by value rows for each protocol section. Here are the counters most relevant during a DDoS attack:

IP Section

# Read IP-level counters: the first "Ip:" line is the header row, the second the values
awk '/^Ip:/ { if (!n++) split($0, h);
              else for (i = 2; i <= NF; i++) printf "Ip %s = %s\n", h[i], $i }' \
  /proc/net/snmp | grep -E "InReceives|InDiscards|ForwDatagrams"

# During a 47k PPS UDP flood, you might see InDiscards climbing:
# Ip InReceives = 2847391028
# Ip InDiscards = 0            # kernel keeping up so far
# Ip ForwDatagrams = 0         # not a router

Ip InReceives is the total number of IP datagrams received. Ip InDiscards increments when the kernel received a valid IP packet but discarded it due to resource constraints (typically full socket receive buffers). During a flood, this counter tells you whether attack traffic is reaching the IP layer or being dropped earlier at the NIC.
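The same two-reads-and-subtract technique applies to any of these counters. A hypothetical helper (the function name and structure are illustrative, not from the post's script) that prints the per-second rate of a named Ip counter:

```shell
# Print the per-second delta of one Ip counter from /proc/net/snmp
ip_counter() {  # $1 = counter name, e.g. InReceives
  awk -v c="$1" '/^Ip:/ { if (!n++) { for (i = 2; i <= NF; i++) if ($i == c) col = i }
                          else print $col }' /proc/net/snmp
}
a=$(ip_counter InReceives); sleep 1; b=$(ip_counter InReceives)
echo "Ip InReceives/sec: $((b - a))"
```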

UDP Section

# UDP counters — the most revealing during a UDP flood
awk '/^Udp:/ { if (!n++) split($0, h);
               else for (i = 2; i <= NF; i++) printf "Udp %s = %s\n", h[i], $i }' \
  /proc/net/snmp

# Normal output:
# Udp InDatagrams = 128473
# Udp NoPorts = 0
# Udp InErrors = 0
# Udp OutDatagrams = 127821
# Udp RcvbufErrors = 0
# Udp SndbufErrors = 0

# During UDP flood:
# Udp InDatagrams = 128891     (barely moving — counts only datagrams delivered to a socket)
# Udp NoPorts = 46891234       (massive climb — no socket waiting on the targeted ports)
# Udp InErrors = 401223        (errors on the sockets that do exist)
# Udp RcvbufErrors = 401223    (socket receive buffers full)

Udp NoPorts is particularly diagnostic. It counts UDP packets that arrived for a port with no open socket; Udp InDatagrams, by contrast, counts only datagrams actually delivered to a listening socket, so random-port flood traffic shows up in NoPorts, not there. During a UDP flood targeting random ports (the most common pattern), nearly all attack packets increment NoPorts. Watching Udp NoPorts climb at 46,000 per second while the interface receives roughly 47,000 packets per second tells you immediately: this is a UDP flood, 99% of it is hitting ports with no application behind them, and the kernel is generating an ICMP Port Unreachable response for each one — another load on the box, subject to the kernel's ICMP rate limit.
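The kernel applies its own rate limit to those ICMP Port Unreachable replies; the knobs live under the same /proc tree and are worth checking before an incident (the values in the comments are common defaults, not guarantees):

```shell
# Minimum gap (ms) between ICMP error transmissions, and the bitmask of
# ICMP types the limit applies to (Destination Unreachable is type 3)
cat /proc/sys/net/ipv4/icmp_ratelimit   # typically 1000
cat /proc/sys/net/ipv4/icmp_ratemask    # typically 6168
```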

TcpExt Section: The Deep Counters

# Extract critical TcpExt counters — note these live in /proc/net/netstat,
# not /proc/net/snmp
awk '/^TcpExt:/ { if (!n++) split($0, h);
                  else for (i = 2; i <= NF; i++)
                    if (h[i] ~ /TCPSack|Syncookies|TCPBacklog|TCPRcv|ListenDrop/)
                      printf "TcpExt %s = %s\n", h[i], $i }' /proc/net/netstat

During a SYN flood specifically, the counters to watch are:

  • TcpExtTCPReqQFullDrop — SYN requests dropped because the SYN (request) queue was full and SYN cookies were disabled
  • TcpExtTCPReqQFullDoCookies — SYN requests handled with SYN cookies instead of queuing
  • TcpExtListenDrops — connections dropped because the listen backlog was exhausted
  • TcpExtTCPSackFailures — SACK failures, which spike during high-loss events caused by attack traffic competing with real traffic
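Which of those queue-full counters moves depends on whether SYN cookies are enabled, and that is readable from the same /proc tree (a quick check, not a tuning recommendation):

```shell
# 1 = send SYN cookies when the SYN queue overflows (the default on most distros)
cat /proc/sys/net/ipv4/tcp_syncookies
# Upper bound on remembered, not-yet-acknowledged SYN requests per listener
cat /proc/sys/net/ipv4/tcp_max_syn_backlog
```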

What 47k PPS Actually Costs Your CPU

Each packet received by a network interface triggers an interrupt (or, with NAPI, a polled batch under a software interrupt). The NIC DMAs the packet into a ring buffer in host memory; the kernel then pulls it from the ring, runs it through the netfilter chains, and delivers it to a socket or drops it. The cost per packet depends on the packet size and the depth of your iptables ruleset, but on a modern x86 server without RSS (Receive Side Scaling), you can estimate roughly 1–2 microseconds of CPU time per packet.

At 47,000 PPS, that is 47–94 milliseconds of CPU time per second on a single core — 5–10% of one core, just for packet processing. This sounds manageable, but the problem is that network interrupt handling is not evenly distributed. Without RSS or RPS (Receive Packet Steering), all interrupts land on a single CPU core. You will see one core at 80–100% while others are idle:

# Check interrupt distribution across CPUs
cat /proc/interrupts | grep eth0
#  32:  1847293        0        0        0   PCI-MSI 524288-edge   eth0-rx-0
# All 1.8M interrupts on CPU 0 — single-core bottleneck

# Enable RPS to spread packet processing across CPU cores
# (the mask is hex: "f" = CPUs 0-3; size it to your core count;
#  add to /etc/network/if-up.d/ or run at boot to persist)
echo f > /sys/class/net/eth0/queues/rx-0/rps_cpus

RPS distributes packet processing across CPU cores in software. It will not help if the NIC itself is overwhelmed, but for attacks in the 50k–500k PPS range on multi-core servers, enabling RPS can recover significant CPU headroom and keep the system responsive while you work on upstream mitigation.
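To confirm a mask actually took effect, and to see every queue on every interface at once, the masks can simply be read back from sysfs:

```shell
# Print the current RPS CPU mask for each RX queue of each interface
for q in /sys/class/net/*/queues/rx-*/rps_cpus; do
  printf '%s = %s\n' "$q" "$(cat "$q")"
done
```

A mask of all zeros means RPS is disabled for that queue, which is the kernel default.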

How Flowtriq Reads These Counters

The Flowtriq agent reads /proc/net/dev and /proc/net/snmp once per second, computes the per-second delta for each counter, and stores the time series. The PPS figure you see in the Flowtriq dashboard is the derivative of eth0 RX packets — the same value the shell script above produces, but stored historically and graphed.

The attack classification logic uses the ratio between counters. A high Udp NoPorts rate relative to total Udp InDatagrams signals a UDP flood. Rising TcpExtSyncookiesSent signals a SYN flood. An anomalous jump in Ip InReceives with stable Tcp InSegs and Udp InDatagrams suggests ICMP or raw IP flooding. Flowtriq combines these ratios with the raw PPS rate to classify attacks to a specific vector within the first 2 seconds of detection, and that classification is what appears on the incident page and in alert messages.

The counters in /proc/net/snmp are the ground truth of what your kernel is experiencing. Flowtriq's agent just gives you a persistent, historical, alerting-capable view of the same data you could always access yourself.

Protect your infrastructure with Flowtriq

Per-second DDoS detection, automatic attack classification, PCAP forensics, and instant multi-channel alerts. $9.99/node/month.

Start your free 7-day trial →
