← pwnsy/blog
Intermediate · 23 min read · Mar 6, 2026 · Updated Mar 11, 2026

DDoS Attacks Explained: How They Work and How to Defend

network-security · #ddos #network-security #botnet #mitigation #cloudflare

Key Takeaways

  • A DoS attack originates from a single host and dies when you block one IP; a DDoS attack originates from thousands or millions of hosts at once.
  • Volumetric attacks saturate upstream bandwidth; amplification protocols let attackers turn a small spoofed stream into terabit-scale traffic.
  • Protocol attacks (SYN floods, Slowloris) exhaust connection state rather than bandwidth, so they work without enormous traffic volumes.
  • Layer 7 attacks are the most difficult to defend against because each request is individually legitimate.
  • Botnets of compromised IoT devices give attackers capacity they do not pay for, which is why attack sizes keep growing.
  • Effective DDoS protection is layered: anycast absorption, scrubbing, edge rate limiting, a WAF with bot management, and a hidden origin IP.

On October 21, 2016, at approximately 7:10 AM Eastern time, the United States east coast lost access to Twitter, Reddit, GitHub, Netflix, PayPal, the New York Times, and several hundred other major services simultaneously. The outage was not caused by a vulnerability in those services. It was caused by an attack on Dyn, a DNS infrastructure provider in Manchester, New Hampshire, that handled DNS resolution for all of them.

The attack used Mirai, a botnet of compromised IoT devices (security cameras, home routers, DVRs, baby monitors running Linux with default or hardcoded credentials) that peaked at roughly 600,000 infections, to flood Dyn's DNS servers with lookup requests. Dyn estimated some 100,000 endpoints took part, at rates reported as high as 1.2 Tbps. The attack came in multiple waves over the course of the day, and Dyn's distributed infrastructure was overwhelmed. Without DNS resolution, the services' domain names did not resolve, and millions of users could not reach them.

Mirai itself was written by a 21-year-old college student amid a dispute over Minecraft server hosting. He had not purchased attack capacity; he had built the botnet himself. When he published the Mirai source code online weeks before the Dyn attack, it seeded dozens of variants (one of which was turned against Dyn) that are still compromising IoT devices today.

This is the DDoS threat landscape: industrial-scale infrastructure disruption available to individual attackers, delivered through infrastructure they do not own, at costs that have dropped to the price of a coffee.

What Distinguishes DDoS from DoS

A denial of service (DoS) attack originates from a single host. Blocking one IP address stops it. A distributed denial of service attack originates from thousands or millions of hosts simultaneously — often geographically distributed across every continent.

The distribution creates four problems that make DDoS fundamentally harder than DoS:

  1. No single IP to block. If 100,000 bots each send 10 Kbps, the aggregate is 1 Gbps — but blocking any individual IP removes only 10 Kbps.

  2. Traffic is indistinguishable from legitimate users. Especially at Layer 7, individual requests are syntactically identical to real traffic.

  3. The attack traffic exceeds upstream capacity. Even if you can process the requests, the bandwidth is consumed upstream before it reaches your server. Scrubbing at your perimeter is too late.

  4. Attacker capital requirements are near-zero. The attacker does not own the botnet infrastructure. They are borrowing capacity from compromised devices whose owners do not know they are participating in an attack.

Attack pricing on underground markets as of 2024: a 10 Gbps UDP flood lasting one hour costs approximately $10-$30 on established booter services. A 100 Gbps attack costs under $200. The asymmetry between attack cost and defense cost is extreme.

Volumetric Attacks: Consuming Bandwidth

Volumetric attacks are the blunt-force category. The goal is to saturate the target's upstream bandwidth — fill the pipe so completely that no legitimate traffic can get through. These are measured in bits per second (bps) and can reach terabit-scale.

UDP Flood

UDP is connectionless and stateless. An attacker sends a continuous stream of UDP datagrams to random ports on the target. For each datagram, the target OS must look up the port, determine nothing is listening, and send an ICMP "Destination Unreachable" response. This consumes both inbound and outbound bandwidth, plus CPU for the lookups.

Because UDP has no handshake, source IP addresses are trivially spoofed. Each packet appears to come from a different random IP, making source-based filtering ineffective.

Amplification and Reflection Attacks

Amplification attacks are the most efficient form of volumetric attack. The attacker sends a small spoofed request to a public server using the victim's IP as the source address. The server sends a much larger response directly to the victim. The amplification factor — ratio of response size to request size — determines how much attack bandwidth the attacker can generate from a small initial volume.

| Protocol | Port | Amplification Factor | Notes |
|---|---|---|---|
| Memcached | UDP 11211 | Up to 51,200x | Used in the 2018 GitHub 1.35 Tbps attack |
| NTP (monlist) | UDP 123 | ~556x | 234-byte request → 48× 440-byte responses |
| DNS (ANY query) | UDP 53 | ~50-100x | 28-byte query → 3,000+ byte response |
| CLDAP | UDP 389 | ~57-70x | Connectionless LDAP |
| SSDP | UDP 1900 | ~30x | UPnP discovery protocol |
| CharGen | UDP 19 | ~358x | Character generator protocol (legacy) |
| SNMP | UDP 161 | ~650x | Network management (GetBulk requests) |
| QOTD | UDP 17 | ~140x | Quote of the Day protocol (legacy) |

The February 2018 GitHub attack demonstrates the extreme end of this: using Memcached servers (which have no business being internet-accessible but frequently are), a roughly 10 Gbps stream of spoofed UDP requests generated 1.35 Tbps of traffic aimed at GitHub's infrastructure — at the time, the largest DDoS attack ever recorded. The attack lasted approximately 10 minutes before Akamai Prolexic's scrubbing infrastructure absorbed it.

The Memcached amplification factor of up to 51,200x means an attacker with a 26 Mbps upstream connection could theoretically generate 1.35 Tbps of attack traffic. The economics of DDoS attacks at this scale are purely about the victim's defense capability, not the attacker's resources.
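The arithmetic is worth making explicit; a small sketch (amplification factors are the table's approximations and vary in the wild):

```python
# Reflected/amplified bandwidth: the attacker's upstream multiplied by the
# protocol's amplification factor
def attack_bandwidth_bps(upstream_bps: float, amplification: float) -> float:
    return upstream_bps * amplification

# Memcached at its upper bound of ~51,200x, from a 26 Mbps uplink
tbps = attack_bandwidth_bps(26e6, 51_200) / 1e12
print(f"{tbps:.2f} Tbps")  # → 1.33 Tbps, the GitHub-attack scale
```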

Eliminating amplification from your infrastructure:

# Check if your NTP server has monlist enabled (the amplification vector)
ntpdc -n -c monlist YOUR-NTP-SERVER-IP 2>&1 | head -5
# If you get a list of clients, monlist is enabled — disable it
 
# Disable NTP monlist in /etc/ntp.conf
# Add these lines:
restrict default kod nomodify notrap nopeer noquery
restrict -6 default kod nomodify notrap nopeer noquery
disable monitor
 
# Check for Memcached UDP exposure
# Memcached MUST have UDP disabled and be bound to localhost only
# /etc/memcached.conf:
#   -U 0           (disable UDP)
#   -l 127.0.0.1   (listen on localhost only)
 
# Verify Memcached is not externally accessible
nmap -sU -p 11211 YOUR-SERVER-IP
# Expect closed or filtered — open means Memcached UDP is exposed
 
# Block SSDP amplification at the router level
# Drop outbound UDP port 1900 responses to internet
iptables -A OUTPUT -p udp --sport 1900 -j DROP

If your organization runs public DNS resolvers, open resolvers are a primary amplification vector. Restrict recursive DNS to your own networks:

# In BIND named.conf — restrict recursion to authorized clients only, and
# enable Response Rate Limiting so the server cannot be abused as an
# amplifier even for the authoritative answers it must serve to anyone
options {
    recursion yes;
    allow-recursion {
        127.0.0.0/8;       # localhost
        10.0.0.0/8;        # RFC1918 private
        172.16.0.0/12;
        192.168.0.0/16;
        // Do NOT include 0.0.0.0/0 here
    };
    allow-query {
        any;                # Allow queries from anyone for authoritative zones
    };
    rate-limit {
        responses-per-second 5;
        window 5;
    };
};

Protocol Attacks: Exhausting State Tables

Protocol attacks do not require enormous bandwidth. They exploit weaknesses in how network protocols manage connection state to exhaust server-side resources — specifically the connection state tables in firewalls, load balancers, and OS kernels.

SYN Floods: The Classic State Exhaustion Attack

TCP's three-way handshake is the foundation of reliable connections and the source of a significant attack surface. Establishing a TCP connection requires:

  1. Client sends SYN (I want to connect)
  2. Server replies with SYN-ACK (acknowledged, here's my sequence number) and allocates state in its connection backlog
  3. Client completes with ACK (connection established)

In a SYN flood, the attacker sends a continuous stream of SYN packets with spoofed source IP addresses. For each SYN, the server:

  • Allocates a TCP control block (approximately 280 bytes in the Linux kernel)
  • Stores it in the SYN backlog queue (default size: tcp_max_syn_backlog, typically 256-1024 on untuned systems)
  • Sends a SYN-ACK to the spoofed source address
  • Waits for the ACK that will never come (because the spoofed source did not send the SYN)

When the backlog fills, new legitimate connection requests are dropped. The server is effectively offline for new connections without being bandwidth-saturated or CPU-bound.

At 100,000 SYN packets per second — a trivial rate for any botnet — the backlog fills in milliseconds on a default-configured server.
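The "milliseconds" claim is simple arithmetic; a quick check using the figures above:

```python
# Time to fill the SYN backlog, and the kernel memory it pins, using the
# rates and sizes quoted above
SYN_RATE_PPS = 100_000   # attacker SYN rate
BACKLOG = 1024           # typical untuned tcp_max_syn_backlog
TCB_BYTES = 280          # approximate per-entry kernel state

fill_time_ms = BACKLOG / SYN_RATE_PPS * 1000
memory_kib = BACKLOG * TCB_BYTES / 1024
print(f"backlog full in {fill_time_ms:.2f} ms, pinning {memory_kib:.0f} KiB")
# → backlog full in 10.24 ms, pinning 280 KiB
```

Note the memory cost is trivial; the scarce resource is backlog slots, not RAM.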

SYN cookies: the kernel-level fix:

SYN cookies eliminate the state allocation problem entirely. Instead of allocating a TCP control block for the half-open connection, the server encodes the connection parameters into the SYN-ACK's initial sequence number using a cryptographic hash. No state is stored. If the client returns a legitimate ACK with the correct derived value, the server reconstructs the connection parameters and establishes the session. If the ACK never comes, nothing was wasted.
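A toy sketch of the idea (a hypothetical scheme; the real kernel implementation packs a timestamp counter and an encoded MSS into specific bits of the sequence number):

```python
import hashlib

SECRET = b"per-boot-server-secret"  # hypothetical; the kernel uses random keys

def syn_cookie(src_ip: str, src_port: int, dst_ip: str, dst_port: int,
               counter: int) -> int:
    """Derive a 32-bit initial sequence number from the connection 4-tuple.
    Nothing is stored server-side for the half-open connection."""
    msg = f"{src_ip}:{src_port}:{dst_ip}:{dst_port}:{counter}".encode()
    return int.from_bytes(hashlib.sha256(SECRET + msg).digest()[:4], "big")

def verify_ack(ack_num: int, src_ip: str, src_port: int, dst_ip: str,
               dst_port: int, counter: int) -> bool:
    """A legitimate ACK acknowledges cookie + 1; recompute instead of look up."""
    expected = (syn_cookie(src_ip, src_port, dst_ip, dst_port, counter) + 1) & 0xFFFFFFFF
    return ack_num == expected

cookie = syn_cookie("198.51.100.7", 40000, "203.0.113.10", 443, counter=0)
assert verify_ack((cookie + 1) & 0xFFFFFFFF,
                  "198.51.100.7", 40000, "203.0.113.10", 443, 0)
```

The `counter` slot stands in for the kernel's slowly incrementing timestamp, which bounds how long a cookie remains valid.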

# Check current SYN cookie status on Linux
cat /proc/sys/net/ipv4/tcp_syncookies
# 0 = disabled, 1 = enabled only when backlog full, 2 = always enabled
 
# Enable SYN cookies permanently
echo "net.ipv4.tcp_syncookies = 1" >> /etc/sysctl.conf
sysctl -p
 
# Tune the SYN backlog size for high-traffic servers
echo "net.ipv4.tcp_max_syn_backlog = 65536" >> /etc/sysctl.conf
sysctl -p
 
# Reduce SYN-ACK retries (timeout half-open connections faster)
echo "net.ipv4.tcp_synack_retries = 2" >> /etc/sysctl.conf
sysctl -p
 
# View current half-open connections (if high, you're under SYN flood)
ss -n --tcp state syn-recv | wc -l
 
# Use nftables to rate-limit SYN packets at the network layer
nft add rule inet filter input tcp flags syn limit rate 1000/second burst 5000 packets accept
nft add rule inet filter input tcp flags syn drop

Complete nftables SYN flood mitigation ruleset:

# /etc/nftables.conf — anti-DDoS ruleset
table inet filter {
    # Rate limiting: track SYN rates per source IP
    set syn_limit {
        type ipv4_addr
        flags dynamic, timeout
        timeout 60s
    }
 
    chain input {
        type filter hook input priority 0; policy drop;
 
        # Allow established/related (stateful tracking)
        ct state established,related accept
 
        # Allow loopback
        iifname lo accept
 
        # ICMP — rate limit to prevent ICMP flood
        ip protocol icmp limit rate 10/second burst 30 packets accept
        ip protocol icmp drop
 
        # TCP SYN — drop sources that exceed a per-IP rate (dynamic set as meter)
        tcp flags & (syn | ack) == syn add @syn_limit { ip saddr limit rate over 30/second burst 60 packets } drop
 
        # Allow legitimate services
        tcp dport { 80, 443 } accept
        ip saddr 10.0.0.0/8 tcp dport 22 accept  # SSH from management range only
 
        # Drop fragmented packets (often used in attacks)
        ip frag-off & 0x1fff != 0 drop
    }
}

ACK and RST Floods

ACK flood: The attacker sends high volumes of TCP ACK packets with random sequence numbers. A stateful firewall must check each packet against its session table to determine whether it belongs to an established connection. At sufficient packets per second (PPS), this consumes all of the firewall's inspection capacity — a throughput-constrained attack rather than a bandwidth attack.

RST flood: The same mechanic with spoofed RST packets. Each one forces a session-table lookup (and a potential connection teardown) before it can be discarded.

Fragmented ACK flood: Sending ACK packets that are fragmented requires the target to perform IP reassembly for each packet before determining it matches no session. Reassembly is expensive. Even relatively low PPS rates can exhaust reassembly buffers.

Connection Exhaustion: The Slowloris Attack

Slowloris, developed by Robert "RSnake" Hansen in 2009, demonstrates that you do not need volumetric traffic to take down a server. A single machine with a standard internet connection can take down an unprotected Apache server.

Slowloris works by opening many TCP connections to the target and sending HTTP request headers very slowly — one header line every 15 seconds or so. The server holds each connection open waiting for the complete request headers. HTTP servers cap concurrent connections (Apache's traditional default is 150). When all connection slots are occupied by Slowloris connections, legitimate users cannot connect.

# Slowloris concept (educational — shows the attack mechanics)
import socket
import time
import random
 
def create_slowloris_connection(target_ip, port=80):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(4)
    s.connect((target_ip, port))
    # Send partial HTTP request headers
    s.send(f"GET /?{random.randint(0, 2000)} HTTP/1.1\r\n".encode())
    s.send(b"User-Agent: Mozilla/5.0\r\n")
    s.send(b"Accept-language: en-US,en;q=0.5\r\n")
    # Do NOT send the final \r\n that would complete the headers
    return s
 
def keep_alive(connections):
    # Periodically send a bogus header line on each connection so the
    # server's idle timer resets and the slot stays occupied
    for s in connections:
        s.send(f"X-a: {random.randint(1, 5000)}\r\n".encode())
    time.sleep(15)
 
# Defense: Apache mod_reqtimeout forces request completion within a deadline
# In Apache httpd.conf:
# RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500

Defense against Slowloris and connection exhaustion attacks:

Nginx is inherently more resistant to Slowloris than Apache due to its event-driven architecture — it does not allocate a thread per connection. Apache requires mod_reqtimeout.

# nginx.conf — connection timeout hardening
events {
    worker_connections 65536;
    use epoll;
}
 
http {
    # Limit connections per IP
    limit_conn_zone $binary_remote_addr zone=conn_limit_per_ip:10m;
    limit_conn conn_limit_per_ip 50;
 
    # Timeout settings that defeat Slowloris
    client_header_timeout 10s;     # Time to read request headers (Slowloris fix)
    client_body_timeout 10s;       # Time between body reads
    send_timeout 10s;              # Time between response sends
    keepalive_timeout 15s;         # Keep-alive timeout
    reset_timedout_connection on;  # Send RST instead of FIN (faster cleanup)
 
    # Request rate limiting per IP
    limit_req_zone $binary_remote_addr zone=req_limit_per_ip:10m rate=100r/m;
 
    server {
        listen 443 ssl http2;
 
        # Apply rate limiting to the server
        limit_req zone=req_limit_per_ip burst=20 nodelay;
        limit_conn conn_limit_per_ip 20;
 
        # Return 429 on rate limit (not 503)
        limit_req_status 429;
        limit_conn_status 429;
    }
}

Application-Layer (Layer 7) Attacks: The Hard Category

Layer 7 attacks are the most difficult to defend against because the traffic is individually legitimate. The server cannot distinguish "1,000 GET requests from real users" from "1,000 GET requests from a botnet" based on syntax alone. These attacks are measured in requests per second (RPS).

HTTP Floods: Resource-Intensive Endpoint Targeting

A naive HTTP flood sends high volumes of GET or POST requests. A sophisticated one targets the most expensive endpoints: search functions that hit the database, login endpoints that verify bcrypt hashes, APIs that generate reports, payment processing endpoints.

A typical web server handles static file requests at 10,000-100,000 RPS. A database-backed API endpoint might handle 100-500 RPS. An attacker who knows your architecture (from public API documentation, or from probing your endpoints before the attack) targets the most expensive path.
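Those capacity figures make the attacker's arithmetic easy; a quick sketch using the numbers above (the per-bot rate is an assumption):

```python
import math

# How many bots does an attacker need to saturate an endpoint, if each bot
# sends modestly to stay under per-IP rate limits?
def bots_needed(endpoint_capacity_rps: float, per_bot_rps: float = 1.0) -> int:
    return math.ceil(endpoint_capacity_rps / per_bot_rps)

print(bots_needed(100_000))  # static files: 100,000 bots at 1 rps each
print(bots_needed(500))      # database-backed search endpoint: only 500 bots
```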

Cache-busting: CDNs cache responses for repeated requests to the same URL. Attackers append unique random query strings to each request (?a=483920, ?a=203948) to prevent CDN caching and ensure every request reaches the origin server.

# Cache-busting HTTP flood pattern
# Each request has a unique query string that bypasses CDN cache
import requests
import random
 
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)",
    "Mozilla/5.0 (X11; Linux x86_64)",
]
 
def flood_request(target):
    bust_param = random.randint(1, 99999999)
    url = f"https://{target}/api/search?q=test&cache_bust={bust_param}"
    headers = {
        "Cache-Control": "no-cache, no-store, must-revalidate",
        "Pragma": "no-cache",
        "User-Agent": random.choice(USER_AGENTS),  # Rotate to evade UA blocking
    }
    try:
        requests.get(url, headers=headers, timeout=3)
    except requests.RequestException:
        pass

Defense at the application layer:

# nginx: WAF-style rules to detect cache-busting patterns
# Block requests with obviously random cache-busting parameters
if ($arg_cache_bust ~* "[0-9]{7,}") {
    return 429;
}
 
# Rate limit the search endpoint specifically
# (zone definitions must live in the http context, not inside a location)
limit_req_zone $binary_remote_addr zone=search_limit:5m rate=10r/m;
 
location /api/search {
    limit_req zone=search_limit burst=5 nodelay;
    proxy_pass http://backend;
}

RUDY (R-U-Dead-Yet)

RUDY is a POST-based attack that sends large POST requests with a claimed Content-Length and delivers the body one byte at a time:

POST /api/contact HTTP/1.1
Host: example.com
Content-Type: application/x-www-form-urlencoded
Content-Length: 100000000
 
m=contact&email=test%40test.com[one byte delivered every 10 seconds]

The server allocates resources for the declared content length and waits for the body to arrive. A single attacker machine can hold dozens of server connections open this way. The defense is the same as Slowloris: client_body_timeout in nginx, RequestReadTimeout in Apache.

HTTP/2 Rapid Reset: CVE-2023-44487

In October 2023, Google disclosed a novel HTTP/2 attack vector — CVE-2023-44487 — that enabled attacks up to 7.5 times larger than any previously recorded. The attack exploited the stream multiplexing feature of HTTP/2.

In HTTP/2, multiple requests are multiplexed over a single TCP connection as streams. A client can cancel a stream immediately after opening it by sending an RST_STREAM frame. The attack sent thousands of requests per second, each followed immediately by an RST_STREAM cancellation. Each request consumed server resources to process, but a canceled stream no longer counts against the connection's concurrent-stream limit, so a single connection could issue an effectively unbounded request rate. The net effect: a small number of attacker connections could generate enormous server load.
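To see why the attack is so cheap to emit, here is a sketch that builds the raw byte sequence for one request-and-cancel pair (frame layout per RFC 9113; the header block here is a placeholder, not valid HPACK):

```python
import struct

def frame(ftype: int, flags: int, stream_id: int, payload: bytes) -> bytes:
    """HTTP/2 frame: 24-bit length, 8-bit type, 8-bit flags, 31-bit stream id."""
    header = struct.pack(">I", len(payload))[1:]                   # 3-byte length
    header += struct.pack(">BBI", ftype, flags, stream_id & 0x7FFFFFFF)
    return header + payload

HEADERS, RST_STREAM = 0x1, 0x3
END_HEADERS = 0x4   # flag: header block complete
CANCEL = 0x8        # RST_STREAM error code

def rapid_reset_pair(stream_id: int, header_block: bytes) -> bytes:
    """One request immediately canceled. The canceled stream stops counting
    against the concurrent-stream limit, so the next stream opens at once."""
    return (frame(HEADERS, END_HEADERS, stream_id, header_block)
            + frame(RST_STREAM, 0x0, stream_id, struct.pack(">I", CANCEL)))

pair = rapid_reset_pair(1, b"\x00")   # placeholder header block, not real HPACK
print(len(pair))  # → 23 bytes on the wire per request-and-cancel
```

A few dozen bytes per request explains the request rates involved: even a modest uplink can carry millions of these pairs per second.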

Google's infrastructure absorbed a 398 million RPS attack using this technique. Cloudflare absorbed a 201 million RPS attack. Both were orders of magnitude larger than previously recorded application-layer DDoS attacks.

The fix required patches in HTTP/2 implementations across all major web servers and proxies (nginx, Apache, Go's net/http, Node.js, h2o). The CVE was coordinated across multiple organizations and disclosed simultaneously.

# Verify your nginx version has the CVE-2023-44487 patch
nginx -v
# nginx shipped Rapid Reset mitigations in the 1.25.x mainline (1.25.3)
# Check your distribution's advisory for backport status on stable branches
 
# Workaround if patching is not immediately possible:
# Limit the number of concurrent streams per connection
http2_max_concurrent_streams 128;  # Default is 128 in nginx, lower it if under attack
 
# Limit the connection and request rates more aggressively during an attack
limit_conn conn_limit_per_ip 5;
limit_req zone=req_limit_per_ip burst=10 nodelay;

Botnet Architecture: What Drives DDoS Attacks

Understanding botnets — where DDoS attack traffic actually comes from — is essential context for understanding why mitigation is hard and why attack capacity keeps increasing.

Recruitment: How Devices Get Compromised

IoT default credentials. Mirai's entire botnet was built by scanning for devices with default username/password combinations. The scanner tried 61 specific credential pairs (admin/admin, root/root, admin/1234, root/vizxv, support/support, etc.) against Telnet (port 23) and SSH (port 22). In 2016, a large share of consumer IoT devices shipped with these credentials and exposed management ports to the internet.

A sample of the credential pairs Mirai tried (from the published source code):

root:xc3511, root:vizxv, root:admin, admin:admin, root:888888,
root:xmhdipc, root:default, root:juantech, root:123456, root:54321,
support:support, root:root, root:12345, user:user, admin:password,
root:pass, admin:admin1234, root:1111, admin:smcadmin, admin:1111,
root:666666, root:password, root:1234, root:klv123, Administrator:admin,
service:service, supervisor:supervisor, guest:guest, guest:12345,
admin:1234, admin:12345, admin:54321, admin:123456, admin:7ujMko0admin,
admin:pass, admin:meinsm, tech:tech, mother:1234

None of these are exotic passwords. They are the factory defaults that shipping firmware hardcoded.

Exploit kits. More sophisticated botnets use automated scanning and exploitation of known vulnerabilities in unpatched routers, NAS devices, and servers. CVE lists for Netgear, TP-Link, D-Link, Hikvision, and Dahua cameras are populated with remotely exploitable vulnerabilities that botnet operators actively weaponize within days of disclosure.

Phishing and malware. For Windows-based botnets targeting higher-bandwidth hosts (compromised home and corporate PCs), traditional phishing and drive-by downloads are the recruitment mechanism.

Command and Control Architecture

Centralized C2. Original botnet design: bots check in with one or a few IRC servers or HTTP endpoints for commands. Effective to take down — seize the C2 and the botnet goes dark. Now rare for large operations.

Domain Generation Algorithms (DGA). The bot and C2 operator use the same algorithm to generate hundreds of potential domain names per day. The attacker registers a small number of them; the bot tries all generated names until it finds one that resolves. Law enforcement must enumerate all generated domains and seize them — a cat-and-mouse game the attacker can win by rotating the algorithm.
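A minimal, hypothetical DGA in the style described (real families use more elaborate seeds, hashes, and TLD rotation):

```python
import hashlib
from datetime import date

def dga_domains(seed: str, day: date, count: int = 5) -> list:
    """Bot and operator run the same algorithm, so both derive the same
    candidate C2 domains for any given day."""
    domains = []
    for i in range(count):
        h = hashlib.md5(f"{seed}:{day.isoformat()}:{i}".encode()).hexdigest()
        domains.append(f"{h[:12]}.com")
    return domains

# The operator registers one candidate; the bot tries each until one resolves
print(dga_domains("examplefamily", date(2026, 3, 6)))
```

Defenders who reverse the algorithm can pre-register or sinkhole upcoming domains, which is why operators rotate seeds and algorithms.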

Fast-flux DNS. C2 domains resolve to different IP addresses every few minutes, cycling through a pool of compromised hosts acting as proxies. The underlying C2 server is never directly exposed.

Peer-to-peer C2. No central server at all. Commands propagate through the botnet via DHT (distributed hash table) or blockchain-based signaling. Taking down any individual node has no effect on the overall botnet.

Encrypted protocols. Modern botnets communicate over TLS or use legitimate cloud services (GitHub, Pastebin, Telegram channels) as dead drops for C2 commands. Network-level blocking becomes impossible without affecting legitimate traffic.

For-Hire Services: The DDoS Market

DDoS-for-hire services (booters/stressers) commoditize attack capacity. They provide a web dashboard, billing in cryptocurrency, customer support, and uptime guarantees. Law enforcement operations (FBI's DDoS-for-hire takedowns in 2018 and 2022) seized major services, but the market recovers.

Current market pricing (approximate, based on underground forum monitoring):

  • 10 Gbps, 1 hour: $30-$60
  • 100 Gbps, 24 hours: $200-$500
  • 1 Tbps, 24 hours: $500-$2,000 (requires direct botnet operator relationship)

The cost of defending against these attacks for a mid-size e-commerce site is orders of magnitude higher than the cost of launching them.

DDoS Protection Architecture

Effective DDoS protection is layered. No single control handles all attack types. The goal is to absorb volumetric attacks before they reach your network, terminate protocol attacks at the edge, and distinguish legitimate from malicious application traffic at Layer 7.

Layer 1: BGP Anycast and CDN Absorption

Anycast routing is the mechanism that makes CDN-based DDoS absorption work. Multiple servers at different geographic locations advertise the same IP prefix via BGP. Routing naturally directs each packet to the nearest BGP peer. When a DDoS attack begins, traffic is automatically distributed across all PoPs globally rather than concentrating at one target.

Cloudflare operates over 300 PoPs with claimed network capacity exceeding 321 Tbps. If an attack generates 1 Tbps aimed at an anycast address (say 203.0.113.1, a documentation-range IP standing in for a real one), each PoP absorbs a fraction of the total — none sees more than it can handle. The origin server behind Cloudflare never sees this traffic.

For organizations with significant bandwidth requirements, services like Cloudflare Magic Transit or Akamai Prolexic provide anycast absorption for entire IP prefixes — not just HTTP traffic, but all IP protocols. This is appropriate for protecting game servers, VoIP infrastructure, and non-HTTP services that cannot use a standard CDN.

Layer 2: Traffic Scrubbing Centers

A scrubbing center is a traffic cleaning facility that absorbs attack traffic, filters it, and forwards only clean traffic to the origin. Traffic is diverted to the scrubbing center either via:

Always-on: All traffic routes through the scrubbing center permanently. Higher baseline latency (typically 5-15 ms added end to end), but zero detection lag when an attack begins.

On-demand: Traffic routes normally. When an attack is detected (via NetFlow analysis, SNMP thresholds, or manual observation), a BGP advertisement diverts traffic to the scrubbing center. Attack traffic is absorbed; clean traffic is forwarded to the origin via a GRE tunnel.

The GRE tunnel from scrubbing center to origin must be configured to accept traffic only from the scrubbing center's IP ranges, and the origin server must have a security group or firewall rule that blocks all direct internet access:

# Set up GRE tunnel from scrubbing center to origin
# On the origin server:
ip tunnel add gre1 mode gre remote [SCRUBBING-CENTER-IP] local [ORIGIN-IP] ttl 255
ip link set gre1 up
ip addr add 100.64.0.2/30 dev gre1  # Private IP for the tunnel
 
# Route specific prefixes through the tunnel
ip route add default via 100.64.0.1 dev gre1 metric 100
 
# Block all direct internet access EXCEPT the scrubbing center
iptables -A INPUT -s [SCRUBBING-CENTER-IP] -j ACCEPT
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -j DROP  # Block all other inbound

Layer 3: BGP Blackhole Routing (RTBH)

Remote Triggered Black Hole (RTBH) routing is the last resort — the option you use when the attack exceeds the capacity of all scrubbing infrastructure. You announce the victim IP with a special BGP community attribute that instructs upstream routers to drop all traffic destined for that IP.

The result: the attack traffic disappears, but so does all legitimate traffic. The IP is unreachable. This is useful for protecting everything else on the same infrastructure from collateral congestion, but it hands the attacker a win — the service is down.

Most ISPs and CDN providers support community-based blackholing for their customers. The BGP community value varies by provider (commonly [ASN]:666):

# Cisco router example — announcing a /32 blackhole
# The route is tagged with the blackhole community
ip prefix-list BLACKHOLE-PREFIX permit 203.0.113.100/32
route-map BLACKHOLE-OUT permit 10
  match ip address prefix-list BLACKHOLE-PREFIX
  set community 64496:666  # Provider-specific blackhole community
  set ip next-hop 192.0.2.1  # Null route next-hop

# The route propagates upstream with the blackhole community tag
# Upstream routers see the community and route to Null0

Layer 4: Web Application Firewall and Bot Management

For Layer 7 attacks, a WAF with behavioral bot detection is the appropriate defense.

Cloudflare WAF configuration for DDoS protection:

// Cloudflare Workers script — advanced Layer 7 rate limiting
// Allows legitimate users while blocking automated flood traffic
addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})
 
async function handleRequest(request) {
  const clientIP = request.headers.get('CF-Connecting-IP')
  const url = new URL(request.url)
 
  // Check request rate from this IP using a KV namespace (bound here as `KV`)
  // Fixed one-minute window; KV is eventually consistent, so counts are approximate
  const rateLimitKey = `rate:${clientIP}:${Math.floor(Date.now() / 60000)}`
  const currentCount = parseInt(await KV.get(rateLimitKey) || '0')
 
  if (currentCount > 100) {  // More than 100 requests per minute
    return new Response('Too Many Requests', {
      status: 429,
      headers: {
        'Retry-After': '60',
        'Content-Type': 'text/plain'
      }
    })
  }
 
  // Increment counter
  await KV.put(rateLimitKey, String(currentCount + 1), {expirationTtl: 120})
 
  // Challenge suspicious requests (low-entropy user agents, no JS support)
  // isSuspiciousUA() and generateChallenge() are placeholders for your own
  // heuristics or a managed challenge
  const ua = request.headers.get('User-Agent') || ''
  if (isSuspiciousUA(ua)) {
    return generateChallenge(request)
  }
 
  return fetch(request)
}
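One design note on the Worker above: a fixed one-minute window lets a client burst up to twice the limit across a window boundary. A token bucket smooths this out; a minimal sketch (parameters mirror the 100-per-minute limit):

```python
import time

class TokenBucket:
    """Allow `rate` requests/second on average, with bursts up to `burst`."""
    def __init__(self, rate: float, burst: float):
        self.rate, self.burst = rate, burst
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst size
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=100 / 60, burst=20)  # ~100/min with a 20-request burst
allowed = sum(bucket.allow() for _ in range(100))
print(allowed)  # only the initial burst passes; the flood's tail is rejected
```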

Nginx rate limiting and bot blocking:

# /etc/nginx/nginx.conf — comprehensive Layer 7 DDoS mitigation
 
http {
    # Geo-based rate limiting (optional — useful for geo-targeted attacks)
    geo $limit {
        default 1;
        10.0.0.0/8 0;    # Never rate limit internal traffic
        192.168.0.0/16 0;
    }
 
    # Create rate limiting zones
    map $limit $limit_key {
        0 "";
        1 $binary_remote_addr;
    }
 
    limit_req_zone $limit_key zone=general:10m rate=100r/s;
    limit_req_zone $limit_key zone=api:10m rate=20r/s;
    limit_req_zone $limit_key zone=login:10m rate=5r/m;
    limit_conn_zone $limit_key zone=addr:10m;
 
    server {
        listen 443 ssl http2;
 
        # Global rate limits
        limit_req zone=general burst=200 nodelay;
        limit_conn addr 50;
 
        # API endpoints — stricter limits
        location /api/ {
            limit_req zone=api burst=50 nodelay;
            limit_req_status 429;
            proxy_pass http://backend;
        }
 
        # Login/auth endpoints — very strict limits
        location /auth/ {
            limit_req zone=login burst=5 nodelay;
            limit_req_status 429;
            proxy_pass http://backend;
        }
 
        # Block known bad user agents
        if ($http_user_agent ~* "(wget|curl|python-requests|Go-http-client|libwww-perl)") {
            return 403;
        }
 
        # Block requests without User-Agent (simple bots)
        if ($http_user_agent = "") {
            return 403;
        }
    }
}

Protecting Your Origin IP Address

One of the most common reasons CDN-protected services fail under DDoS attacks: the origin IP is known and the attacker bypasses the CDN entirely.

Origin IP addresses leak through multiple channels:

# Check if your origin IP is discoverable via SPF records
dig TXT yourdomain.com | grep "v=spf"
# SPF records often list actual mail server IPs, which may reveal origin IPs
 
# Check Certificate Transparency logs for historical subdomains
# Subdomains like origin.yourdomain.com or direct.yourdomain.com
# may have pointed to the origin IP before CDN was enabled
curl "https://crt.sh/?q=%.yourdomain.com&output=json" | jq '.[].name_value' | sort -u
 
# Check historical DNS records
# SecurityTrails, PassiveTotal, Shodan show historical DNS records
# Your origin IP may have been exposed before you added the CDN
 
# Verify current DNS only shows CDN IPs
dig A yourdomain.com +short
# Should return Cloudflare/Akamai/Fastly IPs, not your origin
 
# Scan your own IP range for exposed services
nmap -sV -p 80,443,8080,8443 YOUR-ORIGIN-IP-RANGE
# Nothing should be reachable on standard ports from the internet

Locking down the origin so only CDN traffic reaches it:

# iptables rules that accept HTTP/HTTPS only from Cloudflare IP ranges
# Cloudflare publishes their IP ranges at https://www.cloudflare.com/ips/
 
# Create ipset for Cloudflare IPs
ipset create CLOUDFLARE hash:net
 
# Add Cloudflare IPv4 ranges
for ip in 103.21.244.0/22 103.22.200.0/22 103.31.4.0/22 104.16.0.0/13 \
          104.24.0.0/14 108.162.192.0/18 131.0.72.0/22 141.101.64.0/18 \
          162.158.0.0/15 172.64.0.0/13 173.245.48.0/20 188.114.96.0/20 \
          190.93.240.0/20 197.234.240.0/22 198.41.128.0/17; do
    ipset add CLOUDFLARE $ip
done
 
# Accept HTTP/HTTPS from Cloudflare only
iptables -A INPUT -p tcp --dport 80 -m set --match-set CLOUDFLARE src -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -m set --match-set CLOUDFLARE src -j ACCEPT
iptables -A INPUT -p tcp --dport 80 -j DROP   # Drop from anyone else
iptables -A INPUT -p tcp --dport 443 -j DROP  # Drop from anyone else
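Cloudflare's ranges change occasionally, so the hardcoded list above will drift. A sketch for regenerating the set from the published list (the `emit_ipset_refresh` function name is my own; it prints the `ipset` commands rather than running them, so the output can be reviewed and then piped into `sh`):

```shell
# Usage (ips-v4 is the plaintext list behind the page cited above):
#   curl -s https://www.cloudflare.com/ips-v4 | emit_ipset_refresh | sh
emit_ipset_refresh() {
    # Build a new set and atomically swap it in, so the firewall never
    # matches against an empty CLOUDFLARE set mid-update
    echo "ipset create CLOUDFLARE_NEW hash:net -exist"
    echo "ipset flush CLOUDFLARE_NEW"
    while IFS= read -r cidr; do
        case "$cidr" in ""|\#*) continue ;; esac   # skip blanks and comments
        echo "ipset add CLOUDFLARE_NEW $cidr"
    done
    echo "ipset swap CLOUDFLARE_NEW CLOUDFLARE"
    echo "ipset destroy CLOUDFLARE_NEW"
}
```

Run from cron (say, weekly) so the allowlist tracks Cloudflare's announcements; the swap/destroy pair keeps the update atomic from iptables' point of view.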

Alternatively, Cloudflare's Authenticated Origin Pulls feature has Cloudflare present a client certificate during the TLS handshake with your origin; the origin verifies that certificate and rejects any connection that lacks it — ensuring you are receiving traffic only from the CDN:

# nginx: Require Cloudflare's client certificate for origin connections
server {
    listen 443 ssl;
 
    # Standard SSL config
    ssl_certificate /path/to/cert.pem;
    ssl_certificate_key /path/to/key.pem;
 
    # Require client certificate from Cloudflare
    ssl_client_certificate /etc/nginx/cloudflare-ca.pem;
    ssl_verify_client on;
 
    # If the request doesn't have a valid Cloudflare cert, reject it
    if ($ssl_client_verify != SUCCESS) {
        return 403;
    }
 
    location / {
        proxy_pass http://backend;
    }
}

Real-World DDoS Case Studies

The Dyn Attack (October 2016): DNS Infrastructure as Target

The 2016 Dyn attack remains the highest-impact DDoS in terms of collateral damage. Rather than attacking the end targets (Twitter, GitHub, Netflix), Mirai targeted their shared infrastructure dependency — the DNS provider Dyn.

The technical lesson: single points of failure in shared DNS infrastructure create a catastrophic blast radius. The remediation is multi-provider DNS — delegating to both Dyn and alternative providers (Route 53, NS1, Cloudflare DNS) simultaneously, so the failure of any single provider does not cause a complete outage.

# DNS redundancy configuration example
# Primary: Dyn, Secondary: AWS Route 53
# If Dyn goes down, Route 53 nameservers answer
 
# Zone delegation with multiple providers
# In registrar settings, specify nameservers from BOTH providers:
# ns1.dyn.com, ns2.dyn.com (Dyn)
# ns-123.awsdns-15.com, ns-456.awsdns-45.net (Route 53)
 
# Resolvers receive the full NS set and spread queries across it
# If Dyn is unreachable, resolvers retry against Route 53 automatically
# Caveat: zone data must be kept in sync across both providers
# (via zone transfer or API-driven updates)

GitHub Attack (February 2018): Memcached Amplification

The 1.35 Tbps attack against GitHub lasted approximately ten minutes before Akamai's scrubbing centers absorbed it. GitHub had automatic DDoS detection in place and rerouted traffic to Akamai's scrubbing infrastructure within minutes — no origin deployment is provisioned for bandwidth at this scale.

The attack vector — Memcached UDP amplification — was essentially brand new: the technique had surfaced in the wild only days earlier, and its amplification factor of up to roughly 51,000x was unprecedented. Defenders had few signatures for it. Akamai's scrubbing infrastructure identified the attack pattern, characterized it, and deployed mitigation rules within about eight minutes.
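The amplification arithmetic explains the scale: memcached answers a tiny spoofed UDP request with whatever payload is cached under the requested key. A back-of-the-envelope sketch (the 15-byte request and 750 KB response are round illustrative figures consistent with the commonly cited ~51,000x, not measurements from this specific attack):

```shell
# Amplification factor = response bytes delivered to the victim
#                        per spoofed request byte sent by the attacker
req_bytes=15                   # minimal "get <key>" request over UDP
resp_bytes=$((750 * 1024))     # a single cached value can be hundreds of KB
factor=$((resp_bytes / req_bytes))
echo "amplification: ${factor}x"
```

At that ratio, a modest botnet sending a few hundred Mbps of spoofed queries reflects into a terabit-scale flood.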

The immediate fix for Memcached is straightforward:

# Memcached security checklist
# 1. Disable UDP (the amplification vector)
# In /etc/memcached.conf:
-U 0
 
# 2. Bind to localhost only (never internet-facing)
-l 127.0.0.1
 
# 3. If remote access is needed, use a VPN or SSH tunnel
# Never expose Memcached directly to the internet
 
# Verify the running configuration (memcached -h only prints help text)
ps aux | grep '[m]emcached'   # flags should include -U 0 and -l 127.0.0.1
ss -tnlp | grep 11211         # TCP: should show 127.0.0.1:11211 only
ss -unlp | grep 11211         # UDP: should show nothing at all

AWS Shield Advanced (February 2020): The 2.3 Tbps Record

In February 2020, AWS Shield Advanced absorbed what was then the largest DDoS attack ever recorded: 2.3 Tbps. The attack used CLDAP (Connectionless LDAP) reflection. AWS has not publicly disclosed the target. The attack was mitigated without any service disruption.

The existence of 2.3 Tbps attacks means organizations without CDN/scrubbing providers face an attack capability that exceeds the total bandwidth of most data centers. There is no on-premises solution that absorbs 2.3 Tbps.

Defense-in-Depth: The Complete Stack

A production-grade DDoS protection stack:

| Layer | Control | What It Handles |
|---|---|---|
| DNS | Multi-provider DNS (Route 53 + Cloudflare) | DNS infrastructure attacks |
| Network edge | CDN with anycast (Cloudflare, Fastly, Akamai) | Volumetric attacks up to Tbps scale |
| Protocol | BGP scrubbing on-demand | Protocol attacks beyond CDN capacity |
| Application | WAF with behavioral bot management | Layer 7 attacks, cache-busting |
| Origin | Origin IP hidden, Authenticated Origin Pulls | Bypass attacks against origin |
| Rate limiting | Per-endpoint rate limits (nginx/Cloudflare rules) | Application-layer floods |
| Monitoring | NetFlow + SNMP thresholds + CDN attack analytics | Detection and alerting |

Implementation priority order for a new deployment:

  1. Place all public-facing services behind a CDN with DDoS absorption (Cloudflare or Akamai)
  2. Ensure the origin IP is not discoverable (audit SPF records, CT logs, historical DNS)
  3. Configure Authenticated Origin Pulls so only CDN traffic reaches origin
  4. Configure rate limiting at the CDN level for API endpoints and login
  5. Deploy WAF rules for common attack signatures (OWASP Core Rule Set)
  6. Enable SYN cookies and tune TCP stack parameters on origin servers
  7. Disable all unnecessary UDP services (or bind to localhost)
  8. Pre-contract on-demand BGP scrubbing for bandwidth emergencies
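Step 6 of the list above, sketched as sysctl settings (the values shown are common starting points, not tuned recommendations — validate them against your traffic profile):

```
# /etc/sysctl.d/99-syn-flood.conf — harden the TCP stack against SYN floods
net.ipv4.tcp_syncookies = 1           # answer with SYN cookies once the backlog fills
net.ipv4.tcp_max_syn_backlog = 8192   # more room for half-open connections
net.ipv4.tcp_synack_retries = 3       # retransmit unanswered SYN-ACKs fewer times

# Apply without a reboot:
#   sysctl --system
```

SYN cookies let the kernel keep accepting legitimate connections even when the half-open queue is saturated, at the cost of dropping some TCP options for connections established under pressure.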

What the Dyn attack demonstrated, and what remains true today, is that the critical failure is almost always in planning: origin IPs that leaked, scrubbing capacity not contracted before it was needed, single-provider DNS that created a single point of failure. The technical mitigations exist and work. The gap is in implementation before the attack starts, not in finding new tools during one.
