# Nginx vs Caddy: Choosing a Reverse Proxy in 2026
Nginx has been the default reverse proxy for over a decade. Caddy has been quietly winning converts with automatic HTTPS and a simpler config format. Here’s how they compare in practice.
## Configuration: Verbosity vs. Simplicity

### Nginx
Serving a static site with HTTPS and a reverse proxy:
```nginx
server {
    listen 80;
    server_name example.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl;
    http2 on;  # the old "listen 443 ssl http2;" form is deprecated since nginx 1.25.1
    server_name example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_protocols       TLSv1.2 TLSv1.3;
    ssl_ciphers         ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256;

    root  /var/www/example.com;
    index index.html;

    location /api/ {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```
Plus you need certbot running on a cron/timer for certificate renewal.
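Distro packages of certbot typically install a systemd timer for you, but if you manage the schedule yourself, a crontab entry might look like this (the twice-daily cadence follows certbot's own recommendation; the reload hook is an assumption about your setup):

```shell
# Renew any certificates nearing expiry, then reload nginx to pick up the new files.
0 3,15 * * * certbot renew --quiet --deploy-hook "systemctl reload nginx"
```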
### Caddy
The equivalent Caddyfile:

```caddyfile
example.com {
	root * /var/www/example.com
	reverse_proxy /api/* 127.0.0.1:3000
	file_server
}
```

(The matcher form keeps the `/api/` prefix on proxied requests, matching the nginx config above; Caddy's `handle_path` directive would strip it instead.)
That’s it. Caddy obtains and renews TLS certificates automatically via Let’s Encrypt. HTTP→HTTPS redirect is implicit. Headers are forwarded by default.
## Automatic TLS
This is Caddy’s killer feature. It handles:
- Certificate issuance via ACME (Let’s Encrypt or ZeroSSL)
- Automatic renewal before expiration
- OCSP stapling
- Redirect from HTTP to HTTPS
- Modern TLS defaults
With Nginx, you manage this yourself through certbot, acme.sh, or similar tools. It works fine, but it’s another moving part.
For internal services, Caddy can also provision certificates from an internal CA, which is useful for securing traffic between microservices.
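That internal-CA mode is a one-line opt-in via the `tls internal` directive; a sketch with a hypothetical internal hostname (clients must trust Caddy's locally generated root CA):

```caddyfile
api.internal.example {
	tls internal
	reverse_proxy 127.0.0.1:3000
}
```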
## Performance
For most workloads, you won’t notice a difference. Both handle tens of thousands of concurrent connections comfortably.
Where Nginx has an edge:

- Raw throughput at extreme scale (100k+ concurrent connections)
- Marginally faster static file serving, thanks to `sendfile` and kernel optimizations
- Mature caching with `proxy_cache` for complex caching strategies
Where Caddy has an edge:
- HTTP/3 and QUIC support out of the box
- Lower memory usage for small-to-medium deployments
- On-the-fly config reloads via API without dropping connections
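Those API-driven reloads go through Caddy's admin endpoint (localhost:2019 by default); the `caddy reload` command is a thin wrapper around it. A sketch, assuming a Caddyfile in the current directory:

```shell
# Graceful, zero-downtime reload via the CLI wrapper...
caddy reload --config Caddyfile

# ...or by POSTing the config to the admin API directly.
curl localhost:2019/load \
  -H "Content-Type: text/caddyfile" \
  --data-binary @Caddyfile
```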
## PHP-FPM Integration

### Nginx
```nginx
location ~ \.php$ {
    fastcgi_pass unix:/run/php/php8.3-fpm.sock;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
}
```
### Caddy
```caddyfile
example.com {
	root * /var/www/mysite
	php_fastcgi unix//run/php/php8.3-fpm.sock
	file_server
}
```
Caddy’s `php_fastcgi` directive handles `index.php` rewrites, `PATH_INFO` splitting, and `SCRIPT_FILENAME` automatically.
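Under the hood, `php_fastcgi` is shorthand; per the Caddy docs it expands to a longer config roughly along these lines (an abridged sketch, not the exact expansion):

```caddyfile
# Try the requested path, then directory index, then fall back to index.php
@indexFiles file {
	try_files {path} {path}/index.php index.php
	split_path .php
}
rewrite @indexFiles {http.matchers.file.relative}

# Proxy only .php requests to the FastCGI responder
@phpFiles path *.php
reverse_proxy @phpFiles unix//run/php/php8.3-fpm.sock {
	transport fastcgi {
		split .php
	}
}
```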
## When to Choose Nginx
- You’re already running it and it works
- You need advanced load balancing (weighted, `least_conn`, `ip_hash`)
- You need `proxy_cache` for edge caching
- Your team knows the config format
- You’re running at massive scale and need every last bit of throughput
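The load-balancing strategies mentioned above live in an `upstream` block; a minimal sketch with hypothetical backend addresses:

```nginx
upstream api_backend {
    least_conn;                      # pick the server with fewest active connections
    server 10.0.0.10:3000 weight=3;  # receives roughly 3x the traffic of its peers
    server 10.0.0.11:3000;
    server 10.0.0.12:3000 backup;    # only used when the others are unavailable
}

server {
    listen 80;
    location /api/ {
        proxy_pass http://api_backend;
    }
}
```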
## When to Choose Caddy
- You’re setting up a new project and want minimal config
- Automatic HTTPS is a priority (especially with many domains)
- You want HTTP/3 without extra modules
- You’re running on a small VPS where simplicity matters
- You manage multiple sites and don’t want to think about certificates
## The Pragmatic Answer
For new projects, start with Caddy. The automatic TLS alone saves enough operational headaches to justify it. If you hit Caddy’s limits — which is unlikely for most projects — Nginx is always there.
For existing Nginx setups that work well, there’s no compelling reason to migrate. Nginx isn’t going anywhere, the ecosystem is massive, and every edge case has a Stack Overflow answer.
Both are excellent. Pick the one that reduces your operational burden.