"5000 users. 187ms response. 431MB RAM. 2–6% CPU. $6/month. We didn't break the laws of physics; we just understood them better."
Goal
Can WordPress serve 5000 concurrent users on a 1GB RAM VPS... and stay under 200ms?
- VPS: Vultr (1 vCPU, 1GB RAM, $6/mo)
- Stack: 100% open-source, containerized
- Caching: 4 layers (CDN, full-page, object, opcode)
- Budget: Under $10/mo total
Spoiler: Yes. Wildly yes.
The Benchmark
Users | Response | RAM | CPU
---|---|---|---
250 | 198ms | 406MB | ~2%
500 | 191ms | 399MB | ~2%
750 | 191ms | 403MB | ~2–5%
1500 | 192ms | 416MB | ~3–6%
5000 | 187ms (max: 391ms) | 431MB | ~2–6%
A 2600% increase in traffic with essentially flat latency. Sub-linear scaling, confirmed.
Stack Overview
- NGINX (Alpine)
- PHP 8.2-FPM + OPcache
- WordPress (latest)
- MariaDB (Docker image)
- Redis (object cache)
- Caddy (SSL + HTTP/3)
- Docker Compose
- Cloudflare (free tier)
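For orientation, here is a minimal docker-compose.yml sketch of how these pieces could be wired together. Service names, image tags, volumes, and credentials are illustrative assumptions, not the actual file from the repo linked below.

```yaml
# Hypothetical layout; adjust names, tags, and secrets to taste.
services:
  caddy:
    image: caddy:alpine
    ports: ["80:80", "443:443", "443:443/udp"]   # UDP 443 for HTTP/3 (QUIC)
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
    depends_on: [nginx]
  nginx:
    image: nginx:alpine
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf
      - wp_data:/var/www/html
    depends_on: [php]
  php:
    image: wordpress:php8.2-fpm-alpine
    volumes:
      - wp_data:/var/www/html
      - ./php/www.conf:/usr/local/etc/php-fpm.d/www.conf
      - ./php/opcache.ini:/usr/local/etc/php/conf.d/opcache.ini
    depends_on: [db, redis]
  db:
    image: mariadb:11
    environment:
      MARIADB_DATABASE: wordpress
      MARIADB_USER: wp
      MARIADB_PASSWORD: change-me
      MARIADB_ROOT_PASSWORD: change-me-too
    volumes:
      - db_data:/var/lib/mysql
  redis:
    image: redis:alpine
    command: ["redis-server", "--maxmemory", "256mb", "--maxmemory-policy", "allkeys-lru"]
volumes:
  wp_data:
  db_data:
```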
Cache Hierarchy
Layer | Hit Rate | Notes |
---|---|---|
Cloudflare | 45.8% | CDN edge cache |
Cache Enabler | 87.2% | Full-page HTML |
Redis | 99.93% | Object, transients |
OPcache | 96.7% | PHP bytecode |
Only ~0.4% of requests reached PHP, and ~0.02% reached the database.
PHP-FPM Config (for 1GB RAM)

```ini
pm = dynamic
pm.max_children = 4
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3
pm.max_requests = 500
```
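Why four children? A rough sizing rule of thumb (the per-worker figure is an assumption; measure it on your own box):

```ini
; pm.max_children ≈ RAM earmarked for PHP ÷ average worker RSS
; e.g. ~300 MB budget ÷ ~70-80 MB per WordPress worker ≈ 4 workers
; On a GNU userland you can estimate worker RSS with something like:
;   ps -o rss= -C php-fpm | awk '{sum+=$1; n++} END {print sum/n/1024 " MB"}'
```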
OPcache

```ini
opcache.enable=1
opcache.memory_consumption=128
opcache.max_accelerated_files=10000
opcache.validate_timestamps=0
```
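One caveat: with opcache.validate_timestamps=0, PHP never re-checks files on disk, so core or plugin updates won't take effect until PHP-FPM restarts. Assuming the PHP service is named php in the compose file, that's a one-liner:

```sh
docker-compose restart php   # restarting PHP-FPM clears the shared OPcache
```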
Redis

```conf
maxmemory 256mb
maxmemory-policy allkeys-lru
```
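The 99.93% object-cache hit rate can be verified straight from Redis: the standard INFO stats output exposes keyspace_hits and keyspace_misses (service name redis assumed):

```sh
docker-compose exec -T redis redis-cli INFO stats | grep -E 'keyspace_(hits|misses)'
# hit rate = keyspace_hits / (keyspace_hits + keyspace_misses)
```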
NGINX + Caddy
- Gzip enabled
- Static cache: 30d
- HTML cache: 1h
- HTTP/3 via Caddy + TLS
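A hedged sketch of what the relevant NGINX server block might look like. The Cache Enabler cache path below is illustrative (its on-disk layout varies by plugin version), and real setups add bypass conditions for POST requests, query strings, and logged-in-user cookies:

```nginx
server {
    listen 80;
    root /var/www/html;
    index index.php;

    gzip on;
    gzip_types text/css application/javascript application/json image/svg+xml;

    # Static assets: 30-day cache
    location ~* \.(css|js|png|jpe?g|gif|svg|webp|woff2?)$ {
        expires 30d;
        add_header Cache-Control "public";
    }

    # Serve Cache Enabler's pre-rendered HTML before PHP is ever invoked
    location / {
        try_files /wp-content/cache/cache-enabler/$http_host$uri/index.html
                  $uri $uri/ /index.php?$args;
    }

    # Dynamic HTML: the ~1h TTL is handled upstream (Cache Enabler / Cloudflare)
    location ~ \.php$ {
        fastcgi_pass php:9000;   # compose service name assumed
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
```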
Performance Breakdown
Layer | Served | Hit Rate
---|---|---
Cloudflare | 2,290 users | 45.8%
Cache Enabler | 2,070 users | 87.2%
PHP + Redis | 639 users | 99.93%
DB (MariaDB) | 1 query | n/a
Memory Allocation
- PHP-FPM + WP: ~180MB
- Redis: ~128MB
- MariaDB: ~80MB
- NGINX + Caddy: ~27MB
- System overhead: ~15MB
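These are approximate per-container figures; a similar snapshot can be taken with Docker's built-in stats command:

```sh
docker stats --no-stream --format "table {{.Name}}\t{{.MemUsage}}\t{{.CPUPerc}}"
```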
Why It Works
- 4-layer caching synergy
- PHP-FPM tuned to the RAM budget
- No wasted CPU cycles
- Sub-linear scaling thanks to a warm cache
- $6 of infrastructure instead of $200+ managed hosting
Repo & Setup
GitHub: wordpress-docker-performance

- Clone the Docker Compose setup
- Launch with `docker-compose up -d`
- Install the Cache Enabler and Redis Object Cache plugins
- Configure `wp-config.php` for Redis (a minimal sketch follows below)
- Done. Cache hit rate >99%
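For the wp-config.php step, the Redis Object Cache plugin reads its connection settings from a few documented constants. A minimal sketch, assuming the Redis container is reachable under the service name redis:

```php
/* wp-config.php additions */
define( 'WP_REDIS_HOST', 'redis' );   // compose service name (assumption)
define( 'WP_REDIS_PORT', 6379 );
define( 'WP_CACHE', true );           // lets Cache Enabler's advanced-cache.php drop-in load
```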
What do you think?
- Would you run WP this way in prod?
- Can we push this to 8000+ concurrent users?
Drop your thoughts in the comments and let's scale smarter!