Hasty Briefs

Serving 200M requests per day with a CGI-bin

10 months ago
  • #Benchmarking
  • #Web Development
  • #CGI
  • CGI programs were widely used in the early 2000s for dynamic websites, primarily written in Perl or C.
  • CGI works by spawning a new process per request, with request data passed via environment variables and stdin, and responses written to stdout.
  • CGI programs exit after handling each request, so all resources are freed automatically, which makes them reliable even when the code itself is poor.
  • Deployment was simple, involving copying the CGI program to the cgi-bin directory on the web server.
  • Early web servers had limited resources (1-2 CPUs, 1-4GB RAM), making them vulnerable to traffic spikes (e.g., Slashdot effect).
  • Modern hardware (e.g., 16+ CPU threads) can handle CGI efficiently, with benchmarks showing it can sustain 2400+ requests per second.
  • A guestbook CGI program was written in Go using SQLite, demonstrating simplicity and performance on modern systems.
  • Benchmark results showed CGI performing well under both Apache and a custom Go net/http server, sustaining high requests-per-second (RPS) throughput.
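The CGI contract described above (request metadata in environment variables, request body on stdin, response on stdout) can be sketched in a few lines of Go. This is a minimal illustration, not code from the article; the variable names and response body are invented:

```go
// Minimal sketch of the CGI request/response contract: the web server sets
// per-request environment variables such as REQUEST_METHOD and QUERY_STRING,
// and the program writes headers, a blank line, then the body to stdout.
package main

import (
	"fmt"
	"os"
)

// response builds a complete CGI response: headers, a blank line, then the body.
func response() string {
	method := os.Getenv("REQUEST_METHOD") // set by the web server per request
	query := os.Getenv("QUERY_STRING")
	return fmt.Sprintf("Content-Type: text/plain\r\n\r\nmethod=%s query=%s\n", method, query)
}

func main() {
	// The web server captures stdout and relays it to the client;
	// the process then exits, freeing all of its resources.
	os.Stdout.WriteString(response())
}
```

Because the process exits after every request, there is nothing to leak: file handles, memory, and database connections all go away with it, which is the reliability property noted above.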
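Go's standard library can also play the web-server side of this arrangement, in the spirit of the custom net/http server used in the benchmarks. The sketch below mounts a throwaway shell script as a CGI program via net/http/cgi, which spawns the program once per request just like a classic cgi-bin; the script contents and the /cgi-bin/hello route are illustrative assumptions, and a Unix-like system with /bin/sh is assumed:

```go
// Sketch: serving a cgi-bin directory entry from Go's net/http, using the
// standard library's net/http/cgi package (which forks the target program
// per request). The script and route here are invented for illustration.
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/cgi"
	"net/http/httptest"
	"os"
	"path/filepath"
)

// serveOnce wires a tiny shell CGI script into an HTTP server, performs one
// request against it, and returns the response body.
func serveOnce() (string, error) {
	dir, err := os.MkdirTemp("", "cgi-bin")
	if err != nil {
		return "", err
	}
	defer os.RemoveAll(dir)

	// A stand-in CGI program: headers, a blank line, then the body on stdout.
	script := "#!/bin/sh\necho 'Content-Type: text/plain'\necho\necho 'hello from cgi'\n"
	path := filepath.Join(dir, "hello")
	if err := os.WriteFile(path, []byte(script), 0o755); err != nil {
		return "", err
	}

	// cgi.Handler spawns a fresh process for every request it serves.
	mux := http.NewServeMux()
	mux.Handle("/cgi-bin/hello", &cgi.Handler{Path: path})

	srv := httptest.NewServer(mux)
	defer srv.Close()

	resp, err := http.Get(srv.URL + "/cgi-bin/hello")
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return string(body), err
}

func main() {
	body, err := serveOnce()
	if err != nil {
		panic(err)
	}
	fmt.Print(body)
}
```

The process-per-request cost that made CGI fragile on 1-2 CPU servers is exactly what many-core modern machines absorb easily, which is the point the benchmarks above make.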