I ditched OpenLiteSpeed and went back to good ol’ Nginx

Ish is on fire, yo. (credit: Tim Macpherson / Getty Images)

Since 2017, in what spare time I have (ha!), I help my colleague Eric Berger host his Houston-area weather forecasting site, Space City Weather. It’s an interesting hosting challenge: on a typical day, SCW does maybe 20,000–30,000 page views to 10,000–15,000 unique visitors, which is a relatively easy load to handle with minimal work. But when severe weather events happen, especially in the summer, when hurricanes lurk in the Gulf of Mexico, the site’s traffic can spike to more than a million page views in 12 hours. That level of traffic requires a bit more prep to handle.

Hey, it's <a href="https://spacecityweather.com">Space City Weather</a>! (credit: Lee Hutchinson)

For a very long time, I ran SCW on a backend stack made up of HAProxy for SSL termination, Varnish Cache for on-box caching, and Nginx for the actual web server application, all fronted by Cloudflare to absorb the majority of the load. (I wrote about this setup at length on Ars a few years ago for folks who want some more in-depth details.) This stack was fully battle-tested and capable of eating whatever traffic we threw at it, but it was also annoyingly complex, with multiple cache layers to deal with, and that complexity made troubleshooting issues more difficult than I’d have liked.
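To make the chain concrete, here’s a minimal sketch of how those three layers hand requests to one another. This is an illustrative assumption, not the site’s actual configuration: the ports, certificate path, and document root are all placeholders I’ve invented for the example.

```
# haproxy.cfg — terminate TLS on :443, forward plain HTTP to Varnish
# (hypothetical ports and paths; not the author's real config)
frontend https_in
    bind :443 ssl crt /etc/haproxy/certs/site.pem
    mode http
    default_backend varnish

backend varnish
    mode http
    server cache1 127.0.0.1:6081

# Varnish listens on :6081 and passes cache misses to Nginx on :8080,
# e.g. started as:  varnishd -a :6081 -b 127.0.0.1:8080

# nginx.conf (server block) — the actual web server, localhost-only
server {
    listen 127.0.0.1:8080;
    server_name example.com;
    root /var/www/site;
    index index.php index.html;
}
```

Each hop adds a layer you can tune independently, but also a layer whose cache state and logs you have to reason about separately when something breaks, which is exactly the complexity described above.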

So during some winter downtime two years ago, I took the opportunity to jettison some complexity and reduce the hosting stack down to a single monolithic web server application: OpenLiteSpeed.
