90% of the time I find staging and production environments to be totally different systems. But people still advertise them as 100% aligned. 🤬
It’s self-inflicted damage. The reason is typically “budget”: we assume staging environments are of lower value than production ones and thus assign them smaller budgets.
As a result, they run on sub-par infrastructure and don’t get enough time budgeted for fixes and improvements. Over time, they decay.
Another reason the two environments drift apart is the fear of “changing a running system”. We tend to make small manual fixes to one environment when specific issues arise.
Sometimes, due to differences in infrastructure or setup, the same fix isn’t needed in the other environment, so we skip it there. Over time the drift becomes a burden, and the “never touch a running system” fallacy kicks in again: better to keep the differences in mind and deploy differently to each environment.
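To make the drift visible rather than something you “keep in mind”, you can diff the two environments’ configuration mechanically. Here’s a minimal sketch in Python; the config keys and values are made up for illustration, not taken from any real system:

```python
# Hypothetical drift check: compare two environment configs and
# report every key whose value differs or exists in only one env.

def config_drift(staging: dict, production: dict) -> dict:
    """Return {key: (staging_value, production_value)} for all mismatches."""
    keys = staging.keys() | production.keys()
    return {
        k: (staging.get(k), production.get(k))
        for k in keys
        if staging.get(k) != production.get(k)
    }

# Illustrative configs -- in practice these would be loaded from
# whatever source of truth each environment actually uses.
staging = {"db_pool_size": 5, "cache": "redis", "tls": False}
production = {"db_pool_size": 50, "cache": "redis", "tls": True}

for key, (stg, prod) in sorted(config_drift(staging, production).items()):
    print(f"{key}: staging={stg!r} production={prod!r}")
```

Run as a scheduled check, something this simple turns silent drift into a visible, actionable report instead of tribal knowledge.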
So yes, your applications run in both environments, but the reason staging exists at all is defeated: you can’t draw any conclusions about the quality and stability of your applications from running them in staging, since production has different prerequisites.
That’s why we automate these things, and that’s why we’ve moved toward “continuous deployment” over the years. Doing things continuously is like training a muscle: you lose the fear of failing because you’re used to it.
And rigorously automating every layer, from infrastructure to application, gives you the confidence to keep your systems in sync and delivers the insights that made you set up multiple environments in the first place.
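The core idea of “keeping systems in sync” is that every environment goes through the identical deployment procedure, and only declared parameters differ. A minimal sketch, with entirely hypothetical hostnames and steps:

```python
# Hypothetical single deploy path: the procedure is the same for every
# environment; only the parameters in this table are allowed to differ.

ENVIRONMENTS = {
    "staging":    {"replicas": 1, "host": "staging.example.com"},
    "production": {"replicas": 3, "host": "www.example.com"},
}

def deploy(env: str, version: str) -> list[str]:
    """Return the ordered deployment steps for an environment.
    Both environments get the same steps, parameterized differently."""
    cfg = ENVIRONMENTS[env]
    return [
        f"provision infrastructure for {cfg['host']}",
        f"migrate database to {version}",
        f"rollout {version} to {cfg['replicas']} replica(s)",
        "smoke-test the deployment",
    ]

# The step sequence is structurally identical in both environments:
staging_steps = [s.split(" ", 1)[0] for s in deploy("staging", "v1.2")]
prod_steps = [s.split(" ", 1)[0] for s in deploy("production", "v1.2")]
assert staging_steps == prod_steps
```

The point of the design is that there is no environment-specific code path to manually patch: a fix lands in the shared procedure or in the parameter table, so it reaches both environments by construction.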
In tech, we tend to be super sloppy about these things. Imagine the pharma or food industry slacking like this with their testing systems, jeopardizing the quality of their products. Class-action suits incoming!
Having a proper quality-control setup is very important and, honestly, very cheap in software development. Don’t slack on it. And don’t lie about it!