I've always worked in environments where people tried to draw distinctions, and I don't think it ever ended well. There is no way around emergent behaviors in complicated software systems. The closer you are to the production environment, the better. Better yet, just ship everything to production as safely as possible: that could mean feature flags, fast rollback, immutable infrastructure, or whatever else.
A good example of something that always breaks in production is data access. In staging, all access patterns usually fit in memory, so everything is always fast. Then one day someone writes a query that goes past the memory limit, the index and working set no longer fit, you start going to disk, and everything catches fire. There is no way to catch this anywhere other than production.
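One cheap early warning (not a substitute for production-scale data, just a smoke test) is to inspect query plans before shipping. A minimal sketch using Python's built-in `sqlite3`, with a hypothetical `events` table standing in for a production access pattern:

```python
import sqlite3

# Hypothetical table standing in for a production access pattern.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, created_at TEXT, payload TEXT)")
conn.execute("CREATE INDEX idx_events_user ON events (user_id)")

def plan(query):
    """Return SQLite's query plan as a single string."""
    rows = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
    return " ".join(row[-1] for row in rows)

# Indexed lookup: stays fast regardless of table size.
print(plan("SELECT * FROM events WHERE user_id = 42"))

# Unindexed filter: a full scan that only hurts once the table outgrows memory.
print(plan("SELECT * FROM events WHERE payload LIKE '%x%'"))
```

A plan that says `SCAN` on a big table is exactly the kind of thing that looks fine in staging and catches fire in production.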
It depends a lot on the system.
If I'm working on a system that involves things like medical data, financial services, engineering, or other areas where a bug might put someone's life at risk, then yes, staging is a critical step. But if it's a simple infotainment app or the like, why add the overhead, as long as decent unit and integration testing is done?
I think having a staging server can greatly benefit you down the road, especially if the project is on the large side and involves a lot of data. I am working with a client on a Dynamics AX 365 implementation, and having a UAT (staging) environment has proved beneficial time and time again.
A few instances where having a staging server proved useful:

- Having a client (or even our own QAs) test out a new feature

This way we are more confident doing something in production, because we have already tested it on a near-production environment (the staging server).
Hi. It's interesting but I wonder about this:
Having a client (or even our own QAs) test out a new feature
Can't you use feature flags for that instead? How do you give customers access to staging, and isn't that risky? And how do you then manage keeping long-lived "beta features" in staging, i.e. how do you make other parallel changes without deploying the beta feature?
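For what it's worth, the per-user flag idea the question hints at can be tiny. A minimal sketch (all names here are hypothetical, not a real flag library):

```python
# Hypothetical allow-list of users who see the beta feature in production.
BETA_TESTERS = {"client-a", "qa-team"}

def flag_enabled(flag: str, user: str) -> bool:
    """Gate a beta feature to specific users, in production, no staging needed."""
    if flag == "new-checkout":
        return user in BETA_TESTERS
    return False

def checkout(user: str) -> str:
    # Both code paths ship together; the flag decides who sees which.
    if flag_enabled("new-checkout", user):
        return "new checkout flow"
    return "old checkout flow"

print(checkout("client-a"))      # beta tester gets the new path
print(checkout("random-user"))   # everyone else gets the old path
```

The long-lived-branch problem mostly disappears with this approach: both paths live on master, so parallel changes just merge as usual.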
As Ryan said, I believe having a staging server is useful for testing.
It is particularly useful when you are working with structure and data changes in the database, i.e. migrations.
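A migration runner, at its core, can be a few lines: keep a version table and apply only what's missing. A rough sketch using `sqlite3`, with hypothetical migrations (real projects would load these from files):

```python
import sqlite3

# Hypothetical ordered migrations; in a real project these live in files.
MIGRATIONS = [
    "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)",
    "ALTER TABLE users ADD COLUMN email TEXT",
]

def migrate(conn):
    """Apply any migrations the database hasn't seen yet, in order."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    current = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()[0] or 0
    for version, stmt in enumerate(MIGRATIONS, start=1):
        if version > current:
            conn.execute(stmt)
            conn.execute("INSERT INTO schema_version VALUES (?)", (version,))
    conn.commit()

conn = sqlite3.connect(":memory:")
migrate(conn)
migrate(conn)  # idempotent: re-running applies nothing new
```

Running exactly this against staging first is what catches the migration that works on an empty dev database but locks up on production-sized data.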
Also, I believe that even if you have unit and integration tests the end user or users acquainted with the application should give it an eye before it is deployed to production servers.
I've worked at a relatively big SaaS company (hundreds of production deployments a day) without a staging environment: PR -> master -> CI -> live production. We relied heavily on feature flags and it worked well. Our outage numbers were not significantly higher than the industry standard. We did not lose data, and when we did, we made sure we had ways to recover. We would think about the worst case that could happen and prepare for it, rather than ignore it.
We learned how to work in such an environment: being risk-cautious, limiting blast radius with flags or blue/green-style deployments.
What it gave us, though, was a much faster development cycle, and feedback from the actual live system as early as possible. Even staging won't give you 100% safety, as certain bugs are only discovered in production.
Currently, I have a development, a pilot, and a production environment for my company's ERP. I believe having an environment in the middle where you can test a deployment is important. This can save headaches in case a deployment to production fails. This is an added benefit, however, it's another environment to maintain.
Usually we 'refresh' our development and pilot environments monthly with production data, so every environment is at an equal level.
I've always believed that a staging environment that is architecturally the same as production is beneficial. It certainly may be smaller in scale, but it should contain all of the separate components that production has.
Your production environment is more than just your code. Permissions, networks, storage, firewalls, deployment processes, logging, data, caching, etc. make up an operational environment. As a full-stack developer, those things need to be considered. If you're not concerned with those things, surely someone else in your company is.
Staging provides the place to test the entire concert together, rather than the individual instruments.
I think there is a cost to not having a staging area. That said, if you have the infrastructure to handle production issues well, all the more power to you. I certainly work very hard to understand issues and identify them early, and I find little value in testing in staging environments, though there are still exceptions.
In most scenarios you will see multiple environments; some shops have more, some have fewer. But it mostly consists of:
Dev > QA > Staging > Prod
Your staging environment should be a near replica of production, so if you have an issue in staging you will most definitely have an issue in production.
Now, some companies skip staging; it all really depends on what technologies you work with, IMO.
At work we also have staging servers for testing. Usually I write some functionality in a local development stage, then push the changes to the staging server so product managers can test and play. After the product managers agree, I deploy to production.
For me, yes. Our challenge is how to sync environments, especially test data. I guess that's where Docker comes in, with shared, immutable environments. I haven't tried this approach though.
Immutable infrastructure breeds environmental parity. When you are able to deploy any desired version of the infrastructure at any moment, parity becomes as easy as picking the version you want to be at.
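In code terms (a toy sketch, nothing real): if every release is an immutable, versioned bundle, then two environments on the same version are identical by construction.

```python
# Hypothetical release catalog: each version is a frozen, complete bundle.
RELEASES = {
    "1.0.0": {"image": "app:1.0.0", "nginx": "1.25", "schema": 12},
    "1.1.0": {"image": "app:1.1.0", "nginx": "1.25", "schema": 13},
}

def deploy(version: str) -> dict:
    """Never mutate a running environment; replace it wholesale from the catalog."""
    return {"version": version, **RELEASES[version]}

staging = deploy("1.1.0")
production = deploy("1.1.0")

# Parity is just "pick the same version": no drift to chase down.
assert staging == production
```

The point is that drift between staging and production can't accumulate, because nothing in a running environment is ever edited in place.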
Really good question. Personally, I hate staging servers. It always seems to me you spend more time moving data to the staging server and migrating the staging code to production.