People claim that it’s dangerous to generate and run SQL in production, and that we should execute SQL in production carefully.
Wait! Why is it important to run SQL carefully? More importantly, how do we run SQL carefully? It’s not about typing SELECT slowly, is it?
Whenever we change a production system, we should do it carefully. Whether we deploy new code, upgrade the operating system, or run some SQL, we must not break a system that’s already up and running. Users don’t like downtime, and if users can’t access our page, that hurts the business. So that’s why.
This carefulness falls into the following three categories:
- We should make sure that nothing goes wrong during the change.
- We should make sure that when something goes wrong, it has a minimal impact.
- When something does go wrong, we should have a plan for how to fix it.
So we should test our change before deploying it. In the case of SQL, we should have an identical database structure that we can play with. It’s even better to have a dataset realistically similar to that of the production database. Then we can run our scripts on that.
These kinds of databases usually need a separate environment, and companies usually try to spare those resources. So we might have only a handful of multi-purpose test databases. We’d better not mess them up, not even during testing. How can we do this? You can see my tricks in the following sections.
Before running a DELETE or an UPDATE, we should run a SELECT with the same WHERE clause. That way we know we’re targeting the desired dataset.
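For instance, the pattern looks like this (the customers table and its columns are made up for illustration):

```sql
-- First, check what the WHERE clause actually matches:
SELECT *
FROM customers
WHERE status = 'inactive' AND last_login < '2015-01-01';

-- Only when the result set looks right, run the change
-- with the exact same WHERE clause:
DELETE
FROM customers
WHERE status = 'inactive' AND last_login < '2015-01-01';
```

Copy-pasting the WHERE clause, rather than retyping it, also protects us from typos between the two statements.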
We can run our stuff in a transaction. In Oracle SQL Developer we can simply turn off auto commit. So if we’re happy with our results, then we can push the commit button. If we aren’t then we push the rollback button.
Microsoft’s SQL Server Management Studio doesn’t provide this functionality, but we can simulate it like this:
```sql
BEGIN TRANSACTION
-- do stuff
-- run SELECTs to make sure we like the result
ROLLBACK TRANSACTION
```
Once we have the desired result, we can run it with a COMMIT instead of a ROLLBACK.
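Putting it together, the final run might look something like this (the orders table and the UPDATE are made-up examples):

```sql
BEGIN TRANSACTION

-- The actual change:
UPDATE orders SET status = 'archived' WHERE created_at < '2014-01-01';

-- Check that exactly the intended rows changed:
SELECT COUNT(*) FROM orders WHERE status = 'archived';

-- We're happy with the result, so this time we commit
-- instead of rolling back:
COMMIT TRANSACTION
```

Until the COMMIT runs, other sessions aren’t affected by the change, so we can inspect the result at leisure. Keep in mind, though, that an open transaction holds locks, so we shouldn’t leave it sitting for long on a shared database.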
It’s good to have a backup of the test database. This can be a snapshot of a virtual machine, or a snapshot of the database files. So if everything else fails, you can always go back to the previous state.
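On SQL Server, for example, a quick safety net before a risky change could look like this (the database name and file path are just examples):

```sql
-- Take a full backup before the risky change:
BACKUP DATABASE TestDb
TO DISK = 'D:\backups\TestDb_before_change.bak';

-- If everything else fails, restore the previous state:
RESTORE DATABASE TestDb
FROM DISK = 'D:\backups\TestDb_before_change.bak'
WITH REPLACE;
```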
There are several strategies to do risky stuff with minimal impact.
We can implement small changes, so the impact of any single change stays small. Usually this means Continuous Integration, or maybe Continuous Delivery. Lots of automated tests make it difficult to do stupid things in production. I don’t have much experience with those. As far as I’ve heard, it requires a lot of concentrated effort from management and the technical teams to achieve Continuous Delivery. They say the problem itself is hard and there is no easy way around it. If you want it, then the whole company must figure out a way to make it work for their business.
The bank card business is regulated by the PCI security standard, so those companies must comply with a strict change management policy. They roll out changes in the off-hours; this is how they keep the impact minimal.
This plan is usually about rollbacks and backups. Whether it’s a code change or a database change, there must be a way to roll it back.
This means that before we go live with a change, we should test the rollback too.
Backups are problematic because online traffic doesn’t stop for the couple of hours it takes to restore them. Usually we should avoid having to restore a backup on a live system.
- Try not to do ad-hoc changes on production databases.
- Test the WHERE clause before changing the data.
- Test the SQL commands with transactions.
- Test the changes on test systems first.
- Test the rollback too.
- Have a backup.
What are your tricks?
Originally appeared at tamasrev.