Cover photo by Dario Morandotti on Unsplash
An excellent article by Charity Majors!
An enormous number of valid points - go read it immediately ... then drop back here if you want to hear a couple of things I have to say.
She advises that Friday deploy freezes are a smell, and that companies that have them should move some people...
...onto their CI/CD process and observability tooling for as long as it takes to ✨fix that✨.
I'm afraid a problem has been overlooked which makes "✨fix that✨" a somewhat scary proposition:
What if your development process is broken?
You can improve your CI/CD pipelines and observability tooling as much as you want. It's not going to help.
You can yell at developers to reduce the size of their changesets. Beyond a certain point, they won't be able to.
There is a prerequisite to frequent deploys & small changesets.
When Charity says:
Fix. That. Take some cycles off product and fix your fucking deploy pipeline.
I nod, sigh and wish it were that simple.
In the all-too-regular case of a broken development process the fucking deploy pipeline is the least of your problems.
It has a much smaller scope & complexity than the codebase - just compare the pipeline's line count with the codebase's. (I assume you have your CI pipeline 100% defined in source files. No? Go do that first.)
Also, you can't fix this problem with "some cycles", and especially not "off your product". Fixing that requires a mindset change. Doing things differently. All. The. Time. Forever.
OK, stop whining and tell us how to fix it!
As many have said before me, build code quality in.
Easier said than done, but here's a list of practices to get you started:
Impact mapping, story mapping, story slicing
Understand the customer's vision by - wait for it - talking to them.
Transform it into a set of coherent goals.
Slice the goals/features into tiny chunks (a.k.a. stories) with each chunk having the smallest possible but still visible business impact.
Sort in order of business value (not always knowable but take your best guess and see what happens).
Don't transform the whole vision at once, nor slice all goals. Start from the most important parts, and stop when there are enough stories for a week or two. When those are in production (or a demo environment), continue slicing.
Ensure constant (daily, hourly) communication between customers, testers & developers. Build the shared understanding of the requirements into executable specifications (tests), and let the act of making the specifications executable drive the questions you ask.
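To make that concrete, here's a minimal sketch of what an executable specification can look like. The discount rule and the function name are made up for illustration - yours will come out of the actual conversation with the customer:

```python
# Executable specification for a hypothetical requirement that came out of a
# customer conversation: "orders over 100 EUR get a 10% discount".
# Amounts are in cents (integers) to keep the money arithmetic exact.

def discounted_total(total_cents: int) -> int:
    """Apply the agreed (hypothetical) discount rule."""
    if total_cents > 100_00:
        return total_cents * 9 // 10
    return total_cents

def test_orders_over_100_get_10_percent_off():
    assert discounted_total(200_00) == 180_00

def test_orders_at_or_below_100_pay_full_price():
    assert discounted_total(100_00) == 100_00

if __name__ == "__main__":
    test_orders_over_100_get_10_percent_off()
    test_orders_at_or_below_100_pay_full_price()
    print("specifications hold")
```

The test names read like the requirement itself - that's the point: the agreement with the customer is written down in a form a machine can re-check on every change.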
Pay attention to architecture! Test-first doesn't guarantee your architecture is good, it just guarantees bad architecture will make your tests complex (a valuable canary in the mine).
Understand the difference between domain code & code facing 3rd party libs & frameworks. Push them apart.
Learn to apply SOLID and the four rules of simple design. Apply them everywhere, every minute (in every refactoring, see below).
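Here's a minimal sketch of pushing domain code and 3rd-party-facing code apart. All the names (`OrderStore`, `place_order`, and so on) are hypothetical, invented for this example:

```python
# Separating domain logic from infrastructure: the domain defines a small
# interface (a "port"), and adapters implement it against real libraries.
from typing import Protocol

class OrderStore(Protocol):
    """Port: the domain's view of persistence. No framework types leak in."""
    def save(self, order_id: str, total: float) -> None: ...

def place_order(store: OrderStore, order_id: str, total: float) -> float:
    """Domain logic: depends only on the port, so it's trivial to test."""
    if total <= 0:
        raise ValueError("total must be positive")
    store.save(order_id, total)
    return total

class InMemoryStore:
    """Adapter used in tests; a real adapter would wrap your DB library."""
    def __init__(self) -> None:
        self.saved: dict[str, float] = {}

    def save(self, order_id: str, total: float) -> None:
        self.saved[order_id] = total
```

The domain logic gets exercised with the in-memory fake; only a thin adapter ever touches the real database library, so framework churn stays out of your business rules.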
Test-drive the crap out of chunks of logic in your programs. It always does more than you see at first glance.
Thorns before the gold: First test the crappy situations, leave the happy path for last.
Don't forget to refactor, refactor, refactor. Clean up the code you just added/modified, and then clean up a bit around it.
Start using a type system. Types are just a bunch of tests you write more quickly.
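To show "thorns before the gold" in practice, here's a sketch with a made-up `parse_age` function. The crappy situations get pinned down first; the happy path comes last:

```python
# "Thorns before the gold": drive the code with the bad inputs first,
# save the happy path for last. parse_age is a hypothetical example.

def parse_age(raw: str) -> int:
    text = raw.strip()
    if not text.isdigit():
        raise ValueError(f"not a number: {raw!r}")
    age = int(text)
    if age > 150:
        raise ValueError(f"implausible age: {age}")
    return age

# Thorn 1: garbage input must be rejected.
try:
    parse_age("abc")
    assert False, "expected ValueError"
except ValueError:
    pass

# Thorn 2: implausible values must be rejected too.
try:
    parse_age("999")
    assert False, "expected ValueError"
except ValueError:
    pass

# The gold, last: a well-formed input parses cleanly.
assert parse_age(" 42 ") == 42
```

Writing the thorns first forces the error-handling design early, instead of bolting it on after the happy path has hardened the interface.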
If you can't collaborate in any other way, at least review code regularly. Rotate reviewers all the time. Everyone's code should be reviewed and everyone should review code.
If you can, add pair programming & mob programming to the mix.
Pair programming is not two people doing the same job. It's two people doing the same job 4 times faster. (Why? Because all decisions the developers make are drastically improved. And developers make decisions all the time.)
About debugging & observability tools
In short, to me they look like manual processes which should be as automated as possible.
What do you do when you debug? You set a breakpoint and watch the state of the code when execution arrives at it. The breakpoint should be a function call. The state should be the result value. Refactor the function into existence and write a test for it. You shouldn't need an intimate knowledge of the debugger: use it, but you shouldn't have to use it often.
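Here's the "refactor the function into existence" move as a sketch, with hypothetical invoice names: the intermediate state you'd watch in the debugger becomes the return value of a named function, pinned down by a test:

```python
# Before: you'd set a breakpoint inside invoice_total to watch the running
# sum. After: the watched state IS the return value of a named function.

def subtotal(line_items: list[tuple[int, float]]) -> float:
    """The value you used to inspect in the debugger, now a named result."""
    return sum(qty * price for qty, price in line_items)

def invoice_total(line_items: list[tuple[int, float]], tax_rate: float) -> float:
    return subtotal(line_items) * (1 + tax_rate)

# The test replaces the debugging session - and keeps replacing it forever.
assert subtotal([(2, 10.0), (1, 5.0)]) == 25.0
assert invoice_total([(2, 10.0), (1, 5.0)], 0.25) == 31.25
```

One debugging session answers the question once; the extracted function plus its test answers it on every future change, for free.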
Everything you observe using observability tools can be observed by a machine. (Actually, a machine observes it first, then delivers it to you in a pretty format so you can make sense of it.) Write smoke, performance, security and other types of tests. That way you clarify the requirements (functional and otherwise) and write them down in an executable form, so checking whether they still hold means pressing a button and reading the results. As with debugging, I'm all for observability - but automate what you can.
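For instance, a manual observation like "does this still respond fast enough?" can become an automated check. The handler and the time budget below are illustrative assumptions, not numbers from any real system:

```python
# Turning a dashboard eyeball-check into automated smoke & performance tests.
# handle_request and the 50 ms budget are made-up stand-ins.
import time

def handle_request(payload: dict) -> dict:
    """Stand-in for whatever unit of work you keep watching in dashboards."""
    return {"echo": payload, "status": "ok"}

def test_smoke_handler_responds() -> None:
    assert handle_request({"ping": 1})["status"] == "ok"

def test_performance_budget() -> None:
    start = time.perf_counter()
    for _ in range(1_000):
        handle_request({"ping": 1})
    elapsed = time.perf_counter() - start
    assert elapsed < 0.05, f"budget blown: {elapsed:.3f}s"

if __name__ == "__main__":
    test_smoke_handler_responds()
    test_performance_budget()
    print("smoke and performance checks pass")
```

The requirement ("responds correctly, within budget") is now written down and re-checkable at the press of a button, instead of living in someone's memory of what the dashboard usually looks like.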
If it's truly a problem of a broken CI pipeline, you're in luck. Go read Charity's article and do everything she advises.
However, from my experience in IT, you usually have a bigger problem than that. Start applying the above practices to your development workflow. It's hard but worth it.