After many years of delivering software as a developer and a tech lead, it feels like I should be able to lay claim to some knowledge about how software can be delivered successfully. This post covers some of the key actions I think you need to take in order to deliver software successfully.
This seems obvious, but I suspect a lot of projects don't have satisfactory answers to these questions at the start. Before you start, you need to know what you're building (a phone app? a desktop app? a web app? a completely automated background service? a library?) and understand some of its main features.
In order to answer that question you need to know who will use it and why they need it. While you're working, these answers will empower you to make good decisions about what is most important. This is vital because (spoiler alert!) things are going to go wrong and you are going to be shedding, delaying or rethinking some of the work.
This is important for two reasons:
- How you will put your work live will affect how you do that work. Heroku apps or AWS services are fundamentally different from simple internal Java apps, desktop apps or phone apps, both in terms of how you deploy them and how they work.
- Deciding these things (and even putting them in place) early will solidify in the minds of your team the reality of what you're doing. Your team will start to think about what this thing will look like when it's live, how you will do updates, how it will be used and so on.
It's become a lot easier to argue for this kind of thought process in recent years, because of Continuous Deployment practices and PaaS offerings that make such decisions easier. However, I've still seen developers who are months into a development and have only ever run their app inside their IDE. Don't be that developer. Working that way will leave significant unknowns in your project until it's far too late.
I won't say too much about how to do this other than, once again, to mention Dan North's blink estimation, which I've mentioned in other posts.
However you do it, you need to confirm amongst the team that you think your goals are realistic. This needs to happen with the entire team because everyone needs to buy into it.
If you exclude anyone in the team from this process then you run the risk of someone in the team being presented with a plan they don't believe in. Do not underestimate the damage this will do to team morale.
If you exclude the client from at least part of this process you risk providing them with a headline target date and not enough detail. If they don't understand how deadlines were arrived at they may well panic and become defensive when something changes.
Once you've decided all of this, you're ready to build stuff...
This is hardly controversial as it is one of the core principles of Agile software development. It's worth stating though. I like to try to go end-to-end with a development, putting something "complete" live even if it doesn't do anything yet and then iterating to flesh it out. To give some examples of types of development and initial end-to-end goals:
- Simple web app with persistence - a Hello World app comprising a simple web client, a server-side app and a running database, all connected together and deployed to a test environment in a manner as similar as possible to your eventual deployment (which means no hacking DNS or turning off the firewall)
- Microservices app - at least two microservices, talking to one another over your preferred medium, including service discovery or an alternative. If it's UI driven then maybe a Hello World UI too.
- Mobile app - a properly named, versioned, deployed app on at least one device, using an app store, showing Hello World, including a server-side component somewhere if necessary
- Library - a named, versioned lib deployed to a CDN or Artifactory somewhere, with one method in it that prints "Hello World"
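To make the first of these concrete, here is a minimal sketch of a "complete but does nothing yet" web app with persistence, using only the Python standard library. The table name, greeting and port are all illustrative, not a recommendation:

```python
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

def init_db(path=":memory:"):
    """The simplest possible persistence layer: one table, one row."""
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE IF NOT EXISTS greetings (text TEXT)")
    conn.execute("INSERT INTO greetings VALUES ('Hello World')")
    conn.commit()
    return conn

def greeting(conn):
    """Fetch the greeting from the database -- the entire 'business logic'."""
    return conn.execute("SELECT text FROM greetings").fetchone()[0]

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Every request exercises the full client -> server -> database path.
        body = greeting(init_db()).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body)

def serve(port=8000):
    """Bind on all interfaces so a real test environment can reach it --
    no DNS hacks, no firewall holes."""
    HTTPServer(("", port), HelloHandler).serve_forever()
```

The point is not the code, which is trivial, but that deploying even this much to a real environment proves the whole path from client to database before any real features exist.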
After that initial step, I like to think of the overarching product as a strategic goal and develop useful stepping stones from here to there. Each stepping stone should be its own marketable product or set of features. This way we can deliver each stepping stone and measure how successful it is on our way to the strategic goal. For example, an automated image processing app might progress as follows:
- An image viewer, with loading and displaying capabilities
- A basic editing tool, with contrast/brightness or some touching up tools
- A whizz-bang processing algorithm to turn pics into oil paintings or something (an early USP)
- An image catalogue, with storage and tagging capabilities
- A social space, to allow users to upload, store, edit and share their images
- Some more whizz-bang algorithms
It may be that we think the real money is going to come from the social space itself and the number of users we will get, but that doesn't mean that we don't release the earlier stages, particularly once we have a whizz-bang algorithm.
Let's take a moment to think where you will be if you follow all of the advice above. Think about all of the risks and issues that have been identified and overcome by taking these actions. We know what we're building and we have a concrete artifact we can point to; we know who we're building it for and how we're going to get it to them; and finally, we've considered how long we think it will take to get it to them.
From this point on, unless something changes, we're fleshing out paths that have already been proven.
The number of things that can change in the middle of your project is mind-boggling. Here's a list of things I've seen change in the middle of building software:
- A team member left - this disrupted the team and meant that timelines changed. We had to rethink what we could still achieve in the time
- A feature was more complicated than we thought - this required a reconsideration of what we were trying to achieve and meant we found an easier interim solution
- Someone else released a similar product - we needed to think about whether our product could compete and replanned based on the existence of the other product
- Money dried up - because we were following an agile approach we were capable of just stopping where we were, but we decided to plan out a few additional months of the most important work. Planning changed with the knowledge that there might not be any more active development for a while
- The domain changed - more than once, the world threw us a curve ball: someone created a new category of stuff we had to deal with, a new set of users appeared (using iPads!), users became fixated on a particular feature, Brexit happened. Depending on the scale of the change we did everything from adding new features to the plan to stopping entirely and starting a fresh plan
- The technology changed - someone began deprecating an important library or framework so suddenly we had to plan to get off that technology.
Two things will help massively in this space:
1. You're much better off if you can identify these things early (although, as the song says, some of your real troubles will still blindside you on some idle Tuesday), so set up mechanisms to notice as many of them as you can.
2. When something does happen, tell everyone, and confirm with everyone that everything you think you know about what you're doing and why you're doing it still holds true.
Delivering software is as much about buy in and communication with the client as it is about hitting deadlines and delivering a good product. You can, if you're not careful, do everything you said you would do, down to the letter, and still end up with an unhappy client if you aren't communicating well with them.
On my projects, I think about two different sorts of communication. The first sort is inside the team. It's often messy and the plan changes a lot. We might decide something on Monday but dump it on Wednesday when someone uncovers a new fact. I actively promote that openness to change because it is the thing that will make us most likely to head in the right direction. I'd rather admit my mistake yesterday and do well today than plough on for the sake of my own pride.
The second sort of communication is when we communicate with those outside the project, which often includes the client. It also includes the eventual users. We make an effort to communicate clearly by taking our time before making a statement, so as to minimize confusion and ease decision making on the part of the client.
For example, we might be delayed and decide internally on Monday that we can't deliver X as agreed, but we don't tell the client yet. By Wednesday, we have a plan that delivers something similar to X, or a plan that delivers Y early as an alternative to X. We then present the issue and the options to the client. Once everyone has agreed on a way forward, we communicate with the users.
There is a caveat here. We're delaying in order to make communication clearer and decisions simpler, but we can't delay too long. A good rule of thumb is to avoid surprising the client.
One other thing: you won't catch up. I've seen this too many times to count. You get delayed by something and, without communicating outside the team, you tell one another that you will catch up. You won't. I've never seen it. Accept the delay and assume you won't catch up, then respond accordingly.
Write unit tests. This really isn't something that should be a big step for anyone working in software these days, but it's worth mentioning.
I don't necessarily use TDD all of the time because I feel like sometimes it's nice to be able to do my thinking in code and TDD means I have a lot of tests to fix when I decide to change my architecture. However, when I'm not TDDing, I would always go back and exercise all of my execution paths using unit tests. This has the effect of making assumptions about what something does explicit in the code so that, when something changes later on, some tests will fail and the developer will be forced to consider how the contract has changed and address any knock-on effects. Recently I've been forced to work on a legacy code base with no unit tests and it was stressful making changes without that safety net.
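As a sketch of what "exercising all of my execution paths" looks like, here is a hypothetical business rule with one test per path. The rule itself (a loyalty discount) is invented for illustration; the point is that each branch of the contract is pinned down explicitly:

```python
import unittest

def discount(price, loyalty_years):
    """Hypothetical business rule: 5% off per full year of loyalty, capped at 25%."""
    if price < 0:
        raise ValueError("price must be non-negative")
    rate = min(loyalty_years * 0.05, 0.25)
    return round(price * (1 - rate), 2)

class DiscountTest(unittest.TestCase):
    # One test per execution path, so a later change to the contract
    # fails loudly instead of silently.
    def test_no_loyalty_pays_full_price(self):
        self.assertEqual(discount(100.0, 0), 100.0)

    def test_discount_grows_with_loyalty(self):
        self.assertEqual(discount(100.0, 2), 90.0)

    def test_discount_is_capped(self):
        self.assertEqual(discount(100.0, 10), 75.0)

    def test_negative_price_is_rejected(self):
        with self.assertRaises(ValueError):
            discount(-1.0, 1)
```

When someone later changes the cap or the per-year rate, these tests fail and force a conscious decision about the knock-on effects, which is exactly the safety net I missed on that legacy code base.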
Setting up CI environments is similarly vital. They give you instant feedback about whether your code change works for the dumbest developer in your team. It's worth setting up CI for more complicated scenarios like larger integrations too, since these can be difficult to test and there is a great benefit in having these happen automatically. Similarly with benchmarking and performance tests.
Ultimately, you need your technology to help you to identify issues sooner rather than later, and to make sure that you are where you think you are.
Of course, software tests are not good enough on their own.
All of the above runs much more smoothly if you have some tame users to perform exploratory testing for you, so it's worth going out of your way to find some. They will become the people you're delivering to, the people who give you feedback and find your bugs.
Testers are not the same. You want testers, sure, and you'll want them to put 3000 characters into your text inputs and all of that, but tame users will do a very different job for you - using your software "for real" or as close to it as you can manage. If you can persuade them, you should even get your tame users into your requirements and design sessions.
One note of caution here is that you should think hard about how many tame users you have and how many eventual users your software will have. If there is a huge disparity (and in particular if there is only one tame user!) be careful not to build software that only your tame users will like.
The next three items are less significant but I think they're valuable things I've learned...
Sometimes an essential feature or property of your system can't be started early on in development and has uncertainties attached to it. Perhaps it's an image processing algorithm that absolutely must work within a set memory limit or run inside a time limit. Perhaps we're using a hardware component in devices, like NFC, the microphone or the compass, that we've never used before. Things like this, that could break your development if they don't work out, should be thoroughly spiked early.
However, if there's a decision that needs to be made later for which there are a number of options, I wouldn't be inclined to spike it until much later. Chances are that one of the options will work, or that later on one of the options will be a clear favourite based on work that will have taken place by that point.
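An early spike of the risky kind described above can be as small as a harness that runs a candidate implementation and reports whether it fits its non-functional budgets. This is a sketch, not a real profiling tool; the budgets and the stand-in algorithm are invented for illustration:

```python
import time
import tracemalloc

def spike(fn, *args, time_budget_s=2.0, memory_budget_mb=256):
    """Run a candidate implementation once and report whether it fits the
    time and memory budgets that could otherwise sink the project later."""
    tracemalloc.start()
    start = time.perf_counter()
    fn(*args)
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    peak_mb = peak / (1024 * 1024)
    return {
        "elapsed_s": elapsed,
        "peak_mb": peak_mb,
        "within_budget": elapsed <= time_budget_s and peak_mb <= memory_budget_mb,
    }

# A stand-in for the risky step, e.g. an image-processing pass.
def candidate_algorithm(pixels):
    return [min(255, p * 2) for p in pixels]

# report["within_budget"] tells you straight away whether the approach is viable.
report = spike(candidate_algorithm, list(range(100_000)))
```

Throwaway numbers from a harness like this are exactly what you want out of a spike: a cheap yes/no on the thing that could break the development, long before it is built for real.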
This is pretty simple. Never tell yourself that the thing you're building is a prototype because it encourages you and your team to think of it as less permanent, and therefore less strictly controlled, than it is. The world of software is packed with things that started life as prototypes. If you're writing code, it's going into live until you decide that it isn't and throw it away. That means CI, tests, READMEs and deployment strategies.
If your plan has a step in it that says something like "and then we migrate all of the data into the new system", that day is your biggest project risk. No matter how well it's going up to that point, your whole project hinges on that migration. If it's bad, and you may not know until weeks or months later that it was bad, you can find yourself in a quagmire. Test migrations are all well and good but you won't know it's been a success until all of your users are happily using the new system in the live environment.
A better approach is to find a way around that migration. Maybe you can double save the data in the backend, saving it to the old system and the new one simultaneously ahead of retiring the old system, with a long-running copy that populates the new system from the old. Maybe you can migrate people to the new system one or two features at a time. It may take longer and it may require a little scaffolding here and there, but it removes the risk of that big bang moment and that will pay off in the long run.
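The double-save idea can be sketched as a thin wrapper over the two stores. The class name, the dict-like store interface and the method names are all assumptions made for illustration, not a real API:

```python
class DualWriteStore:
    """Illustrative dual-write wrapper: every save goes to both systems,
    reads prefer the new system and fall back to the old one."""

    def __init__(self, old_store, new_store):
        self.old = old_store
        self.new = new_store

    def save(self, key, value):
        # The old system stays the source of truth until cutover.
        self.old[key] = value
        self.new[key] = value

    def load(self, key):
        if key in self.new:
            return self.new[key]
        # Fall back to the old system, and backfill opportunistically.
        value = self.old[key]
        self.new[key] = value
        return value

    def backfill(self):
        # The long-running copy: populate the new system from the old,
        # without overwriting anything already written to the new one.
        for key, value in self.old.items():
            self.new.setdefault(key, value)
```

Once `backfill` has caught up and reads are all served from the new system, retiring the old one is a non-event rather than a big bang.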
Finally, a word on how it should feel to run software development. First, I haven't had a really bad day in a couple of years now. When something trashes all of my plans, my team are open to it. When it turns out that we have broken something, our CI informs us sooner rather than later. On those rare occasions when something unexpected makes its way into live, we have release processes that allow us to fix issues quickly. By the time we are close to the delivery date for a new feature, nine times out of ten the code has been in live for a while, but hidden somewhere.
A manager once described what I did in development as "a little rudder, a long way from the rocks" and I have since read that same phrase in the excellent book, Turn the Ship Around. That isn't to say I don't find myself, like all Tech Leads, surrounded by sirens and smoke once in a while, but I'm generally aiming to see the problems as early as possible and guide us around them.