I remember learning about the scientific method many years ago. Watching the Neil deGrasse Tyson Masterclass, I started thinking about how the scientific method applies to delivering and supporting software. One quote jumped out at me: "The most important moments of your life are not decided by what you know, but how you think." It's not about what we know about delivering and deploying software, but how we think about the processes we use to do so.
As humans, we are constantly faced with problems. We build software to solve problems. The features we create sometimes have problems when we deploy. We encounter an obstacle and need to figure out how to overcome it. We don't necessarily know how to solve the problem at the outset, but how we think about the problem and the solution will impact whether we are successful or not.
Imagine you're trying to compile newly written code and encounter an error. You don't immediately know what is wrong; you need to investigate. How do you approach the problem?
Option 1: Delete all the code and rewrite it?
Option 2: Examine the error message, debug, review the code, consult others for help?
I'm hoping most of you picked option 2. When we encounter a new problem, we take a step back, think about the problem, and if we can't solve it on our own, we ask for help.
Whether you know it or not, you are likely using portions of the scientific method when solving problems. Science is about breaking things down into smaller pieces to gain a better understanding. The scientific method is a defined set of steps to follow before, during, and after an experiment.
- Isolate a particular process
- Form a hypothesis
- Create an experiment
- Produce repeatable results
- Share your knowledge with others
We'll dig into each of these with a semi-fictional example. At LaunchDarkly, our main dashboard page originally listed every flag created in the system. As our customers created more flags, the main page took longer and longer to load. We isolated the performance issue to the large number of flags being loaded. Problem identified, process isolated.
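Isolating a process like this usually means measuring it in isolation. A minimal sketch of that idea, with `render_flag_list` as a hypothetical stand-in for the real dashboard rendering code:

```python
# Hypothetical sketch: isolate page load time as a function of flag count.
# render_flag_list is a made-up stand-in for the real rendering work.
import time

def render_flag_list(flags):
    # Placeholder for the real work: serialize every flag for the page.
    return [f"<li>{name}</li>" for name in flags]

def measure_load(flag_count):
    """Time a single render of flag_count flags, in seconds."""
    flags = [f"flag-{i}" for i in range(flag_count)]
    start = time.perf_counter()
    render_flag_list(flags)
    return time.perf_counter() - start

# Compare load time across flag counts to confirm the correlation.
for count in (10, 100, 1000, 10000):
    print(f"{count:>6} flags: {measure_load(count):.6f}s")
```

If load time grows with flag count while everything else stays fixed, you've isolated the process worth experimenting on.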
Now to create a hypothesis. A hypothesis is a testable statement that an experiment can confirm or refute. A hypothesis isn't a question; it's a prediction or an explanation of behavior. Hypotheses generally start with research questions, such as:
- Will adding pagination increase page load time for users with a small number of flags?
- Will pagination improve performance for users with a large number of flags?
These are good questions, but they aren't hypotheses. A good hypothesis should be specific and seek to answer a single question. This requires concrete goals that can be measured. We can convert these two questions to hypotheses:
As a result of adding pagination, customers with fewer than 100 flags will see no significant difference in page load times.
As a result of adding pagination, customers with more than 100 flags will see a significant improvement in page load times.
These hypotheses are okay, but they aren't concrete: how do we know what "significant" means? A slight change gives us better versions.
- As a result of adding pagination, customers with fewer than 100 flags will not see page load times increase by more than 5%.
- As a result of adding pagination, customers with more than 100 flags will see page load times decrease by at least 10%.
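Because the hypotheses are now measurable, checking them becomes a simple calculation. A sketch of that check, with made-up before/after median load times purely for illustration:

```python
# Hypothetical check of the two hypotheses. The 5% and 10% thresholds come
# from the hypotheses above; the load-time numbers are invented examples.

def percent_change(before, after):
    """Signed percent change from before to after (negative = faster)."""
    return (after - before) / before * 100

# (before_ms, after_ms) median page load times per segment -- illustrative.
small_segment = (420.0, 430.0)    # customers with fewer than 100 flags
large_segment = (2600.0, 1900.0)  # customers with more than 100 flags

small_delta = percent_change(*small_segment)
large_delta = percent_change(*large_segment)

# Hypothesis 1: small-segment load times increase by no more than 5%.
assert small_delta <= 5.0
# Hypothesis 2: large-segment load times decrease by at least 10%.
assert large_delta <= -10.0
```

The point is that "decrease by at least 10%" is a pass/fail check; "see a significant improvement" is not.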
These hypotheses are concrete, specific, and measurable. Now we are ready to create an experiment. Experiments help us make a discovery or test a hypothesis in a controlled manner. We run experiments all the time with software. We experiment when we test in production, conduct A/B/n tests, or run game days.
We identified two customer segments for our experiment: those with more than 100 flags and those with fewer than 100. We can now enable a flag and collect page load times for these users. When experimenting, you need repeatable results; having multiple customers in each segment helps ensure the results are not an anomaly.
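The setup above can be sketched in a few lines. This is not LaunchDarkly's implementation; the function names, the deterministic 50% rollout, and the sample data are all hypothetical:

```python
# Hypothetical experiment scaffolding: assign customers to segments by flag
# count, gate pagination behind a flag, and collect repeated load samples.
import zlib
from collections import defaultdict
from statistics import median

def segment_for(customer_flag_count):
    return "large" if customer_flag_count > 100 else "small"

def pagination_enabled(customer_id, rollout=0.5):
    # Stand-in for a real feature-flag check: a stable hash gives a
    # deterministic rollout, so a customer always sees the same variation.
    return zlib.crc32(customer_id.encode()) % 100 < rollout * 100

# Collected samples: segment -> variation -> list of load times (ms).
samples = defaultdict(lambda: defaultdict(list))

def record_page_load(customer_id, flag_count, load_ms):
    variation = "paginated" if pagination_enabled(customer_id) else "control"
    samples[segment_for(flag_count)][variation].append(load_ms)

# Multiple customers per segment keep one outlier from skewing the results.
record_page_load("acme", 250, 1850.0)
record_page_load("globex", 12, 410.0)

for seg, variations in samples.items():
    for variation, times in variations.items():
        print(seg, variation, f"median={median(times):.0f}ms n={len(times)}")
```

Running the same collection repeatedly, across many customers per segment, is what makes the results repeatable rather than anecdotal.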
The final aspect of an experiment is sharing the knowledge you gained with others. In addition to sharing the information internally, consider writing an article or giving a talk.
If you're ready to start using the scientific method and experimenting—great! But before you go off, I have a word of caution. Beware of your biases.
One critical aspect when using the scientific method is to be aware of your biases. When crafting a hypothesis or experiment, you need to think about how your biases frame your thoughts. You don't want to construct an experiment that forces the "right" answer.
Consider two biases you might encounter when building software. The first is anchoring bias: we rely too heavily on the first piece of information we receive, and everything else gets interpreted from that vantage point. When creating a hypothesis or experiment, make sure you don't anchor everything on the first item presented.
Another bias is the framing effect: we judge options based on whether they are presented in a positive or negative light. If your hypothesis is framed positively or negatively, does that influence whether you see it as a success?
The next time you look to add functionality or troubleshoot an issue, take a moment to think about how you can use the scientific method to structure your thinking.