We have all been through the following scenario: you are taking the first sip of your coffee when someone reports a bug. Your first reaction is probably to run the project and sprinkle in some logs to see some data and try to understand what's happening.
If you are like 80% of developers (I made up that number), you don't have any kind of tests, so you are basically making tweaks and "hoping" that nothing else breaks while you fix the problem. When something does break, you end up spending way more time than you expected fixing these chained bugs.
Let's be clear: I'm not here to tell you that you must have tests and great code coverage. Every project is different, and you could have more than one reason not to have tests. Maybe you didn't work on that functionality, you thought the behavior was so simple that tests were overkill, or you had to rush to finish before the end of the sprint.
Whatever the reason, this problem is giving you a second chance to add tests that will save you and future developers a lot of time. I like to think of tests as smoke detectors in a house, and the bug as a fire that just appeared: you might be able to put out the fire, but if you don't have enough smoke detectors, you won't know whether other fires are cooking somewhere else until it's too late.
A real-world example
A teammate reported that URL inputs in our forms were accepting strings that weren't valid URLs. Basically, you could type something like foobarhttps://google.com,fizbuzz and validation passed. We managed to identify the culprit:
function validUrl(url) {
  const regex = /(^$)|((http(s)?:\/\/.)?(www\.)?[-a-zA-Z0-9@:%._+~#=]{2,256}\.[a-z]{2,6}\b([-a-zA-Z0-9@:%_+.~#?&/=]*))/g;
  const found = url.match(regex);
  if (found === null) {
    return false;
  }
  return true;
}
Someone had basically copy/pasted a regex from somewhere, tested it against some URLs and non-URLs, and thought it worked. After punching that regex into regextester.com, it's clear that the pattern is not strict; it behaves more like an "includes" check than a full-string match.
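You can reproduce the problem in a Node or browser console with the exact regex from the function above (just a quick illustration of why the bad input slips through):

const regex = /(^$)|((http(s)?:\/\/.)?(www\.)?[-a-zA-Z0-9@:%._+~#=]{2,256}\.[a-z]{2,6}\b([-a-zA-Z0-9@:%_+.~#?&/=]*))/g;

// The pattern is unanchored, so it happily finds the URL embedded in the
// middle of the garbage, match() returns a non-null result, and validUrl
// ends up returning true.
console.log("foobarhttps://google.com,fizbuzz".match(regex));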
After some regex tweaking we figured out a way to make it stricter and ended up with this:
/(^((http(s)?:\/\/.)?(www\.)?[-a-zA-Z0-9@:%._+~#=]{2,256}\.[a-z]{2,6}\b([-a-zA-Z0-9@:%_+.~#?&/=]*))$)/g
We tested it in the regex tool and it seems to work.
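As a quick sanity check outside the web tool (not a substitute for real tests), the stricter pattern now rejects the bad input and still accepts a plain URL:

const strictRegex = /(^((http(s)?:\/\/.)?(www\.)?[-a-zA-Z0-9@:%._+~#=]{2,256}\.[a-z]{2,6}\b([-a-zA-Z0-9@:%_+.~#?&/=]*))$)/g;

console.log("foobarhttps://google.com,fizbuzz".match(strictRegex)); // null
console.log("https://google.com".match(strictRegex)); // non-null match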
Now we just replace that line of code and ship it, right? WRONG!
Before making any code change, we write some tests for the scenarios that we know should already be working:
describe("validUrl", () => {
it("should not return errors with an empty string", () => {
const actual = validUrl("");
const expected = true;
expect(actual).equal(expected);
});
it("should not return errors with a well formed URL", () => {
const actual = validUrl("https://google.com");
const expected = true;
expect(actual).equal(expected);
});
it("should return an error with something that is not a url", () => {
const actual = validUrl("not-a-url");
const expected = false;
expect(actual).equal(expected);
});
it("should return an error if empty spaces ares used", () => {
const actual = validUrl(" ");
const expected = false;
expect(actual).equal(expected);
});
});
If you don't want to read through that code, basically we are checking that:
- When it receives an empty string it's valid (we have a requiredValidUrl function for when it should not; see the sketch after this list)
- A well-formed URL is valid
- Something that is not a URL should return false
- Using spaces isn't allowed either
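That requiredValidUrl helper isn't part of this post, but a minimal sketch of how such a function could be layered on top of validUrl (our real implementation may differ) looks like this:

// Hypothetical sketch: require a non-empty value, then reuse validUrl.
function requiredValidUrl(url) {
  return url !== "" && validUrl(url);
}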
Now we add the new failing test case: a URL with extra text should fail:
it("should return an error when a url has extra text", () => {
const actual = validUrl("somethinghttp://google.com,other-thing");
const expected = false;
expect(actual).equal(expected);
});
Here's a codepen so that you can follow along.
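If you'd rather run the suite locally instead of in the codepen, it only assumes a mocha/chai-style setup; a hypothetical arrangement (the file names are made up) would be:

// npm install --save-dev mocha chai, then run: npx mocha validUrl.test.js
const { expect } = require("chai");
const { validUrl } = require("./validUrl"); // wherever validUrl is exported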
Then we run the tests. They all pass except the last one, because we haven't changed the regex yet, and that's great: we confirmed our assumptions and we are now in RED on the TDD cycle.
We replace the regex with the new one: /(^((http(s)?:\/\/.)?(www\.)?[-a-zA-Z0-9@:%._+~#=]{2,256}\.[a-z]{2,6}\b([-a-zA-Z0-9@:%_+.~#?&/=]*))$)/g
Remember, this was the only thing we thought we needed to fix the bug. We run the tests again expecting them all to pass, except they don't.
Surprisingly, our regex is now so strict that it doesn't allow empty strings, but our smoke detector caught this new fire before we shipped it. Missing it would have wasted time for us, the testers, and the project manager, and in a worst-case scenario it could have blocked a launch or caused a serious problem once deployed.
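The reason is easy to see when you compare the two patterns: the original regex had an explicit (^$) alternative that matched the empty string, and our stricter version dropped it:

const strictRegex = /(^((http(s)?:\/\/.)?(www\.)?[-a-zA-Z0-9@:%._+~#=]{2,256}\.[a-z]{2,6}\b([-a-zA-Z0-9@:%_+.~#?&/=]*))$)/g;

// No (^$) branch anymore, so the empty string no longer matches.
console.log("".match(strictRegex)); // null -> validUrl("") would return false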
Since we need to take care of an edge case, we decide to short-circuit it at the start of the function instead of tweaking the regex further:
if (!url) {
  // empty string should be valid
  return true;
}
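Putting the pieces together (the guard plus the stricter regex), the patched function ends up looking something like this:

function validUrl(url) {
  if (!url) {
    // empty string should be valid
    return true;
  }
  const regex = /(^((http(s)?:\/\/.)?(www\.)?[-a-zA-Z0-9@:%._+~#=]{2,256}\.[a-z]{2,6}\b([-a-zA-Z0-9@:%_+.~#?&/=]*))$)/g;
  const found = url.match(regex);
  if (found === null) {
    return false;
  }
  return true;
}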
Now we are green! We can continue with the refactoring and ship our change with more confidence.
Of course, this isn't a silver bullet; there might be more edge cases or changes in requirements. The advantage is that when that happens, you or any other dev who needs to maintain the project can keep adding smoke detectors and be confident that their changes are putting out fires and not reigniting old ones.