Imagine a simple function: rgbToHex.
It takes three arguments, integers between 0 and 255, and converts them to a hexadecimal string.
Here's what this function's definition might look like in a dynamic, weakly typed language:
rgbToHex(red, green, blue) {
// …
}
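The article leaves the body out, but a minimal sketch in JavaScript (one possible implementation, assuming valid input) might look like this:

```javascript
// A naive implementation of rgbToHex: each channel is converted to
// a two-digit hexadecimal string and the results are concatenated.
// It assumes each argument is an integer between 0 and 255.
function rgbToHex(red, green, blue) {
    return [red, green, blue]
        .map(channel => channel.toString(16).padStart(2, '0'))
        .join('');
}

console.log(rgbToHex(238, 66, 244)); // "ee42f4"
```

Note that nothing in this sketch checks its input; that gap is exactly what the rest of this post is about.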
I think we all agree that "program correctness" is essential.
We don't want any bugs, so we write tests.
assert(rgbToHex(0, 0, 0) == '000000')
assert(rgbToHex(255, 255, 255) == 'ffffff')
assert(rgbToHex(238, 66, 244) == 'ee42f4')
Because of our tests, we can be sure our implementation works as expected. Right?
Well… We're actually only testing three out of the 16,777,216 possible colour combinations. But human reasoning tells us that if these three cases work, all probably do.
What happens though if we pass doubles instead of integers?
rgbToHex(1.5, 20.2, 100.1)
Or numbers outside of the allowed range?
rgbToHex(-504, 305, -59)
What about null?
rgbToHex(null, null, null)
Or strings?
rgbToHex("red", "green", "blue")
Or the wrong number of arguments?
rgbToHex()
rgbToHex(1, 2)
rgbToHex(1, 2, 3, 4)
Or a combination of the above?
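In a dynamic, weakly typed language, each of these calls would either crash at runtime or silently produce garbage. A hedged sketch of the guard clauses you'd have to write, and test, by hand (the guarded variant and its name are illustrative, not from the article):

```javascript
// A defensive variant that rejects invalid channels up front.
// Number.isInteger(x) is false for doubles, null, strings, and
// undefined, so missing arguments are caught too; extra arguments
// are silently ignored here.
function rgbToHexGuarded(red, green, blue) {
    for (const channel of [red, green, blue]) {
        if (!Number.isInteger(channel) || channel < 0 || channel > 255) {
            throw new TypeError(
                'Each channel must be an integer between 0 and 255'
            );
        }
    }
    return [red, green, blue]
        .map(channel => channel.toString(16).padStart(2, '0'))
        .join('');
}
```

Every branch of that guard is another behaviour to cover in the test suite, which is precisely the cost being counted below.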
I can easily think of five edge cases we need to test before there's relative certainty our program does what it needs to do. That's at least eight tests we need to write, and I'm sure you can come up with a few more given the time.
Continue reading at http://stitcher.io/blog/tests-and-types#read-on