Ben Link

The Adventures of Blink #16: Continuous Testing

Hey friends! Today we add to the "Becoming a DevOps" series with the topic of "Continuous Testing". By the time we finish this series we're going to have a complete look at a whole lot of DevOps principles. I hope you're enjoying the journey as much as I am!

What's the big deal about testing?

Maybe it was just a failing of my particular faculty, but in school we didn't talk about automated testing... like, at all. Testing your code meant "run the program and try things". It wasn't until much, much later that I learned from some excellent software engineers (Thanks @sturdy5 !) about the idea of automating the testing process.

Types of Testing

There are a few possible phrasings of "test automation"... but to my mind, they're all just focus points within the umbrella of software testing. Why do we need the distinctions? Well, mostly for our own ability to keep things organized... at the core, software testing is just ensuring that code does what it's supposed to. Nevertheless, we should probably cover a few key terms to help you get started and to make sure you know what people are talking about.

Unit Testing

Unit Testing is the discipline of validating that individual units of code behave correctly no matter what inputs they receive. Think of unit tests like the proofs you had to write back in math class - how do you prove that an isosceles triangle has two congruent angles? In the same fashion, a unit test describes the possible inputs to a bit of code and the outputs it expects for them. For example, you might have a method in your program that calculates simple interest on a given principal, given a rate and a time period.

(Image: Python code to calculate interest)
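
Here's roughly what that function might look like - a minimal sketch, assuming the rate is given as a percentage (the parameter names are my own; only the name simple_interest comes from the calls below):

```python
def simple_interest(principal, rate, time):
    """Calculate simple interest, assuming rate is a percentage (5 = 5%)."""
    return principal * rate * time / 100
```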

This is something that most definitely needs to be mathematically consistent! The unit test would cover the range of possible inputs and define the expected output for each. Very straightforward, don't you think? But Unit Testing requires a slightly different thought process to be successful...

The art of unit testing lies in the programmer's ability to anticipate things that don't follow the "happy path". You might look at the previous example and think "well that's pretty easy code. It's like 5 lines long. What's to test?"

BUT...

What happens if you call

```python
simple_interest(-1,-1,-1)
```

Will those negative numbers cause undesirable behavior in your application? What if you passed it an array of principal values - is it your intent to calculate the interest for each element and return an array of results, or should that be an error condition?

Or worse, what if you call it like this:

```python
simple_interest('turnip',0,0)
```

For a simple script like this, these concerns might seem a bit over-the-top... but if you're producing code to be used by others, you need to add a degree of due diligence to your work to ensure that you provide a consistent, expected way of handling invalid (or just generally weird) inputs. You might already be in the habit of writing good error handling routines... but do you remember to test every possible iteration when you complete your work?
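
Here's a rough sketch of what that due diligence might look like as pytest tests. (The ValueError/TypeError expectations are assumptions for illustration - the naive version of simple_interest sketched above wouldn't pass them until you add the error handling, which is exactly the point.)

```python
import pytest

from interest import simple_interest  # hypothetical module name


def test_happy_path():
    # 1000 at 5% for 2 years -> 100
    assert simple_interest(1000, 5, 2) == 100


def test_negative_inputs_are_rejected():
    # Decision to test: negative values should raise rather than quietly
    # return a nonsensical interest amount.
    with pytest.raises(ValueError):
        simple_interest(-1, -1, -1)


def test_non_numeric_principal_is_rejected():
    with pytest.raises(TypeError):
        simple_interest('turnip', 0, 0)
```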

WATCH HERE: The Adventures of Blink - Getting started with PyTest!

Unit tests generally don't ever exceed the boundaries of a single method - you're testing ONLY the functionality within the specific method. This may require you to provide "mock" data as a response from any code that your method calls. Your goal is isolation of the code to ensure its logic is sound.
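
For instance, here's a hypothetical sketch of that isolation (the banking module and its functions are made-up names): calculate_account_interest() normally calls fetch_current_rate(), which might hit a database, so we stub that call out and test only the calculation logic.

```python
from unittest.mock import patch

import banking  # hypothetical module under test


def test_account_interest_uses_current_rate():
    # Stub out the collaborator so we test ONLY calculate_account_interest's
    # own logic, not the rate lookup it depends on.
    with patch("banking.fetch_current_rate", return_value=5):
        assert banking.calculate_account_interest(principal=1000, years=2) == 100
```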

Integration Tests

Integration tests work the same way as unit tests, except that they focus on a higher layer - rather than verifying that one individual block of code functions correctly, they're intended to confirm that modules interact properly.

Integration tests still rely on "mock" data to an extent - the goal is still to test the integration of the two components in isolation.
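
Continuing the hypothetical example, an integration test might let a reports module and the real interest calculation interact, mocking only the external boundary (the names below are invented for illustration):

```python
from unittest.mock import patch

import reports  # hypothetical module that uses the real interest calculation


def test_statement_includes_calculated_interest():
    fake_accounts = [{"id": 1, "principal": 1000, "rate": 5, "years": 2}]

    # Mock only the external data source; the reports module and the
    # interest module interact for real - that's the integration under test.
    with patch("reports.fetch_accounts", return_value=fake_accounts):
        statement = reports.build_statement()

    assert statement[0]["interest"] == 100
```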

Functional Tests

Functional tests take the scope all the way to the user's perspective. Mocking is much less common here, because we want to see the functionality perform end to end. Functional tests are typically built to match the requirements provided by the user / designer. They're often a little "fuzzier" in nature - where unit tests can be at the level of a mathematical proof, a functional test is likely to be more about whether our code has met the "spirit of the law".
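
As a hypothetical sketch, a functional test might drive the finished application exactly the way a user would (interest_app.py and its flags are invented for illustration):

```python
import subprocess


def test_statement_output_matches_requirement():
    # Run the whole application like a user would - no mocks.
    result = subprocess.run(
        ["python", "interest_app.py", "--principal", "1000",
         "--rate", "5", "--years", "2"],
        capture_output=True,
        text=True,
    )
    # Judge the result against the requirement ("show the interest due,
    # rounded to cents"), not against internal implementation details.
    assert "Interest due: $100.00" in result.stdout
```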

Other types of tests

You'll see lots of other types of "tests" enumerated in the wild:

  • UI Tests
  • Regression Tests
  • Load Tests
  • Performance Tests
  • Smoke Tests
  • Security Tests
  • Usability Tests

Generally speaking, these are made up of the same basic components found in Unit, Integration, and Functional tests, but with different intents and points of focus.

The Purpose of Testing

There are several purposes of providing automated tests for your code:

1 - You need to ensure you're releasing as few bugs as possible.

This seems the most obvious - you test your code to make sure you write (and release) good code! In the old Waterfall days, we had a whole phase of "QA Testing" which could be either manual or automated... as you might imagine, automated testing is less expensive and time-consuming than manual testing, so it makes sense that we'd want more of it.

2 - You're future-proofing.

Testing would seem like a frustrating thing if you could only use it once. Fortunately, a well-designed suite of automated tests is tremendously valuable for your future!

Imagine: You build and release this application, and then you move on to other work. A year later, someone comes back and asks for a change to this application. Is the change going to break anything? How would you know? You've forgotten all the details and you're having to get back up to speed before you can make the change, only to realize you don't know for certain if this will affect any other parts of the code.

But because you've built a great test suite, you make your change and re-run your tests, and they all pass. You can be confident that you didn't break anything else by making this change. Or maybe another test fails, allowing you to spot something you forgot about.

Having a test suite is critical to the maintenance of the application. If you don't invest in building it now, you're going to spend extra time manually testing things (or risking surprise production bugs)... which can be expensive. An ounce of prevention...

3 - You're you-proofing.

Another way to think about future-proofing is from the perspective that "ownership is eternal". You wrote this code, and that means you get to maintain it. Forever. Doesn't matter if you moved to a different role... or maybe even a different company... someone's going to see your ID on the commit history and think "this is urgent" and reach out to you (as if you carry the history of this codebase around in your mind at all times!)

A comprehensive test suite can help another developer figure out what to expect from your code. They can see how it's used, see what common concerns are, and validate their own changes without reaching out to you.

The DevOps Angle

(Image: a CI/CD pipeline with test automation featured)

Ok, Ben, you've convinced me that test automation is important... what does this have to do with being a DevOps practitioner?

Have you ever made changes to a codebase, but forgot to run the tests? Or maybe you thought you could get by without it, and then the world came crashing down around you?

Your CI/CD Pipeline needs your test suite! If you configure every build to require a successful test run, you add safety to your changes: the build breaks whenever a test fails. And if you're practicing good hygiene and swarm-fixing every broken build, failures get addressed the moment they appear. Your delivery will get faster and smoother, and as long as your tests keep improving, so will your delivery stats!

When to test

The short answer: Early and Often!

There are many proponents of a paradigm called "TDD"... "Test-Driven Development". In this strategy, a programmer FIRST writes the test suite code based on the requirements for the project, and then begins building the app until it passes all the tests.
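
As a tiny, hypothetical illustration of the rhythm: the test is written first and fails ("red"), then you write just enough code to make it pass ("green") before refactoring.

```python
# Step 1 (red): describe the behavior you want before it exists.
def test_late_fee_is_waived_under_a_dollar():
    assert late_fee(balance=0.50) == 0


# Step 2 (green): write just enough code to make the test pass, then refactor.
def late_fee(balance):
    if balance < 1:
        return 0
    return round(balance * 0.05, 2)
```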

Is this a good practice? As with most things, it depends... 😏 If you're meticulous about your requirements and design phases, or if you're embracing agility and your work is properly sized for fast delivery, you might be able to do this with ease. If your requirements are more fuzzy, or subject to change, or if you're struggling to get down to the smallest units of work, TDD may be a little more difficult to achieve. That's not to say it's not possible, but it may have some complexities that you'll have to work through.

Ultimately, there's no "right" or "wrong" time to build your test suites, though - having tests is always better than not having them! So if TDD doesn't work with your brain or your work style, just commit to submitting tests alongside all new code that you write. Don't fall into the trap of "I'll do this later", because "later" never comes!

What do I test?

Some folks insist on hitting "100% Code Coverage"... but in reality, that's overkill for most projects. So much of our code is "boilerplate work" - setting up variables, creating get/set methods for operating with data in a class, default constructors... and there's very little value in testing those things because they're so simple.

A good rule of thumb is that "if it was hard to write, it should be tested." Nobody struggles with creating get/set methods, except in occasional weird edge cases... but if you spent significant time on a bit of logic, make sure it's tested!
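
A hypothetical contrast under that rule of thumb:

```python
class Loan:
    def __init__(self, principal):
        self._principal = principal

    # Boilerplate: a plain getter like this isn't worth a dedicated test.
    @property
    def principal(self):
        return self._principal

    # Hard-won logic: the day-count handling and rounding took real effort
    # to get right - exactly the kind of method that deserves tests.
    def accrued_interest(self, annual_rate, days):
        return round(self._principal * annual_rate * days / 365 / 100, 2)
```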

Wrapping up

Test automation is a critical need in the DevOps pipeline because it establishes one of the guardrails we need in order to move fast - we want to know that we're producing code that works!

The various flavors of testing work together to give us a more complete picture of how our application will perform - but the biggest takeaway is that testing is not a task for someone else on another team. It's up to us, as we write our code, to think about how it could misbehave, and to both address that misbehavior and validate against it. This will make our work more reliable and valuable to our teammates and save us time & effort in the long term!
