K

Posted on

What are the alternatives to unit tests?

When I was in university, I had a lecturer who didn't like unit tests. He was an elderly man who had worked at IBM and gave lectures about mainframes. He always said the hype about unit tests would simply double the amount of code written and do nothing for its safety.

When I did my first big project in 2009, an HTTP API, nobody in the company I worked for (the company was founded in 2001) had written any unit tests. They had huge C/C++ and PHP codebases. They did integration tests, but the project I had been given was the first that used unit tests.

I had heard about unit testing at university and wanted to make my first project look good right from the start. So I wrote a bunch of unit tests for every class, ending up with about 200 tests after the first version was released, trying to hit that famous 100% coverage. Only a few months later the architecture of the API changed, and somebody new on the project had to rewrite more than 100 tests.

In the lifetime of the project, the unit tests didn't prevent any major bugs, just the issues I had in mind while testing the code, but they slowed down development tremendously. They also forced a style on the code that was mostly there to ease the writing of the tests, not to improve the resulting application.

So what is your opinion about this? Did I do unit tests wrong? Is there an alternative? Are integration tests (black- or grey-box) enough when automated? Is TDD a placebo? Are type-systems the way to go?

Let's discuss! :)

Top comments (47)

Chris James • Edited

So what is your opinion about this? Did I do unit tests wrong?

Probably yes (sorry!)

It's a classic issue of writing tests that are too coupled to implementation detail. People then get frustrated at tests because they can no longer refactor without changing everything.

So I wrote a bunch of unit tests for every class, ending up with about 200 tests after the first version was released. Trying to hit that famous 100% coverage.

I'm going to speak in terms of TDD, and that does not prescribe writing tests for every class/function/whatever. It prescribes them for every behaviour. So you may write a thing that does X; internally it may have a few collaborators, but don't make the mistake of writing tests for implementation detail. These are still unit tests.

Ask yourself: if I were to refactor this code, would I have to change lots of tests? The very definition of refactoring is changing the code without changing behaviour. So in theory you should be able to refactor without changing tests.
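As a rough sketch of the difference (hypothetical Python; PriceCalculator and its methods are made up for illustration), compare a behaviour-level test with an implementation-coupled one:

class PriceCalculator:
    def total(self, prices, discount=0.0):
        subtotal = sum(prices)
        return self._apply_discount(subtotal, discount)

    def _apply_discount(self, amount, discount):
        return amount * (1 - discount)

# Behaviour-level test: only cares about inputs and outputs, so refactoring the
# internals (e.g. inlining _apply_discount) keeps it green.
def test_total_applies_discount():
    assert PriceCalculator().total([10.0, 20.0], discount=0.5) == 15.0

# Implementation-coupled test: breaks as soon as the private helper changes,
# even though the observable behaviour stays the same.
def test_apply_discount_helper():
    assert PriceCalculator()._apply_discount(30.0, 0.5) == 15.0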

I would suggest looking into Kent Beck's book on test-driven development. It's an easy read and quite short. Or if you like Go and don't want to pay any money, have a look at my book. This video covers some of the main issues you talked about and probably explains what I've typed a lot better: infoq.com/presentations/tdd-original

Writing tests effectively takes a while to get proficient at, but the fastest way to get there is to study and retrospect on the effect tests have had on your codebase.

Alessandro Ronchi

My 2¢ about this discussion.

First of all, I think that this quote is fundamental to understanding why we test our code:

“Testing shows the presence, not the absence of bugs” ~ E. W. Dijkstra

It means that our tests can't prove the correctness of our code; they can only prove that our code is safe against the bugs that we are looking for.

Having 100% code coverage doesn't guarantee that our code is 100% correct and bug-free.

It only means that our code is 100% safe against the bugs that we are looking for.

There may be bugs we aren't looking for, even with 100% code coverage and passing tests.

Tests show the presence, not the absence of bugs.


Chris James says: "the very definition of refactoring is changing the code without changing behavior."

The behavior refactoring refers to is external behavior, that is, the expected outcome of a piece of code, not how the code behaves internally.

When we write a test, we can make assertions about internal behavior, but internal behavior can change without modifying the expected output.
That's the very definition of refactoring.

When we make assertions about internal behavior, we are coupling our test to an implementation: changes to internal behavior will likely force us to change the test.

That's why I like what Michał T. says: "code that is perfectly suited for unit tests are things that have predictable inputs and outputs, and don't have dependencies or global effects."

The assertions about the behavior of our code will likely depend on the behavior of our dependencies.

Indeed, we mock external dependencies because we don't want our code to be affected by their potentially buggy output.
Thus, we set up our environment to have a predictable output.

That's why even if external dependencies have bugs, our unit tests can pass. And that's why unit tests aren't enough to save us from having issues.

Reducing external dependencies will make our code easier to test and less prone to side effects coming from the outside.
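A minimal sketch of that mocking idea (hypothetical Python using unittest.mock; price_in_eur and the rate provider are made-up names):

from unittest.mock import Mock

# Code under test: it relies on an external rate provider for its result.
def price_in_eur(amount_usd, rate_provider):
    return round(amount_usd * rate_provider.usd_to_eur(), 2)

def test_price_in_eur_uses_provided_rate():
    # The real provider might call a remote API; we replace it with a mock so the
    # test has a predictable outcome and no external dependency. Note that this
    # test still passes even if the real provider is buggy.
    provider = Mock()
    provider.usd_to_eur.return_value = 0.9
    assert price_in_eur(10.0, provider) == 9.0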


My last thought, starting with this quote from connectionist: "code changes happen all the time and unit tests have to change with them. It's unpleasant but necessary."

Software is, by definition, soft so that it can adapt to change.
Otherwise, it would have been "hard"ware.

We have to deal with it. It should not be unpleasant but the opposite: it's the ability to change that proves the real value of software.

The frustration that we feel when we have to change our software comes from the fact that as we add code we tend to reduce the flexibility of our software (we add accidental complication).

Thus, adapting to changes becomes frustrating.

But it's not software's fault.
It's not our customers' fault.
It's our fault.

It's only by making our code better over time that we can reduce that frustration.

And we can make it better by performing refactoring on a regular basis.

Everything that encourages refactoring should be welcome.

I warmly recommend watching this: vimeo.com/78898380

Cheers

Nolan 🚀👉❤ :/

Adding to "test the code's behavior": test that the code implements requirements, those things the end user, legal, or marketing have to have. Then you get into tracing requirements to exact lines of code, and anything else can get deleted.

K

Thanks, I'm going to read this book :D

Frank Carr

One of the important things that unit tests will do is to get you focused on SOLID, most notably single responsibility. It reduces the temptation to write "Swiss army knife" functions or massive blocks of if..else or switch..case code. When you work in short blocks of testable code it makes debugging so much easier. Likewise, if you find tests becoming elaborate, maybe some refactoring is needed.

When you're working on a team, having the unit test gives other developers a guide as to how a particular function should work. If they come up with use cases you didn't anticipate, it provides an easy way for them to communicate it. When you're primarily working on the backend, it gives you something to demo in the sprint retrospective/demo.

When debugging issues, unit tests make it easier to locate problem areas, both in integration testing and in production. Without this kind of testing you can spin your wheels trying to find bugs.

Alternatives to unit tests? I've had to use these when working with legacy code where no tests were originally written. Usually, these took the form of one-off sandbox applications that would exercise a particular function or set of functions while trying to track down a bug. I've found this to be far less efficient than writing tests to begin with, particularly when trying to deal with critical production problems.

Kasey Speakman • Edited

Thanks for posting your experiences. ❤️ I have similar history with unit tests.

Nowadays, I no longer bother to test everything. I do not believe there is enough ROI in doing so for most of our apps. I mainly test business logic. And when I say that, I mean only business logic. I practice Dependency Rejection, so business logic code is purely logic and is very easy to test. I will highlight the difference with a couple of images.

[Image: Dependency Injection test surface area]

This kind of design is what you normally see exemplified in unit test demos, with interfaces being injected into business code. This makes the "business code" responsible not only for business calculations but also for handling IO. Despite those things being represented as interfaces, the code will likely need to know specifics like which exceptions are thrown or other effect details that are unique to the type of integration. So it has the appearance of decoupling while potentially still being quite coupled.

This kind of design also creates a lot of work in unit tests, since you have to create behavioral mocks of the injected components. The fact that you need a framework to dull the pain is a hint that it is not an optimal strategy.

Instead, I do this.

[Image: Dependency Rejection test surface area]

Here, the business logic code (inner circle) has no knowledge of other components outside of its purview... not even their interfaces. It only takes data in and returns other data. If IO is necessary to fetch the data, the logic does not care about it and is not responsible for it. Only once the data is fetched do you run the logic code. It is also fair to give the logic code an object representing no data (e.g. Null Object Pattern or a Maybe). This is ridiculously easy to test since all you have to do is pass in some data and check that the output matches what you expect.

For example, I might have some logic code like this:

let createItem existingItem data =
    match existingItem with
    | Some i -> Error ItemAlreadyExists
    | None -> validate data
    // validate: check required fields, invalid ranges, etc.

Then have a test like this:

    [<TestMethod>]
    member __.``Duplicate items cannot be created`` () =
        let existingItem = Some { ... }
        let data = { ... }
        let expected = Error ItemAlreadyExists
        let actual = Logic.createItem existingItem data
        eq expected actual

How do I handle IO? I have an outer piece of code (I call it a use case handler, or just "handler") which is responsible for tying the logic to the other integrations (database, API calls, etc.) needed for the use case. Sometimes logic steps are interleaved with IO, and so the logic has a different function/method for each step. The handler must check the logic response from the first step and perform the appropriate IO before calling the next step.
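Sketched in Python rather than F# (the repo and function names here are made up, just to show where the IO lives):

# Pure logic: no IO, just data in, decision out. This is the unit-tested part.
def create_item(existing_item, data):
    if existing_item is not None:
        return ("error", "item_already_exists")
    return ("ok", data)

# Handler: performs the IO, then delegates the decision to the pure logic.
def handle_create_item(repo, item_id, data):
    existing_item = repo.load_item(item_id)    # IO
    result = create_item(existing_item, data)  # pure logic
    if result[0] == "ok":
        repo.save_item(item_id, result[1])     # IO
    return result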

This design draws a very fine line between which type of testing is appropriate for which part. Unit testing (even property-based) is appropriate for the business logic code. Integration testing is appropriate for the integration libraries used by the handler. End-to-end testing is appropriate for the handler code itself, since it may deal with multiple integrations. But the main win, and the most important thing to the business, is the business code -- that its decisions are correct. And this is now the easiest piece to test. The other parts are no harder to test than they were before, but still not worth the ROI for us yet.

K

Ah, yeah I read about these things.

But all the examples were in FP languages I didn't know, so I didn't take much from it.

Blaine Osepchuk

You might want to search for the "humble object pattern" if you want to learn more about Kasey's testing strategy.

JeffD

(I'm a unit-test-addict :) )

Did I do unit tests wrong?
In my opinion unit tests are documentation, so if your product changes, your unit tests must be rewritten. If you had to rewrite too many tests for a little change, then maybe you should make your tests more flexible, or use them only to test the "frozen part" of your code (utility functions and algorithms).

Is there an alternative?
In the case of an API (constantly evolving), some tools can create tests directly from the spec (Swagger, maybe).

Are integration tests enough?
It's difficult to test only a single function with an integration test; the scope is not the same. But testing that "GET .../user/1" returns the right object could be OK. I highly recommend using unit tests to deal with user input (POST requests) because you can hit the endpoint with a lot of bad entries (and check for security issues, malformed data, bad types, ...)

Is TDD a placebo?
Personally, it's a safety net I love to have :)

K

> maybe you should make your tests more flexible
How? :)

> or use them only to test the "frozen part" of your code
Isn't this against the TDD philosophy?

> I highly recommend using unit tests to deal with user input
How does this eliminate the problem that I only test what I had in mind when writing the functionality in the first place?

Like, when I test my software I find fewer bugs than when someone else tests it, etc.

JeffD

More flexibility:

  • Use the Single Responsibility Principle (SOLID) / functional programming pure functions: it reduces the test scope.
  • Maybe using mocks can help

It's recommended to test only one case per test; I have the bad habit of putting all my test cases into an array:

for (arg1, arg2, result) in [(1, 2, 3), (-1, -3, -4)]:
    assert my_sum_function(arg1, arg2) == result

It's bad, but you can cover a lot of cases and change the function name easily.
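For what it's worth, a more idiomatic way to get one result per case (a sketch using pytest, still assuming my_sum_function exists somewhere importable) would be:

import pytest

@pytest.mark.parametrize("arg1, arg2, expected", [
    (1, 2, 3),
    (-1, -3, -4),
])
def test_my_sum_function(arg1, arg2, expected):
    # Each tuple becomes its own test case, so one failure doesn't hide the others.
    assert my_sum_function(arg1, arg2) == expected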

Maintaining a few tests is always better than having no tests at all. To encourage your team to add tests, it should be easy ;). So testing the frozen functions is a good start.

I'd love to write an article about "unexpected testing cases"; I have this list of error cases:

  • Not found: nonexistent file (or bad permissions)
  • bad type: '01' instead of 01
  • bad format: negative integer, phone number with +XX prefix, (XSS injection for HTML fields), too long (buffer overflow), ...
  • injection: SQL, XPath, LDAP, ...
  • text case: getting uppercase when we expect lowercase
  • None/undefined value
  • Exception: divide by zero, ...
  • timeout/network error/database down
  • invalid endpoint/version/key/authentication (for API)
 
kashperanto

The main thing with TDD, from my understanding, is that the tests are the requirements, so anything that falls outside of the tests is by definition irrelevant. Most of the "test everything" recommendations come from the TDD mindset, so if you try to apply them outside of the TDD framework it can get messy.

This perspective helps limit the scope and coupling of your tests, since there is typically an astronomical number of tests that you could write, but a very finite number of testable requirements. Refactoring should not generally break tests; if refactoring occurs across/between several modules then you will probably have some rework, but I would argue that that is more of a "redesign" than a "refactor".

One good reason to test every module/class is to reduce the scope of any bugs you do come across. If I have a suite of tests that demonstrate my module's behavior then I know where not to look for the bug. With integration/system tests alone you will have some searching to do.

K

I always have the feeling that this is still a problem.

I get rather high-level requirements, but they are implemented by many parts of the code. So simply writing a "Req1 passes" test would require implementing many, many things until the requirement is met.

ItsASine (Kayla) • Edited

I'm bookmarking this to read later (so many good comments!) but I'll chime in with a QA perspective:

If you're working at a place with a formal QA step, test your implementation, not your requirements

I've noticed in my devs' specs they'll have tests for things like "this has been called once", "all parts of this if can be hit", yada yada yada, and then there will be things like "it returns all this info", "the info is sorted properly", "the info is in the right format", etc.

Then if you look at my tests, they're "it returns all this info", "the info is sorted properly", "the info is in the right format", etc... things a user would see and that are in the story's acceptance criteria for the feature. Where I am, QA automation (end-to-end with a hint of integration testing) is a part of the formal definition of done, so a feature isn't considered done until both of us have written the same thing just at two different levels.

Camilla Santiago • Edited

I haven't written any unit tests for Web APIs yet but here's my take on TDD:

For my part, I don't recommend writing unit tests for every class. Only for classes whose behavior changes based on various arguments and conditions.

Writing unit tests helps me in various ways:

  1. It validates my understanding of the requirement. There's a tendency for us developers to jump right into coding without fully grasping the requirement. Writing unit tests forces us to think and ask questions even before the actual coding. Which eventually saves us more time than rewriting code from previous assumptions.

  2. It helps me make design decisions. That is, if a class is hard to test, it can still be broken down into smaller testable classes, thereby enforcing SRP (Single Responsibility Principle).

  3. Acts as harness after refactoring and bug fixing. Tests should still be green after code changes. It's a quality layer that signals me that I didn't break anything.

  4. Like @JeffD said, they're also documentation. I've written and deleted a lot of unit tests. Requirements may or may not change in the future. You don't know when or if they will, but for the time that they hold, it's better to write unit tests than to write none in anticipation that they will just be deleted or changed later.

Hopefully, these insights helped you.

K

You're probably right.

I often read the unit tests of libraries I use in order to understand them, but on the other hand I don't write libraries myself. Libraries feel like they would lend themselves rather well to unit testing, like APIs and such. UIs feel different somehow.

Idan Arye

If you haven't already, you should read Joel Spolsky's excellent article Five Worlds. To sum it up: great programmers sometimes come up with tips and best practices that make perfect sense in the area they work in, but are not very useful and maybe even harmful in other areas.

I believe unit testing is one of these best practices. When it comes to library development, for example, unit tests are great. In other areas their RoI is too low to be useful, and other kinds of tests should be preferred.

In my opinion, automated tests should be an automation of manual tests. It is usually easy to decide how to test something manually. For example:

  • When you are developing a library function, you write a small main that prints its output for some hard-coded inputs.
  • When you are developing a new feature, you run the entire program and use that feature to see that it works.

These workflows are intuitive:

  • You can't test the new feature with a custom main. I mean you can, but it would be a lot of work to re-create the environment needed to test that feature, and in the end it won't be very effective because that temporary environment may be different from what you use in the actual program.
  • You can't test that library function by running the entire program. I mean you can, but you won't be able to give it the inputs you want to test and you won't be able to directly see the output. Unless you use a debugger, that is. Also, having to run everything up to the point where that function is used will make your cycles needlessly long.

Since the manual testing strategy is so clear, the automated testing strategy should mimic it. Use unit tests for the library function and integration tests for the feature. Some people will insist on unit tests for the feature, but that has the exact same drawbacks as manually testing it with a custom main.
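As a trivial sketch of what "automating the manual check" means for the library-function case (hypothetical Python, made-up names):

# Manual check: a throwaway main that prints the output for hard-coded inputs.
def slugify(title):
    return title.strip().lower().replace(" ", "-")

if __name__ == "__main__":
    print(slugify("  Hello World  "))  # eyeball the result

# Automated version: the same hard-coded input, but asserted instead of printed.
def test_slugify_strips_and_dashes():
    assert slugify("  Hello World  ") == "hello-world"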

atsteffen • Edited

I second this!

When working with a mature framework, or using a good library, features should usually come in the form of extensions. The purest extensions are those that are almost entirely declarative. i.e. you are just picking what functionality offered by the framework to compose into your new feature. When a piece of code simply composes, or declares constants, there is nothing to unit test. There's no such thing (at a unit level) as declaring the wrong constant or composing the wrong functionality. The declarations should trivially match your requirements, and (though we may have our opinions) there are no wrong or right requirements. If you write unit tests to re-assert declarative requirements, you will just have to change those tests as the requirements change without ever really protecting the "correctness" of anything. Also, these extensions are usually the most sensitive thing to API changes, and can double your clean-up effort if you have a framework API update.

Of course, there are usually logical utilities and functional bits added with feature extensions, but those can usually be tested in isolation from the declarative bits. The functional bits can always be made into a local mini-library, which is again just composed into the final feature, locally testable, and ideally not sensitive to changes in the API that the feature is extending.

High level integration tests are what you need to guarantee that you've composed these features properly to produce the desired effect.

My guess, from the OP stating that there were hundreds of tests to change on an API change, is that he was either testing declarative bits or didn't have the declarative bits properly isolated.

Alain Van Hout • Edited

Unit tests have the greatest ROI when either

  • the feature is business-critical and/or is very unlikely to change requirements (to guard against regressions)
  • the feature is an algorithm with plenty of edge cases (to facilitate development and guard against regressions)

On the other hand, unit tests have very low ROI when a feature is not business-critical and has requirements that change very frequently.

Note that the value of unit tests is like everything else: it depends.

As to alternatives, I've had cases where API tests (on a running test instance of the application) provided an immensely high ROI. Integration tests, in the sense of testing the collaboration of a chunk of your codebase, have for me always had a low ROI, because of the effort of setting them up while still only resembling actual production behaviour (due to the mocked parts).

conectionist

Did I do unit tests wrong?
I can't say for sure, but what I can say is that "trying to hit that famous 100% coverage" is nothing but a wild goose chase. To find out why, see this article: dev.to/conectionist/why-code-cover...

Is there an alternative?
Code changes happen all the time and unit tests have to change with them.
It's unpleasant but necessary.
However, if a large part of your architecture has to change (and this happens quickly/frequently) then the problem is not with your unit tests.
It's with the architects and the faulty/rushed decisions they make when deciding upon an unstable/unreliable architecture.

Are integration tests (black- or grey-box) enough when automated?
NO!
Unit tests and integration tests serve different purposes. They are complementary. They are not meant to be a substitute for one another.

Unit tests are meant to test code. They are like a defense mechanism against yourself (or, more specifically, against accidental mistakes you might make).
The idea is the following:

  • you write a piece of code
  • you decide what expectations you have after that code has run
  • you write some unit tests that make sure those expectations remain the same after you've made some changes in that area

Because it's possible that changes you make in some places, have undesired effects in other places. That's where unit tests come in. They tell you "No, no! If you continue with these changes, you will break something that was working well. Back to the drawing board!"

Integration tests on the other hand test functionality. They check if everything works ok when it's all put together.

Is TDD a placebo?
Certainly not. But like all things, it works only if used properly.

As a side note, don't be discouraged if your unit tests didn't catch any major bugs. That's very good! It means you're a good programmer who writes very good code.
If your unit tests failed every time you ran them, it would mean you're very careless (or in love and with your head somewhere else :)) )

Think of it this way:
If you hire a security guard and you have no break-ins, are you upset that you have no break-ins?
You probably feel that you're paying the security guard for nothing.
But trust me, if he weren't there, you'd have more break-ins than you'd like.

K

Yes, I guess that's the problem.

After a few years of practice you write code that is pretty robust and the tests you write basically do nothing until the first changes to the software happen :)

Michał T. • Edited

From my experience, unit tests are incredibly useful when developing code that is perfectly suited for unit tests, generally things that have predictable inputs and outputs, and don't have dependencies or global effects. On the other hand, if you're testing boilerplate code with a lot of complex dependencies (e.g. an MVC controller), it's probably better to cover it with integration or acceptance tests.

You should move as much code as reasonably possible into unit-testable blocks, but going out of your way for 100% unit-test coverage leads to tests that aren't worth writing and updating.

Then there are tricks, like mocking outside services (so that you don't have to actually hit remote services when running acceptance tests) and comparison testing, i.e. not testing the contents of an XML document field by field but just storing it and comparing output directly to it. When testing APIs I also automatically test inputs and outputs on endpoints against a specification, which is a pretty good way of testing both the endpoints and the specification.
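A minimal sketch of that comparison-testing trick (hypothetical Python; generate_report and the golden-file path are made up):

from pathlib import Path

# Hypothetical report generator; in reality this might render a big XML document.
def generate_report(items):
    lines = ["<report>"] + ["  <item>%s</item>" % i for i in items] + ["</report>"]
    return "\n".join(lines)

# Comparison test: instead of asserting on individual fields, compare the whole
# output to a stored known-good copy (tests/golden/report.xml is created by hand
# once, from an output you have reviewed).
def test_report_matches_golden_file():
    golden = Path("tests/golden/report.xml").read_text()
    assert generate_report(["a", "b"]) == golden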

I also think that unit tests are a great way to force yourself to write easily testable code, which is usually better structured than non-testable code :)

But in general code needs to be tested if you care about it working. Any endpoint you don't test will eventually be broken.

Jan van Brügge

My take on unit tests is to avoid them. Write your software in a way that it could be tested easily, as this will keep your code decoupled, will force you to explicitly inject external stuff, and more.
But if you have a decent type system and a bunch of integration/end-to-end tests, unit tests are not worth the hassle.
After all, you don't care about implementation details as long as your module/component/insert-similar-here does the correct thing.

Scot McSweeney-Roberts • Edited

When I was at uni, unit testing didn't exist. We had "black box" and "white box" testing (which kind of map to integration and unit testing). But if anything, the idea that developers needed to write any testing code at all was seen as a general failure of Computer Science. There was an emphasis on things like formal verification (so, using mathematics to verify that code is correct) and the hope that you could just specify what you wanted and the program would be automatically created.

So I'm not surprised that your lecturer wasn't a fan of unit testing. In some respects unit testing is an industry-wide wrong turn, but then unit testing is a lot easier than some of the alternatives (have a look at Z Notation - when I was at uni the course on it brought people to tears).

tsetliff

I think it totally depends on the expectations of the system, how much experience you have, and the risk you can afford.

One of the systems I've been working on for about 14 years is a CRM. It's probably about a million and a half lines of code with a few hundred movable UI components. At one point we had around 10,000 unit and integration tests but have removed many of them. The thing is, it tolerates a fair amount of mistakes in edge cases, as they typically don't impact many employees at a time, because someone with knowledge of the business tested the feature before it went into production. My goal is to provide the business with a high ROI for development time, and over the years I've seen what works and what doesn't. These days I typically use very few unit tests. In fact, given the application, I try to limit the situations where I feel they are necessary at all, which results in fewer errors, faster turnaround, etc.

Most of my development is a UI, maybe some business logic, a model, and a DB. If I have tests, I make a class with dependency injection just to test the business logic, and in many situations there is no business logic to test. If there are very few critical paths or few people are using it, I may also not add unit tests.

Financial portions of an application typically receive many more tests.

If you are a new developer unit tests may be useful to help you understand the potential issues with the patterns you use.

Unit tests don't replace integration tests. Nor do they replace writing tests to reproduce bug reports (since, it seems, you missed that case the first time).

I think most new developers make things complicated enough to require tests because they are bored or because they think it is clever or they just don't think long term. It may unintentionally function as a training exercise. In my experience systems with a few repeated patterns over and over seem to stand the test of time much better. Much of the "complicated" code just comes from tried and true libraries you shouldn't be editing. The best business code is something someone else who isn't even a great programmer can sit down at and understand quickly so they can add additional features that are valuable to the end user.

You may however be in a very different situation. If you are writing a library to release, your company screams at you for every little error, your software will be installed in hardware and sold, it is customer facing, it is life or death, etc then I would change the way I write and test it accordingly.

As a note, one way I reduced complexity (and unit tests) was to not be afraid to move complicated things into the administrative user space where possible. This also allows the business to build groups of people who may not be "programmers" but who can set up complicated business logic on a test server, test it, and move it to production without a developer even being involved. Your software will be more resilient to change. Along those lines, I recommend moving anything that looks like a report to its own department or at least its own thought process/repo.

/rant (since they canceled our fireworks due to rain) lol.

Jason Steinhauser

I'm going to shamelessly plug my own Intro to Property-Based Testing ;-)

But seriously, PBT is a good secondary layer to proper unit tests. I'll be glad to answer any questions!
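For anyone who hasn't seen it, a tiny sketch of the idea using the Hypothesis library for Python (the encode/decode functions are made up for illustration):

from hypothesis import given
from hypothesis import strategies as st

# Made-up functions under test: a round-trippable run-length encoding.
def encode(s):
    out = []
    for ch in s:
        if out and out[-1][0] == ch:
            out[-1][1] += 1
        else:
            out.append([ch, 1])
    return out

def decode(pairs):
    return "".join(ch * count for ch, count in pairs)

# Instead of hand-picking inputs, state a property that must hold for any input:
# decoding an encoded string gives back the original string.
@given(st.text())
def test_encode_decode_roundtrip(s):
    assert decode(encode(s)) == s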

K

Yes, I read about this.

Even automatically generating inputs for JavaScript tests with the help of Flow type annotations.

I also liked the idea of mutation testing.

Daniel J Dominguez • Edited

To me, you should treat tests like features and features like wizards. As Gandalf said, "A wizard is never late, nor is he early, he arrives precisely when he means to." To me, 100% test coverage is always too early. By writing tests to 100% completion, you are saying that your features are 100% done and that your product, in turn, is 100% complete. So, when that requirement came in, there was no room for it, thus forcing a rewrite of the system to accommodate it. Tests aren't just an assertion that everything is complete, but a measure of how much work the product still needs.

Another way to think about it is using the same quote, but focusing on the last bit: "...he arrives precisely when he means to." Rather than testing what you say (code), you test what you mean (intent/behavior/requirement). Sometimes we developers only know the code; we don't know the requirements. If that happens, then any tests that we create may be worthless, as they do not express what was intended. Due to a lack of communication, you did not anticipate a new feature, thus creating new work. Some may argue that dependency injection would have solved this, but unless you apply that to the littlest model, there will be some way that this will get you. This is why agile was about building smaller and communicating faster.

I like to think about tests in a different context than TDD. Rather than testing to mean asserting, I like testing to mean trying it on, sort of like shoes. If I like the result, I will lock it in. This idea harkens back to when we started programming: code a bit, complain about a missing semicolon, compile it, play with it to see if it works, repeat. With this same idea, we just gain two things: it is automated and no manual input is required. This brings a different mindset into testing. Rather than building the test first or last, you build it with the feature. Code a bit of the implementation, code a bit of the test. Code a bit more of the implementation, code a bit more of the test. Rather than the test being something to assert against, it becomes an explanation of your intent. This is what it means for a test to become your documentation. Of course, that should not be your only form of documentation. Just because it passed the unit test does not mean it is correct behavior.