I've come to realize how much developers hate writing unit tests. I read Software Development is Not About Unit Tests a few months ago, and much of it has resonated with me through several re-readings since. I get it. I've written a lot of unit tests; they can be tedious and often harder to maintain than the code itself. It can seem like a waste of time, and the backlash is somewhat justified. Writing tests sucks...
Recently, however, I've begun challenging my team to think about writing tests differently. I propose that there is much more value in a test that describes the behavior of the system than in one that merely tests it. It may seem pedantic, and I don't expect it to be a huge revelation to anyone, but I think this perspective simplifies the answer to the "what do I test?" question.
The problem with the "I know this works, how can I verify that it does?" mindset is that you're missing an important piece of the answer: how does it work? I would challenge any of you "code is self-documenting" evangelists to look at any of the crap that your colleagues write, or that you wrote three weeks ago, and tell me exactly what it does and how it was meant to be implemented. I can't either.
If you've never seen the Agile Testing Quadrants, I encourage you to take a look at the chart. The bottom quadrants (1 and 4) are labeled "Technology Facing" and the top "Business Facing"; that means something. Testing is really about understanding the product you have and the users who use it. If you're able to describe and demonstrate how the product solves the needs of your user, then your "test" was successful. You can scale that same concept up (to end-to-end and user-experience testing: whole product-level and organization-level tests where the users are your customers) or down (to unit and integration tests on smaller slices of your product or its code, where the users are the developers).
In this way, the test becomes less of a "test" and more of a "checked example". I stumbled across this label in Exploration by Example, which is linked in the testing quadrants article above. I thought it accurately summarized the idea.
The checked example method turns the tests into documentation of the code and a record of the team's shared understanding of the system. I think going beyond that to document every behavior of the system is a little silly, so I tend to stick to functions and workflows that need to be documented and understood in some fashion. For unit tests, that could be because they include some business logic or have cascading effects throughout the application. For end-to-end tests, that means covering user-facing functionality, describing how the user uses the product to achieve a result.
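To make the framing concrete, here's a minimal sketch of what a "checked example" might look like as a unit test. The `apply_discount` function and its discount rule are hypothetical, invented purely for illustration; the point is that the test names read as documentation of behavior rather than as verification machinery.

```python
# A hypothetical piece of business logic: orders over $100 get 10% off.
def apply_discount(order_total: float) -> float:
    """Return the order total after any volume discount."""
    if order_total > 100:
        return round(order_total * 0.9, 2)
    return order_total


# "Checked examples": each test name describes a behavior of the system,
# so the suite doubles as documentation of the discount rules.
def test_orders_over_100_dollars_receive_a_10_percent_discount():
    assert apply_discount(200.00) == 180.00

def test_orders_of_100_dollars_or_less_are_not_discounted():
    assert apply_discount(100.00) == 100.00
```

A developer who has never seen the discount code can read those two names and know how the feature is supposed to behave, which is the whole point.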
You can't really verify that a product works without describing how it does. Chances are that your product is not very complicated, so your tests shouldn't be either; keep it simple and describe how the core functionality of your organization and product works. You can incorporate other techniques (user personas come to mind) to help you do that, but keep the barrier to entry low.
This idea is probably more or less encapsulated in Behavior Driven Development, but it's the framing that made testing "click" for me. The point is conceptual: it's more a way to think about writing tests than a method for actually writing them.