I was recently talking with a skeptic of software testing. In his view, writing code to test other code seems dubious. Why would you want to create and maintain a bunch of extra code to prove your original code does what it should? His argument was that testing was unnecessary and not worth the effort (you're going to manually test it anyway).
I can see his point of view, even if I don't agree with his sentiment. I think he was coming from a place of bad habit, having never written any tests for any of the application code he's ever built. From that point of view, testing can seem like a daunting task, requiring a ton of extra effort for no tangible benefit.
There are many kinds of software tests you can write, and all of them provide benefits. The longer the life expectancy of an application, the more benefit tests provide. You may not be the sole programmer on a project for its lifetime, and a well-specified test suite is actually an effective form of documentation.
Unit Testing involves writing code to call individual functions. While application logic may contain the complexity needed to solve business problems, the tests should be pure and simple. By following the AAA pattern (Arrange, Act, Assert, a.k.a. Given/When/Then), unit tests stay clear and easy to follow. Unit tests validate that business logic executes correctly and returns expected outputs across different input scenarios. Adding unit tests is somewhat tedious, but with practice they become so easy to write that there is no good excuse for skipping them. They provide proof that the code is well built and that individual units of code work as expected.
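The AAA pattern described above can be sketched as follows; this is a minimal illustration using a hypothetical discount function, not any particular framework:

```python
def apply_discount(price: float, percent: float) -> float:
    """Business logic under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_returns_expected_total():
    # Arrange: set up the inputs for this scenario
    price, percent = 80.0, 25.0
    # Act: call the unit under test
    result = apply_discount(price, percent)
    # Assert: verify the expected output
    assert result == 60.0

test_apply_discount_returns_expected_total()
```

Each test covers one scenario with one clear assertion, so when a test fails you know exactly which behavior broke.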
Integration Tests are similar, but instead of mocking dependencies you inject the concrete components of your application and test how they interact. This kind of testing can also validate that the software responds in a reasonable amount of time and performs acceptably under realistic conditions.
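To illustrate the distinction, here is a hedged sketch of a service with an injected dependency; a unit test would pass a stub, while the integration test below wires in a concrete component and exercises the interaction end to end. All class and method names here are hypothetical:

```python
class InMemoryUserRepo:
    """Concrete storage component (in-memory for the example)."""
    def __init__(self):
        self._users = {}

    def save(self, user_id, name):
        self._users[user_id] = name

    def find(self, user_id):
        return self._users.get(user_id)

class UserService:
    def __init__(self, repo):
        self.repo = repo  # dependency injected: real component or test double

    def register(self, user_id, name):
        if self.repo.find(user_id) is not None:
            raise ValueError("user already exists")
        self.repo.save(user_id, name)

def test_register_persists_through_real_repo():
    repo = UserServiceRepo = InMemoryUserRepo()  # concrete component, not a mock
    service = UserService(repo)
    service.register(1, "Ada")
    assert repo.find(1) == "Ada"  # verify the components cooperated correctly

test_register_persists_through_real_repo()
```

In a real application the repository would be backed by a database, and the integration test would run against a test instance of it.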
UI Tests are usually harder to write and likely to break over time with application changes. But they can replace or reduce the amount of manual regression testing you or your QA team has to perform.
I often hear from programmers working on personal projects that they have zero tests. Testing is something people tend to skip on personal projects because it feels like work, and developing the software functionality itself is more fun. I think as we develop better tools for building software, testing will become even easier, to the point where everyone's personal projects will include tests. There are already tools like randoop being developed to automatically generate unit tests!
I haven't said anything about Test Driven Development (TDD) yet. TDD is the practice of writing tests BEFORE you implement any code for a given new feature. I am not a TDD zealot; I think it is most useful as a practice for defect fixes. When you get a bug report and don't yet know the cause, it is usually pretty easy to write a failing test at some level that reproduces the bug. From there, you can step through the code, find the cause, and make the change that fixes the defect and passes the test. This is an efficient way to work through the bug, and the test provides regression coverage proving the correct behavior from then on. However, in my experience doing green-field development, writing tests first is a struggle. I prefer to at least design my components and interfaces at a high level before adding test cases.
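That bug-fix workflow looks something like this. The scenario is hypothetical: imagine a bug report that a slug function mangles consecutive spaces. You first write a test that reproduces the report and fails, then fix the code until it passes:

```python
import re

def slugify(title: str) -> str:
    # Fixed implementation. The buggy version used title.replace(" ", "-"),
    # which turned "Hello   World" into "hello---world".
    return re.sub(r"\s+", "-", title.strip().lower())

def test_consecutive_spaces_collapse_to_one_hyphen():
    # Written FIRST, straight from the bug report; it failed against
    # the old implementation and now guards against regressions.
    assert slugify("Hello   World") == "hello-world"

test_consecutive_spaces_collapse_to_one_hyphen()
```

Once the test passes, the defect is both fixed and documented.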
Bottom line: I think a testing suite provides tremendous value to an application. If I am joining a project, I would always prefer that the codebase have some form of testing in place as a way for me to understand and follow the code. It also gives me a sanity check that any changes I commit do not adversely affect other parts of the app (ideally as part of an automated CI build pipeline). To non-programmers and stakeholders, test coverage gives confidence that the software meets the requirements and is usable and stable. Software testing is absolutely a worthwhile endeavor!