You should reconsider whether you actually want a unit test for what you're testing there. The point of a class is encapsulation, meaning you should ask for the latest returns and not care how the class produces them.
Look at your mock, however. It's very concerned with the how. It won't work with an interface that provides findLatest.
While the mock allows the test, in a strict sense, to conform to being a unit test under the definition that you should only test the unit, in reality it's also breaking OOP by turning things inside out.
This is a common problem, also seen with things like checked exceptions, where things are suddenly made to be aware of the call graph when they're not supposed to be.
When I say it breaks OOP, I don't mean in a theoretical sense. It imposes overhead that's immediately discernible: just look at how much of the code in that test is mock.
Loosening the strict definition of a unit test, on the other hand, might not break the test. You could consider reducing it to three or four lines and making it implementation agnostic.
How to improve it is up to you, but this might give you some ideas...
The mocking here is so verbose and implementation aware that you could practically generate the method and its implementation from the metadata stored in the mock.
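For illustration only, a minimal sketch of the contrast (the ReturnService, repository, and return-value names here are hypothetical, not taken from the original test):

```python
from unittest.mock import MagicMock

# Hypothetical service under test: asks its repository for the latest return.
class ReturnService:
    def __init__(self, repository):
        self._repository = repository

    def latest_return(self):
        return self._repository.find_latest()

# Implementation-aware style: the mock pins down *how* the service talks to
# its collaborator, so swapping find_latest for another retrieval mechanism
# breaks the test even though the observable behaviour is unchanged.
def test_with_mock():
    repo = MagicMock()
    repo.find_latest.return_value = "return-42"
    service = ReturnService(repo)
    assert service.latest_return() == "return-42"
    repo.find_latest.assert_called_once_with()  # couples the test to the call graph

# Implementation-agnostic style: a tiny fake honouring the same interface.
# The test states only the observable behaviour, in a few lines.
class FakeRepository:
    def find_latest(self):
        return "return-42"

def test_with_fake():
    assert ReturnService(FakeRepository()).latest_return() == "return-42"
```

Both tests pass today, but only the second survives a change in how the service talks to its repository.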
Hi Joey, thank you very much for putting the effort into such a descriptive comment!
First, I have to say that I read your comment a couple of times, and I agree with some of your points 100%.
When it comes to the improvements you proposed, I personally strongly disagree with point one ("Consider simply not having that test. Is it really useful?"). I hear that opinion, along with point two ("Make it another type of test, such as an integration test; you can use SQLite in memory, etc."), from time to time, but I don't think it's the way we should go. I can imagine you don't think chasing 100% unit test coverage is the right goal, and that's the point where I usually disagree with colleagues who take an approach similar to yours. In my opinion, unit tests should have 100% coverage, with one test class per production class; integration and functional tests should also have a place in the repo, but not as a replacement for unit tests.
Just to be clear, I don't think my opinion and approach are necessarily right on the points where we disagree; I've just found them useful and beneficial in my experience.
I'm also curious: what, in your opinion, is a good argument for not having a unit test for some class?
The issue is that there are, basically, many ways to skin a cat. Starting out, a very scripted and prescriptive approach can feel more comfortable, and there's only so much you can learn at once. On some level things like 100% coverage might seem right, but that will change once you've reached a certain point.
The issue is that you've got things backwards, which isn't hard to do. The ends are what should come first, yet you're treating the means as what must come first.
Your statement that there should be 100% coverage isn't actually particularly useful. If you wrote down every single thing you "should" do in an activity, the list would be too much.
It's not "you should have coverage by unit tests unless you shouldn't"; it's "you shouldn't have coverage by unit tests unless you should". Should you have everything? Must you have everything? You can't make a program entirely from unit tests, but you can make any possible program without them. Unit tests are always an extra, and anything you add to the minimum must be justified. Adding a unit test is always a guaranteed cost. It's not pay later, benefit later; it's pay now, pay later, and hope that one day your gamble pays you back. The minimal cost of a unit test can always be proven. The benefit of a unit test can never be proven unless it actually fails, and even then it's only 50% likely to be a gain, as the test itself may be the thing that's broken.
That's not to say you're wrong to cover an essential base with unit tests, and it's not bad to learn how to implement them well, but in reality you'll find yourself being principle-driven rather than results-driven, and that doesn't work out so well.
What you should do is strive to produce working software and to deliver it. You should be able to find out when it's not working, so you know when you're not finished. A good testing strategy gives you a good definition of done and delivered.
Unit testing is not a good testing strategy; it's just a testing strategy. It's not good unless it's good, and if you apply it in a blanket fashion then mostly it won't be.
It's not easy. There can be too many ways or not enough. It's common for there to be either no two ways about it or many ways to skin a cat, and the balance between the two extremes is often poor.
When you say 100% coverage by unit tests, coverage of what? Lines of code? Features? Requirements? Not all of that needs to be covered by unit tests, and unit tests as commonly implemented can't cover everything. Unit tests only tell you about the functionality of the parts, not the whole. 100% coverage by unit tests isn't a real goal; working software is a real goal.
While coverage is a very important topic, it's not solely the domain of unit tests. You want to be very careful of the unit testing brigades that try to dictate that people go to extremes with it. Many of them take inspiration from inappropriate places, such as software written to keep satellites from falling out of the sky. Many people will also promote excessive coverage by a given mechanism in a given domain by abusing retrospect: I regret the numbers I played in the lottery this week, and if only I'd had 100% coverage I would have won. It's not actually always that feasible.
I could talk about alternatives to unit testing, but in fact I think unit testing itself is massively problematic today: it insists there can only be one way to do it, and it has become overly constrained while sometimes trying to deliver too much.
The biggest problem is OOP. That's not to say we should drop OOP, but we must understand how the combination causes problems. OOP does not lend itself well to creating well-defined interfaces. If I sit in a car, it just has what I need; I can do a fairly thorough test just by putting the key in and turning it. The problem with OOP is that it turns things inside out: all the internal components and their interfaces are exposed so people can inject new behaviour or reuse them. Those aren't always positives, as they come at a cost. If you're not careful, you can also end up creating spaghetti where units are eliminated, or rather reduced to leaves. Your integration test can actually be a unit test: if you're mocking external dependencies with SQLite, then it's a unit test. An integration unit test, not a class unit test.
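As a sketch of that last point (the repository, table, and column names here are made up): a test that swaps the real database for an in-memory SQLite instance still exercises one well-bounded unit, just a larger one than a single class.

```python
import sqlite3

# Hypothetical repository: the unit under test is "storage + query"
# as a whole, not any one class in isolation.
class ReturnRepository:
    def __init__(self, conn):
        self._conn = conn
        self._conn.execute(
            "CREATE TABLE IF NOT EXISTS returns (id INTEGER PRIMARY KEY, amount REAL)"
        )

    def add(self, amount):
        self._conn.execute("INSERT INTO returns (amount) VALUES (?)", (amount,))

    def find_latest(self):
        row = self._conn.execute(
            "SELECT amount FROM returns ORDER BY id DESC LIMIT 1"
        ).fetchone()
        return row[0] if row else None

# The "integration unit test": the external dependency is replaced by
# :memory:, but the behaviour of the whole compound is what's asserted.
def test_find_latest():
    repo = ReturnRepository(sqlite3.connect(":memory:"))
    repo.add(1.5)
    repo.add(2.5)
    assert repo.find_latest() == 2.5
```

No SQL statement or class boundary is pinned down by the test; only the observable behaviour of the compound is.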
What I'm getting at is that the definition of a unit needs to be multidimensional, not one-dimensional. Currently the definition is one-dimensional at the leaf level: the unit is the class. That's not right. It's a fixed level, but why fixed? Why not per operation? Why not higher, and why not lower? A unit isn't necessarily size dependent. Units at one level should come together to form units at the next; that way you can start from the top. Test your country, then cities, then towns, etc. A compound can be a unit or an atom, as long as it has clearly defined boundaries and a meaningful definition and purpose. If libraries are just flat bags of objects you can pick and mix from, then you'll end up having to unit test each object rather than the entire module. You'll end up with ten tests and a thousand lines where one test and a hundred lines would have gotten you as much. The fixed definition of the unit as the class is causing immense damage to architecture by forcing every class to be part of an interface exported everywhere else. It's far too expensive.
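As a toy sketch of units composing into larger units (all the class names here are invented for illustration): one test at the compound's boundary can cover several leaf classes at once.

```python
# Three tiny leaf classes that would conventionally each get their own test.
class Parser:
    def parse(self, text):
        return [int(x) for x in text.split(",")]

class Summer:
    def total(self, numbers):
        return sum(numbers)

class Formatter:
    def render(self, value):
        return f"total={value}"

# The compound with a clearly defined boundary: treat *this* as the unit.
class Report:
    def build(self, text):
        return Formatter().render(Summer().total(Parser().parse(text)))

# One boundary-level test exercises all three leaves at once; per-class
# tests would only be added later, where this one leaves coverage gaps.
def test_report():
    assert Report().build("1,2,3") == "total=6"
```

Starting here and subdividing only where coverage runs out is the top-down approach described above.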
This is compounded by secondary concerns, for example splitting tests to avoid getting the same coverage twice. That's premature optimisation: don't do it until you have a real problem, and when you do, address that problem, not all possible problems. It's just dividing one test in two, yet the current default is to divide all tests fully, first. You rarely even have to divide a test in two; instead you just add another test, either at a different depth or side by side, to fill missing coverage when the existing tests reach their limit. It's a simple process of recursive subdivision, lost in unit testing: it has gone from log n to n. You write one test at the top-level unit, giving the most coverage, then out of what's left you repeat.
People also try to use this granularity to narrow down on a problem. It's a waste of time: unit tests only need to tell you there's a problem, then you track it down. It's nice to know roughly where it is, but people should only take what they get for free; beyond that, it's a cost spent replacing something humans are better at, which is tracking down a problem in a search domain. Writing a hundred extra tests to find out which of a hundred classes has a problem is dividing the search space by n at the cost of n, which boils down to nothing but having doubled the size of the codebase. Humans already divide the search space logarithmically when tracking down bugs, and thanks to heuristics and selective search they often get it in O(1). Those same heuristics can gradually be used to structure tests; demanding O(n) loses the opportunistic optimisations where 20% of the effort does 80% of the work.
I can't tell you which strategy and structure will work best for your set of problems, but I can tell you that overly rigid, almost ritualistic programming, reducing it to a pure discipline rather than a thinking exercise, isn't going to yield optimal results, even if that's the stated intention. It's important both to keep an open mind and to question everything.
I can also tell you that this isn't theoretical. Everything I describe going right or wrong I've seen in the real world and worked out intimately why. After decades in the programming industry you learn from your own mistakes as well as from centuries' worth of mistakes made by other programmers.
I find it very disturbing today to see prejudice where a young programmer will pass judgement on my code, ask me if it has unit tests, and if not, dismiss the code as not meeting some imaginary standard. I then peer directly at their unit-test-laden code, and not only is it awful on nearly every front, but I immediately find bugs, incompleteness, infinite loops, security holes, etc. Their code rarely and barely works. Combine that with unit testing being something where I've been there and done that, and it's a big nail in the coffin for those trying to make it mandatory. They're trying to make their bad code look best by inventing the criteria and requirements for quality assessment, so they can justify replacing good code with their bad code, then sell theirs as better because the old code never had unit tests. You should ask anyone whether they tested their code, whether they know it's good and will work, before submission, but don't feel like you have to be a victim of unit testing bigots who do more to ruin it and make it bad than to make it good. The responsibility for producing good code is on you, not on unit tests or any methodology. Only you can save mankind.
Doing everything with a single type of unit test might seem like a good idea, as well as simple, one to rule them all, but since that can't cover everything, even when you have 100% of the coverage unit tests can give, you're just going to have to bite the bullet: you'll have to manage stratified coverage anyway. Almost everything has diminishing returns. Don't go for 100%; that's saying ignore diminishing returns. Stop with unit testing when the returns are poor and switch to another tier. You're very nearly there, but you haven't defined a stopping point or a barrier. Essentially, that's working out what to take away where. One testing type cannot necessarily replace another in its entirety, but you're also losing out on the places where it can. You can replace a bunch of per-class tests, for example, if you get the same out of a test at the root level once the mocks are removed. The only way to be efficient, though, is to take control of the code and not let a little bit of resistance, like having to think about something rather than taking automatic action, be a blocker.
Your biggest fear to overcome is removing things. Most likely you're afraid to propose removing unit tests where it might be good to do so, to avoid invoking the ire of bullies who want to boss everyone around and make everyone do everything all the time, whether it's good or not. Just tell me which part of the playground the bullies are in and I'll go sort them out.
Something else to keep in mind: while you may have eliminated something that knew too much about the internal implementation, that's only one step back. Your unit test is still intimately aware of the external interface, the dependencies and, indirectly, the inner workings through its input and output expectations. Basically, if you have to change anything public at a scope above the method, or you need to change the nature of the input/output, then your unit tests have to change as well. That means they're not well suited to change. When you have unit tests everywhere that are intimately aware of your object graph, you must change them every time you change your interfaces. Unless your design is almost entirely set in stone through something like UML, you're going to struggle when you need to make sweeping changes to interfaces as well as input/output. This is why you go top down, so that tests are only as implementation aware as they need to be. Classes and method prototypes are also implementation.