The German mathematician Jacobi is reputed to have told his students 'man muss immer umkehren' ('invert, always invert' or 'one must always turn back', depending on who is translating).
As Warren Buffett's partner Charlie Munger puts it:
'It is remarkable how much long-term advantage people like us have gotten by trying to be consistently not stupid, instead of trying to be very intelligent.'
The basic idea is that we can often swap a tricky problem for a less tricky one by reversing our train of thought. For instance, it's generally difficult to work out what will make us happy, but much easier to work out what makes us miserable and then avoid doing that.
The reason I'm raising all this on a tech forum is that this principle is also very useful in coding.
One great example of this is outlined by Martin Fowler in a blog post on the 'You Aren't Gonna Need It' principle. Essentially, this principle argues that we should wait until a feature is actually needed before implementing it: we may never need the feature at all, in which case building it is wasted effort, and the extra complexity incurs an increased maintenance cost either way.
One of the objections to this principle is that it will be cheaper to design and build a prospective feature now rather than waiting until it is needed. We thus need to balance the cost of doing it now versus later and consider the probability of actually needing it later.
Fowler's suggestion to developers making this objection is to
'imagine the refactoring they would have to do later to introduce the capability when it's needed.'
He says he tends to find that not only do the developers conclude that it won't be that expensive in the future, but that they
'add something that's easy to do now, adds minimal complexity, yet significantly reduces the later cost. Using lookup tables for error messages rather than inline literals is an example that is simple yet makes later translations easier to support.'
In other words, instead of spending time worrying about what we will or won't need in the future, which is a hard problem, we swap it for the easier problem of 'what would be the cost of doing this in the future, and is there a cheap way to mitigate that cost now?'
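To make the lookup-table idea concrete, here is a minimal sketch in Python. The function names and the `ERROR_MESSAGES` table are hypothetical, not from Fowler's post; the point is only that centralising user-facing strings is cheap now and makes later translation much cheaper.

```python
# Inline literals scattered through the code make later translation costly:
# every call site would need changing.
def check_age_inline(age):
    if age < 0:
        raise ValueError("Age cannot be negative")

# A lookup table keeps all user-facing strings in one place, so supporting
# translations later only means swapping this table (or loading a localised
# version of it). Names here are illustrative only.
ERROR_MESSAGES = {
    "negative_age": "Age cannot be negative",
    "missing_name": "Name is required",
}

def check_age(age):
    if age < 0:
        raise ValueError(ERROR_MESSAGES["negative_age"])
```

Both versions behave identically today; the second simply removes a known future cost at almost no present cost.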
A related use of the inversion principle is that, in general, it is hard to know which code will turn out to be easy to maintain but fairly easy to know which code will turn out to be hard to maintain. The former requires predictive powers that we don't have, but the latter involves simple observation. We've all experienced the pain of maintaining code bases with unnecessary complexity, poor testing or missing documentation. Not doing the things that make code bad is perhaps not the same as making it good, but it's a reasonable start. Nassim Taleb talks (at length) about this idea in his book Antifragile.
Another place I've found inversion to be useful is in writing test cases. I use a TDD/BDD style approach, and the tendency there is to think in a functionality-driven way. We think about what the user might do and write test cases based on that. However, it is also useful to think backwards. That is to say, think about what unpleasant states your application might get into, and then ask how your user might generate those states.
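As a sketch of thinking backwards, here is a hypothetical Python example (the `Cart` class is invented for illustration). Rather than starting from a user story, we start from an unpleasant state, a negative order total, and work back to inputs that might produce it.

```python
# Forwards thinking: test what the user is expected to do (add items, pay).
# Backwards thinking: pick a bad state (a negative total) and ask which
# user actions could ever produce it, then test those.

class Cart:
    def __init__(self):
        self.items = {}

    def add(self, name, qty, price):
        # Rejecting bad inputs here is what keeps the bad state unreachable.
        if qty <= 0 or price < 0:
            raise ValueError("quantity must be positive, price non-negative")
        self.items[name] = (qty, price)

    def total(self):
        return sum(qty * price for qty, price in self.items.values())

def test_total_never_negative():
    # Derived backwards from the bad state: a user submitting a negative
    # quantity (e.g. via a mangled form field) must not drive the total
    # below zero.
    cart = Cart()
    try:
        cart.add("apple", -3, 1.50)
    except ValueError:
        pass  # rejected input is the correct outcome
    assert cart.total() >= 0
```

The test was not suggested by any user story; it fell out of asking 'how could the total ever go negative?'.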
And, of course, probably the most fundamental use of the inversion principle for a developer is this: instead of always asking 'am I right?', ask 'what would be the cost of being wrong?'. Never create a problem for yourself that you can't fix.
It's a simple trick but a useful one.
Thanks for reading. Thoughts and comments welcome.