I have become more skeptical about the use of interfaces in object oriented programming languages like Java.
In particular when there is only one class implementing the interface. What's the point of an abstraction if there is only one concrete implementation?
Common arguments go like this, and I don't find them convincing.
The argument: unit tests should run in isolation, and in particular without accessing I/O and external services. Dependencies on I/O make tests brittle and slow. You need interfaces in order to switch to test doubles - like mocks and stubs - in your test code, enabling isolation.
I think these are some good points. But you don't need interfaces to test classes in isolation. Pass the dependencies into the constructor as instances of their concrete classes. Then use a mocking library like Mockito to create the test doubles.
Bottom line: testing in isolation is great, but you don't need interfaces for that.
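To make this concrete, here is a minimal sketch with hypothetical names: an OrderService depending on a concrete PaymentGateway class, no interface in sight. The test double is a hand-written subclass of the concrete class; Mockito achieves the same effect by generating such subclasses at runtime.

```java
// Hypothetical example: a concrete dependency, tested in isolation without
// any interface. Mockito's mock(PaymentGateway.class) would work the same way.
class PaymentGateway {
    boolean charge(String account, long cents) {
        // Stand-in for a call to an external payment service.
        throw new UnsupportedOperationException("external service");
    }
}

class OrderService {
    private final PaymentGateway gateway;

    OrderService(PaymentGateway gateway) {
        this.gateway = gateway;            // dependency passed as a concrete class
    }

    boolean placeOrder(String account, long cents) {
        return gateway.charge(account, cents);
    }
}

class IsolationDemo {
    public static void main(String[] args) {
        // Test double: overrides the concrete class, touches no I/O.
        PaymentGateway stub = new PaymentGateway() {
            @Override
            boolean charge(String account, long cents) { return true; }
        };
        System.out.println(new OrderService(stub).placeOrder("acme", 500)); // prints "true"
    }
}
```

One caveat worth knowing: this approach relies on subclassing, so final classes and final methods cannot be stubbed this way.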
Well-known architectural styles like hexagonal architecture promote the use of interfaces to abstract from the concrete technology and to clearly separate infrastructure from domain logic.
When you change your technology decisions significantly, for example move to a different framework, having an architecture based on these concepts will likely pay off. That's why I like the ideas, and have written about them.
Yet, think about this. Say you have only one concrete adapter that implements a certain (driven) port interface. If you consistently used the adapter instead of the port interface, would the application still work?
The answer is: yes. Why shouldn't it? That's not surprising.
But what if you wanted to introduce another implementation of the port interface later on?
Then you could do the following refactoring: replace the concrete adapter class with an interface of the exact same name, and make both the existing adapter and the new adapter implement that interface. And that's it, you're done. Maybe do some renaming if appropriate.
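A sketch of the end state of that refactoring, with hypothetical names: before, CustomerRepository was the concrete JDBC-backed adapter used directly by callers. Afterwards the old name denotes the interface, so call sites compile unchanged, and the original code lives on in the renamed JdbcCustomerRepository.

```java
// The old class name becomes the port interface; callers don't change.
interface CustomerRepository {
    String findNameById(String id);
}

// The original concrete adapter, renamed; its body is otherwise untouched.
class JdbcCustomerRepository implements CustomerRepository {
    public String findNameById(String id) {
        return "db:" + id;                 // stand-in for the real JDBC lookup
    }
}

// The second implementation that motivated introducing the interface.
class InMemoryCustomerRepository implements CustomerRepository {
    public String findNameById(String id) {
        return "memory:" + id;
    }
}
```

Most IDEs even automate the first step ("extract interface"), which is part of why deferring the abstraction is cheap.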
One thing I don't like about interfaces is that they stop you from "zooming in". In a well maintained code base that consists mostly of concrete classes, you can go from abstract to concrete. Look at the high level algorithms first, then jump to the definitions of called methods by pressing a single key in your IDE. That's the way I gradually increase my understanding of unknown code.
You lose that navigability once you hit an interface. It feels like you hit a wall at first, you have to look up the references and then see how the implementations match. If you have too many of them, the interfaces clutter up your code base and make it harder to understand.
Does that mean I consider interfaces useless? Not at all. There are good reasons for having interfaces. An obvious one is that you have several implementations in production code that you want to switch based on configuration. Another one is that you want to enforce hard boundaries between the work that different teams do. There are more.
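The configuration-based case looks roughly like this - a sketch with hypothetical names, where a Notifier interface has two production implementations selected by a configuration value:

```java
// Two production implementations behind one interface, chosen at runtime.
interface Notifier {
    String notifyUser(String user);
}

class EmailNotifier implements Notifier {
    public String notifyUser(String user) { return "email to " + user; }
}

class SmsNotifier implements Notifier {
    public String notifyUser(String user) { return "sms to " + user; }
}

class NotifierFactory {
    // In a real application the channel would come from a config file or an
    // environment variable; here it is just a parameter.
    static Notifier fromConfig(String channel) {
        return "sms".equals(channel) ? new SmsNotifier() : new EmailNotifier();
    }
}
```

Here the interface earns its keep: both implementations exist in production, and callers genuinely must not know which one they get.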
My point is just this: I wish there were more discussions about the detailed practical consequences of certain design decisions, rather than unquestioned belief in certain design principles. Architecture and design are all about trade-offs: you rarely find a solution that has only benefits and no downsides. And what works in one context may not fit another.
What do you think?