
Krste Šižgorić


Making custom tools

When a new feature, a part of the system, or the system as a whole needs to be implemented, there is always a debate about which way to go.


Do we go for an existing solution and adjust it, or do we build it ourselves? And to what extent do we choose one option or the other?

There is great benefit in building things from the ground up, since they are fully adapted to our needs, but it is a longer road. It is time-consuming, and we need to put in much more effort. And in the end, our problem might not even be different enough to require that level of customization.

Existing solutions give us the benefit of being usable right away. They can give us a nice base to build upon, but they force us to adjust to their principles. Perhaps the way we do things is the very advantage that makes us stand out, and those principles could invalidate it. Or we simply do not need the complexity that an existing solution forces upon us, and a simpler solution would better meet our needs.


To make things easier, let’s focus on libraries. When we need some functionality and decide to go with a custom solution, we control the direction in which that code will evolve. If we need a feature, we simply implement it, and we implement it the way we want.

If there is a breaking change, it is there because our needs require it. We can control the extent of that breaking change and mitigate its impact on the existing codebase. If there is a need, we can go for micro-optimizations. We can do what we want because we have ownership over it.

By using a library, we give up all control over that part of the code. In exchange, we often get the benefit of not worrying about it. It just works. And if it does not work, there is often someone who will fix it for us. If it needs to be optimized, someone will probably optimize it. Or it was already considered when it was originally implemented.

With the rise in popularity of open source, things are shifting a little: we do have some control over the code. We can contribute to an open-source project, but the maintainers still decide whether our contribution and use cases are in line with their vision of how the library should work.

This is always a risk for our project. A good example is RestSharp, a simple HTTP API client library. At some point in 2022, in version 107, the maintainers decided to do some refactoring, and in that process all interfaces except IRestClient were removed. Whether that was a good or bad decision is not the point; the point is the impact it had on consumers of the library.

Consumers could either update their implementation and unit tests to reflect the new changes, or decide not to update the library and lock themselves into the last compatible version.

Code updates require an unknown amount of time, and in business, time is money. There is always an opportunity cost: the effort of adjusting code to a library change could have been directed into implementing new features.

Not updating, and using an unmaintained version of a library, exposes them to potential vulnerabilities and bugs, and deprives them of new functionality and bug fixes.

Neither option seems appealing. After a heated discussion there was a compromise, and the interfaces were brought back in version 109. And this is a simple HTTP API client library.


Our code is our responsibility. It is up to us to test it properly, discover bugs, and fix them. Documenting behaviors and functionality is mandatory if we want to avoid unnecessary bugs caused by assumptions about how things work. This is a lot of additional effort.

Libraries often provide all of that. Instead of investing time and money in creating something that already exists, we can simply reuse existing tools. Libraries often offer configuration options with which we can adjust their behavior to our needs. And there is always the possibility of abstracting a library and adding behavior on top of it.

With a custom tool, if there is a bug or vulnerability, we might not even know it exists. Existing tools often have pretty good test coverage. And if they do not, we can always choose a different tool.

Each bug in a custom tool stretches our resources more and more, and we might find ourselves spending more time maintaining the tool than developing the system it was made for. For specific topics it might even require specialized knowledge to make it work.

This is an additional load on developers. Alongside knowing the domain, there is a need to know technical details that are not directly related to that domain but are tool-specific. We no longer work on one project, but on two. We might not see it that way, but that is what it is.

Using well-established tools gives us the certainty that if something is not working, it is probably due to a mistake in our own code. Depending on the time invested in creating it, we often cannot say the same for a custom tool. And so we find ourselves examining and verifying a custom tool’s code because something does not work while we are using it.

I don’t know why, but I stumble upon a lot of custom ORMs. In the dotnet ecosystem there are plenty of well-established ORMs like Entity Framework Core, NHibernate, Dapper, or Linq2Db. Pretty much all possible functionality is covered, and each of them is properly tested. There is truly no need to create yet another ORM just for one specific project.

Yes, it is fun to make them, but for a specific project there is no direct benefit in doing so. What benefits a system is mostly its functionality, and rarely how that functionality is implemented under the hood.

Only when our implementation requires too many compromises to use a certain tool is there justification for ditching it and building from scratch. Otherwise, we are writing a custom implementation just because it is fun.

And it stops being fun when a critical bug is discovered in our custom tool and we need to stop everything and try to fix it as fast as possible. Not to mention that it could block other developers from doing their job until we fix it. The whole development effort is on standby, and the damage to the business (even if not directly visible) is massive.

Or when our custom tool finally goes to production and ends up having horrible performance. Something needs to be optimized, but each “optimization” breaks some other feature. A never-ending spiral of making it work could eventually degrade code quality to the point where it is unmaintainable.

The idea of creating a custom logging tool is great until you have thousands of log requests and you start dealing with concurrent writes to the destination. By using a logging library, we do expose ourselves to vulnerabilities like the one in Log4j, but on the other hand, in pursuit of functionality we could have implemented the same vulnerability ourselves without even knowing about it, without being able to locate it, and without knowing how to fix it.
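To make the concurrency problem concrete, here is a minimal sketch in Java (all names are hypothetical) of the kind of machinery a custom logger ends up needing: instead of letting many threads write to the destination directly and interleave their output, every entry is funnelled through one queue drained by a single writer thread.

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of a logger that serializes concurrent writes:
// producers enqueue entries, one writer thread drains them,
// so only a single thread ever touches the destination.
class QueueLogger {
    private static final String STOP = "__STOP__"; // sentinel to shut the writer down
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    private final List<String> sink = new CopyOnWriteArrayList<>(); // stands in for a file
    private final Thread writer;

    QueueLogger() {
        writer = new Thread(() -> {
            try {
                while (true) {
                    String entry = queue.take();
                    if (entry.equals(STOP)) break;
                    sink.add(entry); // only this thread writes to the destination
                }
            } catch (InterruptedException ignored) { }
        });
        writer.start();
    }

    // Called from any number of threads; just enqueues, never blocks on I/O.
    void log(String message) { queue.add(message); }

    // Flushes remaining entries, stops the writer, returns what was written.
    List<String> close() throws InterruptedException {
        queue.add(STOP);
        writer.join();
        return sink;
    }
}
```

This is only the first of the problems a real logger faces (rotation, formatting, backpressure, flushing on crash), which is exactly the point of the paragraph above.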

Importance of abstraction

It is hard to determine what an external dependency is. For example, if we have a custom implementation of the mediator pattern to communicate between different parts of our system, is that an external dependency or not? If we decide to extract it into a separate project and use it as a library, does the nature of that code suddenly change? Is it now an external dependency? We still have full control over the code we are using.

Can we look at some other libraries the same way? MediatR is a simple implementation of the mediator pattern. Should we treat it differently from our custom implementation? If it provides a fundamental building block for our solution, is it a dependency or just that: a building block? How does it differ from components of the framework we are using?
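To show how small that building block really is, here is a minimal mediator sketch in Java (the names are illustrative, and this is far simpler than MediatR): each request type has exactly one registered handler, and senders never know who handles their request.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Minimal in-process mediator: requests are dispatched to the single
// handler registered for their concrete class.
class Mediator {
    private final Map<Class<?>, Function<Object, Object>> handlers = new HashMap<>();

    // Register one handler per request type.
    <R> void register(Class<R> requestType, Function<R, Object> handler) {
        handlers.put(requestType, req -> handler.apply(requestType.cast(req)));
    }

    // Dispatch a request to its handler; the sender has no idea who handles it.
    Object send(Object request) {
        Function<Object, Object> handler = handlers.get(request.getClass());
        if (handler == null) {
            throw new IllegalStateException("No handler for " + request.getClass());
        }
        return handler.apply(request);
    }
}

// A hypothetical request type, used only for illustration.
class Ping {
    final String message;
    Ping(String message) { this.message = message; }
}
```

Usage looks like `mediator.register(Ping.class, p -> "Pong: " + p.message)` followed by `mediator.send(new Ping("hello"))`. Whether these thirty lines live in our repository or behind a NuGet package, the code is the same; only our relationship to it changes.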

There is no one-size-fits-all solution. In the case of RestSharp, a simple abstraction would have saved anyone using it a lot of trouble. But the same thing could happen to us with a custom tool. Just because we control the code does not mean we have not locked ourselves into a specific implementation.

We might want to replace our custom tool with an existing library because it is hard to maintain, but if there is no clear abstraction between implementation and consumption, we might not be able to do so. And then we simply keep using something that nobody wants, and it becomes a source of a lot of frustration.

We should differentiate between a fundamental building block of our system, something we choose to tie ourselves to, and a tool that does not affect our business logic but is just a means to an end.

The first does not require abstraction. Like numbers and formulas in math, it is the base on which we build everything else in our system. In the other case, we should have some level of abstraction over something that is, in essence, just a tool: a simple thin layer that protects us from something we cannot control and that could jeopardize our project.
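A sketch of such a thin layer, in Java with hypothetical names: the rest of the codebase depends only on our own small interface, and the underlying client (a library today, a custom tool tomorrow) is confined to one adapter class, so a RestSharp-style breaking change touches a single file.

```java
// The whole codebase depends on this interface, never on the client library.
// Names (HttpFetcher, JdkHttpFetcher) are illustrative.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

interface HttpFetcher {
    String get(String url) throws Exception;
}

// One adapter per underlying implementation; here, the JDK's built-in client.
// Swapping libraries means writing a new adapter, not rewriting consumers.
class JdkHttpFetcher implements HttpFetcher {
    private final HttpClient client = HttpClient.newHttpClient();

    @Override
    public String get(String url) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
        return client.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }
}
```

A side benefit is testability: in unit tests a stub like `HttpFetcher fake = url -> "stub:" + url;` replaces the real client, which is precisely what the removed RestSharp interfaces used to enable.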


Custom tools are a double-edged sword. On the one hand, they are fun to make, and they are a great way to learn a technology and design practices. On the other hand, they are additional work and a potential threat to the system. Even more so than external tools, because we do not look at them as dependencies, and they can creep up on us. And an unmaintained dependency is the worst.

So, when choosing which way to go, let’s just use common sense. If the purpose of a custom implementation is a fun side project, go for it. If it is work-related or supporting functionality… think carefully about what it brings to the table and whether it is worth it. We should not settle for OK just because it is easy. But a good solution is better than a perfect, unachievable one.
