
Krste Šižgorić

Originally published at krste-sizgoric.Medium

Benefits of stepping into the unknown

A programmer’s job is to solve problems. This applies to everyone in the industry, no matter which programming language or framework we use. Our job starts with an idea that needs to be transformed into something others can use. And that process of transforming ideas into implementation can be done in many different ways. That is why there are so many technologies out there, so many different approaches. People tend to find different solutions to the same problems.

However, once we do solve a problem, if we run into the same or a similar one again, we will probably implement it the same way. The fact is, we do not do things in the most efficient way but in the way that originally led us to a satisfying result. We are not looking for the best solution; we are looking for something that gets things done and lets us move on. If we find something that works, we stick to it.

Leaving the comfort zone

It is hard to look at the same problem in a different light. Even if we try to do things differently, we tend to fall into the same tracks and end up creating very similar solutions. If the only tool you have is a hammer, you tend to see every problem as a nail. But a hammer is not the only tool in the bag, and we should try to have as many tools in the bag as we can.

One way of achieving this, and one I would definitely recommend, is experimenting with different technologies. Every technology has its own way of doing things. If a technology has a big community, it means a lot of people like to do things that way, so there should be something good we can learn from it.

Experimenting with different languages

I am a C# back-end developer, but I have experimented with different programming languages. I even worked as a PHP developer for a couple of years. While working on one of those PHP projects, I stumbled upon an interesting feature inside the ORM. The system had 19 roles. To make business logic more reusable, each model declared access limits per role. This meant that an additional condition was appended to every query, for every table involved in that query, and that condition varied depending on the user’s role.

This is something, as far as I know, that was not possible in Entity Framework or NHibernate at the time. You could append additional conditions, but only at the query level (that has since changed with Global Query Filters in Entity Framework Core). And this way of thinking was definitely not the way any .NET programmer I knew was thinking.
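To give a rough idea of what that looks like today, here is a minimal sketch of a Global Query Filter in Entity Framework Core. The entity, the `ICurrentUser` service and the property names are made up for illustration, not code from any actual project; the point is that the filter is declared once and appended to every query automatically.

```csharp
using Microsoft.EntityFrameworkCore;

// Illustrative entity; a real system would have many of these.
public class Order
{
    public int Id { get; set; }
    public int OwnerId { get; set; }
}

// Hypothetical abstraction over the logged-in user; not part of EF Core.
public interface ICurrentUser
{
    int Id { get; }
    bool IsAdmin { get; }
}

public class AppDbContext : DbContext
{
    private readonly ICurrentUser _currentUser;

    public AppDbContext(DbContextOptions<AppDbContext> options, ICurrentUser currentUser)
        : base(options) => _currentUser = currentUser;

    public DbSet<Order> Orders => Set<Order>();

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Declared once here, applied to every query that touches Orders,
        // unless a call explicitly opts out with IgnoreQueryFilters().
        modelBuilder.Entity<Order>()
            .HasQueryFilter(o => _currentUser.IsAdmin || o.OwnerId == _currentUser.Id);
    }
}
```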

Besides PHP, I also took a little “field trip” to Ruby. Even though I didn’t work a lot with Ruby on Rails, I did get the chance to witness the power of “Rails magic”. Rails is built on convention over configuration, and with Active Admin development can be really fast. If you name things a certain way, you get certain behavior and things work out of the box. This seemed so nice to me: you do things once, and with the right naming convention, everything just works.

Doing things a different way

I was given the task of setting up the architecture for a particular system. The programming language would be C#, and for storage we would use SQL Server. Pretty standard. I knew that this system would have a lot of business logic, but an equally good portion of the functionality would be standard CRUD operations. The system would have multiple roles and would be used by multiple internally developed client apps. Development also had to be fast due to short deadlines.

Since development speed was one of the factors, and a lot of actions would be simple CRUD operations, generics were a logical choice. Reusing as many things as possible would be great. And if something like this could be done with configuration, or even convention over configuration, that would be even better.

So this was my proposal: CRUD operations would be covered by generic implementations. If the generic implementation didn’t fit some use case, a specific implementation would be created.
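The generic surface looked roughly like this. It is a simplified sketch with illustrative names, not the actual interfaces from the project:

```csharp
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

// Simplified sketch of the generic CRUD surface (names are illustrative).
// One generic implementation covers the standard cases; anything that does
// not fit gets its own specific service instead.
public interface ICrudService<TEntity, TRequest, TResponse>
    where TEntity : class
{
    Task<IReadOnlyList<TResponse>> GetAllAsync(CancellationToken ct = default);
    Task<TResponse?> GetByIdAsync(int id, CancellationToken ct = default);
    Task<TResponse> CreateAsync(TRequest request, CancellationToken ct = default);
    Task<TResponse> UpdateAsync(int id, TRequest request, CancellationToken ct = default);
    Task DeleteAsync(int id, CancellationToken ct = default);
}
```

A single implementation of an interface like this, written once against the DbContext and the mapper, can be closed over any entity, while anything unusual gets a hand-written service.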

To limit the need for different actions for different roles, Entity Framework Core would be used in combination with Global Query Filters. For each entity we would configure access limits based on the role. This would increase the reusability of the generic implementation: you could use the same method for an admin, a manager or a basic user, and the same business logic would produce different results based on the user’s role. The same goes for update and delete methods, which would only affect entries the user has access to. If a new role were added to the system, most of the functionality would already work just by adding global query filters for that role.
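Concretely, a per-role filter might look something like the sketch below. The entity, the role names and the extra properties on the current-user service are all assumptions for illustration:

```csharp
// Illustrative entity with department- and owner-scoped access.
public class Invoice
{
    public int Id { get; set; }
    public int DepartmentId { get; set; }
    public int CreatedById { get; set; }
}

// Inside the same DbContext as in the earlier sketch, assuming the
// _currentUser service also exposes Role and DepartmentId (both assumptions).
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    // Admins see everything, managers see their department, everyone else
    // sees only rows they created. The same generic read/update/delete code
    // works for all three roles; a new role mostly means another branch here.
    modelBuilder.Entity<Invoice>().HasQueryFilter(i =>
        _currentUser.Role == "Admin"
        || (_currentUser.Role == "Manager" && i.DepartmentId == _currentUser.DepartmentId)
        || i.CreatedById == _currentUser.Id);
}
```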

Another thing I thought would be nice was to delegate select logic to models and use AutoMapper projection to retrieve only the data that is needed. I combined this with generic implementations that receive generic request and response models. Remember, these are simple CRUD operations: to update an entity, you receive a generic model, retrieve the entity from the database, remap the request onto the entity and save it. Since you can remap different kinds of models to the same entity, this opens a whole new level of reusability for the generic implementation.
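Here is a sketch of that projection with AutoMapper’s ProjectTo. Entity and model names are made up; what matters is that the generated SQL selects only the columns the response model declares:

```csharp
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using AutoMapper;
using AutoMapper.QueryableExtensions;
using Microsoft.EntityFrameworkCore;

// Illustrative entity and read model.
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; } = "";
    public decimal Price { get; set; }
    public string InternalNotes { get; set; } = "";
}

public class ProductListModel
{
    public int Id { get; set; }
    public string Name { get; set; } = "";
}

public class ProductProfile : Profile
{
    // Properties with matching names map by convention; nothing else to configure.
    public ProductProfile() => CreateMap<Product, ProductListModel>();
}

public class ProductQueries
{
    private readonly DbContext _db;
    private readonly IConfigurationProvider _mapperConfig; // AutoMapper configuration

    public ProductQueries(DbContext db, IConfigurationProvider mapperConfig)
        => (_db, _mapperConfig) = (db, mapperConfig);

    // ProjectTo folds the mapping into the query itself, so the SELECT contains
    // only Id and Name, not Price or InternalNotes.
    public Task<List<ProductListModel>> GetListAsync(CancellationToken ct = default) =>
        _db.Set<Product>().ProjectTo<ProductListModel>(_mapperConfig).ToListAsync(ct);
}
```

Because the mapping is convention based, the profile needs no per-member configuration as long as property names match.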

Result

The whole architecture was broken into small pieces that served as building blocks for the solution. We used them wherever we could, and only where we had to did we write a specific implementation. This way of doing things does have a steep learning curve and might seem a bit weird, but it was a good choice for this case. Four team members, starting from an empty solution, implemented almost 400 endpoints within two months. Everything we could, we delegated to the generic implementation, which gave us more time to devote to complex business logic.

If we needed to expand an existing endpoint to return an additional field, all we needed to do was add that property to the model. By following the naming convention, AutoMapper would do its magic, remap the entity’s property to the model, and the additional field would be added to the select statement.

If there was a need for two different sets of data based on the same entity (an index page with a list, a dropdown list, or some specific formatting), you could reuse the same method by forwarding a new model into it and getting different results. This could have been done with OData, but we had multiple clients, so I decided it was better to do this on the back-end side and prevent reinventing the wheel in each client app.
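In generic form that reuse boils down to a type parameter: the same method produces a different shape of data depending on the response model you hand it. Again, the names below are illustrative, not the project’s actual code:

```csharp
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using AutoMapper;
using AutoMapper.QueryableExtensions;
using Microsoft.EntityFrameworkCore;

// One generic read method; the response model decides the shape of the data
// (and of the generated SELECT), so the same method serves the index page,
// the dropdown and any other view of the same entity.
public class ReadService<TEntity> where TEntity : class
{
    private readonly DbContext _db;
    private readonly IConfigurationProvider _mapperConfig;

    public ReadService(DbContext db, IConfigurationProvider mapperConfig)
        => (_db, _mapperConfig) = (db, mapperConfig);

    public Task<List<TResponse>> GetAllAsync<TResponse>(CancellationToken ct = default) =>
        _db.Set<TEntity>().ProjectTo<TResponse>(_mapperConfig).ToListAsync(ct);
}

// Usage sketch: the same method, different models, different result sets.
// var list     = await productReads.GetAllAsync<ProductListModel>();      // index page
// var dropdown = await productReads.GetAllAsync<ProductDropdownModel>();  // id + name only
```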

The end solution was a hybrid of ideas from different technologies, compacted into something specific to the problem we faced. I believe this is a pretty unique approach in the .NET ecosystem. And it would never have been done this way if I hadn’t worked on a PHP project with ORM filtering, or if I had never worked with Ruby on Rails.

Conclusion

By experimenting with different technologies we expand our skill set. Of course, there is no need to learn every technology out there; specialization is still the best way of perfecting yourself. But a healthy amount of exposure to different technologies does give us new insights.

This makes us more flexible and resilient to change. And we do need to be resilient. Software development is a very dynamic profession; paradigms change all the time. Before React.js there was a consensus that business logic should be separated from presentation. Then React.js came along, changed all of that, and became the most popular SPA framework out there.

But trends are not the reason we should learn new things. We should learn new things so we can use them in situations where they are the best solution for the given problem. There is no silver bullet, no one approach to rule them all. We have a bag of tools and we choose which one to use. The more tools we have, the easier it gets to adjust to a given situation.
