DEV Community

Discussion on: Java may be verbose, but who cares?

 
Dan Lebrero

Hey Nicolas,

I cannot really comment. In the past 10 years, I have always worked in some kind of service-oriented architecture. I don't dare to call it microservices, as it is still not clear to me how micro is micro.

In such architectures, the biggest codebase that I have worked with was 20k lines of Java code.

If the code grew larger than that, it would be a symptom of too much complexity in the same place, and we would split it.

We found it easier to work with small, loosely coupled processes than with one big monolith.

We currently have more than 100 such services.

My only experience with large codebases is that nobody wants to touch them, mostly because they have grown into a monstrosity of spaghetti code.

Why are none of those codebases composed of small, decoupled components with clear interfaces and boundaries? If we had such clean codebases, the need to make a change/refactor across component boundaries would rarely arise, hence the need for sophisticated tools would be lower.

What if we could build programs so clean that tooling was not necessary? Wouldn't that be better?

I agree mutability has its value. No program would be useful without it.

But IMHO immutability has more value.

Thank god, Clojure doesn't make it idiomatic to use mutability, which means that writing mutable code requires an extra effort. On the other hand, Java requires you to make an extra effort when writing immutable code.
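To illustrate the point, here is a minimal sketch (the class and field names are made up) of the ceremony Java asks for when you opt into immutability: final fields, no setters, and copy-on-write "updates".

```java
// Immutable value class: every field is final, there are no setters,
// and any "change" produces a fresh instance.
final class Price {
    private final String symbol;
    private final long cents;

    Price(String symbol, long cents) {
        this.symbol = symbol;
        this.cents = cents;
    }

    String symbol() { return symbol; }
    long cents() { return cents; }

    // "Mutating" the price returns a new instance; the original is untouched.
    Price withCents(long newCents) {
        return new Price(symbol, newCents);
    }
}
```

None of this is enforced by the language; forget one `final` or add one setter and the immutability silently disappears, which is exactly the extra effort being discussed.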

Do I need to sell you immutability over mutability? What you mention seems like a strength on Clojure's side.

Thanks again for your comments and for the civilized tone. It is a pleasure talking with you.

Cheers,

Daniel

Thread Thread
 
Nicolas Bousquet • Edited

To be honest, microservices are a special case of modularity where the boundaries of a module are defined by network interfaces. The same core design of having separate components with low coupling still applies.

But the complexity of having 100 components of 20K LOC each isn't generally the same as that of one component with 20K LOC. After all, the total number of lines of code is 2 million!

If the design is good, some components never interact with each other, and that is the consequence of a good architecture. So this decreases the overall cost.

But if, for a typical application/product/whatever with a given set of features, you really need, say, 10 components of 20K LOC each, and these 10 components have nothing in common with the other 90 components, you can say you have 200K LOC for that application/product/whatever. It may not be fair to consider 2 million as the real number, but 20K isn't necessarily the truth either.

I wouldn't necessarily put the unit of an application/product/whatever at the service or component level, except if that service/component is totally isolated, without any interactions with other services/components.

But we have services there too, of course. Thousands of them. And many have lots of components inside. We just operate on a bigger scale.

What I can see from experience is that sometimes some data flows across many services, and then it needs to be updated. Maybe just a new tag, though in reality it is often a bit more complex than that, of course. The total cost of having that new data handled correctly among all the services is huge.

On mature software, a team may end up doing just that, adding a bit more information here and there, rather than lots of new things. But the cost of adding a piece of data to be propagated and handled correctly among many services is huge.

The coupling is low, but not zero. And the cost of maintenance and evolution still grows with the number of services or components.

This also changes the way things are designed. Time is spent analysing message exchanges, their orchestration and so on, with the associated documentation and impact. This shifts where the complexity is, but ultimately the complexity is still there.

Thread Thread
 
Dan Lebrero

Hi Nicolas,

Thanks a lot for the thoughtful answer.

We also have the problem that sometimes when adding a new field or adding a new enum value, we need to change several services.

In my experience this usually happens because we autogenerate our Java classes from some schema, which makes our classes too rigid.

I agree that microservices are more complex, but also more flexible.

Would building microservices inside a monolith give us the best of both worlds?

Thread Thread
 
Nicolas Bousquet

To me, there is no silver bullet. Microservices are just one way to componentize an application/product. They are great for some cases, terrible for others.

I would not consider microservices to componentize the plugins of an image editing application on the desktop, as an example ;)

To me, whether you use microservices, fat services, a single service or no service at all, the network side is not really what matters. What is important is that your components are well componentized.

Example: you or I may say that a service is XXX LOCs. But we do not include the JVM code. Neither do we include the apache-commons libs, or Spring, or the application server (or netty) code. If we use Clojure, we do not include that either. And we consider only the real source code, not the compiled code...

This is because these components are extremely well componentized. The abstraction they provide is so good that you never have to look inside at how they work. You can, and it has value, but you don't have to.

Often, even over the network, there is more coupling between components than we think. The data format we use, whether JSON or XML, is far less generic than we think, because if we try to incorporate the data from another provider, we may miss some concept entirely and the exchange format has to be reviewed.

These things are hard. Network services allow you to put things in different processes/computers, and that's nice. Resilient formats like JSON/XML help with adding new features without breaking existing code, as long as clients are smart enough to ignore what they don't know in the message.

But you can achieve this with services defined as interfaces in Java (or Clojure). The ideas are quite simple in the end. The input is created by the client, and the one that receives it does not modify it. The output is created by the service, and the service ensures there is no dependency on its internal state that would create issues.

The service doesn't take individual parameters, as adding more breaks client code and there is a narrow limit to what you can pass; instead, use POJOs or Clojure maps. In both cases, it is possible to extend the data structure and assume default values for things that are missing (with POJOs, the default value can always be present).
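A rough sketch of that idea in Java (the service and key names are invented for illustration): the service takes a single map rather than a parameter list, so new keys can be added later without breaking existing callers, and missing keys fall back to defaults.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// The service contract: one request object in, one response object out.
interface PricingService {
    Map<String, Object> quote(Map<String, Object> request);
}

class SimplePricingService implements PricingService {
    @Override
    public Map<String, Object> quote(Map<String, Object> request) {
        // Older clients that don't send "currency" still work: missing
        // keys get defaults instead of breaking the call.
        String currency = (String) request.getOrDefault("currency", "USD");
        long cents = ((Number) request.getOrDefault("cents", 0)).longValue();

        Map<String, Object> response = new HashMap<>();
        response.put("currency", currency);
        response.put("total", cents);
        // Return an unmodifiable view so callers cannot mutate the
        // service's output behind its back.
        return Collections.unmodifiableMap(response);
    }
}
```

The same shape works with Clojure maps, where not modifying the input is the default rather than a convention.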

Debugging inside the same process and performing integration tests is much easier and faster; you entirely remove the need for serialization/deserialization, and you don't have to manage the various network errors...

Don't get me wrong, you won't necessarily want everything in the same VM either. It depends on the problem you are trying to solve. A good architecture would bring natural boundaries, and some of these boundaries would be the network.

But the network itself brings lots of complexity. I can see that clearly. In my company we have thousands and thousands of services. Many of these services are what you would call large codebases, and we have many farms of servers. It quickly becomes complex to understand which service is the right one to call, which dependent services will be impacted when you want to add a new feature, how to scale while keeping latency low, how to scale the network itself...

One day or another, the comforts of the abstraction that protected us for years come back to bite us ;) There are hundreds of people in my company working on the enterprise bus, helping with the configuration and maintenance of all the services for various clients, and so on.

Thread Thread
 
Dan Lebrero

Hi Nicolas,

All very true.

The point that I was trying to make with the microservices is that I have found that they give you better components, I think because it is usually one team that produces them and several teams that consume them. That forces a more isolated and thoughtful design, forces things to be backwards compatible, and makes the boundaries obvious.

I do not think they make things simpler. As you say, the network is a huge headache on its own, but I think they force us to follow good practices.

On the other hand, with monoliths I find it easier to put in some hack, break encapsulation and end up with a big ball of mud.

Of course, I have also seen the death star from Netflix :)

Thanks a lot!

Dan

Thread Thread
 
Nicolas Bousquet • Edited

I agree that network services with a resilient exchange format (like XML, Protobuf, JSON...) greatly help with the backward compatibility aspect, and if you are serious about the docs, versioning and all, they are a great way to isolate a component.

I have always found componentization to be extremely hard to achieve. A web service can very quickly become non-backward-compatible, or expose a proprietary format or a data structure that is incomplete or not future-proof. The cost of maintenance is then huge.

When you are in the same process, the issues are far easier to solve, but they are also far easier to create, and once there are too many, nobody can manage to remove them. This is because languages like C, Java or Clojure, at least, are not really solving modularity issues.

Java 9 is going to take a shot at a module system, and OSGi has been around for quite some time in the Eclipse ecosystem. But that last one is not easy to use.

What we do in our project is that each component, in its simplest form, has 3 Maven modules: an interface module, an implementation module and an aggregator module that depends on both: the first with the standard compile scope, the second with the runtime scope.

Client code depends on the aggregator. It sees the public interface perfectly, but can't statically reference the implementation module, as it is not included in the classpath at compilation time.

Linking is done either by Spring (typically with annotations) or, for components that are not expected to be bound to Spring, by a singleton in the interface that accesses the implementation by introspection, typically in a ServiceLocator or equivalent way.
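A rough sketch of that ServiceLocator idea, with invented class names: the interface looks up its implementation by name via reflection, so client code never references the impl class at compile time. In the real layout described above, the implementation class would live in the runtime-scoped Maven module, absent from clients' compile-time classpath.

```java
interface Greeter {
    String greet(String name);

    // Singleton-style locator living in the interface module.
    static Greeter locate() {
        try {
            // The implementation class name would normally come from
            // configuration; it is hard-coded here for the example.
            return (Greeter) Class.forName("DefaultGreeter")
                    .getDeclaredConstructor()
                    .newInstance();
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException(
                    "No Greeter implementation on the classpath", e);
        }
    }
}

// In the real setup this class would sit in the "impl" module, visible
// to clients only at runtime.
class DefaultGreeter implements Greeter {
    @Override
    public String greet(String name) {
        return "Hello, " + name;
    }
}
```

Clients call `Greeter.locate()` and only ever see the interface; swapping the implementation is a runtime-classpath change, not a recompile.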

The implementation can implement any interface and have any kind of public/protected visibility, and clients can't abuse that without doing it on purpose (moving the class, making the access public or adding a compile-time dependency on the impl). Move to separate git repositories (likely one for a group of modules around a feature) and it becomes harder for that to go unnoticed.

What we still miss, but I guess could be added, is a build failure in Maven if you try to use any scope other than runtime on an "impl" module.

Thread Thread
 
Dan Lebrero

I worked with OSGi for four years back in the early days, and it is one of the best frameworks to force you to think about how to componentize your application.

But the most useful lesson was to think about how your system should behave if one of the components was not present or was being restarted/upgraded.

It is an experience that translated quite nicely to the microservices world and still helps me every time we build a new system. Somebody should write a book called "What happens if this dependency is not available?".

What OSGi did not teach me was that the network is a PITA. I learned that later :)

Thanks!

Dan

Thread Thread
 
Matheus Mohr

I just wanted to say it was a pleasure to watch your discussion guys haha

Thread Thread
 
Dan Lebrero

Kudos to Nicolas!