
Andy McCright

Originally published at andymc12.net

Practical Cloud-Native Java Development with MicroProfile

For the past six months, I’ve had the privilege to work with some amazing co-authors (Emily Jiang, John Alcorn, David Chan and Alasdair Nottingham) and a super-supportive publisher (Packt) to produce the definitive guide to Java Cloud-native development.

So, what’s in it? My co-authors and I don’t like fluff – it’s titled Practical for a reason. The book takes a deep dive into MicroProfile technologies – this includes some of the technologies that MP “borrowed” from Jakarta (REST, CDI, JSON-P/B) as well as the original MicroProfile technologies like Rest Client, Config, Fault Tolerance, Metrics, etc. But this book isn’t a textbook – there are practical examples with a real-world application at the core.
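To give a flavor of what that looks like in code, here's a minimal sketch of MicroProfile Config injected into a JAX-RS resource. The class and property names are made up for illustration and aren't from the book's sample application:

```java
import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;

import org.eclipse.microprofile.config.inject.ConfigProperty;

// Hypothetical resource, not taken from the book's sample app.
@ApplicationScoped
@Path("/greeting")
public class GreetingResource {

    // The value can come from any config source (environment variable,
    // system property, microprofile-config.properties), with a default
    // used when nothing is set.
    @Inject
    @ConfigProperty(name = "greeting.message", defaultValue = "Hello")
    String message;

    @GET
    public String greet() {
        return message + " from MicroProfile!";
    }
}
```

Because the value is resolved from external config sources at runtime, the same artifact can behave differently per environment without a rebuild.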

We also cover deployment strategies, such as how to run and test your application on open source servers like Open Liberty – then deploy it in Docker and Kubernetes. All of the examples in the book are available on GitHub.
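As a taste of how the application side ties into a Kubernetes deployment, here's a rough sketch of a MicroProfile Health readiness check; Kubernetes readiness probes can point at the /health/ready endpoint that Open Liberty exposes for checks like this. The class name and the database check below are hypothetical:

```java
import javax.enterprise.context.ApplicationScoped;

import org.eclipse.microprofile.health.HealthCheck;
import org.eclipse.microprofile.health.HealthCheckResponse;
import org.eclipse.microprofile.health.Readiness;

// Hypothetical readiness check; a Kubernetes readiness probe can poll
// /health/ready and only route traffic to this pod when it reports UP.
@Readiness
@ApplicationScoped
public class DatabaseReadyCheck implements HealthCheck {

    @Override
    public HealthCheckResponse call() {
        boolean dbUp = pingDatabase(); // stand-in for a real connectivity check
        return HealthCheckResponse.named("database").status(dbUp).build();
    }

    private boolean pingDatabase() {
        // Placeholder: a real check would attempt a connection or a trivial query.
        return true;
    }
}
```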

So, where can I get it? I’m glad you asked! It’s available for pre-order on Amazon and Packt, and it’s scheduled to release in September 2021.

Anything else you can tell me about it? You bet! I plan to post some excerpts from the book on this site in the next few weeks. Stay tuned!

Top comments (5)

Sergiy Yevtushenko

What does "cloud-native" actually mean?

Andy McCright

We've got a whole chapter on that! :) The short answer is "applications built specifically for cloud deployment". Alasdair talks about this in chapter 1 - how you can build a highway intended for 70mph (or ~115kph), but if you drive a horse and buggy on it, you won't achieve those speeds; you need a modern vehicle. So, while it is possible to take an existing monolith application and drop it into a cloud container, it will not perform as well as a modern application built with cloud deployment in mind. This usually (but not always) means using a micro-service architecture, tolerating unreliable services, asynchronous or reactive service invocation, in-app config switches that can be controlled externally, health and metrics monitoring, etc. Good question!
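To make the "tolerating unreliable services" point concrete, here's a minimal sketch using MicroProfile Fault Tolerance annotations; the catalog lookup and the cached fallback are made-up placeholders, not code from the book:

```java
import javax.enterprise.context.ApplicationScoped;

import org.eclipse.microprofile.faulttolerance.Fallback;
import org.eclipse.microprofile.faulttolerance.Retry;
import org.eclipse.microprofile.faulttolerance.Timeout;

// Hypothetical CDI bean showing how a caller can tolerate a flaky dependency.
@ApplicationScoped
public class CatalogClient {

    @Retry(maxRetries = 3)                    // retry transient failures
    @Timeout(500)                             // give up after 500 ms
    @Fallback(fallbackMethod = "cachedPrice") // degrade gracefully instead of failing
    public double lookupPrice(String itemId) {
        // A real implementation would call the remote catalog service here.
        throw new RuntimeException("catalog service unavailable");
    }

    // Invoked automatically when lookupPrice exhausts its retries or times out.
    public double cachedPrice(String itemId) {
        return 9.99; // stale-but-usable value from a local cache (illustrative)
    }
}
```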

Sergiy Yevtushenko

After spending more than a decade building distributed systems, I'm wondering why building unreliable systems (those which need to "tolerate unreliable services") on top of reliable/available ones (clouds) is considered "modern". From your description, "cloud-native" sounds like "tightly coupled to the cloud, to the point where the application is unable to survive without cloud services". And then there's that strange misconception that "monolith != modern".

About this time 9 years ago, I conceived and then, with my team, implemented a system which, by your assumptions, would be considered "not modern". It had only a single deployable artifact, i.e. it was a "monolith". Obviously, it was not designed for cloud deployment, did not rely on external services and did not contain any code to "tolerate unreliable services". Despite being "not modern", the system had many properties which are beyond the wildest dreams of many "modern", "cloud-native" applications:

  • simple deployment - only one deployable artifact, no need for "orchestration" or complex deployment schemes
  • numerous scalability options - spread tasks across all or a subset of the nodes, launch more service instances at existing nodes, add more nodes to the system (with close to linear scalability)
  • fault tolerance - as long as at least half + 1 nodes are up, the system is available and completely functional
  • built-in centralized configuration, monitoring and management - a new node joins the system in a "bare" state and then reconfigures itself (starts/stops services, changes service parameters, etc.) according to the configuration defined for it by the leader node (automatically elected or re-elected as necessary)
  • no partial failure scenarios - the system is either working and completely functional or does not accept client requests at all: no way to corrupt data, no need to consider or handle myriads of possible failure scenarios
  • very convenient business logic (service) development - all other services and all necessary data are accessible directly, and that access is reliable

As you might already suspect, the system mentioned above was designed and implemented as a cluster. It was not a miracle even then and used only widespread, well-known technologies. It's even easier today, now that the relevant technologies have grown enormously.

With all the above in mind, I'm wondering why systems that are unreliable, poorly scalable and unsuited to distributed environments are all the hype now.

P.S. There is no such thing as a "microservice architecture", because microservices are a packaging option, not an architecture (if you don't believe me, try to draw a diagram of a "microservice architecture").

Andy McCright

Hi Sergiy - whew! There's a lot to discuss here!

I think it's overly simplistic to say that "monolith != modern" - there is certainly a case for monoliths in the cloud. It sounds like you already favor the monolith model, so I'll describe what I like about microservices - please understand that with most (all?) design decisions there are tradeoffs, and I would not advocate a one-size-fits-all approach.

Some of the shortfalls of a monolith are the sheer amount of code and the number of coders involved. Dependency management can be difficult, and there is no separation of concerns. You run the order processing service in the same CPU and memory space as the catalog service - so a bunch of customers with credit card in hand might end up waiting because of a slowdown in the catalog database. If those services are split out, then the two services are not using the same resources, and the transactions can proceed. Likewise, you could spin up or shut down instances of one service or another depending on traffic - but that leads to "unreliability" - a client service might have to handle the fact that the server instance it was communicating with is shut down, and now it must start communicating with a different instance. Most monoliths have this same "instability" - it's just handled at the IP sprayer rather than "inside" the application.
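To sketch what that split might look like with MicroProfile, the order service could talk to a separately deployed catalog service through a type-safe Rest Client interface; the interface, path and config key below are hypothetical, not taken from the book:

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;

import org.eclipse.microprofile.rest.client.inject.RegisterRestClient;

// Hypothetical client interface; the base URI is resolved from config
// (e.g. the catalog-service/mp-rest/url property), so catalog instances
// can scale up, scale down or move without any code changes on the
// order-processing side.
@RegisterRestClient(configKey = "catalog-service")
@Path("/items")
public interface CatalogService {

    @GET
    @Path("/{itemId}")
    String getItem(@PathParam("itemId") String itemId); // raw JSON, kept simple for the sketch
}
```

The caller injects it with @Inject @RestClient, and pairing it with the Fault Tolerance annotations shown earlier is one way a client copes when the instance it was talking to goes away.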

There's no doubt that microservices add more complexity, so the real question an organization must ask is, is it worth it? If you're building a site like Amazon or Spotify or many online banking apps, there are a lot of moving parts, so complexity is part of the system. I think microservices work well in these environments. For more dedicated systems, monoliths will make more sense. There is also the middle ground of macroservices that might make more sense.

Going back to the book, while we discuss the microservice architecture and the 12 factor application, it is not entirely about microservices. The APIs discussed in the book work just as well in a monolith. It discusses the latest in Java-based RESTful services, dependency injection, cloud-native config, fault tolerance (both within an app and for handling downed remote services), etc. It also covers how to deploy, debug and monitor applications in the cloud. All of this is useful regardless of your application architecture.

Thanks for the comment!

Sergiy Yevtushenko

What I wrote is not about monolith vs microservices. It's about how an application designed for distributed deployment differs from one that is not designed for it but is used in a distributed environment.

Yes, I know that the main consideration behind microservices is team separation. I completely understand the reasoning behind such a separation. I just don't think that microservices are the solution to this issue. In fact, most of the issues you've mentioned (dependency management and separation of concerns) were solved many years ago by OSGi. With some (small) effort it even allows splitting teams, exactly as is done with microservices.

Also, I think my post was too short to explain enough details about the clustered solution. Perhaps this article will provide further information. So, most of your considerations against the monolith don't apply to this approach. First, there is no sharing of CPU or other resources across all services. Each service instance runs only at some cluster nodes (how many and where exactly is up to the service manager component). In other words, the nodes are deployed with the same single artifact, but this does not mean that they are all running the same code. Some services run at some nodes, others at other nodes. And there are none of the "unreliability" or "instability" issues mentioned above. Moreover, it's rather easy to implement things in a way where it makes no difference which node was used to communicate with the system. The whole system runs as a single, big and exceptionally reliable computer. If you've ever played with Docker swarm, then you should understand what I mean.