DEV Community

Sven Ruppert
ChartCenter, or: Do not reinvent the wheel

Who doesn't remember the times when there was only one server in operation? A single computer that combined all the services required within the company. In the beginning, it was a simple print service, so that the company had one central printer. More services were added later: shared file storage, an email server, and so on. The requirements kept being revised upwards, and gradually more and more servers offered the necessary services, at first on their own and later in a network.
The first server failure in a company was usually a wake-up call. The person who looked after the server, and subsequently the servers, grew into an independent IT department. It quickly became apparent that different optimization goals unfortunately stood in each other's way. As an example, consider cost development, which took an unpleasant course as soon as individual services had to be designed for high reliability.

The first virtual server

A new age in IT began with the development of virtualization. Services could now be isolated from each other much more strictly on the same physical hardware, and running several services on one hardware server became easy again. With this new technology, however, new weak points in the area of security were introduced into the existing IT landscape, and the requirements for the IT department changed once more. Over time, the approach of using this virtualization like a construction kit established itself: more and more often, prefabricated virtual machines could be obtained for the respective software products. The configuration within these modules could therefore be provided directly by the manufacturers, which was a big leap forward for the success of this technology.

One server becomes many servers

Over time, the demand for computing power grew steadily. Hardware could only keep up to a limited extent unless you let hardware costs grow towards infinity. It turned out that many commercially available servers, used together, could achieve a higher level of reliability than one particularly expensive server. So it made sense to look for a way to manage this zoo of small servers efficiently. After several approaches by different companies and the most diverse communities, one combination has prevailed on the market worldwide: Docker as the container runtime and Kubernetes as the orchestration tool are the current industry standard.


Unfortunately, it has always been the case with distributed systems that the underlying mechanisms are not trivial. On the one hand, location transparency is desirable, so that availability is not reduced even when individual components change version. On the other hand, it is by no means trivial to provide the services required to manage this location transparency. So it was time for IT to explore how declarative approaches could make this manageable for the broader public in IT. One method that is becoming more and more popular is described under the term "Infrastructure as Code". The aim is to define a description language that allows the required IT system to be described in its entirety. These definitions are then kept in a code versioning system, e.g. git, so that you can easily switch between different versions of a description.
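As a minimal sketch of this idea (the repository URL, tag, and directory names are purely illustrative), the entire cluster description lives in git and is handed to Kubernetes declaratively:

```shell
# Clone the versioned infrastructure description (illustrative URL).
git clone https://example.com/infra-descriptions.git
cd infra-descriptions

# Switch to a specific, tagged version of the description.
git checkout v1.2.0

# Hand the declarative manifests to Kubernetes, which reconciles
# the cluster towards the described state.
kubectl apply -f manifests/
```

Switching the cluster to another version of the description is then nothing more than checking out a different git tag and applying again.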

Don't reinvent the wheel

One of the essential principles in IT is to reuse existing knowledge efficiently. In this case, it means taking over existing descriptions of partial systems and composing the required overall system from them. What sounds simple is tricky in the details, and some questions arise immediately in practical use.

Where do I put my experiences?

The question of the right place to store your experiences, so that others can access them, is essential. In the past, the approach of creating a superset has proven prevalent, be it with tools like Maven or npm, or with operating systems like Linux. Access is enormously simplified when there is a central authority that can be used as an initial entry point. For defining infrastructure compositions on Kubernetes, Helm is the industry standard: Helm is the package manager for Kubernetes, similar to what Maven is for the Java world. To offer a central entry point for the community, ChartCenter was created. It is a superset of different sources, forming an efficient and user-friendly central collection point.
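Getting started with such a central repository typically looks like this (the repository alias and URL follow ChartCenter's documentation at the time and should be treated as illustrative):

```shell
# Register ChartCenter as a Helm chart repository under the alias "center".
helm repo add center https://repo.chartcenter.io

# Refresh the local index of available charts.
helm repo update
```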


The next question that arises is the question of trust. In other words, one can ask about the vulnerability, or security, of the components on offer. These components are, in turn, very complex units consisting of program elements and their configuration. Each sub-system has its own characteristics and its own dependencies. The complete dependency graph is too complicated for a human to grasp, and manual control is almost impossible. Here, IT itself has to come to the rescue.
At ChartCenter, JFrog Xray is used to check the definitions stored there. All binaries that are used directly and indirectly in these compositions are examined for known vulnerabilities. The result is a complete dependency graph that shows where security gaps exist and how they affect the overall context.

The first steps

In order to start composing your own environment, you first need to know what is actually available. Tools that support navigation in this component repository, using full-text search and taxonomies, help here. ChartCenter offers a very intuitive and user-friendly graphical interface for identifying the essential components in the shortest possible time. The additional information based on each element's README helps to make the first decisions quickly.
The initial commands for using the selected component are also provided.
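Such initial commands usually boil down to a search followed by an install; the release name and chart path below are only illustrative examples:

```shell
# Full-text search across the registered repository.
helm search repo center/

# Install a chart under a release name of your choosing
# (chart path is illustrative).
helm install my-release center/stable/artifactory
```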


In summary, one can say that ChartCenter is the next logical step in meeting the requirements of IT. To make existing components efficiently usable, a central location is required where knowledge about the existence and availability of these building blocks is collected. Access is free and supports every developer who wants to use these components, as well as those who want to make their knowledge and skills available to the general public. This approach supports every open source project, directly and indirectly, and the security information offered helps to harden the IT world a little further.
The next step is to visit the website and get your own picture of how easy it is to use these components.

Cheers Sven
