Lost in the clouds
The rise of cloud computing has brought a number of benefits for businesses of all sizes. From increased speed and flexibility to cost savings and improved security, the cloud has proven to be a valuable asset for many organizations. The advancement of cloud paradigms such as CaaS and FaaS, together with Kubernetes and serverless, has made it easier to scale applications without worrying about the underlying infrastructure or vendor lock-in, and the widespread use of CI/CD practices and Infrastructure as Code tools has made it much easier to provision and replicate cloud resources across providers and regions.
Despite these benefits, the cloud computing landscape can be complex and confusing. With the vast array of options available, including multiple pricing models, instance types, and storage solutions, along with the widespread adoption of Kubernetes and serverless, it can be difficult to select the appropriate resources for a given application and accurately estimate their cost. Furthermore, the fragmentation of cloud infrastructure across multiple accounts, regions, and providers leads to a lack of clarity and weaker security, with dormant, untracked, and idle resources exacerbating these problems.
This is not a new issue; it has been a pain point for several years.
There’s still pain left
A few years ago, while serving as the Head of DevOps at a deep tech startup, I faced the challenge of managing a multi-cloud infrastructure to handle billions of data points daily. Our cloud expenses were rapidly increasing and reached thousands of dollars a day just on AWS, with resources scattered across multiple regions and accounts. Our developers utilized various methods to create cloud resources, such as Cloud Consoles, Dockerfiles, Helm Charts, Jenkinsfiles, Terraform templates, etc., causing infrastructure drift and a growing number of assets that were hard to keep track of. It was difficult to gain a comprehensive understanding of all assets without spending hours searching through multiple AWS Console tabs and layers of hierarchy to answer questions like “How many EC2 instances are running in our Frankfurt region?”
The growth in infrastructure complexity has led to an increase in the number of cloud services used, including compute, storage, and network, each with its own pricing structure. This complexity makes it challenging to estimate cloud costs using traditional methods such as spreadsheets. The pressure to deliver results, coupled with the lack of visibility into cloud costs, puts DevOps engineers, SREs, and developers at risk of backlash from management when an unexpected “bill shock” occurs and reaches the CFO.
To address these challenges, I started by creating a comprehensive view of our cloud resources and their associated costs. Unable to find an existing tool for this, I built a basic dashboard called Komiser that scans all AWS regions for cloud resources and presents the information in a clear, easy-to-understand format. This gave us complete visibility into all our resources from a single location, making it easier to manage them and avoid costly mistakes. Although the tool was basic, it saved us countless hours of sifting through complex billing statements and reduced the need for constant context switching.
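To give a sense of the idea (a minimal sketch, not Komiser's actual implementation), the snippet below answers the kind of question above by iterating over every enabled AWS region and counting EC2 instances with the AWS SDK for Go v2. Credentials and the starting region are assumed to come from the SDK's default configuration chain.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
)

func main() {
	ctx := context.Background()

	// Load the default AWS configuration (credentials, default region).
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatalf("unable to load AWS config: %v", err)
	}

	// Discover every region enabled for the account.
	regionsOut, err := ec2.NewFromConfig(cfg).DescribeRegions(ctx, &ec2.DescribeRegionsInput{})
	if err != nil {
		log.Fatalf("unable to list regions: %v", err)
	}

	// Count EC2 instances per region.
	for _, region := range regionsOut.Regions {
		regionCfg := cfg.Copy()
		regionCfg.Region = *region.RegionName
		client := ec2.NewFromConfig(regionCfg)

		count := 0
		paginator := ec2.NewDescribeInstancesPaginator(client, &ec2.DescribeInstancesInput{})
		for paginator.HasMorePages() {
			page, err := paginator.NextPage(ctx)
			if err != nil {
				log.Printf("skipping %s: %v", *region.RegionName, err)
				break
			}
			for _, reservation := range page.Reservations {
				count += len(reservation.Instances)
			}
		}
		fmt.Printf("%s: %d instances\n", *region.RegionName, count)
	}
}
```

Multiply this by every service and every account, and it becomes clear why a single consolidated inventory beats hopping between console tabs.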
Once we had full visibility into our resources, along with a breakdown of their cost and location, we started tagging resources across our cloud environments, giving developers much more visibility and empowering them to take accountability for their cloud spending.
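As an illustration of what that tagging can look like (a hedged sketch, not our exact setup), the snippet below applies cost-allocation tags to a hypothetical EC2 instance with the AWS SDK for Go v2; the instance ID and tag keys are placeholders to adapt to your own convention.

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
	"github.com/aws/aws-sdk-go-v2/service/ec2/types"
)

func main() {
	ctx := context.Background()

	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatalf("unable to load AWS config: %v", err)
	}
	client := ec2.NewFromConfig(cfg)

	// Hypothetical instance ID and tag keys: pick whatever convention
	// (team, environment, owner, ...) lets you attribute spend per team.
	_, err = client.CreateTags(ctx, &ec2.CreateTagsInput{
		Resources: []string{"i-0123456789abcdef0"},
		Tags: []types.Tag{
			{Key: aws.String("team"), Value: aws.String("data-platform")},
			{Key: aws.String("environment"), Value: aws.String("production")},
		},
	})
	if err != nil {
		log.Fatalf("unable to tag instance: %v", err)
	}
	log.Println("tags applied")
}
```

Once tags like these are activated as cost allocation tags in the AWS Billing console, spend can be broken down per team or environment rather than as one opaque monthly total.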
We ended up saving thousands of dollars, not by implementing an AI-driven cost-saving model, but simply by gaining visibility and taking control of our infrastructure. In addition, our comprehensive inventory ensured that all assets were properly secured and adhered to the latest security configurations and best practices: monitoring for vulnerabilities, establishing appropriate access controls, and implementing effective network security policies. This gave us a deeper understanding of our entire potential attack surface.
The tool was open-sourced and became cloud-agnostic, with support for the major cloud providers. Upon release, it gained popularity, and my colleague Cyril and I noticed that many organizations shared similar challenges, particularly limited visibility into their infrastructure and related tooling. To address this, we launched Tailwarden, an open-core company founded on the principles of Komiser and built on an open-source model. Our aim is to empower developers by improving transparency and collaboration in the cloud, and our mission is to put control of the cloud into the hands of developers by tackling one of the most pressing issues in the space.
Why open source
We made Komiser an open-source project because of the significant shift towards open-source software and the numerous advantages it offers in terms of quality, security, and innovation. As Marc Andreessen famously said, “Software is eating the world”, and open source is now eating software. With the growing number of cloud providers, services, and cloud-native tools, it is difficult for a closed-source model to keep up with the wide range of platforms developers use. At re:Invent 2022 alone, AWS introduced 119 new services and features.
Open source is the best approach to tame the complexity of cloud providers and give developers control over their cloud infrastructure. By opening up Komiser to the community, we gain access to the collective knowledge and expertise of developers from around the world, which enables us to support a vast array of services and cloud providers.
OSS is built on a foundation of transparency. This aligns with our core values and helps us establish trust with our users. The open-source model ensures accountability and promotes continuous improvement, as our code is openly accessible for review and critique. Unlike closed-source software, where technical debt and flaws may be hidden, open-source software demands the highest-quality output. While transparency can be challenging at times, as it puts us under a microscope, it ultimately results in the best possible outcome.
We aim to provide a cloud-agnostic platform that empowers the next generation of developers to build applications without worrying about hidden costs, security issues, or orphaned resources. We believe that with the help of our users and contributors, we can make cloud management easier for everyone.
Where we are today
Today, Komiser has a growing user base, with over 3,000 stars on GitHub, 3 million downloads, and a thriving community of contributors on Discord.
The tool has proven to be a valuable asset for many developers, helping them to better understand their cloud resources and costs, and making it easier to manage their cloud infrastructure.
You can shape the future of DevOps by contributing to the project or by providing feedback, whether through GitHub, the #feedback channel on our Discord server, or by testing existing and new features.
The future of DevOps
Companies are moving to the cloud rapidly and building their entire infrastructure on cloud providers and SaaS tools. If teams don’t have a single place to manage all of this complexity, they can’t see the big picture, eventually leaving key insights and opportunities untapped.
Komiser is an essential tool for anyone looking to manage their cloud resources effectively. With its cloud-agnostic approach and open-source model, it can help you better understand your cloud costs and make it easier to manage your resources.