This year I attended KubeCon + CloudNativeCon for the first time. For those who don't know, KubeCon is the largest open source conference in North America, with over 12,000 attendees. Several companies sponsor panels, meetups, and round-table discussions on new feature releases for the open source cloud native community. I was impressed by the level of engagement from speakers and attendees. I also thought the most under-appreciated perk was the ping pong table in the expo area: a great way to relax between sessions and networking.
Here are my top four takeaways.
Heather Kirksey, VP of Community Development at the Linux Foundation, and Azhar Sayeed, Chief Architect at Red Hat, presented a demo of an end-to-end 5G network built on open source infrastructure. They began their keynote with an overview of the challenges involved in scaling a telecommunications network using a cloud native approach.
In short, telcos are uniquely difficult to scale because the infrastructure is decoupled. Innovation in the industry has been lackluster because critical communications, such as air traffic control and emergency response, are expected to perform seamlessly at all times. At present, there are no APIs that seamlessly integrate these services.
Despite these limitations, the team presented a successful demo on the keynote stage showcasing the power of cloud native. Azhar's team walked the audience through connecting to a packet core in Montreal and an LTE lab in France, and accessing a mix of global private and public cloud services. Each service needed to be broken up into distinct components because the networks are incredibly fragmented.
Building on cloud native has clear advantages: it would allow telecommunications companies to consolidate resources, scale up bandwidth for different devices and users, and let enterprises move from disparate networks to a shared service offering.
I’m excited about the role the Kubernetes ecosystem will play in powering the next generation of devices. It’s up to the ecosystem to innovate and think about how it can further expand on Azhar’s work to bring cloud native into communications.
It’s clear that the Kubernetes ecosystem is investing heavily in OPA (Open Policy Agent) as a standard approach to representing, enforcing, and governing policies across applications.
Vicki Cheung, Engineering Manager at Lyft and Co-Chair of KubeCon, opened her keynote with the notion that infrastructure is taking on greater responsibilities. Vicki specifically highlighted how the OPA project can restrict which images are allowed to run in containers.
OPA works on the basis that policy decision-making should be decoupled from policy enforcement. Policy enforcement can then be standardized across a wide range of technology at any layer of the stack. Use cases include admission control, risk management, API authorization, and data protection.
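The decoupling described above can be sketched in a few lines. Real OPA policies are written in Rego and evaluated by the OPA engine; the Python analogue below (with a hypothetical registry allow-list) just illustrates the pattern of a pure decision function kept separate from the enforcement point.

```python
# Illustrative sketch of OPA's model: the policy decision is computed
# separately from the code that enforces it. Real OPA policies are
# written in Rego; the registry names here are hypothetical.

APPROVED_REGISTRIES = {"registry.internal.example.com"}  # assumed allow-list

def decide(request):
    """Policy decision: a pure function from input to allow/deny, no side effects."""
    image = request["image"]
    registry = image.split("/")[0]
    if registry in APPROVED_REGISTRIES:
        return {"allow": True}
    return {"allow": False, "reason": f"registry {registry!r} is not approved"}

def admit(request):
    """Enforcement point (e.g. an admission controller) that consults the decision."""
    decision = decide(request)
    if not decision["allow"]:
        raise PermissionError(decision["reason"])
    return "admitted"
```

Because the decision is just data, the same `decide` logic could back an admission controller, an API gateway, or a CI check without duplicating the rules in each place.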
The project started in 2016, but is beginning to amass mainstream momentum.
I attended a presentation by Reddit on how the company used OPA to protect against malicious activity and to control costs. OPA can prevent the creation of server configurations that administrators have not approved. If you want to restrict actions on Kubernetes objects, OPA lets users create cluster policies around enterprise needs.
Before OPA, guardrails had to be programmed manually into each application, and whenever a policy changed, programmers had to go back and change the code in every application. For example, you might restrict permissions so that support staff have access only to customer service tickets. More importantly, OPA allows engineers to test whether a policy is implemented correctly and whether administrative access is being restricted properly.
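That testability is worth a sketch. OPA ships a test runner (`opa test`) for Rego policies; the hedged Python analogue below, with hypothetical role and resource names, shows the same idea: because the policy is a pure function, you can assert on its decisions before rolling it out.

```python
# Hedged sketch of a testable access policy, in the spirit of OPA's
# `opa test`. Role and resource names are hypothetical.

ROLE_PERMISSIONS = {
    "support": {"customer_service_ticket"},  # support staff: tickets only
    "admin": {"customer_service_ticket", "billing_record", "server_config"},
}

def allow(role, resource):
    """Return True only if the role is permitted to access the resource type."""
    return resource in ROLE_PERMISSIONS.get(role, set())

def test_policy():
    """Policy tests: verify the restriction is implemented correctly before rollout."""
    assert allow("support", "customer_service_ticket")     # intended access
    assert not allow("support", "billing_record")          # restricted
    assert not allow("intern", "customer_service_ticket")  # unknown role denied
    assert allow("admin", "server_config")
```

When the policy lives in one tested place like this, a change to the rules is a change to one file plus its tests, not a hunt through every application.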
We are at an inflection point with regard to our attitudes towards technology and privacy. As different industries and geographies build unique frameworks, engineers will need a way to dynamically update policies in a way that’s consistent across all applications.
Despite hype around cloud software eating the world, on-prem remains a major focus for enterprises. The Kubernetes ecosystem is no longer dogmatic about pushing cloud-based technologies.
Rae Wong, Group Product Manager at Google, mentioned during the second day of keynotes that hybrid environments are still the reality for today's enterprises and that the Kubernetes community has to accommodate them.
In financial services, for example, on-prem is still very much the starting point in discussions because of security and compliance concerns. Laura Schornack, Senior Architect at JPMorgan, and Jeff Fogarty, Innovation Engineer at USBank, spoke about the necessity for cloud native software to hand over controls such as data and credential management. JPMorgan and USBank are forward-thinking in their approach to IT relative to other banks; Laura and Jeff gave the example of deploying advanced workloads by running machine learning applications inside Kubernetes clusters. However, they added that support for air-gapped environments is still necessary.
More cloud services are also facilitating portability to private instances. Ultimately, I expect to see all major cloud native tools having a strategy for on-prem.
More responsibilities that traditionally sat with operations teams are shifting to application engineers. With an endless number of microservices running in real time, it can be challenging to manage where exactly to push changes.
“GitOps” is the practice of using Git pull requests to manage infrastructure provisioning and software deployment.
This practice makes sense for a few reasons. First, version control can help with auditing different types of infrastructure to streamline operations. Second, teaching developers to open Git pull requests does not require reinventing the wheel, since most are already familiar with the practice. Third, security teams can troubleshoot and understand where dependencies may lie, since developers have to submit PRs for each new server request.
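The core loop behind GitOps can be sketched simply. In practice the desired state is a set of manifests checked into Git and the reconciler is a controller such as Flux or Argo CD; the hedged Python sketch below stands in dicts for both, just to show the diff-and-converge idea.

```python
# Minimal sketch of GitOps reconciliation: desired state lives in Git
# (here, a plain dict standing in for checked-in manifests) and a loop
# diffs it against live state and applies only the changes. Service
# names and versions are hypothetical.

def diff(desired, live):
    """Compute which services must be created, updated, or deleted."""
    create = {k: v for k, v in desired.items() if k not in live}
    update = {k: v for k, v in desired.items() if k in live and live[k] != v}
    delete = [k for k in live if k not in desired]
    return create, update, delete

def reconcile(desired, live):
    """Apply the diff so the live state converges on what Git declares."""
    create, update, delete = diff(desired, live)
    live.update(create)
    live.update(update)
    for name in delete:
        del live[name]
    return live
```

Because every change flows through a pull request into the desired state, the Git history doubles as the audit trail the section above describes.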
Tamao Nakahara, Head of Developer Experience at Weaveworks, led a great presentation on the keynote stage with representatives from Palo Alto Networks, Branch, Under Armour, and Intuit on using GitOps to solve infrastructure needs.
Javeria Khan, Senior Site Reliability Engineer at Palo Alto Networks, spoke in depth about how the company is using GitOps for secrets management, so the company knows exactly when keys and authentication controls have been updated.
In a separate keynote, Maneesh Vittolia and Sriram Komma of Walmart discussed how they used GitOps to create custom pipelines and reuse the same Kubernetes images across other services. The consistency of GitOps allowed Walmart to simplify consumption of resources and integrate across servers for each storefront.
The cloud native ecosystem is growing tremendously, with more enterprises adopting containers and embracing open source technology. It will be exciting to see the direction the community takes to build on its current momentum.
Lastly, I wanted to give a shout out to Bryan and Vicki for putting together such a great experience!