DEV Community

Ali Naqvi for Flomesh


Announcing osm-edge 1.1: ARM support and more

We're very happy to announce the release of osm-edge 1.1. This release is the culmination of months of effort from the team, and we are very excited to unveil it today. osm-edge is built on top of the Open Service Mesh v1.1.0 codebase, is purpose-built for edge computing, and uses the lightweight, high-performance, cloud-native, programmable proxy Pipy as its data plane and sidecar proxy.

osm-edge is a simple, complete, and standalone service mesh and ships out-of-the-box with all the necessary components to deploy a complete service mesh. As a lightweight and SMI-compatible Service Mesh, osm-edge is designed to be intuitive and scalable.

Why osm-edge?

In practice, we have encountered users from a variety of industries with similar requirements for service mesh. These industry users and scenarios include:

  • Energy and power companies. They want to build simple server rooms at each substation or gas station to deploy compute and storage capacity for processing the data generated by devices within that location's coverage area. They want to push traditional data center applications to these simple rooms and take full advantage of data center application management and operation capabilities.
  • Telematics service providers. They want to build simple computing environments outside the data center for data collection and for delivering services to cars and vehicle owners. These environments may be near highways, parking lots, or high-traffic areas.
  • Retailers. They want to build a minimal computing environment in each store that, in addition to supporting traditional capabilities such as inventory, sales, and payment collection, also introduces new capabilities for data collection, processing, and transmission.
  • Medical institutions. They want to provide network capabilities at every hospital or simple point of care, so that in addition to offering digital services to patients, they can also collect data and link it with higher-level management departments.
  • Teaching institutions and similar campuses. These campuses are characterized by a relatively regular and dense flow of people. They want to deploy computing resources near crowd gathering points to deliver digital services and to collect and process data in real time.

These are typical edge computing scenarios, and they have similar needs:

  • Users want to bring the traditional data center computing model, especially microservices and the related application delivery, management, operation, and maintenance capabilities, to the edge.
  • In terms of the working environment, users have to deal with factors such as unreliable power supply, limited computing power, and unstable networks. Computing platforms therefore need to be more robust, quick to deploy, and able to fully recover a computing environment in extreme situations.
  • The number of locations that need to be deployed (which we call POPs, points of presence) is usually large and constantly growing. The cost of building, maintaining, and expanding each POP is an important consideration.
  • Common, low-end PC servers often replace cloud-standard servers in these scenarios, and low-power compute such as ARM is in turn replacing low-end PC servers. On these hardware platforms, which are not comparable to cloud-standard servers, users still want enough computing power to handle growth in functionality and data volume. The conflicting demands of moving computation closer to where data is generated, growing data volumes and functional requirements at the edge, and limited edge computing resources require edge-side platforms to have a better computing efficiency ratio, i.e., to run more applications and support larger data volumes with as little power and as few servers as possible.
  • The fragility and large number of POPs require better application support for multi-cluster, cross-POP failover. For example, if a POP fails, neighboring POPs should be able to quickly share or even temporarily take over its computing tasks.

Compared with cloud data center computing, the three core differences and difficulties of edge computing are:

  • Edge computing requires support for heterogeneous hardware architectures. Non-x86 compute is widely used at the edge, often for its low power consumption and low cost.
  • Edge computing POPs are fragile. They may not have an extremely reliable power supply, or one as powerful as a data center's; they may operate in worse conditions than a data center's constant temperature and ventilation; their networks may be narrowband and unstable.
  • Edge computing is naturally distributed computing. Almost all edge computing scenarios involve multiple POPs, and their number keeps increasing. POPs providing disaster recovery for each other, with workloads migrating to adjacent POPs on failure, is a fundamental capability of edge computing.

The evolution of Kubernetes toward the edge solves the difficulties of edge computing to a certain extent, especially fragility, while the extension of service mesh to the edge focuses on network issues: coping with network fragility and providing basic network support for distribution, such as failure migration. In practice, container platforms, as today's de facto standard means of application delivery, are rapidly evolving toward the edge, with a large number of releases targeting edge features, such as k3s. But service mesh, an important network extension of the container platform, is not keeping up with this trend, and it is currently difficult for users to find a service mesh for edge computing scenarios. So we started the osm-edge open source project with several important considerations and goals:

  • Support and compatibility with the SMI specification, so that it can meet users' needs for standardization of service mesh management
  • Full support for the ARM ecosystem, which is the "first-class citizen" or even the preferred computing platform for edge computing, and the Service Mesh should be fully adapted to meet this trend. osm-edge follows the ARM First strategy, which means that all features are developed, tested, and delivered on the ARM platform first
  • High performance and low resources. The service mesh as infrastructure should use fewer resources (CPU/MEM) while delivering higher performance (TPS/Latency) at the edge.

Features

  • Light-weight, high-performant, cloud-native, extensible
  • Out-of-the-box supports x86, ARM architectures
  • Easily and transparently configure traffic shifting for deployments
  • Secure service-to-service communication by enabling mutual TLS
  • Define and execute fine-grained access control policies for services
  • Observability and insights into application metrics for debugging and monitoring services
  • Integrate with external certificate management services/solutions with a pluggable interface
  • Onboard applications onto the mesh by enabling automatic sidecar injection of Pipy proxy
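Several of these capabilities are driven by standard SMI resources. For instance, traffic shifting between two versions of a service is expressed with a TrafficSplit. A minimal sketch, assuming the `split.smi-spec.io/v1alpha2` API and placeholder service names:

```yaml
apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: website-split
spec:
  service: website          # root service that clients address
  backends:
  - service: website-v1     # current version keeps 90% of traffic
    weight: 90
  - service: website-v2     # canary receives 10%
    weight: 10
```

Applying this resource in a mesh-enabled namespace shifts the stated share of traffic to each backend without any change to the application.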

ARM Support

osm-edge 1.1 brings the oft-requested support for ARM (both for development and production use). Whether you're focused on cost reduction with ARM-based compute such as AWS Graviton or simply want to run a service mesh on your Raspberry Pi cluster, now you can!

Multi-cluster Kubernetes the Kubernetes way

osm-edge 1.1 comes bundled with Flomesh Service Mesh (FSM), a Kubernetes north-south traffic manager that provides ingress controllers, the Gateway API, load balancing, and cross-cluster service registration and discovery.

With the help of FSM, osm-edge can now connect Kubernetes services across cluster boundaries in a way that's secure, fully transparent to the application, and independent of network topology. Automated failover can redirect all traffic from a failing or inaccessible service to one or more replicas of that service, including replicas on other clusters.
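The cross-cluster failover behavior described above amounts to an endpoint-selection rule: prefer healthy replicas in the local cluster, and fall back to healthy replicas in neighboring POPs. A minimal illustrative Go sketch (all types and names here are hypothetical, not FSM's actual implementation):

```go
package main

import "fmt"

// Endpoint describes a service replica, which may live in another cluster (POP).
type Endpoint struct {
	Cluster string
	Addr    string
	Healthy bool
}

// pickEndpoint prefers a healthy replica in the local cluster and falls back
// to a healthy replica in another cluster, mirroring cross-POP failover.
func pickEndpoint(eps []Endpoint, local string) (Endpoint, bool) {
	var fallback *Endpoint
	for i := range eps {
		if !eps[i].Healthy {
			continue
		}
		if eps[i].Cluster == local {
			return eps[i], true // healthy local replica wins
		}
		if fallback == nil {
			fallback = &eps[i] // remember the first healthy remote replica
		}
	}
	if fallback != nil {
		return *fallback, true
	}
	return Endpoint{}, false
}

func main() {
	eps := []Endpoint{
		{Cluster: "pop-a", Addr: "10.0.0.1:8080", Healthy: false}, // local replica is down
		{Cluster: "pop-b", Addr: "10.1.0.1:8080", Healthy: true},  // neighboring POP takes over
	}
	if ep, ok := pickEndpoint(eps, "pop-a"); ok {
		fmt.Println(ep.Cluster, ep.Addr)
	}
}
```

In the real mesh this selection is applied to every request by the sidecar proxy, so the application never sees the topology change.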

Multiple Sidecar Proxy support

To break the tight coupling with Pipy and open the door for third parties to develop or plug in their own data planes or sidecar proxies, we have refactored the OSM v1.1.0 codebase to make it generic and provide extension points. We strongly believe in and support open source, and our proposal for this refactoring has been submitted upstream for review, discussion, and/or adoption.


And a lot more

osm-edge 1.1 also has a tremendous list of other improvements, performance enhancements, and bug fixes. For a detailed change log, please refer to the Releases page on GitHub. Below are some of the notable changes included in this release:

  • Refactored the Proxy Control Plane component to be generic and allow interaction with new sidecar proxy implementations
  • Added a new driver.Driver interface that must be implemented by third-party vendors wishing to provide a sidecar proxy for the control plane
  • Added new driver.HealthProbes, driver.HealthProbe, driver.InjectorContext, and driver.controllerContext structs for use with new sidecar proxy driver implementations
  • Refactored the Envoy-based sidecar proxy integration into a separate driver, so that it works as an implementation of the driver.Driver interface
  • Added a Pipy implementation as a new Proxy Control Plane proxy driver
  • Refactored the Helm chart values.yaml and added the osm.sidecarClass, osm.sidecarImage, osm.sidecarWindowsImage, and osm.sidecarDrivers entries to allow configuring the sidecar proxy driver
  • The Pipy driver is the default in the osm-edge distribution, but this can be changed via the CLI flag --set=osm.sidecarClass=XXX, where XXX refers to a sidecar driver
  • osm-edge control plane images are now multi-architecture, built for Linux/amd64 and Linux/arm64
  • osm-edge test suite used images are now multi-architecture, built for Linux/amd64 and Linux/arm64
  • Optimized scripts
  • Added Makefile targets for easier installation and setup
  • Updated scripts to set up a development environment on amd64 and arm64 architectures
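To illustrate the extension-point idea behind this refactoring, here is a hypothetical Go sketch of a driver registry keyed by sidecar class. The method name and shape of the interface are assumptions for the example; the real driver.Driver interface in the codebase differs:

```go
package main

import "fmt"

// SidecarDriver is a hypothetical stand-in for the driver.Driver extension
// point: each driver knows how to inject its own sidecar proxy into a pod.
type SidecarDriver interface {
	// Patch returns a description of the sidecar injection for a pod
	// (simplified here to a string instead of a real pod spec patch).
	Patch(pod string) string
}

type pipyDriver struct{}

func (pipyDriver) Patch(pod string) string { return "inject flomesh/pipy into " + pod }

type envoyDriver struct{}

func (envoyDriver) Patch(pod string) string { return "inject envoyproxy/envoy into " + pod }

// drivers mimics a registry of sidecar drivers keyed by sidecar class,
// the kind of lookup a setting like osm.sidecarClass would drive.
var drivers = map[string]SidecarDriver{
	"pipy":  pipyDriver{},
	"envoy": envoyDriver{},
}

func main() {
	sidecarClass := "pipy" // the default class in the osm-edge distribution
	d, ok := drivers[sidecarClass]
	if !ok {
		panic("unknown sidecar class: " + sidecarClass)
	}
	fmt.Println(d.Patch("bookstore"))
}
```

In the actual distribution the active driver is selected through the osm.sidecarClass Helm value (for example via --set=osm.sidecarClass=...), with Pipy as the default.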

What's next and where to learn more?

For more documentation, a quick-start guide, and step-by-step tutorials, please visit osm-edge-docs.

osm-edge is a fork of Open Service Mesh, and we will strive to keep this fork in sync with its upstream and propose major changes and/or feature proposals back to upstream for the broader benefit of the community. Both OSM and osm-edge are hosted on GitHub. If you have any feature requests, questions, or comments, we'd love to have you join the rapidly growing community via GitHub Issues, Pull Requests, or the OSM Slack channel!
