Unlocking Istio's Versatility: A Guide to WASM Plugins in Kubernetes

Introduction

In this article, I want to share my experience with Istio's WasmPlugin and how it helped us overcome some limitations of Kubernetes and Istio. I am not a Kubernetes engineer, but I had to gain some experience with it because our team works in a "develop, deploy, and support it ourselves" mode. We often ran into limitations of Kubernetes or Istio (which we use for networking), or simply wanted to do something outside the common usage patterns of these tools. Here I'll discuss a solution that allowed us to dynamically route external client traffic to specific pods within the cluster based on their current load.

The Need for Dynamic Routing

Our Kubernetes-based backend system comprises multiple services and scales dynamically based on the number of clients it needs to serve. (Our scaling is quite interesting in itself: we had to write our own implementation of the Horizontal Pod Autoscaler (HPA) because the existing one doesn't match our requirements for exactly how to downscale pods.) For performance reasons and for some system features, we needed a way to route external client traffic to a particular pod within the cluster, by pod name. Although Istio provides powerful networking capabilities, we couldn't find a direct solution for this specific requirement.

Collecting Workload Information

We have an internal service responsible for collecting workload information from the pods of the destination service. It determines the least loaded pod and exposes that information through an HTTP endpoint.
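To make the selection step concrete, here is a plain-Rust sketch of "pick the least loaded pod". The map of pod names to load values and the load metric itself are hypothetical; our real service aggregates metrics from the pod instances.

```rust
use std::collections::HashMap;

/// Pick the pod with the lowest current load.
/// Returns None when no pods are registered.
fn least_loaded_pod(pod_loads: &HashMap<String, u32>) -> Option<&str> {
    pod_loads
        .iter()
        .min_by_key(|(_, load)| **load)
        .map(|(name, _)| name.as_str())
}
```

The API endpoint then simply returns the selected pod name as the response body, which is what the wasm plugin later appends to the URL.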

Routing

The native Istio tools for this kind of routing are destination rules (DR) and virtual services (VS). But as mentioned above, our service scales dynamically and we need to route to a particular pod, so we cannot rely on static DR and VS specifications: we never know the exact number or names of pods in the cluster at any given time, so we have to change these specifications dynamically. Our HPA implementation, which is responsible for scaling the service, also became responsible for adding a record to the DR and VS specs each time we scale up and removing it when we scale down, to keep everything consistent. With that in place, we can send a request to a particular pod as long as we know its name. Our routing is based on a query parameter in the request URL, which looks like "http://quite-interesting-service.com/some-path?podId=service-pod-b95fcf0d03-mjz8f".

Our VS routes look similar to:

 http:
  - match:
      - queryParams:
          podId:
            regex: service-pod-1adbfcdd9c.+
        uri:
          prefix: /
    route:
      - destination:
          host: <service>.<namespace>.svc.cluster.local
          port:
            number: 80
          subset: service-pod-1adbfcdd9c
        weight: 100

And corresponding DR:

host: <service>.<namespace>.svc.cluster.local
subsets:
  - labels:
      job-name: service-pod-1adbfcdd9c
    name: service-pod-1adbfcdd9c

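For clarity, here is what the VS match effectively does, sketched in plain Rust. The actual matching is performed by Envoy from the specs above; these helper functions are just an illustration, not part of the routing.

```rust
/// Extract the value of `podId` from a request path like
/// "/some-path?podId=service-pod-1adbfcdd9c-abc12".
fn pod_id_from_path(path: &str) -> Option<&str> {
    let query = path.split_once('?')?.1;
    query
        .split('&')
        .find_map(|pair| pair.strip_prefix("podId="))
}

/// Mimic the VS regex `service-pod-1adbfcdd9c.+`:
/// the pod id must start with the subset name and have a non-empty suffix.
fn matches_subset(pod_id: &str, subset: &str) -> bool {
    pod_id
        .strip_prefix(subset)
        .map_or(false, |rest| !rest.is_empty())
}
```

A request whose podId matches a subset is then routed to the pods carrying the corresponding label from the DR.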

Client traffic

While we successfully implemented the dynamic routing solution, a new challenge arose: how would the client know which specific pod to request? There is the option - to ask our API and include the pod name in the request, but this would introduce an additional HTTP request and increase latency. So, we came up with the following idea - what if we intercepted incoming requests at the Istio ingress gateway and modified the client's URL by adding the required query parameter? This way, the subsequent routing would just work as intended and point the request to the necessary pod.
This is where Istio wasm plugins come into play.

Wasm plugin implementation

As far as I understand, Istio is based on Envoy, and a wasm plugin is Istio's representation of an Envoy filter. So, before working with wasm plugins, I recommend getting at least a basic understanding of Envoy's architecture; there is some terminology it's good to be familiar with.

Take a look at Envoy terminology
Read more about Envoy clusters

Configuring Gateway Envoy

To allow our plugin to call our API endpoint in the k8s cluster, we need to add a new cluster to the Envoy specification.

If you want to inspect the Envoy spec on a pod, you can run kubectl exec -n <namespace> <pod name> -c istio-proxy -it -- bash and then fetch http://127.0.0.1:15000/config_dump.
Alternatively, run kubectl port-forward -n <namespace> <pod name> 15000:15000 and check localhost:15000. There are a lot of things you can check or tweak there.

To add the Envoy cluster to the specification, we can use an Istio Envoy filter. The following example demonstrates how to add a cluster:

apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: awesome-filter
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      app: public-istio-ingressgateway
  configPatches:
    - applyTo: CLUSTER
      match:
        context: GATEWAY
      patch:
        operation: ADD
        value: 
          name: "cluster-name"
          type: STRICT_DNS
          connect_timeout: 1s
          lb_policy: ROUND_ROBIN
          load_assignment:
            cluster_name: cluster-name
            endpoints:
              - lb_endpoints:
                  - endpoint:
                      address:
                        socket_address:
                          protocol: TCP
                          address: "api.<namespace>.svc.cluster.local"
                          port_value: 80


After applying that to a k8s cluster we will be able to call api.namespace.svc.cluster.local from our wasm plugin.

Installing wasm plugin

Yeah, installing before coding, but I think it's a reasonable order for explaining the code.
The spec will look like:

apiVersion: extensions.istio.io/v1alpha1
kind: WasmPlugin
metadata:
  name: awesome-plugin
  namespace: istio-system
spec:
  selector:
    matchLabels:
      istio: ingressgateway
  url: file:///opt/filters/plugin.wasm
  phase: AUTHN
  pluginConfig:
    upstream_name: "cluster-name"
    authority: "<service>.<namespace>.svc.cluster.local"
    api_path: "/pod-lookup"


Here we passed some params to the plugin config, which we will use later.
We also need to provide the plugin.wasm file in the Envoy containers at the path specified above.

Coding wasm plugin

It's possible to implement a wasm plugin in a few languages. The following SDKs are available now:

  • Rust
  • C++
  • Go (TinyGo)
  • AssemblyScript

I like Rust, so I picked Rust :)

First, you need to install the wasm toolchain:

rustup target add wasm32-unknown-unknown

The SDK has a pretty simple API; actually, I like this whole extensible wasm plugin approach. These SDKs help you implement the Application Binary Interface (ABI) that Envoy understands.

Reading plugin config

First of all, we might want to read the parameters we passed to our plugin via the yaml spec. It's a bit tricky: the configuration can be read only once, and you have to store the params in your plugin, because they won't be available later.

I define a struct that represents the params passed to the plugin, along with the plugin itself:

use serde::{Deserialize, Serialize};

#[derive(Default)]
struct Plugin {
    config: PluginConfig,
}

#[derive(Serialize, Deserialize, Default, Debug, Clone)]
struct PluginConfig {
    upstream_name: String,
    authority: String,
    api_path: String,
}



Reading plugin configuration is only available in on_configure:

   fn on_configure(&mut self, _plugin_configuration_size: usize) -> bool {
        if let Some(config_bytes) = self.get_plugin_configuration() {
            if let Ok(config) = serde_json::from_slice::<PluginConfig>(&config_bytes) {
                self.config = config;
            } else {
                error!("Can't parse plugin config!")
            }
        } else {
            error!("There is no plugin config for plugin!")
        }

        true
    }

And to be able to use that configuration in the future, we need to implement the following:

    fn create_http_context(&self, _context_id: u32) -> Option<Box<dyn HttpContext>> {
        Some(Box::new(Plugin {
            config: self.config.clone(),
        }))
    }
    fn get_type(&self) -> Option<ContextType> {
        Some(ContextType::HttpContext)
    }

This is how the config is passed on to all subsequent requests.
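Stripped of the SDK specifics, the pattern is: the root context owns the parsed config, and every new HTTP context receives its own clone. A plain-Rust sketch of that idea (the RequestContext trait here is a stand-in for the SDK's HttpContext, not the real API):

```rust
#[derive(Default, Debug, Clone)]
struct PluginConfig {
    upstream_name: String,
    authority: String,
    api_path: String,
}

// Stand-in for the SDK's per-request context trait.
trait RequestContext {
    fn config(&self) -> &PluginConfig;
}

struct Plugin {
    config: PluginConfig,
}

impl RequestContext for Plugin {
    fn config(&self) -> &PluginConfig {
        &self.config
    }
}

impl Plugin {
    // Mirrors create_http_context: each request context
    // gets its own clone of the config read in on_configure.
    fn create_request_context(&self) -> Box<dyn RequestContext> {
        Box::new(Plugin {
            config: self.config.clone(),
        })
    }
}
```

The clone is cheap relative to request handling and avoids sharing mutable state between the root context and per-request contexts.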

Dispatch HTTP call from inside wasm plugin

There are good examples in the SDKs' repos, so I'll just mention one feature that is not covered there. If you want to dispatch an HTTP call from your plugin, you can't do it as in regular code; you have to use the hostcalls API, wrapped and provided by the SDK.
In my case, I need to catch incoming requests in on_http_request_headers and call dispatch_http_call to perform the HTTP request to the API:

impl HttpContext for Plugin {
    fn on_http_request_headers(&mut self, _: usize, _: bool) -> Action {

        let http_call_res = self.dispatch_http_call(
            &self.config.upstream_name,
            vec![
                (":method", "GET"),
                (":path", &self.config.api_path),
                (":authority", &self.config.authority),
            ],
            None,
            vec![],
            Duration::from_secs(2),
        );

        if http_call_res.is_err() {
            error!(
                "Failed to dispatch HTTP call, to {} status: {http_call_res:#?}",
                self.config.upstream_name
            );
            Action::Continue
        } else {
            Action::Pause
        }
    }
}

Keep in mind that the first parameter is the name of the cluster we added with the Istio EnvoyFilter, "cluster-name" in this case.

Good, but http_call_res is not the API response; it's just the result of dispatching the call itself.
To get the response to our HTTP call, we need to implement one more callback:

 fn on_http_call_response(
        &mut self,
        _token_id: u32,
        _num_headers: usize,
        body_size: usize,
        _num_trailers: usize,
    ) {
        if let Some(response) = self.get_http_call_response_body(0, body_size) {
            let path = self.get_http_request_header(":path");
            if path.is_none() {
                error!("Failed to get_http_request_header");
                self.resume_http_request();
                return;
            }
            let path = path.unwrap();

            let response_string = String::from_utf8(response).unwrap_or("unknown".to_string());
            let modified_url = format!("{}?podId={}", path, response_string);

            self.set_http_request_header(":path", Some(&modified_url));
        } else {
            error!("API responded without body");
        }

        self.resume_http_request();
    }

After that patch, the HTTP request will match our DRs and VSs and will be routed to the pod with the corresponding name.
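One caveat: the snippet above appends "?podId=..." unconditionally, which produces a malformed URL if the client's path already carries a query string. A small helper (my addition, not part of the original plugin) that picks the right separator:

```rust
/// Append a `podId` query parameter to a request path,
/// using `&` when a query string is already present.
fn append_pod_id(path: &str, pod_id: &str) -> String {
    let separator = if path.contains('?') { '&' } else { '?' };
    format!("{path}{separator}podId={pod_id}")
}
```

In the callback above you would then build modified_url with append_pod_id(&path, &response_string) instead of the bare format! call.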

Convenient development

Istio gives us three options for delivering the wasm plugin binary into the cluster:

  • As a file in a pod
  • As a file available through HTTP
  • As OCI image (it's possible to store binary file in AWS ECR, for instance)

In a production environment, you will probably use the second or third option, but for development purposes I would pick the first. Note that it only works if your file is smaller than 1 MB (the ConfigMap size limit).
You only need to patch the Kubernetes deployments of the pods you want to apply the plugin to, mounting a ConfigMap into the pod's file system.
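A sketch of what that deployment patch might look like. This is illustrative, not a complete manifest; the ConfigMap name matches the wasm-plugin ConfigMap created in the script below, and the mount path matches the file:///opt/filters/plugin.wasm URL from the WasmPlugin spec above.

```yaml
# Fragment of the gateway Deployment spec (illustrative)
spec:
  template:
    spec:
      containers:
        - name: istio-proxy
          volumeMounts:
            - name: wasm-plugin
              mountPath: /opt/filters
              readOnly: true
      volumes:
        - name: wasm-plugin
          configMap:
            name: wasm-plugin
```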

To produce a wasm plugin binary smaller than 1 MB, you might need to add some options to your Cargo.toml file, like:

[profile.release]
opt-level = "z"
lto = true
codegen-units = 1

If you can fit within that constraint, I can offer the following script for developing and debugging the wasm plugin:

cargo build --target wasm32-unknown-unknown --release
cp ../target/wasm32-unknown-unknown/release/url_patcher_wasm_plugin.wasm ./
ls -s
# Update wasm plugin binary in cluster
kubectl delete configmap -n istio-system wasm-plugin || true
kubectl create configmap -n istio-system wasm-plugin --from-file ./url_patcher_wasm_plugin.wasm

# Update or create envoy filter spec (add cluster to Envoy in my case)
kubectl apply -f ./envoy-filter.yaml

# Update or create wasm plugin resource in cluster
kubectl apply -f ./wasm-plugin.yaml

# Restart gateways to apply changes
kubectl delete pods -n istio-system -l istio=ingressgateway --force

Also, even though I set the log level to trace in my wasm plugin code, I didn't see the logs until I set the log level manually. To do that, use kubectl exec or port-forward as shown earlier and set the log level with the following command:

curl 127.0.0.1:15000/logging?wasm=debug -XPOST

Conclusion

I think that extensibility through plugins, particularly Wasm plugins, is great. Being able to choose from a variety of languages and to build and deliver plugins the way I prefer is a big plus.

However, I must admit that working with the EnvoyFilter configuration can be a bit challenging, especially for newbies. It involves mixed Istio and Envoy abstractions and configurations, which require getting familiar with the Envoy documentation to understand Istio's features. It would be beneficial to improve the documentation and provide more real-world examples of Wasm plugin usage.
