T.J. Telan

Originally published at tjtelan.com

Let’s build a single binary gRPC server-client with Rust in 2020 - Part 3

In the previous post, we covered creating our data schema in the Protocol Buffer (protobuf) format, and using Rust build scripts to compile the protobufs into Rust code.

It is recommended that you follow along in order, since each post builds on the progress of the previous one.

This is the third post of a 4 part series. If you would like to view this post in a single page, you can follow along on my blog.


Server

Now that we can use the protobuf-generated code, let's move on to writing our server. We're going to write the server (and client) in a new module.

$ tree
.
├── build.rs
├── Cargo.toml
├── proto
│   └── cli.proto
└── src
    ├── main.rs
    └── remotecli
        ├── mod.rs
        └── server.rs

Cargo.toml

[package]
name = "cli-grpc-tonic-blocking"
version = "0.1.0"
authors = ["T.J. Telan <t.telan@gmail.com>"]
edition = "2018"

[dependencies]
# gRPC server/client
tonic = "0.3.0"
prost = "0.6"
# CLI
structopt = "0.3"
# Async runtime
tokio = { version = "0.2", features = ["full"] }

[build-dependencies]
# protobuf->Rust compiler
tonic-build = "0.3.0"

This is the last change we’ll be making to Cargo.toml.

We're adding tonic and prost as we implement the gRPC server/client. Prost is a Rust implementation of Protocol Buffers, and is needed to compile the generated code when we include it in the rest of the package.

Tokio is the async runtime we're using. The gRPC server/client are async, and we will need to adjust our main() to reflect that we're now calling async functions.

remotecli/mod.rs

pub mod server;

To keep the implementations organized, we’ll separate the server and client code further into their own modules. Starting with the server.

remotecli/server.rs

Similar to the frontend CLI walkthrough, I'll break this file up into pieces and review them. At the bottom of this file's section, I'll include the complete file for copy/paste purposes.

Imports

use tonic::{transport::Server, Request, Response, Status};

// Import the generated rust code into module
pub mod remotecli_proto {
   tonic::include_proto!("remotecli");
}

// Proto generated server traits
use remotecli_proto::remote_cli_server::{RemoteCli, RemoteCliServer};

// Proto message structs
use remotecli_proto::{CommandInput, CommandOutput};

// For the server listening address
use crate::ServerOptions;

// For executing commands
use std::process::{Command, Stdio};

At the top of the file, we declare a module remotecli_proto that is intended to be scoped only to this file. The name remotecli_proto is arbitrary, chosen for clarity.

The tonic::include_proto!() macro effectively copy/pastes our protobuf translated Rust code (as per protobuf package name) into the module.


The naming conventions of the protobuf translation can be a little confusing at first, but it is all consistent.

Our protobuf's RemoteCLI service generates separate client and server modules using snake case plus _server or _client, while generated trait definitions use Pascal case (a form of camel case with the initial letter capitalized).
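As a quick reference, here's roughly how the names map (paraphrasing the schema from the previous post; the comments show the generated Rust names):

```proto
// proto/cli.proto (from Part 2, paraphrased)
package remotecli;                  // -> tonic::include_proto!("remotecli")

service RemoteCLI {                 // -> modules remote_cli_server / remote_cli_client
                                    // -> trait RemoteCli, struct RemoteCliServer
  rpc Shell(CommandInput) returns (CommandOutput);  // -> async fn shell()
}
```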

From the server specific generated code, we are importing a trait RemoteCli which requires that we implement our gRPC endpoint Shell with the same function signature.

Additionally we import RemoteCliServer, a generated server implementation that handles all the gRPC networking semantics but requires that we instantiate with a struct that implements the RemoteCli trait.


The last imports from the generated gRPC code are our protobuf messages, CommandInput and CommandOutput.

From our frontend, we are importing the ServerOptions struct, since we are going to pass the user input in for the server listening address.


Lastly, we import Command and Stdio from std::process for executing commands and capturing output.

RemoteCli Trait implementation

#[derive(Default)]
pub struct Cli {}

#[tonic::async_trait]
impl RemoteCli for Cli {
   async fn shell(
       &self,
       request: Request<CommandInput>,
   ) -> Result<Response<CommandOutput>, Status> {
       let req_command = request.into_inner();
       let command = req_command.command;
       let args = req_command.args;

       println!("Running command: {:?} - args: {:?}", &command, &args);

       let process = Command::new(command)
           .args(args)
           .stdout(Stdio::piped())
           .spawn()
           .expect("failed to execute child process");

       let output = process
           .wait_with_output()
           .expect("failed to wait on child process");
       let output = output.stdout;

       Ok(Response::new(CommandOutput {
           output: String::from_utf8(output).unwrap(),
       }))
   }
}

We declare our own struct Cli because we need to impl RemoteCli.

Our generated code uses an async method. We add #[tonic::async_trait] to our trait impl so the server can use async fn on our method. We just have one method to define, async fn shell().


I'm waving my hands here about the function signature, but the way I initially learned how to write it was to go into the generated code, skim the remote_cli_server module, and modify the crate paths.


The first thing we do when we enter shell is peel off the tonic wrapping from request with .into_inner(). We further separate the ownership of data into command and args vars.

We build out process as the std::process::Command struct so we can spawn the user’s process and capture stdout.

Then we wait for process to exit and collect the output with .wait_with_output(). We just want stdout so we further take ownership of just that handle.
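The spawn-then-capture pattern works the same outside of gRPC. Here's a standalone sketch of what the shell method does with std::process (using echo as a stand-in for the user's command):

```rust
use std::process::{Command, Stdio};

// Run a command and capture its stdout as a String, the same way the
// server's shell endpoint does.
fn run_and_capture(cmd: &str, args: &[&str]) -> String {
    // Pipe stdout so the parent process can read it after the child exits.
    let child = Command::new(cmd)
        .args(args)
        .stdout(Stdio::piped())
        .spawn()
        .expect("failed to execute child process");

    // Block until the child exits, collecting everything it wrote.
    let output = child
        .wait_with_output()
        .expect("failed to wait on child process");

    String::from_utf8(output.stdout).unwrap()
}

fn main() {
    let out = run_and_capture("echo", &["hello"]);
    println!("captured: {}", out.trim());
}
```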


Last, we build a tonic::Response, converting the process stdout into a String as we instantiate our CommandOutput. Finally, we wrap the Response in a Result and return it to the client.

start_server

pub async fn start_server(opts: ServerOptions) -> Result<(), Box<dyn std::error::Error>> {
   let addr = opts.server_listen_addr.parse().unwrap();
   let cli_server = Cli::default();

   println!("RemoteCliServer listening on {}", addr);

   Server::builder()
       .add_service(RemoteCliServer::new(cli_server))
       .serve(addr)
       .await?;

   Ok(())
}

This function is what the frontend will call to start the server.


The listening address is passed in through opts. It’s passed in as a String, but the compiler figures out what type we mean when we call .parse() due to how we use it later.
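As a small standalone illustration of that inference, here's the same kind of parse with an explicit type annotation in place of the context the compiler would otherwise use:

```rust
use std::net::SocketAddr;

fn main() {
    // The same shape of String the user passes on the CLI. Here we
    // annotate the type explicitly; in start_server() the annotation is
    // unnecessary because serve(addr) expects a SocketAddr, which lets
    // the compiler infer what parse() should produce.
    let addr: SocketAddr = "127.0.0.1:50051".parse().unwrap();
    println!("host: {}, port: {}", addr.ip(), addr.port());
}
```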


We instantiate cli_server with the Cli struct which we implemented as the protobuf trait RemoteCli.


tonic::Server::builder() creates our gRPC server instance.

The .add_service() method takes RemoteCliServer::new(cli_server) to create a gRPC server with our generated endpoints via RemoteCliServer and our trait impl via cli_server.

The serve() method takes in our parsed listening address (providing the hint the compiler needed to infer its type) and returns a future for us to .await on, which yields a Result.

main.rs - so far

We are making small changes to main.rs to plug in the server module.

pub mod remotecli;

use structopt::StructOpt;

// These are the options used by the `server` subcommand
#[derive(Debug, StructOpt)]
pub struct ServerOptions {
   /// The address of the server that will run commands.
   #[structopt(long, default_value = "127.0.0.1:50051")]
   pub server_listen_addr: String,
}

// These are the options used by the `run` subcommand
#[derive(Debug, StructOpt)]
pub struct RemoteCommandOptions {
   /// The address of the server that will run commands.
   #[structopt(long = "server", default_value = "http://127.0.0.1:50051")]
   pub server_addr: String,
   /// The full command and arguments for the server to execute
   pub command: Vec<String>,
}

// These are the only valid values for our subcommands
#[derive(Debug, StructOpt)]
pub enum SubCommand {
   /// Start the remote command gRPC server
   #[structopt(name = "server")]
   StartServer(ServerOptions),
   /// Send a remote command to the gRPC server
   #[structopt(setting = structopt::clap::AppSettings::TrailingVarArg)]
   Run(RemoteCommandOptions),
}

// This is the main arguments structure that we'll parse from
#[derive(StructOpt, Debug)]
#[structopt(name = "remotecli")]
struct ApplicationArguments {
   #[structopt(flatten)]
   pub subcommand: SubCommand,
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
   let args = ApplicationArguments::from_args();

   match args.subcommand {
       SubCommand::StartServer(opts) => {
           println!("Start the server on: {:?}", opts.server_listen_addr);
           remotecli::server::start_server(opts).await?;
       }
       SubCommand::Run(rc_opts) => {
           println!("Run command: '{:?}'", rc_opts.command);

       }
   }

   Ok(())
}

We now import our remotecli module.

The main() function changes slightly as well. First, we change the function to be async.

We add the #[tokio::main] attribute so that our async main() runs on Tokio's runtime.

And we call our new start_server() to actually start a server when the user runs the server subcommand.

remotecli/server.rs all together

Here’s the final form of the server module.

use tonic::{transport::Server, Request, Response, Status};

// Import the generated rust code into module
pub mod remotecli_proto {
   tonic::include_proto!("remotecli");
}

// Proto generated server traits
use remotecli_proto::remote_cli_server::{RemoteCli, RemoteCliServer};

// Proto message structs
use remotecli_proto::{CommandInput, CommandOutput};

// For the server listening address
use crate::ServerOptions;

// For executing commands
use std::process::{Command, Stdio};

#[derive(Default)]
pub struct Cli {}

#[tonic::async_trait]
impl RemoteCli for Cli {
   async fn shell(
       &self,
       request: Request<CommandInput>,
   ) -> Result<Response<CommandOutput>, Status> {
       let req_command = request.into_inner();
       let command = req_command.command;
       let args = req_command.args;

       println!("Running command: {:?} - args: {:?}", &command, &args);

       let process = Command::new(command)
           .args(args)
           .stdout(Stdio::piped())
           .spawn()
           .expect("failed to execute child process");

       let output = process
           .wait_with_output()
           .expect("failed to wait on child process");
       let output = output.stdout;

       Ok(Response::new(CommandOutput {
           output: String::from_utf8(output).unwrap(),
       }))
   }
}

pub async fn start_server(opts: ServerOptions) -> Result<(), Box<dyn std::error::Error>> {
   let addr = opts.server_listen_addr.parse().unwrap();
   let cli_server = Cli::default();

   println!("RemoteCliServer listening on {}", addr);

   Server::builder()
       .add_service(RemoteCliServer::new(cli_server))
       .serve(addr)
       .await?;

   Ok(())
}

And that’s the server implementation and the frontend code for starting the server. It is a surprisingly small amount of code.

You can start an instance of the server by running:

$ cargo run -- server
[...]
Start the server on: "127.0.0.1:50051"
RemoteCliServer listening on 127.0.0.1:50051

We just covered taking the protobuf-compiled Rust code and using it to implement our gRPC server module. Then we wrote the server startup code and plugged it into our now-async CLI frontend.

In our final post, we'll cover writing our gRPC client and plugging it into our CLI frontend.

I hope you'll follow along!
