Harsh Joshi

Approaches to Serverless Deployment

Serverless is the go-to paradigm for many young startups today, and I have been living with its quirks, caveats, and corners ever since I started working with it.
This blog aims to establish "Deployment in Serverless" as a business strategy rather than just a development paradigm.
In this incrementally evolving piece, I will talk about serverless deployment strategies, business influences, decision factors, and a lot more.

Yes, serverless is cool. It is one less thing to do, but there’s certainly more to it. When you are not occupied with tedious, time-consuming server development, operations, and maintenance, you can take some time out to click some heads in your favorite multiplayer first-person shooter or even learn to make pepperoni pasta.

Especially when you are building a startup, you want to stay efficient and focus on the bigger problems at hand. There are many other factors teams consider before moving to the serverless paradigm, and once you have taken the call, you are presented with multiple strategies for deployment.


This blog helps you explore those approaches and suggests the one that might fit your strategy. It uses Google Cloud Platform for most of the examples and explanations, but the deployment strategies remain more or less the same across all the major cloud vendors.

Serverless computing is a method of providing back-end services on an as-used basis. A serverless provider lets users write and deploy code without the hassle of worrying about the underlying infrastructure. "Back end as a managed service" sums up the essence of the serverless paradigm: you write code as functions and ship it to a third party, and that third party is responsible for managing the entire infrastructure which hosts your code. You are given a window through which you and the rest of the world can interact with the code. While the management is done by the third party, it must be realized that you are still the one in control of your code.

Serverless computing allows developers to purchase back-end services on a flexible, pay-as-you-go basis, meaning that developers only have to pay for the services they use. This is like switching from a cell phone data plan with a fixed monthly limit to one that charges only for each byte of data that actually gets used.

One of the most commonly used serverless entities is the cloud function. Google Cloud defines Cloud Functions as a serverless execution environment for building and connecting cloud services. Cloud Functions let you write simple, single-purpose functions that are attached to events emitted from cloud infrastructure and services; the code executes in a fully managed environment.
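
To make that concrete, here is a minimal sketch of such a single-purpose function, assuming Python and Google's open-source functions-framework package; the function name hello_http and its payload are illustrative, not something prescribed by the platform.

```python
# A minimal sketch of a single-purpose HTTP function, assuming the
# functions-framework package (pip install functions-framework).
# The name hello_http and the greeting payload are illustrative only.
import functions_framework


@functions_framework.http
def hello_http(request):
    """Respond to an HTTP trigger; `request` is a Flask request object."""
    name = request.args.get("name", "world")  # optional query parameter
    return {"message": f"Hello, {name}!"}     # returned as a JSON response
```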

To put things into perspective: once you have the service architecture up and running, you will need to package your updated code and hand it to the provider. Management then becomes entirely the responsibility of the third-party cloud vendor, which is also responsible for rolling out the updated code and serving it to all future requests. However, you are in charge of having a mechanism to push the updated code from your source repositories to the cloud vendor. Each function instance is associated with its own environment, so an update won't affect any code that is already executing; the old instances generally die after completion, and the update appears to be instantaneous.
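
As a rough illustration of that "push the updated code" step, here is a hedged sketch that drives the gcloud CLI from a small Python script; the function name, runtime, and region below are assumptions, not values from this post.

```python
# A hedged sketch of pushing updated code to the vendor by calling the
# gcloud CLI. Function name, runtime, and region are placeholder assumptions.
import subprocess


def deploy_function(name: str, source_dir: str = ".") -> None:
    """Deploy one HTTP-triggered Cloud Function from a local source directory."""
    subprocess.run(
        [
            "gcloud", "functions", "deploy", name,
            "--runtime", "python311",   # runtime your code targets
            "--trigger-http",           # expose the function over HTTPS
            "--source", source_dir,     # directory containing main.py
            "--region", "us-central1",  # assumed region
        ],
        check=True,  # fail loudly if the deployment fails
    )


if __name__ == "__main__":
    deploy_function("hello_http")
```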

GCP says: “Google Cloud Functions is a serverless execution environment for building and connecting cloud services. With Cloud Functions you write simple, single-purpose functions that are attached to events emitted from your cloud infrastructure and services. Your function is triggered when an event being watched is fired. Your code executes in a fully managed environment.”

That’s comforting, right? Or is it alarming? That is a topic for a future post, when we talk about observability in managed services.

It is quite obvious that to use a serverless architecture you need to handle your deployments in a different way; most likely you will want a continuous integration and continuous delivery (CI/CD) pipeline. The delivery pipeline usually begins after development is completed, but it is just as important. Most cloud vendors today give you access to entities like cloud functions directly from the console, which might not be a good idea: with a serverless architecture you are already dependent on the third-party vendor for managing resources, and creating cloud functions from the console without keeping a copy of the source code elsewhere is asking for trouble.
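
One cheap way to enforce that discipline is a guard at the start of the delivery pipeline. The sketch below is purely illustrative: it refuses to deploy anything that is not committed to version control, so the console (or a laptop) never holds the only copy of the source.

```python
# A hedged sketch of a pre-deploy guard a CI/CD job might run: deploy only
# code that is committed to version control. Purely illustrative.
import subprocess


def working_tree_is_clean() -> bool:
    """Return True when the repository has no uncommitted changes."""
    status = subprocess.run(
        ["git", "status", "--porcelain"],
        capture_output=True, text=True, check=True,
    )
    return status.stdout.strip() == ""


if __name__ == "__main__":
    if not working_tree_is_clean():
        raise SystemExit("Refusing to deploy: uncommitted changes in the working tree.")
    print("Working tree clean; safe to hand the source to the delivery pipeline.")
```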

Deployment Options: Deployments work by shipping your function's source code to a location in cloud storage. There are a number of ways this can be done. Typically, Cloud Functions run with container orchestration; in simple words, each cloud function has its own environment. Read more here.

Typically, deployment can be done using one of the following techniques:

  • Ship from local machine
  • Deploy from source control
  • Deploy from Cloud Console
  • Deploy with the Cloud Functions API

Google Cloud also provides something called the Functions Framework to run and debug your functions locally for supported runtimes. It makes testing and debugging easier.
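
Sticking with Python, here is a hedged sketch of what a local debug run with the Functions Framework can look like, assuming the hello_http function from earlier lives in main.py and that functions_framework.create_app behaves as in recent releases of the package; the same thing is usually done with the functions-framework command-line tool.

```python
# A hedged sketch of running a function locally with the Functions Framework.
# Assumes the hello_http function from earlier is defined in main.py.
import functions_framework

if __name__ == "__main__":
    # create_app builds a local Flask app that routes requests to the target
    # function, approximating the managed environment closely enough to debug.
    app = functions_framework.create_app(target="hello_http", source="main.py")
    app.run(host="localhost", port=8080, debug=True)
```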

Choosing the right strategy: The right strategy depends on your business, your wallet, urgency, design, and a lot of other factors. Generally, deployment from source control is the way to go for many teams.
This is largely because this strategy gives you the benefits of version control, feature-based development, continuous integration, development coherence, and automated deployment. The source code is already stored as commits, tags can be placed on those commits, and those tags can be used to deploy the code. While most teams prefer to work in coherence yet want independent control over code modules, small-scale projects typically do not define a deployment strategy at all.


These tags act as triggers for the cloud providers' build engines. Good strategies include a mechanism to build only the difference in the code rather than the entire unit. A good example is a GCP-managed Python cloud function: the environment is rebuilt in full only when the function needs additional modules; otherwise only the changed code is shipped. You can observe this in the time taken to deploy in the two cases.
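
As a rough sketch of that "build only the difference" idea, the snippet below diffs the current release tag against the previous one and works out which functions actually need a redeploy; the tag names and the functions/<name>/ directory layout are assumptions made for illustration.

```python
# A hedged sketch of a tag-driven, build-only-the-difference step: compare two
# release tags and redeploy only the functions whose directories changed.
# The tag names and the functions/<name>/ layout are assumptions.
import subprocess


def changed_paths(previous_tag: str, current_tag: str) -> list[str]:
    """List files that differ between two release tags."""
    diff = subprocess.run(
        ["git", "diff", "--name-only", previous_tag, current_tag],
        capture_output=True, text=True, check=True,
    )
    return diff.stdout.split()


def functions_to_redeploy(paths: list[str]) -> set[str]:
    """Map changed files to function names, assuming a functions/<name>/... layout."""
    return {p.split("/")[1] for p in paths if p.startswith("functions/")}


if __name__ == "__main__":
    for name in sorted(functions_to_redeploy(changed_paths("v1.0.0", "v1.1.0"))):
        print(f"Would redeploy: {name}")
```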

To be continued! Next on this:

  • Managing Cloud Functions
  • Cloud Functions and Security
  • Deployment Pipelines
