Serverless architecture is becoming a compelling choice for developers and companies looking to host their applications. It is easy to see why: it scales dynamically to meet load requirements and removes much of the complexity of deploying and maintaining applications, sometimes even removing the need for an Ops team. But what security considerations should we weigh before choosing to go serverless?
- What is serverless architecture?
- Why choose to go serverless?
- Serverless security overview
- Security considerations in serverless
Serverless architecture (also known as serverless computing or Function as a Service, FaaS) is a software architecture where applications are hosted by a third-party service. This essentially means that your application is broken into individual services, which removes the need for server software and hardware management by the developers.
Hosting an application on the internet, as most modern software is, requires some kind of server infrastructure. With options from cloud providers such as AWS, GCP and Azure, it is more common today to use a virtual server, which removes most, if not all, of the physical hardware concerns. But these platforms still require a lot of setup and management of the operating environment. For complex applications, managing and maintaining these environments, as well as deployments to them, requires considerable resources and is often handled by a dedicated Ops team.
Serverless architecture removes this need, allowing you to focus purely on the individual services that make up your application. It also means applications can be auto-scaled dynamically depending on the workload. Developers only ever need to worry about their application and code.
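To make this concrete, here is a minimal sketch of what a serverless function looks like, using an AWS Lambda-style handler signature in Python. The event field and response shape are illustrative, not taken from any specific application:

```python
import json

def handler(event, context):
    # AWS Lambda-style entry point: the platform invokes this once per event
    # and scales instances automatically; there is no server process,
    # operating system, or web server for the developer to manage.
    name = event.get("name", "world")  # illustrative event field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

The whole deployable unit is this function; everything below it in the stack is the provider's concern.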
You should especially consider using a serverless provider if you have a small number of functions that you need hosted. If your application is more complex, a serverless architecture can still be beneficial, but you will need to architect your application very differently.
Every architecture design has its pros and cons, and serverless, despite all its comfort and elegance, is no exception: it is not without security considerations. This is not to say it is less secure by nature, but that there are new, unique security considerations we must weigh when choosing a serverless architecture.
It is true that with serverless architecture we can offload some of the responsibility onto the serverless provider, including responsibility for securing the data center, network, servers, operating systems and their configurations. However, application logic, code, data and application-layer configurations still need to be robust and resilient to attack, and these remain the responsibility of the application owner.
(Fig 1.1 Serverless Shared Responsibility Model)
Serverless functions consume data from a wide range of event sources, such as HTTP APIs, message queues, cloud storage and IoT device communications. This increases the attack surface, especially when such messages use protocols and complex message structures, many of which cannot be inspected by standard application-layer protections such as web application firewalls.
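Because no WAF sits between, say, a message queue and the function that consumes it, input validation has to happen inside the function itself. The sketch below shows this idea for an SQS-style record; the `order_id` field and the whitelist check are assumptions for illustration, not a real schema:

```python
import json

def parse_sqs_record(record):
    # Treat the message body as untrusted: no application-layer firewall
    # inspected this payload on its way to the function.
    try:
        payload = json.loads(record["body"])
    except (KeyError, TypeError, json.JSONDecodeError):
        raise ValueError("malformed record body")
    order_id = payload.get("order_id")
    # Whitelist-style check on the one field we expect (illustrative schema):
    # reject anything that is not a plain alphanumeric string.
    if not isinstance(order_id, str) or not order_id.isalnum():
        raise ValueError("invalid or missing order_id")
    return {"order_id": order_id}
```

Rejecting unexpected input up front keeps a malformed or malicious queue message from reaching business logic or back-end services.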
In addition, the attack surface in serverless architectures can become increasingly complex and difficult to fully map. Many software developers and architects have yet to gain enough experience with the security risks, and the appropriate protections, required to secure such applications.
Visualizing and monitoring serverless architectures is still more complex than standard software environments. A lack of visibility can result in breaches going undetected for much longer periods of time.
Some tools have recently emerged to improve visibility into serverless deployments. One such tool, Dashbird, helps teams operate serverless applications: it offers failure detection, analytics, and visibility for AWS Lambda-based solutions. If you are not using AWS infrastructure, IOpipe offers a bespoke observability stack for serverless architectures, providing fine-grained, near-real-time visibility into applications built using serverless computing.
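Even without a dedicated observability product, a lightweight practice that helps is emitting structured JSON log lines from each function, since log aggregators (CloudWatch Logs, for instance) can index the fields and make short-lived invocations traceable. A minimal sketch, with illustrative event names and fields:

```python
import json
import time

def log_event(event_type, **fields):
    # One JSON object per line: log aggregators can parse and index these
    # fields, which makes it far easier to search and correlate the many
    # short-lived invocations a serverless application produces.
    line = json.dumps({"ts": time.time(), "event": event_type, **fields})
    print(line)
    return line

# Illustrative usage inside a function:
log_event("cold_start", function="process_orders")
log_event("message_processed", queue="orders", duration_ms=42)
```

This does not replace a monitoring platform, but it gives breach investigations something to work with.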
Performing security testing on serverless architectures is more complex than testing standard applications, especially when such applications interact with remote third-party services or with back-end cloud services such as NoSQL databases, cloud storage, or stream processing services. In addition, automated scanning tools are currently not adapted to scanning serverless applications:
- DAST (dynamic application security testing) tools only provide testing coverage for HTTP interfaces. This poses a problem when testing serverless applications that consume input from non-HTTP sources or interact with back-end cloud services. In addition, many DAST tools struggle to effectively test web services (e.g. RESTful services) that don't follow the classic HTML/HTTP request/response model and request format.
- SAST (static application security testing) tools rely on data flow analysis, control flow and semantic analysis to detect vulnerabilities in software. Since serverless applications contain multiple distinct functions that are stitched together using event triggers and cloud services (e.g. message queues, cloud storage or NoSQL databases), statically analyzing data flow in such scenarios is highly prone to false positives. In addition, SAST tools will also suffer from false negatives, since source/sink rules in many tools do not take into account FaaS constructs. These rule sets will need to evolve in order to provide proper support for serverless applications.
As applications grow in size and complexity, there is a need to store and maintain "application secrets" -- for example:
- API keys
- Database credentials
- Encryption keys
- Sensitive configuration settings

One of the most frequently recurring mistakes in application secrets storage is to simply store these secrets in a plain-text configuration file that is part of the software project. In such cases, any user with "read" permissions on the project can access these secrets. The situation gets much worse if the project is stored in a public repository.
In serverless applications, each function is packaged separately, so a single centralized configuration file cannot be used. This leads developers to "creative" approaches such as environment variables, which, if used insecurely, may leak information. A common mistake is to store these secrets in plain text as environment variables. While environment variables are a useful way to persist data across serverless function executions, in some cases they can leak and reach the wrong hands.
Serverless deployment also limits the ability to control access to secrets, leading to increased secret sprawl. If secrets are stored as environment variables, it is likely that anyone with permission to deploy the application also has permission to access the sensitive data.
It is critical that all application secrets be stored in secure encrypted storage and that encryption keys be maintained via a centralized encryption key management infrastructure or service. Depending on the provider you are using, there are different tools to manage this: AWS Secrets Manager (link), the Serverless secrets storage project on GitHub (link), and Azure Key Vault (link).
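As one example of the pattern, a function can keep only the secret's *name* in its environment and fetch the decrypted value at runtime from AWS Secrets Manager (the `get_secret_value` call and `SecretString` response field are the real boto3 API; the secret name, env var and injectable client are illustrative choices made here so the sketch can be exercised without AWS access):

```python
import json
import os

# Module-level cache: the execution environment may be reused across
# invocations, so we avoid one Secrets Manager call per request.
_cache = {}

def get_secret(name, client):
    # In production `client` would be a boto3 Secrets Manager client:
    #   client = boto3.client("secretsmanager")
    # Passing it in as a parameter keeps this helper testable offline.
    if name not in _cache:
        response = client.get_secret_value(SecretId=name)
        _cache[name] = json.loads(response["SecretString"])
    return _cache[name]

def db_credentials(client):
    # Only the secret's *name* lives in the environment; the secret value
    # itself never appears in configuration. Names here are illustrative.
    return get_secret(os.environ.get("DB_SECRET_NAME", "prod/db"), client)
```

With this indirection, someone who can read the deployed environment variables learns only where the secret lives, and access to the value itself is governed by IAM permissions on the secret.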
It is also crucial to have visibility into where secrets may have sprawled. For this, consider GitGuardian, which can scan repositories and other locations where secrets may have leaked.
Serverless architecture can be an elegant solution that reduces the need for dedicated Ops teams to manage infrastructure. Like everything in development, it comes with pros and cons. Serverless applications are not by nature less secure, but they come with a different set of challenges that need to be addressed before making the decision to go serverless.