Primitives and Frameworks
We have heard many times that AWS builds primitives, not frameworks. But what does that mean? For starters, here are my own definitions of the terms:
Primitives are low-level, highly reusable building blocks with a high degree of Inversion of Control. With primitives, we give developers the power to implement their use cases and solve business problems, without leaving them on their own: we also provide added-value tools such as UI components, APIs, or SDKs.
Frameworks, by contrast, are high-level abstractions that encapsulate concrete, particular use cases. These solutions take longer to build because you are locking down a concrete use case, often for a concrete persona. That use case is only modifiable via the predefined API of the abstraction you are building, such as smart UI components pre-connected to the backend, low-code tools, or even configurable apps (the highest-level software abstraction possible!).
Are AWS execs really committed to this vision of building primitives only? Of course they are. After another year of AWS re:Invent, we can see how AWS doubles down on this strategy by introducing new primitives (AWS VPC Lattice) and evolving the existing ones (e.g., AWS Lambda SnapStart), especially around serverless. It looks to me like adding the serverless identifier to an existing service name is a safe path to innovation (and survival) at AWS in these difficult times. I have no other explanation. How else can you claim that new services such as AWS OpenSearch Serverless are serverless? That one is far from the first principles of usage-based pricing without minimums.
All this made me think. What happens when all AWS services have a serverless flavor? What's next after serverless?
We can look at this from two perspectives.
A cloud provider perspective
We call The Cloud a set of industrialized infrastructure and computing services offered as commodities you can subscribe to. This definition includes serverless computing, which is nothing but "runtime as a service": the cloud provider owns the infrastructure and runtimes, and the customer owns the application data and code.
Will AWS (or any other cloud provider) ever go up the stack and own the application data and code? No; it's implausible that they would enter the SaaS space with out-of-the-box applications, leaving their platform business model behind. As masters of industrialization, I can see AWS in the foreseeable future commoditizing all aspects of modern computing until they reach the highest level of abstraction possible according to their vision. What is this level? Remember, they don't build frameworks, so don't expect them to create new services that lock down specific use cases to help developers build a particular type of application. Instead, they make primitives, so expect new cloud computing services arriving in disruptive fields such as AI and simulated worlds, some of them with all kinds of serverless identifiers in their names. Also, expect improved documentation, refined APIs, and better pricing schemes.
In a nutshell, what's next after serverless for them? Nothing; they will keep industrializing computing services. Serverless is the endgame for hyperscale cloud providers such as AWS, Google Cloud, or Microsoft Azure.
Well, there is one path I believe they will eventually test more deeply: the idea of distributed provisioning with centralized control. This is about pushing the cloud's limits to the edge, where the edge is any piece of infrastructure closest to the user, even if that piece is self-managed by the customer. In other words, this is like transforming your own datacenter into a custom AWS cloud region. In hindsight, AWS ECS Anywhere was a timid attempt to test these waters, and we haven't seen much traction from AWS in evolving this family of services.
A niche player perspective
With all the above, there seems to be a business case for niche players. There is an opportunity now to build frameworks that lock down concrete use cases and solve specific problems for developers building serverless applications. You know AWS would never do it ... and they should not, as Jeremy points out in this thread!
Okay, but what are these problems? Let me tell you: problem number one is the (poor) serverless development experience.
One issue I hear often is that when you are building serverless applications, you don't have a clear view of what services you are using. It's difficult to find your stuff. I've even heard a few developers claim that "with monolithic and serverful architectures, I know all the frameworks and technologies I am using just by looking at the project structure in VS Code".
In the early days of serverless, my answer was: "When you are building a serverless app, that view is provided by the AWS Console. You can see all you need there!" Almost. Now I admit my answer was not very satisfactory, and those developers still had a point. It's all too scattered in the AWS Console, and it is very complicated to track down the AWS services you are using just by looking in there.
The problem is that, from a developer's point of view, you don't have a much-needed, app-centric view of the serverless services you are composing into your application. Of course, the AWS Console is not the answer, but pointing the developer to lengthy AWS SAM or CloudFormation YAML files is also NOT the answer. That's not the right developer experience.
In a sense, newer developer cloud tech such as Terraform and AWS CDK partially alleviated this problem. With Infrastructure as Code scripts, you can now have a dedicated repo of source code that defines and composes your infrastructure. Now you can see all the services you are using just by looking at the IaC project structure in VS Code! That's a secondary code repo, separate from the main application business code repo, and that's okay. The IaC code repository is just for application composition code, not business logic, and it's good to have them separated. One could argue that the new AWS Application Composer service introduced at this year's AWS re:Invent solves this problem visually. I have mixed feelings about this service generating YAML code, although I like the idea of deploying your UML collaboration diagram as a serverless app!
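To make that app-centric view concrete, here is a minimal AWS CDK v2 sketch in TypeScript. The stack and resource names are hypothetical, but the point stands: one file shows every serverless service the app is composed of.

```typescript
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as apigateway from 'aws-cdk-lib/aws-apigateway';
import * as dynamodb from 'aws-cdk-lib/aws-dynamodb';

// A hypothetical serverless app: one glance at this stack tells you it is
// composed of DynamoDB + Lambda + API Gateway. No console spelunking needed.
export class OrdersStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Usage-based pricing, no minimums: the serverless first principle.
    const table = new dynamodb.Table(this, 'OrdersTable', {
      partitionKey: { name: 'orderId', type: dynamodb.AttributeType.STRING },
      billingMode: dynamodb.BillingMode.PAY_PER_REQUEST,
    });

    const handler = new lambda.Function(this, 'OrdersHandler', {
      runtime: lambda.Runtime.NODEJS_18_X,
      code: lambda.Code.fromAsset('dist/orders'), // business logic lives elsewhere
      handler: 'index.handler',
      environment: { TABLE_NAME: table.tableName },
    });
    table.grantReadWriteData(handler);

    new apigateway.LambdaRestApi(this, 'OrdersApi', { handler });
  }
}
```

Notice that this repo contains only composition code; the business logic in `dist/orders` ships from the application repo, which is exactly the separation argued for above.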
So, where is all this heading? What's next after serverless that niche players can help with? It seems the answer revolves around a better developer experience for building serverless apps, and here's an idea that new cloud providers are introducing: why do you even care which serverless services you are using to compose your app? What if you let the cloud platform decide the best infrastructure for your application based on its code?
Enter Infrastructure FROM Code. I see an evolution here:
- Infrastructure and code: This is traditional DevOps and IaaS, where operations teams write deployment scripts in Chef, Puppet, or custom tech to provide the infrastructure necessary to run the app.
- Infrastructure as code: This is where developers can write their own Terraform and AWS CDK scripts to compose the infrastructure necessary for their apps.
- Infrastructure from code: This is where cloud providers transparently provision the infrastructure resources needed to run your applications, so neither development nor operations teams need to care about it (see the sketch after this list).
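To illustrate the last step, here is a hypothetical Infrastructure-from-Code sketch in TypeScript. The `api` and `storage` modules are invented for illustration and do not belong to any real SDK; the idea is that the platform infers what to provision from the code itself.

```typescript
// Hypothetical IfC development kit: these imports are illustrative only.
import { api, storage } from 'ifc-sdk';

// Declaring an HTTP route is the only signal the platform needs to
// provision an API endpoint plus a serverless function behind it.
api.get('/orders/:id', async (req, res) => {
  // Referencing a named store is the only signal the platform needs to
  // provision the backing bucket/table. No IaC script exists anywhere.
  const order = await storage('orders').get(req.params.id);
  res.json(order);
});
```

Compare this with the CDK sketch above: there is no composition repo at all; the application code *is* the infrastructure definition.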
Infrastructure From Code
First of all, let me clarify something. In my opinion, those niche players betting on Infrastructure from Code frameworks qualify as cloud providers. As I have said multiple times, you don't need to own a data center and offer your infrastructure and runtimes as a service to be a cloud provider. Instead, evolve and industrialize your services (even if it is pure software) and offer your resources as commodities with sprinkles of self-service, and you too will become a cloud provider.
Okay, so what examples of cloud providers fit this definition and offer Infrastructure from Code features? The answer is: many already, with players such as Ampt and Shuttle disrupting the space with purpose-built IfC features and development kits.
However, I have to admit that I have a strong bias toward Vercel, for multiple reasons, the main one being precisely their Infrastructure from Code features designed explicitly for frontend developers building with the Next.js framework. Think about the following potential scenarios if you are building a Next.js app:
- If you are hosting the app on your own infrastructure, it is very likely that after you get your UI bundle with all the assets, you have to put it on a `nextjs` hot server running inside a container (or inside an AWS Lambda function if you are brave enough). In any case, you need to think about the computing services you need, provision them with Terraform or AWS CDK scripts (if you are using AWS), and get your app deployed.
- However, if you are hosting the app on Vercel, you don't need to think about any of this. Vercel takes care of splitting the bundle to deploy each kind of asset on the right piece of infrastructure. And they do this transparently during the build process, so you don't even need to care which AWS (or other) serverless services they are using under the hood. They could put the static assets in an AWS S3 bucket, the SSR bits in an AWS Lambda function, and the Edge Functions on another runtime on the CDN. In fact, that's what they actually do!
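As a rough sketch of that bundle splitting, consider a single Next.js page. The page API is standard Next.js, but the file path and data are made up, and the infrastructure mapping in the comments reflects my understanding of what Vercel does under the hood, not official documentation.

```tsx
// pages/product/[id].tsx — one page, several kinds of infrastructure.
import type { GetServerSideProps } from 'next';

type Product = { name: string };

// Server-rendered data fetching: on Vercel, this part of the build output
// is deployed as a serverless function (an AWS Lambda under the hood).
export const getServerSideProps: GetServerSideProps<{ product: Product }> =
  async ({ params }) => {
    const res = await fetch(`https://api.example.com/products/${params?.id}`);
    return { props: { product: (await res.json()) as Product } };
  };

// The compiled JS/CSS for this component ends up as static assets,
// served from object storage (e.g., an S3 bucket) behind the CDN.
export default function ProductPage({ product }: { product: Product }) {
  return <h1>{product.name}</h1>;
}
```

The developer writes one file; the platform decides at build time which piece goes where. That is Infrastructure from Code in practice.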
In the case of Vercel, the Infrastructure from Code features are super powerful for your Next.js app. Because they own both the framework and the platform, they are in a very good position to offer this kind of added-value service, while others are always playing catch-up!
Conclusions
It's hard to predict where things will go in terms of cloud development, but one thing remains clear to me: there is an opportunity for startups to bet on the serverless development experience. This is something that hyperscalers wouldn't do. It is still an uncharted area where we need definitions and techniques, but this is very likely what the future of cloud-native development looks like.
(Cover photo credit: Alex Kulikov via Unsplash).