There's no doubt that the world has gone async. People no longer wait at loading screens for 30 minutes while a task completes behind the scenes. We've figured out how to build delightful experiences that keep end users productive while other things are going on. We know how to make the most of our users' time.
It's something we've come to expect nowadays. If software makes people wait without offering at least a distraction, they'll leave. They will stop using your app because making someone sit and wait simply isn't tolerable anymore.
This realization can be a little daunting for software engineers. You know you need to build APIs that are fast and scale well, but what happens when your workflow simply takes a long time? 2023 led us straight into a world of AI-heavy workloads - processes that traditionally don't complete in 250ms or less. These flows take upwards of 30 seconds to a minute if you're lucky, and sometimes much more.
So what do we do? Lean in on async processing.
Kicking off background jobs, posting status updates to a WebSocket, firing and forgetting child processes, and going heavy on event-driven architectures are what will keep us moving at this breakneck speed of innovation. While these things sound scary, they aren't too bad. I've built something that makes it a little easier for you.
Event-Driven APIs
I'm all about APIs. I think all software should be API-first, with intentional thought and effort put into the contracts and paths that make up your application. Designing how users interact with your API should be a conscious and well-thought-out endeavor because that's how your customers will form their impression of you.
The problem with many REST-based APIs is that they tend to be synchronous: you make a call and wait for a response. That doesn't always work well with async workflows. A response is of course necessary for enrichments and fetching data, but for things like a DELETE, PUT, or sometimes even a POST, you don't need it.
If you don't need a response and your intention is to call the service and move on, you're following a fire-and-forget approach. AWS makes communicating with APIs in this manner a breeze.
Through the use of EventBridge API destinations, you can invoke an HTTP endpoint as a target of an EventBridge rule. This means you could simply drop an event on an event bus and carry on with your workflow as the HTTP endpoint is called asynchronously. Pretty nice, right?
I was building out API destinations and EventBridge rules by hand for a Momento integration the other day and thought there had to be an easier way. There are only a few AWS resources involved with an API destination and most of them can be reused if they are targeting the same API. So I turned my eyes toward my Open API Spec.
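Before jumping to the solution, it helps to see just how few moving parts there are. Here's a rough, hand-written sketch of the resources behind a single API destination - the names, endpoint, and referenced role are invented for illustration, and the template the action generates will differ:

```yaml
# Illustrative resources for one API destination - names and values are hypothetical
Resources:
  ApiConnection: # reusable across every destination targeting the same API
    Type: AWS::Events::Connection
    Properties:
      AuthorizationType: API_KEY
      AuthParameters:
        ApiKeyAuthParameters:
          ApiKeyName: x-api-key
          ApiKeyValue: !Ref ApiKey # stored in Secrets Manager on your behalf
  UpdateWidgetDestination:
    Type: AWS::Events::ApiDestination
    Properties:
      ConnectionArn: !GetAtt ApiConnection.Arn
      HttpMethod: PUT
      InvocationEndpoint: https://api.example.com/widgets/* # '*' is filled per event
      InvocationRateLimitPerSecond: 10
  UpdateWidgetRule:
    Type: AWS::Events::Rule
    Properties:
      EventPattern:
        detail-type:
          - Update Widget
      Targets:
        - Id: update-widget
          Arn: !GetAtt UpdateWidgetDestination.Arn
          RoleArn: !GetAtt InvokeRole.Arn # role (not shown) with events:InvokeApiDestination
          HttpParameters:
            PathParameterValues:
              - $.detail.widgetId
```

The connection (and the secret behind it) is the piece that's reusable across destinations pointing at the same API, which is exactly why generating all of this from one spec makes sense.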
Converting Open API Specs to API Destinations
If you know me even a little bit, you know I'm a huge advocate of Open API. When written properly, a spec contains all the information you need to fully integrate with an API: server/environment information, security schemes, endpoint definitions, request and response schemas, and a list of all the parameters (query, header, and path) you could pass in at any time.
It turns out this is exactly the information you need when building API destinations! You need the full invocation URL, path parameters, query string parameters, headers, and authentication information. There's room here for a happy conversion process!
I initially started off trying to convert my specs to CloudFormation via a Node script on my machine. I got that working in pretty short order but didn't think it would actually be used by others with their specs. Something about running a JavaScript script on a local machine puts people off a bit 😊.
So instead I've decided to release a GitHub action so you can incorporate it into your CI pipeline to run as you make updates to your spec. This action will automatically take your API definition and create a CloudFormation script you can use as a one-click deploy to add EventBridge rules and API destinations in your AWS account.
You can use this CloudFormation for yourself or put it behind a "Launch Stack" button that opens the CloudFormation console and prompts you for deployment variables (if necessary), much like I did here.
Let's take a look at the action definition and understand some of the prerequisites for your Open API spec.
Action Definition
To use the action in your workflows, you can add the following step in your pipelines:
```yaml
- name: Generate CloudFormation from OpenAPI spec
  uses: allenheltondev/api-spec-to-api-destinations@v1
  with:
    specPath: path/to/openapi/spec.yaml
    blueprint: path/to/template.yaml
    environment: prod
    httpMethods: POST,DELETE,PUT
    resourcePrefix: MYAPP
    outputFilename: template.yaml
```
Let's talk about these:
| Name | Description | Required |
|---|---|---|
| `specPath` | Path to the OpenAPI spec. | ✅ |
| `blueprint` | Path to a template file you'd like to use as a basis. Useful if you have authentication parameters to provide. | ❌ |
| `environment` | Value in the description field of a server in your OpenAPI spec. Used to get the base path for the API destinations. Defaults to the first server if none is provided. | ❌ |
| `httpMethods` | Comma-separated list of HTTP methods to convert to API destinations (e.g. `GET,POST,PUT,DELETE`). | ❌ |
| `resourcePrefix` | Prefix to use for all generated resources. | ❌ |
| `outputFilename` | Filename for the generated output. Defaults to `template.yaml` if not provided. | ❌ |
It's a relatively simple action and each one of the optional fields has a meaningful default.
Open API Requirements
One of the things I like most about OAS is how little room it leaves for ambiguity. It's a clear specification that defines exactly what you can and can't have throughout the entire file. Because of this, we can make assumptions when building integrations and transformations because we know the format and location of all the data we need. So let's talk about the fields that are optional in OAS but required in your spec for this action to work.
- You must include at least one server in your spec - these represent your environments (dev, stage, prod). You pass the value from the description of a server in the `environment` argument of the action, which tells the action what the base URL is for your API (see the snippet after this list).
- Include an `operationId` for every endpoint. This uniquely identifies the path and HTTP method, giving us a detail type for EventBridge.
- Any supported query string parameter must be defined in the spec. This will be used to build an input transformer for your EventBridge rules.
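To make those requirements concrete, here's a minimal, hypothetical spec fragment showing where each of those fields lives. The URLs and names are invented for illustration:

```yaml
# Hypothetical OpenAPI fragment - URLs and names are made up
servers:
  - url: https://api.dev.example.com
    description: dev
  - url: https://api.example.com
    description: prod # pass "prod" as the action's environment argument
paths:
  /widgets/{widgetId}:
    parameters:
      - name: widgetId # path parameters are declared as usual
        in: path
        required: true
        schema:
          type: string
    put:
      operationId: Update Widget # becomes the detail type of your events
      parameters:
        - name: revision # query params must be defined to be mapped into the rule
          in: query
          schema:
            type: string
```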
Of course, the rest of your spec needs to be properly defined, like your path definitions and path parameters. For an example spec, you can check out this one.
Constraints
Alright, time for the bad news. We need to cover what is not yet supported in this solution.
- Custom headers are not supported
- Only API key authentication is supported
If you'd like to update the source code to add header support or support for other forms of auth besides API keys, I accept pull requests! But for now, path parameters, query string parameters, and API keys only.
EventBridge Events
Once you deploy the generated CloudFormation stack, you can immediately start invoking your endpoints via EventBridge. Using the data from your spec, EventBridge rules are created that map values from an event payload to your API endpoint.
Let's take an example endpoint from a theoretical Open API spec that adds an injury report to a football player:
```yaml
paths:
  /players/{playerId}/injuries:
    parameters:
      - name: playerId
        in: path
        required: true
        schema:
          type: string
    post:
      parameters:
        - name: followUpDate
          in: query
          required: false
          schema:
            type: string
            format: date
      operationId: Add Player Injury
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              properties:
                type:
                  type: string
                injuryDate:
                  type: string
                  format: date
              required:
                - type
                - injuryDate
```
This endpoint has a path parameter of `playerId` and a query string parameter of `followUpDate`. It also requires a JSON payload in the body of the request with `type` and `injuryDate` properties.
If we were to trigger this endpoint via our EventBridge rule, it might look something like this via the AWS SDK v3 for JavaScript:
```javascript
import { EventBridgeClient, PutEventsCommand } from '@aws-sdk/client-eventbridge';

const events = new EventBridgeClient();

await events.send(new PutEventsCommand({
  Entries: [
    {
      Source: 'my-football-app',
      DetailType: 'Add Player Injury',
      Detail: JSON.stringify({
        playerId: '7',
        followUpDate: '2023-10-31',
        message: {
          type: 'Ankle sprain',
          injuryDate: '2023-09-20'
        }
      })
    }
  ]
}));
```
You can see we've used the operation id as the `DetailType` of the event, the path and query parameters are root-level properties in the detail, and our request body is contained in a `message` property. That's it! This will asynchronously invoke our endpoint while we carry on with our workflow.
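If you're curious how that mapping happens, it lives on the generated rule's target. Here's a hand-written sketch of what that target configuration plausibly looks like for this endpoint - it's illustrative rather than the action's literal output, and the resource names are invented:

```yaml
# Hypothetical rule target for the Add Player Injury endpoint
Targets:
  - Id: add-player-injury
    Arn: !GetAtt AddPlayerInjuryDestination.Arn
    RoleArn: !GetAtt InvokeRole.Arn # role (not shown) allowed to invoke the destination
    InputTransformer: # forwards only the message property as the request body
      InputPathsMap:
        body: $.detail.message
      InputTemplate: '<body>'
    HttpParameters:
      PathParameterValues:
        - $.detail.playerId # fills the '*' placeholder in the invocation endpoint
      QueryStringParameters:
        followUpDate: $.detail.followUpDate
```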
Take It For a Spin
This is a very easy way to invoke API calls asynchronously. Take your spec, convert it to events, and boom - done.
Of course, this doesn't work for all use cases. Deliveries can fail due to network issues or bad input, and failed deliveries are automatically routed to a dead letter queue. Be sure to monitor it! The tradeoff with a "fire and forget" approach is error handling and resiliency (aka production-grade development).
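Monitoring can be as simple as a CloudWatch alarm that fires whenever anything lands in the queue. A minimal sketch, assuming the DLQ is an SQS queue in the same stack and you have an SNS topic for alerts (`DeadLetterQueue` and `AlertTopic` are both assumptions here):

```yaml
# Hypothetical alarm - DeadLetterQueue and AlertTopic are assumed resources
FailedDeliveryAlarm:
  Type: AWS::CloudWatch::Alarm
  Properties:
    AlarmDescription: An API destination delivery failed and landed in the DLQ
    Namespace: AWS/SQS
    MetricName: ApproximateNumberOfMessagesVisible
    Dimensions:
      - Name: QueueName
        Value: !GetAtt DeadLetterQueue.QueueName
    Statistic: Maximum
    Period: 300
    EvaluationPeriods: 1
    Threshold: 0
    ComparisonOperator: GreaterThanThreshold
    AlarmActions:
      - !Ref AlertTopic # SNS topic to notify
```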
But in many use cases it does work, and I highly encourage you to use it! API destinations are cheap - they cost $0.20 per million invocations, the same as Lambda's request cost minus the compute time. Your API credentials are stored for free in Secrets Manager when you create an API destination connection, dropping your costs even more!
I love this form of invocation because it opens the door to so many possibilities, including direct integration with Step Functions! Instead of making the call in a Lambda function or a proxied API Gateway endpoint, just drop an event. So simple, and it just works.
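As a quick illustration, here's what dropping that event from a state machine could look like using the optimized `events:putEvents` integration, written in YAML as you might embed it in a SAM template. The state name and input paths are hypothetical:

```yaml
# Hypothetical task state publishing the event directly from Step Functions
ReportInjury:
  Type: Task
  Resource: arn:aws:states:::events:putEvents
  Parameters:
    Entries:
      - Source: my-football-app
        DetailType: Add Player Injury
        Detail:
          playerId.$: $.playerId # '.$' pulls dynamic values from the state input
          followUpDate.$: $.followUpDate
          message:
            type.$: $.injuryType
            injuryDate.$: $.injuryDate
  End: true
```

No Lambda, no API Gateway - the state machine publishes the event and the API destination takes it from there.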
Let me know if you try it out or if you want to make a contribution, a little bit of help goes a long way!
Happy coding!