Part 1 in a Series on Event Driven Documentation
In this series we're going to SUPERCHARGE developer experience by implementing Event Driven Documentation. In this post, we'll start by using CDK to deploy EventCatalog to a custom domain using CloudFront and S3. In Part 2 we'll use AWS Service Events from CloudFormation to detect when an API Gateway has deployed and export the OpenAPI spec from AWS to bundle it in our EventCatalog. In Part 3 we'll export the JSONSchema of EventBridge Events using schema discovery and bundle them into the EventCatalog.
🛑 Not sure where to start with CDK? See my CDK Crash Course on freeCodeCamp
The architecture we'll be deploying with CDK is:
In this post we'll be focusing on creating the "Watcher Stack" that creates the EventCatalog UI Bucket along with the CloudFront distribution. I'll be going into more detail on our target architecture in the follow-on posts.
💻 The code for this series is published here: https://github.com/martzcodes/blog-event-driven-documentation
🤔 If you have any architecture or post questions/feedback... feel free to hit me up on Twitter @martzcodes.
What is EventCatalog?
EventCatalog is an awesome Open Source project built by David Boyne that helps you document your events, services and domains. It ingests a combination of markdown files, OpenAPI specs and EventBridge event schemas (or AsyncAPI specs) to build a static documentation site.
🙈 SPOILER ALERT: You can see this in action at docs.martz.dev which will be the result of this series🤫
Deploying EventCatalog
Our EventCatalog is going to be stored in S3 and hosted via CloudFront. To do that we're going to create a Level 3 CDK Construct that will:
- Create the UI Bucket
- Create the CloudFront Distribution that hosts the contents of the UI Bucket
- Use a BucketDeployment resource to upload the EventCatalog assets to the UI Bucket
After initializing a CDK project, we can install EventCatalog using `npx @eventcatalog/create-eventcatalog@latest catalog`, and it will go into a `catalog` subfolder in our project.
Create the UI Bucket
Creating Buckets with CDK is fairly simple. We'll use the L2 Bucket Construct in our L3 Catalog construct to create the bucket... and then we'll give it some sensible defaults.
import { RemovalPolicy } from "aws-cdk-lib";
import { BlockPublicAccess, Bucket, ObjectOwnership } from "aws-cdk-lib/aws-s3";
import { Construct } from "constructs";

export class CatalogOne extends Construct {
  constructor(scope: Construct, id: string) {
    super(scope, id);

    // Private bucket that will hold the static EventCatalog build output
    const destinationBucket = new Bucket(this, `EventCatalogBucket`, {
      removalPolicy: RemovalPolicy.DESTROY,
      blockPublicAccess: BlockPublicAccess.BLOCK_ALL,
      objectOwnership: ObjectOwnership.BUCKET_OWNER_ENFORCED,
      autoDeleteObjects: true,
    });
  }
}
- `removalPolicy: RemovalPolicy.DESTROY` will delete the Bucket if the Stack is destroyed. In order to do this, we need to make sure the bucket is empty. `autoDeleteObjects: true` creates a CustomResource that will empty the bucket when the Stack is destroyed.
- `blockPublicAccess: BlockPublicAccess.BLOCK_ALL` will prevent users from directly retrieving files from S3 (forcing them to go through CloudFront).
- `objectOwnership: ObjectOwnership.BUCKET_OWNER_ENFORCED` enforces normal IAM permissions on the bucket (instead of the hard-to-use ACL-based permissions that S3 started with long ago).
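Before moving on to CloudFront, here's a minimal sketch of how this construct might be wired into a stack (the stack class name matches the one used later in this post; the import path is an assumption for illustration, not necessarily the exact repo code):

import { Stack, StackProps } from "aws-cdk-lib";
import { Construct } from "constructs";
import { CatalogOne } from "./constructs/catalog-one"; // path assumed for illustration

export class BlogDevCatalogStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);
    // The L3 construct owns the bucket, the CloudFront distribution, and the deployment
    new CatalogOne(this, `Catalog`);
  }
}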
Create the CloudFront Distribution
In order for CloudFront to access the files in the S3 bucket, we need to grant it read access. We do this by creating an `OriginAccessIdentity` (emphasis on IDENTITY) and using the bucket's `grantRead` method to grant the access.
const originAccessIdentity = new cloudfront.OriginAccessIdentity(
this,
`OriginAccessIdentity`
);
destinationBucket.grantRead(originAccessIdentity);
We then create the CloudFront Distribution with an S3 Origin that uses the identity.
const distribution = new cloudfront.Distribution(
  this,
  `EventCatalogDistribution`,
  {
    defaultRootObject: "index.html",
    defaultBehavior: {
      origin: new S3Origin(destinationBucket, { originAccessIdentity }),
    },
  }
);
For convenience, we'll create a CloudFormation Output that has the Catalog's CloudFront-hosted URL. This will be logged out as part of the deployment.
new CfnOutput(this, `CatalogUrl`, {
value: `https://${distribution.distributionDomainName}`,
});
Use a BucketDeployment to Upload Assets
Finally, we need something to actually host. We can use the S3 Deployment constructs to bundle our EventCatalog's build output and deploy it to S3.
const execOptions: ExecSyncOptions = {
  stdio: ["ignore", process.stderr, "inherit"],
};
const uiPath = join(__dirname, `../../../catalog/out`);
const bundle = Source.asset(uiPath, {
  bundling: {
    command: ["sh", "-c", 'echo "Not Used"'],
    image: DockerImage.fromRegistry("alpine"), // required but not used
    local: {
      tryBundle(outputDir: string) {
        execSync("cd catalog && npm i");
        execSync("cd catalog && npm run build");
        copySync(uiPath, outputDir, {
          ...execOptions,
          recursive: true,
        });
        return true;
      },
    },
  },
});
`Source.asset` can accept commands to locally bundle things. Normally it tries to use Docker to do the bundling, but it will accept a local override. The `command` and `image` are used as the fallback in case the `tryBundle` output returns falsy. Within `tryBundle` we can use any commands we need to create the output.
- `cd catalog && npm i` changes directory into our catalog folder and makes sure the dependencies are installed
- `cd catalog && npm run build` runs the build script for EventCatalog
- `copySync(...)` recursively copies the output folder of EventCatalog as an S3 Asset
Next we use that S3 Asset in a Bucket Deployment:
new BucketDeployment(this, `DeployCatalog`, {
destinationBucket,
distribution,
sources: [bundle],
prune: true,
memoryLimit: 1024,
});
Here, we specify our UI Bucket, CloudFront distribution and S3 Asset. We include `prune: true` to ensure old versions of the static site's assets get removed on subsequent deployments, and we bump the memory limit of the BucketDeployment lambda so it's a little faster. In the background this `BucketDeployment` construct uses a lambda to do the S3 upload. By setting the `memoryLimit` we're setting the memory of that lambda.
💡 If you use BucketDeployments for other things and run into issues with slowness or failures... try increasing the memory. The default memory is only 128 MB.
If we deploy our EventCatalog now and go to the Catalog's output URL in the deployment log... we'll see our EventCatalog!
EventCatalog includes some Examples built-in! BUT if we reload any page, we'll get a `NoSuchKey` error 😱
We get this because the request goes to S3 for the raw path (e.g. `/events/`) and nothing knows to append `index.html` to the end of it. For Single Page Apps that use UI frameworks like React or Angular this is less of a problem, because you can set a default route that serves the root `index.html` and let the framework handle the path. But this isn't a single page app. It's a static site!
Fixing the lack of CloudFront URL rewrites
We can use a Lambda@Edge function to do this rewrite for us, and CloudFront has an example of this approach. Using CDK, I'll create an Edge Function to do the rewriting. First we need the lambda:
const edgeFn = new cloudfront.experimental.EdgeFunction(
this,
`EdgeRedirect`,
{
code: Code.fromInline(
'"use strict";var n=Object.defineProperty;var u=Object.getOwnPropertyDescriptor;var c=Object.getOwnPropertyNames;var d=Object.prototype.hasOwnProperty;var a=(e,r)=>{for(var i in r)n(e,i,{get:r[i],enumerable:!0})},o=(e,r,i,s)=>{if(r&&typeof r=="object"||typeof r=="function")for(let t of c(r))!d.call(e,t)&&t!==i&&n(e,t,{get:()=>r[t],enumerable:!(s=u(r,t))||s.enumerable});return e};var f=e=>o(n({},"__esModule",{value:!0}),e);var l={};a(l,{handler:()=>h});module.exports=f(l);var h=async e=>{let r=e.Records[0].cf.request;return r.uri!=="/"&&(r.uri.endsWith("/")||r.uri.lastIndexOf(".")<r.uri.lastIndexOf("/"))&&(r.uri=r.uri.concat(`${r.uri.endsWith("/")?"":"/"}index.html`)),r};0&&(module.exports={handler});'
),
handler: "index.handler",
runtime: Runtime.NODEJS_16_X,
logRetention: RetentionDays.ONE_DAY,
}
);
Since Edge Lambda Functions don't support automatic bundling with esbuild like regular lambdas do, we just use the `Code.fromInline` method to upload the (pre-bundled) inline code as our Lambda source.
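For readability, here is roughly what that minified handler does, rewritten as an un-minified sketch (the types from `@types/aws-lambda` are an assumption for illustration; the code that actually ships is the inline string above):

import { CloudFrontRequest, CloudFrontRequestEvent } from "aws-lambda";

export const handler = async (
  event: CloudFrontRequestEvent
): Promise<CloudFrontRequest> => {
  const request = event.Records[0].cf.request;
  // Leave the root path alone – CloudFront's defaultRootObject already handles "/"
  if (request.uri !== "/") {
    const endsWithSlash = request.uri.endsWith("/");
    // No "." after the last "/" means the URI looks like a directory, not a file
    const looksLikeDirectory =
      request.uri.lastIndexOf(".") < request.uri.lastIndexOf("/");
    if (endsWithSlash || looksLikeDirectory) {
      // Rewrite /events/ (or /events) to /events/index.html so S3 finds the object
      request.uri = `${request.uri}${endsWithSlash ? "" : "/"}index.html`;
    }
  }
  return request;
};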
Next, we can update our CloudFront Distribution's props to include the edge function:
{
  defaultRootObject: "index.html",
  defaultBehavior: {
    origin: new S3Origin(destinationBucket, { originAccessIdentity }),
    viewerProtocolPolicy: ViewerProtocolPolicy.REDIRECT_TO_HTTPS,
    edgeLambdas: [
      {
        functionVersion: edgeFn.currentVersion,
        eventType: cloudfront.LambdaEdgeEventType.VIEWER_REQUEST,
      },
    ],
  },
}
⚡️ Are you making an internal (private) documentation page? See my last post on how to Protect a Static Site with Auth0 Using Lambda@Edge and CloudFront
⚠️ When you use Lambda@Edge functions... you need to specify the region in your stack. If you don't, you'll get an error like `Error: stacks which use EdgeFunctions must have an explicitly set region` when you try to deploy. Set your region like this:
new BlogDevCatalogStack(app, 'BlogDevCatalogWatcherStack', {
  env: {
    region: process.env.CDK_DEFAULT_REGION,
    account: process.env.CDK_DEFAULT_ACCOUNT
  }
});
Now when we deploy, we can refresh our pages (or go directly to pages) without having `NoSuchKey` errors!
Deploying to a Custom Domain
That's great, but `<random cloudfront url>` is pretty boring. AWS hosts domains too! If you purchased a domain and have a hosted zone set up, you can have CloudFront use it.
First we look up the HostedZone by the domain name.
const domainName = `docs.${hostDomain}`;
const hostedZone = HostedZone.fromLookup(this, `UIZone`, {
domainName: hostDomain,
});
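One thing to note: `hostDomain` isn't defined in the snippet above. One way to provide it is as a construct prop; the prop shape and example value below are assumptions for illustration, not necessarily the exact repo code:

import { Construct } from "constructs";

// Assumed prop shape: the construct needs to know which apex domain to publish under
export interface CatalogProps {
  hostDomain: string; // e.g. "martz.dev" for a catalog served at docs.martz.dev
}

export class CatalogOne extends Construct {
  constructor(scope: Construct, id: string, props: CatalogProps) {
    super(scope, id);
    const { hostDomain } = props; // used for the hosted zone lookup and certificate below
    // ...bucket, distribution, hosted zone lookup, certificate, and A record from this post
  }
}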
Then we create a DNS-validated certificate (so we can use HTTPS):
const certificate = new DnsValidatedCertificate(this, `EventCatalogCert`, {
domainName,
hostedZone,
});
Then we pass these in to our CloudFront Distribution props:
{
defaultRootObject: "index.html",
certificate, // <--
domainNames: [domainName], // <--
// ...
}
And finally, we create an `ARecord` whose target is our CloudFront Distribution:
new ARecord(this, `ARecord`, {
zone: hostedZone,
recordName: domainName,
target: RecordTarget.fromAlias(new CloudFrontTarget(distribution)),
});
Putting it all together, I can host my personal documentation at docs.martz.dev!
⚠️ When using hosted zones... you need to specify the account in your stack. If you don't, you'll get an error like `Error: Cannot retrieve value from context provider hosted-zone since account/region are not specified at the stack level.` when you try to deploy. Set your account like this:
new BlogDevCatalogStack(app, 'BlogDevCatalogWatcherStack', {
  env: {
    region: process.env.CDK_DEFAULT_REGION,
    account: process.env.CDK_DEFAULT_ACCOUNT
  }
});
What's Next?
Hosting a static site is great, but we haven't even scratched the surface of Event Driven Documentation yet. In parts 2 and 3 we'll automatically fetch API Gateway OpenAPI specs + EventBridge Event Schemas and bundle them into our EventCatalog!
🙌 If anything wasn't clear or if you want to be notified on when I post parts 2 and 3... feel free to hit me up on Twitter @martzcodes.