Bryce Dorn
Using Workers KV to build an edge cached blog 🌍

Last time I covered Cloudflare Workers I built a dev.to proxy to host the articles I write here on my personal site:

It was a nice project but had one major flaw: performance. Cloudflare Workers boast 0ms cold starts and low latency, yet my little blog felt sluggish. So what did I miss?

A global issue 🗺

Since the Worker's /post route hits the dev.to API directly, there's no caching involved. Caching is one of Cloudflare's primary features, so I assumed some layer would be baked in, but there's nothing of the sort.

Even though the blog worker is globally distributed, because it hits a (central) API for each request the turnaround time makes running at the edge pointless! This is especially the case for me as I live in Europe and the round-trip time to load the page is palpable.

If you haven't heard the term 'edge' before, it means a distributed network of servers that are geographically close to end users, enabling lower latency and faster load times. In this case, a request coming from Europe would hit a different server than one coming from the US but the content is the same.

I recently attended JSWorld Amsterdam and heard a talk that mentioned Workers KV which sounded exactly like the caching layer that would boost performance while still providing benefits of being at the edge. (And thankfully the free tier is more than sufficient for my needs.) This hypothesis was quickly verified by replacing the API call with a KV getter, decreasing time to first byte (server response delay) by ~10x!

From: 🐌 (screenshot: slower TTFB)

To: 🐆 (screenshot: faster TTFB)

Setting up KV 🗄

The steps to do this are pretty simple. Using the wrangler CLI, create a namespace bound to a variable name:

```shell
$ wrangler kv:namespace create "POSTS"
```

And add it to wrangler.toml:

```toml
kv_namespaces = [
  { binding = "POSTS", id = "asdfjkl12345" }
]
```
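With the binding in place, the `POSTS` variable exists as a global inside the Worker. If you're writing TypeScript, a small declaration file keeps the compiler aware of it; this is a sketch assuming the `KVNamespace` type comes from the `@cloudflare/workers-types` package:

```typescript
// bindings.d.ts: declare the KV binding from wrangler.toml as a global
// (assumes @cloudflare/workers-types is installed for the KVNamespace type)
declare global {
  const POSTS: KVNamespace
}

export {}
```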

That variable is then available in your Worker, and you can call get/put on it to retrieve or store data. With this in place, I added a function to fetch post data, store it in the cache, and track that it's been cached:

```typescript
export async function updateEdgeCacheForPost(id: number) {
  // Fetch the post from the dev.to API and store it under its slug
  const post = await getPost(id)
  await POSTS.put(post.slug, JSON.stringify(post))

  // Mark the post as cached in the stored index so links render as /:slug
  const posts = await getCachedPosts()
  const cachedPost = posts.find(p => p.id === id)
  if (cachedPost) {
    cachedPost.cached = true
    await POSTS.put('INDEX', JSON.stringify(posts))
  }
}
```

Now /:slug routes can hit this cache directly:

```typescript
export async function getCachedPost(slug: string): Promise<PostDetailType | null> {
  // KV returns null on a cache miss, so guard before parsing
  const response = await POSTS.get(slug)
  return response ? JSON.parse(response) : null
}
```

Then I replaced the /post/:id route with a permanent redirect to the cached version of the post:

```typescript
app.get('/post/:id', async (c) => {
  const id = Number(c.req.param('id'))
  const slug = await getCachedSlugById(id)
  // 301 so browsers and crawlers remember the canonical /:slug URL
  return c.redirect(`/${slug}`, 301)
})
```

Subsequent index renders pick up on the cached flag and render the link as /:slug, avoiding this redirect. I also added an /update route (guarded by a password matching an environment variable) that refreshes the index of posts.
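The password guard can be sketched framework-free with the standard Request/Response types; `handleUpdate` and the response messages here are illustrative, not the repo's actual code:

```typescript
// Sketch of a password-guarded /update handler using only standard web types
// (handleUpdate and the messages are illustrative, not the repo's actual code)
async function handleUpdate(url: URL, expectedPassword: string): Promise<Response> {
  // Reject requests whose ?password= doesn't match the configured secret
  if (url.searchParams.get('password') !== expectedPassword) {
    return new Response('Unauthorized', { status: 401 })
  }
  // ...refresh the cached index of posts here...
  return new Response('Index updated', { status: 200 })
}
```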

Note: since the dev.to API rate-limits requests (roughly one per second) and the free Workers plan caps CPU time at 10ms per request, it's not possible to cache all the data in a single request. So posts are cached on demand rather than in a batch. But once a post is cached it doesn't need to be fetched again!
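The resulting on-demand pattern (check KV first, hit the API only on a miss) can be sketched like this; the `KVStore` interface and in-memory stand-in are illustrative, with the real `POSTS` binding playing that role in the Worker:

```typescript
// Minimal sketch of on-demand caching; KVStore, memoryKV and fetchPost are
// illustrative stand-ins for the real POSTS binding and dev.to API call
interface KVStore {
  get(key: string): Promise<string | null>
  put(key: string, value: string): Promise<void>
}

// In-memory KV stand-in so the pattern can be exercised outside Cloudflare
function memoryKV(): KVStore {
  const data = new Map<string, string>()
  return {
    async get(key) { return data.get(key) ?? null },
    async put(key, value) { data.set(key, value) },
  }
}

// Serve from the cache when possible; otherwise fetch once and store the result
async function getOrCache(
  kv: KVStore,
  slug: string,
  fetchPost: (slug: string) => Promise<{ slug: string; title: string }>
) {
  const cached = await kv.get(slug)
  if (cached) return JSON.parse(cached)
  const post = await fetchPost(slug) // hits the rate-limited API only on a miss
  await kv.put(slug, JSON.stringify(post))
  return post
}
```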


And now I can officially say it's cached at the edge! The index needs to be refreshed when I write a new post but that's as simple as loading the /update route and waiting a few seconds. Super happy with the result!

You can check out the code here (and fork it to deploy your own):

brycedorn / blog

Using Cloudflare Workers to proxy dev.to posts and cache at edge

A project built using Hono for Cloudflare Workers. Uses KV for edge caching and thumbhash to generate image placeholders.

Fork and deploy your own for free!

Development

Install dependencies:

```shell
npm install
```

Set up environment:

```shell
cp .env.example .env
```

Start via miniflare:

```shell
npm start
```

Updating cache

This project uses KV as a distributed store for article data and image placeholders.

To populate the cache, open the /update endpoint in your browser, passing the password from your environment as a query parameter, e.g. /update?password=test.

Deploying your own blog

Fork this repository & set your dev.to username in consts.ts and a password in actions secrets.

Then generate an API token and set CF_API_TOKEN and CF_ACCOUNT_ID in actions secrets as well. The deploy action will automatically deploy via Wrangler.






Top comments (2)

Gift Egwuenu

Hey Bryce! Great post. I'm happy to see KV helped you with massive performance for your application.

P.S. I gave that talk at the conference, always happy to see people go on to build cool stuff after :)

Bryce Dorn

Hey Gift, thank you & your talk was great!