How I Built An Incomplete CMS

I began building a blog application for a site that I may or may not finish. The bulk of the project is based on the blog-starter-typescript example in the Next.js GitHub repo. That example loads markdown files from a folder in the root directory and uses remark and gray-matter to convert the markdown into HTML and to read each file's metadata. The example code for these functions is located in their lib folder as api.ts and markdownToHtml.ts.
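For reference, gray-matter is what splits each markdown file into its front matter metadata and its body. A minimal sketch (the front matter fields here are just examples):

import matter from "gray-matter";

const raw = `---
title: "My First Post"
excerpt: "A short description"
---
# Hello world`;

// matter() separates the YAML front matter (data) from the markdown body (content).
const { data, content } = matter(raw);
console.log(data.title); // "My First Post"
console.log(content.trim()); // "# Hello world"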

The system they used works very well; however, if you plan on creating many posts and incorporating many images, that is when it begins to break down. Since all of the markdown files and images are stored in the root directory, the project grows perpetually, and hosting costs for packages that large are affected. That is why I went looking for a solution that would be lightweight and adaptable.

In my search for a solution I considered databases to store the content. I initially looked at a Postgres relational database, because that is the typical first choice, but maintaining flexibility in the kind of content included in a post (videos, images, etc.) was important to me. Next I considered non-relational databases, but ultimately opted for a third option.

AWS S3 buckets can hold a wide variety of objects and can be accessed through Node APIs. This seemed like the best option because I could scale the number of posts indefinitely without affecting the size of my project, and I could keep the already existing and tested markdownToHtml conversion. Markdown files can be stored in S3 buckets and their contents streamed into my existing functions. Before going any further, here are the dependencies from my package.json.

"dependencies": {
    "@aws-sdk/client-s3": "^3.18.0",
    "@fontsource/signika-negative": "^4.4.5",
    "@fontsource/source-code-pro": "^4.4.5",
    "gray-matter": "^4.0.3",
    "next": "10.2.3",
    "react": "17.0.2",
    "react-dom": "17.0.2",
    "rehype-document": "^5.1.0",
    "rehype-sanitize": "^4.0.0",
    "rehype-stringify": "^8.0.0",
    "remark": "^13.0.0",
    "remark-parse": "^9.0.0",
    "remark-rehype": "^8.1.0",
    "slug": "^5.0.1",
    "unified": "^9.2.1"
  }
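With the unified, remark, and rehype packages above, the markdownToHtml conversion carried over from the example looks roughly like this (a sketch; the exact plugin chain in my markdownToHtml.ts may differ):

import unified from "unified";
import remarkParse from "remark-parse";
import remarkRehype from "remark-rehype";
import rehypeSanitize from "rehype-sanitize";
import rehypeStringify from "rehype-stringify";

// Convert a markdown string (front matter already removed by gray-matter) into HTML.
export default async function markdownToHtml(markdown: string): Promise<string> {
  const file = await unified()
    .use(remarkParse) // markdown -> mdast
    .use(remarkRehype) // mdast -> hast
    .use(rehypeSanitize) // drop unsafe HTML
    .use(rehypeStringify) // hast -> HTML string
    .process(markdown);
  return String(file);
}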

I created a new file called awsHandler.ts to hold the AWS-related functions. My index.tsx file, typically found in the src/pages/ folder of a Next.js application, lists the excerpts and descriptions of the blog posts. Initially this was done by reading all the markdown files in the root directory, but it now reads all the objects in the S3 bucket.
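awsHandler.ts needs an S3 client, a bucket name, and a return type before the functions below will run; the top of mine looks roughly like this (the environment variable names are examples, not fixed):

import {
  S3Client,
  ListObjectsV2Command,
  GetObjectCommand,
  GetObjectCommandOutput,
} from "@aws-sdk/client-s3";

// Bucket name and region come from environment variables in my setup.
const BUCKET_NAME = process.env.AWS_BUCKET_NAME as string;
const client = new S3Client({ region: process.env.AWS_REGION });

// Shape returned by requestAllBucketObjects below.
interface RequestAllBucketObjectsOutput {
  allBucketObjects: GetObjectCommandOutput[];
  contentKeys: string[];
}

requestAllBucketObjects then lists the objects in the bucket and fetches each one: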

export async function requestAllBucketObjects(): Promise<RequestAllBucketObjectsOutput> {
  // List up to 10 objects (markdown posts) in the bucket.
  const command = new ListObjectsV2Command({ Bucket: BUCKET_NAME, MaxKeys: 10 });
  const objectsOutput = await client.send(command);

  let promises: Promise<GetObjectCommandOutput>[] = [];
  let contentKeys: string[] = [];

  // Fetch every listed object in parallel, keeping its key alongside it.
  objectsOutput.Contents?.forEach((content) => {
    if (content.Key) {
      promises.push(requestSingleBucketObject(content.Key));
      contentKeys.push(content.Key);
    } else {
      return; // skip entries without a key
    }
  });
  let allBucketObjects: GetObjectCommandOutput[] = await Promise.all(promises);
  return { allBucketObjects: allBucketObjects, contentKeys: contentKeys };
}
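requestSingleBucketObject, referenced above, is a thin wrapper around GetObjectCommand; using the client and imports from the setup earlier, mine looks roughly like this:

// Fetch a single object (one markdown post) from the bucket by its key.
async function requestSingleBucketObject(key: string): Promise<GetObjectCommandOutput> {
  const command = new GetObjectCommand({ Bucket: BUCKET_NAME, Key: key });
  return client.send(command);
}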

Notice that I return an object with allBucketObjects and contentKeys. This is where the incomplete aspect comes in. Using the SDK, I can only look up a specific bucket object by its key, in the form {BUCKET_NAME}/{FILE_PATH}.{FILE_TYPE}. I use these keys to request the metadata that I previously grabbed using gray-matter.

export async function getPosts(fields: string[] = []) {
  let posts: Items[] = [];
  const { allBucketObjects, contentKeys } = await requestAllBucketObjects();
  let promises: Promise<string>[] = [];

  // Read each object's body into a markdown string, in parallel.
  allBucketObjects.forEach((output) => {
    promises.push(getMarkdownBody(output));
  });
  const markdowns = await Promise.all(promises);

  // Pair each markdown string back up with its key and keep only the requested fields.
  markdowns.forEach((markdown, index) => {
    const key = contentKeys[index];
    posts.push(organizePostItems(markdown, key, fields));
  });

  return posts;
}
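getMarkdownBody and organizePostItems are the glue between the S3 responses and gray-matter. The real versions live elsewhere in my lib code; these are sketches of how they fit together, and the Items shape is an approximation:

import matter from "gray-matter";
import { Readable } from "stream";
import { GetObjectCommandOutput } from "@aws-sdk/client-s3";

// Approximate shape of a post item; the real Items type is defined elsewhere in my project.
type Items = { [key: string]: string };

// Read the object's body stream into a markdown string (streamToString is shown below).
async function getMarkdownBody(output: GetObjectCommandOutput): Promise<string> {
  return streamToString(output.Body as Readable);
}

// Split front matter from content and keep only the requested fields, much like the
// getPostBySlug helper in the original blog-starter-typescript example.
function organizePostItems(markdown: string, key: string, fields: string[]): Items {
  const { data, content } = matter(markdown);
  const items: Items = {};

  fields.forEach((field) => {
    if (field === "slug") items[field] = key;
    if (field === "content") items[field] = content;
    if (typeof data[field] !== "undefined") items[field] = data[field];
  });

  return items;
}

In index.tsx, getPosts can then be called from getStaticProps with whichever fields the post previews need.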

Interestingly, requesting bucket objects from S3 only gave me Node streams, which required me to make sure the body is a Readable and then append each data chunk to a string. I need more practice with streams, because ideally I would convert the file as it is being read instead of collecting it into a string first, as the following function does, which defeats the purpose of the stream.

import { Readable } from "stream";

// Collect the chunks of a Readable stream into a single string.
async function streamToString(readable: Readable): Promise<string> {
  let data = "";
  for await (const chunk of readable) {
    data += chunk;
  }
  return data;
}

So why call this a CMS? The term is used loosely here: AWS S3 is my storage, and a few general functions control requesting, handling, and presenting the incoming data.
