Michael Rowlinson

🤯 Private S3 HTTP Server

What are we solving here?

In my AWS adventures, I've come across use cases where it'd be awesome to have easy internal access to some files in AWS without making something public... or creating an S3 bucket behind a CloudFront behind a WAF with a whitelist rule...

Anyway, http-server is an awesome package that does exactly this for local file shares. So I figured I'd fork the code base and replace all the file-server bits with S3 calls.

Opening the hood of http-server, I realized that, for my specific use case, most of the code covered features I didn't need. So I opted to create s3-http-server from scratch, with http-server as inspiration.

What follows is an explanation of the interesting bits.

A look at the 🥩🥔 code

Firstly, the stack used for this solution:

  • Node.js - JavaScript runtime
  • Express - HTTP server
  • Pug - template engine

The most important features are listing, downloading, and uploading objects.

Listing Objects

The code snippet for this is straightforward, but ensuring you return only the objects AND prefixes at a given level is a little obscure. Below is an excerpt from the Express route's async handler:

const AWS = require("aws-sdk");
const s3 = new AWS.S3();
const Bucket = 'MyBucket'
const Prefix = ''

...
    const s3Res = await s3
      .listObjectsV2({
        Bucket,
        Prefix,
        Delimiter: "/",
        MaxKeys: 1000,
      })
      .promise();
    const data = [
      ...s3Res.CommonPrefixes.map(({ Prefix }) => ({
        Key: Prefix,
        isDir: true,
      })),
      ...s3Res.Contents.filter((c) => c.Key !== Prefix).map(({ Key }) => ({
        Key,
        isDir: false,
      })),
    ];
...


The first part returns a list of S3 objects at the given prefix. Note that in a large bucket you would also want to handle pagination of the results (a sketch of that follows below).

The shenanigans creating the data variable are the good part. If, for example, you call listObjectsV2 with a Prefix of "" (the root of the bucket), you only get objects back in the response's Contents property. To get the prefixes at the root (or anywhere else) you'll need to look at the CommonPrefixes property.
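As a hedged sketch of the pagination mentioned above (this loop isn't part of the package; it just follows the SDK's NextContinuationToken until the listing is exhausted):

const AWS = require("aws-sdk");
const s3 = new AWS.S3();

// Walk every page of a listing, collecting object keys and "directory" prefixes.
async function listAll(Bucket, Prefix) {
  const objects = [];
  const prefixes = [];
  let ContinuationToken;
  do {
    const params = { Bucket, Prefix, Delimiter: "/", MaxKeys: 1000 };
    if (ContinuationToken) params.ContinuationToken = ContinuationToken;
    const page = await s3.listObjectsV2(params).promise();
    prefixes.push(...(page.CommonPrefixes || []).map((p) => p.Prefix));
    objects.push(...(page.Contents || []).map((c) => c.Key));
    ContinuationToken = page.NextContinuationToken;
  } while (ContinuationToken);
  return { objects, prefixes };
}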

Downloading Objects

Downloading an object from S3 is a fun implementation as well. Here is an abridged excerpt of that code:

...
      const readStream = new stream.PassThrough();
      const fileName = ...
      res.set("Content-disposition", "attachment; filename=" + fileName);
      s3.getObject({
        Bucket: bucket,
        Key: decodeURI(req.originalUrl.substring(1)),
      })
        .on("error", (err) => {
          console.log(err);
        })
        .on("httpData", (chunk) => {
          readStream.push(chunk);
        })
        .on("httpDone", () => {
          readStream.end();
        })
        .send();
      readStream.pipe(res);
...

This works by creating a pass-through stream. We then call getObject and register a listener for the httpData event. Each time the listener fires, the current chunk is pushed into the stream. Finally, we pipe the stream to the Express response.
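For comparison (and not what the package does), the v2 SDK's request object also exposes createReadStream(), which achieves the same streaming without the manual event wiring. A minimal sketch, assuming the same s3, bucket, req, and res variables as above:

// Stream the object straight to the response via the request's read stream.
const key = decodeURI(req.originalUrl.substring(1));
res.set("Content-Disposition", 'attachment; filename="' + key.split("/").pop() + '"');
s3.getObject({ Bucket: bucket, Key: key })
  .createReadStream()
  .on("error", (err) => {
    console.log(err);
    res.end();
  })
  .pipe(res);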

Uploading Objects

The web client also allows uploading objects into the current S3 prefix.

...
    const form = new formidable.IncomingForm();
    form.parse(req, async function (err, fields, files) {
      const { originalFilename, filepath } = files.filetoupload;
      const rawData = fs.readFileSync(filepath);
      await s3
        .upload({
          Bucket: bucket,
          Key: req.originalUrl.substring(1) + originalFilename,
          Body: rawData,
        })
        .promise();
...

We leverage the formidable package to simplify the file upload. Simply construct an IncomingForm and call parse on it. The callback passed to parse is where the magic happens: we get the local temp path of the uploaded file along with its original name, read the file into memory, build a key from the current prefix and the supplied file name, and pass it all to s3.upload.
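Since s3.upload accepts a readable stream as the Body, the in-memory read could be skipped for larger files. A minimal sketch, assuming the same variables inside the form.parse callback:

const fs = require("fs");

// Inside form.parse: stream formidable's temp file to S3 instead of buffering it.
const { originalFilename, filepath } = files.filetoupload;
await s3
  .upload({
    Bucket: bucket,
    Key: req.originalUrl.substring(1) + originalFilename,
    Body: fs.createReadStream(filepath),
  })
  .promise();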

Using the npm package

Here's the s3-http-server repo if you'd like to look at the code base in full.

Install the package with npm

npm install s3-http-server --global

Make sure you have AWS credentials available in the environment.
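For example (assuming the standard AWS SDK environment variables; any credential source the SDK supports works just as well):

export AWS_ACCESS_KEY_ID=<your-access-key-id>
export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>
export AWS_REGION=us-east-1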

Run the following to fire up the server

s3-http-server my-bucket-name

Navigate to http://localhost:8080 and start browsing your bucket.

Wrap up

Thanks for reading. There are a few libraries in the wild that serve a similar purpose. This was my take on it. I'll add features like deletion of objects and website serving in the future. Let me know if you have any suggestions.

Peace ✌️

Top comments (1)

huncyrus • Edited

Could you add details for the first part also? I mean

AWS without making something public... or creating an S3 bucket behind a CloudFront behind a WAF with a whitelist rule...

this part could get some detail, screenshots and so on? And some examples of the whitelist and WAF?