
Building a CLI tool to deploy static websites

Using S3 static content hosting is arguably one of the cheapest and simplest ways to host a static website. The problem starts when you have to repeatedly create buckets through the AWS console, set static website hosting policies, upload files, and finally make them public, every single time. This repetitive process gets even more annoying when you need to upload only a few files specific to a site, rather than all of them.

With this in mind, I searched for tools that propose to solve some of these problems. I found a few, but none of them simple enough and focused on the tasks that matter: creating the bucket with static hosting policies, and uploading the files. That was when I had the idea to create a simple command-line interface, light and easy to install, to manage the deployment of this kind of website to S3.

Here I will present, step by step, how to create a simple tool that helps us deploy static sites using only Node.js.

If you just want to use the app, you can run in your terminal:

npm install -g theros

Visit https://www.npmjs.com/package/theros to see the complete documentation.

Here is the link with the complete implementation on GitHub. Let's go to the code...

Command structure

We want to be able to perform the basic operations we have just described in the terminal using simple commands.

To create a bucket:

theros create --bucket <bucket_name>

To deploy all files:

theros deploy --bucket <bucket_name>

Theros is the name of our npm package. Don't worry, we will publish it at the end of this post.

The library we are going to use to provide these commands is commander.js.

Having already created a brand new npm project with npm init, we need to install commander.js by running npm install --save commander. Let's look at the basic structure of the two commands (create bucket and deploy):

#!/usr/bin/env node
const program = require('commander')

const awsCredentials = {
  region: 'us-east-1',
  accessKeyId: '',
  secretAccessKey: ''
}

const bucketParams = {
  Bucket : ''
}

program
  .command('create')
  .option('-b, --bucket <s>', 'Bucket name', setBucket)
  .option('-k, --key <s>', 'AWS Key', setKey)
  .option('-s, --secret <s>', 'AWS Secret', setSecret)
  .action(function () {
    console.log('Creating bucket')
  })

program
  .command('deploy')
  .option('-b, --bucket <s>', 'Bucket name', setBucket)
  .option('-k, --key <s>', 'AWS Key', setKey)
  .option('-s, --secret <s>', 'AWS Secret', setSecret)
  .action(function () {
    console.log('Performing deploy')
  })

function setKey(val) {
  awsCredentials.accessKeyId = val
}

function setSecret(val) {
  awsCredentials.secretAccessKey = val
}

function setBucket(val) {
  bucketParams.Bucket = val
}

program.parse(process.argv)

Let's start by understanding the first line: #!/usr/bin/env node. This is the line that tells Unix-like systems that our file should be run via the command line. Whenever you see this #! (hashbang or shebang), you can assume it is an executable file. Since our cli.js file will run whenever a user types theros in the command line, we need this line at the beginning of our cli.js file.
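You can check that the shebang works by making the file executable and invoking it directly, before any npm installation (the argument values here are just examples):

chmod +x cli.js
./cli.js create --bucket my_bucket --key my_key --secret my_secret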

The .command('create') function is the one that generates the create command. The .option('-b, --bucket <s>', 'Bucket name', setBucket) call specifies a parameter that we can use with the create command; it can be passed as --bucket or just -b. The last argument to .option() accepts another function, which in our case is executed to capture the parameter value typed by the user: setBucket(val).

The "deploy" command follows exactly the same structure.

The user needs to provide their Access Key ID and Secret Access Key to authorize our application to create or modify buckets and upload files to their account. You can find these credentials in the AWS console.

Here we are already able to capture the user input for both commands. To test it, just run in the terminal:

node cli.js create --bucket my_bucket --key my_key --secret my_secret
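For now this should only print Creating bucket, since the action handlers just log a message; we wire in the real S3 calls next.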

Creating the bucket

Now we need to effectively use the AWS SDK to perform operations on the user account. To do so, first we need to install the SDK: npm install --save aws-sdk.

Let's create a new s3Services.js file containing the operations: authenticate, create bucket and upload:

const AWS = require('aws-sdk')

function setAwsCredentials(awsCredentials) {
  AWS.config.update(awsCredentials)
}

function createBucket(bucketParams, staticHostParams) {
  const s3 = new AWS.S3()
  s3.createBucket(bucketParams, function(err, data) {
    if (err) {
      console.log('Error creating bucket: ', err)
    } else {
      console.log('Successfully created bucket at ', data.Location)
      setPoliciesForWebSiteHosting(staticHostParams)
    }
  });
}

function setPoliciesForWebSiteHosting(staticHostParams) {
  const s3 = new AWS.S3()
  s3.putBucketWebsite(staticHostParams, function(err, data) {
    if (err) {
      console.log('Error defining policies: ', err)
    } else {
      console.log('Successfully defined static hosting policies.')
    }
  });
}

module.exports = {
  setAwsCredentials,
  createBucket
};

The setAwsCredentials() function updates the credentials of the AWS object.

The createBucket() function creates the bucket with the specified name and, if the operation succeeds, invokes the setPoliciesForWebSiteHosting() function, which updates the policies of the newly created bucket, configuring it to host static sites.
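For reference, these are the same two operations you would otherwise perform by hand. With the AWS CLI installed, the rough equivalent (using a made-up bucket name) would be:

aws s3api create-bucket --bucket my-bucket --region us-east-1
aws s3 website s3://my-bucket/ --index-document index.html --error-document error.html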

Let's look at our cli.js file after implementing the call of each bucket creation function:

#!/usr/bin/env node
const program = require('commander')
const s3Services = require('./app/s3Services')

const awsCredentials = {
  region: 'us-east-1',
  accessKeyId: '',
  secretAccessKey: ''
}

const bucketParams = {
  Bucket : ''
}

const staticHostParams = {
  Bucket: '',
  WebsiteConfiguration: {
    ErrorDocument: {
      Key: 'error.html'
    },
    IndexDocument: {
      Suffix: 'index.html'
    },
  }
}

program
  .command('create')
  .option('-b, --bucket <s>', 'Bucket name', setBucket)
  .option('-k, --key <s>', 'AWS Key', setKey)
  .option('-s, --secret <s>', 'AWS Secret', setSecret)
  .action(function () {
    s3Services.setAwsCredentials(awsCredentials)

    staticHostParams.Bucket = bucketParams.Bucket
    s3Services.createBucket(bucketParams, staticHostParams)
  })

// hidden deploy command

function setKey(val) {
  awsCredentials.accessKeyId = val
}

function setSecret(val) {
  awsCredentials.secretAccessKey = val
}

function setBucket(val) {
  bucketParams.Bucket = val
}

program.parse(process.argv)

Deploying the website

Uploading our files involves two distinct steps: first we have to read all the files in the current directory, and after that, upload them using the AWS SDK.

Interacting with the File System

We will use the native Node.js fs module to recursively walk the current directory and its subdirectories (the traversal is synchronous, via readdirSync and statSync) and read every file found.

We also need to capture the MIME type of each file we read, so that when we upload it, the Content-Type field of the object's metadata is filled in correctly. When uploading an index.html file, for example, the correct Content-Type is text/html. To do so, let's use the node-mime library.

To install it run: npm install --save mime.
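A quick sanity check of what the library returns (the file names here are just examples):

const mime = require('mime')

console.log(mime.getType('index.html'))     // 'text/html'
console.log(mime.getType('css/styles.css')) // 'text/css'
console.log(mime.getType('logo.png'))       // 'image/png'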

Just like we did with the interactions with S3, let's now create a new file containing the files reading operations. We will call it filesystem.js:

const fs = require('fs')
const path = require('path')
const mime = require('mime')

function getAllFilesFrom(currentDirPath, callback) {
  fs.readdirSync(currentDirPath).forEach(function (name) {
    const filePath = path.join(currentDirPath, name)
    const stat = fs.statSync(filePath)

    if (stat.isFile()) {
      fs.readFile(filePath, function (err, data) {
        if (err) {
          throw err
        }
        callback(filePath, data)
      })
    } else if (stat.isDirectory()) {
      getAllFilesFrom(filePath, callback)
    }
  });
}

function getMimeType(filePath) {
  return mime.getType(filePath)
}

module.exports = {
  getAllFilesFrom,
  getMimeType
};

The getAllFilesFrom() function returns, via callback, all the files found in the directory specified in the parameter, as well as in its subdirectories. For each entry, it verifies whether it is actually a file, if (stat.isFile()); if so, the function returns the full file path and its content via callback(filePath, data).

If the fetched entry is a directory instead, else if (stat.isDirectory()), the function calls itself recursively, so that the files in that subdirectory are also read and returned.

Finally, the getMimeType() function has the simple goal of returning the MIME type corresponding to the given file path.
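To make the contract concrete, here is a minimal sketch of how the module can be exercised on its own (the directory path is just an example):

const filesystem = require('./filesystem')

filesystem.getAllFilesFrom('.', function (filePath, data) {
  // data is a Buffer with the raw file content
  console.log(filePath, filesystem.getMimeType(filePath), data.length, 'bytes')
})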

Performing the upload

Now that we can read the files of a directory and get their paths and types, we can implement the upload function in our s3Services.js:

const AWS = require('aws-sdk')
const filesystem = require('./filesystem')

function setAwsCredentials(awsCredentials) {
  // updates credentials
}

function createBucket(bucketParams, staticHostParams) {
  // creates bucket
}

function uploadObject(bucket, filePath, data) {
  const s3 = new AWS.S3()
  s3.putObject({
    Bucket: bucket,
    Key: filePath,
    Body: data,
    ACL: 'public-read',
    ContentType: filesystem.getMimeType(filePath)
  }, function(error, dataS3) {
    if (error) {
      return console.log('There was an error uploading your file: ', error.message)
    }
    console.log('Successfully uploaded file: ', filePath)
  });
}

function setPoliciesForWebSiteHosting(staticHostParams) {
  // updates bucket policies
}

module.exports = {
  setAwsCredentials,
  createBucket,
  uploadObject,
};

The uploadObject() function is fairly simple. We call the s3.putObject method with the bucket name, the file name (Key), the Body (the file content in bytes), the ACL (access permission, public-read so visitors can fetch the files), and finally the ContentType.

If the upload fails for some reason, we simply return an error message to the user.
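A minimal, hypothetical call for a single file would look like this (the bucket name and credentials are placeholders, and the require path assumes you run it from the project root):

const fs = require('fs')
const s3Services = require('./app/s3Services')

s3Services.setAwsCredentials({
  region: 'us-east-1',
  accessKeyId: 'YOUR_KEY',
  secretAccessKey: 'YOUR_SECRET'
})
s3Services.uploadObject('my-bucket', 'index.html', fs.readFileSync('index.html'))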

Putting it all together

Now that we have both the file reading and the upload code encapsulated, we can make the calls in our cli.js file:

#!/usr/bin/env node
const program = require('commander')
const s3Services = require('./app/s3Services')
const filesystem = require('./app/filesystem')

const awsCredentials = {
  region: 'us-east-1',
  accessKeyId: '',
  secretAccessKey: ''
}

const bucketParams = {
  Bucket : ''
}

const staticHostParams = {
  Bucket: '',
  WebsiteConfiguration: {
    ErrorDocument: {
      Key: 'error.html'
    },
    IndexDocument: {
      Suffix: 'index.html'
    },
  }
}

// hidden create command

program
  .command('deploy')
  .option('-b, --bucket <s>', 'Bucket name', setBucket)
  .option('-k, --key <s>', 'AWS Key', setKey)
  .option('-s, --secret <s>', 'AWS Secret', setSecret)
  .action(function () {
    s3Services.setAwsCredentials(awsCredentials)

    filesystem.getAllFilesFrom('.', function (filePath, data) {
      s3Services.uploadObject(bucketParams.Bucket, filePath, data)
    })
  })

function setKey(val) {
  awsCredentials.accessKeyId = val
}

function setSecret(val) {
  awsCredentials.secretAccessKey = val
}

function setBucket(val) {
  bucketParams.Bucket = val
}

program.parse(process.argv)

For each file read by the filesystem.getAllFilesFrom() function, we upload it using our s3Services.uploadObject() function. Note that the relative file path is used as the S3 Key, so the directory structure of the site is preserved in the bucket.

To test the deploy command, just run:

node cli.js deploy --bucket my_bucket --key my_key --secret my_secret

Publishing the package to the NPM repository

Now that we have the two basic features ready, we want to make them available to the world. We'll do this by publishing our CLI app as a node package on npm: https://www.npmjs.com/package/theros.

1. The first step is to create an account at https://www.npmjs.com/.

2. With your account created, we now need to log in to it using the npm client installed on the machine we are using. Your username, password, and email will be requested when the following command is executed in the terminal:

npm adduser

3. For the operating system to recognize our package as an application that runs in the terminal, we need to include the following piece of code in the package.json file:

"bin": {
  "theros": "cli.js"
}

The command name that runs our application can be anything; here I chose theros, pointing to the cli.js file.
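Before publishing, you can verify that the bin mapping works by linking the package globally. npm link is a standard npm command that symlinks your package into the global prefix, and the --help output is generated automatically by commander:

npm link
theros --help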

4. Now we just need to publish the package to our account by running:

npm publish --access=public

If you get an error while trying to publish, make sure the name you chose for the package is not already taken in the registry: https://www.npmjs.com/search?q=your_package.

If it already exists, you need to choose another one.

If the errors persist, compare your setup with my complete package.json file and make sure nothing is off.

Bonus

There are some cool extra features I have implemented, such as:

  • Ignoring specific files when using the deploy command, via the --ignore <list_of_files> parameter (a sketch of how such a flag could be wired up follows this list).

  • Pointing to a custom directory in order to deploy files stored somewhere other than the current directory, via the --root <directory_path> parameter.
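Here is a minimal sketch of how an --ignore flag could be wired up, following the same commander pattern used above. This is illustrative only; the actual theros implementation may differ:

// hypothetical sketch, not the actual theros code
let ignoredFiles = []

program
  .command('deploy')
  .option('-b, --bucket <s>', 'Bucket name', setBucket)
  .option('-i, --ignore <list>', 'Comma-separated files to skip', function (val) {
    ignoredFiles = val.split(',')
  })
  .action(function () {
    s3Services.setAwsCredentials(awsCredentials)
    filesystem.getAllFilesFrom('.', function (filePath, data) {
      if (ignoredFiles.indexOf(filePath) !== -1) return // skip ignored files
      s3Services.uploadObject(bucketParams.Bucket, filePath, data)
    })
  })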

And some improvements we can do, for example:

  • When creating a new bucket, it might be interesting for users to be able to create a new CloudFront distribution associated with it. This is a very common step for anyone deploying a static website on AWS, and it could easily be implemented. Check the GitHub issue.

  • Use a configuration file, such as a theros.yaml, containing authentication keys, a default root folder, and bucket names, to avoid typing the same things over and over (a sketch of loading such a file follows the sample below).

Sample file:

default:
  root: 'build/'

production:
  key: 'XXX'
  secret: 'XXX'
  bucket: 'theros.io'

development:
  key: 'XXX'
  secret: 'XXX'
  bucket: 'theros-dev.io'

Check the GitHub issue.
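If this feature were added, loading the file could be as simple as the following sketch using the js-yaml library (assuming npm install --save js-yaml; the config shape follows the sample above):

const fs = require('fs')
const yaml = require('js-yaml')

// hypothetical: read theros.yaml and pick the desired environment
const config = yaml.load(fs.readFileSync('theros.yaml', 'utf8'))
const env = config['production']

awsCredentials.accessKeyId = env.key
awsCredentials.secretAccessKey = env.secret
bucketParams.Bucket = env.bucket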

That's it!

The biggest difficulty I faced when building this simple application was dealing with files using the filesystem (fs) API. The functions are not intuitive at all, and the documentation of this API is not very good. I know it's not fair to put the blame on the tool, since Node was not originally intended for applications of this nature.

The main benchmark I used was an application called Stout, made by Cloudflare staff. They chose to build their CLI in Go, which seems pretty smart to me, since Go offers a much richer toolset for file manipulation than JavaScript does.

Personally, I have little experience with JavaScript and Node, so be sure to comment if you have any suggestions for code improvements or ideas for new features :)
