In this part, we'll be connecting a database to our project. As mentioned in Part #1, PostgreSQL will be used as the database. To set up PostgreSQL on your machine, you can follow this guide for Windows or this one for Linux.
We'll also be using an Object-Relational Mapper (ORM) to query the database - Sequelize, a promise-based Node.js ORM for Postgres, MySQL, MariaDB, SQLite and Microsoft SQL Server.
Object-Relational Mapping is a technique that lets you query and manipulate data in a database using an object-oriented pattern. This makes the database easier to work with, as we won't often be writing raw SQL statements but will instead interact with the database through objects.
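For example, instead of writing raw SQL, a query with Sequelize would look roughly like this. This is just a sketch to illustrate the idea; the User model and its status column are hypothetical, as we haven't defined any models yet.
// Instead of: SELECT * FROM "Users" WHERE status = 'active';
// we could write (assuming a hypothetical User model with a status column):
User.findAll({ where: { status: 'active' } })
  .then((activeUsers) => console.log(activeUsers));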
To begin, install sequelize, sequelize-cli and the driver for our database as dependencies;
$ yarn add sequelize sequelize-cli pg pg-hstore
To create an empty project, you will need to execute the init command;
$ npx sequelize-cli init
This will create the following folders:
- config, contains the config file, which tells the CLI how to connect with the database
- models, contains all models for your project
- migrations, contains all migration files
- seeders, contains all seed files
Let's move these folders into the src folder;
stacky
|-- node_modules
|-- src
|   |-- config
|   |-- migrations
|   |-- models
|   |-- seeders
|   |-- app.js
|-- package.json
|-- yarn.lock
Since we changed the location of the folders created by sequelize-cli, we need to inform sequelize of this change. To do this, create a .sequelizerc file at the root of our project.
stacky
|-- node_modules
|-- src
|-- .sequelizerc
|-- package.json
|-- yarn.lock
Copy the below into the newly created file.
const path = require('path');

module.exports = {
  "config": path.resolve('./src/config', 'config.json'),
  "models-path": path.resolve('./src/models'),
  "seeders-path": path.resolve('./src/seeders'),
  "migrations-path": path.resolve('./src/migrations')
};
With this config we are telling sequelize-cli to:
- Use src/config/config.json for config settings
- Use src/models as the models folder
- Use src/seeders as the seeders folder
- Use src/migrations as the migrations folder
Next, we need to tell sequelize how to connect to the database. To do that, open the config.json file in the src/config folder.
Rename the file to config.js. The reason, which we will see in a minute, is that we will need to run some code in this file to load our environment variables, which we can't do in a JSON file. Copy the content below into the file and modify it to match your PostgreSQL credentials.
module.exports = {
  development: {
    username: 'username',
    password: 'password',
    database: 'stacky',
    host: '127.0.0.1',
    dialect: 'postgres'
  },
  production: {
    username: 'username',
    password: 'password',
    database: 'stacky',
    host: '127.0.0.1',
    dialect: 'postgres'
  }
};
Edit the .sequelizerc file to inform sequelize of the file name change;
"config": path.resolve('./src/config', 'config.js'),
Next is to create our database. Run the command below in the terminal. This will create a database using the name supplied in the config.js file. Launch the pgAdmin tool and you will see the database created.
$ npx sequelize-cli db:create
NOTE: If you encounter the error below, ensure you are using the right credentials in the config.js file.
ERROR: password authentication failed for user "username"
Let's briefly take a look at the other folders created.
In the models folder, the index.js file requires the modules it needs. It then reads the current environment; if no environment is defined, it defaults to development. Next, it establishes a connection with the database, reads all the files in the models folder, imports them and adds them to the db object, applies relationships between models if they exist and finally exports the db object.
It's in this folder that we are going to be adding our model files.
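To give an idea of what such a file looks like, here is a minimal sketch of a hypothetical Post model in the format the generated index.js expects. We are not adding this file yet; the model name and fields are purely illustrative.
// src/models/post.js - a hypothetical example, not part of the project yet
module.exports = (sequelize, DataTypes) => {
  // Define a Post model with two illustrative columns
  const Post = sequelize.define('Post', {
    title: DataTypes.STRING,
    body: DataTypes.TEXT
  }, {});
  // Relationships with other models would be declared here
  Post.associate = (models) => {
    // e.g. Post.belongsTo(models.User)
  };
  return Post;
};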
A model is a set of criteria that determines what counts as a valid database entry. In a SQL object-relational manager, each candidate table row flows through the model, which determines whether the row has the required column-cells and whether those cells are well-formed based on the model’s rules. If the row passes all the model’s checks, it is entered into the database.
Source
The migrations folder is where we will be creating our migration files.
A migration is a set of database operations that has an “up” function and a “down” function. The “up” function changes the database and the “down” function attempts to restore the database to how it was before the “up” function ran. You can use migrations for structural database operations, like adding tables, columns, and whatnot.
Source
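To make that concrete, here is a rough sketch of what a migration file could look like. The Posts table is hypothetical; we are not creating it in this part.
// A hypothetical migration with an "up" and a "down" function
module.exports = {
  up: (queryInterface, Sequelize) => queryInterface.createTable('Posts', {
    id: {
      allowNull: false,
      autoIncrement: true,
      primaryKey: true,
      type: Sequelize.INTEGER
    },
    title: Sequelize.STRING,
    createdAt: {
      allowNull: false,
      type: Sequelize.DATE
    },
    updatedAt: {
      allowNull: false,
      type: Sequelize.DATE
    }
  }),
  // "down" restores the database by dropping the table again
  down: (queryInterface) => queryInterface.dropTable('Posts')
};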
Lastly, the seeders folder is where we will be keeping our seed files.
Database seeding is the initial seeding of a database with data. Seeding a database is a process in which an initial set of data is provided to a database when it is being installed. It is especially useful when we want to populate the database with data we want to develop in future. This is often an automated process that is executed upon the initial setup of an application. The data can be dummy data or necessary data such as an initial administrator account.
Source
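Again, just as an illustration, a seed file could look roughly like this, assuming a hypothetical Users table that needs an initial administrator account;
// A hypothetical seeder inserting an initial admin user
module.exports = {
  up: (queryInterface) => queryInterface.bulkInsert('Users', [{
    name: 'Admin',
    email: 'admin@example.com',
    createdAt: new Date(),
    updatedAt: new Date()
  }], {}),
  // Remove the seeded rows when the seed is undone
  down: (queryInterface) => queryInterface.bulkDelete('Users', null, {})
};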
Environment Variables
Let's paint a scenario where another developer, B, picks up our project to work on. His database credentials will definitely be different, so he needs to edit the database config file in our project. Developer C also picks up the project, and on his system port 3000, which we are serving the project on, is already in use by another application, so he needs to change that too. The same thing happens when we move to production. And what about the secret keys our project might use?
The one thing these scenarios have in common is that some values change depending on the environment the project is running in. This is why we need environment variables. They take away the continuous tweaking of certain values in our code base.
Simply put, use an environment variable any place a value in your code will change based on the environment. Using an environment variable in our Node.js application is as simple as;
const PORT = process.env.VARIABLE_NAME
To manage our environment variables, a popular solution is to create a .env file in our project.
Create a .env file in the root of the project and let's add variables and values to it.
stacky
|-- node_modules
|-- src
|-- .env
|-- .sequelizerc
|-- package.json
|-- yarn.lock
Let's add some environment variables and their values to the file;
PORT=3000
DB_HOST=127.0.0.1
DB_USERNAME=username
DB_PASSWORD=password
DB_DATABASE=database
NOTE: If any value contains spaces, wrap it in double quotes ("another value").
Next is to find a way to make Node.js aware of the .env file, its location and its contents. dotenv is a good package we can install to do the heavy lifting.
$ yarn add dotenv
Next is to import the package as early as possible and configure it. We can do this in the app.js file.
import express from 'express';
import dotenv from 'dotenv';
dotenv.config();
const app = express();
// Rest of code omitted
Still in the app.js file, let us use the PORT environment variable.
// Rest of code omitted
// Environment variables are returned as strings
// so we need to convert to number
const port = parseInt(process.env.PORT, 10) || 5000;
// Rest of code omitted
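For context, the port variable would typically be passed to app.listen somewhere in the omitted code, roughly like this (a sketch of the usual pattern, not necessarily the exact code from Part #1);
// Minimal sketch of how the port variable is typically used
app.listen(port, () => {
  console.log(`Server running on port ${port}`);
});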
Run the application and everything should still work.
NOTE: Any change to the .env file requires the application to be stopped and started again for it to pick up the change.
Let's head back to the config.js file in the config folder and use the environment variables.
const dotenv = require('dotenv');

dotenv.config();

module.exports = {
  development: {
    username: process.env.DB_USERNAME,
    password: process.env.DB_PASSWORD,
    database: process.env.DB_DATABASE,
    host: process.env.DB_HOST,
    dialect: 'postgres'
  },
  production: {
    username: process.env.DB_USERNAME,
    password: process.env.DB_PASSWORD,
    database: process.env.DB_DATABASE,
    host: process.env.DB_HOST,
    dialect: 'postgres'
  }
};
In this part, we have successfully created and connected a database to our project. In the next part, we'll be pushing our code to GitHub.