After about two months building and testing pieces of our application, it's time to put the pieces together and run the application again! I apologize for the tedium of these tutorials and commend you for hanging in there!
Here's the diagram we've been looking at over the last few tutorials, with check-marks next to things we've completed.
With these steps completed, we're now able to:
- Instantiate a database connection to Postgres from within package main
- Inject this connection into the UserRepository
- Inject the UserRepository into the UserService
- Inject the UserService and TokenService into the Handler layer
Later we'll also inject a TokenRepository into the TokenService, but I wanted to run this dadgummed application before boring y'all to death!
If at any point you are confused about the file structure or code, go to the GitHub repository and check out the branch for the previous lesson to be in sync with me!
The final file structure should be:
If you prefer video, check out the video version below!
Migrating Users Table
Migrations, in my rough definition, are snapshots of each change you make to your database. For each migration, we create two files with SQL statements: one for updating the database, and another for reverting those updates. These files are usually prefixed or suffixed with a sequence number or timestamp which indicates the order in which the migrations are applied.
Install golang-migrate CLI
I ended up using a tool called golang-migrate CLI to create and apply migrations. Check out the link for instructions on how to install the CLI on your OS. I'm just going to install the CLI directly on my machine as it's a bit simpler than setting it up inside of Docker.
After installing, make sure to check your installation by running:
➜ migrate --version
4.13.0
Migration Commands
Since remembering all of the command-line arguments for migrate can be difficult, let's update the Makefile with some commands for migrating a database.
.PHONY: create-keypair migrate-create migrate-up migrate-down migrate-force

PWD = $(shell pwd)
ACCTPATH = $(PWD)/account
MPATH = $(ACCTPATH)/migrations
PORT = 5432

# Default number of migrations to execute up or down
N = 1

# The create-keypair target from the previous tutorial should remain in this file

migrate-create:
	@echo "---Creating migration files---"
	migrate create -ext sql -dir $(MPATH) -seq -digits 5 $(NAME)

migrate-up:
	migrate -source file://$(MPATH) -database postgres://postgres:password@localhost:$(PORT)/postgres?sslmode=disable up $(N)

migrate-down:
	migrate -source file://$(MPATH) -database postgres://postgres:password@localhost:$(PORT)/postgres?sslmode=disable down $(N)

migrate-force:
	migrate -source file://$(MPATH) -database postgres://postgres:password@localhost:$(PORT)/postgres?sslmode=disable force $(VERSION)
What do these commands do?
migrate-create creates migration files in "sequential order" (-seq) inside of the MPATH folder. That folder will be a ~/migrations folder in the account application, which you should create.
The next commands are used for applying a discrete number of migrations, as defined by N. The default for N is 1. The migrate-up command applies database updates, and the migrate-down command reverts these updates.
migrate-force can be used to force a version number (in our case, the sequence number), which is often required if you have to fix an error in a previous migration. I'll admit I still struggle with this, and have to sort of "hack" or manually fix migration issues before forcing a version. The good news is that the migrate CLI has reasonably readable warnings.
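As an aside, if you ever prefer not to shell out to the CLI, golang-migrate can also be used as a Go library. Here's a minimal sketch, assuming the same migrations folder and local connection string we use in the Makefile; the paths are illustrative, not something to add to our project right now.
package main

import (
	"log"

	"github.com/golang-migrate/migrate/v4"
	_ "github.com/golang-migrate/migrate/v4/database/postgres"
	_ "github.com/golang-migrate/migrate/v4/source/file"
)

func main() {
	// point the source at the migrations folder and the database at local Postgres
	m, err := migrate.New(
		"file://account/migrations",
		"postgres://postgres:password@localhost:5432/postgres?sslmode=disable",
	)
	if err != nil {
		log.Fatalf("failed to create migrate instance: %v", err)
	}

	// apply all pending up migrations; ErrNoChange just means we're already up to date
	if err := m.Up(); err != nil && err != migrate.ErrNoChange {
		log.Fatalf("failed to run up migrations: %v", err)
	}
}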
Create Migration Files
To create a migration, execute the following from the project root (where the Makefile is), where NAME describes the change you're making:
make migrate-create NAME=add_users_table
You'll now have 2 files in the migrations folder (which you have hopefully created). Notice that the sequence number is added to the beginning of each file name, and that the NAME is added to the end.
Let's add the SQL statements for our first migration to 00001_add_users_table.up.sql.
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
CREATE TABLE IF NOT EXISTS users (
  uid uuid DEFAULT uuid_generate_v4() PRIMARY KEY,
  name VARCHAR NOT NULL DEFAULT '',
  email VARCHAR NOT NULL UNIQUE,
  password VARCHAR NOT NULL,
  image_url VARCHAR NOT NULL DEFAULT '',
  website VARCHAR NOT NULL DEFAULT ''
);
The first line adds an extension for creating UUIDs. When we create a user with an email and password, a UUID will be automatically generated for the uid column.
All other columns are non-nullable VARCHARs. We'll use an empty string to represent no value (like a missing imageURL). We also require email addresses to be unique.
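For reference, these columns line up with the User model from the earlier tutorials via sqlx struct tags. The sketch below is roughly what that mapping looks like; the field names and the uuid package are assumptions on my part, so defer to your own model file.
package model

import "github.com/google/uuid"

// User mirrors the columns of the users table.
// The db tags tell sqlx which column each field scans from.
type User struct {
	UID      uuid.UUID `db:"uid" json:"uid"`
	Email    string    `db:"email" json:"email"`
	Password string    `db:"password" json:"-"`
	Name     string    `db:"name" json:"name"`
	ImageURL string    `db:"image_url" json:"imageUrl"`
	Website  string    `db:"website" json:"website"`
}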
Let's add the "down" statements in 00001_add_users_table.down.sql
.
DROP TABLE users;
DROP EXTENSION IF EXISTS "uuid-ossp";
The down file just drops the entire users table and then removes the UUID extension. While this looks simple, make sure to drop the table first, as the uid column depends on the UUID extension. Postgres will prevent us from deleting an extension if any remaining table column depends on it.
Update "Auth" to "Account" in Docker-Compose
We need to run our Postgres container in order to apply the migrations.
Before running the Postgres container, I want to update some of the naming in our docker-compose.yaml file at the root of our project.
postgres-account:
  ...
  volumes:
    - "pgdata_account:/var/lib/postgresql/data"
  ...
account:
  ...
  depends_on:
    - postgres-account
  ...
volumes:
  pgdata_account:
First, we'll change the auth suffix of postgres-auth to account. Also make sure to change the depends_on value of the account service, and to change the volume name used by postgres-account to pgdata_account.
With these updates, let's run this specific service.
docker-compose up postgres-account
You should now be able to log into your Postgres server with PSQL, PGAdmin, or your preferred SQL client. (If you check out the video, you can see this in PGAdmin.) If you're a boss and want to use PSQL, enter the following command, then enter "password" when prompted.
psql -h localhost -d postgres -U postgres
Apply First Migration
With the container running, open up another terminal in the project root.
Run make migrate-up. By default, this should run migration 00001...; if we had 2 migrations, we could set N=2 to run both.
Looking at the database, you should see a users table and a schema_migrations table, the latter tracking which migration we're on and whether the migration state is dirty. The dirty flag would be set, for example, if you created an SQL file with an error and tried to migrate it. You'd then need to resolve some issues manually and possibly run make migrate-force.
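If you're curious, you can also peek at that bookkeeping from Go rather than a SQL client. Here's a small sketch that reads the version and dirty flag, assuming the same local connection details we use in the Makefile migrate commands.
package main

import (
	"fmt"
	"log"

	"github.com/jmoiron/sqlx"
	_ "github.com/lib/pq"
)

func main() {
	// same local connection details as the Makefile migrate commands
	db, err := sqlx.Connect("postgres",
		"host=localhost port=5432 user=postgres password=password dbname=postgres sslmode=disable")
	if err != nil {
		log.Fatalf("unable to connect: %v", err)
	}
	defer db.Close()

	var version int64
	var dirty bool

	// schema_migrations is created and maintained by golang-migrate
	if err := db.QueryRow("SELECT version, dirty FROM schema_migrations").Scan(&version, &dirty); err != nil {
		log.Fatalf("unable to read schema_migrations: %v", err)
	}

	fmt.Printf("current migration version: %d, dirty: %t\n", version, dirty)
}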
For kicks, let's run the down migration to make sure it works, and then reapply the up migration.
make migrate-down
make migrate-up
It might be worth going to the video to see me showing this inside of PGAdmin.
Initialize DB and Inject
Cue the Hallelujah Chorus!
Let's review the dependency injection flow (it may help to reference the big diagram at the top).
- Initialize a connection to Postgres when our application starts up.
- Provide or inject this connection into our UserRepository.
- Inject the UserRepository into the UserService.
- Initialize the TokenService.
- Inject the TokenService and UserService into the handler.
- Use the handler in the gin router/engine and start the application.
Then we'll be able to run our application and make HTTP requests to our signup endpoint.
Connect to Postgres
Let's exit out of our currently running docker-compose, since we have already migrated our database table and because we need to run both our account application and Postgres containers.
docker-compose down
or ctrl-c.
To keep our main file from getting unwieldy, I am going to create a ~/data_sources.go file. We'll also use this file later on for initializing our Redis container and Google Cloud Storage client. We'll also add a close method on dataSources for shutting down all connections when the application closes.
package main

import (
	"fmt"
	"log"
	"os"

	"github.com/jmoiron/sqlx"
	_ "github.com/lib/pq"
)

type dataSources struct {
	DB *sqlx.DB
}

// initDS establishes connections to fields in dataSources
func initDS() (*dataSources, error) {
	log.Printf("Initializing data sources\n")

	// load env variables - we could pass these in,
	// but this is sort of just a top-level (main package)
	// helper function, so I'll just read them in here
	pgHost := os.Getenv("PG_HOST")
	pgPort := os.Getenv("PG_PORT")
	pgUser := os.Getenv("PG_USER")
	pgPassword := os.Getenv("PG_PASSWORD")
	pgDB := os.Getenv("PG_DB")
	pgSSL := os.Getenv("PG_SSL")

	pgConnString := fmt.Sprintf("host=%s port=%s user=%s password=%s dbname=%s sslmode=%s", pgHost, pgPort, pgUser, pgPassword, pgDB, pgSSL)

	log.Printf("Connecting to Postgresql\n")
	db, err := sqlx.Open("postgres", pgConnString)
	if err != nil {
		return nil, fmt.Errorf("error opening db: %w", err)
	}

	// Verify database connection is working
	if err := db.Ping(); err != nil {
		return nil, fmt.Errorf("error connecting to db: %w", err)
	}

	return &dataSources{
		DB: db,
	}, nil
}

// close to be used in graceful server shutdown
func (d *dataSources) close() error {
	if err := d.DB.Close(); err != nil {
		return fmt.Errorf("error closing Postgresql: %w", err)
	}

	return nil
}
An important note in this code is that we must import the Postgres driver, _ "github.com/lib/pq", in order to establish a connection to Postgres with the sqlx library.
We also read in environment variables with SQL connection information. I'll explain where these come from in a moment.
From the environment variables, we create a formatted connection string and make sure we can ping the database server to verify our connection is working. This function returns a package-private dataSources struct which will be used as input for our dependency injection.
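If you later need more control over database connections, the *sqlx.DB embeds the standard library's *sql.DB, so you can tune the connection pool right after the ping succeeds. This is optional, and the numbers below are arbitrary examples rather than values from this tutorial (you'd also need to add "time" to the imports).
// optional pool tuning inside initDS, after the successful Ping
db.SetMaxOpenConns(25)                 // cap concurrent open connections
db.SetMaxIdleConns(25)                 // keep some idle connections warm
db.SetConnMaxLifetime(5 * time.Minute) // recycle connections periodically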
Environment variables
Let's go back to the environment variables.
Recall that in our docker-compose.yaml file, under our account service, we have an env_file key which references our .env.dev file. Docker-compose will load environment variables from this file and make them available to our account application.
I actually want to move the .env.dev file into the account project folder. To continue following along, do so now!
Inside of docker-compose.yaml, let's update:
env_file: ./account/.env.dev
Then update the .env.dev file:
ACCOUNT_API_URL=/api/account
PG_HOST=postgres-account
PG_PORT=5432
PG_USER=postgres
PG_PASSWORD=password
PG_DB=postgres
PG_SSL=disable
REFRESH_SECRET=areallynotsuperg00ds33cret
PRIV_KEY_FILE=./rsa_private_dev.pem
PUB_KEY_FILE=./rsa_public_dev.pem
In addition to Postgres connection string variables, I've also added paths to our RSA keys. We'll create the necessary keypair soon.
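A typo in any of these variables will only show up later as a confusing connection error, so you could optionally fail fast at startup. The helper below is a hypothetical sketch of mine, not part of the tutorial code.
// requireEnv is a hypothetical helper: it returns the value of each variable
// and errors out immediately if any are missing or empty.
func requireEnv(keys ...string) (map[string]string, error) {
	vals := make(map[string]string)
	for _, k := range keys {
		v, ok := os.LookupEnv(k)
		if !ok || v == "" {
			return nil, fmt.Errorf("required environment variable %s is not set", k)
		}
		vals[k] = v
	}
	return vals, nil
}
You could then call requireEnv("PG_HOST", "PG_PORT", "PG_USER", "PG_PASSWORD", "PG_DB", "PG_SSL") at the top of initDS instead of the individual os.Getenv calls.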
Back inside of main.go, let's initialize the data sources at the top of the function.
log.Println("Starting server...")
// initialize data sources
ds, err := initDS()
if err != nil {
log.Fatalf("Unable to initialize data sources: %v\n", err)
}
I also encourage you to add the close method at the end of main.go for shutting down the data sources.
...

// The context is used to inform the server it has 5 seconds to finish
// the request it is currently handling
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()

// shutdown data sources
if err := ds.close(); err != nil {
	log.Fatalf("A problem occurred gracefully shutting down data sources: %v\n", err)
}

// Shutdown server
log.Println("Shutting down server...")
if err := srv.Shutdown(ctx); err != nil {
	log.Fatalf("Server forced to shutdown: %v\n", err)
}

...
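For orientation, this snippet sits inside the graceful shutdown block we wrote when we first set up the server. Roughly, the surrounding code follows the standard signal-handling pattern sketched below; treat it as a reminder of shape rather than code to paste, since your main.go from the earlier tutorial may differ slightly.
// rough shape of the surrounding shutdown logic in main.go (an assumption
// based on the earlier tutorial, shown here only for context)
go func() {
	if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
		log.Fatalf("Failed to initialize server: %v\n", err)
	}
}()

// block until an interrupt signal is received
quit := make(chan os.Signal, 1)
signal.Notify(quit, os.Interrupt)
<-quit

// ...then the context, ds.close(), and srv.Shutdown calls shown above run here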
Now we can run docker-compose up, and our account application should successfully connect to the database (check the account and postgres-account Docker logs).
Dependency Injection
Let's create a file called injection.go at the root of the account application.
This file will be a little long, but aside from loading key files, it's mostly just making calls to the factories in our service and repository layers to make sure our handler gets access to the concrete implementations of our app's features.
package main

// your imports here

// will initialize a handler starting from data sources
// which inject into repository layer
// which inject into service layer
// which inject into handler layer
func inject(d *dataSources) (*gin.Engine, error) {
	log.Println("Injecting data sources")

	/*
	 * repository layer
	 */
	userRepository := repository.NewUserRepository(d.DB)

	/*
	 * service layer
	 */
	userService := service.NewUserService(&service.USConfig{
		UserRepository: userRepository,
	})

	// load rsa keys
	privKeyFile := os.Getenv("PRIV_KEY_FILE")
	priv, err := ioutil.ReadFile(privKeyFile)

	if err != nil {
		return nil, fmt.Errorf("could not read private key pem file: %w", err)
	}

	privKey, err := jwt.ParseRSAPrivateKeyFromPEM(priv)

	if err != nil {
		return nil, fmt.Errorf("could not parse private key: %w", err)
	}

	pubKeyFile := os.Getenv("PUB_KEY_FILE")
	pub, err := ioutil.ReadFile(pubKeyFile)

	if err != nil {
		return nil, fmt.Errorf("could not read public key pem file: %w", err)
	}

	pubKey, err := jwt.ParseRSAPublicKeyFromPEM(pub)

	if err != nil {
		return nil, fmt.Errorf("could not parse public key: %w", err)
	}

	// load refresh token secret from env variable
	refreshSecret := os.Getenv("REFRESH_SECRET")

	tokenService := service.NewTokenService(&service.TSConfig{
		PrivKey:       privKey,
		PubKey:        pubKey,
		RefreshSecret: refreshSecret,
	})

	// initialize gin.Engine
	router := gin.Default()

	handler.NewHandler(&handler.Config{
		R:            router,
		UserService:  userService,
		TokenService: tokenService,
	})

	return router, nil
}
We'll initialize the router, or *gin.Engine, in this injection file instead of in the main file. This means we can remove the initialization of gin in main.go and instead call this inject function.
// router := gin.Default() <- remove this line

router, err := inject(ds)

if err != nil {
	log.Fatalf("Failure to inject data sources: %v\n", err)
}

srv := &http.Server{
	Addr:    ":8080",
	Handler: router,
}
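A nice side effect of wiring everything through these config structs is that tests can swap in fakes without touching inject. As a tiny hypothetical sketch (the mock type name is illustrative, not something we've built in this exact form):
// in a service test, hand the factory a fake repository instead of a real DB
mockUserRepository := new(mocks.MockUserRepository) // hypothetical testify-style mock
us := service.NewUserService(&service.USConfig{
	UserRepository: mockUserRepository,
})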
In our environment file, .env.dev, we reference "dev" .pem key files. So let's create a second key pair, though in reality we could just reuse the test keys we created in the last tutorial.
make create-keypair ENV=dev
Sending Requests
Let's send some requests to our application to make sure it works. I'll include the curl command here, but you can check out how to do this in Postman in the video.
curl --location --request POST 'http://malcorp.test/api/account/signup' \
--header 'Content-Type: application/json' \
--data-raw '{
"email": "guy01@guy.com",
"password":"validpassword123"
}'
You should get a response with an idToken and a refreshToken.
{
"tokens": {
"idToken": "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VyIjp7InVpZCI6ImY2ZTUxOTVhLWMzYjgtNGE4YS1hNTU0LWMzNzgxMGYxOTZmZiIsImVtYWlsIjoiZ3V5MDFAZ3V5LmNvbSIsIm5hbWUiOiIiLCJpbWFnZVVybCI6IiIsIndlYnNpdGUiOiIifSwiZXhwIjoxNjA1OTE0NDk2LCJpYXQiOjE2MDU5MTM1OTZ9.Q62fFJLNkTpG1uoB5ikMG_N2KgDXPNz12rSuXjOImVxW_JWditBeE3pYo6AC89cadKtMbDDW9M4D5sCT43LKLVpB7TUuWkGRxMTakXmF_aBg-bWaQMcQHPi9qzWooc_Hpd0zfFA06-mZNJTwFXQhY_p1rfj-L0BFEqFmm9xmBj3xHQaH14elKkzxA8f4RY9ihjpDio_uo_xGjDWfqbhX4rSt_C5OgX5YgfzgywACMILFZ--KWucTWbcBHTwvyJMzggqYjqkHoykWX1Py7aod96Oa-YGMh_mBE8pAZrnQ9-6I2O45DDUZa-4ZiK40u_0Vu9VGuF39fhnaV1SyIpTuiA",
"refreshToken": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1aWQiOiJmNmU1MTk1YS1jM2I4LTRhOGEtYTU1NC1jMzc4MTBmMTk2ZmYiLCJleHAiOjE2MDYxNzI3OTYsImp0aSI6ImEzYjQ5ODQ4LTZiZTItNDVkOC05YmYzLTJkYjFiOTE5NGE0YiIsImlhdCI6MTYwNTkxMzU5Nn0.7oZRdPmEWjQnYsa1u19IiAspO__Q3vJzArbE8V9oBLU"
}
}
Sweetness! I recommend you copy these tokens into the jwt.io debugger to see if the payloads are what you expect. You can even copy the key files or refresh secret into the debugger to verify the token signatures!
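If you'd rather verify a token locally instead of pasting keys into a website, here's a small sketch using the same jwt package we imported in injection.go (I'm assuming github.com/dgrijalva/jwt-go) and the dev public key file; paste your own idToken in place of the placeholder.
package main

import (
	"fmt"
	"io/ioutil"
	"log"

	"github.com/dgrijalva/jwt-go"
)

func main() {
	// the idToken string returned from the signup response
	idToken := "PASTE_ID_TOKEN_HERE"

	pub, err := ioutil.ReadFile("./account/rsa_public_dev.pem")
	if err != nil {
		log.Fatalf("could not read public key: %v", err)
	}

	pubKey, err := jwt.ParseRSAPublicKeyFromPEM(pub)
	if err != nil {
		log.Fatalf("could not parse public key: %v", err)
	}

	// Parse checks the RS256 signature against the public key
	token, err := jwt.Parse(idToken, func(t *jwt.Token) (interface{}, error) {
		return pubKey, nil
	})
	if err != nil || !token.Valid {
		log.Fatalf("token failed verification: %v", err)
	}

	fmt.Printf("claims: %+v\n", token.Claims)
}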
If you try running the same request again, you should get a conflict error, since the user already exists.
{
"error": {
"type": "CONFLICT",
"message": "resource: email with value: guy01@guy.com already exists"
}
}
Conclusion
That's it for today. I'm not gonna lie, these last three tutorials were kind of a b!@#$! For any of you that have read or watched even a segment here or there, thanks so much! That makes doing this worthwhile! I hope you've learned half as much as I have!
Next time, we'll do some cleanup. In two tutorials, we'll create a TokenRepository for storing refresh tokens in Redis. After that, we'll create a "handler timeout" middleware. While Go's http server has some nice settings for read and write timeouts between the server and client, we want a timeout for the handlers themselves. While Go has this, it's not directly compatible with gin or with sending JSON responses.
Alright! Once again, thank you all. Bye!