Sean Allin Newell

Posted on • Originally published at sean.thenewells.us

Type Safe DB Access checked on every Commit

Photo by Ryan Quintal on Unsplash.

The previous post had a local setup for type-safe SQL access with F#'s type provider for SQL - SQLProvider. In this post we're going to move into CI territory, starting from this commit in our repo, and enforce that every commit works against the DB schema in a valid way. To do that - and this holds for any CI provider - we're going to have to duplicate the steps we did locally on our machines.

We'll track our work in a branch feat/ci-capabilities and land it in a Merge Request.

General CI

So, in general, in order to compile our F# project with SQLProvider, we need:

  • A running database
    • With a schema loaded
  • A valid connection string to the running database
  • NuGet restore to pull down the right DLLs
    • We have a pre-build script that moves those DLLs into a spot the F# compiler can find (there's a sketch of why this matters just below this list).
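
To make the "why" concrete, here's a minimal sketch of what the SQLProvider type definition looks like - the connection string and resolution path here are illustrative, not the exact values from the repo. The important bit is that the type provider connects to the database while the compiler runs, which is why everything in the list above has to be in place before dotnet build.

open FSharp.Data.Sql

// Illustrative values - the real ones live in DbAccess.fs in the repo.
[<Literal>]
let ConnString =
    "Host=localhost;Port=5432;Database=postgres;Username=test_app;Password=pass"

[<Literal>]
let ResolutionPath = "temp" // where the pre-build script drops the Npgsql dlls

// The compiler connects to the running database while building this type,
// so the schema has to be loaded and the connection string has to be valid.
type Db =
    SqlDataProvider<
        DatabaseVendor = Common.DatabaseProviderTypes.POSTGRESQL,
        ConnectionString = ConnString,
        ResolutionPath = ResolutionPath>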

Depending on your CI provider, there are a few different ways to approach getting a db, loaded with a schema, into your pipeline.

  1. Have a 'stable' external environment accessible to your builds
  2. Use a feature from your provider (e.g. GitLab services)
  3. Leverage docker/custom-images/nested-docker/docker compose to set things up and orchestrate (containerization solves everything, right?)

In this post we'll explore GitLab's postgres service solution on shared runners.

GitLab Shared Runners

Note - We'll explore how to leverage a personal k8s cluster of runners in the next post.

GitLab offers free CI/CD for all of their repositories on their shared runner infrastructure. They have a feature called services (linked above as option 2) that we can leverage to have a postgres db loaded for a job.

One of the annoying things about CI/CD (this goes for any CI provider) is trying to test out your pipeline/job before just pushing and seeing if it works. That can be... a not-so-great experience. It's one thing to test a shell script locally, but it's another to:

  • Have valid yaml
  • Have a valid sequence of steps
  • Use a feature (aka service) correctly

And if you try to do anything complicated that propagates artifacts or has dependent jobs, it becomes a lot harder to test the pipeline before git push and pray.

Luckily, GitLab has a nice way to install a tool to help you test locally before pushing. They have a nice set of instructions in their docs, and on my Debian Buster system it was fairly straightforward:

curl -L https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh \
  | sudo bash \
  && sudo apt-get install gitlab-runner \
  && which gitlab-runner

In order to see if this works, let's just do the simplest build step:

build:
  script:
    - echo "Hello World"

Now we can run this locally with the shell executor:

▶ gitlab-runner exec shell build
Runtime platform arch=amd64 os=linux pid=6518 revision=003fe500 version=12.7.1
Running with gitlab-runner 12.7.1 (003fe500)
Using Shell executor...
executor not supported job=1 project=0 referee=metrics
Running on flantop...
Fetching changes...
Initialized empty Git repository in /home/sean/repos/fsharp-sql-provider/builds/0/project-0/.git/
Created fresh repository.
From /home/sean/repos/fsharp-sql-provider
 * [new branch] master -> origin/master
Checking out 5829e774 as master...
Skipping Git submodules setup
$ echo "Hello World"
Hello World
Job succeeded

OH WOWEE.


If you add an image to the yaml, you can use the docker executor (assuming you have Docker set up locally).

  build:
+ image: alpine:3.11.3
  script:
    - echo "Hello World!"

Now you can use the docker executor instead of the shell executor, which will be 'more real' since we'll need to have the dotnet core sdk available for our F# builds: $ gitlab-runner exec docker build.

OH NICE!


Okay, that's all fine and dandy, but we're business-business and need to build enterprising enterprise apps (which of course means we need the dotnet core sdk).


  build:
- image: alpine:3.11.3
+ image: mcr.microsoft.com/dotnet/core/sdk:3.1-alpine
  script:
    - dotnet build

Unfortunately that won't work. We need a database too - so let's set that up with some services! But wait... we need to load our schema into this database, and our current run-postgres-container.sh script assumes we're using Docker to exec psql - when we really should be executing psql in our job, connecting to the service that GitLab will set up for us. We'll have to write a custom script the CI will run, since the service will set up the user for us, but not the schema. Let's call that ci-pg-setup.sh and it'll look like:

#!/bin/sh
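# The postgres service already creates the test_app user and the postgres db;
# we still have to create our schema and load our tables ourselves.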

echo "CREATE SCHEMA IF NOT EXISTS test AUTHORIZATION test_app;" \
  | PGPASSWORD=pass psql -h postgres -U test_app -d postgres -f -

cat schema.sql | PGPASSWORD=pass psql -h postgres -U test_app -d postgres -f -

We'll also need to make a change to our prebuild.sh to make it POSIX compliant, using sh instead of bash - see this commit for deets.

Now, before we update the CI YAML, we'll need to update our code to conditionally use the postgres host (due to Docker networking in the docker executor). We can do that by conditionally defining a compilation symbol in our fsproj file based on the presence and value of an environment variable, and using that symbol to switch our compile-time constant ConnString in DbAccess.fs. That's done in this commit. It's also important to note that our CI setup script depends on the psql tool, so we need to make sure we add it (apk add postgresql-client) in our job.
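
As a rough sketch of that compile-time switch (the symbol name and connection strings below are assumptions - the exact ones are in the linked commit), the fsproj adds a DefineConstants entry conditioned on the CI environment variable, and DbAccess.fs then picks the host at compile time:

#if CI
// In the docker executor, GitLab's postgres service is reachable at host 'postgres'.
[<Literal>]
let ConnString =
    "Host=postgres;Database=postgres;Username=test_app;Password=pass"
#else
// Locally, the container from run-postgres-container.sh listens on localhost.
[<Literal>]
let ConnString =
    "Host=localhost;Database=postgres;Username=test_app;Password=pass"
#endif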

Our full YAML is set up in one commit here - copied below for your full pleasure:

build:
  image: mcr.microsoft.com/dotnet/core/sdk:3.1-alpine
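  # GitLab makes the service below reachable from the job at host 'postgres'
  # (the hostname is derived from the image name); the variables configure the
  # user and password the postgres image creates on startup.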
  services:
    - postgres:12.1-alpine
  variables:
    POSTGRES_DB: postgres
    POSTGRES_USER: test_app
    POSTGRES_PASSWORD: pass
  script:
    - apk add postgresql-client
    - ./ci-pg-setup.sh
    - CI=TRUE dotnet build

And voila! We have a passing job that checked our data layer!

Break it

Let's break our build!

One way to break our build would be to write the code without updating our schema script (maybe some dev manually changes their local schema without committing the schema changes) - or to change the schema in a way that causes F# to blow up. Let's do the latter.

schema.sql

@@ -20,6 +20,7 @@ CREATE TABLE IF NOT EXISTS test.breed_attributes(
 CREATE TABLE IF NOT EXISTS test.cats(
   id SERIAL PRIMARY KEY,
   name TEXT NOT NULL DEFAULT '',
+ age INTEGER NOT NULL DEFAULT 1,
   breedId INTEGER NOT NULL DEFAULT 1,
   FOREIGN KEY (breedId) REFERENCES test.breeds
 );

With this we should expect an error like this (emphasis added by moi):

 error FS0039:
   The field, constructor or member 'Create(breedid, name)' is not defined.
   Maybe you want one of the following:
+ Create(age, breedid, name)

And the proof is in the pudding! A failed build in our pipeline!

Let's fix it. Now that cats require an age as per our schema, we should update our data layer, and our domain type too. Those changes will cascade down to the loader and CLI programs, but after that the compiler has done all the hunting for us, and if it compiles, the program will run soundly.
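
Roughly, the fix looks like the sketch below (field names are illustrative - the real types live in the repo): the domain record grows an Age, and the data layer's Create call grows the matching argument the compiler is asking for.

// Domain type, mirroring the schema change.
type Cat =
    { Id: int
      Name: string
      Age: int // new - required by the new column
      BreedId: int }

// ...and the SQLProvider insert in the data layer gains the extra column, e.g.:
// let row = ctx.Test.Cats.Create(cat.Age, cat.BreedId, cat.Name)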

Merging it in

There are a few tweaks I made before merging:

  • Bundle psql into the image we're using
  • Fix typos here and there
  • Use the module alias in Cli like we did in Loader

And that's a wrap!


Future Work

There's still a lot to do to get to a point where this is workable for a modern CI/CD web app. In the future I'll put up posts to:

  • Instead of shared runners, use a dedicated k8s cluster
  • Dockerize the F# application
  • Introduce a web layer for API
  • Create a deployment
  • Automate the deployment

Stay tuned!
