Martin Heinz

Ultimate Setup for Your Next Golang Project

Note: This was originally posted at martinheinz.dev


For me, the biggest struggle when starting a new project has always been trying to set the project up "perfectly". I always try to use the best directory structure so everything is easy to find and imports work nicely, set up all commands so that I'm always one click/command away from the desired action, and find the best linter, formatter, and testing framework for the language/library that I'm using...

The list goes on and it never gets to the point that I'm actually satisfied with the setup... except for this ultimate and best (IMHO) setup for Golang!

Note: This setup works so well partly because it is based on existing projects which can be found here and here.

TL;DR: Here is my repository - https://github.com/MartinHeinz/go-project-blueprint

Directory Structure

First of all, let's go over the directory structure of our project. There are a few top-level files as well as four directories:

  • pkg - Let's start simple - pkg is a Go package that contains only a global version string. This is substituted for the actual version computed from the commit hash during build (a sketch of this follows after the list).
  • config - Next, there is the configuration directory, which holds files with all necessary environment variables. Any file type can be used, but I recommend YAML files, as they are more readable.
  • build - This directory contains all the shell scripts needed to build and test your application, as well as to generate reports for code analysis tools.
  • cmd - Actual source code! By convention, the source directory is named cmd; inside it there is another one with the name of the project - in this case blueprint. Next, inside this directory is a main.go that runs the whole application; along with it, there are all the other source files, divided into modules (more on that later).

Note: From some feedback, I found out that a lot of people prefer to use internal and pkg directories to house all their source code. I personally find it unnecessary and redundant, therefore I put everything into cmd, but to each their own.
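
As an illustration, here is a minimal sketch of what such a version package might look like (the file layout, variable name, and exact ldflags path here are assumptions for illustration, not necessarily identical to the blueprint's):

// pkg/version.go
package version

// Version holds the application version. The placeholder below is
// overwritten at build time via -ldflags, for example:
//
//    go build -ldflags "-X github.com/MartinHeinz/go-project-blueprint/pkg/version.Version=$(git rev-parse --short HEAD)"
var Version = "UNKNOWN"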

Other than directories, there are also quite a few files; we will talk about those in the following sections.

Go Modules for Perfect Dependency Management

Go projects use a wide variety of dependency management strategies. However, since version 1.11, Go has an official dependency management solution called Go modules.
All our dependencies are listed in the go.mod file, which can be found in the root directory. This is how it might look:

module github.com/MartinHeinz/go-project-blueprint

go 1.12

require (
    github.com/spf13/viper v1.4.0
    github.com/stretchr/testify v1.4.0
)

You may ask, "How does the file get populated with dependencies?". Well, it's pretty simple, all you need is one command:

go mod vendor

This command resets the main module's vendor directory to include all packages needed to build and test all of the module's packages based on the state of the go.mod files and Go source code.
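
For illustration, bootstrapping a module from scratch might look like this (the module path and version simply mirror the go.mod shown above):

go mod init github.com/MartinHeinz/go-project-blueprint   # create go.mod
go get github.com/spf13/viper@v1.4.0                      # add a dependency to go.mod
go mod tidy                                               # prune unused dependencies
go mod vendor                                             # copy dependencies into ./vendor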

Actual Source Code and Configuration

Now we're finally getting to the source code. As mentioned above, the source code is divided into modules. Each module is a directory in the source root. In each module, there are source files along with their test files, e.g.:

./cmd/
└── blueprint
    ├── apis <- Module
    │   ├── apis_test.go
    │   ├── user.go
    │   └── user_test.go
    ├── daos <- Module
    │   ├── user.go
    │   └── user_test.go
    ├── services <- Module
    │   ├── user.go
    │   └── user_test.go
    ├── config <- Module
    │   └── config.go
    └── main.go


This structure helps with readability and maintainability, as it divides the code into reasonable chunks that are easier to traverse. As for the configuration, in this setup I use Viper, a Go configuration library that can handle various formats, command-line flags, environment variables, etc.
So how do we use it (Viper) here? Let's have a look at the config package:


package config

import (
    "fmt"

    "github.com/spf13/viper"
)

var Config appConfig

type appConfig struct {
    // Example variable, which is loaded in the LoadConfig function
    ConfigVar string
}

// LoadConfig loads config from files
func LoadConfig(configPaths ...string) error {
    v := viper.New()
    v.SetConfigName("example")  // <- name of the config file
    v.SetConfigType("yaml")
    v.SetEnvPrefix("blueprint")
    v.AutomaticEnv()
    for _, path := range configPaths {
        v.AddConfigPath(path)  // <- path to look for the config file in
    }
    if err := v.ReadInConfig(); err != nil {
        return fmt.Errorf("failed to read the configuration file: %s", err)
    }
    return v.Unmarshal(&Config)
}


This package consists of a single file. It declares one struct that holds all the config variables and has one function, LoadConfig, which, well, loads the config. It takes paths to directories that should be searched for config files; in our case, we pass the path to the config directory, which resides in the project root and contains our YAML files (mentioned above). And how do we use it? We run it as the first thing in main.go:

if err := config.LoadConfig("./config"); err != nil {
    panic(fmt.Errorf("invalid application configuration: %s", err))
}
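
For reference, a matching config/example.yaml could be as simple as this (the key name is an assumption derived from the struct above; Viper matches keys case-insensitively):

# config/example.yaml
configvar: "some value"

With SetEnvPrefix("blueprint") and AutomaticEnv in place, Viper will also consider environment variables such as BLUEPRINT_CONFIGVAR when looking up keys.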

Simple and Fast Testing

The second most important thing after the code itself? Quality tests. To be willing to write lots of good tests, you need a setup that makes it easy for you to do so. To achieve that, we will use a Makefile target called test, which collects and runs all tests in cmd subdirectories (all files with the _test.go suffix). These tests are also cached, so they are run only if there were changes to the relevant code. This is crucial, because if the tests are too slow you will (most likely) eventually stop running and maintaining them. Besides unit testing, make test also helps you maintain general code quality, as it runs gofmt and go vet with every test run. gofmt forces you to format your code properly and go vet finds any suspicious code constructs using heuristics. Example output:

foo@bar:~$ make test
Running tests:
ok      github.com/MartinHeinz/go-project-blueprint/cmd/blueprint   (cached)
?       github.com/MartinHeinz/go-project-blueprint/cmd/blueprint/config    [no test files]
?       github.com/MartinHeinz/go-project-blueprint/pkg [no test files]

Checking gofmt: FAIL - the following files need to be gofmt'ed:
    cmd/blueprint/main.go

Checking go vet: FAIL
# github.com/MartinHeinz/go-project-blueprint/cmd/blueprint
cmd/blueprint/main.go:19:7: assignment copies lock value to l: sync.Mutex

Makefile:157: recipe for target 'test' failed
make: *** [test] Error 1
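
To make the testing part concrete, a minimal test in one of the modules might look like this (the greet helper is hypothetical, standing in for real application code; it just demonstrates the testify assertions from our go.mod):

package services

import (
    "testing"

    "github.com/stretchr/testify/assert"
)

// greet is a hypothetical stand-in for real application code under test.
func greet(name string) string { return "Hello, " + name }

func TestGreet(t *testing.T) {
    assert.Equal(t, "Hello, John", greet("John"))
}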

Always Running in Docker

People often say "It works on my machine (and not in the cloud)...", and to avoid this we have a simple solution: always run in a Docker container. And when I say always, I really mean it - build in a container, run in a container, test in a container. Actually, I didn't mention it in the previous section, but make test really is "just" a docker run.

So, how does it work here? Let's start with the Dockerfiles we have in the root of the project - we have two of them, one for testing (test.Dockerfile) and one for running the application (in.Dockerfile):

  • test.Dockerfile - In an ideal world, we would have just one Dockerfile for both running and testing the application. However, there might be a need for small adjustments to the environment when the tests are run. That's why we have this image here - to allow us to install additional tools and libraries, in case our tests require them. As an example, let's assume that we have a database that we are connecting to. We don't want to spin up a whole PostgreSQL server with every test run or be dependent on some database running on the host machine. Instead, we can use an SQLite in-memory database for our test runs. But, guess what? The SQLite binary requires CGO. So, what do we do? We just install gcc and g++, flip the CGO_ENABLED flag, and we are good to go (see the sketch after this list).

  • in.Dockerfile - If you look at this Dockerfile in the repository, it's just a bunch of arguments and copying of config into the image - so, what's going on in there? in.Dockerfile is used only from the Makefile, where the arguments are populated when we run make container. Now, it's time to look at the Makefile itself, which does all the Docker stuff for us. 👇
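
Before we get to the Makefile, here is the promised sketch of what a test image along the lines of the SQLite/CGO example could look like (an illustration, not the repository's exact test.Dockerfile):

FROM golang:1.12-alpine

# gcc and g++ are installed because the SQLite driver needs CGO
RUN apk add --no-cache gcc g++

ENV CGO_ENABLED=1

WORKDIR /src
# the test script itself is invoked by `make test` via `docker run`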

Tying it all together with Makefile

For the longest time, Makefiles seemed scary to me, as I had only seen them used with C code, but they are not scary and can be used for so many things, including this project! Let's now explore the targets we have in our Makefile:

  • make build - First in the workflow - application build - it builds a binary executable in the bin directory:
@echo "making $(OUTBIN)"
  @docker run                                              \ # <- It's just a `docker run`
    -i                                                     \ #    command in disguise  
    --rm                                                   \ # <- Remove container when done
    -u $$(id -u):$$(id -g)                                 \ # <- Use current user
    -v $$(pwd):/src                                        \ # <- Mount source folder
    -w /src                                                \ # <- Set workdir
    -v $$(pwd)/.go/bin/$(OS)_$(ARCH):/go/bin               \ # <- Mount directories where
    -v $$(pwd)/.go/bin/$(OS)_$(ARCH):/go/bin/$(OS)_$(ARCH) \ #    binary will be outputted
    -v $$(pwd)/.go/cache:/.cache                           \
    --env HTTP_PROXY=$(HTTP_PROXY)                         \
    --env HTTPS_PROXY=$(HTTPS_PROXY)                       \
    $(BUILD_IMAGE)                                         \
    /bin/sh -c "                                           \ # <- Run build script
        ARCH=$(ARCH)                                       \ #    (Checks for presence
        OS=$(OS)                                           \ #    of arguments, sets
        VERSION=$(VERSION)                                 \ #    env vars and runs
        ./build/build.sh                                   \ #    `go install`)
    "
  @if ! cmp -s .go/$(OUTBIN) $(OUTBIN); then \ # <- If binaries have changed 
   mv .go/$(OUTBIN) $(OUTBIN);               \ #    move them from `.go` to `bin`
   date >$@;                                 \
  fi
  • make test - Next one is testing; it once again uses docker run, which is nearly identical, with the only difference being the test.sh script (only the relevant parts are shown):
# collect test targets from the directory paths passed as arguments
TARGETS=$(for d in "$@"; do echo ./$d/...; done)

# run the tests and print the output
go test -installsuffix "static" ${TARGETS} 2>&1

# collect files that are not properly formatted
ERRS=$(find "$@" -type f -name \*.go | xargs gofmt -l 2>&1 || true)

# collect suspicious constructs reported by go vet
ERRS=$(go vet ${TARGETS} 2>&1 || true)

The lines above are the important parts of the file. The first of them collects testing targets using the path given as a parameter. The second line runs the tests and prints the output to stdout. The remaining two lines run gofmt and go vet respectively, both collecting errors (if there are any) and printing them.

  • make container - Now, the most important part - creating a container that can be deployed:
.container-$(DOTFILE_IMAGE): bin/$(OS)_$(ARCH)/$(BIN) in.Dockerfile
    @sed                                 \
        -e 's|{ARG_BIN}|$(BIN)|g'        \
        -e 's|{ARG_ARCH}|$(ARCH)|g'      \
        -e 's|{ARG_OS}|$(OS)|g'          \
        -e 's|{ARG_FROM}|$(BASEIMAGE)|g' \
        in.Dockerfile > .dockerfile-$(OS)_$(ARCH)
    @docker build -t $(IMAGE):$(TAG) -t $(IMAGE):latest -f .dockerfile-$(OS)_$(ARCH) .
    @docker images -q $(IMAGE):$(TAG) > $@

The code for this target is pretty simple: it first substitutes variables in in.Dockerfile and then runs docker build to produce an image with both "dirty" and "latest" tags. Finally, it writes the image ID into the marker file named by the target ($@), which is how make knows the image is up to date.
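
To give an idea of what gets substituted, in.Dockerfile might look roughly like this (a sketch using the {ARG_*} placeholders from the sed command above; the real file in the repository may differ):

FROM {ARG_FROM}

# the binary and the config are copied into the image; the placeholders
# are substituted by `make container` before `docker build` runs
COPY bin/{ARG_OS}_{ARG_ARCH}/{ARG_BIN} /{ARG_BIN}
COPY config/ /config/

ENTRYPOINT ["/{ARG_BIN}"]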

  • make push - Next, when we have the image, we need to store it somewhere, right? So, all that make push does is push the image to a Docker registry.
  • make ci - Another good use for the Makefile is to leverage it inside our CI/CD pipeline (next section). This target is very similar to make test; it also runs all the tests, but on top of that, it also generates coverage reports, which are then used as input to code analysis tools.
  • make clean - Lastly, if we want to clean up our project, we can run make clean, which removes all files generated by previous targets.

I will omit the remaining ones, as they are not needed for the normal workflow or are just parts of other targets.

CI/CD for Ultimate Coding Experience

Last, but definitely not least - CI/CD. With such a nice setup (if I say so myself), it would be a shame to omit some fancy pipeline that can do tons of stuff for us, right? I won't go into too much detail about what is in the pipeline, because you can check it out yourself here (I also included comments for pretty much every line, so everything is explained), but I want to point out a few things:

This Travis build uses a Matrix Build with 4 parallel jobs to speed up the whole process.

  • The 4 parts (jobs) here are:
    • Build and Test where we verify that the application works as expected
    • SonarCloud where we generate coverage reports and send them to SonarCloud server
    • CodeClimate - here, again as in the previous one, we generate reports and send them, this time to CodeClimate using their test reporter
    • Push to Registry - finally, we push our container to GitHub Registry (stay tuned for blog post on that!)
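
To give a rough idea of the pipeline's shape, the job matrix could be declared along these lines (a simplified sketch only; the real .travis.yml in the repository is more detailed):

language: go
go: "1.12"

jobs:
  include:
    - name: Build and Test
      script: make test
    - name: SonarCloud
      script: make ci          # coverage reports are then sent to SonarCloud
    - name: CodeClimate
      script: make ci          # reports uploaded with CodeClimate's test reporter
    - name: Push to Registry
      script: make container && make push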

Conclusion

I hope this post will help you in your future Go coding adventures. If you want to see more details, go ahead and check out the repository here. Also, if you have any feedback or ideas for improvements, don't hesitate to submit an issue, fork the repo, or just give it a star, so I know it makes sense to work on it a little more. 🙂

In the next part, we will look at how you can extend this blueprint to easily build RESTful APIs, test with an in-memory database, and set up Swagger documentation (you can have a sneak peek in the rest-api branch of the repository).

Top comments (2)

Gerasimos (Makis) Maropoulos • Edited

Nice article Martin, Gophers are missing these types of resources to make their projects perfect from the beginning of their journey! Keep going with more parts!

Gerasimos (Makis) Maropoulos,
Author of Iris Web Framework.

Martin Heinz

I didn't know about Taskfile, looks pretty nice, will definitely try it out, thanks!