Having worked with Jenkins in the past, I've been curious about GoCD for quite a while, and decided I would finally take the plunge and figure out how to set it up. A deployment of GoCD needs at least two parts: a server and an agent. This post will cover my experience and notes from doing that configuration and setup.
## Step 1 - Docker things
Choose the specific version of the container/OS you'd like to use. For my build, I'm going with `gocd/gocd-server` version `23.1.0`, which can be pulled with:

```shell
docker pull gocd/gocd-server:v23.1.0
```
Simple usage from their docker docs would indicate that you can run it with the following:

```shell
docker run -d -p 8153:8153 gocd/gocd-server:v23.1.0
```
This would spin up GoCD locally on port 8153, but since I'm going to be deploying this using portainer, I want to build out a docker compose file to do this.
The docker docs do provide more useful information, such as which paths in the image contain useful data that you may want to preserve with volumes:

- `/godata`
- `/home/go`

It includes the note that the `go` user in the container should have access to these directories, and shares its user id, `1000`.
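If you back these paths with host directories instead of named volumes, you'd want to hand them to that uid up front. A minimal sketch, assuming hypothetical directory names and root access:

```shell
# Create backing directories and give them to uid 1000, the id of the
# container's `go` user (hypothetical paths; requires root).
mkdir -p gocd-data gocd-home
chown -R 1000:1000 gocd-data gocd-home
```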
An additional note that seemed useful here was around running custom entrypoint scripts, and loading configuration from an existing git repo, but at this point I wasn't sure I needed any of that, so I left it for now.
## Step 2 - Portainer things
I'm going to create a portainer stack to deploy this in, which probably adds unneeded complexity, but it is what I run most of my home lab systems in, so I'm at least fairly familiar with its pitfalls. Because I already have services deployed in this system, I stole one of my own configs and tweaked it to run GoCD. Below is what my stack/compose file looks like to test things out:
```yaml
version: '3.2'
services:
  gocd-server:
    restart: always
    image: gocd/gocd-server:v23.1.0
    container_name: gocd-server
    ports:
      - "8153:8153"
    volumes:
      - type: volume
        source: gocd-data
        target: /godata
        volume:
          nocopy: true
      - type: volume
        source: gocd-home
        target: /home/go
        volume:
          nocopy: true
    #environment:
    #depends_on:
    networks:
      - gocd-net
networks:
  gocd-net:
volumes:
  gocd-data:
    driver_opts:
      type: "nfs"
      o: "addr=<nas-ip>,nolock,soft,rw"
      device: ":/storage-path/gocd/gocd-data"
  gocd-home:
    driver_opts:
      type: "nfs"
      o: "addr=<nas-ip>,nolock,soft,rw"
      device: ":/storage-path/gocd/gocd-home"
```
I should note I have a pretty unique storage system in place for my volumes on my portainer system, and I wouldn't recommend following that setup unless you absolutely know what you're doing. I've obscured some values in those volumes, but I wanted to include the setup for the sake of completeness.
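The NFS driver options are specific to my storage setup; if you just want Docker-managed local volumes, that whole `volumes:` section at the bottom collapses to something like:

```yaml
volumes:
  gocd-data:
  gocd-home:
```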
After a bit of debugging to get to the compose file you see above, it finally deployed without complaint, so now I could test and make sure the UI actually spun up. A quick visit to the IP and port showed I had moderate success so far: I had the start-up wizard.
## Step 3 - Agents
My first visit to the GoCD UI dropped me into an `Add a new pipeline` screen, which presented a whole host of things I hadn't used before, but made sense upon review. It wanted a `material`, which is a source of some kind; for most people - myself included - this will probably be a git repo.
This did, in turn, provide me with my first hiccup. Attempting a test connection to a public repo of mine of course failed, as I lacked any kind of authentication. I also feared I was getting ahead of myself, as I knew that GoCD required an agent - which I hadn't set up yet - but I was already messing with pipelines.
After poking around the GoCD docs a little more, I decided I wasn't quite ready for a pipeline, and went back to the install portion of the documentation to see what I should do next. It turns out, that was setting up an agent.
I browsed the available docker images for agents, and decided on an ubuntu 22 agent, also at version `23.1.0`.
Unlike the server, which required absolutely no env configuration unless you were into the advanced options, the agent container required at least one env value: the server for it to talk to. Makes sense.
Back to the docker compose file, I added another service entry:
```yaml
  gocd-agent-ubuntu:
    restart: always
    image: gocd/gocd-agent-ubuntu-22.04:v23.1.0
    container_name: gocd-agent-ubuntu
    volumes:
      - type: volume
        # NB: gocd-agent needs a matching entry under the top-level volumes: block
        source: gocd-agent
        target: /godata
        volume:
          nocopy: true
      - type: volume
        source: gocd-home
        target: /home/go
        volume:
          nocopy: true
    environment:
      - GO_SERVER_URL=https://ip.add.re.ss:8153/go
    #depends_on:
    networks:
      - gocd-net
```
It should be noted that their own docs say an https server URL is required. I have my setup behind an nginx proxy manager, so I added an entry for it to get a signed cert, but I believe the agent can self-generate one for you to use if you toggle off a specific security setting (see the `Configuring SSL` section of the docker agent docs linked above).
The next steps in the GoCD docs told me to head to the Agents tab in the UI. I spent far too long looking for this in the admin drop-down menu before realizing it's a top-level menu item. I did see mine listed, and getting it attached to the server was as simple as selecting the checkbox next to its name and hitting the `Enable` button at the top.
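As an aside, the docker agent docs also describe auto-registering agents via environment variables, which would skip the manual enable step. A sketch of what that would add to the agent service (the key is a placeholder, and has to match the `agentAutoRegisterKey` value in the server's config):

```yaml
    environment:
      - GO_SERVER_URL=https://ip.add.re.ss:8153/go
      # placeholder - must match the server's agentAutoRegisterKey setting
      - AGENT_AUTO_REGISTER_KEY=<some-secret-key>
      - AGENT_AUTO_REGISTER_HOSTNAME=gocd-agent-ubuntu
```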
## Step 4 - Users
The next thing I needed to do before actually giving any real data to GoCD was configure user management, as right now it was wide open. Turns out this was not as simple as I was hoping. GoCD does not do user management of its own; it takes in an authentication service of some type. You can import users from that source and give them specific permissions, but it won't let you just spin up a user in its own system.

What it does allow out of the box is either a password file or an LDAP server. At some point I may spin up a docker container for LDAP - that would be cool for all sorts of things - but for this case I opted for a password file to try and keep things simple.
The docs pointed me to a github repo for the plugin, where I learned `gocd-passwd` can be used to generate the hashed password.
The example in the docs showed the file being in `/godata/config/` on the container, so I connected to the container console with portainer and set about making the file. However, this was a misunderstanding on my part, as that command and file actually aren't part of the docker image.
I did, however, have an ubuntu agent, and the docs on the github repo showed how to use `htpasswd` to generate a password file on ubuntu, so I attached to that console instead and ran the following:
```shell
# first grab the utils
apt-get update
apt-get install apache2-utils
# generate a password file called passwd, using bcrypt, for the user admin
htpasswd -c -B passwd admin
```
Because I had the home directory mapped across both of these containers, I should be able to share this file between them, or simply copy it and delete the original once I had it on the server container.
After reattaching to the server container, I confirmed the file was there, and moved it to where it belonged in the `/godata/config` folder.
Back in the UI, I went to `Admin -> Authorization Configurations` and clicked add. I selected the `password file authentication plugin` and filled in my details. I wasn't sure exactly what to call the `id` for this auth config, so I went with the value in the example screenshot - `file-auth-config` - and figured it would yell at me if that wasn't allowed.
I then did the following:

- I hit `Check connection` and it reported that it was ok
- I hit `Allow only known users to login`. Do not do this.
- I clicked `Save`.
This was a massive mistake, as I had not allowed my newly created user to login yet - it had just been added - so I essentially locked myself out of the UI.
If you happen to do this, good news: this configuration is written to a file, and you can manually edit that config. The file lives in `/godata/db/config.git/cruise-config.xml` and `/godata/config/cruise-config.xml`.
Edit the line that looks like this:

```xml
<authConfig allowOnlyKnownUsersToLogin="true" id="file-auth-config" pluginId="cd.go.authentication.passwordfile">
```

To look like this:

```xml
<authConfig allowOnlyKnownUsersToLogin="false" id="file-auth-config" pluginId="cd.go.authentication.passwordfile">
```

Save those files, and if needed, restart your container.
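If you'd rather script the fix than hand-edit XML, the same flip can be done with `sed`. The demo below works on a local stand-in file; inside the container you'd point it at the two paths above instead.

```shell
# Make a local stand-in for cruise-config.xml (just the relevant line).
cat > cruise-config.xml <<'EOF'
<authConfig allowOnlyKnownUsersToLogin="true" id="file-auth-config" pluginId="cd.go.authentication.passwordfile">
EOF
# Flip the flag in place; -i.bak keeps a backup copy of the original.
sed -i.bak 's/allowOnlyKnownUsersToLogin="true"/allowOnlyKnownUsersToLogin="false"/' cruise-config.xml
grep allowOnlyKnownUsersToLogin cruise-config.xml
```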
Once you have yourself back in the system - or hopefully you didn't lock yourself out in the first place - you can navigate to `Admin -> Users management` and check the little box that makes you a system admin.
Once this is done, you can navigate back to `Admin -> Authorization configuration`, edit that password file configuration, and check `allow only known users to login` if you want.
Yay, a working user!
## Step 5 - Pipelines!
Having gotten users and agents set up, I was now back at my `Add a new pipeline` screen from earlier. I realized I was being silly, as the volume I created for the home directory was for exactly this: to allow adding ssh keys. If you haven't done this, github has good docs on this subject.
I created a new key from the docker container, and saved it to the volume so it wouldn't be lost on container restart.
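For reference, the key generation inside the agent container looked roughly like this (the key type, comment, and path are my choices, not requirements):

```shell
# Generate a passphrase-less ed25519 key under the mounted /home/go volume.
mkdir -p "$HOME/.ssh"
ssh-keygen -t ed25519 -C "gocd-agent" -f "$HOME/.ssh/id_ed25519" -N ""
# The public half is what gets pasted into GitHub.
cat "$HOME/.ssh/id_ed25519.pub"
```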
Next I added it to my github account - docs are here - and now it was time to see if it actually worked.
It did not.
This made me sad, but this felt like such a common thing that I had to be missing something, so back to the docs I went.
Under `faq/troubleshooting` I found a very useful page on ssh keys and docker. The step I had been missing was running a clone in the console to accept the github server signature. This could have also been done with `ssh-keyscan`. I opted to try and clone a repo, and it did prompt me. I accepted, and then tried to test my connection again in the UI.
This time, it worked!
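For completeness, the `ssh-keyscan` route would have looked something like this (it needs network access to github.com, and appending blindly trusts whatever key comes back, so compare the fingerprint against GitHub's published ones if you're careful):

```shell
# Fetch GitHub's ed25519 host key and append it to known_hosts,
# so a later git clone over ssh doesn't prompt for the signature.
mkdir -p "$HOME/.ssh"
ssh-keyscan -t ed25519 github.com >> "$HOME/.ssh/known_hosts"
```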
I set my repo URL in the materials section, gave my pipeline a name of `build-and-deploy` - we'll see if it actually ends up doing that - and added a build stage and a task. This got me to the point where I could click `save + run`, which finally landed me on the dashboard page.