So I have a rather robust DevOps platform at home in my lab - private GitLab, Kubernetes, a registry, and so on. Now that I'm pushing to a public, production Kubernetes cluster in the cloud, I have a very interesting problem: getting my container images from my private lab network, which sits on an unroutable subnet with a custom TLD, pulled into that public cluster...
I cooooould create a VPN between my pfSense router and the Kubernetes cluster, which would then allow routing to the private container registry. The problem is that I also use a custom Certificate Authority in my lab - it works great, but since I'm on a managed Kubernetes service, I can't easily add my CA certificate to those nodes.
So I guess I have to deploy another Harbor Container Registry in the cloud to mirror my images publicly...
Deploy the VM
I'm working with DigitalOcean's managed Kubernetes service - with their Marketplace of apps and the GitLab integration I use, it's pretty easy to get up and running. That's key, because I was getting tired of managing my own clusters and their constantly changing deployments.
I'm also going to deploy this Harbor Container Registry in DigitalOcean, in the same Region and Availability Zone so that I can use private networking between the Kubernetes cluster and the registry.
You can quickly deploy the same sorta VM I'm using by clicking the following link which will take you to the DigitalOcean Cloud Panel:
(That's not a referral link, but this is: https://m.do.co/c/9058ed8261ee)
Configuring the VM
So to deploy Harbor, we're going to use Docker and Docker Compose. This is going to make things very, very easy...connect to that VM you just made and run:
sudo yum install epel-release -y
sudo yum update -y
sudo reboot
Once we've done a quick system update and reboot to bring in any new kernels, we can continue with confidence. Speaking of confidence, let's set some security basics:
sudo yum install fail2ban firewalld wget nano -y
sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
sudo systemctl enable fail2ban
sudo systemctl enable firewalld
sudo systemctl start fail2ban
sudo systemctl start firewalld
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --permanent --add-service=https
sudo firewall-cmd --reload
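Before moving on, it's worth a quick sanity check that the firewall and fail2ban actually came up the way we expect:

```shell
# Confirm the firewall is running and only http/https (plus ssh) are exposed
sudo firewall-cmd --state            # should print "running"
sudo firewall-cmd --list-services    # should include http and https
sudo systemctl is-active fail2ban    # should print "active"
```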
Next, we can install Docker and Docker Compose:
sudo yum install yum-utils -y
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install docker-ce docker-ce-cli containerd.io -y
sudo curl -L "https://github.com/docker/compose/releases/download/1.25.5/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
sudo systemctl enable docker
sudo systemctl start docker
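With the daemon enabled and started, a quick smoke test confirms both Docker and Compose are functional before we hand them anything important:

```shell
# Verify the Docker daemon responds and Compose is on the PATH
sudo docker version --format '{{.Server.Version}}'
sudo docker run --rm hello-world
docker-compose --version
```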
Getting Harbor
You can deploy Harbor a number of different ways - today I'll be using Docker Compose to do so. Some may ask "Why not just deploy it on your Kubernetes cluster?" - well, that's because your registry is best kept in an environment separate from a Kubernetes cluster that can be torn down and redeployed at a moment's notice.
Head on over to https://goharbor.io/ and grab the latest release - at the time of writing, that's v1.10.2:
wget https://github.com/goharbor/harbor/releases/download/v1.10.2/harbor-online-installer-v1.10.2.tgz
tar xvf harbor-online-installer-v1.10.2.tgz
sudo mv harbor /opt
cd /opt/harbor
## Create a directory for the container data and give it the right SELinux contexts
sudo mkdir /data
sudo chcon -Rt svirt_sandbox_file_t /data
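Since a wrong SELinux context here tends to surface later as confusing container permission errors, check that the label actually landed:

```shell
# The context field should show svirt_sandbox_file_t
ls -Zd /data
```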
SSL Certificates
Now, you could follow the Harbor docs and deploy your own self-signed certificates - I do in my lab. However, in this instance since we're working in the public cloud, we're going to use Let's Encrypt to generate SSL certificates for us.
sudo yum install certbot -y
sudo certbot certonly --standalone --preferred-challenges http --non-interactive --staple-ocsp --agree-tos -m you@example.com -d example.com
# Setup letsencrypt certificates renewing
cat <<EOF | sudo tee /etc/cron.d/letsencrypt > /dev/null
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
# Stop Harbor's nginx proxy during renewal so certbot's standalone server can bind port 80
30 2 * * 1 root /usr/bin/certbot renew --pre-hook "docker stop nginx" --post-hook "docker start nginx" >> /var/log/letsencrypt-renew.log && cd /etc/letsencrypt/live/example.com && cp privkey.pem domain.key && cat cert.pem chain.pem > domain.crt && chmod 600 domain.key && chmod 644 domain.crt
EOF
# Rename SSL certificates
# https://community.letsencrypt.org/t/how-to-get-crt-and-key-files-from-i-just-have-pem-files/7348
sudo sh -c 'cd /etc/letsencrypt/live/example.com && \
cp privkey.pem domain.key && \
cat cert.pem chain.pem > domain.crt'
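If you'd like to sanity-check that rename logic before pointing it at real Let's Encrypt output, you can dry-run it against a throwaway self-signed certificate - the CN and the temp directory here are just placeholders:

```shell
# Dry-run of the key/cert rename using a throwaway self-signed cert
workdir=$(mktemp -d)
cd "$workdir"

# Stand-ins for Let's Encrypt's privkey.pem / cert.pem / chain.pem
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=example.com" -keyout privkey.pem -out cert.pem
cp cert.pem chain.pem

# The same steps the cron job runs
cp privkey.pem domain.key
cat cert.pem chain.pem > domain.crt

# Confirm the concatenated cert still parses
openssl x509 -noout -subject -in domain.crt
```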
sudo mkdir -p /etc/docker/certs.d/example.com
sudo ln -s /etc/letsencrypt/live/example.com/chain.pem /etc/docker/certs.d/example.com/ca.crt
sudo ln -s /etc/letsencrypt/live/example.com/cert.pem /etc/docker/certs.d/example.com/example.com.cert
sudo ln -s /etc/letsencrypt/live/example.com/privkey.pem /etc/docker/certs.d/example.com/example.com.key
sudo systemctl restart docker
Deploy Harbor
Now that we have our SSL certificates in place, let's configure the harbor.yml file - ensure the following lines are changed:
hostname: example.com

https:
  port: 443
  # The path of cert and key files for nginx
  certificate: /etc/letsencrypt/live/example.com/fullchain.pem
  private_key: /etc/letsencrypt/live/example.com/privkey.pem
With that and whatever else you'd like changed configured, let's deploy the full Harbor suite:
sudo ./prepare
sudo ./install.sh --with-notary --with-clair --with-chartmuseum
With that, you should now be able to navigate to your hostname and access the Web UI - making sure to change the default admin password ASAP, of course.
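Beyond the Web UI, the real test is a push. Assuming example.com is your registry hostname and you're using Harbor's default library project (substitute your own project name if you made one):

```shell
# Log in and push a scratch image to prove the registry works end to end
docker login example.com    # admin / your new password
docker pull alpine:latest
docker tag alpine:latest example.com/library/alpine:test
docker push example.com/library/alpine:test
```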
Registry Replication
Now, while you already have a great working registry at hand, what if you're like me and you also need to replicate a private registry into a public one? Thankfully, replication is a first-class citizen in Harbor.
Harbor can replicate via a push or a pull action - in this case I'll be pushing from private into the public registry.
1. Public Harbor - Create a Project & Robot Account
First thing we have to do is navigate to our newly created Public Registry. We'll create a Project and in that project we'll create a Robot Account - this Robot Account will be the API key we need to interact with the registry.
- Navigate to Projects
- Click +New Project
- Fill in the modal to your liking
- Click the link of the new Project you created to enter it
- Click the Robot Accounts tab
- Click the + New Robot Account button
- Fill in the modal as you'd like
- Note the robot$YOUR_NAME - that's your Access ID; the long Token is your Access Secret. You'll need these two values in a moment to enable the Private registry to push into this Public registry.
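You can confirm the robot account works from any Docker host before wiring up replication - the user name is the full robot$... string, and it needs single quotes so the shell doesn't try to expand the $. The account name and token variable here are placeholders:

```shell
# robot$myrobot and ROBOT_TOKEN are placeholders - substitute your own values
export ROBOT_TOKEN='<Access Secret from the modal>'
docker login example.com -u 'robot$myrobot' -p "$ROBOT_TOKEN"
```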
2. Private Harbor - Create a Registry
Now switch over to the Private registry, where we'll create the Registry entry and a Replication rule that uses it to push images into the Public registry.
- Navigate to Administration > Registries
- Click the + New Endpoint button
- Fill in the modal with the details as you see needed for your Public registry - including the Access ID and Access Secret from the Robot Account in the Public Registry's Project you created earlier - whew.
- Click Test Connection and OK if it pans out
3. Private Harbor - Create a Replication Rule
- Navigate to Administration > Replications
- Click the + New Replication Rule button
- Fill out the modal as you'd like, referencing the newly created remote destination public registry we just defined. Note: I like to set the Trigger Mode to Event Based so the images will push automatically.
- Click Save
- Click the option circle to the left of your Replication rule you just created - click the Replicate button to run a manual replication
This can take some time depending on your network and how many repositories, tags, and layers you have. A meager 22 artifacts at 1.7 GB took about 16 minutes to push manually - individual event-based pushes will be much faster, though.
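Rather than eyeballing the UI, you can also confirm the replicated artifacts landed via Harbor's REST API. These are the Harbor 1.10 paths (Harbor 2.x moved to /api/v2.0/...), and the project name and ID here are placeholders for whatever you created:

```shell
# Look up the public project, then list the repositories replicated into it
curl -s -u 'admin:YOUR_PASSWORD' 'https://example.com/api/projects?name=mirror'
curl -s -u 'admin:YOUR_PASSWORD' 'https://example.com/api/repositories?project_id=2'
```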