Brian Burton

Cloud Build, Docker and Artifact Registry: CI/CD Pipelines with Private Packages

If you use Artifact Registry to store private Java/Node/Python packages and Cloud Build to compile your code before deployment, you'll quickly discover that the Docker container it creates can't access the Artifact Registry by default.

I discovered this solution after a day of trial and error, so I hope it saves you the same forehead-to-desk frustration.

Step by Step

  1. Deploy your private packages to the Artifact Registry.

  2. Create a Service Account that only has access to read from the Artifact Registry. In my case I created artifact-registry-reader@<PROJECT>.iam.gserviceaccount.com and gave it access to the Artifact Registry repository as an "Artifact Registry Reader."

  3. Edit the newly created artifact-registry-reader@<PROJECT>.iam.gserviceaccount.com Service Account and, under permissions, add your Cloud Build Service Account (<PROJECT_NUMBER>@cloudbuild.gserviceaccount.com) as a Principal, granting it the "Service Account Token Creator" role. [Note: this works even if you run Cloud Build in a separate project. A gcloud sketch of steps 2 and 3 follows the cloudbuild.yaml example below.]

  4. Next, your cloudbuild.yaml file should look something like this:

steps:
  # Step 1: Generate an Access Token and save it
  #
  # Here we call `gcloud auth print-access-token` to impersonate the service account 
  # we created above and to output a short-lived access token to the default volume 
  # `/workspace/access_token`.  This is accessible in subsequent steps.
  #
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk:slim'
    args:
      - '-c'
      - >
        gcloud auth print-access-token --impersonate-service-account
        artifact-registry-reader@<PROJECT>.iam.gserviceaccount.com >
        /workspace/access_token
    entrypoint: sh
  # Step 2: Build our Docker container
  #
  # We build the Docker container passing the access token we generated in Step 1 as 
  # the `--build-arg` `TOKEN`.  It's then accessible within the Dockerfile using
  # `ARG TOKEN`
  #
  - name: gcr.io/cloud-builders/docker
    args:
      - '-c'
      - >
        docker build -t us-docker.pkg.dev/<PROJECT>/services/frontend:latest
        --build-arg TOKEN=$(cat /workspace/access_token) -f
        ./docker/prod/Dockerfile . &&

        docker push us-docker.pkg.dev/<PROJECT>/services/frontend
    entrypoint: sh
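For reference, here's roughly how steps 2 and 3 look with the gcloud CLI. This is a sketch rather than the exact commands I ran: it assumes the npm repository is named npm in the us multi-region (matching the .npmrc below) and that <PROJECT_NUMBER> is the project number of the project running Cloud Build.

# Step 2: create the reader service account and grant it read access
# to the Artifact Registry repository
gcloud iam service-accounts create artifact-registry-reader --project=<PROJECT>

gcloud artifacts repositories add-iam-policy-binding npm \
  --project=<PROJECT> --location=us \
  --member=serviceAccount:artifact-registry-reader@<PROJECT>.iam.gserviceaccount.com \
  --role=roles/artifactregistry.reader

# Step 3: allow the Cloud Build service account to impersonate the reader
# account, i.e. to mint short-lived access tokens for it
gcloud iam service-accounts add-iam-policy-binding \
  artifact-registry-reader@<PROJECT>.iam.gserviceaccount.com \
  --member=serviceAccount:<PROJECT_NUMBER>@cloudbuild.gserviceaccount.com \
  --role=roles/iam.serviceAccountTokenCreator

With the IAM wiring in place, you can run the pipeline above with gcloud builds submit --config=cloudbuild.yaml .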

5. This next step is specific to private npm packages in the Artifact Registry: my app has a partial .npmrc file with the following content:

@<NAMESPACE>:registry=https://us-npm.pkg.dev/<PROJECT>/npm/
//us-npm.pkg.dev/<PROJECT>/npm/:username=oauth2accesstoken
//us-npm.pkg.dev/<PROJECT>/npm/:email=artifact-registry-reader@<PROJECT>.iam.gserviceaccount.com
//us-npm.pkg.dev/<PROJECT>/npm/:always-auth=true

[Note: All that's missing is the :_authToken line]
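Before relying on Cloud Build, you can sanity-check the token flow locally. A minimal sketch, assuming your own credentials are allowed to impersonate the reader service account and that the Dockerfile lives at ./docker/prod/Dockerfile as in the cloudbuild.yaml above:

# Mint a short-lived access token for the reader service account
TOKEN=$(gcloud auth print-access-token \
  --impersonate-service-account=artifact-registry-reader@<PROJECT>.iam.gserviceaccount.com)

# Build locally, passing the token exactly the way Cloud Build does
docker build -t us-docker.pkg.dev/<PROJECT>/services/frontend:latest \
  --build-arg TOKEN="$TOKEN" -f ./docker/prod/Dockerfile .

If npm install succeeds here, the Cloud Build pipeline should behave the same way.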

6. Finally, my Dockerfile uses the minted token to update the .npmrc file, giving the build access to pull private npm packages from the Artifact Registry.

ARG NODE_IMAGE=node:17.2-alpine

FROM ${NODE_IMAGE} AS base

ENV APP_PORT=8080

ENV WORKDIR=/usr/src/app
ENV NODE_ENV=production

FROM base AS builder

# Create our WORKDIR
RUN mkdir -p ${WORKDIR}

# Set the current working directory
WORKDIR ${WORKDIR}

# Copy the files we need
COPY --chown=node:node package.json ./
COPY --chown=node:node ts*.json ./
COPY --chown=node:node .npmrc ./
COPY --chown=node:node src ./src

#######################
# MAGIC HAPPENS HERE
# Append our access token to the .npmrc file and the container will now be 
# authorized to download packages from the Artifact Registry
# 
# IMPORTANT! Declare the TOKEN build arg so that it's accessible
#######################

ARG TOKEN
RUN echo "//us-npm.pkg.dev/<PROJECT>/npm/:_authToken=\"$TOKEN\"" >> .npmrc

RUN npm install

RUN npm run build

EXPOSE ${APP_PORT}/tcp

CMD ["cd", "${WORKDIR}"]
ENTRYPOINT ["npm", "run", "start"]

This example is npm-specific, but you can apply the same concepts to any other GCP resource, giving your Docker build containers secure, short-lived-token access to anything in your project(s).
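For example, the same minted token can authenticate pip against a private Python repository in the Artifact Registry. A rough sketch, assuming a repository named python in the us multi-region (the repository name and index URL are placeholders, not from my setup):

# In a Dockerfile stage that declares `ARG TOKEN`, wrap this in a RUN
# instruction; the oauth2accesstoken username plus the short-lived access
# token authenticates pip against the Artifact Registry
pip install \
  --index-url "https://oauth2accesstoken:${TOKEN}@us-python.pkg.dev/<PROJECT>/python/simple/" \
  -r requirements.txt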
