If you're following along, this is the second part in a series about how I built APOD color search.
In this post I'll cover how everything was built with the intent to run via a hosted service instead of a local environment/database. Going cloud-first requires more effort up front, but it saves a tremendous amount of time when deploying and running remotely.
Everything via GitHub Actions 🛰️
Since processing images for this project involves iterating over a large dataset, it was clear early on that the amount of computation was going to be immense. This meant processing over a long period of time and recovering cleanly whenever errors occurred.
I already had some experience with GitHub Actions (their native CI/CD framework) and after finding rust-toolchain I wanted to experiment with whether it'd be feasible to use Actions as a compute service.
Turns out: works great! ✨ The hardware is more than sufficient for CPU-bound image processing, and the only issue I encountered was builds stopping when I hit my account limit of 50 compute hours per month. But with the Actions UI I was able to trigger workflows and process batches of APODs one year at a time over the course of a few months.
Rust in action(s) ⚡️
The Rust-based utility to process images from the previous article can be invoked via CLI. To be able to do this in an Action, the environment must have the right dependencies installed and appropriate environment variables.
A safe way to handle environment variables is via GitHub's encrypted secrets. It requires manually copying values from a local `.env` file used for development, but it's better than storing unencrypted values in your repository. For this project, the Rust script fetches from apod-api and saves to a Supabase Postgres instance, hence the keys for `SUPABASE_*` and `APOD_API_*`.
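Inside the Rust binary, those variables can be read at startup so a missing secret fails fast instead of partway through a batch. Here's a minimal sketch of that idea; the `Config` struct is hypothetical, not the project's actual code, and only two of the variables are shown:

```rust
use std::env;

/// Configuration read from the environment at startup.
/// The variable names match the workflow secrets; the struct
/// itself is a hypothetical sketch for illustration.
struct Config {
    supabase_url: String,
    apod_api_url: String,
}

impl Config {
    /// Fails with `VarError` if any required variable is unset,
    /// so a misconfigured workflow run aborts immediately.
    fn from_env() -> Result<Config, env::VarError> {
        Ok(Config {
            supabase_url: env::var("SUPABASE_URL")?,
            apod_api_url: env::var("APOD_API_URL")?,
        })
    }
}

fn main() {
    match Config::from_env() {
        Ok(cfg) => println!("connecting to {}", cfg.supabase_url),
        Err(e) => eprintln!("missing configuration: {e}"),
    }
}
```

Failing up front like this is especially useful on a hosted runner, where a half-finished batch wastes compute minutes.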
For the environment to have Rust installed, including the `dtolnay/rust-toolchain@stable` action will ensure the toolchain can be used in the workflow:
```yaml
name: Process images for month

env:
  SUPABASE_REST_URL: ${{ secrets.SUPABASE_REST_URL }}
  SUPABASE_URL: ${{ secrets.SUPABASE_URL }}
  SUPABASE_PUBLIC_API_KEY: ${{ secrets.SUPABASE_PUBLIC_API_KEY }}
  APOD_API_URL: ${{ secrets.APOD_API_URL }}
  APOD_API_KEY: ${{ secrets.APOD_API_KEY }}

jobs:
  process-images-for-month:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout 🛎️
        uses: actions/checkout@v3
      - name: Install Rust 🦀
        uses: dtolnay/rust-toolchain@stable
        with:
          components: rustfmt, clippy
          toolchain: stable
```
Dispatching via workflow UI 🛎️
The only remaining piece is a way to trigger the workflow to process images. The `on` field is the entrypoint for when a workflow runs, and thankfully there's a way to trigger it directly from GitHub's UI. Adding `on: workflow_dispatch` provides an entrypoint to run the workflow via the Actions UI with given inputs. These arguments can then be referenced via `github.event.inputs` in the `run` command:
```yaml
...
on:
  workflow_dispatch:
    inputs:
      year:
        description: 'Year'
        required: true
      month:
        description: 'Month'
        required: true
...
jobs:
  ...
      - name: Run 🤖
        run: cargo run -- ${{ github.event.inputs.year }} ${{ github.event.inputs.month }}
```
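On the Rust side, those two dispatch inputs arrive as plain positional arguments. A small sketch of how they might be parsed and validated (the `parse_args` helper is hypothetical, not the project's actual code):

```rust
use std::env;

/// Parses the `<year> <month>` arguments passed by the workflow's
/// `cargo run -- ...` step. Returns `None` if either argument is
/// missing, non-numeric, or the month is out of range.
/// Hypothetical sketch for illustration.
fn parse_args(args: &[String]) -> Option<(u16, u8)> {
    let year: u16 = args.get(1)?.parse().ok()?;
    let month: u8 = args.get(2)?.parse().ok()?;
    (1..=12).contains(&month).then_some((year, month))
}

fn main() {
    let args: Vec<String> = env::args().collect();
    match parse_args(&args) {
        Some((year, month)) => println!("processing {year}-{month:02}"),
        None => eprintln!("usage: cargo run -- <year> <month>"),
    }
}
```

Validating early keeps a typo in the dispatch form from silently burning a runner's worth of compute.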
Once that is pushed to the repository's main branch, the option to dispatch should be available. Simply navigate to the workflow under the 'Actions' tab:
And on the right side of the table of workflow runs will be a 'Run workflow' option (GitHub will inform you that this is because the "workflow has a `workflow_dispatch` event trigger"):
Assuming all secrets are in order, triggering a run will kick off a build to process the images!
Lastly, I added a couple additional jobs to avoid formatting errors or regressions when the workflow runs with new commits:
```yaml
...
jobs:
  ...
      - name: Lint 🧹
        run: |
          cargo fmt --all -- --check
          cargo clippy -- -D warnings
      - name: Test 🔨
        run: cargo test
...
```
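The `cargo test` step picks up any `#[cfg(test)]` modules in the crate. As an illustration of the kind of regression check this catches, here's a hypothetical hex-color helper with a unit test; this is not the project's actual code:

```rust
/// Formats an RGB triple as a lowercase hex color string.
/// Hypothetical example of code covered by `cargo test` in CI.
fn to_hex(r: u8, g: u8, b: u8) -> String {
    format!("#{r:02x}{g:02x}{b:02x}")
}

fn main() {
    println!("{}", to_hex(255, 0, 128));
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn formats_rgb_as_hex() {
        assert_eq!(to_hex(255, 0, 128), "#ff0080");
    }
}
```

Because the workflow runs lint and test jobs on every commit, a formatting slip or a broken helper like this fails the build before any compute-heavy processing starts.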
This made it super easy to trigger a bunch of workflows in parallel to process images over time. I'd highly recommend Actions for small projects like this, assuming your requirements aren't too resource-intensive.