If you've ever found yourself repeatedly typing commands to build and manage your projects, it might be time to explore a build automation tool like Make. In this article, we'll cover the basics of Make, which can significantly streamline your development process.
Introduction
The evolution of a project's build process often follows a familiar pattern:
- Initial Setup: Dump all your code into a repository.
- Basic Instructions: Add a README with rough build instructions.
- Script Overload: Include scripts for common tasks.
- Build Bottlenecks: Encounter slow builds due to unnecessary rebuilds.
- Enter Build Systems: Switch to a more efficient build system like Make.
Let's dive into Make and how it can make your builds better.
Understanding Make's Concepts
Targets and Rules
In the world of Make, a target represents a desired outcome, like a file you want to create. A rule defines how to fulfill a target. Unlike some other build tools that use the concept of "tasks", Make focuses on generating files from other files.
Dependencies and Execution
Make is clever when deciding whether to rebuild a target or not. It compares the timestamps of files to determine if a target needs an update. It rebuilds a target when either:
- The target file doesn't exist.
- The target file is older than any of its dependencies.
If a target doesn't correspond to a real file, there's nothing to compare timestamps against, so Make will always run its associated rule. Such targets are called "phony" targets, and declaring them via .PHONY makes this explicit and prevents clashes with real files of the same name.
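For example, a clean-up rule is typically declared phony so Make never confuses it with a file named clean (a sketch using placeholder folder names):

.PHONY: clean
clean:
	rm -rf bin out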
Rule Syntax
target: prerequisites
	commands
Rules are defined as:
- Target: file path, followed by a colon.
- Prerequisites: files or targets required before running this rule, followed by a newline.
- Commands: Shell commands to be executed, indented by a tab.
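For example, a minimal rule that compiles a small C program (a hypothetical illustration, not part of the project used later in this article) looks like this:

# "app" is rebuilt whenever main.c or util.c changes
app: main.c util.c
	cc -o app main.c util.c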
Read more on the rule syntax in the docs.
Variables
Variables in Make help you avoid repetition. You can use variables to store values, filenames, or even results from commands. Variables can be used in target names, prerequisites and commands.
Variables are assigned using an equals sign (=) and their names are case-sensitive.
Common expressions are:
- Simple values.
- Listing files using the wildcard function.
- Calling a shell command (e.g. cat or find).
project = digital-tortoise
src = $(wildcard src/*)
assets = $(shell cat assets.txt)
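These variables can then be referenced elsewhere in the makefile, for example in a hypothetical packaging rule:

# Bundle the sources and assets into a project archive
$(project).tgz: $(src) $(assets)
	tar -czf $(project).tgz $(src) $(assets)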
Read more on using variables in the docs.
Running make
Run make [target] to build a specific target.
- You can specify multiple targets, separated by spaces.
- If you don't specify a target, the first rule in the makefile is the default target.
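For example, assuming a makefile that defines build and test targets:

# Build the default (first) target in the makefile
make
# Build two specific targets in one invocation
make build test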
Read more on running Make in the docs.
Example: Compiling and Running a Program
Let's work through a basic example to solidify your understanding. Imagine you have a simple program written in a src folder that you want to compile and run to generate an output file.
# Our default rule runs the "build" and "run" rules
all: build run

# Friendly aliases for running specific commands
build: bin/my-program
run: out/result.txt

# Tell make that the "all", "build" and "run" targets are
# "phony" and aren't really files or folders.
.PHONY: all build run

# 1st target compiles our src into the bin directory
bin/my-program: src/*
	# A pretend compiler which takes "src" and "out" args
	compile --src src/* --out bin/my-program

# 2nd step uses our program to generate a text output
out/result.txt: bin/my-program
	bin/my-program --write-to out/result.txt
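As a sketch of how the incremental behaviour plays out (src/main.c stands in for any of the source files):

# First run: compiles bin/my-program, then writes out/result.txt
make
# Second run: nothing to do, both targets are newer than their inputs
make
# Changing a source file makes both targets stale again
touch src/main.c
make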
Handling Non-File Targets
Sometimes, you need to handle targets that aren't single files. You can create sentinel files to represent these targets and their last update times.
My preference is to organize all these sentinel files into a single .make folder. This directory should be ignored and not committed by Git.
There are a few tricks we'll use here:
- Variables are computed before any rules run, so we create a variable we won't actually use just to run a script that ensures our .make folder exists before we try to write files to it.
- There's a special variable, $@, which expands to the name of the target currently being built. We use it in the last command of our rule to update the timestamp of our sentinel file (.make/test).
- We prefix our touch command with @ to avoid printing it to the output.
# Ensure the .make folder exists when starting make
_ := $(shell mkdir -p .make)

# Add a phony target for "test" as an alias
.PHONY: test
test: .make/test

.make/test:
	# Run our testing tool
	run-tests --all
	# Mark .make/test as up-to-date
	@touch $@
Real-World Example: Compiling TypeScript
Let's see how you can use Make to manage a TypeScript project:
# Ensure the .make folder exists when starting make
_ := $(shell mkdir -p .make)

# Find all our source files
SRC := $(wildcard src/*.ts)

# Default to only building
default: build

# Alias dist as build
build: dist

# Alias dist sentinel target
dist: .make/dist

# Alias package output file
pack: bin/package.tgz

# Mark aliases as phony (not real files)
.PHONY: default build dist pack

# Install packages if definitions changed
.make/node_modules: package.json yarn.lock tsconfig.json
	yarn install
	# Mark .make/node_modules as up-to-date
	@touch $@

# Build dist directory from source and packages
.make/dist: $(SRC) .make/node_modules
	yarn tsc --outDir dist
	# Mark .make/dist as up-to-date
	@touch $@

# Build package zip from dist directory
bin/package.tgz: .make/dist
	mkdir -p bin
	cd dist && tar --gzip -cf ../bin/package.tgz .
This Makefile installs dependencies, compiles TypeScript to JavaScript, and packages the result.
Tips and Recap
- Use Meaningful Targets: Name targets by their output file.
- Use Sentinel Files: For non-file targets, create sentinel files.
- Leverage Dependencies: Model dependencies using file targets.
- Provide Phony Targets: Create helpful phony targets for easier use.
Conclusion
Make is a powerful tool to streamline your build process. By understanding its concepts and syntax, you can significantly improve your development workflow.
For an extended guide to makefiles I'd recommend downloading the free Modern Make Handbook.
For a deeper dive, consult the official Make documentation. Happy building!
Appendix: Common Pitfalls To Avoid
- Each command is run in its own shell by default. This means that setting an environment variable won't work unless you chain the commands together with && (see the sketch after this list).
- The wildcard function isn't recursive. Use $(shell find ...) as an alternative.
- Beware of version-specific features. There are lots of fancy features in the docs, but macOS ships with a very old version (3.81 from 2006), so they won't work out of the box. Either stick to the basic features or ask macOS users to install a newer version.
- Don't use directories as targets.
- Don't use committed files as targets.
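Here are the first two pitfalls as quick sketches (the target and file names are illustrative):

# Broken: each line runs in a separate shell, so FOO is unset
# by the time the second line runs and nothing is printed.
broken:
	export FOO=bar
	echo $$FOO

# Works: chaining with && keeps both commands in one shell.
works:
	export FOO=bar && echo $$FOO

# wildcard isn't recursive, so use find to pick up nested files.
SRC := $(shell find src -name '*.ts')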
Why avoid using directories as targets?
Using a directory as a target is generally fine if you're only creating the directory itself. However, if you're also creating files within that directory, a problem arises: the "atomic" nature of the target, which ensures it's either fully updated or not updated at all, is compromised.
Targets are considered "atomic" based on timestamps, which determine if something is up-to-date. To maintain this atomicity, timestamps should only be updated after the entire target is successfully completed. When you create a directory as part of the target and it's successful, the timestamp becomes up-to-date. If subsequent commands fail after the directory creation (such as when writing files), the target remains up-to-date despite the failure, and it won't be rerun.
Assuming the directory already exists and updating the timestamp at the end could lead to problems if the directory is accidentally deleted. Manually recreating the directory would give it a newer timestamp, causing the target not to run. While you can use the force option in "make" to work around this, it's not an ideal user experience compared to using sentinel files.
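As a sketch of the failure mode (generate-files is a made-up command):

# Risky: mkdir refreshes the "out" directory's timestamp straight away,
# so if generate-files fails afterwards, "out" still looks newer than
# its prerequisites and make will skip this rule on the next run.
out: src/*
	mkdir -p out
	generate-files --into out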
Why avoid using committed files as targets?
The issue with using committed files as targets stems from the expectation that committed files are manually maintained sources. It's the same problem as the question "Why shouldn't target files be edited?": when committed files are treated as targets, version control can update the file itself, and Make will then treat it as already up-to-date after a checkout, even though it may no longer match the current sources.
Top comments (1)
Some recommendations from me:
I think that Make is incredible given how conceptually simple it is, but it is easy to hit its limits. When you have massive Makefiles, you can also run into bottlenecks, which is why a lot of projects these days use Ninja instead, which does the same thing as Make (rebuild only stuff where the input files changed) but the file format is designed to be machine-generatable (by a build system generator such as CMake) and it is blazing fast.