Like approximately 3% of the desktop user pool globally, I use Linux on my workstations. Most of the time, this is all I need, and all I've needed for about the past 15 years since I was introduced to Linux on the desktop. However, every so often in my platforms and developer tooling work, I need to test something on MacOS specifically (which the bulk of the team here at Forem uses for their development environments). Thankfully, it's reasonably straightforward to spin up temporary-ish Macs in the cloud with AWS's EC2, including with secure graphical access over VNC, so let's do a bit of a lightning round (or as close to one as we can get - this is still somewhat of an elaborate dance!) setting such a thing up.
Before we begin anything here, it's important to note that unlike many EC2 instances, Mac instances have a minimum time allocation of 24 hours. Consider this for billing purposes.
First, let's define what we want:
- An ARM64 ("Apple Silicon") MacOS EC2 instance
- Secure access to both the CLI and GUI of said machine
- Quick setup involving as few tools on our own (presumably Linux) workstation, and as little AWS knowledge, as possible
- The ability to tear all of this down quickly when we're done with it
- To not break the bank doing all of the above
These criteria restrict us in a few noteworthy ways:
- Due to ARM Mac Mini region support, our EC2 instance must live in one of a handful of regions, such as `us-west-2` (Oregon) or `ap-southeast-1` (Singapore). At time of writing, these regions vary slightly in pricing.
- Since MacOS's remote desktop support runs over VNC, a protocol not known for being particularly secure, we'll want to SSH tunnel it rather than forward a direct port (which would require more tinkering in AWS's firewalling console anyway, so this is a win-win).
- To avoid needing to set up tools and workflows locally to support it, we won't use Terraform or other Infrastructure As Code tooling to define the infrastructure here (to anyone who had bets out on whether I'd ever encourage the use of the AWS console over an HCL file, here you have it! An exception case! ...though I may write a Terraform-friendly follow-up some day 😄). One fewer `tfstate` to worry about is a nice bonus.
Let's dig in.
First, we need to log into the AWS Console. If you're logging in as an IAM user, that user needs to have quite broad permissions. Here's an example I haven't personally tested, but which seems pretty close from memory. IAM security best practices are out of scope for this article; if you're doing everything as the account root user, that's between you and your password/API key management systems 🙂
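As a rough, untested sketch (and an intentionally broad one - tighten it to taste if your account demands least-privilege), a policy along these lines should cover the EC2 and Dedicated Host operations we'll be performing. The `Sid` is an arbitrary label of my choosing:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "MacSandboxEC2",
      "Effect": "Allow",
      "Action": ["ec2:*"],
      "Resource": "*"
    }
  ]
}
```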
Next, let's head to the EC2 management interface in the region of choice (note the aforementioned region restrictions). In my case, I'll be using `us-west-2` since I live in the Pacific Northwest anyway. Region selection always matters in AWS, but it will particularly matter here: we'll eventually be seeing the entire desktop of our instance, which is a considerable amount of graphical data being pushed over the wire, and perhaps more importantly, our mouse movements will be subject to whatever latency we have to the datacenter in question (on top of VNC protocol lag).
From here, we're going to click the bright orange "Launch Instance" button. At time of writing, this is the only accent-colored button on the whole page; it can't be missed.
Let's give our instance a name, perhaps "Josh's MacOS Sandbox", and give it a tag (to make it easier to find later, or easier for whoever handles your billing audits to know that this charge is from a developer and not some application we host in this AWS account, if applicable). I'm adding `kind=dev` as a tag.
Next up, we can use a Quick Start AMI rather than needing to browse through all of the AMIs to have ever roamed the earth - let's click the macOS Quick Start AMI. At the bottom left of this section of the page, there's a toggle for Architecture (this is below where it tells us the version of MacOS, which should always default to the newest version AWS supports). This will default to `64-bit (Mac)`, which is Intel! We need to change this dropdown to `64-bit (Mac-Arm)` to get an M1 Mac!
We can now skip the `Instance Type` subheading: there's only one type of M1 Mac instance, and that's `mac2.metal`, with 8 CPU cores and 16GB of RAM.
Next up, we need to select a keypair that will be used to SSH into the box in question. If you already have one that you use for other EC2 boxen, it'll work here, too (SSH keys from Linux machines work fine on MacOS), or you can generate a new pair if needed.
Next up is the part many folks dread about EC2: the networking settings. Thankfully, the defaults are exactly what we want: default VPC, default subnet in any availability zone, automatic public IP addresses, a new (not existing, unless you know what you're doing!) security group, and SSH traffic (and only SSH traffic) allowed from anywhere (`0.0.0.0/0`). Is this what we want for a production system? Of course not! Is this a production system? Of course not! Quick and dirty, get 'er done. If your company (or default VPC) forbids access from "anywhere", you'll need to reconcile this somehow: a common way to do so is with a "bastion host", which beyond mentioning the existence of such, is beyond the scope of this article and an exercise for the reader.
If you're reasonably confident your external IP address rarely-if-ever changes, you could consider restricting traffic to "My IP" from the dropdown. This is a great option if you work from a corporate office with a single outbound connection (small businesses often qualify), an okay option if you work from a home office on cable or fibre, and a horrible option if you use LTE, Starlink, or any other heavily-CGNAT-ed connection. If you don't know what "heavily-CGNAT-ed connection" means or don't know what the backbone of your outside-world connection is, stick to "Anywhere".
Finally, if needed, increase the disk size from the default 100GiB, but note that it's impossible to shrink below that (the AMI snapshot was created with a 100GiB disk).
When we try to click the orange `Launch Instance` button on the right sidebar (see image below), we'll get an error complaining that we need to choose a Dedicated Host to launch the instance onto, an implementation detail of the fact that EC2 Macs are bare-metal instances and not VMs like most Linux instances are. Select "Dedicated host - launch this instance on a dedicated Host" from the dropdown.
If we try launching the instance again at this point, we get this lovely error message (transcribed in the alt-text):
Don't close the tab here! Instead, let's right click the `EC2` link found towards the top of the page and open it in a new tab to go take care of some Dedicated Host housekeeping to get a host to launch this instance on.
In the sidebar, let's head over to "Dedicated Hosts".
On this page there's a single accent-colored button labeled "Allocate Dedicated Host". Let's do the thing. On the next screen, we can name the host whatever we want (I'm going with "Josh's MacOS Sandbox" again, since I intend to tear this Dedicated Host down fairly soon rather than leave it for reuse by a coworker later, which is what this tutorial will assume, but season to taste as necessary), and must select `mac2` for the Instance Family. `mac2.metal` should be the only option for Instance Type. Pick an Availability Zone at random (`Math.ceil(Math.random() * 4)` if you don't believe in humans' ability to choose values at random, a philosophical debate for another time), and enable Instance Auto-Placement (something I failed to do in the screenshot below...). You must disable Host Maintenance, as `mac2` instances don't support it. Everything else is somewhat at your discretion, though it's helpful to add the same tags you added to the EC2 instance config to the Dedicated Host as well (in my case, `kind=dev`). Review the screen and smash that Allocate button (there's thankfully no bell to ring).
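For the terminally CLI-inclined (despite this article's console-first approach), the same allocation can be sketched with the AWS CLI. Treat the zone and tag values below as placeholders for your own choices:

```shell
# Allocate one mac2.metal Dedicated Host with auto-placement enabled.
# --availability-zone is a placeholder: pick any zone in your chosen region.
aws ec2 allocate-hosts \
  --instance-type mac2.metal \
  --availability-zone us-west-2a \
  --auto-placement on \
  --quantity 1 \
  --tag-specifications 'ResourceType=dedicated-host,Tags=[{Key=kind,Value=dev}]'
```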
With the host now allocated, we can close this tab and head back to the instance config tab and hit Retry Failed Tasks. At this point we should see a successful instance launch:
Next up, let's connect to the instance over SSH. AWS helpfully provides a shortcut button for this:
On this screen we're given SSH connection instructions. You'll need to modify them to reflect wherever you store your SSH keys locally, but in general, they should Just Work to get a shell connection to our Mac, which will be the basis for getting our GUI set up. Note that you should probably give the box about 5 minutes to boot up after launching the instance. This is a great opportunity to stare off into the abyss and think about your life decisions, or grab a coffee, or whatever.
Note! As a rule of thumb, never expect a MacOS terminal to have a clue how to handle Linux terminal emulators or their terminfo, even if you override the `TERM` environment variable to something generic like `xterm-256color`. Prepare for your Home and End keys to likely do nothing useful, as just one example.
While the box comes with a few useful things such as Homebrew out of the box, such exploration is mostly left as an exercise to the reader depending on what they actually want to accomplish with the instance. If you're only interested in CLI access to the box, you're done with setup and can head down to the Teardown Notes below. For the rest of us looking for GUI access, read on.
As the AWS re:Post Knowledge Center describes, we'll need to run a few commands in the shell to get VNC access to the machine:
```shell
sudo defaults write /var/db/launchd.db/com.apple.launchd/overrides.plist com.apple.screensharing -dict Disabled -bool false
sudo launchctl load -w /System/Library/LaunchDaemons/com.apple.screensharing.plist

# And now set a password for the user, since we otherwise connect only with keyfiles over SSH.
sudo /usr/bin/dscl . -passwd /Users/ec2-user
```
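Before moving on, it's worth a quick sanity check that the Screen Sharing daemon actually came up and is listening on VNC's default port, 5900. One way to do this, still in that same SSH session, is with `lsof` (which ships with MacOS):

```shell
# Should print a process bound to TCP port 5900 in LISTEN state;
# empty output means Screen Sharing didn't start.
sudo lsof -nP -iTCP:5900 -sTCP:LISTEN
```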
Next, `exit` to leave the shell we currently have open, and let's spin up a new SSH connection that adds port forwarding (`-L 5900:localhost:5900`, which says "when I connect to port 5900 on my workstation, pass the data through the SSH tunnel onwards to whatever `localhost` means on the remote server, port 5900"). If we previously used `ssh -i ~/.ssh/aws-forem-klardotsh-1.pem ec2-user@<instance-address>`, we'll now use `ssh -i ~/.ssh/aws-forem-klardotsh-1.pem -L 5900:localhost:5900 ec2-user@<instance-address>`.
Importantly, you'll see no special output relating to that port forward here! You'll be dumped into a plain old normal shell, just like we had before. That's okay, just know that you must keep this shell open while VNCing into the box.
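If keeping a throwaway shell open bothers you, standard OpenSSH flags can hold the tunnel open without one: `-N` skips running a remote command, and `-f` backgrounds ssh after authenticating. The key path and hostname below are placeholders matching the earlier connection example:

```shell
# Forward local port 5900 to the instance's VNC port with no interactive
# shell; ssh drops to the background once the connection is established.
ssh -i ~/.ssh/aws-forem-klardotsh-1.pem -N -f -L 5900:localhost:5900 ec2-user@<instance-address>
```

You'll then need to kill that backgrounded ssh process yourself when you're done VNCing.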
Next up, we need some sort of VNC client on our workstation. I strongly recommend Remmina if you value not spending your time debugging and configuring things. Just select "VNC" from the main connection bar's dropdown, punch in `localhost:5900`, and hit enter. An authentication screen will pop up: fill it with `ec2-user` for the username, and whatever password you provided to `dscl` earlier. Hit Ok, wait a moment, and - voila! We have graphical access to our Mac, albeit a locked one. We'll need to log in one more time, but at this point, we have full GUI access to a real M1 Mac! Develop away, but be sure to check out Teardown Notes below.
If you close the terminal hosting the SSH connection, the VNC connection will also die, so be sure to disconnect cleanly from VNC before closing that terminal.
To tear down our stack when we're done, we'll need to keep a few things in mind, which are documented in the AWS User Guide:
- Destroying the EC2 instance will take quite a while - noticeably longer than a typical Linux instance.
- Destroying the EC2 instance only destroys the running system, but does not release our claim on the underlying Dedicated Host. We will be billed for the Dedicated Host in the meantime.
- Releasing the Dedicated Host can be done only a minimum of 24 hours after we originally claimed it. Blame Apple's licensing here for the lack of quick turnaround.
- Optionally, remember to tear down the related security group that was automatically created by AWS during instance config creation.
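If you happen to have the AWS CLI configured after all, the teardown steps above can be sketched as follows. The IDs are placeholders; substitute your own instance and host IDs from the console:

```shell
# Terminate the Mac instance (this is the slow part)...
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0

# ...then, once the 24-hour minimum allocation has elapsed, release the host
# so billing stops.
aws ec2 release-hosts --host-ids h-0123456789abcdef0
```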