A few weeks ago, I got assigned to a new project. Like a lot of my work, it's fully remote. Unlike most of my prior gigs of this sort, while the customer does implement network-isolation for their cloud-hosted resources, they aren't leveraging any kind of trusted developer-desktop solution (virtualized – cloud-hosted or otherwise – or a customer-issued, hardened, VPN-enabled laptop). Instead, they have per-environment bastion-clusters and use IP whitelisting to allow remote access to those bastions. To make managing that whitelisting less onerous, they require each of their vendors to put all of that vendor's employees behind a single origin-IP.
Since we're a small company, the way we ended up implementing things was to put a Linux-based EC2 (our "jump-box") behind an EIP. The customer adds that IP to their bastions' whitelist-set. That EC2 is also configured with a default-deny security-group that whitelists each team member's "work-from" (usually "home") IP address.
To avoid incurring pointless EC2 charges, we run the instance in a single-node AutoScaling Group (ASG) with scheduled scaling actions. At the beginning of each business day, a scheduled scaling-action takes the instance-count from 0 to 1; at the end of each business day, another takes it from 1 to 0.
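For illustration, that pair of scheduled actions can be set up with a couple of AWS CLI calls like the following (the group-name, action-names, and UTC cron-expressions here are placeholders rather than our real values):

$ aws autoscaling put-scheduled-update-group-action \
    --auto-scaling-group-name jump-box-asg \
    --scheduled-action-name start-of-day \
    --recurrence "0 13 * * 1-5" \
    --min-size 1 --max-size 1 --desired-capacity 1
$ aws autoscaling put-scheduled-update-group-action \
    --auto-scaling-group-name jump-box-asg \
    --scheduled-action-name end-of-day \
    --recurrence "0 23 * * 1-5" \
    --min-size 0 --max-size 0 --desired-capacity 0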
This deployment-management choice not only reduces compute-costs but also ensures that there's no host available to attack outside of business hours (in case the default-deny security-group plus whitelisted source-IPs isn't protection enough). Since the auto-scaled instance's launch-automation includes an "apply all available patches" action, each day's EC2 comes up fully current with respect to security and other patches. Further, on the off chance that someone had broken into a given instantiation, any beachhead they establish goes "poof!" when the end-of-day scale-to-zero action occurs.
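The "apply all available patches" step needn't be anything fancy. A minimal sketch of that part of the launch-automation – assuming a RHEL-family AMI and plain shell user-data, which isn't necessarily what we actually run – is just:

#!/bin/bash
# Bring today's instance fully up to date before anyone logs in
dnf --assumeyes update
# If the update pulled in a new kernel, reboot to pick it up
# (needs-restarting ships in the dnf-utils package)
needs-restarting --reboothint || reboot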
Obviously, it's not an absolutely 100% bulletproof safety-setup, but it does raise the bar fairly high for would-be attackers.
At any rate, beyond our "jump box" are the customer's bastion nodes and their trusted-IPs list. From the customer-bastions, we can then access the hosts they've configured for running development activities. While they don't rebuild their bastions or their "developer host" instances as frequently as we rebuild our "jump box", we have been trying to nudge them in a similar direction.
For further fun, the customer-systems require a 2FA token for access. Fortunately, they use PIN-protected SmartCards rather than something like RSA fobs.
Overall, to get to the point where I'm able to either SSH into the customer's "developer host" instances or use VSCode's git-over-ssh capabilities, I have to go:
1. Laptop
2. (Employer's) Jump Box
3. (Customer's) Bastion
4. (Customer's) Development host
Wanting to keep my customer-work as close to completely separate from the rest of my laptop's main environment as I can, I use Hyper-V to run a purpose-specific RHEL8 VM. For next-level fun/isolation/etc., my VM's vDisk is LUKS-encrypted. I also configure the VM for token-passthrough, to make my SmartCard-authenticated access to the customer's systems easy. But, still, there's a whole lot of hop-skip-jump just to be able to start running my code-editor and pushing commits to their git-based SCM host.
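For anyone wanting to replicate the encrypted-vDisk piece: it's bog-standard LUKS from inside the VM (or done at install-time); the device-path and mapper-name here are purely illustrative:

$ sudo cryptsetup luksFormat /dev/sdb          # one-time: write the LUKS header (prompts for a passphrase)
$ sudo cryptsetup open /dev/sdb customer-work  # each boot: unlock to /dev/mapper/customer-work
$ sudo mkfs.xfs /dev/mapper/customer-work      # one-time: lay a filesystem onto the unlocked volume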
During screen-sharing sessions, I've observed both my company's other consultants and the customer's other vendors' consultants executing these long-assed SSH commands. Basically, they do something like:
$ ssh -L <LOCAL_PORT>:<REMOTE_HOST>:<REMOTE_PORT> <USER>@<REMOTE_HOST> -i ~/.ssh/key
…and they do it for each of hosts 2-4 (or just 3 & 4 for the consultants who VPN to a trusted network). Further, to keep each hop's connection open, they fire up top (or similar) after each hop's connection is established.
I'm a lazy typist. So, just one of those ssh invocations makes my soul hurt. In general, I'm a big fan of the capabilities afforded by a suitably-authored ${HOME}/.ssh/config file. Prior to this engagement, I mostly used mine just to set up host-aliases and to ensure that things like SSH-key and X11 forwarding are enabled. However, I figured there was a way to further configure things to require a lot fewer keystrokes for this project's connection-needs. So, I started digging around.
Ultimately, I found that OpenSSH's client-configuration offers a beautiful option for making my life require far fewer keystrokes and eliminating the need to start "keep the session alive" processes. That option is the "ProxyJump" directive (combined with suitable "LocalForward" and, while we're at it, "User" directives). In short, I set up one stanza that defines my connection to my "jump box". Then I added a stanza that defines my connection to the customer's bastion, using the "ProxyJump" directive to tell it "use the jump box to reach the bastion host". Finally, I added a stanza that defines my connection to the customer's development host, using the "ProxyJump" directive to tell it "use the bastion host to reach the development host" (there's a sketch of the resulting config a couple of paragraphs down). Since I've also added the requisite key- and X11-forwarding directives as well as remote service-tunneling directives, all I have to do is type:
$ ssh <CUSTOMER>-dev
And, after the few seconds it takes the SSH client to negotiate three linked SSH connections, I'm given a prompt on the development host. No need to type "ssh …" multiple times and no need to start top on each hop.
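For the curious, here's a sketch of what those three stanzas look like; the host-names, user-names, key-path, and forwarded ports below are placeholders rather than anyone's real values:

Host jump-box
    HostName <JUMP_BOX_EIP>
    User <MY_USER>
    IdentityFile ~/.ssh/key
    ForwardAgent yes

Host <CUSTOMER>-bastion
    HostName <BASTION_ADDRESS>
    User <CUSTOMER_USER>
    ForwardAgent yes
    # Reach the bastion by first hopping through the jump-box
    ProxyJump jump-box

Host <CUSTOMER>-dev
    HostName <DEV_HOST_ADDRESS>
    User <CUSTOMER_USER>
    ForwardAgent yes
    ForwardX11 yes
    # Reach the development host by hopping through the bastion
    ProxyJump <CUSTOMER>-bastion
    # Tunnel the remote service back to my laptop
    LocalForward <LOCAL_PORT> <SERVICE_HOST>:<SERVICE_PORT>

A nice side-effect of the chaining: ssh <CUSTOMER>-bastion still works on its own for the occasions when I only need the middle hop.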
Side note: since each of the hops also implements a login banner, adding:
LogLevel error
to each stanza saves a crapton of banner-text flying by and polluting your screen (and preserves your scrollback buffer!).
As a bit of a closing note: if any of the intermediary nodes is likely to change with any frequency, in a way that causes that remote's HostKey to change, adding:
UserKnownHostsFile /dev/null
to your config will save you from polluting your ${HOME}/.ssh/known_hosts file with no-longer-useful entries for a given SSH host-alias. Similarly, if you want to suppress the "unknown key" prompts, you can add:
StrictHostKeyChecking false
to a given host's configuration-stanza. Warning: the accorded convenience comes at the potential cost of exposing you to undetected man-in-the-middle attacks.
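Folded into the earlier <CUSTOMER>-dev sketch, those convenience-directives would look like:

Host <CUSTOMER>-dev
    HostName <DEV_HOST_ADDRESS>
    User <CUSTOMER_USER>
    ProxyJump <CUSTOMER>-bastion
    # Don't memorialize host-keys for this frequently-rebuilt node...
    UserKnownHostsFile /dev/null
    # ...don't prompt about its (inevitably) changing key (see the warning above)...
    StrictHostKeyChecking false
    # ...and keep its login-banner out of the scrollback
    LogLevel error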