Thomas H Jones II

Originally published at thjones2.blogspot.com

TIL: You Gotta Be Explicit

Started working on a new contract recently. This particular customer makes use of S3FS. To be honest, in the past half-decade, I've had a number of customers express interest in S3FS, but they've pretty much universally turned their noses up at it (for any number of reasons that I can't disagree with: trying to use S3 like a shared filesystem is kind of horrible).

At any rate, this customer also makes use of Ansible for their provisioning automation. One of their "plays" is designed to mount the S3 buckets via s3fs. However, the manner in which they implemented it seemed kind of jacked to me: basically, they set up a lineinfile-based play to add s3fs commands to the /etc/rc.d/rc.local file, then do a reboot to get the filesystems to mount up.
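
To illustrate (the bucket names, mountpoints, and options below are stand-ins, not the customer's actual values), the end result of that play is an /etc/rc.d/rc.local along the lines of:

#!/bin/bash
# /etc/rc.d/rc.local -- s3fs lines appended by the lineinfile play
# If either mount fails at boot, nothing downstream ever hears about it.
s3fs s3fs-build-bukkit:/RPMs /provisioning/repo -o allow_other,umask=0000,nonempty
s3fs s3fs-build-bukkit:/EXEs /provisioning/installer -o allow_other,umask=0000,nonempty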

It wasn't a great method to begin with, but, recently, their security people made a change to the IAM objects they use to enable access to the S3 buckets. It, uh, broke things. Worse, because of how they implemented the s3fs-related play, there was no error-trapping in their workflow. Jobs that relied on /etc/rc.d/rc.local having worked started failing with no real indication as to why (by contrast, when you pull a file directly from S3 rather than through an s3fs mount, it's pretty immediately obvious what's going wrong).

At any rate, I decided to see if there might be a better way to manage the s3fs mounts. So, I went to the documentation. I wanted to see if there was a way to make the mounts more "managed" by the OS such that, if one of them failed to mount, the OS would bring the automation to a screaming halt. Overall, if I think a long-running task is likely to fail, I'd rather it fail early in the process than after I've been waiting several minutes (or longer). So, I set about simulating how they were mounting S3 buckets with s3fs.

As far as I can tell, the normal use-case for mounting S3 buckets via s3fs is to do something like:

s3fs <bucket> <mount> -o <OPTIONS>
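
For example (bucket name, mountpoint, and options here are purely illustrative):

# Mount the whole "s3fs-build-bukkit" bucket at /provisioning/repo
s3fs s3fs-build-bukkit /provisioning/repo -o allow_other,umask=0000,nonempty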

However, they have their buckets cut up into "folders" and sub-folders and wanted to mount those individually. The s3fs documentation indicated both that you could mount individual folders and that you could do it via /etc/fstab. You simply need /etc/fstab entries that look sorta like:

s3fs-build-bukkit:/RPMs /provisioning/repo fuse.s3fs _netdev,allow_other,umask=0000,nonempty 0 0
s3fs-build-bukkit:/EXEs /provisioning/installer fuse.s3fs _netdev,allow_other,umask=0000,nonempty 0 0
s3fs-users-bukkit:/build /Data/personal fuse.s3fs _netdev,allow_other,umask=0000,nonempty 0 0
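
With entries like those in place, you don't need a reboot to exercise them; something like the following should do it:

# Mount everything in /etc/fstab of type fuse.s3fs...
sudo mount -a -t fuse.s3fs
# ...or mount a single entry by its mountpoint
sudo mount /provisioning/repo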

However, I was finding that, even though the mount-requests weren't erroring, they also weren't mounting. So, I hit up the almighty Googs and found an issue-report in the S3FS project that matched my symptoms. The issue ultimately linked to a (poorly-worded) FAQ entry. In short, I was used to implicit "folders" (ones that exist only by way of an S3 object with a slash-delimited key), but s3fs relies on explicitly-created "folders" (null objects whose key names end in /, such as would be created by doing aws s3api put-object --bucket s3fs-build-bukkit --key test-folder/). Once I explicitly created these trailing-slash null-objects, my /etc/fstab entries started working the way the documentation indicated they should have all along.
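
Put another way, the fix amounted to creating a zero-byte, trailing-slash object for each "folder" referenced in the fstab entries, along these lines (again, the bucket and key names match the illustrative fstab entries above):

# Explicitly create the "folder" objects s3fs expects
aws s3api put-object --bucket s3fs-build-bukkit --key RPMs/
aws s3api put-object --bucket s3fs-build-bukkit --key EXEs/
aws s3api put-object --bucket s3fs-users-bukkit --key build/

# After which the fstab entries mount cleanly
sudo mount -a -t fuse.s3fs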
