In the Bearcam Companion application I am developing, I have been linking images directly to Explore.org Snapshots. Now I want to store the images in S3. This will make it easier to work with Rekognition and will eventually enable me to create training sets for machine learning model development. This post describes my experience working with Amplify Storage.
## Add S3 resource
Adding an S3 bucket in the Amplify CLI is as simple as:
```bash
amplify add storage
```
The command will walk you through some options, for example, storage type and access permissions (see the documentation for details). You can also set up S3 Lambda triggers, but I plan to trigger my Lambdas in a different way (more on that in another post to come).
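The exact prompts vary by CLI version, but the walkthrough looks something like this (the answers shown are illustrative, not necessarily my settings):

```text
? Please select from one of the below mentioned services: Content (Images, audio, video, etc.)
? Please provide a friendly name for your resource: bearcamImages
? Please provide bucket name: bearcam-images
? Who should have access: Auth and guest users
? What kind of access do you want for Authenticated users? create/update, read, delete
? What kind of access do you want for Guest users? read
? Do you want to add a Lambda Trigger for your S3 Bucket? No
```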
Once the setup was complete, I did an `amplify push` to update my cloud resources. Now I can see the storage in Amplify Studio Storage:
## Add S3 file object to Images
I already have an Images data model from before. Currently, the model has a `url`, a `date` and a 1:many relationship with Objects. Now I want to reference an object in the new S3 bucket, so I added an S3 file object to Images. I decided to keep `url` for future reference.
In Amplify Studio Data modeling, I added a new model, which I called S3Object. This model contains a `bucket`, `region` and `key`.
I then added `file`, of type S3Object, to the Images data model:
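Under the hood, these Studio changes land in the project's GraphQL schema. I'm reconstructing it here rather than pasting it verbatim, so treat the field types and directives as assumptions:

```graphql
# S3Object is embedded in Images, so bucket/region/key
# are stored inline with each Images record
type S3Object {
  bucket: String!
  region: String!
  key: String!
}

type Images @model {
  id: ID!
  url: String
  date: AWSDate
  file: S3Object
  Objects: [Objects] @hasMany
}
```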
## UI to upload an Image
Next, I modified the Add Image interface I created last time. It should upload an image to S3, in addition to saving the relevant information in the Images model. Add Image already has a form to input a URL and date, then save the data to Images. I added a file selector and image preview to my client code:
<input type="file" align="center" onChange={uploadFromFile}></input>
<img src={uploadImage} width={200} alt="Selected file" />
The function `uploadFromFile` sets a couple of state variables, one for the `uploadFile` and one for a URL to the file, `uploadImage`, to display in the `<img>` above:
```jsx
async function uploadFromFile(e) {
  const file = e.target.files[0];
  setUploadFile(file);                       // keep the File for the S3 upload
  setUploadImage(URL.createObjectURL(file)); // local object URL for the preview
}
```
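For context, `uploadFile` and `uploadImage` are ordinary React state; the declarations would look something like this (the initial values are my assumption):

```jsx
import { useState } from "react";

// Inside the component:
const [uploadFile, setUploadFile] = useState(null);   // File chosen by the user
const [uploadImage, setUploadImage] = useState(null); // object URL for the preview
```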
Once the image is selected and the URL and date are entered, clicking the Save button will fire `buttonOnClick()`. This function will upload the image to the S3 bucket, then create an entry in the Images data model. Writing to S3 is pretty easy with `Storage.put()` from `aws-amplify`:
```jsx
async function buttonOnClick() {
  try { // Upload image to S3
    const result = await Storage.put(uploadFile.name, uploadFile);
    console.log("Stored", uploadFile.name, "to", result);
  } catch (error) {
    console.log("Error uploading file: ", error);
  }
  try { // Create new entry in Images (awaited so the catch can see failures)
    await saveImage(textURLValue, textDateValue, uploadFile.name);
  } catch (error) {
    console.log("Error saving image: ", error);
  }
}
```
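These snippets assume the usual Amplify imports and configuration at the top of the file, along these lines:

```jsx
import { Amplify, Storage, DataStore } from "aws-amplify";
import { Images } from "./models"; // generated by Amplify codegen
import awsExports from "./aws-exports";

Amplify.configure(awsExports);
```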
In the code above, creating the Images record is handled by `saveImage()`. This function uses `DataStore.save()` from `aws-amplify`:
```jsx
async function saveImage(url, date, key) {
  await DataStore.save(
    new Images({
      url: url,
      date: date,
      file: {
        bucket: awsExports.aws_user_files_s3_bucket,
        region: awsExports.aws_user_files_s3_bucket_region,
        key: key
      }
    })
  );
}
```
The `bucket` and `region` information comes from constants added to `awsExports` by the `amplify add storage` command earlier.
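In other words, the generated aws-exports.js now includes entries like these (the bucket name and region shown are placeholders, not my actual values):

```js
// aws-exports.js (generated by the Amplify CLI; do not edit by hand)
const awsmobile = {
  // ...rest of the generated configuration...
  aws_user_files_s3_bucket: "bearcamcompanion-storage-dev", // placeholder
  aws_user_files_s3_bucket_region: "us-west-2",             // placeholder
};

export default awsmobile;
```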
## Adding and viewing images
After implementing everything above, I can add images through the Bearcam Companion app:
I can view the Images data in the Content section of Amplify Studio:
A detailed view of an entry shows the new S3Object information under `file`:
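One note on viewing images in the app itself: the stored key has to be turned back into a URL before it can go in an `<img>` tag. I'm not showing my rendering code here, but with Amplify that's a call to `Storage.get()`, which returns a temporary pre-signed URL. A minimal sketch (`getImageUrl` is a hypothetical helper, not code from the app above):

```jsx
// Resolve an Images record's S3 key to a displayable, pre-signed URL
async function getImageUrl(image) {
  return await Storage.get(image.file.key); // URL expires after a short time
}
```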
## Conclusion
Amplify makes it very easy to add storage to your web application. I used the CLI to add an S3 bucket and updated my data models using Amplify Studio. I modified the Bearcam Companion React application to upload a file to S3 and save the metadata to a table in DynamoDB, using `Storage` and `DataStore` from `aws-amplify`.
Next time I will write about triggering a Lambda function to run the Rekognition object detector whenever a new image is added to the Images table in DynamoDB. Follow along here and on Twitter (bluevalhalla).