DEV Community

Ed Miller for AWS Community Builders

Bearcam Companion: Amplify Studio

I finally had a chance to start working on the Bearcam Companion app over the weekend. I had described my plans for the app in a previous post, where I defined my Minimum Lovable Product (MLP) as follows:

  • Grab a frame from the webcam
  • Detect the bears (and possibly other animals) in the frame
  • Display the frame and bounding boxes on a webpage
  • Allow users (and eventually an ML model) to identify the bears (and edit detection errors)
  • Provide the top identifications for each bear in the frame on the webpage
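One practical detail behind the "display the frame and bounding boxes" step: object detectors typically report boxes as fractions of the frame (that's how services like Amazon Rekognition return them), so drawing them on a webpage means scaling to the displayed image size. A quick sketch of that math, as a hypothetical helper of my own (not part of any Amplify or Rekognition API):

```python
def to_pixel_box(box, frame_width, frame_height):
    """Convert a normalized bounding box (fractions of the frame,
    keys as Rekognition reports them) to pixel coordinates."""
    return {
        "left": round(box["Left"] * frame_width),
        "top": round(box["Top"] * frame_height),
        "width": round(box["Width"] * frame_width),
        "height": round(box["Height"] * frame_height),
    }

# A 1280x720 webcam frame with a box over the right side of the image:
print(to_pixel_box({"Width": 0.25, "Height": 0.5, "Left": 0.6, "Top": 0.1},
                   1280, 720))
# {'left': 768, 'top': 72, 'width': 320, 'height': 360}
```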

Since I haven't done much web application work recently, I decided to start there. I had watched a few tutorials on getting started with AWS Amplify using the Amplify CLI and various frontend frameworks. To get up and running quickly, I decided to focus on the backend and gave AWS Amplify Studio a try.

AWS Amplify provides a set of tools that make it easier to configure web services and connect them to a frontend UI. AWS Amplify Studio is a visual interface for doing many of the same things. There are Getting Started tutorials which show how to build a basic todo list app or blog using various frontend frameworks.

Data Model

I started out by defining the basic data model I needed. As a start, I need an Images model. It can be pretty basic for now: I decided to include a URL to the image and the date of the image (which I can use to order them).

Once I have images, I plan to use an object detection model to find the bears. I will need to store the information for each detected object: a label (e.g. 'Bear'), a confidence level, and the bounding box information: width, height, left (x) and top (y). I called this model Objects. Since one image can have multiple bears, I need to create a 1:n relationship between Images and Objects.
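Behind the scenes, Amplify Studio turns these visual data models into a GraphQL schema in the project. I haven't pulled the generated schema yet, but based on the fields above it should look roughly like this (directives simplified; the generated version adds indexes for the relationship):

```graphql
# Illustrative sketch of the schema Amplify Studio generates from these models
type Images @model {
  id: ID!
  url: String
  date: AWSDateTime
  Objects: [Objects] @hasMany
}

type Objects @model {
  id: ID!
  label: String
  confidence: Float
  width: Float
  height: Float
  left: Float
  top: Float
  imagesID: ID!
}
```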

This is what my data models look like in Amplify Studio:

Data Model

Content

I will need some data in my models for testing my application. I'm not ready to implement adding images from the application yet (I don't have an app yet!), so I'll create some manually. Amplify Studio has a Content manager to make this easier. It can auto-generate dummy seed data, but I entered mine by hand.

Images

To keep my data as real as possible, I used some images from the Explore Snapshots page for the Brooks Falls Brown Bears. I manually added 6 images to my Images model by copying the URL and the date from the snapshot:

Images Data Model

Objects

When editing the Images data, I can also add Objects, but first I need to figure out the object data details. The easiest way to start was the Amazon Rekognition demo, since Bear is a recognized label. Click the Get Started with Amazon Rekognition button, which takes you to the AWS Console. From there, click the Try Demo button. On the demo page, you can enter a URL and get back the response:

Amazon Rekognition of Bears

The response is provided in JSON, in the same format an API call to Rekognition would return. Here's an example from an image containing two bears:

{
    "Labels": [
        {
            "Name": "Bear",
            "Confidence": 96.10771942138672,
            "Instances": [
                {
                    "BoundingBox": {
                        "Width": 0.29982757568359375,
                        "Height": 0.6181714534759521,
                        "Left": 0.6161065697669983,
                        "Top": 0.14535307884216309
                    },
                    "Confidence": 96.10771942138672
                },
                {
                    "BoundingBox": {
                        "Width": 0.23141272366046906,
                        "Height": 0.5379980206489563,
                        "Left": 0.33176615834236145,
                        "Top": 0.011607191525399685
                    },
                    "Confidence": 72.4666976928711
                }
            ]
        }
    ],
    "LabelModelVersion": "2.0"
}
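Transcribing these numbers by hand is error-prone, so I can imagine a small helper that flattens a response like the one above into rows matching my Objects fields. A sketch in plain Python (not tied to Amplify; `imagesID` mirrors the foreign key Amplify creates for the 1:n relationship):

```python
def response_to_objects(response, images_id):
    """Flatten a Rekognition-style DetectLabels response into one
    row per detected instance, matching the Objects model fields."""
    rows = []
    for label in response["Labels"]:
        for instance in label.get("Instances", []):
            box = instance["BoundingBox"]
            rows.append({
                "label": label["Name"],
                "confidence": instance["Confidence"],
                "width": box["Width"],
                "height": box["Height"],
                "left": box["Left"],
                "top": box["Top"],
                "imagesID": images_id,
            })
    return rows

# Example with one detected bear (values rounded from the demo output):
demo = {
    "Labels": [
        {
            "Name": "Bear",
            "Confidence": 96.1,
            "Instances": [
                {
                    "BoundingBox": {"Width": 0.30, "Height": 0.62,
                                    "Left": 0.62, "Top": 0.15},
                    "Confidence": 96.1,
                }
            ],
        }
    ]
}
# One dict per bear, ready to enter into the Objects model:
print(response_to_objects(demo, "image-1"))
```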

I may use Rekognition when I first implement an automatic flow for bear detection. For now, I can take the results from the Rekognition demo and plug them into one or more Objects for each of the Images:

Images Data Model Edit

After doing this for each of the 6 images, I ended up with 12 objects in my Objects model.

UI Library

Next I worked on the design elements for the UI. AWS provides some example UI elements developed in Figma, which you can duplicate and use in your projects. I don't need anything special at this point, so I started with the example components and synced them with my Amplify UI Library.

StandardCard

I want to have a list of recent images across the bottom of the screen. I used the StandardCard imported from the Figma examples and attached it to my Images data model in Amplify Studio. I connected the image to images.url and used images.date for the first line of the text group. I hid the other two lines in the text group since I don't need them. My StandardCard looks like this:

UI Library StandardCard

As you can see, Amplify Studio is showing the StandardCard with real data from my data model. You can click Shuffle preview data to see how the card looks with different data, which is pretty handy.

FrameCollection

I want to have multiple StandardCards to show the most recent images. From the StandardCard configuration I can click Create collection. In the collection configuration I can control the layout and add features like search and pagination. I chose a list from left to right with pagination and a page size of 4. I also set a sort condition to see the most recent images first. Here's my FrameCollection configuration:

UI Library FrameCollection

Again, the component is shown with data from my data model, including the sort condition and pagination.

Conclusion

That's all I need in the backend for now. It was quite simple to get started with AWS Amplify Studio: I set up a data model, connected it to a simple UI component (StandardCard), and created and configured a collection (FrameCollection). Next, I need to pull the backend configuration to my development machine and get started on the frontend.

I'll save that for next time...
