In my last post, I set up the backend for the Bearcam Companion app using AWS Amplify Studio. This time I'll write about the frontend code and connecting it to the backend using the Amplify CLI.
There are a variety of frontend frameworks to choose from. Since I am building a web app using AWS Amplify and I am familiar with JavaScript, I was able to narrow things down considerably. In the end, I decided on React (mainly because I found that most of the AWS Amplify examples use React).
Check out Amplify Getting Started for React to learn the basics.
Setup
I started off with an empty React app (you can change the name from myapp to whatever you want to call your app):
npx create-react-app@latest myapp
cd myapp
I already had the Amplify CLI installed from a previous tutorial, so I just needed to pull my project. I got the appropriate command from Amplify Studio by clicking the Local setup instructions link near the top-right of the Studio page. The command will look something like this:
amplify pull --appId <app-ID> --envName <environment>
The <app-ID> will be filled in for you, and you can select between your environments (I only have a staging environment so far).
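Pulling the project also generates the backend configuration file src/aws-exports.js. Here is a minimal sketch (based on the standard Amplify Getting Started wiring; details may vary by library version) of how src/index.js points the app at that configuration:

// src/index.js -- sketch of the standard Amplify wiring
import React from 'react';
import { createRoot } from 'react-dom/client';
import { Amplify } from 'aws-amplify';
import awsExports from './aws-exports'; // generated by amplify pull
import App from './App';

// Tell the Amplify libraries about the pulled backend (API, DataStore, etc.)
Amplify.configure(awsExports);

createRoot(document.getElementById('root')).render(<App />);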
App
I followed various tutorials to connect my React frontend with the Amplify backend. Once I had a basic setup, I edited App.js (under src/App.js) to add a FrameView. This will be the main view for the Bearcam Companion app. I needed to import it into App.js and add the JSX in the function's return():
import FrameView from './FrameView';

function App() {
  return (
    <div className="App">
      <h2>Bearcam Companion</h2>
      <FrameView/>
    </div>
  );
}

export default App;
Frame View
In FrameView I want to use the FrameCollection I built in Amplify Studio to show the recent video frames in my Images table. I already connected the FrameCollection component to the data model using Amplify Studio. The code was pulled down when I did the amplify pull. In fact, all the components from the original Figma examples, plus the ones I created, appear under src/ui-components. Here's my initial FrameView code, including the FrameCollection component:
import { FrameCollection } from './ui-components'

export default function FrameView () {
  return(
    <div>
      <FrameCollection width={"100vw"} itemsPerPage={4} />
    </div>
  )
}
Note: itemsPerPage provides an easy way to override how many images you want to include in the collection.
View in the Browser
At this point I can start the app with npm:
npm start
Now I can view my app in a browser (I'm using Chrome) at http://localhost:3000/. So far it looks like this:
The main point of FrameView is to display a frame (FrameCollection will be used to select which frame). I also want to be able to draw the bounding boxes from the Objects data model on the frame. First, I'll work on displaying and selecting a frame.
Add the Frame Image
I added an <img> into the FrameView, initially hardcoding the image source to one of the images from my Amplify Content set. Now the app is starting to take shape:
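In code, the hardcoded version of FrameView looked roughly like this. It's a sketch: the image URL is a placeholder standing in for one of my Content files, and the <img> gets an id of refImage so I can reference it later:

import { FrameCollection } from './ui-components'

export default function FrameView () {
  return(
    <div>
      {/* Hardcoded frame for now -- the src is a placeholder for a real Content URL */}
      <img id="refImage" width="100%" alt="bearcam frame"
        src="https://example.com/frames/bearcam-frame.jpg" />
      <FrameCollection width={"100vw"} itemsPerPage={4} />
    </div>
  )
}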
Select a Frame from the FrameCollection
I added an onClick event to the FrameCollection using the following code in FrameView.js (see this page for more info):
<FrameCollection width={"100vw"} itemsPerPage={4}
  overrideItems={({ item, index }) => ({
    onClick: () => {updateFrame(item)}
  })} />
Then I created updateFrame, which updates the image source:
function updateFrame(item) {
  document.getElementById("refImage").src = item.url
}
Now when I click on an image in the FrameCollection, my main frame view updates to that image.
Draw the Bounding Boxes
I still need to add the bounding boxes on the image. My first thought was to use the HTML Canvas element. I added a <canvas> where I had the <img> element and hid the <img>. Since the browser already took care of loading the <img>, I didn't need to worry about loading logic. I could reference it with document.getElementById and draw it on the canvas. I used the image.id to look up all the bounding boxes for that image in Objects with a line like this:
const boxes = await DataStore.query(Objects, c => c.imagesID("eq", imageID));
Now I iterated through boxes and drew each one onto the <canvas>. I ended up with something like this:
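A sketch of this canvas approach is below. The refCanvas id and the normalized left/top/width/height fields on the Objects records are assumptions for illustration, not the actual data model:

// Sketch of the canvas approach (assumed element ids and box fields)
import { DataStore } from 'aws-amplify';
import { Objects } from './models';

async function drawBoxes(imageID) {
  const img = document.getElementById("refImage");
  const canvas = document.getElementById("refCanvas"); // assumed canvas id
  const ctx = canvas.getContext("2d");

  // Size the canvas to the hidden image and draw the frame itself
  canvas.width = img.naturalWidth;
  canvas.height = img.naturalHeight;
  ctx.drawImage(img, 0, 0);

  // Look up the bounding boxes for this image
  const boxes = await DataStore.query(Objects, c => c.imagesID("eq", imageID));

  // Assumes each record stores normalized (0..1) left/top/width/height
  ctx.strokeStyle = "red";
  ctx.lineWidth = 2;
  boxes.forEach((box) => {
    ctx.strokeRect(
      box.left * canvas.width,
      box.top * canvas.height,
      box.width * canvas.width,
      box.height * canvas.height
    );
  });
}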
I wasn't happy with this solution, for two main reasons:
- It's really hard to make this look good.
- I can't easily handle hover or click actions for the boxes, which will be important when I want additional information or click to edit.
There Must Be a Better Way
For inspiration, I looked back at the demo for Amazon Rekognition (which I used to get bounding boxes for my test content). The Rekognition demo uses a relatively positioned <div> with styled borders for each box. This looks much better (and can be changed with CSS) and should make it easier to handle user actions.
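As a rough sketch of that approach (the component name, frameUrl prop, and box fields are the same illustrative assumptions as above; the real markup will come from working through it next time), each box becomes an absolutely positioned <div> inside a relatively positioned wrapper around the image:

// Hypothetical component sketching the div-based bounding boxes
function BoundingBoxes({ frameUrl, boxes }) {
  return (
    <div style={{ position: "relative", display: "inline-block" }}>
      <img id="refImage" src={frameUrl} alt="bearcam frame" width="100%" />
      {boxes.map((box) => (
        <div
          key={box.id}
          onClick={() => console.log("clicked box", box.id)} // boxes can now respond to events
          style={{
            position: "absolute",
            border: "2px solid red", // styling is plain CSS now
            left: `${box.left * 100}%`,
            top: `${box.top * 100}%`,
            width: `${box.width * 100}%`,
            height: `${box.height * 100}%`,
          }}
        />
      ))}
    </div>
  );
}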
I will dive into this next time...