INTRODUCTION
Imagine a system that can analyze live video feeds in real time, interpret scenes, and respond intelligently to questions about the environment—just like a virtual assistant with eyes. This is the potential of combining cutting-edge technologies like OpenCV for video processing and Google's Gemini vision model, in this case its "gemini-1.5-flash-latest" variant.
In this article, I will guide you through building a Real-Time Object Detection System that uses live video streaming and AI-powered scene analysis to deliver insightful, context-aware responses. We'll deploy the application on AWS EC2, setting the stage for scalability and real-world use, while employing GitHub Actions for automated CI/CD to ensure a seamless update pipeline.
By the end of this tutorial, you'll have a fully functional AI-powered system ready for deployment, with the confidence to expand and customize it for various use cases.
PROJECT STRUCTURE
project/
├── app.py               # Flask application code
├── requirements.txt     # Python dependencies
├── templates/
│   └── index.html       # Frontend UI
└── .env                 # Environment variables (API keys, etc.)
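The exact pins live in the repository, but for this stack requirements.txt will contain roughly the following (the package list is inferred from the components below, so treat it as an assumption):
flask
opencv-python
langchain-google-genai
python-dotenv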
Core Components
- Real-Time Video Capture (OpenCV): The WebcamCapture class in app.py handles video streaming:
self.stream = cv2.VideoCapture(0) # Open the default webcam
This ensures efficient, thread-safe frame capture and processing.
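As a rough sketch of what such a class can look like (the repository's implementation may differ in its details), a background thread keeps only the latest frame while a lock makes reads thread-safe:

import threading
import cv2

class WebcamCapture:
    def __init__(self, index=0):
        self.stream = cv2.VideoCapture(index)  # Open the default webcam
        self.lock = threading.Lock()
        self.frame = None
        self.running = True
        # Capture frames on a background thread so requests never block on camera I/O.
        self.thread = threading.Thread(target=self._update, daemon=True)
        self.thread.start()

    def _update(self):
        # Keep only the most recent frame; stale frames are overwritten.
        while self.running:
            ok, frame = self.stream.read()
            if ok:
                with self.lock:
                    self.frame = frame

    def read(self):
        # Return a copy so callers cannot mutate the shared frame.
        with self.lock:
            return None if self.frame is None else self.frame.copy()

    def stop(self):
        self.running = False
        self.stream.release()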
- AI-Powered Object Detection (Google Gemini): Using the Gemini model, we analyze frames for real-time scene understanding:
self.model = ChatGoogleGenerativeAI(model="gemini-1.5-flash-latest")
response = self.chain.invoke({"prompt": prompt, "image_base64": image_base64})
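Here is a hedged sketch of what that frame analysis can look like with LangChain's Google Generative AI integration; the repository's chain may be wired differently, and analyze_frame is an illustrative name. It assumes GOOGLE_API_KEY is set in the environment:

import base64

import cv2
from langchain_core.messages import HumanMessage
from langchain_google_genai import ChatGoogleGenerativeAI

model = ChatGoogleGenerativeAI(model="gemini-1.5-flash-latest")

def analyze_frame(frame, prompt):
    # Encode the OpenCV frame (a BGR numpy array) as a base64 JPEG.
    ok, buffer = cv2.imencode(".jpg", frame)
    image_base64 = base64.b64encode(buffer.tobytes()).decode("utf-8")
    # Send text and image together as a single multimodal message.
    message = HumanMessage(content=[
        {"type": "text", "text": prompt},
        {"type": "image_url",
         "image_url": {"url": f"data:image/jpeg;base64,{image_base64}"}},
    ])
    return model.invoke([message]).content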
- Flask Backend: The Flask application provides endpoints for video streaming, AI queries, and system status checks:
/video_feed: Streams live video.
/process_query: Handles AI-powered analysis based on user input and video frames.
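A simplified sketch of how these endpoints can be wired up, reusing the WebcamCapture class and the analyze_frame helper from the sketches above (the repository's version will differ in details such as error handling and the status check):

import time

import cv2
from flask import Flask, Response, jsonify, render_template, request

app = Flask(__name__)
camera = WebcamCapture()

def generate_frames():
    # Yield JPEG frames in multipart format so the browser renders an MJPEG stream.
    while True:
        frame = camera.read()
        if frame is None:
            time.sleep(0.05)
            continue
        ok, buffer = cv2.imencode(".jpg", frame)
        if ok:
            yield (b"--frame\r\nContent-Type: image/jpeg\r\n\r\n"
                   + buffer.tobytes() + b"\r\n")

@app.route("/")
def index():
    return render_template("index.html")

@app.route("/video_feed")
def video_feed():
    return Response(generate_frames(),
                    mimetype="multipart/x-mixed-replace; boundary=frame")

@app.route("/process_query", methods=["POST"])
def process_query():
    prompt = request.json.get("prompt", "")
    return jsonify({"response": analyze_frame(camera.read(), prompt)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)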
- Frontend UI: The index.html file provides a responsive web interface for interacting with the system. It captures user queries and displays real-time AI responses.
PREREQUISITES
- An AWS account.
- A registered domain name (e.g., example.com).
- A Google Cloud account or OpenAI account.
- GitHub Actions configured in your repository.
- Basic knowledge of SSH and Linux command-line tools.
APPLICATION CLONING & DEPLOYMENT
Step 1: Clone the Repository, Generate the API Key & Push the Application Files to GitHub
A. Clone the repository
$ git clone https://github.com/Abunuman/Real-Time-ODS.git
$ cd Real-Time-ODS
B. Generate your API key and add it to a .env file
i. Create a .env file, either manually from the file explorer pane of your text editor (I used VS Code), or on the terminal by running:
$ touch .env
Then add these lines to the .env file:
GOOGLE_API_KEY=your_google_api_key
OPENAI_API_KEY=your_openai_api_key
FLASK_DEBUG=True
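These values only take effect if app.py loads them at startup; assuming the app uses python-dotenv (listed in the sketched requirements above), that is a two-liner:

# Load variables from .env into the process environment at startup.
from dotenv import load_dotenv
load_dotenv()  # ChatGoogleGenerativeAI then picks up GOOGLE_API_KEY automatically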
ii. Log into Google Cloud and follow these steps to generate your API key.
a. Navigate to the APIs & Services section.
b. Click on Credentials, then follow the steps below.
Click Create Credentials > API Key, and the API key is generated. Remember to note the name of your API key; you can also give it a name during the process.
Copy the generated API key, go back to your .env file, and replace your_google_api_key with the key you just copied.
c. Enable Gemini API
Search for Gemini API and click on ENABLE
Confirm that your API key is listed under the Metrics and Credentials sections of the enabled Gemini API.
iii. Create a .gitignore file and add .env to it so that the file is not pushed to GitHub.
N.B.: Standard practice is to keep secrets and environment variables out of public repositories; files listed in .gitignore are skipped when pushing to GitHub.
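From the project root, both creating the file and adding the entry can be done in one command:
$ echo ".env" >> .gitignore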
C. Push to the Repository.
i. Create a GitHub repository with the application name and run the commands below to push the code to GitHub:
$ git init
$ git add .
$ git commit -m "first commit"
$ git branch -M main
$ git remote add origin https://github.com/Abunuman/repository-name.git
$ git push -u origin main
N.B.: Change repository-name to your repository name.
Step 2: Set up GitHub Actions Environment Secrets
Configure the deployment secrets and environment variables needed for the project.
Deploying the project requires that your deployment secrets, as well as the environment variables you added locally to .env, are available in the GitHub Actions environment. This gives the workflow access to the EC2 instance used for deployment and ensures the necessary environment variables exist in the deployment environment.
i. Navigate to Settings in your repository
ii. Click on Secrets and Variables > Actions
iii. Add your secrets and variables as shown below:
![Secrets](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7t6vyolkj2jyq85oswh7.png)
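For this project, the workflow further below expects at least EC2_USER (e.g. ubuntu), EC2_HOST (your instance's public IP or DNS name), and PEM_FILE (the full contents of your .pem key). If your application reads GOOGLE_API_KEY on the server, add it as a secret too, or place it in a .env file on the instance.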
Step 3: Setting Up AWS EC2 Instance
i. Launch an EC2 Instance
- Use the AWS Management Console to launch an EC2 instance (e.g., Ubuntu 22.04).
- Select an instance type (e.g., t2.micro for free tier users).
- Create and download a key pair (.pem file) for SSH access.
Create a new key pair or use an existing one.
If you are creating a new key pair, click on create key pair and give it a name of your choice.
Select Key Pair type as RSA
File format as .pem
The key pair is automatically downloaded to your system.
- Configure Security Groups
Allow the following inbound rules:
i. HTTP (port 80): For serving your application.
ii. HTTPS (port 443): For secure access.
iii. SSH (port 22): For management access.
- Click on Launch instance and allow the instance to be fully launched.
Now your instance is ready to use once the status shows “Running”.
ii. Configure the key pair (.pem key) for SSH access
For macOS users or Linux users with a bash terminal, configure your key pair for SSH access as follows:
a. Open the downloaded .pem key using VS Code or Xcode.
b. On your terminal, navigate to the .ssh directory from your home directory (~):
$ cd .ssh
c. Create a .pem file in the .ssh directory using the nano or vim text editor; I will be using nano in this tutorial.
Install nano if you don't have it installed.
For macOS users:
$ brew install nano
For Linux users:
$ sudo apt install nano
Having installed it, create the .pem file in the .ssh directory using nano.
Ensure the file to be created bears the exact name of your .pem file.
$ sudo nano name_of_pem.pem
Then copy the contents of the already opened .pem file and paste them into the new .pem file in the .ssh directory.
Press Ctrl+X, then Y, then Enter to save.
d. Change the .pem file permissions so only you can read it:
$ chmod 400 name_of_pem.pem
iii. Access the Instance - SSH into your EC2 instance:
Click on the Instance ID. Once the instance is in the running state, select the Connect option.
On the Connect page, go to SSH Client.
Then copy the last command on the page, which looks like this:
ssh -i path/to/key.pem ubuntu@<ec2-public-ip>
Paste this on your terminal and press enter. You should connect seamlessly.
For Windows Users
- Windows Setup
Open CMD on your Windows machine and navigate to the directory where the .pem file is stored.
Ideally, you can run the copied SSH command from this directory and connect to EC2. However, the command sometimes fails with a security permissions error, in which case you have to change the permissions on the .pem file. To do that, follow the steps below:
- Locate the .pem file folder, right-click on the file, and select Properties.
- Go to the Security tab, then the Advanced tab.
- Click Disable inheritance.
- The Advanced options also show other users with permissions on the .pem file; remove the permissions for all other users.
- Add the user you are connecting to EC2 with, if it is not already in the user list, and enable all permissions for this user.
With these steps, you should no longer encounter the error. Run the SSH command from the CMD prompt; once the permissions are fixed, it will connect successfully, and you can run commands on the EC2 instance from Windows CMD.
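If you prefer to stay on the command line, the same permission fix can usually be applied with icacls (replace name_of_pem.pem with the name of your key file):
icacls name_of_pem.pem /inheritance:r
icacls name_of_pem.pem /grant:r "%USERNAME%":R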
iv. Install Dependencies - Update the package list and install the necessary packages.
Having connected to your EC2 instance via SSH, install dependencies on EC2.
On your connected terminal, run the following commands:
$ sudo apt update
$ sudo apt install -y python3 python3-pip nginx
Check the version of Python 3 installed (Ubuntu 22.04 ships Python 3.10; make sure the version matches what your application targets):
$ python3 --version
Step 4: Deploying the Application
Set Up the Application
Transfer app.py, index.html, and requirements.txt to the EC2 instance:
$ scp -i path/to/key.pem app.py requirements.txt index.html ubuntu@<ec2-public-ip>:~/
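One caveat: Flask's render_template looks for index.html inside a templates/ directory, so after copying, move the file into place on the instance:
$ ssh -i path/to/key.pem ubuntu@<ec2-public-ip> "mkdir -p ~/templates && mv ~/index.html ~/templates/"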
Step 5: Configuring GitHub Actions for CI/CD
Create a workflow file in your repository by adding .github/workflows/main.yml. Note that GitHub secrets are plain strings rather than files, so the workflow below writes the PEM_FILE secret to a temporary key file and authenticates with ssh -i:
name: Deploy to AWS EC2

on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Deploy application
        env:
          EC2_USER: ${{ secrets.EC2_USER }}
          EC2_HOST: ${{ secrets.EC2_HOST }}
        run: |
          # Secrets are strings, not files: write the key out before using it.
          echo "${{ secrets.PEM_FILE }}" > key.pem
          chmod 600 key.pem
          scp -i key.pem -o StrictHostKeyChecking=no app.py requirements.txt index.html $EC2_USER@$EC2_HOST:~/
          ssh -i key.pem -o StrictHostKeyChecking=no $EC2_USER@$EC2_HOST << 'EOF'
          mkdir -p ~/templates && mv -f ~/index.html ~/templates/
          pip3 install -r requirements.txt
          pkill -f app.py || true
          nohup python3 app.py > app.log 2>&1 < /dev/null &
          EOF
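Once the workflow has run, you can sanity-check the deployment from your machine. This assumes app.py binds to 0.0.0.0:5000 and that port 5000 is open in the security group; if you put nginx in front on port 80, query that port instead:
$ curl http://<ec2-public-ip>:5000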
CONCLUSION
In this tutorial, we embarked on a comprehensive journey to build and deploy a real-time object detection system that integrates OpenCV for live video capture and Google's ChatGoogleGenerativeAI for intelligent scene analysis. From configuring the application locally to deploying it on AWS EC2 with an automated CI/CD pipeline, we covered the essential steps to transform your idea into a functional and scalable solution.
This project highlights the power of combining technologies like Flask, OpenCV, and AI to solve real-world problems while following best practices for cloud deployment. By following these steps, you've not only deployed a robust AI-powered system but also set up a scalable, efficient CI/CD pipeline.