AneeqMalik

Flutter Object Detection App + YOLOv5 Model

Flutter is a popular open-source framework that allows developers to create high-performance, cross-platform applications for both Android and iOS devices. With its ease of use and flexibility, it has become a go-to choice for developers building mobile apps.

One of the most popular use cases for mobile applications is object detection, where an application can identify and classify objects in images or videos. YOLOv5 is an advanced object detection algorithm that has gained popularity in recent years for its high accuracy and speed.

In this post, we will explore how to integrate YOLOv5 with Flutter to create an object detection application.

1. Setting Up the Environment

To get started, you'll need to set up your development environment with Flutter and Python installed on your computer. After that, open VS Code and initialize a new Flutter project.

flutter create object_detection

Wait for the project to be created.
Wait for the project to be created.
Now, open https://pub.dev/packages/flutter_pytorch and go to the Installing tab, where you can find the latest package version. At the time of writing, the version is 1.0.1.
Run this command:
Run this command:

With Flutter:
$ flutter pub add flutter_pytorch
This will add a line like this to your package's pubspec.yaml (and run an implicit flutter pub get):

dependencies:
  flutter_pytorch: ^1.0.1

Alternatively, your editor might support flutter pub get. Check the docs for your editor to learn more.

Import it
Now in your Dart code, you can use:

import 'package:flutter_pytorch/flutter_pytorch.dart';

2. Preparing the Model

Before you can use YOLOv5 in your Flutter application, you'll need to train the model on your specific dataset. You can use an existing dataset or create your own dataset to train the model.
For this post I am using the pretrained YOLOv5 model available at https://github.com/ultralytics/yolov5. Since we are performing object detection, we need to convert the pretrained model weights to TorchScript format.

For classification models (shown for reference; not needed for this tutorial):

import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

# Load the saved model on the CPU and switch it to inference mode.
model = torch.load('model_scripted.pt', map_location="cpu")
model.eval()
# Trace the model with a dummy input of the expected shape (1, 3, 224, 224).
example = torch.rand(1, 3, 224, 224)
traced_script_module = torch.jit.trace(model, example)
# Optimize the traced model for mobile and save it for the lite interpreter.
optimized_traced_model = optimize_for_mobile(traced_script_module)
optimized_traced_model._save_for_lite_interpreter("model.pt")

For object detection (YOLOv5):

!python export.py --weights <path to your model weights> --include torchscript --img 640 --optimize

Example:

!python export.py --weights yolov5s.pt --include torchscript --img 640 --optimize
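The export.py script lives at the root of the Ultralytics YOLOv5 repository, so if you are starting from scratch a typical sequence looks like the sketch below (depending on the YOLOv5 version, the exported file is named yolov5s.torchscript or yolov5s.torchscript.pt; that file is what we will later copy into the Flutter project's assets):

# Clone the YOLOv5 repo and install its requirements.
git clone https://github.com/ultralytics/yolov5
cd yolov5
pip install -r requirements.txt
# Export the small pretrained checkpoint to TorchScript.
python export.py --weights yolov5s.pt --include torchscript --img 640 --optimize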

3. Creating a Basic UI for the App

Open the lib folder of the app; there you will see the main.dart file, which is the entry point of your Dart code.
Run the app in debug mode on an emulator or on a connected phone.

emulator -avd "Your emulator name"


Now run your app through VS Code in debug mode, as it offers hot reload and hot restart.

Your initial app should look like the default Flutter counter demo.

Creating the UI design
To design the app, we will follow a WBS (Work Breakdown Structure): first build the UI, then integrate the ML model.
Go to the main.dart file and replace all the code with the following:

import 'package:flutter/material.dart';

void main() {
  runApp(const MyApp());
}

class MyApp extends StatelessWidget {
  const MyApp({super.key});

  // This widget is the root of your application.
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'OBJECT DETECTOR',
      debugShowCheckedModeBanner: false,
      theme: ThemeData(
        // This is the default theme of your application.
        primarySwatch: Colors.blue,
      ),
    );
  }
}


The above code is just a basic app class structure with all the unnecessary code removed. After a hot reload you will see a black screen.

If you are a beginner, don't panic and be patient; we are just getting started 😂.

Create a new Dart file named HomeScreen.dart inside the lib folder.
Now, if you type stf and press Enter, your IDE will auto-magically generate the boilerplate for a stateful widget, which is what we are using here.

// Add the material import at the top of the file so the widget classes resolve.
import 'package:flutter/material.dart';

class HomeScreen extends StatefulWidget {
  const HomeScreen({super.key});

  @override
  State<HomeScreen> createState() => _HomeScreenState();
}

class _HomeScreenState extends State<HomeScreen> {
  @override
  Widget build(BuildContext context) {
    return Container();
  }
}

Now replace the Container with a Scaffold that shows a Text widget:

 return Scaffold(
      backgroundColor: Colors.white,
      body: Text("Home Screen"),
    );

and set HomeScreen as the home of your MaterialApp in main.dart:

home: HomeScreen(),
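For clarity, here is a minimal sketch of what main.dart looks like at this point (the relative import works because HomeScreen.dart sits next to main.dart inside lib/):

import 'package:flutter/material.dart';
import 'HomeScreen.dart'; // the file we just created in lib/

void main() {
  runApp(const MyApp());
}

class MyApp extends StatelessWidget {
  const MyApp({super.key});

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'OBJECT DETECTOR',
      debugShowCheckedModeBanner: false,
      theme: ThemeData(primarySwatch: Colors.blue),
      home: const HomeScreen(), // show HomeScreen on launch
    );
  }
}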

Now you can see the app change to a white screen with the "Home Screen" text.

Now add the following code to HomeScreen.dart to design the base UI of the app.

import 'package:flutter/material.dart';

class HomeScreen extends StatefulWidget {
  const HomeScreen({super.key});

  @override
  State<HomeScreen> createState() => _HomeScreenState();
}

class _HomeScreenState extends State<HomeScreen> {
  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(title: const Text("OBJECT DETECTOR APP")),
      backgroundColor: Colors.white,
      body: Center(
          child: Column(
        mainAxisAlignment: MainAxisAlignment.center,
        children: [
          //Image with Detections....
          //Button to click pic
          ElevatedButton(
            onPressed: () {},
            child: const Icon(Icons.camera),
          )
        ],
      )),
    );
  }
}

Now the app is ready to be integrated with the model.

4. Integrating the Model with Flutter

Once you've exported the model, you can integrate it with your Flutter application. First create an assets folder with models and labels subfolders, and declare them in your pubspec.yaml file, as shown below:

Folder structure after creating the additional models and labels folders:
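Since the screenshot is not reproduced here, this is a rough sketch of the project layout once the model and labels are in place (adding them is covered just below):

object_detection/
├── assets/
│   ├── labels/
│   │   └── labels.txt
│   └── models/
│       └── yolov5s.torchscript
├── lib/
│   ├── HomeScreen.dart
│   └── main.dart
└── pubspec.yaml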

Pubspec.yaml file after adding the assets path:

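The screenshot is likewise not reproduced, so here is a minimal sketch of the relevant pubspec.yaml section, assuming the folders are named assets/models and assets/labels as above:

flutter:
  uses-material-design: true
  assets:
    - assets/models/
    - assets/labels/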
Now place the converted model into the models folder, create a labels.txt file in the labels folder, and put your class labels there.

Or you can download the model and labels from the following link:
https://github.com/AneeqMalik/flutter_pytorch/tree/main/example/assets
Please star the repository 😊.
After placing the labels and model in their respective folders, it's time for the part we've been waiting for: integrating the model into our app.

Integrating the YOLOv5 Model

  • Add the following imports at the top of HomeScreen.dart (alongside the existing material import) and create the following variables inside _HomeScreenState. Note that image_picker is a separate package, so add it to pubspec.yaml first (flutter pub add image_picker):
import 'dart:io';
import 'package:flutter/services.dart'; // for PlatformException
import 'package:flutter_pytorch/flutter_pytorch.dart';
import 'package:flutter_pytorch/pigeon.dart'; // ResultObjectDetection (skip if already exported by flutter_pytorch.dart)
import 'package:image_picker/image_picker.dart';

  File? _imageFile;
  late ModelObjectDetection _objectModel;
  String? _imagePrediction;
  List? _prediction;
  File? _image;
  final ImagePicker _picker = ImagePicker();
  bool objectDetection = false;
  List<ResultObjectDetection?> objDetect = [];
  • Create a Function to load the model into the App:
Future loadModel() async {
    String pathObjectDetectionModel = "assets/models/yolov5s.torchscript";
    try {
      // Remember: the 80 here is the number of classes; for a custom model
      // it will be different, so don't forget to change it.
      _objectModel = await FlutterPytorch.loadObjectDetectionModel(
          pathObjectDetectionModel, 80, 640, 640,
          labelPath: "assets/labels/labels.txt");
    } catch (e) {
      if (e is PlatformException) {
        print("only supported for Android, error is $e");
      } else {
        print("Error is $e");
      }
    }
  }
  • Call loadModel() inside initState() so the model is loaded when the app opens:
@override
  void initState() {
    super.initState();
    // Load the model once when the screen is first created.
    loadModel();
  }
  • Add the following Widgets inside the Column:
 body: Center(
          child: Column(
        mainAxisAlignment: MainAxisAlignment.center,
        children: [
          //Image with Detections....
          Expanded(
            child: Container(
              height: 150,
              width: 300,
              child: objDetect.isNotEmpty
                  ? _image == null
                      ? Text('No image selected.')
                      : _objectModel.renderBoxesOnImage(_image!, objDetect)
                  : _image == null
                      ? Text('No image selected.')
                      : Image.file(_image!),
            ),
          ),
          Center(
            child: Visibility(
              visible: _imagePrediction != null,
              child: Text("$_imagePrediction"),
            ),
          ),
          //Button to click pic
          ElevatedButton(
            onPressed: () {
              runObjectDetection();
            },
            child: const Icon(Icons.camera),
          )
        ],
      )),
  • Finally, create an object detection function that picks an image and runs inference on it:
Future runObjectDetection() async {
    //pick an image

    final XFile? image = await _picker.pickImage(
        source: ImageSource.gallery, maxWidth: 200, maxHeight: 200);
    // Stop if the user cancelled the picker.
    if (image == null) return;
    objDetect = await _objectModel.getImagePrediction(
        await File(image.path).readAsBytes(),
        minimumScore: 0.1,
        IOUThershold: 0.3);
    objDetect.forEach((element) {
      print({
        "score": element?.score,
        "className": element?.className,
        "class": element?.classIndex,
        "rect": {
          "left": element?.rect.left,
          "top": element?.rect.top,
          "width": element?.rect.width,
          "height": element?.rect.height,
          "right": element?.rect.right,
          "bottom": element?.rect.bottom,
        },
      });
    });
    setState(() {
      _image = File(image!.path);
    });
  }

Changing the default SDK version
When you run the app after the above additions, the build will fail with an error about the Android minimum SDK version.
You need to raise the minimum SDK version in the following file:
ProjectName\object_detection\android\app\build.gradle
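Since the screenshot is not reproduced here, the change is sketched below: raise minSdkVersion inside the defaultConfig block. PyTorch Mobile typically requires at least API 21, but use whatever minimum the build error reports; the other lines generated by the Flutter template stay as they are.

android {
    defaultConfig {
        // ...keep the other settings generated by the Flutter template...
        minSdkVersion 21 // raised from the Flutter default
    }
}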
Save the changes and re-run the app build.
Wait for the build to complete; it may give some warnings, but you can ignore them for the time being.

5. Testing the Application

To test your application, pick an image and see how the model detects the objects in it.


🥳🥳🥳🥳 The app is running fine and giving detections.

Link to Source Code

https://github.com/AneeqMalik/Flutter-Object-Detector-App-YOLOv5-

Additional Resources/Info

https://pub.dev/packages/flutter_pytorch

Top comments (6)

Muntaha Shams

I am trying to run your repo by replacing your models and number of classes with mine but it is not working. Can you help me with this?

Muhamad Fathul Azis

Hello, I have a problem when loading the YOLOv5 model. Can you help me please?

Mayur Pawar

Hello @aneeqmalik, I am using your repo and I want to add a Capture button through which I can capture a predicted image and store it. Is there any way to do this?

Thank you !!

AneeqMalik

Yes, you can use the screenshot method in Flutter to do so.

asliZenith

How do I make it work in real time?

AneeqMalik

Use the camera controller; you can see an example in the repo.