Dave Glover for Microsoft Azure

Creating an image recognition solution with Azure IoT Edge and Azure Cognitive Services

Raspberry Pi 3A+ running Azure IoT Edge Image Classifier

Image Classification with Azure IoT Edge

There are lots of applications for image recognition but what I had in mind when developing this application was a solution for vision impaired people scanning fruit and vegetables at a self-service checkout.

Parts Required

  1. Raspberry Pi 3B or better, USB Camera, and a Speaker.

    Note: the solution will run on a Raspberry Pi 3A+ as it has enough processing power, but that device is limited to 512MB of RAM. I recommend a Raspberry Pi 3B+ as it has 1GB of RAM and is faster than the older 3B model. Azure IoT Edge requires an ARM32v7 or better processor; it will not run on the ARM32v6 processor found in the Raspberry Pi Zero.

  2. Alternatively, you can run the solution on desktop Linux, such as Ubuntu 18.04. The solution requires USB camera pass-through into a Docker container as well as Azure IoT Edge support, so for now that means Linux.

Quick Installation Guide for Raspberry Pi

If you do not want to download and build the solution, you can use the prebuilt Azure IoT Edge configuration from my GitHub repository and the associated Docker images.

  1. Set up Raspbian Stretch Lite on Raspberry Pi. Be sure to configure the correct Country Code in your wpa_supplicant.conf file.
  2. If you don't already have an Azure account then sign up for a free Azure account. If you are a student then sign up for an Azure for Students account, no credit card required.
  3. Follow these instructions to create an Azure IoT Hub, and an Azure IoT Edge device.
  4. Install Azure IoT Edge runtime on Raspberry Pi
  5. Download the deployment configuration file that describes the Azure IoT Edge modules and routes for this solution. Open the deployment.arm32v7.json link and save the file in a known location on your computer.
  6. Install the Azure CLI and the IoT extension for Azure CLI command line tools. For more information, see Deploy Azure IoT Edge modules with Azure CLI
  7. Open a command line console/terminal and change directory to the location where you saved the deployment.arm32v7.json file.
  8. Finally, from the command line, run the following command, being sure to substitute the [device id] and [hub name] values.
az iot edge set-modules --device-id [device id] --hub-name [hub name] --content deployment.arm32v7.json
  9. The modules will now start to deploy to your Raspberry Pi; the Raspberry Pi's green activity LED will flicker until the deployment completes. Approximately 1.5 GB of Docker modules will be downloaded and decompressed on the Raspberry Pi. This is a one-off operation.

Solution Overview

The system identifies the item scanned against a pre-trained machine learning model, tells the person what they have just scanned, then sends a record of the transaction to a central inventory system.

The solution runs on Azure IoT Edge and consists of a number of services.

  1. The Camera Capture Module handles scanning items using a camera. It calls the Image Classification module to identify the item, then calls the Text to Speech module to convert the item label to speech, and the name of the scanned item is played on the attached speaker (see the sketch after this list).

  2. The Image Classification Module runs a TensorFlow machine learning model that has been trained with images of fruit. It handles classifying the scanned items.

  3. The Text to Speech Module converts the name of the item scanned from text to speech using Azure Speech Services.

  4. A USB Camera is used to capture images of items to be bought.

  5. A Speaker for text to speech playback.

  6. Azure IoT Hub (free tier) is used for managing, deploying to, and reporting on the Azure IoT Edge devices running the solution.

  7. Azure Speech Services (free tier) is used to generate very natural speech telling the shopper what they have just scanned.

  8. Azure Custom Vision service was used to build the fruit model used for image classification.
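
To make the flow between the modules concrete, here is a minimal Python sketch of the camera capture → classification → speech sequence. The classifier endpoint and the imageData form field come from the deployment manifest and the curl examples later in this post; text_to_speech() is a hypothetical helper standing in for the call to the Text to Speech module, not the actual module code:

import os
import requests

# Endpoint of the Image Classification module, as set by the AiEndpoint
# environment variable in the deployment manifest.
AI_ENDPOINT = os.environ.get("AiEndpoint", "http://image-classifier-service:80/image")

def classify(frame_jpeg):
    # POST the captured frame to the Image Classification module's REST API.
    response = requests.post(AI_ENDPOINT, files={"imageData": frame_jpeg})
    response.raise_for_status()
    return response.json()["predictions"]

def scan_item(frame_jpeg):
    # Pick the most probable label, e.g. "Red Apple".
    predictions = classify(frame_jpeg)
    best = max(predictions, key=lambda p: p["probability"])
    return best["tagName"]

# label = scan_item(open("red-apple.jpg", "rb").read())
# text_to_speech(label)   # hypothetical call into the Text to Speech module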

IoT Edge Solution Architecture

What is Azure IoT Edge

The solution is built on Azure IoT Edge which is part of the Azure IoT Hub service and is used to define, secure and deploy a solution to an edge device. It also provides cloud-based central monitoring and reporting of the edge device.

The main components of an IoT Edge solution are:

  1. The IoT Edge Runtime which is installed on the local edge device and consists of two main components. The IoT Edge "hub", responsible for communications, and the IoT Edge "agent", responsible for running and monitoring modules on the edge device.

  2. Modules. Modules are the unit of deployment. Modules are docker images pulled from a registry such as the Azure Container Registry, or Docker Hub. Modules can be custom developed, built as Azure Functions, or as exported services from Azure Custom Vision, Azure Machine Learning, or Azure Stream Analytics.

  3. Routes. Routes define message paths between modules, and between modules and Azure IoT Hub.

  4. Properties. You can set "desired" properties for a module from Azure IoT Hub. For example, you might want to set a threshold property for a temperature alert.

  5. Create Options. Create Options tell the Docker runtime what options to start the module with. For example, you may wish to open ports for REST APIs or debugging, define paths to devices such as a USB camera, set environment variables, or enable privileged mode for certain hardware operations. For more information see the Docker API documentation, and see the sketch after this list for an example.

  6. Deployment Manifest. The Deployment Manifest pulls everything together and tells the Azure IoT Edge runtime what modules to deploy, from where, plus what message routes to set up, and what create options to start each module with.
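
One thing that often trips people up is that in the deployment manifest the createOptions value is a JSON string, not a nested JSON object, so escaping it by hand is error prone. The snippet below is just an illustrative helper that generates the string; the device paths, port binding, and environment variables match the camera-capture module in this solution:

import json

# Create options for the camera-capture module: environment variables, a debug
# port, and USB camera / sound device pass-through. The deployment manifest
# expects this as an escaped JSON string.
create_options = {
    "Env": ["Video=0", "AiEndpoint=http://image-classifier-service:80/image"],
    "HostConfig": {
        "PortBindings": {"5678/tcp": [{"HostPort": "5678"}]},
        "Devices": [
            {"PathOnHost": "/dev/video0", "PathInContainer": "/dev/video0", "CgroupPermissions": "mrw"},
            {"PathOnHost": "/dev/snd", "PathInContainer": "/dev/snd", "CgroupPermissions": "mrw"},
        ],
    },
}

# Paste the printed string into the module's createOptions in the manifest.
print(json.dumps(create_options))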

Azure IoT Edge in Action

iot edge in action

Solution Architectural Considerations

So, with that overview of Azure IoT Edge, here were my considerations and constraints for the solution.

  1. The solution should scale from a Raspberry Pi (running Raspbian Linux) on ARM32v7, to my desktop development environment, to an industrial capable IoT Edge device such as those found in the Certified IoT Edge Catalog.

  2. The solution needs camera input. I used a USB webcam for image capture as it was supported across all the target devices.

  3. The camera capture module needed Docker USB device pass-through (not supported by Docker on Windows), so that, plus targeting the Raspberry Pi, meant I needed to target Azure IoT Edge on Linux.

  4. I wanted my developer experience to mirror the devices I was targeting plus I needed Docker support for the USB webcam, so I developed the solution on my Ubuntu 18.04 developer desktop. See my Ubuntu for Azure Developers guide.

    As a workaround, if your development device is locked to Windows, you can run Ubuntu in VirtualBox, which allows USB device pass-through; you can then pass the device through to Docker in the virtual machine. A bit convoluted, but it does work.

raspberry pi image classifier

Azure Services

Creating the Fruit Classification Model

The Azure Custom Vision service is a simple way to create an image classification machine learning model without having to be a data science or machine learning expert. You simply upload multiple collections of labelled images. For example, you could upload a collection of banana images and label them as 'banana'.

To create your own classification model read How to build a classifier with Custom Vision for more information. It is important to have a good variety of labelled images so be sure to read How to improve your classifier.
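
If you would rather script the image upload and training than use the portal, the Custom Vision Python SDK can do the same thing. The sketch below is only indicative: the endpoint, training key, project name, tag, and image file are placeholders, and the exact client constructor can differ between SDK versions, so check the SDK documentation:

from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient
from msrest.authentication import ApiKeyCredentials

# Placeholders - use your own Custom Vision training endpoint and key.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com/"
credentials = ApiKeyCredentials(in_headers={"Training-key": "<training-key>"})
trainer = CustomVisionTrainingClient(ENDPOINT, credentials)

# Pick a "compact" classification domain so the model can be exported later.
compact = next(d for d in trainer.get_domains()
               if d.type == "Classification" and "compact" in d.name.lower())
project = trainer.create_project("fruit-classifier", domain_id=compact.id)

# Create a label and upload a labelled image for it.
banana = trainer.create_tag(project.id, "banana")
with open("banana-01.jpg", "rb") as image:
    trainer.create_images_from_data(project.id, image.read(), tag_ids=[banana.id])

trainer.train_project(project.id)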

Exporting an Azure Custom Vision Model

This "Image Classification" module includes a simple fruit classification model that was exported from Azure Custom Vision. For more information read how to Export your model for use with mobile devices. It is important to select one of the "compact" domains from the project settings page otherwise you will not be able to export the model.

Follow these steps to export your Custom Vision project model.

  1. From the Performance tab of your Custom Vision project click Export.

    export model

  2. Select Dockerfile from the list of available options

    export-as-docker.png

  3. Then select the Linux version of the Dockerfile.

choose docker

  4. Download the exported Docker file and unzip it, and you have a ready-made Docker solution with a Python Flask REST API. This is how I created the Azure IoT Edge Image Classification module in this solution. Too easy :)
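
Before wiring the exported container into IoT Edge it is worth sanity-checking its REST API. A small Python test along these lines works; it assumes you started the container with port 80 mapped to localhost:8080 and have a red-apple.jpg test image to hand:

import requests

# Assumes the exported container is running locally, e.g.
#   docker run -p 8080:80 <exported-image>
with open("red-apple.jpg", "rb") as image:
    response = requests.post("http://localhost:8080/image", files={"imageData": image})

# Print each label with its probability.
for prediction in response.json()["predictions"]:
    print(prediction["tagName"], prediction["probability"])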

Azure Speech Services

Azure Speech Services supports both "speech to text" and "text to speech". For this solution, I'm using the text to speech (F0) free tier which is limited to 5 million characters per month. You will need to add the Speech service using the Azure Portal and "Grab your key" from the service.
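
Under the hood, the Text to Speech module exchanges this key for a short-lived access token and then posts SSML to the regional text-to-speech endpoint. The sketch below shows the general shape of those two calls; the region (westus2), voice name, and output format are assumptions for illustration, so substitute the values for your own Speech resource:

import requests

SPEECH_KEY = "<your-speech-services-key>"
TOKEN_URL = "https://westus2.api.cognitive.microsoft.com/sts/v1.0/issuetoken"
TTS_URL = "https://westus2.tts.speech.microsoft.com/cognitiveservices/v1"

# Exchange the Speech Services key for a short-lived access token.
token = requests.post(TOKEN_URL, headers={"Ocp-Apim-Subscription-Key": SPEECH_KEY}).text

# Ask the service to speak the label of the scanned item.
ssml = ("<speak version='1.0' xml:lang='en-US'>"
        "<voice name='en-US-JessaNeural'>Red Apple</voice></speak>")
response = requests.post(
    TTS_URL,
    data=ssml.encode("utf-8"),
    headers={
        "Authorization": "Bearer " + token,
        "Content-Type": "application/ssml+xml",
        "X-Microsoft-OutputFormat": "riff-16khz-16bit-mono-pcm",
    },
)

# Save the returned audio so it can be played on the speaker.
with open("speech.wav", "wb") as f:
    f.write(response.content)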

azure speech service

Open the deployment.template.json file and update the azureSpeechServicesKey environment variable with the key you copied from the Azure Speech service.

speech key

How to install, build and deploy the solution

  1. Clone this GitHub repository.
   git clone https://github.com/gloveboxes/Creating-an-image-recognition-solution-with-Azure-IoT-Edge-and-Azure-Cognitive-Services.git
  2. Install the Azure IoT Edge runtime on your Linux desktop or device (e.g. a Raspberry Pi).

    Follow the instructions to Deploy your first IoT Edge module to a Linux x64 device.

  3. Install the following software development tools.

    1. Visual Studio Code
    2. Plus, the following Visual Studio Code Extensions
    3. Docker Community Edition on your development machine
  4. With Visual Studio Code, open the IoT Edge solution you cloned from GitHub to your developer desktop.

Understanding the Project Structure

The following describes the highlighted sections of the project.

  1. There are two modules: CameraCaptureOpenCV and ImageClassifierService.

  2. The module.json file defines the Docker build process, the module version, and your docker registry. Updating the version number, pushing the updated module to an image registry, and updating the deployment manifest for an edge device triggers the Azure IoT Edge runtime to pull down the new module to the edge device.

  3. The deployment.template.json file is used by the build process. It defines what modules to build, what message routes to set up, and what version of the IoT Edge runtime to run.

  4. The deployment.json file is generated from the deployment.template.json and is the Deployment Manifest.

  5. The version.py script in the project root folder is a helper app you can run on your development machine to update the version number of each module. This is useful because a change in the version number is what triggers the Azure IoT Edge runtime to pull the updated module, and it is easy to forget to bump the module version numbers :) A rough sketch of the idea follows.
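
The listing below is not the actual version.py from the repository, just an illustrative sketch of the idea: walk each module.json and bump the patch number so the runtime sees a new image tag:

import glob
import json

# Hypothetical version-bump helper (the real version.py may differ):
# increment the last part of the version in every module.json.
for path in glob.glob("modules/*/module.json"):
    with open(path) as f:
        module = json.load(f)

    major, minor, patch = module["image"]["tag"]["version"].split(".")
    module["image"]["tag"]["version"] = "{}.{}.{}".format(major, minor, int(patch) + 1)

    with open(path, "w") as f:
        json.dump(module, f, indent=4)

    print(path, "->", module["image"]["tag"]["version"])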

visual studio code project structure

Building the Solution

You need to ensure the image you plan to build matches the target processor architecture specified in the deployment.template.json file.

  1. Specify your Docker repository in the module.json file for each module. If you are using a supported Linux Azure IoT Edge distribution, such as Ubuntu 18.04, as your development machine and you have Azure IoT Edge installed locally, then I strongly recommend setting up a local Docker registry. It will significantly speed up your development, deployment, and test cycle.

    To set up a local Docker registry for prototyping and testing purposes, run:

docker run -d -p 5000:5000 --restart always --name registry registry:2
  2. If you are pushing the image to a local Docker repository, then specify localhost:5000.
"repository": "localhost:5000/camera-capture-opencv"
  3. Confirm the processor architecture you plan to build for. From the Visual Studio Code bottom bar, click the currently selected processor architecture, then select the desired processor architecture from the popup.

  4. Next, build and push the solution to Docker by right-clicking the deployment.template.json file and selecting "Build and Push IoT Edge Solution". The first build will be slow as Docker needs to pull the base layers to your local machine. If you are cross-compiling to arm32v7, the first build will be very slow as the OpenCV and Python requirements need to be compiled. On a fast Intel i7-8750H processor, cross-compiling this solution takes approximately 40 minutes.

    docker build and push

Deploying the Solution

When the Docker build and push process has completed, select the Azure IoT Hub device you want to deploy the solution to. Right-click the deployment.json file found in the config folder and select the target device from the drop-down list.

deploy to device

Monitoring the Solution on the IoT Edge Device

Once the solution has been deployed, you can monitor it on the IoT Edge device itself using the iotedge list command:

iotedge list

watch iotedge list

Monitoring the Solution from the Azure IoT Edge Blade

You can monitor the state of the Azure IoT Edge module from the Azure IoT Hub blade on the Azure Portal.

azure iot edge devices

Click on the device from the Azure IoT Edge blade to view more details about the modules running on the device.

azure iot edge device details

Done!

When the solution is finally deployed to the IoT Edge device, the system will start telling you what items it thinks have been scanned.

Congratulations, you have deployed your first Azure IoT Edge solution!

congratulations

Top comments (44)

Andrea Marson

This is really a useful and interesting article. Thank you very much.

That being said, I'm trying to run the project on a PC running Ubuntu 18.04.
I'm using a local registry for testing.
It seems that docker images were pushed to the local registry correctly:

$ docker image list         
REPOSITORY                                                    TAG                 IMAGE ID            CREATED             SIZE
mcr.microsoft.com/azureiotedge-simulated-temperature-sensor   1.0                 c86e0d919bd6        4 weeks ago         96.1MB
localhost:5000/image-classifier-service                       1.1.91-amd64        3147a0658034        4 weeks ago         1.71GB
localhost:5000/camera-capture-opencv                          1.1.91-amd64        cdcc320bd8a6        4 weeks ago         1.26GB
python                                                        3.5                 61bbcc36b492        5 weeks ago         909MB
...
mcr.microsoft.com/azureiotedge-agent                          1.0                 46ad173076af        2 months ago        137MB
mcr.microsoft.com/azureiotedge-diagnostics                    1.0.8               d16965225a70        2 months ago        8.71MB
...
mcr.microsoft.com/azureiotedge-hub                            1.0.7               ed05376f97bd        4 months ago        155MB
mcr.microsoft.com/azureiotedge-agent                          1.0.7               219c2aff4adc        4 months ago        140MB
...
registry                                                      2                   f32a97de94e1        6 months ago        25.8MB
hello-world                                                   latest              fce289e99eb9        8 months ago        1.84kB

However, deployment can't be completed. I found this in the edgeAgent logs:

2019-09-25 08:47:51.916 +00:00 [WRN] - Reconcile failed because of invalid configuration format
Microsoft.Azure.Devices.Edge.Agent.Core.ConfigSources.ConfigFormatException: Agent configuration format is invalid. ---> System.ArgumentException: Image localhost:5000/camera-capture-opencv:1.1.91-amd64 is not in the right format
   at Microsoft.Azure.Devices.Edge.Agent.Docker.DockerConfig.ValidateAndGetImage(String image) in /home/vsts/work/1/s/edge-agent/src/Microsoft.Azure.Devices.Edge.Agent.Docker/DockerConfig.cs:line 93

So it seems that there is a syntax error or something like that in the deployment file, but I can't find it. This file looks like this:

{
  "modulesContent": {
    "$edgeAgent": {
      "properties.desired": {
        "schemaVersion": "1.0",
        "runtime": {
          "type": "docker",
          "settings": {
            "minDockerVersion": "v1.25",
            "loggingOptions": "",
            "registryCredentials": {}
          }
        },
        "systemModules": {
          "edgeAgent": {
            "type": "docker",
            "settings": {
              "image": "mcr.microsoft.com/azureiotedge-agent:1.0.7",
              "createOptions": "{}"
            }
          },
          "edgeHub": {
            "type": "docker",
            "status": "running",
            "restartPolicy": "always",
            "settings": {
              "image": "mcr.microsoft.com/azureiotedge-hub:1.0.7",
              "createOptions": "{\"HostConfig\":{\"PortBindings\":{\"5671/tcp\":[{\"HostPort\":\"5671\"}],\"8883/tcp\":[{\"HostPort\":\"8883\"}],\"443/tcp\":[{\"HostPort\":\"443\"}]}}}"
            }
          }
        },
        "modules": {
          "camera-capture": {
            "version": "1.0",
            "type": "docker",
            "status": "running",
            "restartPolicy": "always",
            "settings": {
              "image": "localhost:5000/camera-capture-opencv:1.1.91-amd64",
              "createOptions": "{\"Env\":[\"Video=0\",\"azureSpeechServicesKey=2f57f2d9f1074faaa0e9484e1f1c08c1\",\"AiEndpoint=http://image-classifier-service:80/image\"],\"HostConfig\":{\"PortBindings\":{\"5678/tcp\":[{\"HostPort\":\"5678\"}]},\"Devices\":[{\"PathOnHost\":\"/dev/video0\",\"PathInContainer\":\"/dev/video0\",\"CgroupPermissions\":\"mrw\"},{\"PathOnHost\":\"/dev/snd\",\"PathInContainer\":\"/dev/snd\",\"CgroupPermissions\":\"mrw\"}]}}"
            }
          },
          "image-classifier-service": {
            "version": "1.0",
            "type": "docker",
            "status": "running",
            "restartPolicy": "always",
            "settings": {
              "image": "localhost:5000/image-classifier-service:1.1.91-amd64",
              "createOptions": "{\"HostConfig\":{\"Binds\":[\"/home/pi/images:/images\"],\"PortBindings\":{\"8000/tcp\":[{\"HostPort\":\"80\"}],\"5679/tcp\":[{\"HostPort\":\"5679\"}]}}}"
            }
          }
        }
      }
    },
    "$edgeHub": {
      "properties.desired": {
        "schemaVersion": "1.0",
        "routes": {
          "camera-capture": "FROM /messages/modules/camera-capture/outputs/output1 INTO $upstream"
        },
        "storeAndForwardConfiguration": {
          "timeToLiveSecs": 7200
        }
      }
    }
  }
}

Any help would be greatly appreciated.

Andrea Marson

The problem is related to edgeAgent 1.0.7, as explained here.

After updating to 1.0.8, I can deploy the modules:

$ sudo iotedge list
NAME                      STATUS           DESCRIPTION      CONFIG
image-classifier-service  running          Up 11 minutes    localhost:5000/image-classifier-service:1.1.91-amd64
edgeHub                   running          Up 11 minutes    mcr.microsoft.com/azureiotedge-hub:1.0.8
edgeAgent                 running          Up 14 minutes    mcr.microsoft.com/azureiotedge-agent:1.0.8
camera-capture            running          Up 11 minutes    localhost:5000/camera-capture-opencv:1.1.91-amd64

Dave Glover

Ah, fantastic. Thanks, and great that you got it working. I'll update the deployment template so it starts with 1.0.8. Let me know how you get on. Cheers Dave

Andrea Marson

Dave, I'm diving into your project to understand how it works in more detail.

First of all, I'm exploring the camera capture process. I'm running the project on an Intel(R) Core(TM) i3-2100 CPU @ 3.10GHz.
I noticed that if the scene shot by the camera is still, no frames are processed. If the scene changes or if I move the camera, on average about 4 frames per second are processed:

$ iotedge logs camera-capture -f  | ts %F-%H:%M:%.S
2019-09-27-10:42:31.697454 pygame 1.9.6
2019-09-27-10:42:31.697609 Hello from the pygame community. https://www.pygame.org/contribute.html
2019-09-27-10:42:31.697688 sasToken
2019-09-27-10:42:31.697758 
2019-09-27-10:42:31.697833 Python 3.5.2 (default, Nov 12 2018, 13:43:14) 
2019-09-27-10:42:31.697877 [GCC 5.4.0 20160609]
2019-09-27-10:42:31.697904 
2019-09-27-10:42:31.697931 Camera Capture Azure IoT Edge Module. Press Ctrl-C to exit.
2019-09-27-10:42:31.697957 opening camera
...
2019-09-27-10:40:45.445710 sending frame to model: 476
2019-09-27-10:40:45.668826 label: Hand, probability 0.8052769303321838
2019-09-27-10:40:45.925346 sending frame to model: 477
2019-09-27-10:40:46.148182 label: Hand, probability 0.8468263745307922
2019-09-27-10:40:46.404615 sending frame to model: 478
2019-09-27-10:40:46.630166 label: Hand, probability 0.8512248992919922
2019-09-27-10:40:46.886933 sending frame to model: 479
2019-09-27-10:40:47.120413 label: Hand, probability 0.877470850944519
2019-09-27-10:40:47.377079 sending frame to model: 480
2019-09-27-10:40:47.601168 label: Hand, probability 0.8282925486564636
2019-09-27-10:40:47.857675 sending frame to model: 481

Did you achieve similar performances on your development host?
What about the RPi?

Andrea Marson

I've just found this in CameraCapture.py ...

# slow things down a bit - 4 frame a second is fine for demo purposes and less battery drain and lower Raspberry Pi CPU Temperature
            time.sleep(0.25)

It answers my question ;)

Dave Glover

Hey, yes, I did some optimisations: 1) if the pixel change was greater than 70000 pixels (RGB) then send the frame to the ML model; 2) slowed the frame rate down, the logic being that the model would be more available to process a frame if something changed...
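
For anyone curious, a rough OpenCV sketch of that change-detection idea might look like the following. This is illustrative only, not the actual CameraCapture.py code; the 70000 changed-pixel threshold and the 0.25 second sleep come from the comments above:

import time
import cv2
import numpy as np

CHANGED_PIXEL_THRESHOLD = 70000   # threshold mentioned above
camera = cv2.VideoCapture(0)
previous = None

while True:
    ok, frame = camera.read()
    if not ok:
        break

    if previous is not None:
        # Count how many pixels changed between this frame and the last one.
        diff = cv2.absdiff(frame, previous)
        changed_pixels = np.count_nonzero(diff.any(axis=2))
        if changed_pixels > CHANGED_PIXEL_THRESHOLD:
            pass  # encode the frame as JPEG and POST it to the classifier here
    previous = frame

    time.sleep(0.25)  # ~4 frames a second is enough for demo purposes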

Dave Glover

A Raspberry Pi 4 takes approx 0.8 seconds per inference; a Raspberry Pi 3B+ takes approx 1.2 seconds per inference.

Andrea Marson

Thank you, Dave. These numbers are very useful.
How did you get them?

My goal is

  • to run your project on a couple of ARM-based embedded platforms we manufacture
  • to perform some basic profiling
  • to figure out if and how the project could be optimized.

That's why I would like to measure the inference time the same way you did.

Dave Glover

Hey, from Bash I just did 'time curl ....' and used the curl example in the readme from the downloaded Custom Vision Docker container.

Andrea Marson

Hi Dave

I tested the custom docker container on my PC first and it worked fine:

$ time curl -X POST http://127.0.0.1:32769/image -F imageData=@red-apple.jpg
{"created":"2019-10-01T12:40:10.052750","id":"","iteration":"","predictions":[{"boundingBox":null,"probability":1.5830000847927295e-05,"tagId":"","tagName":"Avocado"},{"boundingBox":null,"probability":2.420000100755715e-06,"tagId":"","tagName":"Banana"},{"boundingBox":null,"probability":0.026290949434041977,"tagId":"","tagName":"Green Apple"},{"boundingBox":null,"probability":2.8750000637955964e-05,"tagId":"","tagName":"Hand"},{"boundingBox":null,"probability":0.00048392999451607466,"tagId":"","tagName":"Orange"},{"boundingBox":null,"probability":0.9731781482696533,"tagId":"","tagName":"Red Apple"}],"project":""}

real    0m0,285s
user    0m0,005s
sys     0m0,008s

Then, I built your project for the arm32v7 architecture and pulled the resulting image onto my embedded device (I had to use a registry on Docker Hub because I couldn't pull from the local registry running on my PC).
I tried to run the same test on my embedded device running the Armbian distribution, but it didn't work, although the container seems to be up and running:

root@sbcx:~# docker images
REPOSITORY                                                    TAG                 IMAGE ID            CREATED             SIZE
dave1am/image-classifier-service                              1.1.91-arm32v7      804d48001df8        6 days ago          1.05GB
mcr.microsoft.com/azureiotedge-simulated-temperature-sensor   1.0                 a626b1a36236        2 months ago        200MB
mcr.microsoft.com/azureiotedge-hub                            1.0                 3a84bfb86c7d        2 months ago        252MB
mcr.microsoft.com/azureiotedge-agent                          1.0                 58276103181c        2 months ago        238MB
mcr.microsoft.com/azureiotedge-diagnostics                    1.0.8               a480fa622e2a        2 months ago        7.34MB
root@sbcx:~# docker run -P -d 804d48001df8
9f197d878088d97b33f5ef6338bbd5a1eeaa87fd8890a94e05f78614af1ebdc6
root@sbcx:~# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                            NAMES
9f197d878088        804d48001df8        "/usr/bin/entry.sh p…"   34 seconds ago      Up 27 seconds       0.0.0.0:32769->80/tcp, 0.0.0.0:32768->5679/tcp   sweet_kepler
root@sbcx:/home/armbian/devel/azure-iot-edge/image-classifier# time curl -X POST http://127.0.0.1:32769/image -F imageData=@red-apple.jpg
curl: (52) Empty reply from server                                                                                                   

real    0m1.669s                                                                                                                                 
user    0m0.030s
sys     0m0.040s

Any advice on how I could analyze this issue?

Andrea Marson

I just noticed that, after running this test on the embedded device, the container stops and the following warning message appears in its log:

# docker logs --details -f 9f197d878088
 Loading model... * Serving Flask app "app" (lazy loading)
...
WARNING:tensorflow:From /app/predict.py:123: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.

Dave Glover

Hey there, I've not tried Armbian. Those TensorFlow messages are just warnings. You can try the ARM image I built, glovebox/image-classifier-service:1.1.111-arm32v7, i.e. docker run -it --rm -p 80:80 glovebox/image-classifier-service:1.1.111-arm32v7, and test with 'curl -X POST xxx.xxx.xxx.xxx/image -F imageData=@image.jpg'. My Pi is running Docker version 19.03.3, build a872fc2. Cheers Dave

Andrea Marson

Hi Dave,
I verified the docker version running on my board:

# docker version
Client: Docker Engine - Community
 Version:           19.03.3
 API version:       1.40
 Go version:        go1.12.10
 Git commit:        a872fc2
 Built:             Tue Oct  8 01:12:57 2019
 OS/Arch:           linux/arm
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.3
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.10
  Git commit:       a872fc2
  Built:            Tue Oct  8 01:06:58 2019
  OS/Arch:          linux/arm
  Experimental:     false
 containerd:
  Version:          1.2.6
  GitCommit:        894b81a4b802e4eb2a91d1ce216b8817763c29fb
 runc:
  Version:          1.0.0-rc8
  GitCommit:        425e105d5a03fabd737a126ad93d62a9eeede87f
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

Unfortunately, the outcome is the same even with your image:

# curl -X POST 127.0.0.1/image -F imageData=@red-apple.jpg
curl: (52) Empty reply from server
root@sbcx:/home/armbian/devel/azure-iot-edge/image-classifier# docker ps
CONTAINER ID        IMAGE                                               COMMAND                  CREATED             STATUS              PORTS                          NAMES
71f9b808440d        glovebox/image-classifier-service:1.1.111-arm32v7   "/usr/bin/entry.sh p…"   3 minutes ago       Up 3 minutes        0.0.0.0:80->80/tcp, 5679/tcp   admiring_mccarthy
root@sbcx:/home/armbian/devel/azure-iot-edge/image-classifier# curl -X POST 127.0.0.1/image -F imageData=@red-apple.jpg
curl: (7) Failed to connect to 127.0.0.1 port 80: Connection refused
root@sbcx:/home/armbian/devel/azure-iot-edge/image-classifier# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

I'm afraid I have to debug at a lower level to understand what's going on (that is quite common for embedded devices ...).
I'm not an expert in the Azure-based development approach, so I don't know what the best thing to do is in such a situation.

If there are no better ideas, I'm thinking of:

  • writing a simple Python application to exercise the model by following this tutorial
  • remote debugging it as described in this article you wrote.

Dave Glover

Hey there, I'm pretty sure that the contents of the container are fine, and they are isolated too. Do you have a Raspberry Pi you can test against? There is nothing to stop you running the contents of the Docker project that is exported by Custom Vision directly on the device (i.e. outside of a container). dg

Dave Glover

Also try curl to localhost, curl -X POST localhost/image -F imageData=@red-apple.jpg, or by hostname, curl -X POST mydevice.local/image -F imageData=@red-apple.jpg. I've seen issues where name resolution doesn't always work as you'd expect...

Andrea Marson

Hi Dave,
unfortunately, neither localhost nor mydevice.local worked :(

So I tried the other approach that doesn't make use of any container.
For convenience, I first tried to make it work on my development PC. I followed this tutorial, but it didn't work either :(

Apart from several warning messages, the simple Python program I wrote crashes because of this error:

2019-10-17 09:53:43.957158: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3092910000 Hz
2019-10-17 09:53:43.957622: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x1da3970 executing computations on platform Host. Devices:
2019-10-17 09:53:43.957668: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): <undefined>, <undefined>
Traceback (most recent call last):
  File "/home/sysadmin/.vscode/extensions/ms-python.python-2019.10.41019/pythonFiles/ptvsd_launcher.py", line 43, in <module>
    main(ptvsdArgs)
  File "/home/sysadmin/.vscode/extensions/ms-python.python-2019.10.41019/pythonFiles/lib/python/old_ptvsd/ptvsd/__main__.py", line 432, in main
    run()
  File "/home/sysadmin/.vscode/extensions/ms-python.python-2019.10.41019/pythonFiles/lib/python/old_ptvsd/ptvsd/__main__.py", line 316, in run_file
    runpy.run_path(target, run_name='__main__')
  File "/usr/lib/python3.6/runpy.py", line 263, in run_path
    pkg_name=pkg_name, script_name=fname)
  File "/usr/lib/python3.6/runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/sysadmin/devel/azure/custom-vision/glover-image-classifier/image-classifier.py", line 143, in <module>
    main()
  File "/home/sysadmin/devel/azure/custom-vision/glover-image-classifier/image-classifier.py", line 138, in main
    predict_image()
  File "/home/sysadmin/devel/azure/custom-vision/glover-image-classifier/image-classifier.py", line 115, in predict_image
    predictions, = sess.run(prob_tensor, {input_node: [augmented_image] })
  File "/home/sysadmin/devel/azure/custom-vision/glover-image-classifier/glover-image-classifier-venv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 950, in run
    run_metadata_ptr)
  File "/home/sysadmin/devel/azure/custom-vision/glover-image-classifier/glover-image-classifier-venv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1149, in _run
    str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape (1,) for Tensor 'Placeholder:0', which has shape '(?, 224, 224, 3)'
Terminated

I'll try to figure out what's going on, but I don't think I'll be able to solve it quickly, as I'm not a TensorFlow expert ...
That being said, as far as I know, I can't rule out that the Docker version of the classifier fails on my embedded device because of the same problem ...

Andrea Marson

I had a stupid bug in my code.
I fixed it and now everything works fine. I'm gonna run it on my embedded device.

Dave Glover

Yah awesome!

Andrea Marson

Hi Dave,

Installing TensorFlow and all its dependencies wasn't easy on Armbian at all!

I tried several TF/Python combinations, but none of them worked :(
This table lists the combinations I tried and the reason why they fail.

I think that the Illegal Instruction problem might explain why your container doesn't work on this device either.

By the way, does your container make use of Python 2.x or 3.x?

In the meantime, I think I'm gonna try a different distro.

Andrea Marson

Hi Dave
I also tried Armbian Stretch (Debian 9), but nothing changed. I got an Illegal Instruction error as well.

Then I managed to get an RPi 3. I set it up by following this tutorial. On this platform, my simple test program runs correctly:

pi@raspberrypi:~/devel/glover-image-classifier-0.1.0 $ python3 image-classifier.py             
/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
/usr/local/lib/python3.7/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
/usr/local/lib/python3.7/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/usr/local/lib/python3.7/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
/usr/local/lib/python3.7/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/usr/local/lib/python3.7/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
/usr/local/lib/python3.7/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/tensorflow/__init__.py:98: The name tf.AUTO_REUSE is deprecated. Please use tf.compat.v1.AUTO_REUSE instead.

WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/tensorflow/__init__.py:98: The name tf.AttrValue is deprecated. Please use tf.compat.v1.AttrValue instead.

WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/tensorflow/__init__.py:98: The name tf.COMPILER_VERSION is deprecated. Please use tf.version.COMPILER_VERSION instead.

WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/tensorflow/__init__.py:98: The name tf.CXX11_ABI_FLAG is deprecated. Please use tf.sysconfig.CXX11_ABI_FLAG instead.

WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/tensorflow/__init__.py:98: The name tf.ConditionalAccumulator is deprecated. Please use tf.compat.v1.ConditionalAccumulator instead.

2019-10-22 15:42:53,478 - DEBUG - Starting ...
2019-10-22 15:42:53,479 - DEBUG - Importing the TF graph ...
Classified as: Red Apple
2019-10-22 15:42:58,061 - DEBUG - Prediction time = 1.8572380542755127 s
Avocado 2.246000076411292e-05
Banana 3.769999921132694e-06
Green Apple 0.029635459184646606
Hand 4.4839998736279085e-05
Orange 0.0009084499906748533
Red Apple 0.9693851470947266
2019-10-22 15:42:58,067 - DEBUG - Exiting ...

I then mounted, from my embedded platform, the same Raspbian root file system used with the RPi, and I got an Illegal Instruction error again.
So it seems there is a structural incompatibility between one of the software layers (maybe TensorFlow) and my platform, which is based on NXP i.MX6Q.

Dave Glover

Hey, I had a brief look at Armbian and I spotted that it was on a fairly old kernel release - 3.x from memory. I think Stretch on the RPi was on 4.3 or something similar. I did wonder if that was where the issue was. There is nothing to stop you from retargeting the Custom Vision model Docker image to a different base image... I think you said you got CV/TensorFlow running directly on Armbian, so that might be a good starting point...

Andrea Marson

Actually, I used only the armbian root file system.
Regarding the Linux kernel, I used the one that belongs to the latest official BSP of our platform. It is based on release 4.9.11.
Anyway, I agree with you, in the sense that I can't exclude that the root cause is somehow related to the kernel.

Andrea Marson

Hi Dave,
finally, I managed to solve the problem.
The root cause is related to how the TensorFlow packages I used were built. Because of the compiler flags, these packages make use of instructions that are not supported by the i.MX6Q SoC.

So I rebuilt TF with the proper flags ... et voilà:

$ python3 image-classifier.py 
2019-10-25 11:17:15,288 - DEBUG - Starting ...
2019-10-25 11:17:15,289 - DEBUG - Importing the TF graph ...
Classified as: Red Apple
2019-10-25 11:17:21,591 - DEBUG - Prediction time = 2.567471504211426 s
Avocado 2.246000076411292e-05
Banana 3.769999921132694e-06
Green Apple 0.029635440558195114
Hand 4.4839998736279085e-05
Orange 0.0009084499906748533
Red Apple 0.9693851470947266
2019-10-25 11:17:21,594 - DEBUG - Exiting ...

Dave Glover

Woohoo, well done!

Tam66662

Hi, just wanted to let you know that I got your demo working on a Linux desktop x86_64 architecture, running Ubuntu 18.04.3, using a Logitech USB C922 webcam.

There were several challenges in finding all the things that needed tweaking, so I thought I'd share for others who may run into some of these issues.

1) deployment.template.json: Edit the azureSpeechServicesKey to match your Azure Cognitive Service's Speech service key (not BingKey, as stated in the tutorial)
2) module.json: In each module's folder, edit the "repository" line to point to your localhost:5000 instead of glovebox
3) azure_text_speech.py: Edit the TOKEN_URL to point to the one that Azure provides for you when you set up your speech service. Also edit the BASE_URL to point to the text-to-speech base URL for your region. For example, I had to edit mine to point to my region:

  TOKEN_URL = "https://westus2.api.cognitive.microsoft.com/sts/v1.0/issuetoken"
  BASE_URL = "https://westus2.tts.speech.microsoft.com/"

4) text2speech.py: For whatever reason, wf.getframerate() would not return the correct frame rate of my audio, causing an error.

Expression 'paInvalidSampleRate' failed in 'src/hostapi/alsa/pa_linux_alsa.c', line: 2048

So I ran 'pacmd list-sinks' to find my actual audio sample rate (48000) and hardcoded it in place of wf.getframerate.
5) predict.py: Lastly, my camera-capture module kept getting connectivity issues, which was actually because it kept returning a response error stating:

Error: Could not preprocess image for prediction. module 'tensorflow' has no attribute 'Session'

This method was deprecated, so to fix this, edit the line in predict.py from 'tf.Session()' to 'tf.compat.v1.Session()'.

After all was said and done, I was able to get it working:

Image

heldss

Hi,

Thanks for sharing. I am receiving the same error as your point 5. In my predict.py file "tf.Session" is missing. Any help would be great.

heldss

If possible, could you share your predict.py file?

Tam66662

Sure, I can share my predict.py edit. It's simply a one-line edit on line 123, from tf.Session() to tf.compat.v1.Session().

from urllib.request import urlopen
from datetime import datetime
import time
import tensorflow as tf
from PIL import Image
import numpy as np
import sys


class Predict():

    def __init__(self):

        self.filename = 'model.pb'
        self.labels_filename = 'labels.txt'
        self.network_input_size = 0
        self.output_layer = 'loss:0'
        self.input_node = 'Placeholder:0'
        self.graph_def = tf.compat.v1.GraphDef()
        self.labels = []
        self.graph = None

        self._initialize()

    def _initialize(self):
        print('Loading model...', end=''),
        with tf.io.gfile.GFile(self.filename, 'rb') as f:
            self.graph_def.ParseFromString(f.read())

        tf.import_graph_def(self.graph_def, name='')
        self.graph = tf.compat.v1.get_default_graph()

        # Retrieving 'network_input_size' from shape of 'input_node'
        input_tensor_shape = self.graph.get_tensor_by_name(
            self.input_node).shape.as_list()

        assert len(input_tensor_shape) == 4
        assert input_tensor_shape[1] == input_tensor_shape[2]

        self.network_input_size = input_tensor_shape[1]

        with open(self.labels_filename, 'rt') as lf:
            self.labels = [l.strip() for l in lf.readlines()]

    def _log_msg(self, msg):
        print("{}: {}".format(time.time(), msg))

    def _resize_to_256_square(self, image):
        w, h = image.size
        new_w = int(256 / h * w)
        image.thumbnail((new_w, 256), Image.ANTIALIAS)
        return image

    def _crop_center(self, image):
        w, h = image.size
        xpos = (w - self.network_input_size) / 2
        ypos = (h - self.network_input_size) / 2
        box = (xpos, ypos, xpos + self.network_input_size,
               ypos + self.network_input_size)
        return image.crop(box)

    def _resize_down_to_1600_max_dim(self, image):
        w, h = image.size
        if h < 1600 and w < 1600:
            return image

        new_size = (1600 * w // h, 1600) if (h > w) else (1600, 1600 * h // w)
        self._log_msg("resize: " + str(w) + "x" + str(h) + " to " +
                      str(new_size[0]) + "x" + str(new_size[1]))
        if max(new_size) / max(image.size) >= 0.5:
            method = Image.BILINEAR
        else:
            method = Image.BICUBIC
        return image.resize(new_size, method)

    def _convert_to_nparray(self, image):
        # RGB -> BGR
        image = np.array(image)
        return image[:, :, (2, 1, 0)]

    def _update_orientation(self, image):
        exif_orientation_tag = 0x0112
        if hasattr(image, '_getexif'):
            exif = image._getexif()
            if exif != None and exif_orientation_tag in exif:
                orientation = exif.get(exif_orientation_tag, 1)
                self._log_msg('Image has EXIF Orientation: ' +
                              str(orientation))
                # orientation is 1 based, shift to zero based and flip/transpose based on 0-based values
                orientation -= 1
                if orientation >= 4:
                    image = image.transpose(Image.TRANSPOSE)
                if orientation == 2 or orientation == 3 or orientation == 6 or orientation == 7:
                    image = image.transpose(Image.FLIP_TOP_BOTTOM)
                if orientation == 1 or orientation == 2 or orientation == 5 or orientation == 6:
                    image = image.transpose(Image.FLIP_LEFT_RIGHT)
        return image

    def predict_url(self, imageUrl):
        self._log_msg("Predicting from url: " + imageUrl)
        with urlopen(imageUrl) as testImage:
            image = Image.open(testImage)
            return self.predict_image(image)

    def predict_image(self, image):
        try:
            if image.mode != "RGB":
                self._log_msg("Converting to RGB")
                image = image.convert("RGB")

            # Update orientation based on EXIF tags
            image = self._update_orientation(image)

            image = self._resize_down_to_1600_max_dim(image)

            image = self._resize_to_256_square(image)

            image = self._crop_center(image)

            cropped_image = self._convert_to_nparray(image)

            with self.graph.as_default():
                with tf.compat.v1.Session() as sess:
                    prob_tensor = sess.graph.get_tensor_by_name(
                        self.output_layer)
                    predictions, = sess.run(
                        prob_tensor, {self.input_node: [cropped_image]})

                    result = []
                    for p, label in zip(predictions, self.labels):
                        truncated_probablity = np.float64(round(p, 8))
                        if truncated_probablity > 1e-8:
                            result.append({
                                'tagName': label,
                                'probability': truncated_probablity,
                                'tagId': '',
                                'boundingBox': None})
                    print('[%s]' % ', '.join(map(str, result)))

                    response = {
                        'id': '',
                        'project': '',
                        'iteration': '',
                        'created': datetime.utcnow().isoformat(),
                        'predictions': result
                    }

                return response

        except Exception as e:
            self._log_msg(str(e))
            return 'Error: Could not preprocess image for prediction. ' + str(e)

heldss

Hi,
This article is helpful.
How can I make the same image classification module without a Raspberry Pi, on my Ubuntu platform?
Any help would be great.

Dave Glover

Yes absolutely. I mostly built the project on Ubuntu 18.04 on my laptop and then ported to Raspberry Pi. You will see there are Dockerfiles for x86 in the project. Cheers Dave

heldss

Thanks.
Also, do you have any documentation for connecting a physical device like a camera to this project?

Dave Glover

On the bottom bar of Visual Studio Code there is the option to switch the project from arm32v7 to amd64; that is how you build the containers for amd64. The project will work with most USB cameras, and the camera capture module uses OpenCV to capture frames from the USB camera.

heldss

Thanks a lot sir for your efforts.

heldss

Also, do I have to delete the arm32v7 files, and remove them from the platforms section as well, if I am not using a Raspberry Pi?

Thanks a lot again.

heldss

I tried it but it's showing a 500 error in the Azure portal. How can I tell whether my camera is connected or not?

Nathan Glover

Thank you very much, you've saved me countless hours over the last couple days while I've been learning Azure IoT.

Your Dockerfiles for OpenCV are also a life saver (I found the ones in the Azure-Samples repo github.com/Azure-Samples/Custom-vi... failed to build properly).

p.s. nice surname.

Chandra Mohan

Very helpful tutorial and relevant to my current work.

Dave Glover

Hey, that is awesome, and great that you find it helpful.

Would you mind telling me how you found the post? Did you come to dev.to, and what did you search for? I was not sure how to tag this post.

Cheers and thanks Dave

Chandra Mohan

I follow you on Twitter and came across this post in my Twitter updates. Also, I only came to know about dev.to through your post.

Dave Glover

ah cool - thanks for the follow. Feel free to ask any questions. Cheers Dave

heldss

Hi, I need a little help. I am receiving this in my camera capture log. The camera is not opening. I am using Ubuntu and not a Raspberry Pi. Following is my log output:

Camera Capture Azure IoT Edge Module. Press Ctrl-C to exit.
ALSA lib pcm.c:2266:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.rear
ALSA lib pcm.c:2266:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.center_lfe
ALSA lib pcm.c:2266:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.side
ALSA lib confmisc.c:1286:(snd_func_refer) Unable to find definition 'cards.ICH.pcm.surround71.0:CARD=0'
ALSA lib conf.c:4292:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directory
ALSA lib conf.c:4771:(snd_config_expand) Evaluate error: No such file or directory
ALSA lib pcm.c:2266:(snd_pcm_open_noupdate) Unknown PCM surround71
ALSA lib setup.c:548:(add_elem) Cannot obtain info for CTL elem (MIXER,'IEC958 Playback Default',0,0,0): No such file or directory
ALSA lib setup.c:548:(add_elem) Cannot obtain info for CTL elem (MIXER,'IEC958 Playback Default',0,0,0): No such file or directory
ALSA lib setup.c:548:(add_elem) Cannot obtain info for CTL elem (MIXER,'IEC958 Playback Default',0,0,0): No such file or directory
ALSA lib pcm.c:2266:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.hdmi
ALSA lib pcm.c:2266:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.hdmi
ALSA lib pcm.c:2266:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.modem
ALSA lib pcm.c:2266:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.modem
ALSA lib pcm.c:2266:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.phoneline
ALSA lib pcm.c:2266:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.phoneline

Tam66662

@heldss

All of those "ALSA lib" lines have nothing to do with your camera, those are a result of your audio device, so you can ignore them as they are just warnings.

Is there any LED on your camera that lights up when the camera-capture module is running? For example, I have a Logitech C922 webcam, and on the device you can see the white LEDs turn on when the camera-capture module starts running.

If you see no lights, or suspect the camera-capture module isn't connecting to your camera, then most likely your camera device index doesn't match what is in your deployment.template.json file. For example, my webcam shows up as "/dev/video0" and "/dev/video1" whenever I plug it in. Find yours by opening a terminal window, unplugging your camera, typing "ls /dev/video*" to see what shows up, then plugging in your camera and typing "ls /dev/video*" again to determine your camera's index number. Then, in your deployment.template.json, edit the "PathOnHost" and "PathInContainer" parameters to match your device.

                "modules": {
                    "camera-capture": {
                        "version": "1.0",
                        "type": "docker",
                        "status": "running",
                        "restartPolicy": "always",
                        "settings": {
                            "image": "${MODULES.CameraCaptureOpenCV.amd64}",
                            "createOptions": {
                                "Env": [
                                    "Video=0",
                                    "azureSpeechServicesKey=d4f26304e1cc4507b0185e9f257ff292",
                                    "AiEndpoint=http://image-classifier-service:80/image"
                                ],
                                "HostConfig": {
                                    "PortBindings": {
                                        "5678/tcp": [
                                            {
                                                "HostPort": "5678"
                                            }
                                        ]
                                    },
                                    "Devices": [
                                        {
                                            "PathOnHost": "/dev/video0",
                                            "PathInContainer": "/dev/video0",
                                            "CgroupPermissions": "mrw"
                                        },

Restart your camera-capture from Terminal with "iotedge restart camera-capture", and see if it works.

heldss

Hi,
I am facing this issue.
Camera Capture Azure IoT Edge Module. Press Ctrl-C to exit.
ALSA lib pcm.c:2266:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.rear
ALSA lib pcm.c:2266:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.center_lfe
ALSA lib pcm.c:2266:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.side
ALSA lib pcm_route.c:867:(find_matching_chmap) Found no matching channel map

Dave Glover

Hey there - as Tam pointed out, the ALSA messages are to do with audio and are just warnings. I think there must be an issue with your USB camera and OpenCV. What camera are you using? Cheers Dave