DEV Community

Microsoft Azure

Using Cognitive Services Containers with Azure IoT Edge

Paul DeCarlo ・ 8 min read


In this post we will cover how to enable use of Cognitive Services Containers within an Azure IoT Edge Deployment.


IoT Solutions often involve a number of challenges related to the environment where devices are ultimately deployed. These can include intermittent access to the internet, challenges related to remote updates of the device code, and security of the device itself. On the software side, Azure IoT Edge addresses these challenges through a variety of features that were designed specifically for IoT scenarios.

Here are some brief explanations of how IoT Edge addresses these challenges:

  • Intermittent access to the internet is addressed by buffering outbound device telemetry locally on the device until connectivity is restored; the telemetry is then sent along with the original timestamps from when it was generated.
  • Remote updates can be configured using targeted deployment configurations, which specify the list of modules (containerized applications) that are to run on a given device.
  • Security is enforced in two ways: only devices registered with an Azure IoT Hub instance may submit telemetry data to its respective Azure-hosted endpoint, and a local Hardware Security Manager runs on the device itself to transition trust from an underlying hardware root of trust (if available), securely bootstrap the IoT Edge runtime, and monitor the integrity of its operations.

It is important to note that IoT Edge accomplishes safe deployment of code through the use of containerized modules. Containers allow for easy distribution via familiar docker pull commands and enable runtime recovery of modules via container restarts in mission-critical IoT deployments. This makes containers an ideal candidate for IoT Solutions where an OS environment is available. The IoT Edge runtime can run on Linux X64, Linux ARM32, and Windows X64 environments. Once enabled, you can take advantage of all the benefits mentioned above in addition to some handy features in the Device SDKs which power custom IoT Edge modules.

Azure Cognitive Services allow developers to bring intelligent algorithms into apps, websites, and bots so that they can see, hear, speak, and understand, by exposing common AI functionality through a Software as a Service offering. Microsoft recently announced support for running a subset of its line of Cognitive Services locally in the form of containers. These include services for Computer Vision, Face, LUIS, and various forms of Text Analytics. It is important to note that the Computer Vision and Face containers are currently in preview, but you may request access if you are interested in trying them out by filling out the Cognitive Services Vision Containers Request form. Please be aware that the Cognitive Services Containers currently only support Linux X64 and require that internet connectivity be re-established every ten minutes; otherwise, the container will stop producing results when its API endpoint is queried.

Leveraging Azure IoT Edge and Cognitive Services Containers together, we can build IoT Solutions that allow for local AI processing in environments where internet connectivity may be intermittent. We no longer need to rely on external services to produce insights from our data: everything can be processed locally, and deployed and configured from the cloud to enable rollout of updates in a securely designed fashion at scale.


To begin development, ensure that you have a recent installation of VSCode on your dev machine. You will also need to install the IoT Edge Extension for VSCode.

First create a new IoT Edge Solution with F1 => "Azure IoT Edge: New IoT Edge Solution"

Create New Solution

When asked to create a module, leave the default values and select "C# module" for the module template.

Next, open the deployment.template.json file included in the solution directory. If you are using container images that are in preview, you will have been supplied a username and password to connect to a private container repository. If this is the case, update the registryCredentials section as shown below, to enable pulling images from the private repository.

    "registryCredentials": {
      "containerpreview": {
        "username": "{YourUsername}",
        "password": "{YourPassword}",
        "address": ""
      }
    }

Next we need to create a Cognitive Services resource in Azure:
Create Cognitive Services Resource

After you have created the Cognitive Services resource, obtain the API Key for the service; this will become the value used later for {YourCogServicesApiKey}:

Get Cognitive Services API Key

Now we can begin adding a module configuration to specify deployment of a Cognitive Services Container. We will start using the "cognitive-services-recognize-text" container. Keep in mind that the process will be similar for other Cognitive Services Containers. You will want to look at the "modules" section of deployment.template.json and update it with the following additional entry (be sure to replace {YourCogServicesLocale} and {YourCogServicesApiKey} with appropriate values):

        "cognitive-services-recognize-text": {
          "version": "1.0",
          "type": "docker",
          "status": "running",
          "restartPolicy": "always",
          "settings": {
            "image": "",
            "createOptions": {
              "Cmd": [
                "Eula=accept",
                "Billing=https://{YourCogServicesLocale}.api.cognitive.microsoft.com/vision/v2.0",
                "ApiKey={YourCogServicesApiKey}"
              ],
              "HostConfig": {
                "PortBindings": {
                  "5000/tcp": [
                    {
                      "HostPort": "5000"
                    }
                  ]
                }
              }
            }
          }
        }

The createOptions values are inferred from the documentation for running the container locally, outside of IoT Edge, i.e.:

    docker run --rm -it -p 5000:5000 --memory 4g --cpus 1 \
      {YourImage} \
      Eula=accept \
      Billing=https://{YourCogServicesLocale}.api.cognitive.microsoft.com/vision/v2.0 \
      ApiKey={YourCogServicesApiKey}

This specifies a container which maps port 5000 on the host to port 5000 inside the container, along with Cmd entries for setting the Eula, Billing, and ApiKey values.

If you are curious how the appropriate syntax was obtained for the deployment.template.json module entry: I executed the command above and ran docker inspect on the container, which provides clues on how to formulate the appropriate structure for the "Cmd" section. For a full list of available container create options, see the Docker Engine API documentation.
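As a rough sketch of that workflow: the Cmd and HostConfig fields that docker inspect reports are the same fields the Docker Engine API accepts at container-create time, so they can be lifted into createOptions largely as-is. The values below are illustrative placeholders, not output from a real container:

```python
import json

# Illustrative subset of a `docker inspect` result (placeholder values).
inspect_result = {
    "Config": {
        "Cmd": [
            "Eula=accept",
            "Billing={YourBillingEndpoint}",
            "ApiKey={YourCogServicesApiKey}",
        ]
    },
    "HostConfig": {
        "PortBindings": {"5000/tcp": [{"HostPort": "5000"}]},
    },
}

# Lift the relevant fields straight into a createOptions structure.
create_options = {
    "Cmd": inspect_result["Config"]["Cmd"],
    "HostConfig": {
        "PortBindings": inspect_result["HostConfig"]["PortBindings"]
    },
}

print(json.dumps(create_options, indent=2))
```

Note that when the deployment manifest is ultimately submitted to IoT Hub, createOptions is carried as a JSON string, which is why a round-trip through json.dumps is a useful sanity check.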

Now to call the API from our C# module, add the following Global variables to SampleModule.cs:

// ApiKey is not needed on client side talking to a container
private const string ApiKey = "000000000000000000000000000000";
//Note: Endpoint value matches the module name used in deployment.template.json to allow internal resolution from custom modules
private const string Endpoint = "http://cognitive-services-recognize-text:5000";
private static HttpClient client = new HttpClient { BaseAddress = new Uri(Endpoint) };

Next, add the following method:

    // Requires: using System; using System.IO; using System.Net.Http;
    // using System.Net.Http.Headers; using Newtonsoft.Json; using Newtonsoft.Json.Linq;
    private static async Task ExtractText(Stream image)
    {
        string responseString = string.Empty;

        using (var imageContent = new StreamContent(image))
        {
            imageContent.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");

            var requestAddress = "/vision/v2.0/recognizetextDirect";

            using (var response = await client.PostAsync(requestAddress, imageContent))
            {
                var resultAsString = await response.Content.ReadAsStringAsync();
                var resultAsJson = JsonConvert.DeserializeObject<JObject>(resultAsString);

                if (resultAsJson["lines"] == null)
                {
                    // No recognized lines; keep the raw response for troubleshooting
                    responseString = resultAsString;
                }
                else
                {
                    foreach (var line in resultAsJson["lines"])
                    {
                        responseString += line["text"] + "\n";
                    }
                }

                // Write the result to stdout so it shows up in docker logs
                Console.WriteLine(responseString);
            }
        }
    }

The process for calling the appropriate container endpoint was determined by perusing the source code of the cognitive-services-containers-samples/Recognize-Text sample.
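To make the parsing branch in ExtractText concrete, here is the same logic sketched in Python. The field names ("lines", "text") follow the C# code above; the sample payload is invented for illustration and is not real container output:

```python
import json

def lines_to_text(result_json):
    # Mirrors ExtractText: if "lines" is absent, fall back to the raw
    # payload; otherwise concatenate each line's "text" value.
    result = json.loads(result_json)
    if result.get("lines") is None:
        return result_json
    return "".join(line["text"] + "\n" for line in result["lines"])

# Hypothetical response shape for illustration only.
sample = json.dumps({"lines": [{"text": "THE ADDAMS"}, {"text": "FAMILY"}]})
print(lines_to_text(sample))
```

The fallback branch matters in practice: when the container returns an error body instead of recognized text, logging the raw payload is what surfaces the problem in the module logs.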

To trigger the ExtractText method, supply a Stream for a suitable image file to the method and monitor the result using docker logs -f <container_id>.

Here is an example image and the response produced from a screengrab of the NES Classic "The Addams Family":


If you think this kind of thing is cool (translating retro video games), I have a full open-source project which makes use of IoT Edge and Cognitive Services to do exactly that @

When your code is ready to deploy to a device, follow the instructions for how to deploy modules from VSCode.


While I have only explicitly demonstrated the process for working with the cognitive-services-recognize-text container, the overall process can be applied to other Cognitive Services Containers.

Here is the general flow you will want to follow:

  1. Create a proper module entry for the Cognitive Services container in deployment.template.json (also provide docker registry credentials if using a preview image or images stored in a private docker registry). If using multiple Cognitive Services Containers as modules, be sure to map a unique host port to the internal port 5000 for each entry.
  2. Peruse the cognitive-services-containers-samples repo for examples on how to interact with the service in question.
  3. Implement a method which calls the appropriate API endpoint in your Custom Module code using information obtained in step 2, and employ logging to stdout to track module output and status at runtime.
  4. Deploy and test the code by watching the docker logs of the module in question.
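As a sketch of the port rule in step 1, module entries for several containers can be generated so that each one maps a unique host port onto the internal port 5000. The module names and image placeholders below are illustrative, not real registry paths:

```python
import json

def make_module_entry(image, host_port):
    # Build one IoT Edge module entry; createOptions is stringified,
    # as required in the final deployment manifest.
    return {
        "version": "1.0",
        "type": "docker",
        "status": "running",
        "restartPolicy": "always",
        "settings": {
            "image": image,
            "createOptions": json.dumps({
                "HostConfig": {
                    "PortBindings": {
                        "5000/tcp": [{"HostPort": str(host_port)}]
                    }
                }
            }),
        },
    }

# Each container listens on 5000 internally, so host ports must differ.
modules = {
    name: make_module_entry(image, port)
    for name, image, port in [
        ("cognitive-services-recognize-text", "{RecognizeTextImage}", 5000),
        ("cognitive-services-face", "{FaceImage}", 5001),
    ]
}
```

This keeps the containers from colliding on the host while each still sees requests arrive on its own port 5000.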


With the announcement of Cognitive Services Containers, we can now bring AI services into an IoT Edge deployment configuration to enable AI scenarios in IoT Edge Solutions without relying on external cloud services. This provides extremely powerful AI capabilities without the latency and overhead of external services. The possibilities are endless: imagine using localized face recognition for access systems, live-translating video game text from one language to another, or processing license plate text on vehicles using an attached camera. These are exactly the types of scenarios that will be at the crest of the next wave of IoT solutions, i.e. scenarios which employ localized AI functionality in disconnected environments to produce intelligent decisions on-site. It will be interesting and exciting to see what kinds of systems will be created using AI processing paired with IoT solutions in the next five years. Do you have any cool ideas you would like to see built on these concepts? Drop a line in the comments and let us know what kinds of things you think will be part of the next wave of AI-enhanced IoT solutions!

Until next time,

Happy Hacking!


Emmanuel Auffray

Quick one please.

I get how to specify the cpu and ram of a docker run, but struggle to find how to specify this in the deployment template of edge. Would there be a doc in particular I could review?


Paul DeCarlo (Author)

Great question! You can find examples of the Memory and Cpu* container create options in the Docker create options documentation.


Mohammed Abdellah DERFOUFI

Very helpful, thank you !

When I configured the PortBindings, the port wasn't exposed.

So I added ExposedPorts before PortBindings to expose the port:

    "ExposedPorts": {
        "5000/tcp": {}
    },
    "HostConfig": {
        "PortBindings": {
            "5000/tcp": [
                {
                    "HostPort": "5000"
                }
            ]
        }
    }

Edit: Apparently, it's a known issue.

tam66662

Thank you! This was extremely helpful guidance. I wish Microsoft would have included this information in their instructions for ACS containers (

I searched high and low online for any articles related to creating IoT Edge Modules for existing docker containers such as this, but couldn't find anything. I was only able to find your article by a cryptic search for "iot edge module docker \"eula\"".

Thanks again for your writeup, it got me one step further!