Roy Segal

Algorithms in Embedded Computing

Abstract

One of the main challenges in embedded computing is how to execute complex calculations on multiple streams of data effectively with respect to real-time processing, throughput and packet loss.
This affects how we handle resources (multi-processing, multi-threading, the scheduler) and even which programming language we use.
The purpose of this document is to lay out shared guidelines for algorithm developers and embedded software developers.

The Problem

The problem starts with perspective. The KPIs for an algorithm developer are very different from those of a SW developer.
Algorithmic KPIs include recall, precision and accuracy, whereas SW KPIs include run-time, memory usage, CI compatibility, framework and infrastructure.
This gap is exactly what makes the following guidelines so important. It isn't unheard of for whole projects and plenty of development time to go to waste when they aren't followed closely.

System Approach

The algorithmic value chain is composed of three dependent layers:

  • Physical layer - sensors (e.g. video cameras, microphones) and data sources (e.g. texts, raw multi-dimensional vectors).

  • Data layer - this includes data handling and annotation procedures. These vary widely and are dependent on the data types.

  • Computational layer - the calculations performed based on the data.

Both the physical and computational layers interact heavily with the rest of the system, in the following ways.

Physical Layer

This layer concerns both hardware and software engineers. When selecting the right sensor for the algorithmic calculations, we must make sure it is supported by the system.
The sensor could be analog with no ADC component present, or it could be integrated with an onboard controller which doesn't allow the data manipulation the computational layer needs.
Therefore, the following are key factors to consider:

  • Data transfer protocols - Are they supported by the software? How flexible are they? How much latency do they add? How will the computational layer deal with incomplete data? (See the sketch after this list.)

  • Hardware compatibility - Will the sensor have the correct interface?

  • Debug and analysis - How easy is it to analyze the data? Is a simulator included? Is the source code available?

  • Software compatibility - Is the OS distribution supported by the component? Is there an API to configure the component and receive data streams?
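
To make the "incomplete data" question concrete, here is a minimal C++ sketch of a receive loop that forwards only complete frames to the computational layer and counts drops, so packet loss and latency can be benchmarked. The Frame type and the read_frame/process_frame callbacks are placeholders standing in for whatever driver API the sensor actually exposes.

```cpp
#include <cstddef>
#include <cstdint>
#include <functional>
#include <vector>

// Placeholder frame type -- a real driver would define its own.
struct Frame {
    std::vector<std::uint8_t> payload;
    bool complete = false;   // false if the transfer ended early or a CRC failed
};

// Forwards only complete frames to the computational layer and counts drops.
// `read_frame` wraps the actual transfer protocol; `process_frame` is the
// entry point of the computational layer.
std::size_t receive_loop(
    const std::function<bool(Frame&, int /*timeout_ms*/)>& read_frame,
    const std::function<void(const std::uint8_t*, std::size_t)>& process_frame,
    std::size_t max_frames) {
    std::size_t dropped = 0;
    for (std::size_t i = 0; i < max_frames; ++i) {
        Frame f;
        if (!read_frame(f, /*timeout_ms=*/50) || !f.complete) {
            // Timeout or incomplete data: the algorithm must define the policy
            // here (drop, interpolate, or mark the frame as low confidence).
            ++dropped;
            continue;
        }
        process_frame(f.payload.data(), f.payload.size());
    }
    return dropped;   // feed this into the packet-loss benchmark
}
```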

Computational Layer

This layer requires the most cooperation and coordination between the algorithm and the software developers. Work methods in conventional algorithm development revolve around training and tuning neural networks on high-end hardware. Looking at embedded computing, however, there is an inherent collision because of the following characteristics:

  • Low energy - Computing usually has to stay efficient in order to extend battery life and keep CPU consumption low.

  • Volume - Memory is a limited resource, which can come down to ~100KB of RAM on MCUs. This requires strict memory management and techniques that can affect calculations relying on large buffers (see the ring-buffer sketch after this list).

  • Run-time - An embedded computer's architecture is very different from PCs and large servers: CPU frequency is lower, fewer cores are available and throughput is limited. Therefore, calculations which seem ordinary on a PC may strain an edge device. This degrades overall performance and increases latency (with a major impact on real-time behavior), resulting in poor algorithmic results.

  • Reliability and Testing - Automated testing is a major concern in SW development, which is why all written code must integrate easily into CI automation and also support unit testing.

  • Communications - This is where many factors should be taken into consideration by an algorithm developer: packet loss, throughput restrictions, real-time issues and latency. All of these must be benchmarked, and we need to make sure the system built around the algorithm can handle them.
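
To illustrate the memory point above, here is a minimal sketch of a fixed-capacity ring buffer, the kind of statically allocated structure typically used when the RAM budget is on the order of ~100KB. The element type and capacity below are arbitrary examples.

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <optional>

// Fixed-capacity ring buffer: all memory is allocated up front, so RAM usage
// is known at compile time and no heap fragmentation can occur.
template <typename T, std::size_t Capacity>
class RingBuffer {
public:
    bool push(const T& value) {
        if (count_ == Capacity) return false;   // caller decides: drop or block
        data_[(head_ + count_) % Capacity] = value;
        ++count_;
        return true;
    }

    std::optional<T> pop() {
        if (count_ == 0) return std::nullopt;
        T value = data_[head_];
        head_ = (head_ + 1) % Capacity;
        --count_;
        return value;
    }

    std::size_t size() const { return count_; }

private:
    std::array<T, Capacity> data_{};
    std::size_t head_ = 0;
    std::size_t count_ = 0;
};

// Example: a queue of 256 samples costs exactly 1 KB of statically known RAM
// (plus two counters), which fits a ~100 KB budget comfortably.
static RingBuffer<std::int32_t, 256> sample_queue;
```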

Development Stage

Framework

Frameworks vary widely, and this is a major issue we should pay close attention to. Some examples: Matlab, Python (2 vs. 3), C, C++, etc.
Programming languages have a huge impact on the edge device's system design and on issues like IPC, logging and integration. This is why the choice is the realm of the software developer, and why algorithm developers must make the necessary adjustments in order to increase work efficiency.
Matlab is the least preferable from a software standpoint because it relies on a high-resource environment, so heavy code refactoring will be needed.
In addition, floating point precision varies between platforms and programming languages. Thus, different outputs should be expected.
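
A quick way to see why bit-exact results shouldn't be expected is to accumulate the same value in single and double precision, as a development workstation and an MCU without a double-precision FPU might effectively do. This is a generic illustration, not tied to any specific platform:

```cpp
#include <cstdio>

// Accumulate the same value a million times in float and in double.
// The float sum drifts visibly, so algorithm tests should compare results
// against a tolerance rather than expect exact equality across platforms.
int main() {
    float  sum_f = 0.0f;
    double sum_d = 0.0;
    for (int i = 0; i < 1000000; ++i) {
        sum_f += 0.1f;
        sum_d += 0.1;
    }
    std::printf("float accumulation:  %f\n", sum_f);
    std::printf("double accumulation: %f\n", sum_d);
    return 0;
}
```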

Dependencies

External libraries are often used in the algorithm. If so, the following must be taken into account:

  • Support for the edge device's OS and programming language. For example, if one develops an algorithm in Matlab and uses Matlab's libraries, there may be no way to use the same libraries and functions on the target.

  • Versioning - make sure the version integrated and tested during algorithm development is the same one integrated within the OS. Sometimes a certain version won't exist for the edge device, so backporting may be needed in order to maintain similarity. This is not mandatory if we understand all the changes merged between the versions and no major version was bumped (see the sketch below).
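
One way to enforce this is a compile-time check against the library version the algorithm was validated with, so a silent backport or OS upgrade fails the build instead of quietly changing results at runtime. The header and macro names below are hypothetical placeholders for whatever the real dependency provides:

```cpp
// Hypothetical dependency: "fastfilter", assumed to expose version macros.
// Replace the header and macro names with those of the real library.
#include "fastfilter/version.h"

static_assert(FASTFILTER_VERSION_MAJOR == 2,
              "algorithm was validated against fastfilter 2.x only");
static_assert(FASTFILTER_VERSION_MINOR >= 4,
              "algorithm relies on behavior introduced in 2.4");
```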

Thinking About Deployment

Algorithms and software often rely heavily on configuration files to change parameters, in order to maximize performance and adjust to changing surroundings. This can prevent a costly software update. However, the flexibility of changing parameters can result in exposing thousands of parameters, so in order to simplify and manage them all, three configuration levels are defined:

  • Level 1 - A short list of at most 10 presets should be decided and exposed to the user. A single preset could contain a set of parameter values. This will keep the operational context and solve most cases.

  • Level 2 - All parameters are saved in JSON files. When a more pinpoint change is needed, one can alter the values in the file itself. This requires good knowledge of the file system and the ability to modify the file.

  • Level 3 - This is the last resort and the least desired level. If a change is needed in the algorithm itself that isn't exposed in the configuration files, a software update which includes the change is required.

Levels 1 and 2 must be considered carefully in order to avoid unnecessary and costly software updates. To do that, the algorithm developer must define the presets and exposed parameters precisely for the software developer.
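
As a rough illustration of Levels 1 and 2, the sketch below defines a hypothetical parameter struct and a small fixed table of presets exposed to the user. In a real system the field names, values and preset names come from the algorithm developer, and Level 2 would override individual fields from the JSON files on the device.

```cpp
#include <cstddef>
#include <optional>
#include <string_view>

// Hypothetical parameter set -- the real fields are defined by the algorithm developer.
struct AlgoParams {
    float detection_threshold;
    int   window_size;
    bool  denoise;
};

// Level 1: a short, fixed list of presets exposed to the user.
struct Preset {
    std::string_view name;
    AlgoParams params;
};

constexpr Preset kPresets[] = {
    {"indoor",    {0.60f, 32, true }},
    {"outdoor",   {0.75f, 64, false}},
    {"low_power", {0.80f, 16, false}},
};

// Resolve a user-selected preset; Level 2 (JSON) overrides would be applied
// on top of the returned values, and anything beyond that is Level 3.
std::optional<AlgoParams> load_preset(std::string_view name) {
    for (const Preset& p : kPresets) {
        if (p.name == name) return p.params;
    }
    return std::nullopt;   // unknown preset: fall back to a documented default
}
```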

Testing

In order to make sure the transition from the development platform to the edge device was successful, testing is needed. Usually this is done by treating the algorithm itself as a black box, feeding it a predefined input and configuration and expecting a specific result.
Different use-cases should be covered, and these tests can be integrated into CI.
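
A minimal sketch of such a black-box check, assuming the ported algorithm is exposed through a single callable (the name and signature are placeholders): a recorded input is run through the algorithm and compared element-wise against a golden output produced on the development platform, within a tolerance that also absorbs the floating-point differences mentioned earlier.

```cpp
#include <cmath>
#include <cstddef>
#include <functional>
#include <vector>

// Black-box comparison against a golden output recorded on the development
// platform. `run_algorithm` is a placeholder for the ported algorithm's entry
// point; the tolerance absorbs cross-platform floating-point differences.
bool matches_golden(
    const std::function<std::vector<float>(const std::vector<float>&)>& run_algorithm,
    const std::vector<float>& input,
    const std::vector<float>& golden,
    float tolerance = 1e-3f) {
    const std::vector<float> actual = run_algorithm(input);
    if (actual.size() != golden.size()) return false;
    for (std::size_t i = 0; i < actual.size(); ++i) {
        if (std::fabs(actual[i] - golden[i]) > tolerance) return false;
    }
    return true;
}
```

A CI job can run one such check per use-case, on the target or an emulator, and fail the pipeline on any mismatch.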
