
Timothy Spann 🇺🇦

Originally published at datainmotion.dev

The Rise of the Mega Edge (FLaNK)

At one point, edge devices were cheap, low-energy and low-powered, with perhaps some old WiFi and a single slow CPU core. Now substantial compute power, memory, GPUs and custom processors have come to the edge.

Sitting on my desk is the NVIDIA Jetson Xavier NX, a massively powerful machine that can easily be used for edge computing. It sports 8 GB of fast RAM, a GPU with 384 NVIDIA CUDA® cores and 48 Tensor Cores, and a fast 6-core 64-bit ARM CPU. This edge device would make a great workstation and is now something that can be affordably deployed in trucks, plants, sensors and other Edge and IoT applications.

https://www.datainmotion.dev/2020/06/unboxing-most-amazing-edge-ai-device.html

Next to that titan of a device is the inexpensive hobby device, the Raspberry Pi 4, which now sports 8 GB of LPDDR4 RAM and a speedy 4-core 64-bit ARM CPU. It can also be augmented with a Google Coral Edge TPU or an Intel Movidius Neural Compute Stick 2.

https://dzone.com/articles/efm-series-using-minifi-agents-on-raspberry-pi-4-w

https://www.datainmotion.dev/2020/02/edgeai-google-coral-with-coral.html
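To make that concrete, here is a minimal sketch of what an image-classification loop on the Pi with a Coral Edge TPU might look like, assuming the tflite_runtime package, Pillow and the Coral Edge TPU runtime are installed; the model, label and image file names are placeholders, not files from the posts above.

```python
# Minimal sketch: image classification on a Raspberry Pi 4 with a Coral Edge TPU.
# Assumes an Edge TPU-compiled, quantized (uint8) model and a labels file; the
# file names below are hypothetical placeholders.
import numpy as np
from PIL import Image
import tflite_runtime.interpreter as tflite

MODEL = "mobilenet_v2_edgetpu.tflite"   # hypothetical model path
LABELS = "imagenet_labels.txt"          # hypothetical labels path

# Load the model and hand execution to the Edge TPU via its delegate.
interpreter = tflite.Interpreter(
    model_path=MODEL,
    experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# Resize the captured image to whatever the model expects (e.g. 224x224).
height, width = input_details["shape"][1:3]
image = Image.open("capture.jpg").convert("RGB").resize((width, height))
interpreter.set_tensor(
    input_details["index"],
    np.expand_dims(np.asarray(image, dtype=np.uint8), 0),
)

# Run inference and report the top class.
interpreter.invoke()
scores = np.squeeze(interpreter.get_tensor(output_details["index"]))
labels = [line.strip() for line in open(LABELS)]
top = int(np.argmax(scores))
print(f"{labels[top]}: {scores[top]}")
```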

These boxes come with fast networking, Bluetooth and modern hardware in small edge devices that can now be deployed en masse, enabling edge computing, fast data capture, smart processing and integration with servers and cloud services. By adding Apache NiFi's subproject MiNiFi (C++ and Java agents), we can easily integrate these powerful devices into a streaming data pipeline.

We can now build very powerful flows from edge to cloud with Apache NiFi, Apache Flink, Apache Kafka (FLaNK) and Apache NiFi - MiNiFi. I can run AI, Deep Learning and Machine Learning, including Apache MXNet, DJL, H2O, TensorFlow, Apache OpenNLP and more, at any and all parts of my data pipeline. I can push models to my edge device, which now has a powerful GPU/TPU and adequate CPU, networking and RAM to do more than simple classification. The NVIDIA Jetson Xavier NX will run multiple real-time inference streams at 60 fps across multiple cameras.
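As a rough illustration of the edge half of such a flow, here is a minimal sketch that appends one JSON reading per line to a local file; a MiNiFi agent configured with a TailFile processor (or ExecuteProcess) could pick up each line and forward it over Site-to-Site to NiFi, and from there to Kafka and Flink. The file path, device name and read_sensor() stub are all hypothetical.

```python
# Minimal sketch of the edge half of a MiNiFi -> NiFi -> Kafka/Flink flow.
# A MiNiFi agent tailing this file would ship each JSON line downstream.
import json
import random
import time
from datetime import datetime, timezone

OUTPUT = "/var/log/sensors/readings.json"   # hypothetical path watched by MiNiFi


def read_sensor():
    """Stand-in for a real sensor read or model inference call on the device."""
    return {"temperature_c": round(random.uniform(20.0, 95.0), 2)}


while True:
    record = {
        "device_id": "jetson-nx-01",                    # hypothetical device name
        "ts": datetime.now(timezone.utc).isoformat(),
        **read_sensor(),
    }
    with open(OUTPUT, "a") as f:
        f.write(json.dumps(record) + "\n")
    time.sleep(5)   # one reading every five seconds
```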

I can run live SQL against these events at every segment of the data pipeline and combine it with machine learning, alert checks and flow programming. It's now easy to build and deploy applications from edge to cloud.
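As a rough sketch of that live SQL step, assuming the readings above land on a Kafka topic called sensor-readings (hypothetical) and a recent PyFlink with the Kafka SQL connector available, a simple streaming alert check might look like this:

```python
# Minimal sketch: live SQL over the sensor events with PyFlink.
# Assumes the JSON records above arrive on a Kafka topic named "sensor-readings"
# and that the Kafka SQL connector jar is on the Flink classpath.
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Declare the Kafka topic as a streaming SQL table.
t_env.execute_sql("""
    CREATE TABLE sensor_readings (
        device_id STRING,
        ts STRING,
        temperature_c DOUBLE
    ) WITH (
        'connector' = 'kafka',
        'topic' = 'sensor-readings',
        'properties.bootstrap.servers' = 'kafka:9092',
        'scan.startup.mode' = 'latest-offset',
        'format' = 'json'
    )
""")

# A simple alert check: continuously emit any reading over 90 C.
t_env.sql_query("""
    SELECT device_id, ts, temperature_c
    FROM sensor_readings
    WHERE temperature_c > 90.0
""").execute().print()
```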

I'll be posting some simple examples in my next article.

By next year, 12 or 16 GB of RAM may be common in edge devices, perhaps with two 8-core CPUs, multiple GPUs and large, fast SSD storage. My edge swarm may be running much of my computing power as my flows, running elastically on public and private clouds, scale up and down based on demand in real time.
