In contrast to Cloud Computing, Edge Computing refers to connected computing in a decentralised fashion at the edge of the network. The idea is to move applications, data and services away from a centralised data center and shift them to the outer boundaries of a network (hence “edge computing”). In other words, data streams and resources are at least partially processed on-site, where they are created or collected, while still benefiting from the advantages of the cloud.
This approach relies on devices that are not necessarily connected to the network in a permanent fashion, such as sensors, controllers, smartphones or notebooks.
Edge computing also encompasses numerous technologies such as sensor networks, data collection through mobile devices, signal processing, peer-to-peer systems and ad-hoc networking.
This architecture makes Edge Computing particularly interesting for Internet of Things (IoT) applications.
Arguably, the history of computing has always oscillated between the edge and the cloud. The very first computers were neither edge nor cloud computers, because they were not interconnected. As computers became interconnected, users could log into mainframes via “terminals”, which arguably made those mainframes the first cloud computers.
When the Personal Computer (PC) was invented, however, computing power shifted back to the edge, until the Internet re-centralised it on servers and cloud machines. It is only because computers are becoming smaller, more efficient and more interconnectable again that computing power is now moving back towards the edge.
Edge-based applications significantly reduce Internet traffic, and therefore costs and waiting times. There is no need for a centralised data center, which reduces the risk of data bottlenecks and single points of failure. Furthermore, concerns around data privacy and data ownership are greatly reduced, as the data never physically leaves the space where it is generated.
There is also a security advantage, as the edge system is confined to the local, “physical” network.
The disadvantages of edge systems are that they take more effort to deploy and maintain, and that the data is more distributed, which can also impact scalability.
Edge computing applications are becoming increasingly popular in any industry that is “up-smarting” physical space, whether it is on the move or not. This can include making trains and planes more interconnected, autonomous driving, or on-site computation in shipping and logistics. Stationary applications include production factories, smart buildings, public spaces, traffic systems and the monitoring of agricultural infrastructure.
Big corporations that handle sensitive data, in particular, are increasingly reluctant to send data to a connected cloud system. They prefer the “smartness” to develop on-site, so that the building or office space can make “decisions”, such as switching off unused appliances to save energy, without sending anything other than analytics and control data to a centralised hub.
This strategy also increases the level of security, as no outside system is involved in triggering control actions on-site.
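To make this concrete, here is a minimal sketch of such an on-site decision loop. All names, the idle-time policy and the analytics format are hypothetical, invented for illustration: the point is simply that control actions happen locally, and only aggregate analytics ever leave the site.

```python
from dataclasses import dataclass

@dataclass
class Appliance:
    name: str
    powered_on: bool
    idle_minutes: int  # time since last use

IDLE_LIMIT = 30  # hypothetical policy: switch off after 30 idle minutes

def control_cycle(appliances):
    """Run one on-site decision cycle.

    The switch-off itself is a local control action; only the
    aggregate analytics dictionary would be sent to a central hub.
    """
    analytics = {"switched_off": 0, "still_on": 0}
    for appliance in appliances:
        if appliance.powered_on and appliance.idle_minutes >= IDLE_LIMIT:
            appliance.powered_on = False   # local control action
            analytics["switched_off"] += 1
        elif appliance.powered_on:
            analytics["still_on"] += 1
    return analytics  # the only data that leaves the site

office = [
    Appliance("projector", True, 45),
    Appliance("coffee machine", True, 5),
    Appliance("printer", False, 120),
]
print(control_cycle(office))  # {'switched_off': 1, 'still_on': 1}
```

The raw sensor readings (which appliance, how long idle) never leave the building; the hub only learns how many devices were switched off.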
In the future we can expect various edge computing systems to interact with each other through agreed standards. I mentioned an example of this in my other blogpost: a traffic light system at an intersection is an edge system, and so are the self-driving cars approaching it. Not only could self-driving cars from various manufacturers exchange data with each other over a local wireless connection without the Internet, the cars could also connect to the traffic light system while in its proximity.
Such systems can be imagined in many different contexts, including office buildings, retail, public space or even at home.
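The traffic light example above can be sketched as a simple local message exchange. The message format and field names here are purely illustrative assumptions; a real deployment would use an agreed standard (for example, a V2X protocol) rather than ad-hoc JSON.

```python
import json

def make_announcement(sender_id, role, state):
    """Serialise a status broadcast for the local wireless network.

    Hypothetical message format: {"sender", "role", "state"}.
    """
    return json.dumps({"sender": sender_id, "role": role, "state": state})

def handle_announcement(raw, local_state):
    """A car's reaction to a nearby edge system's broadcast.

    No Internet connection is involved; the message travels only
    over the local link while the car is in proximity.
    """
    msg = json.loads(raw)
    if msg["role"] == "traffic_light" and msg["state"] == "red":
        local_state["action"] = "brake"
    return local_state

light_msg = make_announcement("light-07", "traffic_light", "red")
car_state = handle_announcement(light_msg, {"action": "cruise"})
print(car_state)  # {'action': 'brake'}
```

The same pattern, broadcast a small state message locally and let nearby edge systems react, carries over to office buildings, retail or the home.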
The future remains interesting.