In times of pandemic and the rapid growth of the IoT, edge computing is becoming a necessary technology.

Edge computing: What it is and what the benefits are

Edge computing is defined as a model that moves storage and computing resources closer to the end user or device (in the case of IoT). This allows for significant bandwidth savings and faster response times. Interestingly, one of the factors that has significantly influenced the development of this technology in recent times is the pandemic, which spawned a huge demand for computing power and storage at the edge of the network. Huge numbers of remote workers uploading large files, students taking advantage of online learning, or patients consulting their doctors remotely – all of this caused significant network load and increased latency. In this article, we will introduce the concept of edge computing and outline its benefits.

Edge computing definition

The origins of edge computing can be traced back to the 1990s. Even back then there were ideas to introduce nodes located closer to the user for the delivery of cached content such as images or videos. Edge computing is strongly associated with the Internet of Things and is based on the concept of processing data by end devices or micro-data centres operating in their vicinity.

There are many different definitions of edge computing on the Internet. However, the easiest way to define this technology is as a network of micro-data centres that process data locally and send the results to either a central database operating in the cloud or a company's own data centre.

Following the above definition, it is quite obvious that edge computing can become a great complement to cloud computing, which is increasingly used not only by large corporations but also by smaller companies.
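The local-processing-plus-forwarding model described above can be sketched in a few lines of code. This is only an illustrative sketch: the names (`EdgeNode`, the summary format) are assumptions, not a real API. The idea is that an edge node buffers raw readings, reduces them locally, and forwards only a compact summary upstream.

```python
# Minimal sketch of an edge node: raw sensor readings are processed
# locally, and only a compact summary is forwarded to the cloud.
# EdgeNode and its summary format are illustrative, not a real API.

from statistics import mean


class EdgeNode:
    def __init__(self, window_size=60):
        self.window_size = window_size  # raw readings per summary
        self.buffer = []

    def ingest(self, reading):
        """Store a raw reading locally; return a summary when full."""
        self.buffer.append(reading)
        if len(self.buffer) >= self.window_size:
            return self.flush()
        return None

    def flush(self):
        """Reduce the buffered readings to one summary record."""
        summary = {
            "count": len(self.buffer),
            "min": min(self.buffer),
            "max": max(self.buffer),
            "avg": mean(self.buffer),
        }
        self.buffer.clear()
        return summary  # only this record travels to the cloud


node = EdgeNode(window_size=4)
summary = None
for value in [21.0, 21.5, 22.0, 21.5]:
    result = node.ingest(value)
    if result is not None:
        summary = result

print(summary)  # one summary record upstream instead of four raw readings
```

With a window of four readings, a single record replaces four raw values; with realistic window sizes and sampling rates, the bandwidth saving is proportionally larger.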

Benefits of edge computing

One of the primary advantages of edge computing technology is the already mentioned reduction in Internet bandwidth requirements. For this reason, it is a great solution for locations where Internet access is limited. Moreover, moving data processing closer to where the data is created reduces the risk of downtime caused by Internet connection failures.

Another important advantage of edge computing is the significant reduction of delays in information processing. A common benchmark for real-time operation is a latency of no more than one millisecond, which is only achievable if the data centre is located no more than about 150 km from the end device.
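A rough back-of-envelope check of the 150 km figure, assuming signals travel through optical fibre at roughly 200,000 km/s (about two-thirds of the speed of light) and ignoring routing, queueing and processing time:

```python
# Back-of-envelope propagation delay. Assumes light in optical fibre
# travels at ~200,000 km/s (about two-thirds of c); real-world latency
# also includes routing, queueing and processing time.

FIBRE_SPEED_KM_PER_MS = 200.0  # ~200,000 km/s expressed per millisecond


def one_way_delay_ms(distance_km):
    return distance_km / FIBRE_SPEED_KM_PER_MS


print(one_way_delay_ms(150))   # 0.75 ms -- within the 1 ms budget
print(one_way_delay_ms(1500))  # 7.5 ms -- far too slow for real time
```

Propagation alone eats most of the 1 ms budget at 150 km, which is why a distant cloud data centre cannot meet hard real-time requirements no matter how fast its servers are.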

Edge devices are, of course, vulnerable to attacks, but edge computing provides somewhat more security because data is not concentrated on centralised servers. Instead, fragmented data is stored across various edge devices, making a large-scale attack more difficult.

Of course, one of the most important factors is cost. In the case of edge computing, the initial investment can be quite high. However, in the long run (if properly implemented), this technology brings significant savings thanks to its data processing model: small edge devices are becoming increasingly powerful and efficient, while transfer and storage costs stay under control even as data volumes grow.

The rapid development of various IoT-related technologies requires fast data processing. Autonomous cars are a good example. In-vehicle data processing avoids delays, and at high speed even milliseconds are of great importance – a car simply cannot afford to wait for data to be processed remotely. Critical data must therefore be analysed at the source. While we are on the subject of future cars, it is also worth considering security: such a vehicle becomes an endpoint that must be properly secured. By combining the advantages of cloud and edge computing, passengers will be able to feel safe.