Living on the Edge: What is edge computing and why do we need it?

Date
March 3, 2020
Hot topics 🔥
Tech Insights
Contributor
Dmitry Ermakov

We’re living in the age of ‘stronger, better, faster’, where every piece of software and technology undergoes constant updates to broaden its efficiency and reach. The advent of cloud computing revolutionised the way we handle and process data, giving us an almost limitless platform for storage and computational wizardry. But the bandwidth of our technological highway is quickly becoming cluttered and clogged, with too many data vehicles sharing the same lanes. The extraordinary rise of internet-ready devices (the IoT) and the development of applications requiring real-time processing power are the major causes of the congestion. The result is slow speeds, lag and delays, which are increasingly annoying to some and detrimental to others.

So how have tech nerds and data whisperers around the globe tried to tackle this issue? The answer lies in edge computing.

What is edge computing anyway?

Edge computing is a fairly new paradigm that aims to bring computation and data storage closer to the source where data is gathered and used. This is useful because it removes the need for devices to rely on centralised data sources (clouds) situated thousands of miles away, a setup that causes latency and performance issues whenever network bandwidth is slow or limited.

In simpler terms, edge computing means running fewer processes in the cloud and moving them to local places in order to free up space on our global bandwidth highway. Bringing computation closer to the network’s ‘edge’ reduces the amount of long-distance communication required between users and service providers, and it lets real-time data processing perform at its best without suffering latency issues.

What is the network edge?

This is the place where a device, or the network containing it, connects and communicates with the internet. Unlike origin and cloud servers, which can sit geographically far away, the edge of the network is located very close to the data source. Your computer, or the processor inside an IoT device, is considered the network edge.

An example of edge computing

Before edge computing, facial recognition software in your smartphone (the data-heavy process where your phone takes a photo of your face and correlates its various nodal points to make a positive match) would send the data to the cloud for processing. The information travelled from your phone to a centralised cloud network thousands of miles away, was processed there, and the result was returned to your device. That is a lot of bandwidth for one application; now think about how many times you do it in a day, and how many people do it globally every hour. All of that data causes traffic and bandwidth congestion. Edge computing allows most of this process to be handled on or very close to the device itself, consolidating the data and computation, while only the relevant results are sent to the cloud. This speeds up the process and frees up bandwidth.
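To make the idea concrete, here is a minimal, purely illustrative Python sketch of the edge-first version of that flow. The feature extractor, the similarity threshold and the ‘send to cloud’ step are all stand-ins invented for this example; a real phone would run a hardware-accelerated neural network and its vendor’s own telemetry pipeline.

```python
import json

THRESHOLD = 0.8  # assumed similarity cut-off, chosen purely for illustration

def extract_features(image: bytes) -> list[float]:
    # Stand-in for an on-device face-embedding model; a real phone would
    # run a neural network on dedicated hardware at this step.
    return [b / 255 for b in image[:8]]

def similarity(a: list[float], b: list[float]) -> float:
    # Toy similarity measure: 1.0 means identical feature vectors.
    return 1 - sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def unlock_attempt(photo: bytes, enrolled: list[float]) -> bool:
    # All the heavy lifting happens on the device, i.e. at the network edge.
    matched = similarity(extract_features(photo), enrolled) >= THRESHOLD
    # Only this tiny, non-sensitive result would cross the network;
    # the photo itself never leaves the device.
    print("would send to cloud:", json.dumps({"event": "unlock", "matched": matched}))
    return matched

enrolled = extract_features(b"owner-face-reference")      # captured once, at setup
print(unlock_attempt(b"owner-face-reference", enrolled))  # True: same face
```

The design point is in the last two lines: the only thing worth transmitting is a few bytes of result, not the megabytes of raw imagery.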

Let’s take a deeper look…

Internet of Things (IoT)

It seems every device we come into contact with these days is becoming smarter, better and faster. Fridges and toasters can now connect to the internet and perform functions far more elaborate than simply keeping your perishables fresh and your bread toasted. IoT devices connect to the internet either to receive data from a centralised cloud or to deliver information back to it. Some of these devices create vast amounts of data over their lifespan, which adds to global bandwidth congestion. This has created the need to move data and computational processing away from the centralised cloud and closer to the device itself, to reduce how heavily these devices lean on bandwidth.
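As a hedged illustration of that pattern, the sketch below summarises raw sensor readings locally so that only a compact digest travels upstream. The sampling rate and the shape of the summary are assumptions made up for this example.

```python
import statistics

def summarise(readings: list[float]) -> dict:
    # Reduce a window of raw samples to a compact summary at the edge.
    return {
        "count": len(readings),
        "mean": round(statistics.mean(readings), 2),
        "min": min(readings),
        "max": max(readings),
    }

# A smart fridge sampling its temperature once per second produces 3,600
# raw readings per hour; the edge device forwards one small summary instead.
raw_window = [4.0 + 0.1 * (i % 5) for i in range(3600)]  # simulated samples
print(summarise(raw_window))  # this dict is all that needs to reach the cloud
```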

5G

The world has been looking forward to the arrival of 5G wireless, and the wait is almost over. 5G’s lightning-fast networking technology promises to let edge computing systems speed up the delivery of real-time applications such as video processing and analytics, robotics, artificial intelligence and even self-driving cars. 5G should allow edge computing to perform faster and with far fewer latency issues, as increasingly sophisticated data processing can be handled at the network edge rather than in the cloud. As technologies like AI, facial recognition, machine learning and self-driving cars mature, ever more sophisticated tech will be relying on network and processing capacity. The aim is to move most internet-ready devices onto the network edge and keep global bandwidth reserved for larger processes.

Cost-effectiveness

Every business has a core goal: make money while reducing costs. Bandwidth costs can be debilitating for companies using the cloud for large-scale applications. Edge computing allows companies to save money by processing data locally, which reduces the amount of data that needs to be handled in a centralised or cloud-based location. This in turn decreases the need for server resources and the associated costs that come with them. The sketch below puts rough numbers on this.
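Every figure here (device count, data volumes, price per gigabyte) is invented for illustration; the point is the ratio between the two totals, not the numbers themselves.

```python
# Back-of-the-envelope saving; every figure below is assumed for illustration.
devices = 10_000
raw_mb_per_day = 500       # per device, if every byte is shipped to the cloud
summary_mb_per_day = 5     # per device, if the edge forwards only summaries
cost_per_gb = 0.09         # assumed transfer/processing price in USD

def monthly_cost(mb_per_day: float) -> float:
    return devices * mb_per_day * 30 / 1024 * cost_per_gb

print(f"cloud-only: ${monthly_cost(raw_mb_per_day):,.0f} per month")
print(f"edge-first: ${monthly_cost(summary_mb_per_day):,.0f} per month")
```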

Why do we need edge computing?

The main takeaway is that edge computing helps reduce bandwidth use and minimises server resources, both of which save time and money. The production and use of IoT smart devices will increase exponentially in the coming years, and to support these devices, a significant amount of computation will have to move to the network edge.

We also need to reserve bandwidth for future technologies that will be too advanced to operate at the network edge and will have to rely, at least initially, on the vast resources of the cloud. Smaller processes, such as the requirements of smart devices and the IoT, can occupy the edge computing space for now and leave a few lanes of the bandwidth highway free-flowing for the technical requirements of the future.

Living on the edge

For all its intended and obvious benefits, edge computing has its own set of drawbacks. When it comes to privacy and security, the nature of edge computing opens it up to the threat of cyber attacks from malicious actors. Data at the edge can cause its fair share of trouble, as it may be handled by numerous devices that aren’t as secure as a centralised or cloud-based system. IoT manufacturers need to be aware of the security concerns around their devices and ensure that each one can store and transmit data safely, including through encryption and proper access control.
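As one small, hedged example of what ‘encryption at the edge’ can mean in practice, the sketch below encrypts a sensor reading on the device before it is transmitted. It uses the third-party Python cryptography package; the key handling is deliberately simplified, since real devices would receive keys through a secure provisioning process.

```python
# pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # simplified: real devices get keys provisioned securely
cipher = Fernet(key)

reading = b'{"device": "sensor-42", "temp_c": 4.2}'  # hypothetical payload
token = cipher.encrypt(reading)           # ciphertext, safe to send off-device
assert cipher.decrypt(token) == reading   # only key holders can recover it
print(token)
```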

More localised hardware in each IoT device is the natural consequence of improved edge computing. For edge computing to keep up with the demands of new technology (like video processing), smart devices will need more sophisticated hardware to take advantage of its benefits.

The growth of IoT devices and real-time applications requiring local processing and storage will continue to drive edge computing well into the future. And because we live in an era where we require our technology to be ever stronger, better and faster, this forward-thinking mindset has forced us to find new ways to define efficiency. Edge computing is a surefire way to keep up with the speed of our innovations.

Dmitry Ermakov

Dmitry is our Head of Engineering. He's been with WeAreBrain since the inception of the company, bringing solid experience in software development as well as project management.
