Edge computing, the Internet of Things (IoT), and 5G are reaching a saturation point in industry conversations. With Kubernetes poised as the technology that powers the edge, confusion over what exactly the edge is has become even more pronounced. There is little doubt that Kubernetes is essential to the rollout of future 5G and edge applications, but its role differs depending on which edge you are applying it to. This post discusses how the edge is defined in different contexts and what its different parts are today. We’ll then touch on how Kubernetes and other open source technologies support edge environments.
Defining edge computing
As the technological environment has evolved, the edge has come to mean different things depending on the context. In the early days of the web, before 5G and the Internet of Things (IoT), the edge meant moving computing resources closer to the end-user to speed up page downloads or make online multiplayer gaming possible. Of course, there were many other use cases, but reducing latency on the public web was the main driver for this network configuration and topology.
As we head into the 4th industrial revolution, or Industry 4.0, latency is still the most crucial factor. But now, there are many different types of things connected to the Internet: self-driving cars; single-function industrial devices like sensors, controllers and microcontrollers; home automation systems; wearable tech; and many other items yet to be imagined. In other words, there is a diverse and growing spectrum of connected devices on the IoT that all require compute resources close to where they operate. In all of these situations, for the edge to be effective at making real-time decisions that can be fed back into the device, compute resources should be as close as practical to the client or ‘thing’ that relies on them.
Edge Computing Infrastructure
In many ways, edge computing extends traditional distributed cloud computing with specialized data centers situated closer to the devices. The diagram below shows a typical topology for an edge computing data center. The backbone of an edge data center is the public cloud or private 5G cloud. The cloud feeds a distributed network of edge nodes that provide service delivery, storage, caching, and compute offload management for IoT devices.
The edge is a location, not a thing
The diagram above illustrates a typical high-level overview of what an edge topology looks like. However, the black boxes on the chart that represent the Internet of Things don’t reveal the real diversity of edge devices.
In reality, there are several different edge locations, running from centralized data centers at the Internet Edge down to the User Edge on the other side of the last mile network. One of the main differentiators between these tiers is how much control each has over its network and infrastructure. For example, public data centers typically rely on shared infrastructure, which increases latency, whereas devices on the User Edge rely more on customized on-premises data centers with their own private networks that can cut latency to milliseconds.
Three Tiers of the Edge
According to the whitepaper, Sharpening the Edge: Overview of the LF Edge Taxonomy and Framework, these are the main edge locations:
#1. Centralized Data Centers
Centralized data centers contain near-limitless cloud-based compute resources that are not available to devices at the User Edge. A centralized cloud-based data center can manage and track the devices at the User Edge, but it is limited by its physical location and by shared resources that can increase latency.
#2. Service Provider Edge
The service provider edge delivers infrastructure as services over mobile or global fixed networks. Like public clouds, services at this tier include computing resources and networks. However, a significant difference is the ability of telcos to offer private and secure networks.
Through a combination of mobile and fixed private networks, computing resources at the Service Provider Edge are brought much closer to where end-users and businesses need them. As a result, Communication Service Providers (CSPs), with their well-positioned global hybrid mobile/bare-metal networks, can offer innovative edge applications and new services and business models across mobile networks to both end users and enterprises.
#3. User Edge
On the far side of the last mile network is the User Edge, which is made up of a very diverse set of devices connected to the Internet of Things. The LF Edge Foundation splits the User Edge into on-premises data centers, the smart device edge and resource-constrained devices.
On-premises data centers are generally needed at the device edge to conserve network bandwidth and reduce the need to cross the last mile network. Other reasons for putting more compute resources on the device side of the last mile are data storage and managing security and privacy concerns. In general, devices at the User Edge, including those at the smart device edge, are fixed-function devices with limited resources and low specifications.
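To make the bandwidth argument concrete, here is a minimal sketch of edge-side aggregation: raw sensor samples stay in on-premises storage, and only a compact summary crosses the last mile network. The function name, payload fields and sample data are all hypothetical, chosen for illustration.

```python
import json
import statistics
from typing import List

def aggregate_readings(readings: List[float]) -> dict:
    # Summarize a window of raw samples into a compact payload.
    # Only this summary, not the hundreds of raw readings behind it,
    # needs to cross the last mile network to the cloud.
    return {
        "count": len(readings),
        "min": round(min(readings), 2),
        "max": round(max(readings), 2),
        "mean": round(statistics.mean(readings), 2),
    }

# Simulate 600 temperature samples collected on-premises over a 10-minute window.
raw = [20.0 + (i % 10) * 0.1 for i in range(600)]
summary = aggregate_readings(raw)
print(json.dumps(summary))
```

The same pattern scales from a single gateway script to a fleet: the upstream payload stays a few dozen bytes regardless of how fast the devices sample.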
Kubernetes at the edge
Now that you have an idea of today’s various definitions of the edge, you can see that Kubernetes has a role to play on both sides of it: at the infrastructure edge as well as on-premises beyond the last mile. With its ability to massively scale distributed applications, Kubernetes is best suited for managing and analyzing device data and other telemetry for monitoring, observability and real-time decision making.
For most edge applications to be useful for real-time decision making and analysis, device data first needs to be in a standard format that is easily available to a cluster; second, it needs to be acted upon and fed back to the device at near real-time speeds. In most instances, depending on the application you run on it, Kubernetes fits this scenario.
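The "standard format" step above can be sketched in a few lines: map each vendor-specific payload onto one shared schema before it reaches the cluster, so no consumer needs per-vendor logic. The schema, field names and sample payloads below are illustrative assumptions, not an actual standard.

```python
import json
import time

def normalize(device_id: str, vendor_payload: dict) -> dict:
    # Map a vendor-specific reading onto one shared schema so any
    # consumer in the cluster can process it without per-vendor logic.
    # Field names here are hypothetical, for illustration only.
    return {
        "device_id": device_id,
        "metric": vendor_payload.get("sensor") or vendor_payload.get("type"),
        "value": float(vendor_payload.get("val", vendor_payload.get("reading", 0))),
        "unit": vendor_payload.get("unit", "unknown"),
        "ts": vendor_payload.get("ts", int(time.time())),
    }

# Two devices reporting the same quantity in different vendor formats.
a = normalize("therm-01", {"sensor": "temperature", "val": "21.5", "unit": "C", "ts": 1700000000})
b = normalize("therm-02", {"type": "temperature", "reading": 21.7, "unit": "C", "ts": 1700000003})
print(json.dumps([a, b]))
```

Once readings share a schema like this, a service running in the cluster can aggregate, alert on, or feed decisions back to devices without caring which vendor produced the data.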
Other typical use cases for Kubernetes are Infrastructure as a Service platforms that provide cluster management services for app development, machine learning and even 5G network slicing for CSPs and enterprises.
Devices at the smart edge and IoT
However, one place Kubernetes is generally not running today is on the devices themselves, particularly at the User Edge end of the spectrum. Although there are some proposals for cluster-based device management that recognize fixed-function devices at the smart device edge, most of these Linux-enabled devices do not have enough resources to run Kubernetes. Microsoft’s Akri project, for example, is working on ‘leaf device’ management with Kubernetes.
How do you modernize and maintain IoT devices at the smart edge?
As mentioned, most devices at the User Edge are limited by their architecture and resource capacity. Even though many are Linux-enabled, they are not easily modernized and updated without completely replacing the hardware or installing a whole new operating system, which comes at a significant cost.
The Pantacor platform modernizes smart edge devices using portable open source technology like containers, turning any Linux-enabled device, from home WiFi routers to industrial control systems, into a software-defined connected IoT device without replacing the hardware or the OS. The critical building block is Pantavisor Linux, an open source container-based init system that modularizes embedded systems so you can control a fleet’s firmware and software lifecycle. Simple APIs built into Pantacor Hub enable device data and telemetry integration with the tools of your choice: Kubernetes, Prometheus and other open source solutions, as well as off-the-shelf IoT systems from Azure, Google and AWS, or even your own in-house solutions.
Contact email@example.com for a demo and more information.
As we move at speed toward the 4th industrial revolution, it’s important to understand that the Internet of Things is not homogeneous; it consists of a great variety of architectures, distributions and hardware with varying capabilities. Managing millions of devices will require flexible solutions derived from open source software. And although Kubernetes is important for IoT device infrastructure, edge applications, data management and deployments, running clusters directly on IoT devices is not a likely role for it in the foreseeable future.