Comment: Unpacking the benefits of Edge AI

It's crucial to view edge AI as a complement to, rather than a replacement for, existing cloud-based AI systems, says Jack Ferrari, product manager, Edge AI, MathWorks.


The Internet of Things (IoT) has ushered in an era of edge devices capable of collecting, processing, and analysing data on-site, often without the need for constant internet connectivity.

Historically, organisations have sent data collected at the edge to cloud servers for processing by machine learning models. However, advancements in edge device processing power and machine learning model compression techniques are lessening the reliance on cloud computing.

As a result, edge devices are now equipped to handle complex AI tasks locally that were once reserved solely for cloud processing. With the number of internet-connected devices projected to reach 29 billion by 2030, demand for edge AI solutions is surging.

What’s more, the edge AI market is expected to grow almost sevenfold by the end of the decade, from $15.6 billion in 2022 to $107.4 billion in 2029, and technological advances are continually making edge AI more efficient to implement.

Four Key Drivers of Edge AI Adoption

Microcontrollers (MCUs) and Digital Signal Processors (DSPs) – Today's vector processors, a category that includes many MCUs and DSPs, have grown in power and customization, allowing them to meet the demands of AI processing. Their enhanced capabilities and their ability to run AI algorithms efficiently have made these components the backbone of edge AI hardware.

Graphics Processing Units (GPUs) – Originally the workhorses of video games and visual content creation, GPUs have been repurposed to accelerate AI model inference. Unlike traditional CPUs, which process tasks largely sequentially, GPUs contain hundreds or thousands of smaller cores designed for parallel processing. This architectural advantage allows them to handle many operations simultaneously, making them exceptionally well-suited to the matrix and vector computations that are fundamental to machine learning and deep learning algorithms.
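As a rough illustration of that parallelism (assuming PyTorch is installed and, optionally, a CUDA-capable GPU is present; the matrix size is arbitrary), the sketch below times the same large matrix multiplication on the CPU and on the GPU:

```python
import time
import torch

# Illustrative sketch: time one large matrix multiplication on CPU vs. GPU.
# The 4096 x 4096 size is arbitrary, chosen only to make the contrast visible.
N = 4096
a = torch.randn(N, N)
b = torch.randn(N, N)

t0 = time.perf_counter()
torch.matmul(a, b)                     # CPU execution
cpu_s = time.perf_counter() - t0

if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()  # copy operands to GPU memory
    torch.cuda.synchronize()           # wait for the copies to finish
    t0 = time.perf_counter()
    torch.matmul(a_gpu, b_gpu)         # thousands of cores work in parallel
    torch.cuda.synchronize()           # wait for the kernel to complete
    print(f"CPU: {cpu_s:.3f}s, GPU: {time.perf_counter() - t0:.3f}s")
else:
    print(f"CPU: {cpu_s:.3f}s (no CUDA device available)")
```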

AI Accelerator ASICs – While GPUs outperform CPUs on AI-related tasks, custom application-specific integrated circuits (ASICs) tailor-made for AI workloads offer even greater speed and efficiency. Neural Processing Units (NPUs), one form of ASIC, are designed specifically to run AI models, making them far better suited to inference than a typical CPU.

Model Compression Techniques – Edge devices often face limitations in memory and processing capacity. To address this, AI models can be compressed using techniques that preserve performance while reducing size and complexity (two of these are sketched in code after this list):

  • Pruning – Removes redundant parameters from AI models, streamlining them for faster operation and lower memory usage without significantly impacting accuracy.
  • Quantization – Reduces the processing and memory overheads of inference by representing the weights and activations of a model with datatypes of a lower precision than the original trained model.
  • Knowledge distillation – Transfers the knowledge of a large, complex model to a smaller, more compact model that can mimic the original's performance.
  • Low-rank factorisation – Compresses high-dimensional data by factoring it into lower-dimensional representations to simplify complex neural network models.  
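To make two of these techniques concrete, here is a minimal NumPy sketch of magnitude pruning and symmetric int8 quantization applied to a single weight matrix; the layer shape, pruning ratio, and quantization scheme are illustrative assumptions, not a prescription:

```python
import numpy as np

# Illustrative sketch: apply two compression ideas to one weight matrix.
rng = np.random.default_rng(seed=0)
weights = rng.normal(size=(256, 256)).astype(np.float32)

# 1) Magnitude pruning: zero out the smallest-magnitude weights; storing
#    the survivors sparsely reduces memory use and speeds up inference.
threshold = np.quantile(np.abs(weights), 0.5)     # prune the bottom 50%
pruned = np.where(np.abs(weights) < threshold, 0.0, weights)
print(f"sparsity after pruning: {np.mean(pruned == 0):.0%}")

# 2) Symmetric int8 quantization: map float32 weights to 8-bit integers
#    via q = round(w / scale), shrinking weight storage by roughly 4x.
scale = np.abs(pruned).max() / 127.0
quantized = np.clip(np.round(pruned / scale), -127, 127).astype(np.int8)

# Dequantize to inspect how much precision the smaller datatype costs.
restored = quantized.astype(np.float32) * scale
print(f"max reconstruction error: {np.abs(restored - pruned).max():.4f}")
```

In practice, frameworks apply these steps during or after training and fine-tune the compressed model to recover any lost accuracy.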

Reducing the Reliance on Cloud-Based Computing

Edge AI will not eliminate cloud-based computing, but the need to handle large amounts of data makes one thing clear: engineers cannot afford to overlook the material benefits of edge AI.

The capacity for real-time processing and decision-making stands out among the advantages of edge AI. By minimizing latency and operating closer to the data source, edge AI systems can make swift decisions, which is crucial in time-sensitive applications. Additionally, by processing data locally, these systems reduce the amount of data that needs to be transmitted to the cloud, thereby saving on bandwidth and lowering costs associated with data transfer and cloud services. Another benefit of edge AI is its ability to function with intermittent or limited internet connectivity, which enhances its efficiency and reliability, particularly in remote or mobile environments.

Industry applications: Automotive and Medical Devices

An automobile is itself an edge device: it collects and processes data locally, reducing the amount of data that needs to be sent to the cloud. Because a car's Electronic Control Unit (ECU) is self-contained, data processing must be performed locally and safety-critical decisions made in real time. Machine learning models running on an ECU help ensure passenger safety by using real-time data to adapt to road conditions.

In the medical device field, edge AI can enable faster decision-making, independent of network connections. Real-time data analysis and anomaly detection enable more timely intervention and ultimately reduce the risks associated with life-threatening and long-term health conditions.
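As one illustration of the kind of on-device analysis involved, the sketch below flags anomalous readings in a vital-sign stream using a rolling z-score; the window size, threshold, and heart-rate values are assumptions made for demonstration, not clinical parameters:

```python
import random
import statistics
from collections import deque

# Illustrative sketch: flag anomalous sensor readings on-device with a
# rolling z-score. Window and threshold are demo values, not clinical ones.
WINDOW, THRESHOLD = 30, 3.0
history = deque(maxlen=WINDOW)

def is_anomalous(reading: float) -> bool:
    """Flag readings that deviate sharply from the recent rolling window."""
    anomalous = False
    if len(history) == WINDOW:
        mean = statistics.fmean(history)
        stdev = statistics.stdev(history)
        anomalous = stdev > 0 and abs(reading - mean) / stdev > THRESHOLD
    history.append(reading)  # sketch simplification: anomalies join history
    return anomalous

# Example: a noisy but steady ~60 bpm stream followed by a sudden spike.
random.seed(1)
for bpm in [random.gauss(60, 1) for _ in range(40)] + [120.0]:
    if is_anomalous(bpm):
        print(f"anomaly detected: {bpm:.1f} bpm")  # fires on the spike
```

Because the whole check runs locally, the device can raise an alert even when the network is down.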

Medical edge devices can also communicate with cloud applications for data logging purposes. In this way, cloud-based computing complements, rather than detracts from, inference at the edge, creating a more powerful network of devices.
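A minimal sketch of that split might look like the following, where the decision is made on-device and a summary is uploaded on a best-effort basis; the endpoint URL and payload fields are hypothetical:

```python
import json
import urllib.request

# Hypothetical cloud logging endpoint; the device never depends on it.
CLOUD_LOG_URL = "https://example.com/api/device-logs"

def log_to_cloud(device_id: str, result: dict) -> bool:
    """Best-effort upload of an inference summary; failure is tolerated."""
    payload = json.dumps({"device": device_id, "result": result}).encode()
    request = urllib.request.Request(
        CLOUD_LOG_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(request, timeout=2):
            return True
    except OSError:
        return False  # no connectivity: the edge decision already happened

# The safety-relevant decision is made locally, before any upload attempt.
decision = {"anomaly": True, "confidence": 0.97}  # produced on-device
if log_to_cloud("monitor-01", decision):
    print("summary logged to cloud")
else:
    print("upload failed; local decision unaffected")
```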

Edge AI is complementing traditional AI, not replacing it

As engineers build on cloud-based inference and AI-enabling technologies continue to evolve, integrating AI into edge devices is becoming a key differentiator for certain products. It's crucial to view edge AI as a complement to, rather than a replacement for, existing cloud-based AI systems. This dual approach expands the toolkit available to engineers, enabling them to leverage the strengths of both edge and cloud AI across various industries. 

Jack Ferrari, product manager, Edge AI, MathWorks