MCU AI/ML - Bridging the Gap Between Intelligence and Embedded Systems

2024-11-09 | SILICON LABS Official Website
Tags: MCU, microcontrollers, Wireless MCUs, EFR32xG24

Artificial Intelligence (AI) and Machine Learning (ML) are key technologies that enable systems to learn from data, make inferences, and enhance their performance over time. These technologies have often been used in large-scale data centers and powerful GPUs, but there's an increasing demand to deploy them on resource-limited devices, such as microcontrollers (MCUs).


In this blog, we will examine the intersection of MCU technology and AI/ML, and how it affects low-power edge devices. We'll discuss the difficulties, innovations, and practical use cases of running AI on battery-operated MCUs.


AI/ML and MCUs: A Brief Overview

AI creates computer systems that can perform human-like tasks, such as understanding language, finding patterns, and making decisions. Machine Learning, a subset of AI, uses algorithms that let computers learn from data and improve over time. ML models can find patterns, classify objects, and predict outcomes from examples.


MCUs play an important role in making AI and ML possible on edge devices.


Some use cases for MCU-based AI/ML at the edge include:

  • Keyword spotting: Recognizing specific words or phrases (e.g., voice commands) without the need for cloud connectivity

  • Sensor fusion: Combining data from multiple sensors to make more informed decisions than single-sensor solutions allow

  • Anomaly detection: Detecting outliers or abnormal patterns in sensor data that may indicate faults, errors, or threats, for predictive maintenance or quality control (a minimal sketch follows this list)

  • Object detection: Identifying and locating objects of interest (e.g., faces, pedestrians, vehicles) in images or videos captured by cameras or other sensors

  • Gesture recognition: Interpreting human gestures (e.g., hand movements, facial expressions, body poses) in images or videos captured by cameras or other sensors to improve human-computer interaction
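
As a concrete illustration of the anomaly detection use case above, the sketch below flags outliers in a stream of sensor readings using a rolling mean and a standard-deviation threshold. It is a minimal Python example; the window size, threshold, and synthetic signal are illustrative assumptions rather than anything from a Silicon Labs library, and on an MCU the equivalent logic would typically be written in C or replaced by a small trained model.

```python
# Minimal anomaly-detection sketch: flag readings that deviate strongly
# from a rolling baseline. Window size and threshold are illustrative
# assumptions, not values from any Silicon Labs API.
from collections import deque
import math


def make_detector(window_size=64, threshold_sigma=4.0):
    """Return a callable that flags anomalous sensor readings."""
    window = deque(maxlen=window_size)

    def is_anomaly(reading: float) -> bool:
        if len(window) < window_size:
            window.append(reading)        # still building the baseline
            return False
        mean = sum(window) / len(window)
        var = sum((x - mean) ** 2 for x in window) / len(window)
        std = math.sqrt(var) or 1e-9      # guard against a flat signal
        window.append(reading)
        return abs(reading - mean) > threshold_sigma * std

    return is_anomaly


# Example usage with a synthetic vibration signal and one injected spike.
detect = make_detector()
for i in range(200):
    sample = math.sin(i * 0.1) + (5.0 if i == 150 else 0.0)
    if detect(sample):
        print(f"anomaly at sample {i}: {sample:.2f}")
```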


Challenges of AI/ML on MCUs

Deep learning models, particularly deep neural networks (DNNs), have become indispensable for complex tasks like computer vision and natural language processing, but their computational demands are substantial. Such resource-intensive models are impractical for everyday devices, especially those powered by low-energy MCUs found in edge devices. As DNNs become more sophisticated, their size balloons, making them incompatible with the limited computing resources available on MCUs.


What is TinyML?

TinyML refers to machine learning models and techniques optimized for deployment on resource-constrained devices that operate at the edge, where data is generated and inferencing is performed locally. Typically running on low-power MCUs, TinyML systems perform inferences on data collected at the node. Inferencing is the moment of truth for an AI model, testing how well it can apply the knowledge learned during training. Local inferencing enables MCUs to execute AI models directly, making real-time decisions without relying on external servers or cloud services.
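
To make "local inferencing" concrete, the sketch below loads a quantized TensorFlow Lite model and runs it entirely on locally collected data, with no network round trip. It uses the Python tf.lite.Interpreter for readability; on an EFR32-class MCU the same steps (load the model, allocate tensors, fill the input, invoke, read the output) are carried out in C/C++ by TensorFlow Lite for Microcontrollers. The model file name, input type, and feature buffer are placeholders assumed for illustration.

```python
# Sketch of local inferencing: run a quantized .tflite model on-device
# with no cloud dependency. The model path and int8 input are
# hypothetical placeholders for illustration.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="keyword_spotting_int8.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# Pretend this buffer came from the device's microphone front end.
features = np.random.randint(-128, 128,
                             size=input_details["shape"],
                             dtype=np.int8)

interpreter.set_tensor(input_details["index"], features)
interpreter.invoke()                      # inference happens locally
scores = interpreter.get_tensor(output_details["index"])
print("predicted class:", int(np.argmax(scores)))
```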


Local inferencing in the context of AI/ML is crucial for several reasons:

Resource Constraints: Many embedded devices, especially those running on battery power, have limited memory, processing capability, and energy budgets. Traditional general-purpose microcontrollers struggle to perform AI tasks efficiently due to their limited processing power and memory, constrained energy resources, or lack of on-chip acceleration. Local inferencing allows these resource-constrained devices to execute AI workloads without drawing excessive power, improving efficiency and performance in areas such as the following:

User Experience Enhancement: Consider an AI-enabled electronic cat flap. Trained to distinguish cats from other objects, it opens the door only for the authorized cat. Here, local inferencing improves the user experience by ensuring safety and convenience without the need for additional hardware like RFID collars.

Efficiency and Performance: GPUs are commonly used for large-scale AI deployments because they can perform many processes in parallel, essential for effective AI training. However, GPUs are costly and exceed power budgets for small-scale embedded applications. AI-optimized MCUs, with specialized architectures, strike a balance by delivering better performance and power efficiency for AI workloads. Silicon Labs includes a matrix vector processor as part of its AI/ML enablement. This specialized peripheral is designed to enhance the performance of AI/ML algorithms or vector math operations to shorten inferencing time and perform these critical tasks at lower power.
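
The kind of work such a matrix vector processor offloads can be pictured in a few lines: a quantized fully connected layer is, at its core, an integer matrix-vector multiply followed by a rescale. The shapes, scales, and data below are arbitrary examples (zero points are omitted for brevity), not Silicon Labs MVP specifics; the point is that this inner loop dominates inference time, which is why dedicated hardware for it pays off.

```python
# Illustrative int8 matrix-vector multiply with per-tensor rescaling,
# the core operation of a quantized dense layer. Shapes and scales are
# arbitrary examples; zero points are omitted for brevity.
import numpy as np

weights = np.random.randint(-128, 128, size=(16, 64), dtype=np.int8)
activations = np.random.randint(-128, 128, size=(64,), dtype=np.int8)
bias = np.random.randint(-1000, 1000, size=(16,), dtype=np.int32)

w_scale, a_scale, out_scale = 0.02, 0.05, 0.1   # example quantization scales

# Accumulate in int32, as quantized inference kernels typically do.
acc = weights.astype(np.int32) @ activations.astype(np.int32) + bias

# Rescale the accumulator back into the output's int8 range.
out = np.clip(np.round(acc * (w_scale * a_scale / out_scale)), -128, 127)
print(out.astype(np.int8))
```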


In summary, local inferencing at the edge enables real-time decision-making, reduces latency, enhances security, brings AI capabilities to battery-operated devices, and improves user experiences, all while respecting resource limitations, making it a critical component of modern computing systems.


Silicon Labs Pioneering AI/ML Solutions for the Edge

In the dynamic landscape of technology, Silicon Labs stands out as a trailblazer in bringing AI and ML to the edge. Our commitment to innovation has led to groundbreaking solutions that empower resource-constrained devices, such as MCUs, with intelligent capabilities.


Devices Optimized for TinyML

The EFR32xG24, EFR32xG28, and EFR32xG26 families of MCUs and Wireless MCUs combine a 78 MHz ARM Cortex®-M33 processor, high-performance radios, precision analog performance, and an AI/ML hardware accelerator, giving developers a flexible platform for deploying edge intelligence. Supporting a broad range of wireless IoT protocols, these SoCs incorporate the highest security with the best RF performance/energy-efficiency ratio on the market.


Today’s developers are often forced to pay steep performance or energy penalties for deploying AI/ML at the edge. The xG24, xG28, and xG26 families alleviate those penalties: they are the first ultra-low-power devices with dedicated AI/ML accelerators built in, lowering overall design complexity. This specialized hardware handles complex calculations, delivering up to 8x faster inferencing and up to a 6x improvement in energy efficiency compared to a firmware-only approach, with even greater gains compared to cloud-based solutions. The hardware accelerator offloads the burden of inferencing from the main application MCU, leaving more clock cycles available to service your application.


Tools for Simplifying AI/ML Development

The tools to build, test, and deploy the algorithms needed for machine learning are just as important as the MCUs running those algorithms. By partnering with leaders in the TinyML space such as TensorFlow, SensiML, and Edge Impulse, Silicon Labs provides options for beginners and experts alike. Using this new AI/ML toolchain with Silicon Labs’ Simplicity Studio, developers can create applications that draw information from various connected devices to make intelligent machine learning-driven decisions.


Silicon Labs provides a variety of tools and resources to support machine learning (ML) applications. Here are some of them:

Machine Learning Applications: The development platform supports embedded machine learning (TinyML) model inference, backed by the TensorFlow Lite for Microcontrollers (TFLM) framework. The repository contains a collection of embedded applications that leverage ML.

Machine Learning Toolkit (MLTK): This is a Python package with command-line utilities and scripts to aid the development of machine learning models for Silicon Labs' embedded platforms. It includes features for executing ML operations from a command-line interface or a Python script, determining how efficiently an ML model will execute on an embedded platform, and training an ML model using Google TensorFlow.
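
To give a feel for the last of those features, the sketch below trains a tiny Keras classifier with TensorFlow and converts it to a fully int8-quantized .tflite file, the kind of artifact TFLM consumes on the device. The random "sensor" dataset, layer sizes, and file name are placeholders for illustration, not an MLTK API.

```python
# Sketch: train a tiny model with TensorFlow/Keras and export a fully
# int8-quantized .tflite file suitable for TFLM on an MCU. The random
# dataset and layer sizes are placeholders, not an MLTK workflow.
import numpy as np
import tensorflow as tf

# Fake dataset: 1000 windows of 64 sensor samples, 3 classes.
x = np.random.rand(1000, 64).astype(np.float32)
y = np.random.randint(0, 3, size=(1000,))

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x, y, epochs=3, batch_size=32, verbose=0)


def representative_data():
    # Calibration samples let the converter choose quantization ranges.
    for sample in x[:100]:
        yield [sample.reshape(1, 64)]


converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("sensor_classifier_int8.tflite", "wb") as f:
    f.write(converter.convert())
```

The resulting .tflite file is typically embedded in firmware as a C array (for example with xxd -i) and then loaded by the TFLM interpreter on the device.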


Silicon Labs provides a TinyML solution as part of the Machine Learning Toolkit (MLTK). The toolkit includes several models that are used by the TinyML benchmark. These models are available on the Silicon Labs GitHub and include anomaly detection, image classification, and keyword spotting.


AI/ML powered edge devices are opening new horizons for how we engage with our surroundings, and they will soon transform our lives in amazing ways. Silicon Labs is at the forefront of TinyML innovation, making it possible to bring these capabilities to low power, connected edge devices like never before.


Learn more about how our EFR and EFM MCU platform is optimized for AI/ML at the Edge in our recent Wireless Compute Tech Talk session, An Optimized Platform for AI/ML at the Edge.

