
Commit 8dc5e7e: Merge pull request #11 from ReinhardKeil/main ("Added EdgeAI section")
2 parents: 5b945d0 + 7da3140

7 files changed: 63 additions & 1 deletion

File: profile/EdgeAI.md
Lines changed: 60 additions & 0 deletions
[**Arm Examples**](https://github.com/Arm-Examples/) » **Edge AI**

# Edge AI (Machine Learning)

Arm offers comprehensive tool and software support for Edge AI development targeting the [Cortex-M processor family](https://www.arm.com/products/silicon-ip-cpu?families=cortex-m&showall=true) and [Ethos-U NPU series](https://www.arm.com/products/silicon-ip-cpu?families=ethos%20npus). Simple machine learning algorithms run even on an ultra-low-power Cortex-M0+ device, while the Cortex-M52/55/85 processors with the [Helium vector extension](https://www.arm.com/technologies/helium) are optimized for neural network workloads. Combining a Cortex-M processor with an Ethos-U NPU delivers up to 480 times performance uplift for ML workloads while maintaining minimal power consumption.

## ML Frameworks for Cortex-M and Ethos-U

The Arm software and tool ecosystem integrates seamlessly with popular ML frameworks, including LiteRT (formerly TensorFlow Lite) and ExecuTorch (PyTorch-based).
[![LiteRT](LiteRT.png "LiteRT (formerly TensorFlow Lite)")](https://www.keil.arm.com/packs/tensorflow-lite-micro-tensorflow)
**[LiteRT (TensorFlow Lite Runtime)](https://www.keil.arm.com/packs/tensorflow-lite-micro-tensorflow)** is a production-grade inference runtime optimized for Cortex-M microcontrollers, optionally with Ethos-U acceleration.

- Proven for Cortex-M only and Cortex-M + Ethos-U
- Optimized kernels for constrained memory
- Stable operator coverage for classic ML models
- Strong ecosystem, ready today

Explore the [LiteRT software pack for Arm Cortex-M](https://github.com/MDK-Packs/tensorflow-pack) with ready-to-use workflow templates and examples.
[![PyTorch](PyTorch.png "ExecuTorch (PyTorch-based)")](https://github.com/Arm-Examples/CMSIS-Executorch)
**ExecuTorch (Lightweight PyTorch Runtime)** brings PyTorch, today the dominant framework in ML research, to embedded targets, making new ML model development and sharing easier.

- Strong for LLMs, vision, multimodal, generative AI
- ML developer friendly with modern tooling
- Rapidly growing ecosystem momentum
- Now maturing for Cortex-M + Ethos-U targets

Get started with [CMSIS-Executorch](https://github.com/Arm-Examples/CMSIS-Executorch) for PyTorch-based AI models on Cortex-M and Ethos-U targets.

## ML Runtime System
ML models from ecosystem partners or open-source model zoos such as Hugging Face are quantized and optimized for embedded deployment. CMSIS-NN executes these optimized models on Cortex-M processors. For Ethos-U NPU targets, Vela converts model operations for NPU acceleration, while operations that cannot be converted continue to run on Cortex-M via CMSIS-NN.
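The quantization step above can be illustrated with a minimal, self-contained sketch in plain Python (not the actual converter or CMSIS-NN code): int8 quantization maps real values to 8-bit integers through an affine relation real ≈ scale × (q − zero_point), derived from the tensor's value range.

```python
def quantization_params(min_val, max_val, qmin=-128, qmax=127):
    """Derive affine int8 quantization parameters from a value range."""
    min_val = min(min_val, 0.0)  # range must include 0.0 so it maps exactly
    max_val = max(max_val, 0.0)
    scale = (max_val - min_val) / (qmax - qmin)
    zero_point = int(round(qmin - min_val / scale))
    return scale, max(qmin, min(qmax, zero_point))

def quantize(x, scale, zero_point, qmin=-128, qmax=127):
    """Map a real value to int8, saturating at the type limits."""
    q = int(round(x / scale)) + zero_point
    return max(qmin, min(qmax, q))

def dequantize(q, scale, zero_point):
    """Recover the approximate real value from its int8 code."""
    return scale * (q - zero_point)

# A tensor with values in [-1.0, 3.0] round-trips within one quantization step.
scale, zp = quantization_params(-1.0, 3.0)
approx = dequantize(quantize(1.5, scale, zp), scale, zp)
```

The round-trip error is bounded by one quantization step (the scale), which is why 8-bit inference preserves accuracy well for most classic ML models; note this sketch uses Python's built-in rounding, whereas production converters pin down the exact rounding mode.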
![ML Model Runtime System](ML_Model_Runtime.png "ML Model Runtime System")
Using these workflows, developers can deploy trained models from PyTorch, TensorFlow, and other frameworks onto Arm targets with optimal performance and energy efficiency, enabling intelligent capabilities in IoT devices, wearables, industrial sensors, and other edge computing applications.

The [Vela compiler](https://pypi.org/project/ethos-u-vela/) provides detailed optimization output, diagnostics, and performance timing analysis for Ethos-U NPU targets.
## Embedded Development Workflow

The embedded development workflow for Edge AI applications starts with ML model training, which executes on a host or MLOps system using collected training data. Once trained, the ML model is deployed to the embedded target as described above. [Keil MDK](https://www.keil.arm.com/keil-mdk/) provides all tools required to develop and integrate the optimized ML models with the application code, device drivers, RTOS, and middleware components. The MLOps backend tools (Vela, Arm Compiler, and FVP simulation models) are provided via Docker containers, together with CMSIS-ExecuTorch, which also includes the runtime system.

The [SDS-Framework](https://www.keil.arm.com/packs/sds-arm) is a workbench for ML model development. You can capture and record real-world sensor, audio, or video data streams directly from your target hardware for ML model training. SDS also enables data playback and validation of the ML model output against performance indicators. The option to run tests on hardware or on FVP simulation models enables automated testing and CI/MLOps workflows without requiring physical target hardware at every step.
![Embedded Development Workflow](Embedded_Development.png "Embedded Development Workflow")
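The validation idea can be sketched in plain Python (a hypothetical helper for illustration, not the SDS-Framework API): replayed model outputs are compared frame by frame against reference outputs, and a performance indicator such as top-1 agreement decides pass or fail.

```python
def top1(scores):
    """Index of the highest-scoring class in one output vector."""
    return max(range(len(scores)), key=scores.__getitem__)

def validate_outputs(recorded, reference, min_agreement=0.95):
    """Compare replayed model outputs against reference outputs.

    recorded/reference: lists of per-frame class-score vectors.
    Returns (passed, top-1 agreement rate as the performance indicator).
    """
    matches = sum(top1(r) == top1(ref) for r, ref in zip(recorded, reference))
    agreement = matches / len(reference)
    return agreement >= min_agreement, agreement

# Example: 3 of 4 replayed frames agree with the reference labels.
recorded  = [[0.1, 0.9], [0.8, 0.2], [0.4, 0.6], [0.7, 0.3]]
reference = [[0.2, 0.8], [0.9, 0.1], [0.6, 0.4], [0.6, 0.4]]
passed, rate = validate_outputs(recorded, reference, min_agreement=0.75)
```

A check like this can run identically against outputs captured on hardware or on an FVP simulation model, which is what makes the CI/MLOps automation described above possible.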
Discover [Arm's ML ecosystem partners](https://www.arm.com/partners/ai-and-ml) offering optimized models, tools, and solutions for edge AI applications.

## More Edge AI Developer Resources

- [CMSIS-NN](https://github.com/ARM-software/CMSIS-NN) - Optimized neural network kernels for Cortex-M processors
- [TensorFlow Runtime System](https://www.keil.arm.com/packs/tensorflow-lite-micro-tensorflow) - LiteRT software pack with examples and integration support
- [ML Evaluation Kit (MLEK)](https://www.keil.arm.com/packs/arm-mlek-arm) - Pre-configured ML projects and template applications for microcontroller targets
- [SDS-Framework](https://github.com/ARM-software/SDS-Framework) - Workbench for capturing sensor data, validating ML models, and enabling CI/MLOps workflows
- [CMSIS-Executorch](https://github.com/Arm-Examples/CMSIS-Executorch) - ExecuTorch integration for Cortex-M and Ethos-U targets
- [CMSIS-Zephyr-Executorch](https://github.com/Arm-Examples/CMSIS-Zephyr-Executorch) - ExecuTorch integration for Zephyr RTOS applications

profile/Embedded_Development.png (76.6 KB)

profile/ImageSource/images.pptx (206 KB, binary file not shown)

profile/LiteRT.png (4.35 KB)

profile/ML_Model_Runtime.png (48.9 KB)

profile/PyTorch.png (11.1 KB)
File: profile/README.md
Lines changed: 3 additions & 1 deletion

@@ -24,7 +24,9 @@ Keil Studio is designed for all types of embedded projects, ranging from bare-me

  [<img src="ML_Video.png" alt="Development flow for optimized Edge AI devices" width="318" height="190" align="left">](https://armkeil.blob.core.windows.net/developer/Files/videos/KeilStudio/20250812_Multicore_Alif.mp4?#t=07:22 "Development flow for optimized Edge AI devices")

- Comprehensive machine learning capabilities are available with ML Evaluation Kit (MLEK), Synchronous Data Streaming (SDS) Framework, LiteRT (TensorFlow), and ExecuTorch that utilizes CMSIS-NN (for Cortex-M) or Vela (for Ethos-U). **[Watch this video to learn more...](https://armkeil.blob.core.windows.net/developer/Files/videos/KeilStudio/20250812_Multicore_Alif.mp4?#t=07:22 "Development flow for optimized Edge AI devices")**
+ Arm offers comprehensive tool and software support for Edge AI development on the Cortex-M processor family and Ethos-U NPU series.
+ **[Watch this video](https://armkeil.blob.core.windows.net/developer/Files/videos/KeilStudio/20250812_Multicore_Alif.mp4?#t=07:22 "Development flow for optimized Edge AI devices")**, explore the projects below, or read the section [**Edge AI**](EdgeAI.md) to learn more.

  <br clear="left"/>
