NXP embedded AI for Edge and IoT
June 14, 2018
Dutch semiconductor firm NXP Semiconductors has launched a machine learning environment for building applications on edge devices and industrial systems. Machine learning apps can be built on NXP devices from low-cost microcontrollers (MCUs) to crossover i.MX RT processors and high-performance application processors.
The ML environment is designed to provide turnkey selection of the best execution engine, from Arm Cortex cores to high-performance GPU/DSP (Graphics Processing Unit/Digital Signal Processor) complexes, along with tools for deploying machine learning models, including neural nets, on those engines.
NXP's ML environment is designed for fast-growing machine learning use-cases in vision, voice, and anomaly detection.
Vision-based ML applications use cameras as inputs to machine learning algorithms, of which neural networks are the most popular. These applications span most market segments and perform functions such as object recognition, identification, and people counting.
Voice Activated Devices (VADs) are driving the need for machine learning at the edge for wake-word detection, natural language processing, and 'voice as the user interface' applications.
Machine learning-based anomaly detection (based on vibration or sound patterns) can recognise imminent industrial equipment failures and reduce downtime.
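The article does not describe NXP's detection pipeline, but the basic idea of vibration-based anomaly detection can be sketched by comparing a sample window's RMS amplitude against a healthy baseline. The `rms` and `detect_anomaly` helpers and the 2x threshold below are illustrative assumptions, not NXP's method:

```python
import math
import statistics

def rms(window):
    """Root-mean-square amplitude of one vibration sample window."""
    return math.sqrt(sum(x * x for x in window) / len(window))

def detect_anomaly(baseline_windows, window, factor=2.0):
    """Flag a window whose RMS amplitude exceeds the mean RMS of
    healthy baseline windows by more than the given factor."""
    baseline_mean = statistics.mean(rms(w) for w in baseline_windows)
    return rms(window) > factor * baseline_mean

# Healthy machine: low-amplitude vibration (phase-shifted sine windows).
healthy = [[0.1 * math.sin(0.3 * i + p) for i in range(64)] for p in range(20)]
# Failing bearing: vibration amplitude has grown sharply.
failing = [1.5 * math.sin(0.3 * i) for i in range(64)]

print(detect_anomaly(healthy, healthy[0]))  # False
print(detect_anomaly(healthy, failing))     # True
```

In practice an embedded implementation would work on accelerometer samples and often on frequency-domain features rather than raw RMS, but the threshold-against-baseline structure is the same.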
NXP is offering several approaches for integrating machine learning into applications. The NXP ML environment includes free software that allows import of trained TensorFlow or Caffe models, conversion to optimised inference engines, and deployment on NXP's range of devices, from MCUs to i.MX and Layerscape processors.
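The release does not detail the conversion step, but a key part of turning a trained floating-point model into an inference engine small enough for an MCU is typically post-training quantization of the weights. The helper names and test values below are assumptions for illustration, not NXP's tooling:

```python
def quantize_int8(weights):
    """Affine-quantize a list of float weights to int8.

    Returns (quantized values, scale, zero_point); assumes the
    weights are not all identical (the scale would be zero).
    """
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255.0  # map the float range onto 256 int8 steps
    zero_point = round(-lo / scale) - 128
    quantized = [max(-128, min(127, round(w / scale) + zero_point))
                 for w in weights]
    return quantized, scale, zero_point

def dequantize(quantized, scale, zero_point):
    """Recover approximate float weights from int8 values."""
    return [(q - zero_point) * scale for q in quantized]

weights = [-1.0, -0.5, 0.0, 0.5, 1.0]
q, scale, zp = quantize_int8(weights)
restored = dequantize(q, scale, zp)
# Each restored weight lies within one quantization step of the original.
```

Storing weights as int8 rather than float32 cuts model memory roughly fourfold, which is what makes deploying trained models on MCU-class devices practical.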
The EdgeScale platform enables secure provisioning and management of IoT and edge devices. EdgeScale enables end-to-end continuous development and provisioning by containerising AI/ML learning and inference engines in the cloud and automatically deploying those containers securely to edge devices.
To support a broad range of customer needs, NXP also created a Machine Learning partner ecosystem to connect customers with technology vendors that can accelerate time-to-revenue with proven ML tools, inference engines, solutions and design services.
Members of the ecosystem include Au-Zone Technologies and Pilot.AI.
Au-Zone Technologies provides the industry's first end-to-end embedded ML toolkit and runtime inference engine, DeepView, which enables developers to deploy and profile CNNs across NXP's entire SoC portfolio, which includes a heterogeneous mixture of Arm Cortex-A and Cortex-M cores and GPUs.
Pilot.AI has built a framework to enable a variety of perception tasks - including detection, classification, tracking, and identification - across a variety of customer platforms, ranging from microcontrollers to GPUs, along with data collection/annotation tools and pre-trained models to enable drop-in model deployment.
“When it comes to machine learning in embedded applications, it’s all about balancing cost and the end-user experience. For example, many people are still amazed that they can deploy inference engines with sufficient performance even in our cost-effective MCUs,” said Markus Levy, head of AI technologies at NXP. “At the other end of the spectrum are our high-performance crossover and applications processors, which have the processing resources for fast inference and training in many of our customers' applications. As the use-cases for AI expand, we will continue to power that growth with next-generation processors that have dedicated acceleration for machine learning.”