NXP i.MX 93 processor combines Cortex-A55 cores with Ethos-U65 microNPU

NXP i.MX 93 (935x/933x)

NXP has unveiled the i.MX 93 processor family, which at this time comprises i.MX 935x, 933x, 932x, and 931x parts with up to two Cortex-A55 cores, one Arm Cortex-M33 real-time core, as well as an Ethos-U65 microNPU for machine learning (ML). We wrote about the i.MX 9 family back in March, when NXP told us it would include an Arm Ethos-U65 microNPU and an EdgeLock secure enclave, be manufactured with a 16/12nm FinFET-class process, and include the “Energy Flex” architecture that optimizes power consumption by turning specific blocks in the processor on and off. The NXP i.MX 93 is the first family leveraging those new features, and we now have some more details. NXP i.MX 93 processor specifications: CPU – 1x or 2x Arm Cortex-A55 @ 1.7 GHz with 32KB I-cache, 32KB D-cache, 64KB L2 cache, 256KB L3 cache with ECC; 1x Arm Cortex-M33 @ 250 MHz low-power microcontroller with 256KB […]

Cortex-M55 based Arm Virtual Hardware is now available in AWS Cloud

Arm Virtual Hardware Components

The Arm DevSummit 2021 is taking place on October 19-21, and Arm’s first announcements are related to IoT: “Arm Total Solutions for IoT”, delivering a full-stack solution to significantly accelerate IoT product development and improve product ROI; “Project Centauri”, aiming to do for the Arm Cortex-M software ecosystem what Project Cassini does for the Cortex-A ecosystem, starting with support for PSA Certified and the Open-CMSIS-CDI cloud-to-device specification; and Arm Virtual Hardware, based on the Corstone-300 IoT platform with a Cortex-M55 MCU core and an Ethos-U55 microNPU, accessible from Amazon Web Services. The first two are quite abstract right now, and more information may become available in the future, but Arm Virtual Hardware is available now from AWS as a public beta, with 100 hours of free AWS EC2 CPU credits for the first 1,000 qualified users. The virtual hardware does not emulate only the […]

Raspberry Pi smart audio devkit features AISonic IA8201 DSP, microphone array

AISonic-Raspberry Pi Development Kit

Knowles AISonic IA8201 Raspberry Pi development kit is designed to bring voice, audio edge processing, and machine learning (ML) listening capabilities to various systems, and can be used to evaluate the company’s AISonic IA8201 DSP that was introduced about two years ago. The kit is comprised of three boards: an adapter board with three buttons that connects to the Raspberry Pi, the AISonic IA8201 DSP board itself, and a microphone array connected to the DSP board via a flat cable. Knowles AISonic Raspberry Pi development kit Knowles did not provide the full details for the development kit, but says it enables wake-on-voice processing for low-latency voice UI, noise reduction, context awareness, and accelerated machine learning inferencing for edge processing of sensor inputs. Some of the use cases include Low Power Voice Wake to listen for specific OEM keywords to wake the host processor, Proximity Detection when combined with an ultrasonic capable […]

AIfES for Arduino high-efficiency AI framework for microcontrollers becomes open source

AIfES for Arduino

AIfES (AI for Embedded Systems) is a standalone, high-efficiency AI framework developed by the Fraunhofer Institute for Microelectronic Circuits and Systems, or Fraunhofer IMS for short, that makes it possible to train and run machine learning algorithms on resource-constrained microcontrollers. Until now, the framework was closed-source and only used internally by Fraunhofer IMS, but following a collaboration with Arduino, AIfES for Arduino is now open source and free to use for non-commercial projects. The framework has been optimized to allow 8-bit microcontrollers, such as the one found in the Arduino Uno, to implement an artificial neural network (ANN) that can be trained in a moderate amount of time. That means offline inference and training on small, self-learning, battery-powered devices is possible with AIfES without relying on the cloud or other devices. The library implements feedforward neural networks (FNN) that can be freely parameterized, trained, modified, or reloaded at runtime. Programmed in C, AIfES uses only standard libraries based […]

Benchmarking TinyML with MLPerf Tiny Inference Benchmark

MLPerf Tiny Inference Benchmark

As machine learning moves to microcontrollers, something referred to as TinyML, new tools are needed to compare different solutions. We’ve previously posted some TensorFlow Lite for Microcontrollers benchmarks (for single board computers), but a benchmarking tool specifically designed for AI inference on resource-constrained embedded systems could prove useful for consistent results and cover a wider range of use cases. That’s exactly what MLCommons, an open engineering consortium, has done with the MLPerf Tiny Inference benchmarks, designed to measure how quickly a trained neural network can process new data on tiny, low-power devices, with an optional power measurement component. MLPerf Tiny v0.5, the organization’s first inference benchmark suite designed for embedded systems, consists of four benchmarks: Keyword Spotting – Small vocabulary keyword spotting using a DS-CNN model. Typically used in smart earbuds and virtual assistants. Visual Wake Words – Binary image classification using MobileNet. In-home security […]

Picovoice offline Voice AI engine now works on Arduino

PicoVoice Arduino

Last year, I wrote about Picovoice support for Raspberry Pi, enabling custom wake-words and offline voice recognition to control the board with voice commands without relying on the cloud. They used the ReSpeaker 4-mic array HAT to add four “ears” to the Raspberry Pi SBC. I also tried to generate a custom wake-word using the “Picovoice Console” web interface, and I was able to use “Dear Master” within a few minutes on my computer. No need to provide thousands of samples, or to wait weeks before getting a custom wake-word. It’s free for personal projects. The company has now brought Picovoice to Arduino, or more exactly to the Arduino Nano 33 BLE Sense powered by a Nordic Semi nRF52840 Arm Cortex-M4F microcontroller, which is already equipped with a digital microphone, so no additional hardware is required for audio capture. To get started, you’d just need to install the Picovoice Arduino library, load the sample […]

Arducam Pico4ML Board – TinyML on Raspberry Pi RP2040 with QVGA Camera & Display

Arducam Pico4ML

A few months ago, ArduCAM demonstrated person detection on Raspberry Pi Pico with an Arducam camera using TensorFlow Lite, and later we noted more work was being done to bring machine learning to RP2040 platforms, notably with the development of the Arducam Pico4ML board with a built-in camera and display. At the time, i.e. last month, all we had were some renders of the board, but now Arducam Pico4ML pre-orders have launched for $49.99 on the UCTRONICS and Tindie stores. Shipping is scheduled to start at the end of the month, so let’s have a closer look. Arducam Pico4ML TinyML devkit specifications: Microcontroller – Raspberry Pi RP2040 dual-core Cortex-M0+ MCU with 264 KB of embedded SRAM Storage – 2MB SPI flash Display – 0.96-inch SPI LCD (ST7735) with 160 x 80 resolution Camera – HiMax HM01B0 QVGA camera (320×240 @ 60fps) Audio – Built-in microphone Sensor – TDK ICM-20948 9-axis IMU (gyroscope, […]

Raspberry Pi CM4 based predictive maintenance gateway features Google Coral Edge AI accelerator

Techbase iModGATE-AI predictive maintenance gateway

Techbase had already integrated the Raspberry Pi CM4 module into several industrial products including the ModBerry 500 CM4 DIN Rail industrial computer, the ModBerry AI GATEWAY 9500-CM4 with a Google Edge TPU, and the ClusBerry 9500-CM4 that combines several Raspberry Pi CM4 modules into a DIN Rail mountable system. The company has now announced another Raspberry Pi Compute Module 4 gateway – iModGATE-AI – specially designed for failure monitoring and predictive maintenance of IoT installations, which also embeds a Google Coral Edge TPU AI module to accelerate computer vision. iModGATE-AI gateway specifications: SoM – Raspberry Pi Compute Module 4 with up to 32GB eMMC AI accelerator – Google Coral Edge TPU AI module Video Output – HDMI port Connectivity – Gigabit Ethernet USB – USB 2.0 port Sensors – 9-axis motion tracking module with 3-axis gyroscope with programmable FSR, 3-axis accelerometer with programmable FSR, and 3-axis compass (magnetometer) Expansion – 2x 16-pin block terminal Advanced […]
