Coral Dev board news – NXP critical firmware update, manufacturing demo, and WebCoral in Chrome

Google Coral is a family of development boards, modules, M.2/mPCIe cards, and USB sticks with support for local AI, aka on-device or offline AI, based on the Google Edge TPU. The company has just published several updates: an important firmware update, a manufacturing demo for worker safety & visual inspection, and the ability to use the Coral USB Accelerator in Chrome.

Coral firmware update prevents the board’s excessive wear and tear

If you own the original Coral Dev Board or System-on-Module based on the NXP i.MX 8M processor, you may want to update your Mendel Linux installation with:

The update includes a patch from NXP with a critical fix to part of the SoC power configuration. Without this patch, the SoC might be overstressed and the lifetime of your board could be reduced. Note this only affects NXP-based boards, so other Coral products such as the Coral Dev Board Mini powered by the MediaTek MT8167S […]
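The exact command after the colon was truncated from the excerpt; Mendel Linux is Debian-based, so the fix arrives through its normal package updates. As a quick post-update sanity check, a minimal sketch like the following (assuming the pycoral Python library is installed, as on recent Mendel images) can confirm the Edge TPU is still enumerated:

```python
# Hypothetical post-update sanity check on a Coral board running Mendel Linux.
# Assumes the pycoral package is installed.
from pycoral.utils.edgetpu import list_edge_tpus

tpus = list_edge_tpus()
if tpus:
    for i, tpu in enumerate(tpus):
        # Each entry is a dict describing the device, e.g. its type (PCIe/USB) and path.
        print(f"Edge TPU #{i}: {tpu}")
else:
    print("No Edge TPU detected - check the firmware/driver update.")
```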

NVIDIA TAO Transfer Learning Toolkit (TLT) 3.0 released with pre-trained models

NVIDIA first introduced the TAO (Train, Adapt and Optimize) framework to ease AI model training on NVIDIA GPUs as well as NVIDIA Jetson embedded platforms last April during GTC 2021. The company has now announced the release of the third version of the TAO Transfer Learning Toolkit (TLT 3.0) together with some new pre-trained models at CVPR 2021 (2021 Conference on Computer Vision and Pattern Recognition). The newly released pre-trained models are applicable to computer vision and conversational AI, and NVIDIA claims the release provides a set of powerful productivity features that boost AI development by up to 10 times.

Highlights of TAO Transfer Learning Toolkit 3.0

Various pre-trained models for computer vision:

- Body pose estimation model that supports real-time inference on the edge, with 9x faster inference performance than the OpenPose model
- Emotion recognition
- Facial landmark
- License plate detection and recognition
- Heart rate estimation
- Gesture recognition
- Gaze estimation […]

Benchmarking TinyML with MLPerf Tiny Inference Benchmark

As machine learning moves to microcontrollers, something referred to as TinyML, new tools are needed to compare different solutions. We’ve previously posted some TensorFlow Lite for Microcontrollers benchmarks (for single board computers), but a benchmarking tool specifically designed for AI inference on resource-constrained embedded systems could prove useful for consistent results and cover a wider range of use cases. That’s exactly what MLCommons, an open engineering consortium, has done with the MLPerf Tiny Inference benchmarks, designed to measure how quickly a trained neural network can process new data on tiny, low-power devices, with an optional power measurement mode. MLPerf Tiny v0.5, the organization’s first inference benchmark suite designed for embedded systems, consists of four benchmarks:

- Keyword Spotting – Small vocabulary keyword spotting using a DS-CNN model. Typically used in smart earbuds and virtual assistants.
- Visual Wake Words – Binary image classification using MobileNet. In-home security […]
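The suite itself targets microcontroller firmware, but the core measurement idea, timing repeated inferences of a trained model, is simple to see on a host. A minimal sketch of that idea (not the MLPerf Tiny harness itself) using the TensorFlow Lite Python interpreter and a hypothetical keyword-spotting model file:

```python
# Minimal latency-measurement sketch, not the official MLPerf Tiny harness.
# Assumes TensorFlow is installed and "kws_ds_cnn.tflite" is a hypothetical
# keyword-spotting model exported to TensorFlow Lite.
import time
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="kws_ds_cnn.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Feed random data shaped like the model input and time repeated inferences.
sample = np.random.random_sample(inp["shape"]).astype(inp["dtype"])
latencies = []
for _ in range(100):
    interpreter.set_tensor(inp["index"], sample)
    start = time.perf_counter()
    interpreter.invoke()
    latencies.append(time.perf_counter() - start)
    _ = interpreter.get_tensor(out["index"])

print(f"median latency: {1000 * sorted(latencies)[len(latencies) // 2]:.2f} ms")
```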

Software-based neural video decoder leverages AI accelerator on Snapdragon 888

Sometimes hardware blocks get put to work on tasks they were not initially designed to handle. For example, AI inference used to be mostly offloaded to the GPU before neural network accelerators became common in SoCs. Qualcomm AI Research has now showcased a software-based neural video decoder that leverages both the CPU and the AI engine of the Snapdragon 888 processor to decode 1280×704 HD video at over 30 fps without any help from the video decoding unit. The neural video decoder is still a work in progress, as it only supports intra-frame decoding, and inter-frame decoding is being worked on. That means each frame is currently decoded independently, without exploiting the small changes between frames as other video codecs do. The CPU handles parallel entropy decoding, while the decoder network is accelerated on the 6th generation Qualcomm AI Engine found in the Snapdragon 888 mobile platform. This […]
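Qualcomm has not published code for this demo, so the following is only a conceptual sketch of the split described above: the CPU entropy-decodes each frame’s bitstream into a latent tensor, a neural decoder network reconstructs the frame from that latent alone, and no reference is made to previous frames. All function names here are hypothetical placeholders.

```python
# Conceptual illustration only, not Qualcomm's implementation.
import numpy as np

def entropy_decode(bitstream_chunk: bytes) -> np.ndarray:
    # Stand-in for CPU-side parallel entropy decoding of one frame's latent.
    return np.frombuffer(bitstream_chunk, dtype=np.uint8).astype(np.float32)

def decoder_network(latent: np.ndarray) -> np.ndarray:
    # Stand-in for the neural decoder running on the AI engine.
    return latent.reshape(1, -1)  # placeholder "frame"

def decode_video(frame_chunks):
    frames = []
    for chunk in frame_chunks:
        latent = entropy_decode(chunk)   # CPU
        frame = decoder_network(latent)  # AI engine
        # Intra-frame only: no motion compensation, no reference to frames[-1].
        frames.append(frame)
    return frames

print(len(decode_video([b"\x01\x02", b"\x03\x04"])))  # -> 2 frames
```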

Picovoice offline Voice AI engine now works on Arduino

Last year, I wrote about Picovoice support for the Raspberry Pi, enabling custom wake-words and offline voice recognition to control the board with voice commands without relying on the cloud. They used a ReSpeaker 4-mic array HAT to add four “ears” to the Raspberry Pi SBC. I also tried to generate a custom wake-word using the “Picovoice Console” web interface, and I was able to use “Dear Master” within a few minutes on my computer. There’s no need to provide thousands of samples or to wait weeks before getting a custom wake-word, and it’s free for personal projects. But the company has now added Picovoice support to Arduino, or more exactly the Arduino Nano 33 BLE Sense powered by a Nordic Semi nRF52840 Arm Cortex-M4F microcontroller, which is already equipped with a digital microphone, so no additional hardware is required for audio capture. To get started, you’d just need to install the Picovoice Arduino library, load the sample […]
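The Arduino sample itself is a C++ sketch, but the engine’s flow, wake-word detection followed by follow-on command inference, is easier to see in Picovoice’s Python SDK (as used on the Raspberry Pi). A rough sketch only: the file names are hypothetical, and the constructor arguments vary between SDK versions (recent releases also require an AccessKey from the Picovoice Console).

```python
# Rough sketch using the Picovoice Python SDK (pip install picovoice pvrecorder).
# Paths and the AccessKey are placeholders; treat this as an outline, not a recipe.
from picovoice import Picovoice
from pvrecorder import PvRecorder

def on_wake_word():
    print("Wake word detected, listening for a command...")

def on_inference(inference):
    # The inference object carries the intent and slots from the Rhino context.
    if inference.is_understood:
        print("intent:", inference.intent, "slots:", inference.slots)
    else:
        print("Command not understood")

pv = Picovoice(
    access_key="YOUR_ACCESS_KEY",        # placeholder
    keyword_path="dear_master.ppn",      # hypothetical custom wake-word file
    wake_word_callback=on_wake_word,
    context_path="smart_lighting.rhn",   # hypothetical Rhino context file
    inference_callback=on_inference,
)

recorder = PvRecorder(frame_length=pv.frame_length)
recorder.start()
try:
    while True:
        pv.process(recorder.read())      # feed 16 kHz, 16-bit audio frames
finally:
    recorder.stop()
    pv.delete()
    recorder.delete()
```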

Modelplace.AI is an app store for OpenCV compatible AI models

The OpenCV open-source computer vision library is used in a wide variety of projects and products, and last year the community also launched the OpenCV AI Kit (OAK), a Myriad X-based hardware solution for computer vision. However, there’s a learning curve to using the library, especially in combination with artificial intelligence models, and it can be challenging and time-consuming for newcomers. So, in order to broaden the reach of the solution, OpenCV has now introduced Modelplace.AI, an app store/marketplace for AI models that work with the OpenCV library. The AI model marketplace is a store and try-before-you-buy service for artificial intelligence models, many of which are certified to work with the OpenCV AI Kit. There are currently over 40 models for detection (e.g. person, pedestrian…), classification, segmentation (e.g. extraction of objects from a scene), pose estimation, people counting, text detection, tracking, and more. Right now all models are free, but developers will be […]
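The excerpt doesn’t show Modelplace.AI’s own API, but models aimed at OpenCV users typically plug into the cv2.dnn module. A generic sketch of running a downloaded detection model locally, with entirely hypothetical file names and input size:

```python
# Generic OpenCV DNN inference sketch, not the Modelplace.AI API itself.
# "person_detector.onnx", "street.jpg" and the 640x640 input are placeholders;
# each model listing documents its own format and pre/post-processing.
import cv2

net = cv2.dnn.readNet("person_detector.onnx")   # model downloaded from the store
image = cv2.imread("street.jpg")

# Convert the image to the NCHW blob shape most detection models expect.
blob = cv2.dnn.blobFromImage(image, scalefactor=1 / 255.0, size=(640, 640), swapRB=True)
net.setInput(blob)
outputs = net.forward()

# Post-processing (decoding boxes, NMS, class labels) is model-specific.
print("raw output shape:", outputs.shape)
```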

Xilinx announces Versal AI Edge Series with Cortex-A72 & R5 cores, FPGA fabric

Edge AI addresses latency and security issues by accelerating AI on-device, performing computations efficiently at low power. Xilinx has announced its Versal AI Edge series, the fourth member of the Adaptive Compute Acceleration Platform (ACAP) family. The series consists of seven models ranging from the VE2002 to the VE2802, fabricated on a 7 nm process. ACAP is a platform that combines a processor and FPGA fabric: the processor side provides efficient memory and I/Os, whereas the programmable logic allows fine-grained control over the hardware. And since Xilinx specializes in FPGA products, the added compute features make the ACAP hardware even more flexible and dynamic. The Versal AI Edge series features different types of engines for specific functions, namely adaptable, scalar, and intelligent engines. The seven processor models vary with respect to engine and platform specifications. However, […]

NVIDIA Jetson AGX Xavier Industrial module adds lockstep Cortex-R5 cluster, ECC RAM, and more

NVIDIA Jetson AGX Xavier is the most powerful module of the Jetson family, packing 32 TOPS of AI inference performance. But with some customers wanting to use the embedded AI computer in harsher conditions, the company has now introduced a rugged version of the module, the NVIDIA Jetson AGX Xavier Industrial. Changes include slightly lower performance (30 TOPS) to cater for an expanded temperature range, a dual-core Cortex-R5 cluster in lockstep, ECC memory, and compliance with shock and vibration standards. NVIDIA Jetson AGX Xavier Industrial specifications, with highlights showing the differences/new features:

- CPU – 8-core NVIDIA Carmel Arm v8.2 64-bit CPU with 8MB L2 + 4MB L3
- GPU – NVIDIA Volta architecture with 512 NVIDIA CUDA cores and 64 Tensor cores for up to 20 TOPS (INT8); note: the standard version supports 22 TOPS
- DL Accelerator – 2x NVDLA accelerators for up to 10 TOPS (INT8) […]