Intrinsyc Unveils Open-Q 845 µSOM and Snapdragon 845 Mini-ITX Development Kit

Open-Q 845 µSOM Development Kit

Intrinsyc introduced the first Qualcomm Snapdragon 845 hardware development platform last year with its Open-Q 845 HDK designed for OEMs and device makers. The company has now announced a solution for embedded systems and Internet of Things (IoT) products: the Open-Q 845 micro system-on-module (µSOM) powered by the Snapdragon 845 octa-core processor, as well as a complete development kit featuring the module and a Mini-ITX baseboard.

Open-Q 845 µSOM specifications:

- SoC – Qualcomm Snapdragon SDA845 octa-core processor with 4x Kryo 385 Gold cores @ 2.649GHz, 4x Kryo 385 Silver low-power cores @ 1.766GHz, Hexagon 685 DSP, Adreno 630 GPU with OpenGL ES 3.2 + AEP (Android Extension Pack), DX next, Vulkan 2, OpenCL 2.0 full profile
- System Memory – 4GB or 6GB dual-channel high-speed LPDDR4X SDRAM @ 1866MHz
- Storage – 32GB or 64GB UFS flash storage
- Connectivity
  - WiFi 5 802.11a/b/g/n/ac 2.4/5GHz 2×2 MU-MIMO (WCN3990) with 5 GHz external PA & U.FL antenna connector
  - Bluetooth 5.x
- Audio & …

Support CNX Software – Donate via PayPal or become a Patron on Patreon

NXP i.MX RT106F & RT106A/L Cortex-M7 Processors Target Offline Face Recognition & Smart Audio Applications

NXP i.MX RT crossover processors combine the real-time capabilities of microcontrollers with the performance of application processors, thanks to an Arm Cortex-M7 core clocked at 528 MHz and above. The performance is indeed impressive, as shown by Teensy 4.0 benchmarks, but so far NXP i.MX RT processors targeted general-purpose applications. The company has now introduced three new crossover processors designed for AI applications: NXP i.MX RT106F is designed for offline face recognition and expression identification, while RT106L and RT106A are made for local and cloud-based embedded voice applications.

NXP i.MX RT106F processor highlights:

- CPU – Arm Cortex-M7 @ 600 MHz (3020 CoreMark / 1284 DMIPS)
- Memory – 1 MB on-chip SRAM, plus up to 512 KB configurable as Tightly Coupled Memory (TCM)
- External memory interface options – NAND, eMMC, QuadSPI NOR flash, and parallel NOR flash
- Real-time, low-latency response as low as 20 ns
- Industry’s lowest dynamic power with an integrated DC-DC converter
- Low-power run modes at 24 MHz …


CDVA (Compact Descriptors for Video Analysis) Enable “Video Understanding”

SuperCDVA CDVA Video Understanding

One of the most popular applications of artificial intelligence is object detection, where models detect objects or subjects such as cats, dogs, cars, or laptops. As I discovered in a press release by Gyrfalcon, there’s something similar for videos called CDVA (Compact Descriptors for Video Analysis), which is capable of analyzing the scene taking place and describing it in a precise manner. The CDVA standard, aka MPEG ISO/IEC 15938-15, describes how video features can be extracted and stored as compact metadata for efficient matching and scalable search. According to Gyrfalcon’s press release, their Lightspeeur line of AI chips will adopt CDVA. You can get the technical details in the paper entitled “Compact Descriptors for Video Analysis: the Emerging MPEG Standard”. CDVA still relies on Convolutional Neural Networks (CNN), but does so by extracting frames first, then appending a timestamp and the encoded CDVA descriptor to the video, which is sent to a server or the cloud for analysis. …
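The flow just described (compute a compact descriptor per sampled frame, attach a timestamp, then match descriptors on a server) can be sketched in a few lines of Python. This is only an illustration of the idea: the histogram "descriptor" and cosine matching below are simple stand-ins, not the actual CNN-based features defined by ISO/IEC 15938-15.

```python
# Toy sketch of the CDVA idea: extract a compact per-frame descriptor,
# attach a timestamp, and match descriptors on a "server".
# Illustration only -- real CDVA uses standardized CNN-based and
# handcrafted features, not the simple histogram below.
import math

def frame_descriptor(frame, bins=8):
    """Compress a frame (list of pixel intensities 0-255) into a
    normalized histogram, a stand-in for a CDVA descriptor."""
    hist = [0] * bins
    for px in frame:
        hist[min(px * bins // 256, bins - 1)] += 1
    norm = math.sqrt(sum(v * v for v in hist)) or 1.0
    return [v / norm for v in hist]

def cosine_similarity(a, b):
    return sum(x * y for x, y in zip(a, b))

# "Client" side: sample frames and emit (timestamp, descriptor) pairs.
video = {0.0: [10] * 50 + [200] * 14, 0.5: [12] * 48 + [190] * 16}
metadata = [(t, frame_descriptor(f)) for t, f in sorted(video.items())]

# "Server" side: match an incoming query descriptor against stored ones.
query = frame_descriptor([11] * 49 + [195] * 15)
best_t, best_d = max(metadata, key=lambda td: cosine_similarity(td[1], query))
print(f"best match at t={best_t}s, similarity={cosine_similarity(best_d, query):.3f}")
```

The compact metadata (timestamps plus short descriptor vectors) is what travels to the cloud, which is the whole point of the standard: matching happens without shipping the raw video.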


Getting Started with Sipeed M1 based Maixduino Board & Grove AI HAT for Raspberry Pi

Grove AI HAT Face Detection

Last year, we discovered the Kendryte K210 processor with a RISC-V core and AI accelerators for machine vision and machine hearing. Soon after, the Sipeed M1 module was launched with the processor for around $10. Then this year, we started to get more convenient development boards featuring the Sipeed M1 module, such as Maixduino or Grove AI HAT. Seeed Studio sent me the last two boards for review, so I’ll start by showing the items I received, before explaining how to get started with MicroPython and Arduino code. Note that I’ll be using Ubuntu 18.04, but development in Windows is also possible.

Unboxing

I received two packages, one with a Maixduino kit, and the other with the “Grove AI HAT for Edge Computing”.

Grove AI HAT for Edge Computing

Let’s start with the latter. The board is a Raspberry Pi HAT with a Sipeed M1 module, a 40-pin Raspberry Pi header, 6 Grove connectors, as well as connectors for camera and display. The USB-C port is …


HuskyLens AI Camera & Display Board is Powered by Kendryte RISC-V Processor (Crowdfunding)

HuskyLens AI Camera

A couple of years ago, I reviewed the JeVois-A33 computer vision camera powered by an Allwinner A33 quad-core Cortex-A7 processor running Linux. The tiny camera implements easy-to-use software for machine vision with features such as object detection, eye tracking, QR code and ArUco marker detection, and so on. The camera could handle the tasks at hand, but since it relied purely on software computer vision, there was lag in some of the demo applications, including 500ms for single object detection, and up to 3 seconds for the YOLO test with multiple object types using deep learning algorithms. That’s a bit slow for robotics projects, and software solutions usually consume more power than hardware-accelerated ones. Since then, we’ve started to see low-cost SoCs and hardware with dedicated AI accelerators, and one of those is the Kendryte K210 dual-core RISC-V processor with a built-in KPU Convolutional Neural Network (CNN) hardware accelerator and APU audio hardware accelerator, found in the Sipeed M1 module, Maixduino SBC, and …
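To put those latency figures in perspective, per-frame inference time maps directly to a maximum achievable frame rate; a quick back-of-envelope using the numbers quoted above:

```python
# Convert per-frame inference latency into the maximum achievable frame
# rate, using the JeVois-A33 figures quoted in the text.
latencies_s = {"single object detection": 0.5, "multi-class YOLO": 3.0}

max_fps = {task: 1.0 / latency for task, latency in latencies_s.items()}
for task, fps in max_fps.items():
    print(f"{task}: {latencies_s[task]*1000:.0f} ms/frame -> at most {fps:.2f} fps")
```

At 2 fps and 0.33 fps respectively, it is easy to see why software-only vision is marginal for robots tracking moving targets, and why hardware CNN accelerators like the K210’s KPU are attractive.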


ZED Depth and Motion Tracking Camera Supports NVIDIA Jetson Nano Board

ZED depth camera Jetson Nano

When NVIDIA launched their low-cost Jetson Nano development board earlier this week, one reader asked whether it would support binocular depth mapping. It turns out Stereolabs has updated the SDK (Software Development Kit) for the ZED depth and motion tracking camera in order to support the latest NVIDIA developer kit. Jetson Nano can manage depth and positional tracking at 30 fps in PERFORMANCE mode with 720p resolution, while the more powerful Jetson TX2 doubles the performance to 60 fps, but at a much higher cost.

ZED depth and motion tracking camera specifications:

- Video
  - 2.2K @ 15 fps (4416×1242 resolution)
  - 1080p @ 30 fps (3840×1080 resolution)
  - 720p @ 60 fps (2560×720 resolution)
  - WVGA @ 100 fps (1344×376 resolution)
- Depth
  - Resolution – Same as selected video resolution
  - Range – 0.5 to 20 m
  - Format – 32-bit
  - Stereo Baseline – 120 mm
- Motion
  - 6-axis Pose Accuracy
    - Position – +/- 1mm
    - Orientation – 0.1°
  - Frequency – …
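Since the specifications list a 120 mm stereo baseline, depth follows from the usual stereo triangulation relation: depth = focal length (in pixels) × baseline / disparity (in pixels). The sketch below uses an illustrative focal length of 700 px, which is an assumption for the example and not an actual ZED calibration value, to show how a depth range like 0.5–20 m maps to a disparity range:

```python
# Depth from stereo disparity: depth = focal_px * baseline / disparity.
# The 120 mm baseline comes from the ZED specs above; the 700 px focal
# length is a made-up illustrative value, not a ZED calibration parameter.
def stereo_depth_m(disparity_px, focal_px=700.0, baseline_m=0.12):
    """Return the depth in meters for a given disparity in pixels."""
    if disparity_px <= 0:
        return float("inf")  # zero disparity: point at infinity
    return focal_px * baseline_m / disparity_px

for d in (168, 42, 4.2):
    print(f"disparity {d:>5} px -> depth {stereo_depth_m(d):.2f} m")
```

With these assumed numbers, a 0.5 m minimum range corresponds to a 168 px disparity and the 20 m maximum to about 4.2 px, which illustrates why long-range depth accuracy degrades: at the far end, a fraction of a pixel of disparity error translates into meters of depth error.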


Inforce 6560 Snapdragon 660 Pico-ITX SBC Comes with 3 MIPI Camera Connectors

Inforce 6560

Inforce Computing has launched yet another Snapdragon-based single board computer: the Inforce 6560 SBC, powered by a Qualcomm Snapdragon 660 processor, with stereoscopic depth sensing and deep learning capabilities made possible thanks to three MIPI camera connectors. The board also comes with 3GB LPDDR4 RAM, 32GB flash, HDMI and MIPI DSI video outputs, Gigabit Ethernet, a wireless module, USB ports, sensors, and more.

Inforce 6560 specifications:

- SoC – Qualcomm Snapdragon 660 (SDA660) with 8x Kryo ARMv8-compliant 64-bit CPU cores arranged in two quad-core clusters running at 2.2GHz (Gold) and 1.8GHz (Silver), Adreno 512 GPU, Hexagon 680 DSP with dual Hexagon vector processors (HVX-512) @ 787MHz for low-power audio and computer vision processing, Spectra 160 camera (dual) Image Signal Processors (ISPs)
- System Memory – 3GB onboard LPDDR4 RAM
- Storage – 32GB eMMC flash, 1x µSD card v3.0 socket
- Video Output / Display Interface
  - HDMI v1.3a port, Full HD @ 60fps
  - 4-lane MIPI-DSI with FullHD+ capability
  - UltraHD (4K) display on USB-C port
- Audio – …


Adding Machine Learning based Image Processing to your Embedded Product

Convert model tensorflow runtime to NNEF

CNXSoft: This is a guest post by Greg Lytle, V.P. Engineering, Au-Zone Technologies. Au-Zone Technologies is part of the Toradex Partner Network.

Object detection and classification on a low-power Arm SoC

Machine learning techniques have proven to be very effective for a wide range of image processing and classification tasks. While many embedded IoT systems deployed to date have leveraged connected cloud-based resources for machine learning, there is a growing trend to implement this processing at the edge. Selecting the appropriate system components and tools to implement this image processing at the edge lowers the effort, time, and risk of these designs. This is illustrated with an example implementation that detects and classifies different pasta types on a moving conveyor belt.

Example Use Case

For this example, we will consider the problem of detecting and classifying different objects on a conveyor belt. We have selected commercial pasta as an example, but this general technique can be applied to most other …
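As a minimal sketch of the classification step described above, here is a nearest-centroid classifier over hand-picked feature vectors. The class names and feature values are invented for illustration; the actual pipeline the article describes uses a trained neural network on camera images, not this toy.

```python
# Toy sketch of the detect-and-classify step: label a detected object by
# comparing a small feature vector against per-class centroids.
# Features and class names are invented; the real Au-Zone/Toradex example
# runs a trained CNN on camera frames instead.
def classify(features, centroids):
    """Return the class whose centroid is nearest (squared Euclidean)."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda c: sq_dist(features, centroids[c]))

# Hypothetical (aspect ratio, curvature) centroids for three pasta types.
centroids = {
    "penne":    (3.0, 0.1),
    "fusilli":  (2.5, 0.9),
    "farfalle": (1.2, 0.3),
}

detection = (2.6, 0.8)  # features measured for one object on the belt
print(classify(detection, centroids))  # -> fusilli
```

The same structure (per-object feature vector in, class label out) is what the edge CNN provides, just with learned features instead of hand-picked ones.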
