XMOS launches Xcore.ai, a scalable AI processor for the Edge

XMOS, known for its high-performance voice interfaces, is joining the AIoT bandwagon with the announcement of Xcore.ai, a flexible and economical processor delivering high-performance AI, DSP, control, and I/O in a single device. IoT and AI have been two of the most prominent fields of the last decade. Both areas have seen major innovations: deep neural networks have become better, IoT deployment costs have dropped significantly, and, most importantly, both have had a significant impact on multiple industries. An interesting recent trend is the emergence of applications merging AI and IoT into so-called AIoT applications, where IoT acts as the digital nervous system, while AI becomes the brain making the critical decisions that control the whole system. AIoT has led to the development and deployment of what we call AI processors or AI modules that can be deployed to the edge for high-performance edge computing applications. An …

Support CNX Software – Donate via PayPal or become a Patron on Patreon

Arm Introduces Cortex-M55 MCU Core, Arm Ethos-U55 microNPU for Cortex-M Microcontrollers

Arm Cortex M55

Artificial Intelligence and the Internet of Things often go hand in hand, with AIoT being a new buzzword that emerged in the last year or so. But for AIoT to scale, we need ultra-low-cost, low-power solutions capable of doing inference at the sensor node level, and this is only possible with microcontrollers. To achieve this goal, Arm has just unveiled the Arm Cortex-M55 microcontroller core, optimized for artificial intelligence workloads and delivering up to a 15x uplift in ML performance and a 5x uplift in DSP performance with greater efficiency, as well as the Ethos-U55 microNPU designed for Cortex-M microcontrollers that need even more AI performance (up to 480 times faster), while consuming as little power as possible.

Arm Cortex-M55 key features and specifications:

- Architecture – Armv8.1-M
- Bus interface – AMBA 5 AXI5 64-bit master (compatible with AXI4 IPs)
- Pipeline – 4-stage (for main integer pipeline)
- Security – Arm TrustZone technology (optional)
- DSP extension – 32-bit DSP/SIMD extension
- M-Profile Vector Extension …
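To make the DSP/ML angle concrete, the kind of workload those uplift figures refer to is a multiply-accumulate inner loop over fixed-point samples, which the Cortex-M55's vector extension processes several lanes at a time. Below is a minimal sketch in Python of a Q15 fixed-point dot product with saturation; the function name and test values are illustrative only, not taken from Arm's documentation.

```python
# Q15 fixed-point dot product with 32-bit saturation: the kind of
# multiply-accumulate kernel a DSP/SIMD extension accelerates.
INT32_MAX = 2**31 - 1
INT32_MIN = -(2**31)

def dot_q15(a, b):
    """Accumulate 16x16-bit products, then saturate to 32 bits."""
    acc = sum(x * y for x, y in zip(a, b))
    # Saturate instead of wrapping around, as DSP instructions do.
    return max(INT32_MIN, min(INT32_MAX, acc))

# Illustrative Q15 sample vectors (values in [-32768, 32767]):
a = [1000, -2000, 3000, 4000]
b = [4000, 3000, 2000, 1000]
print(dot_q15(a, b))  # → 8000000
```

A scalar MCU core executes one multiply-accumulate per iteration; a vectorized core retires several 16-bit lanes per cycle, which is where much of the quoted DSP uplift comes from.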


Mustang-M2BM-MX2 M.2 Card Features Two Intel Movidius Myriad X VPUs


We’ve already seen M.2 cards based on one or more Intel Movidius Myriad X VPUs, such as the AAEON AI Core XM2280 M.2 card, but there’s now another option from Taiwan-based IEI Integration Corp with their Mustang-M2BM-MX2 card.

Specifications:

- AI Accelerators – 2x Intel Movidius Myriad X MA2485 VPU
- Dataplane Interface – M.2 B+M key
- Power Consumption – Around 7.5W
- Cooling – Active heatsink
- Dimensions – 22 x 80 mm
- Temperature Range – -20°C to 60°C
- Humidity – 5% to 90%

Just like other Myriad X devices, the card relies on the Intel OpenVINO toolkit, which works on Ubuntu 16.04.3 LTS 64-bit, CentOS 7.4 64-bit, or Windows 10 64-bit operating systems, and supports AlexNet, GoogleNetV1/V2, MobileNet SSD, MobileNetV1/V2, MTCNN, SqueezeNet 1.0/1.1, Tiny YOLO V1 & V2, YOLO V2, and ResNet-18/50/101 topologies, as well as TensorFlow, Caffe, MXNet, and ONNX AI frameworks. The heatsink is really thick (~2 cm high), so it’s not something you’d just put in your laptop; instead, it’s better suited to …


ASUS Tinker Edge T SBC Launched for $168 and Up


ASUS unveiled the Tinker Edge T & CR1S-CM-A SBCs, based on the Google Coral Edge TPU system-on-module featuring both an NXP i.MX 8M processor and a Google Edge TPU co-processor for AI acceleration, in May 2019, but at the time none of the boards were available. Earlier this month, ASUS officially announced the board, and it can now be purchased on various sites including Provantage (~$168.35) and Physical Computing (21,600 JPY, ~$200). It is also listed on Connection for about $198, but currently out of stock.

Edge TPU module:

- SoC – NXP i.MX 8M quad-core Arm Cortex-A53 processor with Arm Cortex-M4F real-time core, GC7000 Lite 3D GPU
- ML accelerator – Google Edge TPU coprocessor delivering up to 4 TOPS
- System Memory – 1 GB LPDDR4 RAM
- Storage – 8 GB eMMC flash memory
- Wireless Connectivity – Wi-Fi 2×2 MIMO (802.11b/g/n/ac 2.4/5 GHz), Bluetooth 4.2

Baseboard:

- Storage – MicroSD card slot
- Networking – Gigabit Ethernet port (via RTL8211F-CG)
- Video Output – MIPI DSI connector, …


ESP Open Source Research Platform Enables the Design of RISC-V & Sparc SoCs with Accelerators

ESP RISC-V & Sparc Platform

FOSDEM 2020 will take place next week, and there will be several interesting talks about open-source hardware and software development. One of them is entitled “Open ESP – The Heterogeneous Open-Source Platform for Developing RISC-V Systems”, with an excerpt of the abstract reading:

ESP is an open-source research platform for RISC-V systems-on-chip that integrates many hardware accelerators. ESP provides a vertically integrated design flow from software development and hardware integration to full-system prototyping on FPGA. For application developers, it offers domain-specific automated solutions to synthesize new accelerators for their software and map it onto the heterogeneous SoC architecture. For hardware engineers, it offers automated solutions to integrate their accelerator designs into the complete SoC.

If we go to the official website, we can see ESP (Embedded Scalable Platform) actually supports both 32-bit Leon3 (Sparc) and 64-bit Ariane (RISC-V) cores, and various hardware accelerators from the platform or third parties.

Highlights:

- Architecture – Tile-based architecture: processor, memory, and accelerator tiles
- NoC (Network-on-Chip) …


SolidRun Janux GS31 Edge AI Server Combines NXP LX2160A & i.MX 8M SoCs with 128 Gyrfalcon AI Accelerators

SolidRun Janux GS31-Edge AI Inference Server

AI inference used to happen exclusively in powerful servers hosted in the cloud, but in recent years great efforts have been made to move inference to the edge, usually meaning on-device, due to much lower latency and improved privacy. On-device inference works, but performance is obviously limited, and on battery-operated devices, one also has to consider power consumption. So for some applications, it makes sense to have a local server with much more processing power than the devices, and lower latency than the cloud. That’s exactly the use case the SolidRun Janux GS31 Edge AI inference server targets, using several NXP processors combined with up to 128 Gyrfalcon Lightspeeur SPR2803 AI accelerators.

Janux GS31 server specifications:

- CPU Module – CEx7 LX2160A COM Express module with NXP LX2160A 16-core Arm Cortex-A72 processor @ 2.0 GHz
- System Memory – Up to 64GB DDR4 RAM via 2x SO-DIMM sockets
- “Video” Processors – Up to 32x NXP i.MX 8M Cortex-A53 SoCs with …
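The latency trade-off above can be sketched with rough numbers: end-to-end latency is the network round trip plus the inference time, and an edge server trades a slower accelerator than the cloud for a much shorter network hop. All figures below are illustrative assumptions, not measurements of the Janux GS31 or any other hardware.

```python
def total_latency_ms(network_rtt_ms: float, inference_ms: float) -> float:
    """End-to-end latency: network round trip plus inference time."""
    return network_rtt_ms + inference_ms

# Illustrative figures only (assumed, not measured):
cloud  = total_latency_ms(network_rtt_ms=80.0, inference_ms=10.0)   # fast GPU, distant data center
edge   = total_latency_ms(network_rtt_ms=2.0,  inference_ms=25.0)   # one LAN hop to a local server
device = total_latency_ms(network_rtt_ms=0.0,  inference_ms=120.0)  # slow on-device inference

print(cloud, edge, device)  # → 90.0 27.0 120.0
```

Under these assumptions the local server wins despite slower inference than the cloud, because the network term dominates; that is the niche an edge inference server occupies between on-device and cloud deployment.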


Getting Started with Amlogic NPU on Khadas VIM3/VIM3L

output type 2 yolov3

Shenzhen Wesion released the NPU toolkit for Khadas VIM3/VIM3L last November, so I decided to try the latest Ubuntu 18.04 image and the NPU toolkit on Khadas VIM3L, before switching to VIM3 for reasons I’ll explain below. I followed two tutorials from the forum and wiki to run pre-built samples, and then built a firmware image and samples from source. Khadas VIM3L and VIM3 Have Different & Optional NPUs It will be obvious to anyone who has read the specs for Khadas VIM3 and VIM3L that the former comes with a 5 TOPS NPU, while the one in the latter only delivers up to 1.2 TOPS. But somehow I forgot about this, and assumed both had the same NPU, making VIM3L more attractive for this type of task. Obviously, I was wrong. But the real reason I stopped using Khadas VIM3L can be seen in the photo below. My board is an early sample that comes with an Amlogic S905D3 processor, but …


Raspberry Pi 4 Powered Open Source Hardware Robot Paves the Way for Robot Maids

Raspberry Pi 4 Open Source Hardware Robot

Eventually, we all expect robots to do chores and other manual tasks performed by humans, such as preparing and serving food at restaurants, carrying objects over rough terrain as opposed to just inside a warehouse with a flat floor, or even moving pieces on a chessboard when other humans are not here to play with us. I’m fully expecting to eventually own a robot maid of sorts to wash dishes, mop the floors, and perform other tasks on my behalf. We are not there yet, but the Raspberry Pi 4 powered Pollen Robotics Reachy open-source hardware robot is getting us closer to the goal, as it can handle small objects via its two robotic arms and a dual-camera head, and can also interact with humans using a microphone and a speaker.

Key features and specifications of the Reachy robot:

Main body:

- SBC – Raspberry Pi 4 SBC with 2GB RAM, according to a teardown on Tom’s Hardware
- AI accelerator – Google Coral AI …
