SolidRun Janux GS31 Edge AI Server Combines NXP LX2160A & i.MX 8M SoCs with 128 Gyrfalcon AI Accelerators

SolidRun Janux GS31-Edge AI Inference Server

AI inference used to happen exclusively on powerful servers hosted in the cloud, but in recent years great efforts have been made to move inference to the edge, usually meaning on-device, for much lower latency and improved privacy. On-device inference works, but performance is obviously limited, and on battery-operated devices power consumption must also be considered. So for some applications, it makes sense to have a local server with much more processing power than the devices, and lower latency than the cloud. That’s exactly the use case the SolidRun Janux GS31 Edge AI inference server is targeting, combining several NXP processors with up to 128 Gyrfalcon Lightspeeur SPR2803 AI accelerators.

Janux GS31 server specifications:

- CPU Module – CEx7 LX2160A COM Express module with NXP LX2160A 16-core Arm Cortex-A72 processor @ 2.0 GHz
- System Memory – Up to 64GB DDR4 RAM via 2x SO-DIMM sockets
- “Video” Processors – Up to 32x NXP i.MX 8M Cortex-A53 SoC with …

Support CNX Software – Donate via PayPal or become a Patron on Patreon

Getting Started with Amlogic NPU on Khadas VIM3/VIM3L

output type 2 yolov3

Shenzhen Wesion released the NPU toolkit for Khadas VIM3/VIM3L last November, so I decided to try the latest Ubuntu 18.04 image and the NPU toolkit on Khadas VIM3L, before switching to VIM3 for the reasons I’ll explain below. I followed two tutorials from the forum and wiki to run pre-built samples, and then built a firmware image and samples from source.

Khadas VIM3L and VIM3 Have Different & Optional NPUs

It will be obvious to anyone who has read the specs for Khadas VIM3 and VIM3L that the former comes with a 5 TOPS NPU, while the one in the latter only delivers up to 1.2 TOPS. But somehow I forgot about this, and assumed both had the same NPU, which made VIM3L look more attractive for this type of task. Obviously, I was wrong. But the real reason I stopped using Khadas VIM3L can be seen in the photo below. My board is an early sample that comes with an Amlogic S905D3 processor, but …


Raspberry Pi 4 Powered Open Source Hardware Robot Paves the Way for Robot Maids

Raspberry Pi 4 Open Source Hardware Robot

Eventually, we all expect robots to do chores and other manual tasks performed by humans, such as preparing and serving food at restaurants, carrying objects over rough terrain as opposed to just inside a warehouse with a flat floor, or even moving pieces on a chessboard when no other human is around to play with us. I fully expect to eventually own a robot maid of sorts to wash dishes, mop the floors, and perform other tasks on my behalf. We are not there yet, but the Raspberry Pi 4 powered Pollen Robotics Reachy open-source hardware robot is getting us closer to the goal, as it can handle small objects via two robotic arms and a dual-camera head, and can also interact with humans using a microphone and a speaker.

Key features and specifications of the Reachy robot:

- Main body
  - SBC – Raspberry Pi 4 SBC with 2GB RAM according to a teardown on Tom’s Hardware
  - AI accelerator – Google Coral AI …


Google Coral mPCIe and M.2 Cards for Sale, New Coral Dev Board Mini and Modules Coming in 2020

Google introduced the Coral development board and USB accelerator with the Google Edge TPU last year. The development board was comprised of a baseboard and a Coral system-on-module with an NXP i.MX 8M quad-core Arm Cortex-A53 processor and the Edge TPU. Since then, ASUS announced the Tinker Edge T and CR1S-CM-A SBC based on the Coral module, and yesterday I noticed Seeed Studio had started selling mPCIe and M.2 AI accelerator cards with the Google Edge TPU, while today Google announced upcoming Coral products for 2020.

Coral Mini PCIe and M.2 Accelerators

Coral Mini PCIe card specifications:

- Half-mini PCIe card with PCIe Gen2 x1
- Supply voltage – 3.3VDC +/- 10%
- Dimensions – 30.00 x 26.80 x 2.55 mm
- Weight – 3.6 g
- Temperature range – Storage: -40 ~ 85°C; operating: -20 ~ 70°C
- Relative humidity – 0 ~ 100% (non-condensing)
- Op-shock – 100 G, 11 ms (persistent); 1000 G, 0.5 ms (stress); 1000 G, 1.0 ms (stress)
- Op-vibe (random) – 0.5 Grms, 5 – 500 Hz …


MediaPipe is an Open Source Perception Pipeline Framework Developed by Google


MediaPipe is an open-source perception pipeline framework introduced by Google, which helps build multi-modal machine learning pipelines. A developer can build a prototype using existing components, without really getting into writing machine learning algorithms and models. The framework can be used for various vision & media processing applications (especially in VR) such as object detection, face detection, hand tracking, multi-hand tracking, and hair segmentation. MediaPipe supports various hardware and operating system platforms such as Android, iOS & Linux by offering APIs in C++, Java, Objective-C, etc., and the framework is also capable of utilizing GPU resources.

MediaPipe Components

The framework is comprised of three major components:

- A framework for inference from the pipeline data
- Tools for evaluation
- A collection of reusable inference and processing components

It follows the approach of graph-based frameworks like OpenCV, and all processing happens within the context of the graph. The graph contains a collection of nodes, and each node is implemented as a …
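MediaPipe graphs are declared in protobuf text format, wiring calculator nodes together through named streams. A minimal sketch of such a graph config, using the PassThroughCalculator from MediaPipe’s hello-world example (the stream names are illustrative):

```
# A graph with a single node: packets arriving on the graph's input
# stream "in" are forwarded unchanged to the output stream "out".
input_stream: "in"
output_stream: "out"

node {
  calculator: "PassThroughCalculator"
  input_stream: "in"
  output_stream: "out"
}
```

Real perception pipelines chain many such nodes, e.g. a decoder feeding a detection calculator feeding a renderer, with the framework handling packet scheduling between them.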


Bangle.js is a Hackable, Open Source JavaScript and TensorFlow-driven Smartwatch (Crowdfunding)

Espruino brought JavaScript to the microcontroller; now Bangle.js is bringing JavaScript plus TensorFlow Lite to your smartwatch. There has been a movement among some developers saying that JavaScript should be used for everything, and even though I find that idea ridiculous, I still find JavaScript a fascinating language. The NearForm Research team and Gordon Williams (the brain behind Espruino) have teamed up to launch the Bangle.js smartwatch. Bangle.js isn’t your ordinary smartwatch: at the heart of it is an open-source ecosystem. JavaScript plus TensorFlow Lite, and of course a cool-looking smartwatch, is what Bangle.js is offering. Bangle.js was launched at the recently concluded NodeConf EU conference, and the goal is hopefully to bootstrap an Open Health Platform. NodeWatch is the specific implementation of Bangle.js for NodeConf EU 2019, co-developed by Espruino and NearForm Research. This project has the potential to bootstrap a community-driven open health platform where anyone can build or use any compatible device and everyone owns their …
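Since everything on the watch is scriptable JavaScript, app logic can stay very small. As a purely hypothetical sketch (not code from the project), here is the kind of thing one might run on Bangle.js: a naive step counter over accelerometer magnitude, where the threshold value and the `Bangle.on('accel', …)` hookup are assumptions based on the Espruino documentation:

```javascript
// Naive step-detection sketch (hypothetical, not official Bangle.js code).
// A "step" is counted when acceleration magnitude rises above a threshold.
var THRESHOLD = 1.2; // in g; illustrative tuning value, not a measured one
var steps = 0;
var above = false;

function onAccelSample(mag) {
  // Rising-edge detection: count once per excursion above the threshold
  if (mag > THRESHOLD && !above) {
    steps++;
    above = true;
  } else if (mag < THRESHOLD) {
    above = false;
  }
  return steps;
}

// On the watch this would be wired to the accelerometer event, e.g.:
// Bangle.on('accel', function (a) { onAccelSample(a.mag); });
```

A real implementation would filter the signal and debounce in time, but the point is that the whole app remains a few dozen lines of readable JavaScript.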


AAEON M.2 and mPCIe Cards for AIoT Acceleration Run Kneron KL520 AI SoC

AAEON has announced AI Acceleration M.2 and mini-PCIe cards based on the Kneron KL520 AI SoC with dual Cortex-M4 cores, a series of new modules for accelerating AI edge computing that need only 0.5 Watts of power. The M.2 and mini-PCIe AI acceleration cards offer a new way to approach AI acceleration.

What AI Features are Enhanced

The cards are meant to enhance and accelerate AI functions such as gesture detection, facial and object recognition, and driver behavior monitoring, in AIoT areas such as access control, automation, and security.

History of the AAEON Development

AAEON has previously offered M.2 and mini-PCIe AI core modules for its Boxer computers, based on the Intel Movidius Myriad 2 and Myriad X Vision Processing Units (VPUs). These earlier releases were covered in the articles on the UP AI Core mini-PCIe card and the AI Core XM2280 M.2 card, the latter using two Myriad X VPUs. AAEON is …


AAEON BOXER-8310AI Rugged Fanless Mini PC Combines Apollo Lake Processor & Myriad X VPU for AI Edge Applications

AAEON BOXER-8310AI rugged fanless mini PC

We’ve covered several AAEON rugged mini PCs, part of the BOXER-8100 family, powered by an NVIDIA Tegra X2 processor and targeting AI edge applications. The company has now introduced three new AI embedded computers for the same AI edge applications, but using Intel processors together with an Intel Movidius Myriad X VPU (Vision Processing Unit) for AI acceleration. The three models are BOXER-8310AI, BOXER-8320AI, and the upcoming BOXER-8330AI, based respectively on an Intel Celeron/Pentium Apollo Lake processor, a 7th generation Intel Core i3 processor, and an Intel Core i3/i7 or Xeon processor. I’ll focus on the Apollo Lake model in this post to introduce the AAEON BOXER-8300AI family of rugged mini PCs.

BOXER-8310AI specifications:

- SoC (one or the other)
  - Intel Pentium N4200 quad-core Apollo Lake processor
  - Intel Celeron N3350 dual-core Apollo Lake processor
- System Memory – 1x DDR3L SODIMM slot supporting up to 8GB RAM @ 1867 MHz
- Storage Device – mSATA socket
- AI Module – AI Core X with Intel Movidius Myriad X VPU …
