Bangle.js is a Hackable, Open Source JavaScript and TensorFlow-driven Smartwatch (Crowdfunding)

Espruino brought JavaScript to the microcontroller, and now Bangle.js is bringing JavaScript plus TensorFlow Lite to your smartwatch. Some developers argue that JavaScript should be used for everything; while I find that idea ridiculous, I still find JavaScript a fascinating language. The NearForm Research team and Gordon Williams (the brain behind Espruino) have teamed up to launch the Bangle.js smartwatch. Bangle.js isn’t your ordinary smartwatch: at its heart is an open-source ecosystem. JavaScript, TensorFlow Lite, and of course a cool-looking smartwatch are what Bangle.js offers. Bangle.js was launched at the recently concluded NodeConf EU conference, with the goal of hopefully bootstrapping an open health platform. NodeWatch is the specific implementation of Bangle.js for NodeConf EU 2019, co-developed by Espruino and NearForm Research. This project has the potential to bootstrap a community-driven open health platform where anyone can build or use any compatible device and everyone owns their …
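Bangle.js apps are plain Espruino JavaScript. As a rough illustration, here is a minimal sketch of what watch code could look like: `Bangle.setHRMPower`, the `"HRM"` event, and the `g` graphics object follow the published Bangle.js API, while the smoothing helper and its parameters are our own, hypothetical additions (not from the announcement).

```javascript
// Exponential moving average to smooth noisy heart-rate readings.
function makeSmoother(alpha) {
  var avg = null;
  return function (value) {
    avg = (avg === null) ? value : alpha * value + (1 - alpha) * avg;
    return Math.round(avg);
  };
}

// On an actual Bangle.js, wire the smoother to the heart-rate sensor.
// (Guarded so the snippet also runs under Node for quick testing.)
if (typeof Bangle !== "undefined") {
  var smooth = makeSmoother(0.2);
  Bangle.setHRMPower(1); // turn the optical heart-rate monitor on
  Bangle.on("HRM", function (hrm) {
    g.clear();
    g.drawString("BPM: " + smooth(hrm.bpm), 10, 10);
  });
}
```

The same pattern (power a sensor on, subscribe to its event, draw to `g`) applies to the accelerometer and other on-watch sensors.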

Support CNX Software – Donate via PayPal or become a Patron on Patreon

AAEON M.2 and mPCIe Cards for AIoT Acceleration Run Kneron KL520 AI SoC

AAEON has announced AI acceleration M.2 and mini-PCIe cards based on the Kneron KL520 AI SoC with dual Cortex-M4 cores, a series of new modules that accelerate AI edge computing while needing only 0.5 Watts of power. The M.2 and mini-PCIe AI acceleration cards offer a new approach to AI acceleration. The cards are meant to enhance and accelerate AI functions, like gesture detection, facial and object recognition, and driver behavior monitoring, in such AIoT areas as access control, automation, and security. Previously, AAEON has been offering M.2 and mini-PCIe AI core modules for its Boxer computers based on the Intel Movidius Myriad 2 and Myriad X Vision Processing Units (VPUs). We reported on these previous releases in the articles on the UP AI Core mini-PCIe card and the AI Core XM2280 M.2 card using two Myriad X VPUs. AAEON is …


AAEON BOXER-8310AI Rugged Fanless Mini PC Combines Apollo Lake Processor & Myriad X VPU for AI Edge Applications

AAEON BOXER-8310AI rugged fanless mini PC

We’ve covered several of AAEON’s rugged mini PCs, part of the BOXER-8100 family, powered by an NVIDIA Tegra X2 processor and targeting AI edge applications. The company has now introduced three new AI embedded computers for the same AI edge applications, but using Intel processors together with an Intel/Movidius Myriad X VPU (Vision Processing Unit) for AI acceleration. The three models are BOXER-8310AI, BOXER-8320AI, and the upcoming BOXER-8330AI, based respectively on an Intel Celeron/Pentium Apollo Lake processor, an Intel Core i3 7th gen processor, and an Intel Core i3/i7 or Xeon processor. I’ll focus on the Apollo Lake model in this post to introduce the AAEON BOXER-8300AI family of rugged mini PCs. BOXER-8310AI specifications:

- SoC (one or the other)
  - Intel Pentium N4200 quad-core Apollo Lake processor
  - Intel Celeron N3350 dual-core Apollo Lake processor
- System Memory – 1x DDR3L SODIMM slot supporting up to 8GB RAM @ 1867 MHz
- Storage Device – mSATA socket
- AI Module – AI Core X with Intel Movidius Myriad X VPU …


TensorFlow Lite for Microcontrollers Benchmarked on Linux SBCs

TensorFlow Lite microcontrollers benchmark linux SBC

Dimitris Tassopoulos (Dimtass) decided to learn more about machine learning for embedded systems now that the technology is more mature, and wrote a series of five posts documenting his experience with low-end hardware such as the STM32 Bluepill board, Arduino UNO, or ESP8266-12E module, starting with simple NN examples before moving to TensorFlow Lite for microcontrollers. Dimitris recently followed up his latest “stupid project” (that’s the name of his blog, not being demeaning here :)) by running and benchmarking TensorFlow Lite for microcontrollers on various Linux SBCs. But why? you might ask. Dimitris tried to build the tflite C++ API designed for Linux, but found it was hard to build, and no pre-built binaries are available except for x86_64. He had no such issues with the tflite-micro API, even though it’s really meant for baremetal MCU platforms. Let’s get straight to the results, which also include a Ryzen platform, probably a laptop, for reference:

SBC – Average for 1000 runs (ms)
Ryzen 2700X (this …
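The benchmark numbers above are averages over 1000 runs. As a rough sketch of that timing methodology (Dimitris’ actual benchmark is C++ against tflite-micro; here we use JavaScript with a dummy workload standing in for an inference call):

```javascript
// Average wall-clock time of fn over n runs, in milliseconds —
// the same "average of N runs" methodology as the benchmark table.
function benchmarkMs(fn, n) {
  fn(); // warm-up run so caches/JIT don't skew the first measurement
  var start = Date.now();
  for (var i = 0; i < n; i++) fn();
  return (Date.now() - start) / n;
}

// Dummy workload standing in for a single model inference.
var avg = benchmarkMs(function () {
  var acc = 0;
  for (var i = 0; i < 1e5; i++) acc += Math.sqrt(i);
  return acc;
}, 100);
console.log("average per run: " + avg.toFixed(3) + " ms");
```

Averaging over many runs smooths out scheduler jitter and frequency scaling, which matters a lot on small SBCs.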


Arm TechCon 2019 Schedule – Machine Learning, Security, Containers, and More

Arm Techcon 2019

Arm TechCon will take place on October 8-10, 2019 at the San Jose Convention Center to showcase new solutions from Arm and third parties, and the company has now published the agenda/schedule for the event. There are many sessions, and even if you’re not going to attend, it’s always useful to check out what will be discussed to learn more about what’s going on currently and what the focus will be in the near future for Arm development. Several sessions normally occur at the same time, so as usual I’ll make my own virtual schedule with the ones I find most relevant.

Tuesday, October 8

09:00 – 09:50 – Open Source ML is rapidly advancing. How can you benefit? by Markus Levy, Director of AI and Machine Learning Technologies, NXP

Over the last two years and continuing today, machine learning applications have benefited tremendously from the growing number of open source frameworks, tools, and libraries supporting edge inferencing. These include CMSIS-NN, ARM …


Turing Pi Clusterboard Takes up to 7 Raspberry Pi Compute Modules

Turing Pi Raspberry Pi Compute-Module 3+ Cluster Board

We’ve already covered several cluster solutions based on Raspberry Pi boards, such as Bitscope Blade with up to 40 Raspberry Pi boards, a 16 Raspberry Pi Zero cluster board prototype, and Circumference, a “datacenter-in-a-box” with up to 32 Raspberry Pi 3 B+ boards. If you want something more compact, it makes sense to develop a platform with Raspberry Pi Compute Modules instead, and we already published news about the MiniNodes Raspberry Pi 3 CoM Carrier Board supporting up to 5 Compute Modules 3/3+ last year. There’s now another option with the Turing Pi Clusterboard, which supports up to 7 Compute Modules for applications leveraging Kubernetes, Docker, Jupyter Notebook, machine learning (TensorFlow/Caffe), and serverless stacks. Turing Pi specifications:

- 7x sockets for Raspberry Pi Compute Module 3/3+
- Storage – 7x microSD card slots
- Video Output – 1x HDMI port, MIPI DSI connector
- Audio – 1x 3.5mm audio jack
- Camera I/F – 2x MIPI CSI connectors
- Networking – Gigabit Ethernet port and on-board switch
- USB – …


96Boards RK1808 & RK3399Pro SoM & Devkit Now Available for Purchase

RK3399Pro SoM Development Kit

Back in April, we covered the very first 96Boards SoMs (Systems-on-Module), which were based on Rockchip RK3399Pro or RK1808 processors and targeted applications leveraging artificial intelligence acceleration. They were not quite available at the time, but Seeed Studio now has both BeiQi modules up for pre-order for $119 and $59 respectively, while the carrier board goes for $125 with antennas and power supply. Note that the RK3399Pro SoM and the carrier board are basically available now, with shipping scheduled for July 4th, but you’d have to wait until the end of the month for the RK1808 module. BeiQi RK1808 AIoT 96Boards Compute SoM specifications:

- SoC – Rockchip RK1808 dual-core Arm Cortex-A35 processor @ 1.6 GHz with NPU supporting 8-bit/16-bit operations up to 3.0 TOPS, TensorFlow and Caffe frameworks; 22nm FD-SOI process
- System Memory – 1GB LPDDR3 (I also read “4GB LPDRR3” (sic.) in other places, but the capacity is likely wrong)
- Storage – 16GB eMMC flash
- Networking – Gigabit Ethernet …


AAEON AI Core XP4/XP8 PCIe Card Combines up to 8 Myriad X VPUs

AAEON AI Core XP4 XP8

Movidius Myriad X is Intel’s latest vision processing unit (VPU), first unveiled in 2017, and available for evaluation in the Intel Neural Compute Stick 2 since the end of 2018. Later on, AAEON also launched its own AI Core XM2280 M.2 card equipped with two Myriad X 2485 VPUs and capable of up to 200 fps (160 fps typical) inference, thanks to over 2 TOPS of deep neural network (DNN) performance. But what if you need even more performance? The company has now launched the AI Core XP4/XP8 card with either two or four AI Core XM2280 M.2 cards, which can be plugged into any computer or workstation with a PCIe x4 slot. AAEON AI Core XP4/XP8 specifications:

- 4x M.2 sockets for 2x or 4x M.2 2280 M-key cards, each with 2x Myriad X VPUs and 2x 4Gbit LPDDR4x memory
- ASMedia PCIe switch
- Cooling – fan heatsink
- PCIe x4 standard full-length, low-profile slot card
- Dimensions – 167 x 111 mm
- Temperature …
