Arrow Embedded To Go Free Online Conference, and 3,000 Development Boards Giveaway

Arrow Embedded to Go

With the coronavirus outbreak ongoing, many events are either canceled or moving online. Arrow Electronics has now announced what appears to be a completely new online event: the Embedded To Go virtual technology exhibition for embedded systems will take place on April 1-3, 2020, and offer technical presentations, information on newly launched technology, and access to Arrow’s sales and engineering teams. The event will be entirely free to attend, and you can register online today with a company email address. The event starts in about 10 days, but so far the virtual “booth map”, “supplier guide”, and “lecture area” are inaccessible. We only know what the event should consist of thanks to an article on EENews Embedded: technical presentation webinars will be hosted by leading suppliers covering AI, IoT and edge computing, precision measurement, high-performance computing, intelligent condition-based monitoring, and other technological subjects. Information will also be available in the form of videos and white papers on boards and applications. A …

Support CNX Software – Donate via PayPal or become a Patron on Patreon

Mustang-M2BM-MX2 M.2 Card Features Two Intel Movidius Myriad X VPUs

Mustang-M2BM-MX2

We’ve already seen M.2 cards based on one or more Intel Movidius Myriad X VPUs, such as the AAEON AI Core XM2280 M.2 card, but there’s now another option from Taiwan-based IEI Integration Corp with their Mustang-M2BM-MX2 card.

Specifications:

- AI Accelerators – 2x Intel Movidius Myriad X MA2485 VPU
- Dataplane Interface – M.2 BM Key
- Power Consumption – Around 7.5W
- Cooling – Active heatsink
- Dimensions – 22 x 80 mm
- Temperature Range – -20°C ~ 60°C
- Humidity – 5% ~ 90%

Just like other Myriad X devices, the card relies on the Intel OpenVINO toolkit, which works on Ubuntu 16.04.3 LTS 64-bit, CentOS 7.4 64-bit, or Windows 10 64-bit operating systems, and supports AlexNet, GoogleNetV1/V2, MobileNet SSD, MobileNetV1/V2, MTCNN, SqueezeNet 1.0/1.1, Tiny Yolo V1 & V2, Yolo V2, and ResNet-18/50/101 topologies, as well as TensorFlow, Caffe, MXNet, and ONNX AI frameworks. The heatsink is really thick (~2 cm high), so it’s not something you’d just put in your laptop; instead, it’s better suited to …
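Since only a card-level power figure is published, a rough per-VPU budget can be estimated by an even split; this is an assumption for illustration, not a measured value:

```python
# Rough per-VPU power estimate for the Mustang-M2BM-MX2.
# The ~7.5 W figure is the published card-level spec; splitting it
# evenly between the two Myriad X VPUs is an assumption.
CARD_POWER_W = 7.5
NUM_VPUS = 2

per_vpu_w = CARD_POWER_W / NUM_VPUS
print(f"~{per_vpu_w:.2f} W per VPU")  # ~3.75 W per VPU
```

That low per-chip budget is why a passive or compact active heatsink suffices, even though the combined stack here is ~2 cm high.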


Gen 3 Intel Movidius “Keem Bay” VPU Introduced at 2019 Intel AI Summit

Intel Movidius Keem Bay VPU

Intel made announcements about upcoming AI solutions at the 2019 Intel AI Summit. Those include Intel Nervana Neural Network Processors (NNP) for training (NNP-T1000) and inference (NNP-I1000) aimed at cloud and data center customers, as well as the Gen 3 Intel Movidius “Keem Bay” VPU. We’ll focus on the latter in this post, with Intel claiming similar performance to the NVIDIA Jetson AGX Xavier at much lower power consumption, with up to 4.7 times better power efficiency in a ResNet-50 inference benchmark using INT8 with a batch size of 1. Considering the Jetson AGX Xavier has a ~30W power budget, that would mean Movidius “Keem Bay” consumes around 6 Watts. Compared to Myriad X MA2085, the new Gen 3 VPU is said to have more than 10 times the inference performance. Intel did not provide any TOPS figure, but considering the company announced 1 TOPS of neural compute performance for Myriad X, one may expect Keem Bay to deliver 10 TOPS. The higher …
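The power and TOPS estimates above can be reproduced with simple arithmetic; the inputs are Intel's claims as quoted, and the outputs are extrapolations, not official figures:

```python
# Back-of-the-envelope figures behind the Keem Bay comparison.
# Inputs are the claims quoted above; nothing here is measured.
XAVIER_POWER_W = 30.0      # approximate Jetson AGX Xavier power budget
EFFICIENCY_FACTOR = 4.7    # claimed perf/W advantage (ResNet-50, INT8, batch 1)

keem_bay_power_w = XAVIER_POWER_W / EFFICIENCY_FACTOR
print(f"Estimated Keem Bay power: ~{keem_bay_power_w:.1f} W")  # ~6.4 W

# Intel quoted 1 TOPS for Myriad X and >10x inference performance
# for Keem Bay, so a naive extrapolation gives:
MYRIAD_X_TOPS = 1.0
keem_bay_tops = MYRIAD_X_TOPS * 10
print(f"Extrapolated Keem Bay compute: ~{keem_bay_tops:.0f} TOPS")
```

Note the efficiency claim assumes equal throughput; at different batch sizes or precisions, the ratio would differ.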


DepthAI Brings AI plus Depth to the Raspberry Pi (Crowdfunding)

DepthAI Embedded Platform

Edge computing on the Raspberry Pi has had its ups and downs, especially with everyone gearing up for AI in everything. The Raspberry Pi, on its own, isn’t really capable of any reliable AI applications. Typical object detection on the Raspberry Pi gets you around 1 – 2 fps depending on the nature of your model, because all the processing is done on the CPU. To address this poor performance of AI applications on the Raspberry Pi, AI accelerators came to the rescue. The Intel Neural Compute Stick 2 is one such accelerator, capable of somewhere around 8 – 15 fps depending on your application. The Myriad X VPU the NCS2 is based on offers much more than the compute stick delivers, and this is something that the team behind DepthAI has exploited to create a powerful AI module for edge computing called DepthAI. It is one thing to do object detection. …
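A quick comparison of the frame-rate ranges quoted above; taking the midpoint of each range is only illustrative, since real throughput varies widely by model and input resolution:

```python
# Illustrative speedup of the NCS2 (Myriad X) over Raspberry Pi CPU
# inference, using midpoints of the fps ranges quoted above.
pi_cpu_fps = (1 + 2) / 2    # ~1-2 fps object detection on the Pi's CPU
ncs2_fps = (8 + 15) / 2     # ~8-15 fps with the Neural Compute Stick 2

speedup = ncs2_fps / pi_cpu_fps
print(f"~{speedup:.1f}x faster with the NCS2")  # ~7.7x faster
```

Even at the low end of both ranges (8 fps vs 2 fps), offloading to the VPU is a 4x improvement, which is the gap DepthAI builds on.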


Intel Agilex SoC FPGA Features Four Arm Cortex-A53 Cores

Intel Agilex SoC FPGA

Intel announced their new Agilex FPGA family, manufactured with a 10nm process, earlier this April, but it only caught my eye recently when I saw “Agilex SoC FPGA” listed in the Linux 5.2 Arm changelog. The Intel SoC FPGA is there simply because it comes with four Arm Cortex-A53 cores. Three families have been announced so far, although the last one is shown as coming soon:

- Intel Agilex F-Series FPGAs and SoCs – Transceiver support up to 58 Gbps, increased DSP capabilities, high system integration, and 2nd Gen Intel Hyperflex architecture for a wide range of applications in data center, networking, and edge. Option to integrate the quad-core Arm Cortex-A53 processor.
- Intel Agilex I-Series SoC FPGAs – Optimized for high-performance processor interfaces and bandwidth-intensive applications. Coherent attach to Intel Xeon processors with Compute Express Link, hardened PCIe Gen 5 support, and transceiver support up to 112 Gbps.
- Intel Agilex M-Series SoC FPGAs – Optimized for compute- and memory-intensive applications. …


AAEON AI Core XP4/XP8 PCIe Card Combines up to 8 Myriad X VPUs

AAEON AI Core XP4 XP8

Movidius Myriad X is Intel’s latest vision processing unit (VPU), first unveiled in 2017, and available for evaluation in the Intel Neural Compute Stick 2 since the end of 2018. Later on, AAEON also launched their own AI Core XM2280 M.2 card equipped with two Myriad X 2485 VPUs and capable of up to 200 fps (160 fps typical) inference, thanks to over 2 TOPS of deep neural network (DNN) performance. But what if you need even more performance? The company has now launched the AI Core XP4/XP8 card with either two or four AI Core XM2280 M.2 cards, and it can be installed in any computer or workstation with a PCIe x4 slot.

AAEON AI Core XP4/XP8 specifications:

- 4x M.2 sockets for 2x or 4x M.2 2280 M-key cards with 2x Myriad X VPUs and 2x 4Gbit LPDDR4x memory each
- ASMedia PCIe switch
- Cooling – Fan heatsink
- PCIe x4 standard full-length, low-profile slot card
- Dimensions – 167 x 111 mm
- Temperature …
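Scaling the per-XM2280 figures across the two card variants gives a rough ceiling for what to expect; linear scaling across the ASMedia PCIe switch is an idealization, not a measured result:

```python
# Naive throughput scaling for the AAEON AI Core XP4/XP8 cards.
# Per-XM2280 figures come from AAEON's specs quoted above; linear
# scaling across the PCIe switch is an assumption for illustration.
TOPS_PER_M2_CARD = 2.0   # ~2 TOPS per XM2280 (two Myriad X VPUs)
FPS_PER_M2_CARD = 200    # peak inference fps per XM2280

for name, m2_cards in (("XP4", 2), ("XP8", 4)):
    vpus = m2_cards * 2
    print(f"{name}: {vpus} VPUs, ~{m2_cards * TOPS_PER_M2_CARD:.0f} TOPS, "
          f"up to ~{m2_cards * FPS_PER_M2_CARD} fps (ideal scaling)")
```

Real aggregate throughput also depends on how the host distributes inference requests across the VPUs and on the PCIe x4 link bandwidth.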


Linux 5.1 Release – Main Changes, Arm, MIPS & RISC-V Architectures

Linux 5.1 Changelog

Linus Torvalds has just announced the release of Linux 5.1: So it’s a bit later in the day than I usually do this, just because I was waffling about the release. Partly because I got some small pull requests today, but mostly just because I wasn’t looking forward to the timing of this upcoming 5.2 merge window. But the last-minute pull requests really weren’t big enough to justify delaying things over, and hopefully the merge window timing won’t be all that painful either. I just happen to have the college graduation of my oldest happen right smack dab in the middle of the upcoming merge window, so I might be effectively offline for a few days there. If worst comes to worst, I’ll extend it to make it all work, but I don’t think it will be needed. Anyway, on to 5.1 itself. The past week has been pretty calm, and the final patch from rc6 is not all that …


AI Core XM2280 M.2 Card is Equipped with two Myriad X 2485 VPUs

AI Core XM2280

AAEON released the UP AI Core mPCIe card with a Myriad 2 VPU (Vision Processing Unit) last year. But the company also has an AI Core X family powered by the more powerful Myriad X VPU, with the latest member being the AI Core XM2280 M.2 card featuring not one, but two Myriad X 2485 VPUs coupled with 1GB LPDDR4 RAM (512MB x2). The card supports Intel OpenVINO toolkit v4 or greater, and is compatible with the TensorFlow and Caffe AI frameworks.

AI Core XM2280 M.2 specifications:

- VPU – 2x Intel Movidius Myriad X VPU, MA2485
- System Memory – 2x 4Gbit LPDDR4
- Host Interface – M.2 connector
- Dimensions – 80 x 22 mm (M.2 M+B key form factor)
- Certification – CE/FCC Class A
- Operating Temperature – 0~50°C
- Operating Humidity – 10%~80% RH, non-condensing

The card works with the Intel Vision Accelerator Design SW SDK, available for Ubuntu 16.04 and Windows 10. Thanks to the two Myriad X VPUs, the card is capable of up to …
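The "2x 4Gbit" memory spec and the "1GB (512MB x2)" figure in the intro are the same number in different units; the conversion (assuming binary prefixes, as is usual for DRAM) works out as follows:

```python
# Converting the XM2280's memory spec: two 4 Gbit LPDDR4 chips.
# Binary prefixes (1 Gbit = 2**30 bits) are assumed, as is
# conventional for DRAM capacities.
bits_per_chip = 4 * 1024**3              # 4 Gbit per chip
mb_per_chip = bits_per_chip / 8 / 1024**2  # bits -> bytes -> MB
total_mb = 2 * mb_per_chip

print(f"{mb_per_chip:.0f} MB per chip, {total_mb:.0f} MB total")
# 512 MB per chip, 1024 MB total
```

So each Myriad X VPU gets its own 512MB bank, giving the 1GB total quoted above.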
