Intel’s Movidius Neural Compute Stick Brings Low Power Deep Learning & Artificial Intelligence Offline

Intel has released several Compute Sticks over the years that can be used as tiny Windows or Linux computers plugged into the HDMI port of your TV or monitor, but the Movidius Neural Compute Stick is a completely different beast: it’s a deep learning inference kit and self-contained artificial intelligence (A.I.) accelerator that connects to the USB port of a computer or laptop.

Intel did not provide full hardware specifications for the kit, but we do know the following:

  • Vision Processing Unit – Intel Movidius Myriad 2 VPU with 12 VLIW 128-bit vector SHAVE processors @ 600 MHz optimized for machine vision; configurable hardware accelerators for image and vision processing; 28nm HPC process node; up to 100 gigaflops (see the back-of-the-envelope check after this list)
  • Host Interface – USB 3.0 Type A port
  • Power Consumption – Low power; the SoC has a 1W power profile
  • Dimensions – 72.5mm x 27mm x 14mm
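
As a rough sanity check on the “up to 100 gigaflops” figure, here is a back-of-the-envelope estimate for the SHAVE array alone. It assumes fp16 lanes (8 per 128-bit vector) and one fused multiply-add per lane per cycle; those assumptions are ours, not confirmed by Intel, and the fixed-function accelerators would add on top of this.

```python
# Back-of-the-envelope peak throughput for the Myriad 2 SHAVE array.
# Assumptions (not confirmed by Intel): fp16 lanes, 1 FMA per lane per cycle.
shaves = 12             # VLIW vector processors
lanes = 128 // 16       # 8 x fp16 lanes per 128-bit vector
flops_per_fma = 2       # multiply + add
clock_hz = 600e6        # 600 MHz

peak_gflops = shaves * lanes * flops_per_fma * clock_hz / 1e9
print(f"{peak_gflops:.1f} GFLOPS")  # ~115 GFLOPS, in the same ballpark as the claimed 100
```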

You can feed a trained Caffe feed-forward Convolutional Neural Network (CNN) into the toolkit, profile it, then compile a tuned version ready for embedded deployment using the Intel Movidius Neural Compute Platform API. Inference runs in real time on the stick itself, and no cloud connection is needed. You can even connect multiple Movidius Neural Compute Sticks to the same computer to scale performance.
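
For reference, here’s a minimal sketch of what host-side inference looks like with the SDK’s Python bindings. It assumes the first-release NCSDK “mvnc” API and a graph file already compiled from a Caffe model with the SDK’s mvNCCompile tool; the exact function names and file paths below are from that early SDK and are best treated as illustrative.

```python
# Minimal host-side inference sketch, assuming the NCSDK v1 "mvnc" Python API.
import numpy as np
from mvnc import mvncapi as mvnc

# Find attached Neural Compute Sticks (several can be plugged in to scale out)
devices = mvnc.EnumerateDevices()
if not devices:
    raise SystemExit("No Movidius Neural Compute Stick found")

device = mvnc.Device(devices[0])
device.OpenDevice()

# Load a graph file previously produced by the SDK compiler (path is hypothetical)
with open("graph", "rb") as f:
    graph_buffer = f.read()
graph = device.AllocateGraph(graph_buffer)

# Run one inference entirely on the stick; inputs are half-precision tensors
input_tensor = np.random.rand(224, 224, 3).astype(np.float16)  # placeholder image
graph.LoadTensor(input_tensor, "user object")
output, _ = graph.GetResult()
print("Top class index:", int(np.argmax(output)))

# Clean up
graph.DeallocateGraph()
device.CloseDevice()
```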

It can help bring artificial intelligence to drones, robots, security cameras, smart speakers, and anything else that can leverage deep learning. The video below also shows the USB compute stick connected to what looks like a development board, so the target platform does not need to be powerful, with most of the heavy processing done on the stick itself. However, the host currently needs to be an x86-64 computer running Ubuntu 16.04, so there is no ARM support.

The Movidius Neural Compute Stick is sold for $79 via RS Components and Mouser. You’ll find the purchase links, getting started guide, and support forums on the Movidius developer site.


8 Comments
crashoverride
6 years ago

This is actually one of the best write-ups on this thing I have seen. It doesn’t drift off into fantasy making up crap that thing does not do. The article is short and to-the-point based on the scant information Intel has provided. If after reading any article about this thing, it is still not clear WTF it is, that is because Intel is not saying. They apparently lost the only person in the company that can make PowerPoint slides and flowcharts/diagrams. More importantly, this article states what no other article does: “so no ARM support”. This product is useless…

Nobody of Import
6 years ago

At $79, do they actually have the beef described here? Does it actually accelerate the things claimed, or is it like the S3 ViRGE, a 3D decelerator?

crashoverride
6 years ago

According to the product description for the VPU (which may or may not apply to the chip used in the stick, since Intel is not giving any technical details), there are 12 vector units that are 128-bit wide (4 floats).
https://uploads.movidius.com/1463156689-2016-04-29_VPU_ProductBrief.pdf

What would be interesting is for someone to benchmark it. I would be interested particularly in the performance versus NEON/Mali GPU:
https://developer.arm.com/technologies/compute-library

The newest Mali-G72 is supposed to have machine learning enhancements.
https://developer.arm.com/products/graphics-and-multimedia/mali-gpus/mali-g72-gpu

John S.
6 years ago

I can’t imagine there’s developers or engineers looking to accelerate neural network performance on their local x86-64 platforms who wouldn’t be better served by just getting a GPU. I don’t know, maybe it’s really really great at accelerating neural networks more than the 100 GFlops suggests. The Myriad 2 VPU’s main selling point does seem to be about having hardware specifically for doing convolutions. The power usage isn’t useful for including it in products if it needs an x86-64 platform attached to it. I suppose it’s more a marketing point toward folks considering using the VPU chip standalone, like in…
