Intel’s Movidius Neural Compute Stick Brings Low Power Deep Learning & Artificial Intelligence Offline

Intel has released several Compute Sticks over the years that can be used as tiny Windows or Linux computers connected to the HDMI port of your TV or monitor, but the Movidius Neural Compute Stick is a completely different beast: it’s a deep learning inference kit and self-contained artificial intelligence (A.I.) accelerator that connects to a USB port of a computer or laptop.

Intel did not provide the full hardware specifications for the kit, but we do know the following:

  • Vision Processing Unit – Intel Movidius Myriad 2 VPU with 12 VLIW 128-bit vector SHAVE processors @ 600 MHz optimized for machine vision, plus configurable hardware accelerators for image and vision processing; 28nm HPC process node; up to 100 GFLOPS
  • USB 3.0 type A port
  • Power Consumption – Low power, the SoC has a 1W power profile
  • Dimensions – 72.5mm x 27mm x 14mm

You can feed a trained Caffe feed-forward Convolutional Neural Network (CNN) into the toolkit, profile it, then compile a tuned version ready for embedded deployment using the Intel/Movidius Neural Compute Platform API. Inference runs in real time on the stick itself, and no cloud connection is needed. You can even connect multiple Movidius Compute Sticks to the same computer to scale performance.
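As a rough sketch of what that workflow might look like in practice, the snippet below runs a single inference through the SDK’s Python bindings (the `mvnc` module). The exact call names follow the first-generation Neural Compute SDK API as documented by Movidius, but treat them as assumptions — they cannot be verified without the stick — and the mean/scale preprocessing values are purely illustrative, not from the article:

```python
import numpy as np

def preprocess(image, mean=(104.0, 117.0, 123.0), scale=0.007843):
    """Scale and convert an HxWx3 uint8 image to float16.

    The Myriad 2 works in half precision, so the SDK expects
    float16 input tensors (the mean/scale numbers here are just
    illustrative GoogLeNet-style values, an assumption).
    """
    return ((image.astype(np.float32) - mean) * scale).astype(np.float16)

def classify(graph_path, image):
    """Run one inference on the first attached Compute Stick.

    Hypothetical use of the NCSDK v1 `mvnc` API; requires the
    stick plus a graph file produced by the SDK's compiler.
    """
    from mvnc import mvncapi as mvnc            # ships with the SDK
    devices = mvnc.EnumerateDevices()
    if not devices:
        raise RuntimeError("No Movidius device found")
    device = mvnc.Device(devices[0])
    device.OpenDevice()
    with open(graph_path, "rb") as f:
        graph = device.AllocateGraph(f.read())  # load the compiled network
    graph.LoadTensor(preprocess(image), None)   # inference runs on the stick
    output, _ = graph.GetResult()               # blocks until the stick answers
    graph.DeallocateGraph()
    device.CloseDevice()
    return output
```

Since everything after `LoadTensor` happens on the stick, the host only handles preprocessing and I/O — which is also why multiple sticks on one machine can scale throughput.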

It can help bring artificial intelligence to drones, robots, security cameras, smart speakers, and anything else that can leverage deep learning. The video below also shows the USB Compute Stick connected to what looks like a development board, so the target platform does not need to be powerful, with most of the heavy processing happening inside the stick. The host currently does need to be an x86-64 computer running Ubuntu 16.04, so there is no ARM support.

The Movidius Neural Compute Stick is sold for $79 via RS Components and Mouser. You’ll find the purchase links, a getting started guide, and support forums on the Movidius developer site.




8 Replies to “Intel’s Movidius Neural Compute Stick Brings Low Power Deep Learning & Artificial Intelligence Offline”

  1. This is actually one of the best write-ups on this thing I have seen. It doesn’t drift off into fantasy making up crap that thing does not do. The article is short and to-the-point based on the scant information Intel has provided. If after reading any article about this thing, it is still not clear WTF it is, that is because Intel is not saying. They apparently lost the only person in the company that can make PowerPoint slides and flowcharts/diagrams.

    More importantly, this article states what no other article does:
    > so no ARM support
    This product is useless to the 99.9% of the market that would want to purchase it.

  2. @crashoverride
    My understanding is that it can do things like object and speech recognition offline and at low power, better than would be possible with standard hardware (smartphone/development board). But yeah, they did not exactly quantify it.

  3. At $79, do they actually have the beef described here? Does it actually accelerate the things claimed, or is it like the S3 Virge, a 3D Decelerator?

  4. According to the product description for the VPU (which may or may not apply to the chip used in the stick, since Intel is not giving any technical details), there are 12 vector units that are 128 bits wide (4 floats).
    https://uploads.movidius.com/1463156689-2016-04-29_VPU_ProductBrief.pdf
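    For what it’s worth, taking those numbers at face value, the vector units alone give a back-of-the-envelope peak like this (the 2 ops/cycle fused multiply-add assumption is mine, not from the brief):

    ```python
    shaves = 12        # SHAVE vector processors
    lanes = 4          # 128-bit wide / 32-bit floats
    ops_per_cycle = 2  # assuming one fused multiply-add per lane per cycle
    clock_mhz = 600

    gflops = shaves * lanes * ops_per_cycle * clock_mhz / 1000
    print(gflops)  # → 57.6
    ```

    So the “up to 100 gigaflops” figure presumably also counts the fixed-function accelerators, or half-precision throughput.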

    What would be interesting is for someone to benchmark it. I would be interested particularly in the performance versus NEON/Mali GPU:
    https://developer.arm.com/technologies/compute-library

    The newest Mali-G72 is supposed to have machine learning enhancements.
    https://developer.arm.com/products/graphics-and-multimedia/mali-gpus/mali-g72-gpu

  5. I can’t imagine there are developers or engineers looking to accelerate neural network performance on their local x86-64 platforms who wouldn’t be better served by just getting a GPU. I don’t know, maybe it’s really really great at accelerating neural networks, more than the 100 GFLOPS suggests. The Myriad 2 VPU’s main selling point does seem to be having hardware specifically for doing convolutions.

    The power usage isn’t useful for including it in products if it needs an x86-64 platform attached to it. I suppose it’s more a marketing point toward folks considering using the VPU chip standalone, like in the DJI Phantom 4. It might actually be the same USB eval board from 1.5 years ago, just with a different case and an Intel logo: https://www.movidius.com/news/movidius-announces-deep-learning-accelerator-and-fathom-software-framework

    Maybe this would be useful in the education market, it’s cheap and it’s a lot easier to add a USB device to a student’s laptop than a GPU. But it doesn’t seem to be powerful enough to assist with actual training, it seems to be expected to be used to accelerate pre-trained models (and maybe only CNNs). So I wonder if any educational use would be limited to running some examples without really being able to change much about them.
