$45 AIY Vision Kit Adds Accelerated Computer Vision to Raspberry Pi Zero W Board

AIY Projects is an initiative launched by Google that aims to bring do-it-yourself artificial intelligence to the maker community by providing affordable development kits to get started with the technology. The first project was the AIY Projects Voice Kit, which basically transformed a Raspberry Pi 3 board into a Google Home device by adding the necessary hardware to support the Google Assistant SDK, plus an enclosure. The company has now launched another maker kit, the AIY Projects Vision Kit, which adds a HAT board powered by an Intel/Movidius Myriad 2 VPU to the Raspberry Pi Zero W, in order to accelerate image & object recognition using TensorFlow machine learning models. The kit includes the following items:

- Vision Bonnet accessory board powered by Myriad 2 VPU (MA2450)
- 2x 11mm plastic standoffs
- 24mm RGB arcade button and nut
- 1x privacy LED
- 1x LED bezel
- 1x 1/4-20 flanged nut
- Lens, lens washer, and lens magnet
- 50 mil ribbon cable
- Pi0 camera flat flex cable
- MIPI flat flex …

Support CNX Software – Donate via PayPal or become a Patron on Patreon

AWS DeepLens is a $249 Deep Learning Video Camera for Developers

Amazon Web Services (AWS) has launched DeepLens, the “world’s first deep learning enabled video camera for developers”. Powered by an Intel Atom X5 processor with 8GB RAM, and featuring a 4MP (1080p) camera, the fully programmable system runs Ubuntu 16.04, and is designed to expand developers’ deep learning skills, with Amazon providing tutorials, code, and pre-trained models. AWS DeepLens specifications:

- SoC – Intel Atom X5 processor with Intel Gen9 HD graphics (106 GFLOPS of compute power)
- System Memory – 8GB RAM
- Storage – 16GB eMMC flash, micro SD slot
- Camera – 4MP (1080p) camera using MJPEG, H.264 encoding
- Video Output – micro HDMI port
- Audio – 3.5mm audio jack, and HDMI audio
- Connectivity – Dual band WiFi
- USB – 2x USB 2.0 ports
- Misc – Power button; camera, WiFi and power status LEDs; reset pinhole
- Power Supply – TBD
- Dimensions – 168 x 94 x 47 mm
- Weight – 296.5 grams

The camera can not only do inference, but also train deep …

Cheap Evil Tech – WiFi Deauther V2.0 Board and Autonomous Mini Killer Drones

Most technological advances improve people’s lives, and with costs coming down dramatically over the years, they become available to more people. But technology can also be used for bad purposes, for example by governments and some hackers. Today, I’ve come across two cheap hardware devices that could be considered evil. The first one is actually pretty harmless and can be used for education, but it disconnects you from your WiFi, which may cause severe psychological trauma to some people, although it should not be life threatening, while the other is downright scary, with cheap targeted killing machines.

WiFi Deauther V2.0 board

Specifications for this naughty little board:

- Wireless Module based on ESP8266 WiSoC
- USB – 1x micro USB port (connector type changed, more stable)
- Expansion – 17-pin header with 1x ADC, 10x GPIOs, power pins
- Misc – 1x power switch, battery status LEDs
- Power Supply
  - 5 to 12V via micro USB port
  - Support for 18650 battery with charging circuit (over-charge protection, over-discharge protection)
- Dimensions – …

Hisilicon Hi3559A V100ES is an 8K Camera SoC with a Neural Network Accelerator

Earlier today, I published a review of the JeVois-A33 machine vision camera, noting that processing is handled by the four Cortex A7 cores of the Allwinner A33 processor, but in the future we can expect this type of camera to support acceleration with OpenCL/Vulkan capable GPUs or, better, neural network accelerators (NNA) such as Imagination Tech PowerVR Series 2NX. HiSilicon has already launched the Kirin 970 SoC with similar IP, except they call it an NPU (Neural-network Processing Unit). However, while looking for camera SoCs with an NNA, I found a list of deep learning processors, including ones that go into powerful servers and autonomous vehicles, which also included an 8K camera SoC with a dual core CNN (Convolutional Neural Network) acceleration engine made by HiSilicon: Hi3559A V100ES. HiSilicon Hi3559A V100ES specifications:

- Processor Cores
  - 2x ARM Cortex A73 @ 2 GHz, 32 KB I-cache, 64KB D-cache, 512 KB L2 cache
  - 2x ARM Cortex A53 @ 1 GHz, 32 KB I-cache, 32KB …
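To give an idea of the kind of workload such a CNN acceleration engine offloads, here is a minimal sketch in plain NumPy (not HiSilicon's SDK, whose API is not covered in this excerpt) of the 2D convolution that dominates convolutional neural network inference — each output pixel is a multiply-accumulate over a small window, which is exactly the data-parallel pattern dedicated engines speed up:

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 'valid' 2D convolution (cross-correlation, as in most CNN
    frameworks) -- the core multiply-accumulate op a CNN engine accelerates."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    oh, ow = ih - kh + 1, iw - kw + 1
    out = np.zeros((oh, ow))
    for y in range(oh):
        for x in range(ow):
            # Multiply-accumulate over the receptive field
            out[y, x] = np.sum(image[y:y+kh, x:x+kw] * kernel)
    return out

# 3x3 Laplacian kernel applied to a tiny linear-ramp test image:
# a linear ramp has zero second derivative, so the output is all zeros.
image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], dtype=float)
result = conv2d(image, kernel)
print(result.shape)  # (3, 3)
```

A hardware engine runs thousands of these multiply-accumulates in parallel per cycle, which is why even a modest dual core CNN engine can outpace the application CPU cores at this task.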

JeVois-A33 Linux Computer Vision Camera Review – Part 2: Setup, Guided Tour, Documentation & Customization

Computer vision, artificial intelligence, machine learning, etc… are all terms we hear frequently these days. The JeVois-A33 smart machine vision camera, powered by an Allwinner A33 quad core processor, was launched last year on Indiegogo to bring such capabilities to a low power, small form factor device, for example for use in robotics projects. The company has improved the software since the launch of the project, and has now sent me their tiny Linux camera developer kit for review; I’ve already checked out the hardware and accessories in the first post. I’ve now had time to test the camera, and I’ll explain how to set it up, test some of the key features via the provided guided tour, and show how it’s possible to customize the camera to your needs with one example. Getting Started with JeVois-A33 In theory, you could just get started by inserting the micro SD card provided with the camera, connecting it to your computer via the USB …

JeVois Smart Machine Vision Camera Review – Part 1: Developer / Robotics Kit Unboxing

The JeVois-A33 computer vision camera was unveiled at the end of last year through a Kickstarter campaign. Powered by an Allwinner A33 quad core Cortex A7 processor, and a 1.3MP camera sensor, the system can detect motion, track faces and eyes, detect & decode ArUco markers & QR codes, follow lines for autonomous cars, etc… thanks to the JeVois framework. Most rewards from Kickstarter shipped in April of this year, so it’s quite possible some regular readers of this blog are already familiar with the camera. But the developer (Laurent Itti) re-contacted me recently, explaining that they had improved the software with Python support, and added new features such as the capability of running deep neural networks directly on the processor inside the smart camera. He also wanted to send a review sample, which I received today, but I got a bit more than I expected, so I’ll start the review with an unboxing of what they call the “Developer / Robotics Kit”. I …

Arm Research Summit 2017 Streamed Live on September 11-13

The Arm Research Summit is “an academic summit to discuss future trends and disruptive technologies across all sectors of computing”, with the second edition of the event taking place now in Cambridge, UK until September 13, 2017. The agenda includes various subjects such as architecture and memory, IoT, HPC, computer vision, machine learning, security, servers, biotechnology, and others. You can find the full detailed schedule for each day on the Arm website, and the good news is that the talks are streamed live on YouTube, so you can follow the talks that interest you from the comfort of your home/office. Note that you can switch between rooms in the stream above by clicking on the <-> icon. Audio volume is a little low… Thanks to Nobe for the tip. Jean-Luc Aufranc (CNXSoft) – Jean-Luc started CNX Software in 2010 as a part-time endeavor, before quitting his job as a software engineering manager, and starting to write daily news and reviews full time later in …

Getting Started with OpenCV for Tegra on NVIDIA Tegra K1, CPU vs GPU Computer Vision Comparison

This is a guest post by Leonardo Graboski Veiga, Field Application Engineer, Toradex Brasil. Introduction Computer vision (CV) is everywhere – from cars to surveillance and production lines, the need for efficient, low power consumption yet powerful embedded systems is nowadays one of the bleeding edge scenarios of technology development. Since this is a very computationally intensive task, running computer vision algorithms on an embedded system CPU might not be enough for some applications. Developers and scientists have noticed that the use of dedicated hardware, such as co-processors and GPUs – the latter traditionally employed for graphics rendering – can greatly improve CV algorithm performance. In the embedded scenario, however, things usually are not as simple as they look. Embedded GPUs tend to be different from desktop GPUs, thus requiring many workarounds to get extra performance from them. A good example of a drawback of embedded GPUs is that they are hardly supported by OpenCV – the de facto standard libraries …
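As a rough, CPU-only illustration of why offloading matters (plain NumPy rather than OpenCV for Tegra, whose setup the full article covers), the sketch below compares a naive per-pixel loop against a data-parallel formulation of the same 3x3 box filter. The gap between the two is the same kind of gap, writ much larger, that a GPU exploits when every output pixel is computed concurrently:

```python
import time
import numpy as np

def box_filter_naive(img):
    """3x3 box filter computed one output pixel at a time (serial form)."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for y in range(h - 2):
        for x in range(w - 2):
            out[y, x] = img[y:y+3, x:x+3].mean()
    return out

def box_filter_vectorized(img):
    """Same filter as a sum of 9 shifted arrays -- a data-parallel form,
    analogous to how a GPU processes many pixels per step."""
    h, w = img.shape
    acc = sum(img[dy:h-2+dy, dx:w-2+dx]
              for dy in range(3) for dx in range(3))
    return acc / 9.0

img = np.random.rand(256, 256)

t0 = time.time(); a = box_filter_naive(img); t_naive = time.time() - t0
t0 = time.time(); b = box_filter_vectorized(img); t_vec = time.time() - t0

print(np.allclose(a, b))   # True: both forms give identical results
print(t_naive > t_vec)     # True: the data-parallel form is much faster
```

The takeaway carries over to embedded boards like the Tegra K1: the algorithm is unchanged, but expressing it so many pixels are processed in parallel is what unlocks the hardware's throughput.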
