
Posts Tagged ‘development kit’

Compulab CL-SOM-iMX8 SoM Features NXP i.MX 8M Processor for $68 and Up

January 15th, 2018

I just covered an i.MX 8M system-on-module last Friday with Variscite DART-MX8M SoM, but Variscite is not the only company about to launch such modules, and today I’ll have a look at Compulab CL-SOM-iMX8 system-on-module, based on the same NXP i.MX 8M dual or quad core Cortex-A53 processor.

Compulab’s SoM comes with up to 4GB RAM, 64GB eMMC flash, an optional WiFi & Bluetooth module, as well as optional support for Ethernet, LVDS, analog audio, and more. Contrary to most competitors, the company has also made a habit of releasing detailed pricing for the basic configuration and for each option.


But first, let’s go through the specifications:

  • SoC (one or the other)
    • NXP i.MX8M Quad quad core Arm Cortex-A53 processor @ 1.5GHz with Arm Cortex-M4 real-time core, Vivante GC7000Lite GPU supporting OpenGL ES 3.1, OpenCL 1.2 and Vulkan
    • NXP i.MX8M Dual dual core Arm Cortex-A53 processor @ 1.5GHz with Arm Cortex-M4 real-time core, Vivante GC7000Lite GPU supporting OpenGL ES 3.1, OpenCL 1.2 and Vulkan
  • System Memory – 1 to 4GB LPDDR4
  • Storage – 4GB to 64GB eMMC flash
  • Connectivity
    • Optional 802.11a/b/g/n/ac WiFi & Bluetooth 4.1 BLE (Broadcom BCM4356 chipset)
    • Optional Gigabit Ethernet Atheros AR8033 PHY
  • Audio – Optional Wolfson WM8731L audio codec
  • 204-pin edge connector exposing the following interfaces:
    • Display
      • HDMI 2.0a up to 4096 x 2160 @ 60Hz
      • LVDS up to 1920 x 1080 @ 60Hz via on-module DSI to LVDS converter
      • 4-lane MIPI-DSI up to 1920 x 1080 @ 60Hz
      • 24-bit Parallel RGB up to 1600 x 1200
      • Touchscreen – Capacitive touchscreen support through SPI and I2C interfaces
    • Camera – 4-lane MIPI-CSI interface
    • Networking – 1x 10/100/1000Mbps Ethernet
    • Audio
      • Analog stereo output, stereo input and microphone support
      • Up to 4x I2S / SAI, S/PDIF input/output
    • PCIe – PCIe x1 Gen. 2.1, optional extra PCIe x1 Gen. 2.1
    • USB – 2x USB3.0 dual-role ports
    • Serial – Up to 4x UART
    • Up to 1x MMC/SD/SDIO
    • Up to 2x SPI, Up to 3x I2C, Up to 4x general purpose PWM signals
    • Up to 90x GPIO (multifunctional signals shared with other functions)
  • Debugging – JTAG debug interface
  • Misc – Real-time clock (RTC), powered by external battery
  • Supply Voltage – 3.35V to 4.2V
  • Digital I/O – voltage 3.3V
  • Dimensions – 68 x 42 x 5 mm
  • Weight – 14 grams
  • Temperature Range
    • Operating – Commercial: 0° to 70° C; Extended: -20° to 70° C; Industrial: -40° to 85° C
    • Storage – -40° to 85° C
  • Relative humidity – 10% to 90% (operation); 5% to 95% (storage)
  • Reliability – MTTF > 200,000 hours; Shock – 50G / 20 ms; Vibration – 20G / 0 – 600 Hz

CL-SOM-iMX8 Block Diagram

The company provides support for the Yocto Project with mainline Linux, and Android support is coming soon. SBC-iMX8 Evaluation Kit can be used to kickstart development with the SOM-iMX8-C1500Q-D2-N16-E-A-WB-H module (quad core version with 2GB RAM, 16GB flash, Ethernet, audio, wireless module, and heat dissipation plate), SB-iMX8 carrier board, a WiFi antenna and cable, a serial port cable, a USB cable and adapter, as well as a 12V power supply. The company also provides 12-month technical support for the kit.

SB-iMX8 Block Diagram

The SoM and devkit have not been formally launched (this is an early product announcement), and I could not find a photo of the carrier board, nor its specifications in readable form, but the company has already released the schematics – from which I extracted the block diagram above – as well as the PCB layout and Gerber files, and the board appears to expose pretty much all features of the SoM, as it should.

It’s unclear whether the SoMs are available right now, but they should be soon, with 10-year longevity. As mentioned in the introduction, the company has also released pricing: the most basic model SOM-iMX8-C1500D-D1-N4 (dual core, 1GB RAM, 4GB storage, no other options) sells for $68 at the 1k-unit price, and option pricing is shown below.


Visit the product page to find more details, including initial hardware and software documentation, and further pricing info. Note that the price also fluctuates based on order quantity: a single sample costs 2.5 times the 1k-unit price, while a 10k-unit order reduces the unit price by 5%.
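To put numbers on that for the base module: the $68 SOM-iMX8-C1500D-D1-N4 works out to about 2.5 × $68 = $170 as a single sample, and around 0.95 × $68 ≈ $64.60 per unit in 10k quantities.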

Variscite DART-MX8M is a Compact NXP i.MX 8M System-on-Module

January 12th, 2018

NXP has recently launched their i.MX 8M evaluation kit and released documentation, so we can expect multiple products based on the family in 2018. The new NXP i.MX 64-bit processors include three families – i.MX 8, i.MX 8X, and i.MX 8M – but so far it looks like many companies are launching products based on the last of the three.

The Embedded World Conference 2018 at the end of February should be the occasion for many product launches, especially systems-on-module and related development kits, but several companies have already posted information about their i.MX 8(M) modules minus pricing. One of those is Variscite DART-MX8M, a compact (55x30mm) module with i.MX 8M processor, up to 4GB LPDDR4, up to 64GB eMMC flash, as well as 802.11ac WiFi and Bluetooth 4.2.

Variscite DART-MX8M specifications:

  • SoC – NXP i.MX8M with dual or quad core Cortex-A53 processor @ up to 1.5 GHz, Cortex-M4 real-time core @ 266 MHz, and Vivante GC7000Lite 2D/3D graphics accelerator
  • System Memory – 1 – 4 GB LPDDR4
  • Storage – 4 – 64 GB eMMC flash, 4K I2C EEPROM
  • Connectivity – On-module Wi-Fi 802.11 ac/a/b/g/n & Bluetooth 4.2 LE (via Sterling LWB5), and Qualcomm Atheros AR8031 Gigabit Ethernet transceiver
  • Video Acceleration – Up to 4K HEVC/H265, H264, VP9 Decode plus HDR
  • Audio – Audio codec on-module
  • 3x 90-pin board-to-board connectors with:
    • Video Inputs – 2x MIPI-CSI2 (4-Lane, each)
    • Display
      • HDMI 2.0 up to 4Kp60
      • Display Port eDP1.4/DP1.3 up to 4Kp60
      • MIPI-DSI 1080p60
      • Dual channel LVDS up to 1920×1080 @ 60fps
    • Networking – 10/100/1000 Mbps Ethernet
    • Audio
      • Analog & digital microphone I/F
      • Up to 5x I2S (SAI), S/PDIF
      • Line in
    • 1x SD/SDIO/MMC
    • 2x USB 3.0/2.0 OTG
    • 4x UART up to 4 Mbps
    • 3x I2C, 3x SPI, 2x QSPI
    • RTC (on carrier)
    • 2 x PCIe 2.0 (1-Lane, each)
  • Supply voltage – 3.4 – 4.5 V
  • Digital I/O voltage – 3.3 V
  • Dimensions – 30.0 mm x 55.0 mm x 4.7 mm
  • Temperature range – Commercial: 0 to 70°C; Industrial: -40 to 85°C


The company will provide Linux and Android BSPs for the module, but Windows Embedded Compact will not be supported. DART-MX8M kits will also be offered to speed up the early stages of development, with a module, and with or without a 7″ WVGA capacitive touch display (VAR-DVK-MX8M / VAR-STK-MX8M respectively)…

VAR-DVK-MX8M Kit

and VAR-DT8MCustomBoard carrier board with the following specifications:


  • SoM Interface – B2B socket for DART-MX8M module
  • Storage – SD Card Socket
  • Display
    • HDMI 2.0a
    • DP 1.3
    • 18-bit / 24-bit LVDS connector
    • Backlight Driver (PWM Control)
  • Touch Panel
    • 4-wire resistive touch panel (4-pin FFC/FPC)
    • capacitive touch panel (6-pin FFC/FPC)
  • Audio
    • Headphone – 3.5 mm connector
    • Line in – 3.5 mm connector
    • On-board digital microphone
  • USB – 2x USB3.0/2.0 ports, 1x USB3.0/2.0 type C connector
  • Network – Ethernet 10/100/1000 Mbps, RJ45
  • Camera interfaces –  Serial Camera
  • Serial Ports
    • USB to serial bridge via Micro USB port
    • FTDI Header for debugging
    • 2x RS232 header
  • Expansion
    • 2x mini PCIe connectors
    • Headers with QSPI, UART, SPI, I2C, GPIOs, JTAG, SAI, S/PDIF
  • Misc – RTC Backup battery socket (CR1225), buttons, LEDs
  • Power Supply – 5V DC input, 2.5 mm DC jack
  • Dimensions – 15 cm x 9 cm x 2.9 cm


The kits also come with a micro USB cable, an optional Ethernet cable, an optional 5V power supply, an antenna, a boot/rescue SD card, and a carrier board design package. A camera module may also be offered in the future.

The module is shown as “coming soon”, and we don’t have pricing information yet. More details may be found on the module page, where you’ll also find details about the kits and carrier board (Supporting Products tab). The company is also working on VAR-SOM-MX8 SoM based on NXP i.MX 8 Cortex A72/A53 processor, but fewer details are available for now.

Develop NXP i.MX 8M Voice Controlled Smart Devices with MCIMX8M-EVK Evaluation Kit

January 11th, 2018

We first heard about NXP i.MX 8M processors in October 2016, and at the end of last year, WandPi 8M development board was unveiled with shipping scheduled for Q2 2018, once the processor enters production. Other exciting i.MX 8M projects include Purism Librem 5 smartphone, and MNT Reform DIY modular computer, and I’m sure there will be other development boards & products, and plenty of systems-on-module introduced with the processor in 2018.

NXP i.MX 8M processor also got in the news at CES 2018, because it will be one of the hardware platforms certified for Android Things, and NXP also issued a press release to announce that the processor’s multimedia capabilities will be used in voice controlled devices with or without video.

NXP i.MX 8M Block Diagram

The PR refers to Gartner Research saying that “voice commands will dominate 50 percent of all searches in the next two years”, and explains that with thinner and thinner TVs, sound bars and smart speakers will become more popular and integrate features such as voice control, home automation, and so on, which can be served by the i.MX 8M family of applications processors. The company also expects the processors to be found in lighting, thermostats, door locks, home security, smart sprinklers, and other smart home systems and devices. One of the main purposes of the press release was to say “come see demos at our CES 2018 booth”, including:

  • i.MX 8M hardware that will be driving voice, video, and audio all at the same time, while also displaying 4K HDR, dual screen and immersive audio capabilities.
  • Android Things demos of drawing robots (drawbots) that use on-device processing power to sketch attendee selfies in real-time, and Manny, a Things-powered robotic hand (handbot) that uses TensorFlow plus computer vision to mirror hand gestures and play games.
  • An Alexa solution with leading features such as display support, multi-room audio and integrated talk-to-call.

NXP i.MX 8M Evaluation Kit


Since the processor is still new, many of those demos will be implemented with the company’s MCIMX8M-EVK evaluation kit with the following features:

  • Processor – NXP i.MX 8M Quad (MIMX8MQ6DVAJZAA) quad core Cortex A53 applications processor, 1x Cortex-M4F real-time core, Vivante GC7000L GPU
  • System Memory – 3 GB LPDDR4
  • Storage – 16GB eMMC 5.0 flash, 32MB SPI NOR flash, micro SD card connector
  • Display interface – HDMI 2.0a Connector, DSI interface via Mini-SAS connector
  • Audio connectors – 3.5 mm stereo headphone output
  • Camera – CSI interface via Mini-SAS connectors
  • Connectivity
    • Gigabit Ethernet via RJ45 connector
    • 1x on-board 802.11ac WiFi/Bluetooth 4.2 module
    • 1x M.2 slot (KEY-E type)
  • USB – 1x USB 2.0/3.0 type C connector, 1x USB 2.0/3.0 host connector
  • Expansion Port – FPC connector (SAI ports)
  • Debug connectors – JTAG (10-PIN header), MicroUSB for UART debug
  • Misc – ON/OFF & RESET buttons; power status & UART LEDs
  • Power – NXP PMIC PF4210 + Discrete DCDC/LDO
  • Dimensions – 10 x 10 cm; 10-layer PCB

MCIMX8M-EVK Block Diagram

The board ships with a USB cable, a 12V/5.0A (!) power supply, and a quick start guide. If you plan to use audio input, you may need to add an audio card via the SAI/I2C expansion port. The company has also released a whole bunch of documents, hardware design files, SDKs, BSPs, MQX RTOS, and software tools right before CES 2018, which you can find on the evaluation kit and processor pages. The evaluation kit is sold for $449.00.

Arduino & Grove Compatible StitchKit Mixes Fashion & Technology (Crowdfunding)

December 16th, 2017

I don’t really get fashion. For example, I don’t understand why somebody would spend $100 on a pair of “Jean-Patrick Coultier” trousers, while you could get pretty much the same for about $20. My clothes just need to keep me warm and comfortable. And now I can see people starting to attach blinking lights to their clothes. Heresy!!!

But others have a different opinion, and people interested in fashion may not be interested in electronics, but still want those shiny things on their clothes. StitchKit is an Arduino compatible board that can also take Seeed Studio Grove modules, designed for kids, teachers, designers, and cosplayers who want to easily add LEDs and other electronics to clothes or other wearable pieces without having to dig into the technical details.

The system works around MakeFashion board, powered by an Arduino compatible Microchip / Atmel ATmega32U4 AVR micro-controller, with two rows of 6-pin headers, and holes to lock the wires to modules. The board will be powered through a USB type C port (the photos look like micro USB), usually via a power bank for this type of application. You can then connect Grove modules like an RGB LED strip, and other sensor modules. It’s unclear whether they use the Arduino IDE for programming, or a simple-to-use visual programming tool, but sample code is coming soon, and instructions are included in the various kits. The next step is to fit the electronics under or on your clothes, and you could end up with results as shown below.

The project has launched on Kickstarter with a funding goal of $10,000 CAD, and nearly $8,000 CAD raised so far with 42 days to go. Rewards start with the $49 CAD (~$38 US) FashionTech Starter kit including MakeFashion board, a full color LED pixel string, a Grove button and connector, a plastic case, a USB type C cable, and instructions. You could go up to $199 CAD (~$155 US) for the FashionTech Creator kit with everything from the Starter kit, plus a one meter sewable RGB LED strip, RGB LED halo rings, a 5 meter waterproof RGB LED strip with 300 LEDs, more Grove modules (vibration, light sensor, loudness sensors…), 50cm long Grove cables, and more. Bundles for education and collectors are also offered. Rewards are expected to ship in April 2018. More details may also be found on the StitchKit.io website.


Amazon FreeRTOS Released for NXP, Texas Instruments, STMicro, and (soon) Microchip Microcontrollers

December 2nd, 2017

FreeRTOS is an open source real-time operating system for microcontrollers released under an MIT license, and when it comes to adoption in embedded systems it’s right there near the top with embedded Linux, according to the Aspencore 2017 embedded markets study. For example, some Espressif SDKs for ESP8266 or ESP32 are based on FreeRTOS, and so is Mediatek LinkIt Development Platform for RTOS.

The recently announced Amazon FreeRTOS (a:FreeRTOS) leverages the open source operating system, and extends it with libraries that enable local and AWS cloud connectivity, security, and soon over-the-air updates. a:FreeRTOS is free of charge, open source, and available today.


In order to get started, you’ll have a choice of 4 hardware platforms:

  • STMicro STM32L4 Discovery Kit IoT Node (B-L475E-IOT01A) powered by STM32L475 ARM Cortex-M4 MCU with 802.11 b/g/n WiFi, Bluetooth 4.1 LE, RF (868 / 915 MHz), and NFC connectivity, plus plenty of sensors


  • Texas Instruments SimpleLink Wi-Fi CC3220SF LaunchPad development kit (CC3220SF-LAUNCHXL) with CC3220SF single-chip WiFi microcontroller (MCU) with 1MB flash and 256KB of RAM.


  • Microchip Curiosity PIC32MZ EF Development Board (Amazon FreeRTOS support coming soon) powered by PIC32MZ EF MCU (415 DMIPS) with 2 MB Flash, 512 KB RAM, integrated FPU, crypto accelerator, and connectivity via an on-board 802.11 b/g/n Wi-Fi module, and two MikroBUS connectors for add-on boards.

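  • NXP LPC54018 IoT Module powered by LPC54018 Arm Cortex-M4 MCU with on-module 802.11b/g/n WiFi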

If you don’t own any of those boards, or don’t plan to purchase one, but would still like to play with a:FreeRTOS, you can run the Windows Simulator instead.

Once we’ve selected our hardware platform (or the simulator), we can access the Amazon FreeRTOS console to configure and download the FreeRTOS kernel and software libraries for our application. Development of the application is then done through the tools provided for the board, for example TI Code Composer Studio, STM32 System Workbench, IAR Embedded Workbench, or Visual Studio Community Edition.


Amazon FreeRTOS is free as in speech and free as in beer, with the source code and links to documentation available on GitHub. Amazon will make money when you utilize AWS services such as AWS IoT Core, data transfer, or AWS Greengrass. The price list of AWS services that may be charged (if enabled) while using Amazon FreeRTOS can be found here.
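If you’d rather browse the source before signing up for anything, cloning the repository is enough to poke around – assuming it is still hosted under the aws/amazon-freertos name it launched with:

```
# grab the Amazon FreeRTOS source, demos included
git clone https://github.com/aws/amazon-freertos.git
```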

$45 AIY Vision Kit Adds Accelerated Computer Vision to Raspberry Pi Zero W Board

December 1st, 2017

AIY Projects is an initiative launched by Google that aims to bring do-it-yourself artificial intelligence to the maker community by providing affordable development kits to get started with the technology. The first project was AIY Projects Voice Kit, which basically transformed Raspberry Pi 3 board into a Google Home device by adding the necessary hardware to support Google Assistant SDK, and an enclosure.

The company has now launched another maker kit with AIY Projects Vision Kit, which adds a HAT board powered by Intel/Movidius Myriad 2 VPU to Raspberry Pi Zero W, in order to accelerate image & object recognition using TensorFlow machine learning models.


The kit includes the following items:

  • Vision Bonnet accessory board powered by Myriad 2 VPU (MA2450)
  • 2x 11mm plastic standoffs
  • 24mm RGB arcade button and nut
  • 1x Privacy LED
  • 1x LED bezel
  • 1x 1/4-20 flanged nut
  • Lens, lens washer, and lens magnet
  • 50 mil ribbon cable
  • Pi0 camera flat flex cable
  • MIPI flat flex cable
  • Piezo buzzer
  • External cardboard box and internal cardboard frame

Vision Bonnet Board

Note that the accessory board features the same Movidius VPU as Intel Neural Compute Stick, which has been used with Raspberry Pi 3, and shown to deliver about 3 times the performance of a GPGPU implementation leveraging the VideoCore IV GPU.

Back to the kit. You’ll need to add your own Raspberry Pi Zero W, Raspberry Pi camera 2, and a blank micro SD card (at least 4 GB) to complete the kit. Follow the assembly guide, and the final result should look like this:


Once this is done, flash the Vision Kit SD image (available soon) to your micro SD card, insert it into your Raspberry Pi Zero W, and connect the power. The software image will include three neural network models:

  • A model based on MobileNets that can recognize a thousand common objects.
  • A model for face detection capable of detecting faces and facial expressions (sadness, joy, etc…)
  • A model for discerning between cats, dogs and people.

The system will be able to run at speeds of up to 30 fps, providing near real-time performance. TensorFlow code and a compiler will also be included for people wanting to build their own models. A Python API will be provided to customize the RGB button colors, piezo element sounds, and (4x) GPIO pins.

AIY Vision Kit is up for pre-order for $44.99 at Micro Center with shipping planned for early December. Just like AIY Voice Kit, we should eventually expect international availability via other websites such as Pimoroni or Seeed Studio. The complete kit with RPi board, camera, and accessories should cost around $90.

JeVois-A33 Linux Computer Vision Camera Review – Part 2: Setup, Guided Tour, Documentation & Customization

November 22nd, 2017

Computer Vision, Artificial Intelligence, Machine Learning, etc. are all terms we hear frequently these days. JeVois-A33 smart machine vision camera powered by Allwinner A33 quad core processor was launched last year on Indiegogo to bring such capabilities to a low power, small form factor device, for example for use in robotics projects.

The company has improved the software since the launch of the project, and has now sent me their tiny Linux camera developer kit for review. I already checked out the hardware and accessories in the first post, and I’ve now had time to test the camera, so I’ll explain how to set it up, test some of the key features via the provided guided tour, and show how it’s possible to customize the camera to your needs with one example.

Getting Started with JeVois-A33

In theory, you could just get started by inserting the micro SD card provided with the camera, connecting it to your computer via the USB cable, and following the other instructions on the website. But to make sure you have the latest features and bug fixes, you’d better download the latest firmware (jevois-image-latest-8G.zip), and flash it to the micro SD card with the multi-platform Etcher tool.
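Etcher is the foolproof option, but if you prefer the command line, something like this should also do the job – assuming the zip extracts to a .img file of the same name, and that /dev/sdX is your micro SD card (double-check with lsblk first, as dd will happily overwrite the wrong disk):

```
# extract the firmware image
unzip jevois-image-latest-8G.zip
# write it to the micro SD card -- replace /dev/sdX with your actual card!
sudo dd if=jevois-image-latest-8G.img of=/dev/sdX bs=4M status=progress
sync
```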

You could also use your own micro SD card, as long as it has 8GB or more capacity. Once this is done, insert the micro SD card into the camera with the fan of the camera and the golden contacts of the micro SD card both facing upwards. Connect the camera to your computer with the provided mini USB to USB cable. I also added the USB power meter to monitor the power consumption for the different use cases, and the USB serial cable to check output from the console. At least that was the plan, but I got no lights from the camera, and the voltage was reported to be only 4V. Then I read the guide a little better, and found out I had to use a USB 3.0 port, or two USB 2.0 ports for power.

Once I switched to using two USB 2.0 ports from a powered USB 2.0 hub, I could see output from the serial console…

and both green and orange/red LEDs were lit. The instructions to use JeVois camera are mostly OS agnostic, except for the video capture software. If you are using Windows you can use the free OBS Studio or AMCap programs, and on Mac, select either Photo Booth or OBS Studio. I’m an Ubuntu user, so instead I installed guvcview:
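On Ubuntu, guvcview comes straight from the standard repositories:

```
sudo apt-get install guvcview
```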

and ran it using the 640×360 resolution and YUYV format as instructed in the getting started guide, with a command along these lines (from memory – option names vary between guvcview versions, and the same settings can also be selected from the GUI):
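```
# request 640x360 @ YUYV from the JeVois video device
# (option names differ across guvcview versions -- use the GUI if in doubt)
guvcview -f YUYV -x 640x360
```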

But then I got no output at all in the app:

The last line above would repeat in a loop. The kernel log (dmesg) also reported a crash linked to guvcview:

Another person had the same problem a few months ago, and it was suggested it may be a USB problem. So I connected the camera directly to two of the USB ports on my tower, and it worked…


The important part of the settings is in the Video Controls tab, where we can change the resolution and frame rate to switch between camera modes, as we’ll see later on.

But since my tower is under the desk, the USB cable is a bit too short, and the program crashed with the same error message a few minutes later. So I went with my Ubuntu 16.04 laptop instead. Powering the camera via the USB 3.0 port worked until I started deep learning modes, where the camera would stop, causing guvcview to gray out. Finally, I connected the camera to both my USB 3.0 port and the power bank part of the kit, and the system was then much more stable.


I contacted the company about the issues I had, but they replied this problem was not often reported:

… we have only received very few reports like that but we were able to confirm here using front panel ports on one machine. On my desktop I have a hub too, but usb3 and rated for fast charging (60W power supply for 7+2 ports) and it works ok with jevois. A single usb3 port on my mac laptop is also ok.

So maybe it’s just me with all my cheap devices and accessories…

So three main points to get started:

  1. Update the firmware
  2. Install the camera software
  3. Check power in case of issues / crashes (Both LEDs should be on if the camera is working)

JeVois-A33 Guided Tour

Now that we have the camera running, we can try the different features, and the best way to do so is to download the JeVois Guided Tour (PDF) that will give you an overview of the camera and how it works, as well as examples.


As shown above, the PDF includes information for each module: the name, a link to the documentation, an introduction, an explanation of the display, and, on the top right, the resolution/framerate that can be used to launch a given module. The following pages contain example pictures that you can point the camera at.

Some of the modules include:

  • Visual attention – finding interesting things
  • Face and handwritten digit recognition
  • QR-codes and other tags
  • Road detection
  • Object matching
  • Object recognition with deep neural networks
  • Color-based object tracking
  • Moving object detection
  • Record video to the microSD card inside JeVois
  • Motion flow detection
  • Eye tracking
  • and more…

You could print the guide with a color printer, but the easiest way is probably to use two screens, one with the PDF guide open, and the other running the camera application (guvcview, OBS Studio…). I’ve gone through some of the examples in the guided tour in the video below, with the PDF shown on a TV box, and the camera application output shown on the laptop screen.

That’s a lot of fun, and everything works pretty well most of the time. Some of the tests are quite demanding for such a low power device; for example, the Darknet based “Deep neural scene analysis” using 1280×480 @ 15 fps with the ability to recognize multiple object types would only refresh the results every 2.7 seconds or so.

Documentation & Customization of Salient SURF Module

If you’ve gone through the guided tour, you should now have a good understanding of what the camera is capable of. So now, let’s take one of the modules, and try to adjust it to our needs. I picked the SaliencySURF module, with the documentation available here, for this section of the review. Introduction for the module:

Trained by default on blue iLab logo, point JeVois to it and adjust distance so it fits in an attention box.
Can easily add training images by just copying them to microSD card.
Can tune number and size of salient regions, can save regions to microSD to create a training set

So let’s take a few other images (Tux logos), copy them to the micro SD card in the camera, and tune some of the settings.

Ideally the camera should also be detected as a storage device, so that we can easily copy files and edit parameters, and on my computer it was shown as a UVC camera, a USB ACM device, and a USB storage device when I connected it:

But for some reason, I could not see the /dev/sdb storage after that:

[Update: We can use the jevois-usbsd script to access the camera storage from the host computer / board. If memory serves, usage boils down to:
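```
# export the microSD inside JeVois to the host as a USB drive
jevois-usbsd start
# release the card and restart the camera
jevois-usbsd stop
```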

]

So instead I had to take out the micro SD card from the camera, and copy the files to the /modules/JeVois/SaliencySURF/images/ directory in the JEVOIS partition.
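With the card in a reader, that’s a simple copy – assuming your desktop mounts the partition under /media/$USER/JEVOIS (mount points vary by distribution), and with tux1.png to tux3.png standing in for my three logo images:

```
# copy the new training images to SaliencySURF's images directory
cp tux1.png tux2.png tux3.png /media/$USER/JEVOIS/modules/JeVois/SaliencySURF/images/
sync
```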

The module will process those photos when we start it, and return the name of the file when a match is detected.

We can then go back to the SaliencySURF directory to edit the params.cfg file, and change some parameters to determine how strict a match should be, keeping in mind that stricter matching may mean the object is not detected, while looser matching may bring false positives. But this is where it gets a little more complicated, as we’ll see from a subset of the list of parameters.


I cannot understand what half of the parameters are supposed to do. That’s where you can click on the SaliencySURF / Saliency links to access the base documentation, find out how the module works, learn more about each parameter, and easily access the source code for the functions used by the module. That type of documentation is available for all modules used in the JeVois C++ framework, and it’s a very good learning tool for people wanting to know more about computer vision. You’ll have to be familiar with C++ to understand the code and what it really does, besides learning the jargon and acronyms specific to computer vision or machine learning.

By default, the params.cfg file includes just two lines, which look something like this (from memory – treat the exact values as illustrative, and check your own file):
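```
goodpts = 15 ... 100
distthresh = 0.4
```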

Those are the parameters for the ObjectMatcher module, with goodpts corresponding to the number range of good matches considered, and distthresh being the maximum distance for a match to be considered good.

I’ve set looser settings in params.cfg:
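Again from memory, it was something along these lines – a wider goodpts range and a larger distance threshold than the defaults:

```
# looser matching: accept fewer good points and a larger descriptor distance
goodpts = 5 ... 100
distthresh = 0.5
```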

I saved the file, put the micro SD card back into the camera, and launched guvcview with 320×288 @ 30 fps resolution/framerate to enter SaliencySURF mode.


Oops, it’s seeing Tux logos everywhere, even when there are none whatsoever, so our settings are clearly too loose. So I went back to the default settings, but the results were still similar, and since the distance was shown to be 0.30 in my first attempt, I reduced distthresh to 0.2. False positives are now mostly gone, except for very short periods of time, and it’s now detecting the CNX Tux logo accurately. Note that the green square is for object detection, and the white squares are for saliency zones.

However, it struggles to detect my third Tux logo repeatedly, often falling back to the CNX Tux logo.

But as you can see from the green square, the detection was done on the left flap of the penguin. That’s because SaliencySURF detection is done in a fixed size zone (64×64 pixels by default), so camera distance, and the size of the zone, matter. You can change the size of the salient regions with the SaliencySURF rsiz parameter, which defines the height and width of the square in pixels. When I did the test, I first tried to detect the logo from a list of Tux images returned by a DuckDuckGo search, but it was too small and blurry. After switching to a bigger photo, the cable was too short to cover the logo, so instead I copied it to GIMP and resized it so that it could fit in the 64×64 square while using the camera, and in this case detection worked reasonably well.
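Growing the salient regions should just be one more line in params.cfg – assuming rsiz takes the same key = value form as the parameters above:

```
# illustrative: use 100x100 pixel salient regions instead of the default 64x64
rsiz = 100
```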

The more you use the camera, the better you’ll understand how it works, and how to leverage its capabilities.

Final Words

JeVois-A33 camera is an inexpensive way to get started with computer vision and deep learning, with excellent documentation, and if you put in the effort, you’ll even understand how it works at the source code level. It’s also fun to use, with many different modules to try. I have not tried it in this review due to time limitations, but you could also connect the camera to an Arduino board controlling a robot (cat chasing bot anyone?) via the serial interface.

The main challenges you may face while getting started are:

  1. Potential crashes due to power issues, but that’s solvable, and a power issues troubleshooting guide has even been published
  2. For robotics projects, you have to keep in mind there will be some lag with some modules, for example from 500ms (single object) to 3 seconds (YOLO test with multiple object types) for deep learning algorithms. Other modules, such as ArUco marker detection, are close to real-time performance however.

Bear in mind all processing is done by the Allwinner A33 CPU cores, as the Mali-400MP GPU is not suitable for GPGPU. As more affordable SoCs with OpenCL/Vulkan capable GPUs (e.g. Mali-T720), and in some cases even an NNA (Neural Network Accelerator), are launched, we’ll be able to get similar low power smart cameras, but with much better computer vision performance.

JeVois-A33 can be purchased for $49, but to avoid wasting time with power issues, and to give yourself more options, I’d recommend going with the JeVois-A33 Developer/Robotics Kit reviewed here, which sells for $99.99 on Amazon, RobotShop, or JeVois Store.

Intel Speech Enabling Developer Kit Works with Alexa Voice Service, Raspberry Pi 3 Board

October 28th, 2017

We’ve known Intel has been working on Quark S1000 “Sue Creek” processor for voice recognition for several months. S1000 SoC is based on two Tensilica LX6 cores with HiFi3 DSP, some speech recognition accelerators, and up to 8x microphone interfaces, which allow it to perform speech recognition locally. The solution can also be hooked to an application processor via SPI, I2S, and USB (optional) when cloud based voice recognition is needed.

Intel has recently introduced their Speech Enabling Developer Kit working with Amazon Alexa Voice Service (AVS) featuring a “dual DSP with inference engine” – which must be Quark S1000 – and an 8-mic array. The kit also includes a 40-pin cable to connect to the Raspberry Pi 3 board.


Intel only provided basic specifications for the kit:

  • Intel’s dual DSP with inference engine
  • Intel 8-mic circular array
  • High-performance algorithms for acoustic echo cancellation, noise reduction, beamforming and custom wake word engine tuned to “Alexa”
  • 6x Washers
  • 3x 6mm screws
  • 3x 40mm female-female standoffs
  • Raspberry Pi connector cable

I could not find detailed information to get started, except for the assembly guide shown in the video below. We do know that the kit will work with Amazon Alexa, and requires a few extra bits, namely a Raspberry Pi 3 board, an Ethernet cable, an HDMI cable and monitor, USB keyboard and mouse, an external speaker, a micro USB power supply (at least 5V/1A), and a micro SD card.

The video also points to Intel’s Smart Home page for more details about software, but again I could not find instructions or guides there, except links to register for a developer workshop at Amazon Re:Invent in Las Vegas on November 30, 2017.

Intel Speech Enabling Developer Kit can be pre-ordered for $399 directly on Intel’s website, with shipping planned for the end of November. The product is also listed on Amazon Developer page, but again with little specific information about the hardware and how to use it. One can assume the workflow should be similar to other AVS devkits.

Thanks to Mustafa for the tip.