Archive

Posts Tagged ‘robotics’

Quokka IoT FPGA Board is Programmable with C# Language (Crowdfunding)

January 12th, 2018 1 comment

Quokka IoT (preliminary) hardware specifications:

  • FPGA – Intel Altera Cyclone IV, 6K logic elements, EP4CE6E22C8
  • Clock – 50MHz
  • Connectivity – WiFi via WiPy module
  • Expansion
    • 40x GPIO (3 banks by 8 pins, with direction and voltage (3.3V or 5V) configuration, 16 raw IO pins 3.3V)
    • 2x Dual Channel 10 bit ADC (3.3V)
    • 2x Dual Channel 10 bit DAC (3.3V)
    • H-Bridge for DC motors with support for external power
  • Power Supply – 5-24V DC input

The specifications are preliminary, because the FPGA may be replaced by one with more logic cells (e.g. 20K) depending on the popularity of the project. Drivers are available for each hardware component on the board, including ADC and DAC drivers, UART, a JSON serializer/deserializer, and much more.
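
The QDT drivers themselves are written in C#, but the scaling behind those 10-bit / 3.3V converters is worth spelling out. Here is a generic sketch in Python — the function name and the assumption that the full 0–1023 code range maps to the 3.3V reference are mine, not the Quokka API:

```python
def adc_to_volts(raw, vref=3.3, bits=10):
    """Convert a raw ADC reading to volts for an ideal ADC.

    A 10-bit converter returns codes 0..1023, so full scale (1023)
    corresponds to the 3.3V reference on the analog pins.
    """
    full_scale = (1 << bits) - 1
    if not 0 <= raw <= full_scale:
        raise ValueError("raw reading out of range")
    return raw * vref / full_scale

print(adc_to_volts(512))  # mid-scale, about 1.65V
```

The same formula, inverted, gives the code to write to the 10-bit DACs for a target output voltage.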

As mentioned in the introduction, C# programming is possible with QDT, and it’s not limited to Quokka IoT board, so you should be able to use it with other FPGA boards, although a license may be required as we’ll see below.

You can watch a short demo of the board in action while attached to a robotic chassis.

The project has launched on Kickstarter with a 30,000 AUD funding goal (~$23,600 US). Rewards start at 150 AUD (~$118 US) for the Quokka IoT board only, but if you want to use the board with QDT, you'd need to add 50 AUD for a total of 200 AUD (~$158 US). Shipping adds 25 AUD (~$19.7 US), and delivery is scheduled for May 2018.

Thanks to TLS for the tip

JeVois-A33 Linux Computer Vision Camera Review – Part 2: Setup, Guided Tour, Documentation & Customization

November 22nd, 2017 4 comments

Computer Vision, Artificial Intelligence, Machine Learning, etc. are all terms we hear frequently these days. JeVois-A33 smart machine vision camera, powered by an Allwinner A33 quad core processor, was launched last year on Indiegogo to bring such capabilities to low power, small form factor devices, for example for use in robotics projects.

The company has improved the software since the launch of the project, and has now sent me their tiny Linux camera developer kit for review. I already checked out the hardware and accessories in the first post. I've now had time to test the camera, so I'll explain how to set it up, test some of the key features via the provided guided tour, and show how it's possible to customize the camera to your needs with one example.

Getting Started with JeVois-A33

In theory, you could just get started by inserting the micro SD card provided with the camera, connecting it to your computer via the USB cable, and following the other instructions on the website. But to make sure you have the latest features and bug fixes, you'd better download the latest firmware (jevois-image-latest-8G.zip), and flash it to the micro SD card with the multi-platform Etcher tool.

You could also use your own micro SD card, as long as it has 8GB or more capacity. Once this is done, insert the micro SD card into the camera with the fan of the camera and the golden contacts of the micro SD card both facing upwards. Connect the camera to your computer with the provided mini USB to USB cable. I also added the USB power meter to monitor the power consumption for the different use cases, and the USB serial cable to check output from the console. At least that was the plan, but I got no lights from the camera, and the voltage was reported to be only 4V. Then I read the guide a little more carefully, and found out I had to use a USB 3.0 port, or two USB 2.0 ports for power.

Once I switched to using two USB 2.0 ports from a powered USB 2.0 hub, I could see output from the serial console…

and both green and orange/red LEDs were lit. The instructions to use the JeVois camera are mostly OS agnostic, except for the video capture software. If you are using Windows you can use the free OBS Studio or AMCap programs, and on Mac, select either PhotoBooth or OBS Studio. I'm an Ubuntu user, so instead I installed guvcview:

and ran it using 640×360 resolution and YUYV format as instructed in the getting started guide:

But then I got no output at all in the app:

The last line above would repeat in a loop. The kernel log (dmesg) also reported a crash linked to guvcview:

Another person had the same problem a few months ago, and it was suggested it might be a USB problem. So I connected the camera directly to two of the USB ports on my tower, and it worked…

Click to Enlarge

The important part of the settings is in the Video Controls tab, where we can change the resolution and frame rate to switch between camera modes, as we'll see later on.

But since my tower is under the desk, the USB cable is a bit too short, and the program crashed with the same error message a few minutes later. So I went with my Ubuntu 16.04 laptop instead. Powering the camera via the USB 3.0 port worked until I started the deep learning modes, where the camera would stop, causing guvcview to gray out. Finally, I connected the camera to both my USB 3.0 port and the power bank part of the kit, and the system was then much more stable.

Click to Enlarge

I contacted the company about the issues I had, but they replied that this problem was not often reported:

… we have only received very few reports like that but we were able to confirm here using front panel ports on one machine. On my desktop I have a hub too, but usb3 and rated for fast charging (60W power supply for 7+2 ports) and it works ok with jevois. A single usb3 port on my mac laptop is also ok.

So maybe it’s just me with all my cheap devices and accessories…

So three main points to get started:

  1. Update the firmware
  2. Install the camera software
  3. Check power in case of issues / crashes (Both LEDs should be on if the camera is working)

JeVois-A33 Guided Tour

Now that we have the camera running, we can try the different features, and the best way to do so is to download the JeVois Guided Tour (PDF), which will give you an overview of the camera and how it works, as well as examples.

Click to Enlarge

As shown above, the PDF includes information for each module: the name, a link to the documentation, an introduction, an explanation of the display, and on the top right the resolution/framerate that can be used to launch a given module. On the following pages, there are example pictures that you can point the camera at.

Some of the modules include:

  • Visual attention – finding interesting things
  • Face and handwritten digit recognition
  • QR-codes and other tags
  • Road detection
  • Object matching
  • Object recognition with deep neural networks
  • Color-based object tracking
  • Moving object detection
  • Record video to the microSD card inside JeVois
  • Motion flow detection
  • Eye tracking
  • and more…

You could print the guide with a color printer, but the easiest way is probably to use two screens, one with the PDF guide open, and the other running the camera application (guvcview, OBS Studio…). I've gone through some of the examples in the guided tour in the video below, with the PDF shown on a TV box, and the camera application output shown on the laptop screen.

That’s a lot of fun, and everything works pretty well most of the time. Some of the tests are quite demanding for such a low power device: for example, the Darknet based “Deep neural scene analysis” mode, using 1280×480 @ 15 fps with the ability to recognize multiple object types, would only refresh its results every 2.7 seconds or so.

Documentation & Customization of Salient SURF Module

If you’ve gone through the guided tour, you should now have a good understanding of what the camera is capable of. So now, let’s take one of the modules, and try to adjust it to our needs. I picked the SaliencySURF module, with documentation available here, for this section of the review. Introduction for the module:

Trained by default on blue iLab logo, point JeVois to it and adjust distance so it fits in an attention box.
Can easily add training images by just copying them to microSD card.
Can tune number and size of salient regions, can save regions to microSD to create a training set

So let’s take a few other images (Tux logos), copy them to the micro SD card in the camera, and tune some of the settings.

Ideally the camera should also be detected as a storage device, so that we can easily copy files and edit parameters. On my computer it was shown as a UVC camera, a USB ACM device, and a USB storage device when I connected it:

But for some reason, I could not see the /dev/sdb storage after that:

[Update: We can use the jevois-usbsd script to access the camera storage from the host computer / board:

]

So instead I had to take the micro SD card out of the camera, and copy the files to the /modules/JeVois/SaliencySURF/images/ directory in the JEVOIS partition.
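
Sketched in Python, the copy step is just a filtered file copy — the mount point in the example call below is hypothetical (it depends on where your system mounts the JEVOIS partition), while the images directory path is the one given above:

```python
import shutil
from pathlib import Path

def copy_training_images(src_dir, dest_dir):
    """Copy PNG/JPG training images into the module's images folder,
    returning the list of copied file names."""
    dest_dir = Path(dest_dir)
    dest_dir.mkdir(parents=True, exist_ok=True)
    copied = []
    for img in sorted(Path(src_dir).iterdir()):
        if img.suffix.lower() in (".png", ".jpg", ".jpeg"):
            shutil.copy(img, dest_dir / img.name)
            copied.append(img.name)
    return copied

# Hypothetical mount point of the JEVOIS partition:
# copy_training_images("tux-logos",
#     "/media/user/JEVOIS/modules/JeVois/SaliencySURF/images")
```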

The module will process those photos when we start it, and return the name of the matching file when an object is detected.

We can go back to the SaliencySURF directory to edit the params.cfg file, and change some parameters to determine how strict a match should be, keeping in mind that stricter matching may mean the object is not detected, and looser matching that we get some false positives. But this is where it gets a little more complicated, as we’ll see from a subset of the list of parameters.

Click to Enlarge

I cannot understand what half of the parameters are supposed to do. That’s where you can click on the SaliencySURF / Saliency links to access the base documentation, find out how the module works, learn more about each parameter, and easily access the source code for the functions used by the module. That type of documentation is available for all modules used in the JeVois C++ framework, and it’s a very good learning tool for people wanting to know more about computer vision. You’ll have to be familiar with C++ to understand the code and what it really does, besides learning the jargon and acronyms specific to computer vision or machine learning.

By default, the params.cfg file includes just two lines:

Those are the parameters for the ObjectMatcher module, with goodpts corresponding to the accepted range for the number of good matches, and distthresh being the maximum distance for a match to be considered good.
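
To make that concrete — and to be clear, these values are illustrative rather than the shipped defaults, and the range syntax is my reading of how the parameters are described — a params.cfg constraining both parameters could look like:

```
goodpts = 15 ... 100    # accept between 15 and 100 good keypoint matches
distthresh = 0.4        # maximum descriptor distance for a good match
```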

I’ve set looser settings in params.cfg:

I saved the file, put the micro SD card back into the camera, and launched guvcview with 320×288 @ 30 fps resolution/framerate to enter SaliencySURF mode.

Click to Enlarge

Oops, it’s seeing Tux logos everywhere, even where there are none whatsoever, so our settings are clearly too loose. So I went back to the default settings, but the results were still similar, and since the distance was shown to be 0.30 in my first attempt, I reduced distthresh to 0.2. False positives are now mostly gone, except for very short periods of time, and it’s now detecting the CNX Tux logo accurately. Note that the green square is for object detection, and the white squares are for saliency zones.

However, it struggles to detect my third Tux logo repeatedly, often falling back to the CNX Tux logo.

But as you can see with the green square, the detection was done on the left flap of the penguin. That’s because SaliencySURF detection is done in a fixed size zone (64×64 pixels by default), so the camera distance, and the size of the zone, matter. You can change the size of the salient regions with the SaliencySURF rsiz parameter, which defines the height and width of the square in pixels. When I did the test, I first tried to detect the logo from a list of Tux images in a DuckDuckGo search, but it was too small and blurry. After switching to a bigger photo, the cable was too short to reach the logo, so instead I copied the image to GIMP and resized it so that it could fit in the 64×64 square while using the camera, and in this case detection worked reasonably well.
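
The resizing arithmetic behind that last step is simple enough to sketch in Python (`rsiz` here refers to the region size parameter discussed above; the helper function is mine):

```python
def scale_to_fit(width, height, rsiz=64):
    """Return the scale factor that fits a width x height object
    inside an rsiz x rsiz salient region, preserving aspect ratio."""
    return min(rsiz / width, rsiz / height)

# A 320x240 logo needs to be shown 5x smaller (or the camera
# moved further back) to fit in the default 64x64 region:
print(scale_to_fit(320, 240))  # 0.2
```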

The more you use the camera, the better you’ll be at understanding how it works, and leverage its capabilities.

Final Words

JeVois-A33 camera is an inexpensive way to get started with computer vision and deep learning, with excellent documentation, and if you put in the effort, you’ll even understand how it works at the source code level. It’s also fun to use, with many different modules to try. I have not tried it in this review due to time limitations, but you could also connect the camera to an Arduino board controlling a robot (cat chasing bot, anyone?) via the serial interface.
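
On the MCU side, that mostly means parsing the serial messages the camera emits. As a sketch of the idea in Python — the two-value “T2” and five-value “N2” layouts below follow the style of JeVois’ standardized serial messages, but check the serial messages documentation for the exact formats your module emits:

```python
def parse_jevois_message(line):
    """Parse a JeVois-style serial message such as 'T2 123 -45'
    (terse 2D target coordinates) or 'N2 id x y w h' (normal mode).

    Returns a dict, or None for lines we do not recognize.
    """
    parts = line.strip().split()
    if not parts:
        return None
    if parts[0] == "T2" and len(parts) == 3:
        return {"type": "T2", "x": int(parts[1]), "y": int(parts[2])}
    if parts[0] == "N2" and len(parts) == 6:
        return {"type": "N2", "id": parts[1],
                "x": int(parts[2]), "y": int(parts[3]),
                "w": int(parts[4]), "h": int(parts[5])}
    return None

print(parse_jevois_message("T2 250 -120"))
```

An Arduino sketch would do the same split-and-convert on each line received over the 4-pin serial port, then steer the motors from the x/y coordinates.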

The main challenges you may face while getting started are:

  1. Potential crashes due to power issues, but that’s solvable, and a power issues troubleshooting guide has even been published
  2. For robotics projects, you have to keep in mind there will be some lag for some modules, for example from 500ms (single object) to 3 seconds (YOLO test with multiple object types) for deep learning algorithms. Other modules, such as ArUco marker detection, offer close to real-time performance however.

Bear in mind all processing is done by the Allwinner A33 CPU cores, as the Mali-400MP GPU is not suitable for GPGPU. As more affordable SoCs with OpenCL/Vulkan capable GPUs (e.g. Mali-T720), and in some cases even an NNA (Neural Network Accelerator), are launched, we’ll be able to get similar low power smart cameras, but with much better computer vision performance.

JeVois-A33 can be purchased for $49, but to avoid wasting time with power issues, and to give you more options, I’d recommend going with the JeVois-A33 Developer/Robotics Kit reviewed here, going for $99.99 on Amazon, RobotShop, or the JeVois Store.

JeVois Smart Machine Vision Camera Review – Part 1: Developer / Robotics Kit Unboxing

October 24th, 2017 No comments

JeVois-A33 computer vision camera was unveiled at the end of last year through a Kickstarter campaign. Powered by an Allwinner A33 quad core Cortex A7 processor, and a 1.3MP camera sensor, the system can detect motion, track faces and eyes, detect & decode ArUco markers & QR codes, follow lines for autonomous cars, and more, thanks to the JeVois framework.

Most rewards from Kickstarter shipped in April of this year, so it’s quite possible some of the regular readers of this blog are already familiar with the camera. But the developer (Laurent Itti) re-contacted me recently, explaining they had improved the software with Python support, and new features such as the capability of running deep neural networks directly on the processor inside the smart camera. He also wanted to send a review sample, which I received today, but I got a bit more than I expected, so I’ll start the review with an unboxing of what they call the “Developer / Robotics Kit”.

I got the kit in a white package, so I’ll skip the package photo, and check out the content directly.

Click to Enlarge

I was really expecting to receive a tiny camera, and not much else. So my first reaction was: “what!?” 🙂

You’ll find 5 USB cables inside (from top left to bottom middle):


  • USB to micro serial adapter cable, 1m long, to access the serial console in the camera when running in debug mode, or while troubleshooting Arduino code
  • mini USB + micro USB splitter cable, 15cm long, to power both the camera and Arduino board from the power bank
  • mini USB Y cable, 80cm long, to power the board via two USB 2.0 ports or one USB 3.0 port on your host computer
  • mini USB cable, 23cm long, to power the camera from a USB port or power bank.
  • mini USB cable, 75cm long, to connect the camera to one USB 3.0 port or power bank.

The kit also includes an 8GB micro SD card pre-loaded with JeVois software, an SD adapter, a micro SD card reader, a 5V USB tester compatible with QuickCharge 2.0 to monitor the power consumption of the camera with your chosen algorithm, a 2,600 mAh power bank (large enough to power the camera for several hours), an Arduino compatible Pro mini board based on Microchip Atmel ATmega32U4 MCU, and a business card providing useful information such as a link to a Quick Start Guide.

Oh… I almost forgot. Can you see the “fan” in the middle of the photo above? That’s the actual JeVois-A33 camera. I knew it was small, but once you put it in your hands, you realize how tiny it is. The cable on the left of the camera is a micro serial cable to connect to an MCU board.

Click to Enlarge

The back of the camera features all the ports and connectors with a micro SD slot, a mini USB port, the micro serial port connector (which looks like a battery connector), and a dual color LED on the left of the micro serial connector that indicates power and camera status.

Click to Enlarge

The bottom reveals an opening to cool down the AXP223 PMIC.

Click to Enlarge

If you’re interested in the exact developer/robotics kit I’ve received, you can purchase it for $99.99 on JeVois, Amazon, or RobotShop (with locations in the US, Canada, Japan, and France). But if you just want the camera without all the cables and accessories, $49.99 will do.

CrazyPi Board Runs Ubuntu and ROS on Rockchip RK3128 SoC for Robotics & IoT Projects (Crowdfunding)

August 10th, 2017 4 comments

CrazyPi is a maker board powered by a Rockchip RK3128 quad core Cortex A7 processor that can take various magnetically connected modules such as LIDAR, a gimbal, 4G LTE, etc., and runs both Ubuntu and ROS (Robot Operating System) for DIY robotics & IoT projects.

Click to Enlarge

CrazyPi main board specifications:

  • SoC – Rockchip RK3128 quad core Cortex A7 processor @ 1.2 GHz with ARM Mali GPU
  • MCU – ARM Cortex-M3 @ 72 MHz
  • System Memory – 1GB DDR3L @ 1066 MHz
  • Storage – 16GB eMMC flash pre-loaded with Ubuntu and ROS
  • Connectivity – 802.11 a/b/g/n WiFi @ 150 Mbps, Bluetooth 4.0
  • USB – 1x USB 2.0 host port
  • Expansion Headers – Two headers with a total of 36-pin exposing 1x HDMI, 1x speaker, 1x microphone, 3x PWM, 1x I2C, 1x UART, 1x SPDIF, 1x SPI, 1x USB
  • Power Supply – 5V via micro USB port ?
  • Dimensions – Smaller than credit card

The full details are not available yet, but the company claims CrazyPi is “completely open source and DIY”, so I’d assume more details will eventually show up in the CrazyPi github repo (currently empty). A cloud service also allows you to stream the webcam output from anywhere in the world.

Webcam View and Map Generated from CrazyPi Robot Kit

What’s quite interesting is that the board is designed to be connected to add-on boards, modules, and accessories allowing you to build robots:

  • Robotic shield board to control motors / servos
  • Media shield board for HDMI output and use the board as a mini computer
  • 4G LTE module (maybe part of the robotic shield board?)
  • Crazyou 4K LIDAR sensor with SLAM (Simultaneous Localization And Mapping) function to automatically create map of your environment
  • 720p camera module
  • 2-axis gimbal
  • 4-wheel robot chassis
  • 2x 18650 batteries and case

Again, we don’t have the exact details for each, but the promo video explains what can be done with the kits.

Crazyou – that’s the name of the company – has launched the project on Kickstarter to fund mass production with a 200,000 HKD goal (around $25,800 US). The board is supposed to cost $29, but is not offered standalone in the crowdfunding campaign, so instead you could start with the $59 CrazyPi Media Kit with the mainboard, camera, and media board. If you want the complete robot shown above, you’d have to pledge $466 for the CrazyPi Advanced Kit reward with the camera module, the mainboard, the gimbal, the robotic shield board, battery case and charger, the chassis, and LIDAR. Various bundles are available to match different projects’ requirements. Shipping to most countries adds around $19, and delivery is scheduled for October 2017. There’s not much to see on the Crazyou website, but eventually more details may emerge there.

Thanks to Freire for the tip.

EduMIP Self-Balancing Robot Kit Based on BeagleBone Blue is Now Available for $50

July 14th, 2017 1 comment

BeagleBone Blue is a board designed for robotics projects, and one of those projects is the EduMIP self-balancing robot, which was first designed around BeagleBone Black and a robotics cape, but so far was not available for sale. Renaissance Robotics is now selling the kit, without the board, for $50.

EduMIP with Beaglebone Blue (left) and BBB and Robotics Cape (right) – Click to Enlarge

The kit has been designed by UC San Diego Coordinated Robotics Lab in order to teach robotics to students, and it works with BeagleBone Blue, or BeagleBone Black with the Robotics Cape and an optional WiFi dongle.

Some of the subjects that can be learned with eduMIP include:
  • Dynamic modeling and feedback control (classical, state-space, adaptive, …) of unstable systems.
  • Robot motion planning and collision avoidance.
  • DC motor control via (built-in) H-bridges and encoder counters.
  • Attitude estimation via (built-in) IMU and barometer.
  • Communication via (built-in) WiFi (802.11b/g/n) and Bluetooth (4.1/BLE).
  • Charging, balancing, protection, and monitoring of 2-cell LiPo (included).
  • Multithreaded event-driven C programming in Debian Linux.
  • Multithreaded Graphical System Design for embedded applications.
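
To give a flavor of the first bullet — this is a generic textbook sketch, not eduMIP courseware — a linearized inverted pendulum is unstable in open loop, but simple proportional-derivative feedback on the tilt angle stabilizes it:

```python
def simulate(theta0=0.2, kp=200.0, kd=20.0, g_over_l=98.1,
             dt=0.001, steps=5000):
    """Euler-integrate the linearized pendulum  theta'' = (g/l)*theta + u
    under PD feedback u = -kp*theta - kd*theta_dot.

    Open loop (kp = kd = 0) the tilt angle diverges; with feedback
    it settles back toward upright (theta = 0).
    """
    theta, omega = theta0, 0.0   # tilt angle (rad) and angular rate
    for _ in range(steps):
        u = -kp * theta - kd * omega        # PD control torque
        alpha = g_over_l * theta + u        # angular acceleration
        theta += omega * dt
        omega += alpha * dt
    return theta

print("final angle with feedback:", simulate())
print("final angle without feedback:", simulate(kp=0, kd=0))
```

The same loop structure — read the IMU, compute a control effort, drive the motors — is what the eduMIP runs, just with the real dynamics and the Robotics Cape drivers.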


eduMIP is compatible with Python, ROS, MATLAB & Simulink, and LabVIEW.
The CAD designs for the hardware are released under a Creative Commons CC-BY 4.0 license, while the source code is released under a 3-clause BSD license. There’s no link to those resources on the Renaissance Robotics website, but you should find everything you need on that Hackster.io page.

$29 Bluey nRF52832 BLE & NFC Development Board Comes with Temperature, Humidity, Light, and Motion Sensors

July 5th, 2017 No comments

Electronut Labs, a startup based in Bangalore, India, has designed the Bluey board, powered by a Nordic Semi nRF52832 Bluetooth LE SoC, and equipped with 3 sensor chips reporting temperature, humidity, light intensity, and acceleration data.

Bluey board specifications:

  • SoC – Nordic Semi nRF52832 ANT + BLE ARM Cortex-M4 @ 64 MHz processor with 512kB flash, 64kB RAM
  • Storage – Micro SD slot
  • Connectivity – Bluetooth 4.2/5 LE and other proprietary 2.4 GHz wireless standards via PCB Antenna, NFC via PCB antenna
  • Sensors
    • TI HDC1010 Temperature/Humidity sensor
    • APDS-9300-020 ambient light sensor
    • ST Micro LSM6DS3 accelerometer
  • Expansion Header – 18-pin header with GPIO, 5V, 3.3V, and GND
  • Debugging – CP2104 USB interface; 6-pin SWD header
  • Misc – CREE RGB LED; 2 push buttons; coin cell holder; on/off switch; external / battery power jumper
  • Power Supply – 5V via micro USB port, up to 6V battery voltage via 4-pin header

The board is partially open source hardware, with KiCad & PDF schematics (v1.1 PCB) released on Github, but not the Gerber files nor the BoM. On Github you’ll also find some documentation, and various samples relying on the Nordic nRF5 SDK to play with Bluetooth LE and sensors, as well as sample code for a 2-wheel drive ultrasonic robot.

The board is sold on Tindie for $29, but if you live in India, you can purchase it locally instead for 1,875 Rupees. Visit the product page for a few more details. They do not sell the full robot, as it is based on off-the-shelf parts including an HC-SR04 ultrasonic sensor, a DRV8835 motor driver, and a chassis made by Femtech RC Model Co that is similar to the Mini Robot Rover sold on Adafruit.
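
For reference, the ranging such an HC-SR04 robot performs is just a time-of-flight calculation: half the echo round-trip time multiplied by the speed of sound. A Python sketch (343 m/s assumes air at about 20°C; the function name is mine):

```python
def echo_to_distance_cm(echo_seconds, speed_of_sound=343.0):
    """Convert an HC-SR04 echo pulse width (seconds) to distance in cm.

    The echo pulse covers the round trip to the obstacle and back,
    hence the division by two.
    """
    return echo_seconds * speed_of_sound * 100 / 2

# A 1 ms echo pulse corresponds to roughly 17 cm:
print(echo_to_distance_cm(0.001))
```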

Husarion CORE2 STM32 Board for Robotics Projects Works with ESP32, Raspberry Pi 3, or ASUS Tinkerboard

June 30th, 2017 No comments

Husarion CORE2 is a board designed to make robotics projects simpler and faster to complete, with pre-configured software and online management. Projects can start using LEGOs, before moving to 3D printed or laser-cut versions of the mechanical parts, without having to spend too much time on the electronics and software parts of the project.

CORE2 and CORE2-ROS Boards – Click to Enlarge

Two versions of the board are available: CORE2, combining an STM32 MCU with an ESP32 WiFi & Bluetooth module, and CORE2-ROS, with the STM32 instead coupled to a Raspberry Pi 3 or ASUS Tinker Board running ROS (Robot Operating System). Both solutions share most of the same specifications:

  • MCU – STMicro STM32F4 ARM Cortex-M4 MCU @ 168 MHz with 192 kB RAM, 1 MB flash
  • External Storage – 1x micro SD slot
  • USB – 1x USB 2.0 host port with 1A charging capability; 1x micro USB port for debugging and programming via FTDI chip
  • Expansion Headers
    • hRPi expansion header for
      • CORE2-ROS –  a single board computer Raspberry Pi 3 or ASUS Tinker Board
      • CORE2 – an ESP32 based Wi-Fi module
    • 2x motor headers (hMot) with
      • 4x DC motor outputs with built-in H-bridges
      • 4x quadrature encoder inputs
      • 1 A cont. / 2 A max. current per output (2 A / 4 A when paralleled)
    • 6x servo ports with selectable supply voltage (5 / 6 / 7.4 / 8.6 V), 3 A cont. / 4.5 A max. current for all servos together
    • 6x 6-pin hSens sensor ports with GPIOs, ADC/ext. interrupt, I2C/UART, 5 V out
    • 1x hExt extension port with 12x GPIO, 7x ADC, SPI, I2C, UART, 2 x external interrupts
    • 1x CAN interface with onboard transceiver
  • Debugging – DBG SWD (Serial Wire Debug) STM32F4 debug port; micro USB port for serial console
  • Misc – 5x LEDs, 2x buttons
  • Power Supply – 6 to 16V DC with built-in overcurrent, overvoltage, and reverse polarity protection
  • Dimensions – 94 x 85 mm

On the software side, Husarion provides a set of open source libraries for robots as part of their hFramework, using DMA channels and interrupts internally to handle communication interfaces. The company has also prepared tutorials dealing with an introduction to ROS, creating nodes, simple kinematics for mobile robots, visual object recognition, running ROS on multiple machines, and SLAM navigation. The CORE2 board can also be programmed using the Arduino IDE, and finally Husarion Cloud allows you to securely create a web user interface to control the robot, and even program the robot firmware from a web browser.

That means you can program your robot using either the Web IDE, or offline with an SDK plus Visual Studio Code and the Husarion extension. The development workflow is summarized above.

CORE2 boards can be used for a variety of projects such as robotic arms, telepresence robots, 3D printers, education robots, drones, exoskeletons, and so on. If you want to learn about robots, but don’t have LEGO Mindstorms and don’t feel comfortable making your own mechanical parts yet, ROSbot might be a good way to start, with a CORE2-ROS board, LiDAR, a camera, four DC motors with encoders, an orientation sensor (MPU9250), four distance sensors, a Li-Ion battery (3x 18650 batteries) and a charger, as well as aluminum mechanics. It also happens to be the platform they use for their tutorials.

ROSbot

You’ll find all those items, and some extra add-on boards, on the Crowd Supply campaign, starting at $89 for the CORE2 board with ESP32 module, $99 for the CORE2-ROS board without SBC, and going up to $1,290 for the complete ROSbot with ASUS Tinker Board. Shipping is free to the US, and $8 to $20 depending on the selected rewards, with delivery scheduled for September 2017, except for the ROSbot, which is planned for mid-October 2017.

$399 Intel Euclid Robotics Devkit Runs Ubuntu & ROS on Intel Atom x7-Z8700 Processor

May 22nd, 2017 No comments

We’ve seen many mini PCs based on Intel Atom x5/x7 “Cherry Trail” processors in the last year, but Intel has also integrated their low power processors into hardware aimed at robotics, such as the Intel RealSense development kit based on the Atom x5 UP Board and a RealSense R200 depth camera. The company has now launched its all-in-one Intel Euclid development kit combining an Atom x7-Z8700 processor with a RealSense camera in a single enclosure.

Click to Enlarge

Intel Euclid specifications:

  • SoC – Intel Atom x7-Z8700 Cherry Trail quad core processor @ up to 2.4GHz with Intel HD Graphics Gen 8
  • System Memory – 4GB LPDDR3-1600
  • Storage – 32GB eMMC 5.0 flash, Micro SD slot up to 128GB
  • Video Output – micro HDMI port up to 4K @ 30 Hz
  • Audio – 2x I2S interfaces, 1W mono speaker, 3x DMIC with noise cancellation
  • Camera – Intel RealSense ZR300 camera
    • RGB camera – 2MP, up to 1080p @ 30fps, 16:9 aspect ratio, rolling shutter, fixed focus, 75° x 41.5° x 68° FOV
    • Stereo imagers – 2x VGA @ 60fps, global shutter, fixed focus, 70° x 46° x 59° FOV
    • Depth output – up to 628 × 468 @ 60fps, 16-bit format; minimal depth distance: 0.6 m (628 x 468) or 0.5 m (480 x 360); active IR stereo technology
    • Tracking module
      • Fisheye camera – VGA @ 60fps resolution, 166° × 100° × 133° FOV
      • IMU – 3-axis accelerometer & 3-axis gyroscope with 50 μsec timestamp accuracy
  • Connectivity – Dual band 802.11 a/b/g/n 1×1 WiFi, Bluetooth 4.0, GPS (GNS, GLONASS, Beidou, Galileo, QZSS, WAAS, EGNOS)
  • Sensors – Integrated Sensor Hub (ISH), accelerometer, digital compass, gyroscope, ambient light, proximity, thermal, environmental (barometer, altimeter, humidity, temperature)
  • USB – 1x USB 3.0 port, 1x micro USB OTG port with power, 1x micro USB 2.0 port for UART / serial console
  • Misc – ¼” standard tripod mounting hole; power and charging LEDs
  • Battery – 2000 mAh @ 3.8V
  • Power Supply – 5V/3A via battery terminals
  • Temperature Range — up to 35°C (still air)
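
A handy consequence of those FOV numbers: the scene width a camera covers at a given distance is 2·d·tan(FOV/2). A quick Python check for the stereo imagers’ 70° horizontal field at the 0.6 m minimum depth distance (the helper function is mine):

```python
import math

def coverage_width(distance_m, fov_degrees):
    """Width of the scene covered at a given distance by a camera
    with the given horizontal field of view (thin-lens approximation)."""
    return 2 * distance_m * math.tan(math.radians(fov_degrees) / 2)

# At the 0.6 m minimum depth distance, the 70-degree stereo FOV
# spans roughly 0.84 m of scene width:
print(round(coverage_width(0.6, 70), 2))
```

The much wider 166° fisheye trades per-pixel detail for coverage, which is why it feeds the tracking module rather than the depth pipeline.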

The kit runs Ubuntu 16.04 with Robot Operating System (ROS) Kinetic Kame, and a custom software layer to allow developers to control the device using a web interface. It also supports remote desktop applications, and includes evaluation versions of the Intel SLAM and Person Tracking middleware.

Euclid Camera Output: Color Stream, Depth Stream, and Fisheye Stream – Click to Enlarge

Intel RealSense SLAM Library middleware enables applications in robots and drones to understand their location and surroundings more accurately than GPS allows, including in GPS-denied environments and inside yet-unmapped spaces. You’ll find documentation about SLAM, the person tracking middleware, the camera API, the RealSense SDK framework, the Euclid user guide and more on the Intel Euclid product page. You’ll be able to get support in the RealSense forums and the Euclid developer kit community, where you’ll find tutorials and example projects.

Intel Euclid Development Kit can be pre-ordered for $399.00 on the product page, with shipping starting on May 31, 2017.

Via LinuxGizmos