
Posts Tagged ‘robotics’

$399 Intel Euclid Robotics Devkit Runs Ubuntu & ROS on Intel Atom x7-Z8700 Processor

May 22nd, 2017

We’ve seen many mini PCs based on Intel Atom x5/x7 “Cherry Trail” processors over the last year, but Intel has also integrated its low power processors into hardware aimed at robotics, such as the Intel RealSense development kit based on the Atom x5 UP Board and a RealSense R200 depth camera. The company has now launched the all-in-one Intel Euclid development kit, which combines an Atom x7-Z8700 processor and a RealSense camera in a single enclosure.


Intel Euclid specifications:

  • SoC – Intel Atom x7-Z8700 Cherry Trail quad core processor @ up to 2.4GHz with Intel HD Graphics Gen 8
  • System Memory – 4GB LPDDR3-1600
  • Storage – 32GB eMMC 5.0 flash, Micro SD slot up to 128GB
  • Video Output – micro HDMI port up to 4K @ 30 Hz
  • Audio – 2x I2S interfaces, 1W mono speaker, 3x DMIC with noise cancellation
  • Camera – Intel RealSense ZR300 camera
    • RGB camera – 2MP up to 1080p @ 30fps, 16:9 aspect ratio, rolling shutter, fixed focus, 75° x 41.5° x 68° FOV
    • Stereo imagers – 2x VGA @ 60fps, global shutter, fixed focus, 70° x 46° x 59° FOV
    • Depth output – up to 628 x 468 @ 60fps, 16-bit format; minimal depth distance: 0.6 m (628 x 468) or 0.5 m (480 x 360); active IR stereo technology
    • Tracking module
      • Fisheye camera – VGA @ 60fps, 166° x 100° x 133° FOV
      • IMU – 3-axis accelerometer & 3-axis gyroscope with 50 μsec time stamp accuracy
  • Connectivity – Dual band 802.11 a/b/g/n 1×1 WiFi, Bluetooth 4.0, GNSS (GPS, GLONASS, Beidou, Galileo, QZSS, WAAS, EGNOS)
  • Sensors – Integrated Sensor Hub (ISH), accelerometer, digital compass, gyroscope, ambient light, proximity, thermal, environmental (barometer, altimeter, humidity, temperature)
  • USB – 1x USB 3.0 port, 1x micro USB OTG port with power, 1x micro USB 2.0 port for UART / serial console
  • Misc – ¼” standard tripod mounting hole; power and charging LEDs
  • Battery – 2000 mAh @ 3.8V
  • Power Supply – 5V/3A via battery terminals
  • Temperature Range – up to 35°C (still air)

The kit runs Ubuntu 16.04 with the Robot Operating System (ROS) Kinetic Kame, plus a custom software layer that lets developers control the device through a web interface. It also supports remote desktop applications, and includes evaluation versions of Intel’s SLAM and Person Tracking middleware.
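Since ROS Kinetic comes pre-installed, reading the camera streams is standard ROS practice. Below is a minimal sketch of a Python node subscribing to the color stream; the topic name is an assumption based on typical RealSense ROS drivers, so run `rostopic list` on the device to find the actual names.

```python
#!/usr/bin/env python
# Minimal ROS Kinetic node logging frames from the Euclid's color camera.
# "/camera/color/image_raw" is an assumed topic name (typical for
# RealSense ROS drivers); verify it with `rostopic list` on the device.
import rospy
from sensor_msgs.msg import Image

def on_image(msg):
    # Report the resolution and pixel encoding of each incoming frame.
    rospy.loginfo("frame: %dx%d (%s)", msg.width, msg.height, msg.encoding)

if __name__ == "__main__":
    rospy.init_node("euclid_color_listener")
    rospy.Subscriber("/camera/color/image_raw", Image, on_image)
    rospy.spin()
```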

Euclid Camera Output: Color Stream, Depth Stream, and Fisheye Stream

The Intel RealSense SLAM Library middleware enables robots and drones to understand their location and surroundings in GPS-denied environments and in as-yet unmapped spaces, more accurately than GPS alone would allow. You’ll find documentation about SLAM, the Person Tracking middleware, the camera API, the RealSense SDK framework, the Euclid user guide, and more on the Intel Euclid product page. You’ll be able to get support in the RealSense forums and the Euclid developer kit community, where you’ll find tutorials and example projects.

The Intel Euclid Development Kit can be pre-ordered for $399.00 on the product page, with shipping starting on May 31, 2017.

Via LinuxGizmos

$80 BeagleBone Blue Board Targets Robots & Drones, Robotics Education

March 14th, 2017

Last year, we reported that BeagleBoard.org was working with the University of California San Diego on the BeagleBone Blue board for robotics educational kits such as the EduMiP self-balancing robot and the EduRover four-wheel robot. The board has finally launched, so we now know the full details, and it can be purchased for about $80 on the Mouser, Element14, and Arrow websites.


BeagleBone Blue specifications:

  • SiP (System-in-Package) – Octavo Systems OSD3358 with TI Sitara AM3358 ARM Cortex-A8 processor @ up to 1 GHz, 2×32-bit 200-MHz programmable real-time units (PRUs), PowerVR SGX530 GPU, PMIC, and 512MB DDR3
  • Storage – 4GB eMMC flash, micro SD slot
  • Connectivity – WiFi 802.11 b/g/n, Bluetooth 4.1 LE (TI Wilink 8) with two antennas
  • USB – 1x USB 2.0 client port and 1x USB 2.0 host port
  • Sensors – 9 axis IMU, barometer
  • Expansion
    • Motor control – 8x 6V servo out, 4x DC motor out, 4x quadrature encoder in
    • Other interfaces – GPIOs, 5x UARTs, 2x SPI, 1x I2C, 4x ADC, CAN bus
  • Misc – Power, reset and 2x user buttons; power, battery level & charger LEDs; 6x user LEDs; boot select switch
  • Power Supply – 9-18V DC input via power barrel; 5V via micro USB port; 2-cell LiPo support with balancing
  • Dimensions & Weight – TBD

The board ships pre-loaded with Debian, but it also supports the Robot Operating System (ROS) and Ardupilot, as well as graphical programming via the Cloud9 IDE on Node.js. You’ll find more details, such as documentation, hardware design files, and example projects, on the BeagleBone Blue product page and on GitHub.
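Besides ROS and Ardupilot, the motor headers can also be driven directly from user code. As a rough illustration, here is a hedged Python sketch using the rcpy library (Python bindings for the board’s robot control layer); the channel number, duty-cycle range, and exact calls are assumptions to verify against the rcpy documentation.

```python
# Hedged sketch: spin DC motor channel 1 on the BeagleBone Blue via rcpy.
# Channel numbering and API details are assumptions; check the rcpy docs.
import time

import rcpy
import rcpy.motor as motor

rcpy.set_state(rcpy.RUNNING)      # arm the robotics cape services
try:
    motor.set(1, 0.3)             # channel 1 at 30% duty cycle, forward
    time.sleep(2.0)
    motor.set(1, 0.0)             # stop the motor
finally:
    rcpy.set_state(rcpy.EXITING)  # release the hardware cleanly
```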

The board was formally launched at Embedded World 2017, and Jason Kridner, Open Platforms Technologist/Evangelist at Texas Instruments, and co-founder and board member at the BeagleBoard.org Foundation, uploaded a video starting with a demo of various robotics and UAV projects, before giving a presentation & demo of the board using the Cloud9 IDE at the 2:10 mark.


If you attend Embedded World 2017, you should be able to check out the board and demos at Hall 3A, Booth 219a.

Open Surgery Initiative Aims to Build DIY Surgical Robots

February 7th, 2017

Medical equipment can be really expensive because of the R&D involved and the resulting patents, low manufacturing volumes, government regulations, and so on. Developed countries can normally afford those higher costs, but for many others the equipment may simply be prohibitively expensive. The Open Surgery initiative aims to mitigate the costs by “investigating whether building DIY surgical robots, outside the scope of healthcare regulations, could plausibly provide an accessible alternative to the costly professional healthcare services worldwide”.

DIY Surgical Robot

The project is composed of members from the medical, software, hardware, and 3D printing communities, is not intended for (commercial) application, and currently serves only academic purposes.

Commercial surgical robots can cost up to $2,000,000, but bring benefits like smaller incisions, reduced risks of complications and readmissions, and shorter hospital stays thanks to a faster recovery process. There have already been several attempts within the robotics community to come up with cheaper and more portable surgical robots, such as the RAVEN II surgical robot, initially developed with funding from the US military to create a portable telesurgery device for battlefield operations, and valued at $200,000. The software used to control the RAVEN II has been made open source, so other people can improve on it.

The system is currently only used by researchers in universities to experiment with robotic surgery, and it can’t be used on humans, as it lacks the required safety and quality control systems. This is a step in the right direction, but the price still puts it out of reach for most medical hacker communities. So Frank Kolkman, who set up the Open Surgery initiative, has spent several months trying to build a DIY surgical robot for around $5,000, using as many off-the-shelf parts as possible together with prototyping techniques such as laser cutting and 3D printing, with the help of the community.

Three major challenges to designing a surgical robot (theoretically) capable of performing laparoscopic surgery have been identified:

  1. The number and size of tools: during a single operation a surgeon will switch between various types of tools, so a robot would either need many of them, or the tools would have to be interchangeable. The instruments are also extremely small and difficult to make.
  2. Anything that comes into contact with the human body has to be sterile to reduce the risk of infection; most existing tools are made of stainless steel so that they can be sterilized in an autoclave, which may not be easily accessible to many people.
  3. The type of motion a surgical robot should be able to make, whereby a fixed point of rotation in space is created where the tool enters the body through an entry port, or ‘trocar’. The trocar needs to remain stationary to avoid tissue damage (the sketch below illustrates this geometry).
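To make the third challenge concrete, here is a small illustrative Python sketch (not from the project) of that “remote center of motion” geometry: with a straight tool pivoting about a fixed trocar point, two rotation angles plus the insertion depth fully determine where the tool tip ends up. All names and values below are hypothetical.

```python
# Illustrative remote-center-of-motion geometry: the tool shaft always
# passes through the fixed trocar point, so the tip position follows
# from two angles and the insertion depth. Values are hypothetical.
import math

def tool_tip(trocar, yaw, pitch, depth):
    """Tip position of a straight tool pivoting about `trocar`.
    Angles in radians, depth in mm along the shaft into the body."""
    ux = math.cos(pitch) * math.cos(yaw)
    uy = math.cos(pitch) * math.sin(yaw)
    uz = -math.sin(pitch)  # negative z points into the body
    return (trocar[0] + depth * ux,
            trocar[1] + depth * uy,
            trocar[2] + depth * uz)

# Sweeping yaw/pitch moves the tip around while the shaft keeps passing
# through the trocar, so no lateral force is exerted on the entry port.
print(tool_tip((0.0, 0.0, 0.0), math.radians(10), math.radians(30), 80.0))
```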

He solved the first issue by finding laparoscopic instruments on Alibaba, as well as a camera, CO2 insufflation pumps, and other items. For the second hurdle, he realized that a domestic oven set to 160°C for 4 hours could be an alternative to an autoclave. The mechanical design was the most complicated part, as it required many iterations, and he ended up with a mix of 3D printed parts and DC servo motors. The software was written in the Processing open source language. You can see the results in the short video below.

While attempting surgery with the design would not be recommended just yet, a $5,000 DIY surgical robot appears to be feasible. Maybe it could be evaluated by one or more trained surgeons first, and then tested on animals that need surgery, before eventually and potentially being used on humans who would not get the treatment otherwise.

While there’s “Open” in “Open Surgery” and the initial intent was to make the project open source, it turned out to be almost impossible to design surgical robots without infringing on patents. That’s no problem as long as you make parts for private use, but Frank explains that sharing the design files could cause problems, and the legality of doing so requires some more research.

Samsung Introduces IoT-Ready POWERbot VR7000 Robot Vacuum Cleaner Compatible with Amazon Echo

December 29th, 2016

2017 is the year where the future starts. You’ll be wandering around your automated home or office, where lights and heating are fully handled by a gateway taking sensor values into account and equipped with a CO2-controlled ventilation system, your eyes constantly on your phone, wearing neckband speakers likely connected to your Amazon Echo to let you know when it’s time to get up, eat, go to work, brush your teeth, and get back to bed again. Your whole life will be taken care of on your behalf by the Internet of Things, relieving you of the stress of routine daily decisions… Luckily, you’ll still have an illusion of control thanks to your “IoT-ready” Samsung POWERbot VR7000 vacuum cleaner, which can be controlled with your voice via that Echo thing, giving your life a purpose.


Samsung Electronics’ latest POWERbot vacuum cleaner will be unveiled at CES 2017 in January with a slimmer design (97mm) and more powerful cleaning capabilities. The POWERbot VR7000 features “Visionary Mapping Plus” and “FullView Sensor 2.0” to detect obstacles and generate a map of the room, while its Intelligent Power Control feature automatically adjusts the suction power to the surface type (hardwood, carpet, etc.). I think it’s also the first “IoT-ready” vacuum cleaner I’ve seen: you can control it using a mobile app, or through voice commands thanks to its compatibility with Amazon Echo. TizenExperts also reports that the device will run Tizen, and can be integrated with the SmartThings hub. I guess you could also use some sort of dust sensor(s) to decide when to start the vacuum cleaner, besides scheduling cleaning times.


The POWERbot VR7000 will be showcased at Samsung Electronics’ CES booth #15006 between January 5th and January 8th, 2017.

JeVois-A33 is a Small Quad Core Linux Camera Designed for Computer Vision Applications (Crowdfunding)

December 27th, 2016

The JeVois Neuromorphic Embedded Vision Toolkit – developed at iLab at the University of Southern California – is an open source software framework for capturing and processing images through a machine vision algorithm, primarily designed to run on embedded camera hardware, but also supporting Linux boards such as the Raspberry Pi. A compact Allwinner A33 based camera has now been designed to run the software, for use in robotics and other projects requiring a lightweight and/or battery powered camera with computer vision capabilities.

JeVois-A33 camera specifications:

  • SoC – Allwinner A33 quad core ARM Cortex-A7 processor @ 1.35GHz with VFPv4 and NEON, and a dual core Mali-400 GPU supporting OpenGL ES 2.0
  • System Memory – 256MB DDR3 SDRAM
  • Storage – micro SD slot for firmware and data
  • 1.3MP camera capable of video capture at
    • SXGA (1280 x 1024) up to 15 fps (frames/second)
    • VGA (640 x 480) up to 30 fps
    • CIF (352 x 288) up to 60 fps
    • QVGA (320 x 240) up to 60 fps
    • QCIF (176 x 144)  up to 120 fps
    • QQVGA (160 x 120) up to 60 fps
    • QQCIF (88 x 72) up to 120 fps
  • USB – 1x mini USB port for power, also acting as a UVC webcam interface
  • Serial – 5V or 3.3V (selected through VCC-IO pin) micro serial port connector to communicate with Arduino or other MCU boards.
  • Power – 5V (3.5 Watts) via the mini USB port; requires a USB 3.0 port or a Y-cable to two USB 2.0 ports
  • Misc
    • Integrated cooling fan
    • 1x two-color LED: Green: power is good. Orange: power is good and camera is streaming video frames.
  • Dimensions – 28 cc or 1.7 cubic inches (plastic case included, with 4 holes for secure mounting)

The camera runs Linux with drivers for the camera sensor, the JeVois C++17 video capture, processing & streaming framework, OpenCV 3.1, and toolchains. You can either connect it to a host computer’s USB port to check out the camera output (actual image + processed image), or to an MCU board such as an Arduino via the serial interface, using machine vision to control robots, drones, or other devices. Currently three modes of operation are available:

  • Demo/development mode – the camera outputs a demo display over USB that shows the results of its analysis, potentially along with simple data over serial port.
  • Text-only mode – the camera provides no USB output, only text strings, for example commands for a pan/tilt controller (see the host-side sketch after this list).
  • Pre-processing mode – The smart camera outputs video that is intended for machine consumption, and potentially processed by a more powerful system.
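To give an idea of the text-only mode, the host (or MCU) simply reads newline-terminated strings from the serial port. Here is a hedged Python sketch using pyserial; the port name, baud rate, and “identifier followed by coordinates” message layout are assumptions, as the real serial protocol is described in the JeVois documentation.

```python
# Hedged host-side reader for JeVois text-only serial output.
# The port, baud rate, and message layout below are assumptions.
import serial  # pyserial

with serial.Serial("/dev/ttyUSB0", 115200, timeout=1.0) as port:
    while True:
        line = port.readline().decode("ascii", errors="replace").strip()
        if not line:
            continue                 # read timed out; keep polling
        fields = line.split()
        # Assumed layout: an identifier followed by coordinate values,
        # e.g. something like "T2 320 240" for a tracked target.
        print("id:", fields[0], "payload:", fields[1:])
```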

The smart camera can detect motion, track faces and eyes, detect & decode ArUco markers & QR codes, detect & follow lines for autonomous cars, and more. Since the framework is open source, you’ll also be able to add your own algorithms and modify the firmware. Some documentation has already been posted on the project’s website. The best way to get an idea of the capabilities of the camera and software is to watch the demo video below.

The project launched on Kickstarter a few days ago with the goal of raising $50,000. A $45 “early backer” pledge should get you a JeVois camera with a micro serial connector with 15cm pigtail leads, while a $55 pledge adds an 8GB micro SD card pre-loaded with the JeVois software and a 24/28 AWG mini USB Y-cable. Shipping is free to the US, but adds $10 to Canada, and $15 to the rest of the world. Delivery is planned for February and March 2017.

Grid-EYE Breakout Board is a $49 Low Resolution Thermal Camera Module

November 29th, 2016

Thermal cameras can be really expensive pieces of equipment; even the cheap 60×60 thermal cameras available on Aliexpress cost a little over $200. However, PURE Engineering has made a breakout board for Panasonic’s Grid-EYE 8×8 infrared array sensor that lets you experiment with the technology, or integrate it into your own projects, for just $49.


Grid-EYE breakout board features:

  • Panasonic Grid-EYE AMG8834 64 pixel infrared / thermal camera sensor with 60 degree viewing angle using MEMS thermopile technology
  • Pinout compatible with Arduino Zero, ST-NUCLEO board, and other 3.3V boards with I2C, VDD, GND, INT, and AD pins
  • PUREmodules PCB edge connectors (UART, GPIO…) to interface with the company’s IoT boards
  • Power Supply – On-board regulator handles 3 to 5V input

The Panasonic sensor transfers thermal presence, direction, and temperature values over I2C. The company wrote a demo for the module, with an Arduino sketch and a Processing sketch both available on GitHub, and you can see it in action in the video below, using an ice pack and a hot coffee mug.
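If you’d rather poll the sensor from a Linux board instead, here is a minimal Python sketch reading one 8×8 frame over I2C. It assumes the common AMG88xx register map (64 pixels of little-endian 12-bit data starting at register 0x80, 0.25°C per LSB) and the 0x69 I2C address; verify both against the AMG8834 datasheet.

```python
# Minimal Grid-EYE frame reader over I2C (Linux, smbus2).
# Register map and address are assumptions from the AMG88xx family docs.
from smbus2 import SMBus

ADDR = 0x69        # assumed I2C address (0x68 is the alternative)
PIXEL_BASE = 0x80  # first pixel temperature register

def read_frame(bus):
    raw = []
    for offset in range(0, 128, 32):  # SMBus block reads max out at 32 bytes
        raw += bus.read_i2c_block_data(ADDR, PIXEL_BASE + offset, 32)
    temps = []
    for i in range(0, 128, 2):
        value = raw[i] | (raw[i + 1] << 8)  # little-endian, 12-bit signed
        if value & 0x800:
            value -= 0x1000                 # sign-extend negative readings
        temps.append(value * 0.25)          # 0.25 degC per LSB
    return [temps[r * 8:(r + 1) * 8] for r in range(8)]  # 8x8 grid

with SMBus(1) as bus:
    for row in read_frame(bus):
        print(" ".join("%5.1f" % t for t in row))
```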

Applications listed by Panasonic for this sensor include digital signage, security, lighting control, kiosk/ATM, medical imaging, automatic doors, thermal mapping, people counting, robotics, and others.

The board is now listed on GroupGets for $49, and 100 boards need to be sold for the group buying campaign to be successful. More details may be available on the product’s page on the PURE Engineering website. Alternatively, you could also get the AMG8834EK Grid-EYE evaluation kit, with the IR camera, an Atmel SAMD21G18A MCU, and Bluetooth Smart connectivity, for about $95 on Newark or £48.99 (~$61) on Farnell UK.

[Update: The PUREmodules modular system has now launched as a Kickstarter campaign, but it does not seem to include the thermal camera]

Nvidia Unveils Xavier Automotive & AI Octa-core SoC with 512-Core Volta GPU, 8K Video Decode & Encode

September 29th, 2016

Nvidia has introduced the successor to their Parker SoC, mostly targeting self-driving cars and artificial intelligence applications: the Xavier SoC features 8 custom ARMv8 cores, a 512-core Volta GPU, a VPU (Video Processing Unit) supporting 8K video decode and encode as well as HDR (High Dynamic Range), and a computer vision accelerator (CVA).

The processor will deliver 20 TOPS (trillion operations per second) of performance, while consuming only 20 watts of power, and since it’s designed specifically for autonomous cars, it will comply with automotive safety standards such as the ISO 26262 functional safety specification.

Anandtech published a comparison table with Tegra X1 (Erista), Parker, and Xavier using currently available information.

| | Xavier | Parker | Erista (Tegra X1) |
|---|---|---|---|
| CPU | 8x NVIDIA Custom ARM | 2x NVIDIA Denver + 4x ARM Cortex-A57 | 4x ARM Cortex-A57 + 4x ARM Cortex-A53 |
| GPU | Volta, 512 CUDA Cores | Pascal, 256 CUDA Cores | Maxwell, 256 CUDA Cores |
| Memory | ? | LPDDR4, 128-bit Bus | LPDDR3, 64-bit Bus |
| Video Processing | 7680x4320 Encode & Decode | 3840x2160p60 Decode, 3840x2160p60 Encode | 3840x2160p60 Decode, 3840x2160p30 Encode |
| Transistors | 7B | ? | ? |
| Manufacturing Process | TSMC 16nm FinFET+ | TSMC 16nm FinFET+ | TSMC 20nm Planar |

The company goes on to say that a single Xavier-based AI car supercomputer will be able to replace today’s fully configured DRIVE PX 2 with two Parker SoCs and two Pascal GPUs. The new platform will be much smaller, as illustrated below, will consume much less power at 20 Watts, or 25% of the power consumption of DRIVE PX 2, and will deliver the same AI performance (20 TOPS), as well as around 33% better integer performance (160 SPECINT).
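Put in numbers, the 25% figure implies a roughly 80 W DRIVE PX 2 configuration (20 W / 0.25 = 80 W), so Xavier works out to 20 TOPS / 20 W = 1 TOPS per watt, about four times the 0.25 TOPS per watt of the platform it replaces.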

DRIVE PX 2 vs Xavier-based AI car supercomputer

Xavier will start sampling in Q4 2017, and be available to automakers, tier 1 suppliers, startups and research institutions working on self-driving cars.

Nvidia has also uploaded a video showing the deep learning capabilities of their DRIVE PX 2 computer in a self-driving car that learned to drive in California before driving in New Jersey.

Parrot S.L.A.M Dunk is a Ubuntu & ROS Computer with 3D Depth Cameras for Drones & Robots

September 26th, 2016

Parrot and Canonical have partnered to develop the Parrot S.L.A.M.dunk development kit for designing applications for autonomous navigation, obstacle avoidance, indoor navigation, and 3D mapping for drones and robots, running Ubuntu 14.04 together with the Robot Operating System (ROS). The name of the kit is derived from its Simultaneous Localization and Mapping (S.L.A.M.) algorithm, which allows localization without a GPS signal.


Parrot S.L.A.M Dunk preliminary specifications:

  • SoC – NVIDIA Tegra K1 processor
  • Camera – Fish-eye stereo camera with a 1500×1500 resolution at 60fps
  • Sensors – Inertial-measurement unit (IMU), ultrasound sensor up to 15 meters range, magnetometer, barometer
  • Video Output – micro HDMI
  • USB – 1x micro USB 2.0 port, 1x USB 3.0/2.0 port
  • Weight – 140 grams

The Parrot S.L.A.M dunk can be fitted to various drones and robotic platforms, such as quadcopters and fixed-wing aircraft, rolling robots, and articulated arms, using mounting kits. The computer module is then connected to the host platform via a 3.5mm jack cable and a USB cable in order to send and receive commands and data.

This morning I wrote about the SoftKinetic 3D sensing camera based on time-of-flight technology, but the Parrot S.L.A.M Dunk relies on the more commonly used stereo vision approach. The micro HDMI port allows developers to connect the computer to a monitor in order to develop their applications for Ubuntu and ROS.
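The stereo principle itself is simple: depth is recovered from the horizontal disparity between the two views as Z = f × B / d, where f is the focal length in pixels and B the baseline between the cameras. Below is a generic OpenCV block-matching sketch illustrating the idea; it is not Parrot’s SDK, and the file names and calibration values are placeholders for a rectified stereo pair.

```python
# Generic stereo-depth sketch with OpenCV block matching (not Parrot's SDK).
# File names, focal length, and baseline are placeholder values.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
assert left is not None and right is not None, "need a rectified stereo pair"

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point

FOCAL_PX = 700.0    # focal length in pixels (placeholder)
BASELINE_M = 0.20   # distance between the two imagers (placeholder)

valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = FOCAL_PX * BASELINE_M / disparity[valid]  # Z = f*B/d
print("median scene depth: %.2f m" % np.median(depth[valid]))
```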

The Parrot S.L.A.M Dunk will be available in Q4 2016 at an undisclosed price. More information should eventually be found on the Parrot Developer website.