Archive

Posts Tagged ‘lidar’

NVIDIA DRIVE PX Pegasus Platform is Designed for Fully Autonomous Vehicles

October 11th, 2017

Many companies are now involved in the quest to develop self-driving cars, and they are getting there step by step, following the six levels of autonomous driving defined below, based on information from Wikipedia:

  • Level 0 – Automated system issues warnings but has no vehicle control.
  • Level 1 (“hands on”) – Driver and automated system share control over the vehicle. Examples include Adaptive Cruise Control (ACC), Parking Assistance, and Lane Keeping Assistance (LKA) Type II.
  • Level 2 (“hands off”) – The automated system takes full control of the vehicle (accelerating, braking, and steering), but the driver is still expected to monitor the driving, and be prepared to immediately intervene at any time. You’ll actually have your hands on the steering wheel, just in case…
  • Level 3 (“eyes off”) – The driver can safely turn their attention away from the driving tasks, e.g. the driver can text or watch a movie. The system may ask the driver to take over in some situations specified by the manufacturer, such as traffic jams. So no sleeping while driving 🙂 . The Audi A8 luxury sedan was the first commercial car to claim level 3 self-driving capability.
  • Level 4 (“mind off”) – Similar to level 3, but no driver attention is ever required. You could sleep while the car is driving, or even send the car somewhere without being in the driver’s seat. There’s a limitation at this level, as self-driving mode is limited to certain areas or special circumstances. Outside of these, the vehicle must be able to park itself safely if the driver does not retake control.
  • Level 5 (“steering wheel optional”) – Fully autonomous car with no human intervention required, and no other limitations.

So the goal is obviously to reach level 5, which would allow robotaxis, or a car that safely drives you home whatever your blood alcohol or THC level. This however requires lots of redundant (for safety) computing power, and current autonomous vehicle prototypes have a trunk full of computing equipment.

NVIDIA has condensed the AI processing power required for level 5 autonomous driving into the DRIVE PX Pegasus AI computer, which is roughly the size of a license plate, and capable of handling inputs from high-resolution 360-degree surround cameras and lidars, localizing the vehicle with centimeter accuracy, tracking vehicles and people around the car, and planning a safe and comfortable path to the destination.

The computer comes with four AI processors said to deliver 320 TOPS (trillion operations per second) of computing power, ten times faster than NVIDIA DRIVE PX 2, or about the performance of a 100-server data center according to Jensen Huang, NVIDIA founder and CEO. Specifically, the board combines two NVIDIA Xavier SoCs and two “next generation” GPUs with hardware-accelerated deep learning and computer vision algorithms. Pegasus is designed for ASIL D certification with automotive inputs/outputs, including CAN bus, FlexRay, 16 dedicated high-speed sensor inputs for camera, radar, lidar, and ultrasonics, plus multiple 10 Gbit Ethernet ports.

Machine learning works in two steps: training on the most powerful hardware you can find, and inferencing on cheaper hardware. For autonomous driving, data scientists train their deep neural networks on NVIDIA DGX-1 AI supercomputers, for example simulating 300,000 miles of driving in five hours by harnessing 8 NVIDIA DGX systems. Once training is completed, the models can be updated over the air to NVIDIA DRIVE PX platforms, where inferencing takes place. The process can be repeated regularly so that the system is always up to date.
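The train-then-deploy cycle described above can be sketched in a few lines of Python. The model here is a toy stand-in (a one-weight linear fit), not NVIDIA's actual pipeline; the point is only the split between heavy offline training and lightweight inference from an over-the-air payload.

```python
# Illustrative sketch of the two-step workflow: heavy training offline,
# cheap inference on the vehicle. Model and data are toy stand-ins.
import json

def train(samples):
    """'Data center' step: fit steering = w * lane_offset by least squares."""
    num = sum(x * y for x, y in samples)
    den = sum(x * x for x, _ in samples)
    return {"w": num / den}

def ota_update(model):
    """Serialize the trained model, as an over-the-air update would."""
    return json.dumps(model)

def infer(payload, lane_offset):
    """'DRIVE PX' step: lightweight inference from the deployed weights."""
    model = json.loads(payload)
    return model["w"] * lane_offset

# Synthetic training data: steering proportional to lane offset (w = 2.0).
data = [(0.5, 1.0), (1.0, 2.0), (2.0, 4.0)]
payload = ota_update(train(data))
print(infer(payload, 1.5))  # → 3.0
```

Retraining on new data and pushing a fresh payload is the "repeated regularly" part of the loop.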

NVIDIA DRIVE PX Pegasus will be available to NVIDIA automotive partners in H2 2018, together with NVIDIA DRIVE IX (intelligent experience) SDK, meaning level 5 autonomous driving cars, taxis and trucks based on the solution could become available in a few years.

CrazyPi Board Runs Ubuntu and ROS on Rockchip RK3128 SoC for Robotics & IoT Projects (Crowdfunding)

August 10th, 2017

CrazyPi is a maker board powered by a Rockchip RK3128 quad core Cortex A7 processor that can take various magnetically connected modules such as LIDAR, gimbal, 4G LTE, etc., and runs both Ubuntu and ROS (Robot Operating System) for DIY robotics & IoT projects.


CrazyPi main board specifications:

  • SoC – Rockchip RK3128 quad core Cortex A7 processor @ 1.2 GHz with ARM Mali GPU
  • MCU – ARM Cortex-M3 @ 72 MHz
  • System Memory – 1GB DDR3L @ 1066 MHz
  • Storage – 16GB eMMC flash pre-loaded with Ubuntu and ROS
  • Connectivity – 802.11 a/b/g/n WiFi @ 150 Mbps, Bluetooth 4.0
  • USB – 1x USB 2.0 host port
  • Expansion Headers – Two headers with a total of 36 pins exposing 1x HDMI, 1x speaker, 1x microphone, 3x PWM, 1x I2C, 1x UART, 1x SPDIF, 1x SPI, 1x USB
  • Power Supply – 5V via micro USB port ?
  • Dimensions – Smaller than credit card

The full details are not available yet, but the company claims CrazyPi is “completely open source and DIY”, so I’d assume more details will eventually show up in the CrazyPi GitHub repo (now empty). A cloud service also allows you to stream the webcam output from anywhere in the world.

Webcam View and Map Generated from CrazyPi Robot Kit

What’s quite interesting is that the board is designed to be connected to add-on boards, modules, and accessories allowing you to build robots:

  • Robotic shield board to control motors / servos
  • Media shield board for HDMI output and use the board as a mini computer
  • 4G LTE module (maybe part of the robotic shield board?)
  • Crazyou 4K LIDAR sensor with SLAM (Simultaneous Localization And Mapping) function to automatically create a map of your environment
  • 720p camera module
  • 2-axis gimbal
  • 4-wheel robot chassis
  • 2x 18650 batteries and case

Again, we don’t have the exact details for each, but the promo video explains what can be done with the kits.

Crazyou – that’s the name of the company – has launched the project on Kickstarter to fund mass production with a 200,000 HKD goal (around $25,800). The board is supposed to cost $29, but is not offered standalone in the crowdfunding campaign, so instead you could start with a $59 CrazyPi Media Kit with the mainboard, camera, and media board. If you want the complete robot shown above, you’d have to pledge $466 for the CrazyPi Advanced Kit reward with the camera module, the mainboard, the gimbal, the robotic shield board, battery case and charger, the chassis, and LIDAR. Various bundles are available to match different projects’ requirements. Shipping to most countries adds around $19, and delivery is scheduled for October 2017. There’s not much to see on the Crazyou website, but eventually more details may emerge there.

Thanks to Freire for the tip.

TinyLIDAR is a $15 LIDAR MCU Board based on STMicro VL53L0X Time-of-Flight Ranging Sensor (Crowdfunding)

July 23rd, 2017

LIDAR (Light Detection and Ranging) technology is used in autonomous cars, drones, and some smartphones in order to determine an object’s position much like RADAR systems do, but instead of radio frequencies, it relies on infrared signals. High speed, long range LIDAR systems can cost several hundred dollars, but if you’d like to experiment with the technology, or your project would work just fine with 60 Hz scanning and a 2 meter range, tinyLIDAR could be a fun board to play with using Arduino compatible boards.
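The basic time-of-flight math behind a ranging sensor like the VL53L0X is simple: the sensor times an infrared pulse's round trip, and distance is half the round-trip time multiplied by the speed of light. A quick back-of-the-envelope sketch:

```python
# Time-of-flight ranging: distance = c * t / 2, where t is the
# round-trip time of the light pulse.
C = 299_792_458.0  # speed of light, m/s

def distance_m(round_trip_s):
    """Distance to the target given the measured round-trip time."""
    return C * round_trip_s / 2

def round_trip_s(distance_m):
    """Round-trip time for a target at the given distance."""
    return 2 * distance_m / C

# The sensor's full 2 m range corresponds to a round trip of only
# ~13.3 nanoseconds, which is why ToF sensors need dedicated timing
# hardware rather than an MCU timer.
t = round_trip_s(2.0)
print(f"{t * 1e9:.1f} ns")  # ≈ 13.3 ns
print(distance_m(t))        # ≈ 2.0 m
```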

TinyLIDAR specifications and features:

  • LIDAR Sensor
    • ST VL53L0X Time-of-Flight (ToF) ranging sensor
    • 940nm laser VCSEL
    • Up to 2 meters range
    • Up to 60 Hz sampling rate even with Arduino UNO board
    • Up to 3% accuracy with mm precision
  • MCU – Unnamed dedicated 32-bit MCU (likely STM32) used to abstract the ST PAL API into simple I2C commands
  • Host Interface – 4-pin I2C header; re-configurable I2C address and operation modes
  • Misc – Blue LED, low profile reset button
  • Power Supply – +3 to +5V
  • Power Consumption – 10 µA typical quiescent current in single step mode
  • Dimensions – 25 x 21 mm (2x M2 mounting holes)
  • Weight – <1.5 g

Some of the advantages of the board over competing solutions include lower power consumption, a higher sampling rate (up to 60 Hz), as well as lower memory and code footprints, with 2,604 bytes of program storage and 252 bytes of RAM for a distance reading sketch on Arduino UNO, compared to 6,480 bytes / 414 bytes using the Pololu VL53L0X library with a generic VL53L0X sensor board ($14), all thanks to the MCU on the board. They also claim the board is simpler to use thanks to their I2C command set. The company has only shown 3D renderings of the board, but they do have working samples, as shown in the demo below, with instructions available on instructables.com.
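The "simple I2C command" idea can be sketched as follows: the host writes a short command byte and reads the distance back as two bytes, instead of driving ST's full API itself. The I2C address, command byte, and byte order below are made up for illustration, and a fake bus stands in for real hardware; consult tinyLIDAR's own documentation for the actual protocol.

```python
# Hypothetical command-style I2C exchange with a tinyLIDAR-like board.
# Address, command byte, and byte order are illustrative assumptions.
TINYLIDAR_ADDR = 0x10      # hypothetical I2C address
CMD_READ_DISTANCE = 0x44   # hypothetical 'read distance' command

class FakeI2CBus:
    """Stand-in for a real I2C bus so the example runs without hardware."""
    def __init__(self, distance_mm):
        self.distance_mm = distance_mm
    def write_byte(self, addr, cmd):
        assert addr == TINYLIDAR_ADDR and cmd == CMD_READ_DISTANCE
    def read_bytes(self, addr, n):
        assert addr == TINYLIDAR_ADDR and n == 2
        return [self.distance_mm >> 8, self.distance_mm & 0xFF]

def read_distance_mm(bus):
    """Send the distance command, then read the result as big-endian mm."""
    bus.write_byte(TINYLIDAR_ADDR, CMD_READ_DISTANCE)
    hi, lo = bus.read_bytes(TINYLIDAR_ADDR, 2)
    return (hi << 8) | lo

print(read_distance_mm(FakeI2CBus(1234)))  # → 1234
```

This is the kind of abstraction that keeps the host-side sketch small: the heavy lifting of configuring and polling the VL53L0X stays on the board's own MCU.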

The Arrow Electronics certified project launched on Indiegogo with a $3,000 funding target. A $15 pledge will get you one tinyLIDAR board, but you may as well commit to three boards for $39. Shipping adds $5, and delivery is scheduled for October 2017. If you’d like to get such a solution earlier, without the built-in MCU and the advantages it brings, besides the $14 Pololu module linked above, you’ll also find a VL53L0X board working within a 2.6 V to 5.5 V range on Aliexpress for $8.92 shipped.