AI inference using Images, RTSP Video Stream on NVIDIA Jetson Nano Devkit

Last month I received an NVIDIA Jetson Nano developer kit together with a 52Pi ICE Tower Cooling Fan, and the main goal was to compare the performance of the board with the stock heatsink against the 52Pi heatsink + fan combo.

But the stock heatsink does a very good job of cooling the board, and typical CPU stress tests do not make the processor throttle at all. So I had to stress the GPU as well, and since it takes some effort to set it all up, I’ll report my experience configuring the board and running AI test programs, including object detection on an RTSP video stream.

Setting up NVIDIA Jetson Nano Board

Preparing the board is very much like you’d do with other SBCs such as the Raspberry Pi, and NVIDIA has a nicely put together getting started guide, so I won’t go into too many details here. To summarize:

  1. Download the latest firmware image (nv-jetson-nano-sd-card-image-r32.2.3.zip at the time of the review)
  2. Flash it with balenaEtcher to a MicroSD card, since the Jetson Nano developer kit does not have built-in storage.
  3. Insert the MicroSD card in the slot underneath the module, and connect HDMI, keyboard, and mouse before finally powering up the board.

By default, the board expects to be powered by a 5V power supply through its micro USB port. But to avoid any potential power issues, I connected a 5V/3A power supply to the DC jack and fitted a jumper on the J48 header to switch the power source.

NVIDIA Jetson Nano DC Jack Power

It quickly booted to Ubuntu, and after going through the setup wizard to accept the user agreement, select the language, keyboard layout, and timezone, and set up a user, the system performed some configuration, and within a couple of minutes, we were good to go.

Jetson Nano Ubuntu 18.04 Screenshot

Jetson Nano System Info & NVIDIA Tools

Here’s some system information after updating Ubuntu with dist-upgrade:
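
To gather the same details on your own board, commands along these lines will do the job (/etc/nv_tegra_release is specific to NVIDIA’s L4T images):

  sudo apt update && sudo apt dist-upgrade
  uname -a                      # kernel version and architecture
  cat /etc/nv_tegra_release     # L4T release information
  free -h                       # memory
  df -h /                       # storage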


Loaded modules:
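
The usual way to list them is lsmod:

  lsmod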


GPIOs appear to be properly configured:
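
One way to double-check the GPIO setup yourself is via debugfs (root required):

  sudo cat /sys/kernel/debug/gpio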


NVIDIA’s power model tool (nvpmodel) allows us to check the power mode:
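
Querying the current mode is a one-liner:

  sudo nvpmodel -q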


MAXN is the 10W power mode, and we can change to 5W and check it as follows:
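
On Jetson Nano, mode 0 is MAXN (10W) and mode 1 is the 5W profile, so the switch and check look like this:

  sudo nvpmodel -m 1   # switch to the 5W power mode
  sudo nvpmodel -q     # confirm the active mode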


NVIDIA also provides the tegrastats utility to report real-time utilization, temperature, and power consumption of various parts of the processor:
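
It runs until interrupted, printing one status line per sampling interval:

  tegrastats                          # press Ctrl+C to stop
  sudo tegrastats --interval 1000     # optional: set the sampling period in milliseconds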


It’s rather cryptic, so you may want to check the documentation. For example, at idle as shown above, two cores are in use at a frequency as low as 102 MHz, the CPU temperature is around 35°C, the GPU is basically unused, and the board draws about 1.1 Watts.

AI Hello World

The best way to get started with AI inference is to use the Hello AI World samples, which can be installed as follows:
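
The procedure comes from the jetson-inference project on GitHub, and at the time it looked roughly like this (check the project’s README for the current steps):

  sudo apt install git cmake libpython3-dev python3-numpy
  git clone --recursive https://github.com/dusty-nv/jetson-inference
  cd jetson-inference
  mkdir build && cd build
  cmake ../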


The last command will bring up the Hello AI World dialog to select the models you’d like to use. I just went with the default ones.

Jetson Nano Model Downloader

Once the selected models have been downloaded, you’ll be asked whether you want to install PyTorch:

Jetson Nano PyTorch Installer

If you’re only going to play with pre-trained models, there’s no need to install it, but I did select PyTorch v1.1.0 for Python 2.7 in case I have time to play with training later on.

We can now build the sample:
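
From the build directory, a typical sequence is:

  cd ~/jetson-inference/build
  make -j4
  sudo make install
  sudo ldconfig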


Then test inference on a sample image using the imageNet program:
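
The binaries land in aarch64/bin along with a few test pictures; classifying the bundled orange image goes something like this (names and paths may differ in newer releases):

  cd ~/jetson-inference/build/aarch64/bin
  ./imagenet-console orange_0.jpg output_0.jpg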


Here’s the output of the last command:


It also generated output_0.jpg with the inference info “97.858 orange” overlaid on top of the image.

Tiny YOLO-v3

The imageNet sample does not take a lot of time to run, so I also tried the Tiny YOLO v3 sample, which processes 500 images, as described in a forum post:
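
The sample lives in NVIDIA’s deepstream_reference_apps repository, and building it goes roughly as follows; the prebuild.sh step downloads the YOLO weights, and the exact layout may have changed since, so check the repository’s README:

  cd ~
  git clone https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps
  cd deepstream_reference_apps/yolo
  sudo sh prebuild.sh                    # fetch YOLO weights and config files
  cd apps/trt-yolo
  mkdir build && cd build
  cmake -D CMAKE_BUILD_TYPE=Release ..
  make && sudo make install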


After we’ve built the sample, we have some more work to do. First, I downloaded 5 images from the net with typical objects:


then edited ~/deepstream_reference_apps/yolo/data/test_images.txt to repeat the 5 lines below 100 times to get 500 entries:
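
Rather than copy-pasting by hand, a small shell loop can generate the repeated list; here five_images.txt is just a scratch file holding the five paths (a name picked for this example):

  cd ~/deepstream_reference_apps/yolo/data
  for i in $(seq 1 100); do cat five_images.txt; done > test_images.txt
  wc -l test_images.txt                  # should report 500 lines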


and finally modified ~/deepstream_reference_apps/yolo/config/yolov3-tiny.txt to use kHALF precision:
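
If memory serves, that file is simply a list of command-line flags, so the change boils down to a single line (double-check against your copy of the file):

  # in ~/deepstream_reference_apps/yolo/config/yolov3-tiny.txt
  --precision=kHALF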


We can now run Tiny YOLO inference:
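
Assuming the app was installed as in the build step above, the launch command should look something like this:

  cd ~/deepstream_reference_apps/yolo
  sudo trt-yolo-app --flagfile=config/yolov3-tiny.txt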


That’s the output (not from the first run):


That sample is only used for benchmarking: it took 30.70 ms per inference, which works out to around 32 fps. It does not report the detection results, nor does it display anything on the screen.

Detecting Objects from an RTSP Stream

In theory, I could loop the Tiny YOLO v3 sample for stress testing, but to really stress the GPU continuously, the best option is to perform inference on a video stream. The detectnet-camera sample, part of Jetson Inference (aka AI Hello World), can do the job as long as you have a compatible USB camera.

USB Cameras

I have two older webcams that I bought 10 to 15 years ago. I never managed to make the Logitech Quickcam work in Linux or Android, but I could previously use the smaller one, called “Venus 2.0”, in both operating systems. This time, however, I never managed to make the latter work on either the Jetson Nano or my Ubuntu 18.04 laptop. If you want a MIPI or USB camera that is sure to work with the board, check out the list of compatible cameras.

But then I thought… hey, I do have one camera that works: the webcam in my laptop. Of course, I can’t just connect it to the NVIDIA board, so instead I started an H.264 stream using VLC:
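
The exact command depends on the webcam device and the stream path you choose; something along these lines works with VLC’s v4l2 input (here /dev/video0 and the /test path are just example values):

  cvlc v4l2:///dev/video0 --sout '#transcode{vcodec=h264,vb=800}:rtp{sdp=rtsp://:8554/test}'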


cvlc is the command-line utility for VLC. I could also play the stream on my own laptop:
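
Again with VLC, pointing it at the streaming machine’s address (replace the placeholder IP and path with yours):

  vlc rtsp://<laptop-ip>:8554/test
  vlc --network-caching=200 rtsp://<laptop-ip>:8554/test   # optionally lower the network caching to 200 ms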


I had a lag of about 4 seconds, and I thought setting the network caching option to 200 ms might help, but instead the video became choppy. So the problem is somewhere else, and I did not have time to investigate it for this review. The important part is that I could get the stream to work:

VLC RTSP Stream

When I tried to play the stream on the Jetson Nano with VLC, it would just segfault. But let’s not focus on that, and instead see if we can make the detectnet-camera sample work with our RTSP stream.

Somebody already did that a few months ago, which helped a lot. We need to edit the source file ~/jetson-inference/build/utils/camera/gstCamera.cpp in two places:

  1. Disable CSI camera detection in the gstCamera::ConvertRGBA function:
  2. Hard-code the RTSP stream in the gstCamera::buildLaunchStr function:
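
Before touching the source, it’s worth checking that GStreamer on the Jetson can decode the stream at all; a quick test pipeline (placeholder URL, hardware decoder elements from L4T) could be:

  gst-launch-1.0 rtspsrc location=rtsp://<laptop-ip>:8554/test latency=200 ! \
      rtph264depay ! h264parse ! omxh264dec ! nvvidconv ! nvoverlaysink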

Now we can rebuild the sample, and run the program:
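
The rebuild reuses the usual make/install steps, and the program runs from the bin directory:

  cd ~/jetson-inference/build
  make -j4 && sudo make install
  cd aarch64/bin
  ./detectnet-camera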


and success! Provided you accept that I’m holding a massive donut on top of my head, and that Santa Claus is a teddy bear…

Jetson Nano RTSP Stream Inference

The console is continuously updated with the objects detected and a timing report.


This sample would be a good starting point for connecting one or more IP cameras to Jetson Nano and leveraging AI to reduce false alerts, instead of relying on the primitive PIR detection often used in surveillance cameras. We’ll talk more about CPU and GPU usage, as well as thermals, in the next post.

I’d like to thank Seeed Studio for sending the NVIDIA Jetson Nano developer kit for evaluation. They sell it for $99.00 plus shipping. Alternatively, you’ll also find the board on Amazon or directly from NVIDIA.



3 Comments
Jim st
5 years ago

Thanks for this. I have some RTSP cameras I will try this with.

It’ll be interesting to see if I get the lag that other streaming clients such as VLC have with this stream.

DennisFaucher
4 years ago

Would you mind sharing your entire modified gstCamera.cpp? I tried to emulate your edits, make runs clean, but detectnet-camera fails with:

./detectnet-camera
[gstreamer] initialized gstreamer, version 1.14.5.0
[gstreamer] gstCamera attempting to initialize with GST_SOURCE_NVARGUS, camera 0
[gstreamer] gstCamera pipeline string:
nvarguscamerasrc sensor-id=0 ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, framerate=30/1, format=(string)NV12 ! nvvidconv flip-method=0 ! video/x-raw ! appsink name=mysinkrtspsrc location=rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov ! rtph264depay ! h264parse ! omxh264dec ! appsink name=mysink
[gstreamer] gstCamera failed to create pipeline
[gstreamer] (no property “location” in element “mysinkrtspsrc”)
[gstreamer] failed to init gstCamera (GST_SOURCE_NVARGUS, camera 0)

Thank you.

Andreea
4 years ago

Hi!
How can I replace the SSD_mobilenet pretrained model with a customized one?
