
AI on the Jetson Nano LESSON 62: Create a Streaming IP Camera from a Raspberry Pi Zero W

In this lesson we learn to make a streaming IP camera with a Raspberry Pi Zero W and a Raspberry Pi camera. The Pi will create an RTP stream, which can then be read by a Jetson Nano on the same network. We use OpenCV to read the frames on the NVIDIA Jetson Nano side.

This is the command to launch the Raspberry Pi camera and start the RTP stream. It works well for the Raspberry Pi Camera, Version 1.
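
The exact command isn't reproduced in this archive view; a typical pipeline of this kind pipes raspivid's H.264 output into gst-launch-1.0, packetizes it as RTP and serves it over the network. Port 5000, the resolution and the bitrate below are illustrative choices, not necessarily the lesson's values.

```bash
# Sketch only: raspivid captures H.264 and pipes it into a GStreamer
# pipeline that packetizes the stream as RTP and serves it over TCP.
# Port 5000 is an arbitrary choice; any free port works.
raspivid -n -t 0 -w 1280 -h 720 -fps 30 -b 2500000 -o - | \
  gst-launch-1.0 -v fdsrc ! h264parse ! rtph264pay config-interval=1 pt=96 ! \
  gdppay ! tcpserversink host=0.0.0.0 port=5000
```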

For the Version 2 Camera, I recommend:
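
The lesson's exact recommendation isn't shown in this archive view; a plausible variant, offered only as a placeholder, keeps the same pipeline but selects a native Version 2 (IMX219) sensor mode:

```bash
# Assumed variant for the V2 (IMX219) camera: same pipeline, but capture
# at a native V2 sensor mode (-md 5 is 1640x922, binned 16:9) and let
# raspivid scale to 1280x720 for streaming. Tune -md/-w/-h/-fps/-b to taste.
raspivid -n -t 0 -md 5 -w 1280 -h 720 -fps 30 -b 2500000 -o - | \
  gst-launch-1.0 -v fdsrc ! h264parse ! rtph264pay config-interval=1 pt=96 ! \
  gdppay ! tcpserversink host=0.0.0.0 port=5000
```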

This is the GStreamer code on the Jetson Nano side to grab the RTP frames. In host= below, be sure to use the IP address of your Raspberry Pi.
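
The pipeline string itself isn't reproduced in this archive view; below is a sketch of a receiver that matches the sender sketched above (GDP-framed RTP/H.264 over TCP on port 5000), assuming OpenCV on the Nano was built with GStreamer support. Replace the placeholder address in host= with your Pi's IP.

```python
import cv2

# Placeholder IP: replace host= with the address of YOUR Raspberry Pi.
# avdec_h264 is the software decoder from gst-libav; a hardware decoder
# (e.g. omxh264dec on the Nano) can be substituted if available.
camSet = ('tcpclientsrc host=192.168.1.27 port=5000 ! gdpdepay ! '
          'rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! '
          'video/x-raw, format=BGR ! appsink drop=true sync=false')

cam = cv2.VideoCapture(camSet, cv2.CAP_GSTREAMER)
while True:
    ret, frame = cam.read()
    if not ret:
        break
    cv2.imshow('piCam', frame)
    if cv2.waitKey(1) == ord('q'):
        break
cam.release()
cv2.destroyAllWindows()
```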

 

 

AI on the Jetson Nano LESSON 61: Image Recognition and Speech (TTS) on the Nano

In this video lesson we learn how to add speech to our NVIDIA Jetson Nano. We demonstrate how the Jetson can not only recognize an item, but also audibly speak the name of the item it sees. The video takes you through the process step by step and shows you how to make it all work together properly. For your convenience, the code we developed is included below.
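
That code is not reproduced in this archive view; as a stand-in, here is a minimal sketch of the idea, assuming NVIDIA's jetson.inference/jetson.utils Python bindings for classification and the espeak command-line program for speech (the specific TTS tool, camera setup and threshold are assumptions, not necessarily what the lesson uses).

```python
import subprocess
import jetson.inference
import jetson.utils

# GoogLeNet classifier from jetson-inference; 1280x720 CSI camera capture.
net = jetson.inference.imageNet('googlenet')
cam = jetson.utils.gstCamera(1280, 720, '0')   # '0' = CSI camera sensor id
display = jetson.utils.glDisplay()

lastItem = ''
while display.IsOpen():
    img, width, height = cam.CaptureRGBA()
    classID, confidence = net.Classify(img, width, height)
    item = net.GetClassDesc(classID)
    display.RenderOnce(img, width, height)
    display.SetTitle(item + ' {:.0f}%'.format(confidence * 100))
    # Only speak when the detected item changes, so espeak is not spammed.
    # Note: the call blocks the loop while the phrase is spoken.
    if confidence > 0.5 and item != lastItem:
        subprocess.call(['espeak', 'I see a ' + item])
        lastItem = item
```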

 

Jetson Xavier NX Lesson 12: Intelligent Scanning for Objects of Interest

In this video tutorial we show how a camera on a pan/tilt control system can be programmed to search for an object of interest, and then track it once found. Our system has two independent camera systems, and each can track a separate item of interest independently. The code is written in Python, using the OpenCV library. The video takes you through the lesson step by step, and the code is included below for your convenience.

If you want to play along at home, we are using the Jetson Xavier NX, which you can pick up HERE. You will also need two of the bracket/servo kits, which you can get HERE, and two Raspberry Pi Version 2 cameras, available HERE.
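
The full two-camera program is in the original post; as a simplified illustration of the scan-then-track logic, here is a single-camera sketch that assumes an HSV color threshold defines the object of interest and an Adafruit PCA9685 ServoKit board drives the pan/tilt servos (the capture source, channel numbers, color range, gains and signs are placeholders to tune for your rig).

```python
import cv2
import numpy as np
from adafruit_servokit import ServoKit

kit = ServoKit(channels=16)           # PCA9685 servo driver (assumed wiring)
panPos, tiltPos, scanDir = 90, 90, 1  # start centered, sweep to the right
kit.servo[0].angle = panPos
kit.servo[1].angle = tiltPos

cam = cv2.VideoCapture(0)             # placeholder source; the lesson uses CSI cameras
while True:
    ret, frame = cam.read()
    if not ret:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([100, 100, 50]), np.array([120, 255, 255]))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        # TRACK: nudge the servos to keep the largest blob centered
        # (proportional correction; gain and sign depend on servo mounting).
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        panPos -= ((x + w / 2) - frame.shape[1] / 2) / 40
        tiltPos += ((y + h / 2) - frame.shape[0] / 2) / 40
    else:
        # SCAN: no object in view, keep sweeping the pan servo back and forth.
        panPos += 2 * scanDir
        if panPos >= 180 or panPos <= 0:
            scanDir *= -1
    panPos = max(0, min(180, panPos))
    tiltPos = max(0, min(180, tiltPos))
    kit.servo[0].angle = panPos
    kit.servo[1].angle = tiltPos
    cv2.imshow('cam', frame)
    if cv2.waitKey(1) == ord('q'):
        break
cam.release()
cv2.destroyAllWindows()
```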

 

Jetson Xavier NX Lesson 11: Independently Tracking Different Objects in Different Cameras

In this video lesson we show how two Raspberry Pi cameras can independently track two different objects of interest. As a demonstration, we track two different colors, with pan/tilt servo systems adjusting to keep each object of interest in the center of its field of view.

In this project we are using the Jetson Xavier NX, which you can pick up HERE. You will also need two of the bracket/servo kits, which you can get HERE, and two Raspberry Pi Version 2 cameras, available HERE.
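
Again, the lesson's own code is in the original post; the sketch below only illustrates the structure of driving two pan/tilt rigs independently from two CSI cameras, reusing the same proportional-centering idea as in the previous sketch. The camera pipelines, HSV ranges and servo channels are placeholders.

```python
import cv2
import numpy as np
from adafruit_servokit import ServoKit

def gst_pipe(sensor_id):
    # Placeholder nvarguscamerasrc pipeline for CSI camera `sensor_id`.
    return ('nvarguscamerasrc sensor-id={} ! video/x-raw(memory:NVMM), '
            'width=1280, height=720, framerate=30/1 ! nvvidconv ! '
            'video/x-raw, format=BGRx ! videoconvert ! '
            'video/x-raw, format=BGR ! appsink'.format(sensor_id))

kit = ServoKit(channels=16)
cams = [cv2.VideoCapture(gst_pipe(0), cv2.CAP_GSTREAMER),
        cv2.VideoCapture(gst_pipe(1), cv2.CAP_GSTREAMER)]
# One HSV range and one (pan, tilt) servo channel pair per camera.
ranges = [((100, 100, 50), (120, 255, 255)),   # camera 0 tracks a blue-ish target
          ((0, 150, 50), (10, 255, 255))]      # camera 1 tracks a red-ish target
channels = [(0, 1), (2, 3)]
angles = [[90.0, 90.0], [90.0, 90.0]]

while True:
    for i, cam in enumerate(cams):
        ret, frame = cam.read()
        if not ret:
            continue
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, np.array(ranges[i][0]), np.array(ranges[i][1]))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
            # Proportional correction toward frame center (tune gains/signs).
            angles[i][0] -= ((x + w / 2) - frame.shape[1] / 2) / 50
            angles[i][1] += ((y + h / 2) - frame.shape[0] / 2) / 50
            angles[i] = [max(0, min(180, a)) for a in angles[i]]
            kit.servo[channels[i][0]].angle = angles[i][0]
            kit.servo[channels[i][1]].angle = angles[i][1]
        cv2.imshow('cam' + str(i), frame)
    if cv2.waitKey(1) == ord('q'):
        break
```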

 

AI on the Jetson Nano LESSON 52: Improving Picture Quality of the Raspberry Pi Camera with GStreamer

In this lesson we want to pause and work on improving the image quality of the video stream coming from the Raspberry Pi camera. Right now, we are using a boilerplate GStreamer string to launch the Raspberry Pi camera. In the video above we show how image quality can be drastically improved by tweaking the GStreamer launch string.

Based on the video above, we develop a launch string that greatly improves image quality. Below, for your enjoyment, is the code that optimizes picture quality.

First, this is the key line that results in excellent video quality:
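
The exact line from the lesson is not reproduced in this archive view; below is a sketch of what such a tuned launch string can look like, using nvarguscamerasrc's white-balance (wbmode), temporal noise reduction (tnr-mode/tnr-strength) and edge enhancement (ee-mode/ee-strength) properties plus a videobalance element. The particular values are illustrative starting points, not the lesson's exact settings.

```python
# Assumed display size; tuned nvarguscamerasrc launch string for the CSI camera.
dispW, dispH = 1280, 720
camSet = ('nvarguscamerasrc wbmode=3 tnr-mode=2 tnr-strength=1 '
          'ee-mode=2 ee-strength=1 ! '
          'video/x-raw(memory:NVMM), width=3264, height=2464, '
          'format=NV12, framerate=21/1 ! '
          'nvvidconv flip-method=2 ! '
          'video/x-raw, width=' + str(dispW) + ', height=' + str(dispH) + ', format=BGRx ! '
          'videobalance contrast=1.5 brightness=-0.2 saturation=1.2 ! '
          'videoconvert ! appsink')
```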

And here is the overall code for running and displaying from the camera with the enhanced quality:
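
Again a hedged sketch rather than the original listing: it simply feeds a launch string like the one above into cv2.VideoCapture (OpenCV built with GStreamer support) and displays frames until 'q' is pressed.

```python
import cv2

# Condensed version of the tuned launch string sketched above.
camSet = ('nvarguscamerasrc wbmode=3 tnr-mode=2 tnr-strength=1 ee-mode=2 ee-strength=1 ! '
          'video/x-raw(memory:NVMM), width=3264, height=2464, format=NV12, framerate=21/1 ! '
          'nvvidconv flip-method=2 ! video/x-raw, width=1280, height=720, format=BGRx ! '
          'videobalance contrast=1.5 brightness=-0.2 saturation=1.2 ! videoconvert ! appsink')

cam = cv2.VideoCapture(camSet, cv2.CAP_GSTREAMER)
while True:
    ret, frame = cam.read()
    if not ret:
        break
    cv2.imshow('nanoCam', frame)
    if cv2.waitKey(1) == ord('q'):
        break
cam.release()
cv2.destroyAllWindows()
```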

 

Now, once we have optimized the GStreamer launch string, we need to consider which path to take moving forward. In lesson #50 we saw that we could either control the camera using the NVIDIA Jetson Utilities library, or control it normally from OpenCV. The advantage of our old OpenCV method is that it gives us more control of the camera. The advantage of the Jetson Utilities method is that it appears to run faster and, for the Raspberry Pi camera, to have less latency. Below are two code examples for the two methods. In the video lesson above, we figure out the best strategy by tweaking the parameters in these two programs.

OPTION #1: Launch the cameras using OpenCV
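
The original Option 1 listing appears in the full post; here is a sketch of the idea, assuming the tuned CSI pipeline from above plus a USB webcam at index 1 (the second camera and its index are assumptions), with a rough frames-per-second readout so the two options can be compared.

```python
import cv2
import time

# Tuned CSI-camera pipeline (condensed from above) and an assumed USB webcam.
camSet = ('nvarguscamerasrc wbmode=3 tnr-mode=2 tnr-strength=1 ee-mode=2 ee-strength=1 ! '
          'video/x-raw(memory:NVMM), width=3264, height=2464, format=NV12, framerate=21/1 ! '
          'nvvidconv flip-method=2 ! video/x-raw, width=1280, height=720, format=BGRx ! '
          'videobalance contrast=1.5 brightness=-0.2 saturation=1.2 ! videoconvert ! appsink')
piCam = cv2.VideoCapture(camSet, cv2.CAP_GSTREAMER)
webCam = cv2.VideoCapture(1)

fps, tLast = 0.0, time.time()
while True:
    ret1, frame1 = piCam.read()
    ret2, frame2 = webCam.read()
    if not ret1:
        break
    # Smoothed frames-per-second estimate, for comparison with Option 2.
    now = time.time()
    fps = 0.9 * fps + 0.1 / (now - tLast)
    tLast = now
    cv2.putText(frame1, 'FPS: {:.1f}'.format(fps), (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2)
    cv2.imshow('piCam', frame1)
    if ret2:
        cv2.imshow('webCam', frame2)
    if cv2.waitKey(1) == ord('q'):
        break
piCam.release()
webCam.release()
cv2.destroyAllWindows()
```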

OPTION #2: Control Camera with NVIDIA Jetson Utilities Library
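
And a sketch of the Option 2 idea, assuming the legacy jetson.utils gstCamera/glDisplay API from NVIDIA's jetson-inference project; the same smoothed FPS estimate makes it easy to compare against Option 1. Width, height and the sensor id are placeholders.

```python
import time
import jetson.utils

# CSI camera handled by the jetson.utils library (hardware-accelerated
# capture and OpenGL display).
cam = jetson.utils.gstCamera(1280, 720, '0')   # '0' = CSI camera sensor id
display = jetson.utils.glDisplay()

fps, tLast = 0.0, time.time()
while display.IsOpen():
    frame, width, height = cam.CaptureRGBA()
    now = time.time()
    fps = 0.9 * fps + 0.1 / (now - tLast)      # smoothed FPS, as in Option 1
    tLast = now
    display.RenderOnce(frame, width, height)
    display.SetTitle('Option 2: jetson.utils  {:.1f} FPS'.format(fps))
```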