In this lesson we pause to work on improving the image quality of the video stream coming from the Raspberry Pi camera. Up to now we have been using a boilerplate GStreamer launch string to start the Raspberry Pi camera. In the video above we show how image quality can be drastically improved by tweaking that GStreamer launch string.
Following the video above, we achieve greatly improved image quality by adjusting the GStreamer launch string. Below, for your enjoyment, is the code that optimizes picture quality.
First, this is the key line that results in excellent video quality:
# GStreamer code for improved Raspberry Pi camera quality
# Or, if you have a webcam, uncomment the next line
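As a sketch of the kind of launch string involved: the pipeline below uses the Jetson's `nvarguscamerasrc` element for the rPi CSI camera, but the specific values chosen here (capture resolution, framerate, `wbmode`, flip method, display size) are illustrative assumptions, not the exact string developed in the video.

```python
# Sketch: building a GStreamer launch string for the Raspberry Pi
# camera (nvarguscamerasrc) on the Jetson Nano. The specific numbers
# below are illustrative assumptions -- tune them as shown in the video.
dispW, dispH = 1280, 720   # window size on screen
flip = 2                   # flip-method; depends on how the camera is mounted

camSet = (
    'nvarguscamerasrc wbmode=3 ! '                 # wbmode sets white balance
    'video/x-raw(memory:NVMM), width=3264, height=2464, '
    'format=NV12, framerate=21/1 ! '               # full-sensor capture mode
    f'nvvidconv flip-method={flip} ! '             # rotate/flip in hardware
    f'video/x-raw, width={dispW}, height={dispH}, format=BGRx ! '
    'videoconvert ! video/x-raw, format=BGR ! appsink'  # hand BGR frames to OpenCV
)
print(camSet)
```

The string would then be passed to `cv2.VideoCapture(camSet)` in place of the boilerplate version.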
Now that we have optimized the GStreamer launch string, we need to consider which path to take moving forward. In Lesson #50 we saw that we could control the camera either with the NVIDIA Jetson Utilities or in the normal way from OpenCV. The advantage of our old OpenCV method is that it gives us more control over the camera. The advantage of the Jetson Utilities method is that it appears to run faster and, for the rPi camera, has less latency. Below are code examples for the two methods. In the video lesson above, we figure out the best strategy by tweaking the parameters in these two programs.
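The two approaches can be sketched side by side as below. This is a minimal sketch, not the exact programs from the lesson: the helper `gst_pipeline()` and its parameter values are assumptions, and the `cv2` / `jetson.utils` imports are deferred into the functions so the file can be read on a machine without those libraries installed.

```python
# Sketch comparing the two capture methods on the Jetson Nano.

def gst_pipeline(dispW=1280, dispH=720, flip=2):
    """Build an illustrative nvarguscamerasrc launch string (assumed values)."""
    return (
        'nvarguscamerasrc wbmode=3 ! '
        'video/x-raw(memory:NVMM), width=3264, height=2464, '
        'format=NV12, framerate=21/1 ! '
        f'nvvidconv flip-method={flip} ! '
        f'video/x-raw, width={dispW}, height={dispH}, format=BGRx ! '
        'videoconvert ! video/x-raw, format=BGR ! appsink'
    )

def run_opencv():
    # Method 1: plain OpenCV -- more control, since we own the whole pipeline.
    import cv2
    cam = cv2.VideoCapture(gst_pipeline(), cv2.CAP_GSTREAMER)
    while True:
        ret, frame = cam.read()
        if not ret:
            break
        cv2.imshow('piCam', frame)
        if cv2.waitKey(1) == ord('q'):
            break
    cam.release()
    cv2.destroyAllWindows()

def run_jetson_utils():
    # Method 2: NVIDIA Jetson Utilities -- typically faster, lower latency.
    import jetson.utils
    cam = jetson.utils.gstCamera(1280, 720, '0')   # '0' = CSI (rPi) camera
    disp = jetson.utils.glDisplay()
    while disp.IsOpen():
        frame, w, h = cam.CaptureRGBA()
        disp.RenderOnce(frame, w, h)

if __name__ == '__main__':
    run_opencv()   # swap in run_jetson_utils() to compare speed and latency
```

Timing each loop (for example with `time.time()` around the read/display calls) is one way to compare the frame rates of the two methods, as we do by experiment in the video.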
In this video lesson we show how to recognize and identify faces in live video on the Jetson Nano using OpenCV. A separate program trains the system on known faces; that work was described in an earlier lesson in this series. Below is a copy of the code we develop in the video above.
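A minimal sketch of the recognition side is below. It assumes the `face_recognition` library and a pickle file of known names and encodings produced by the separate training program; the file name `train.pkl`, the helper `best_match()`, and the other variable names are illustrative, not the exact ones from the lesson.

```python
# Sketch: recognize known faces in live video frames.
import numpy as np

def best_match(face_encoding, known_encodings, known_names, tolerance=0.6):
    """Return the closest known name, or 'Unknown' if none is within tolerance."""
    if len(known_encodings) == 0:
        return 'Unknown'
    # Euclidean distance between this face and every known encoding
    dists = np.linalg.norm(np.asarray(known_encodings) - face_encoding, axis=1)
    i = int(np.argmin(dists))
    return known_names[i] if dists[i] <= tolerance else 'Unknown'

def run():
    import pickle, cv2, face_recognition
    with open('train.pkl', 'rb') as f:          # produced by the training program
        known_names, known_encodings = pickle.load(f)
    cam = cv2.VideoCapture(0)                   # or the GStreamer launch string
    while True:
        ok, frame = cam.read()
        if not ok:
            break
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        locs = face_recognition.face_locations(rgb)
        encs = face_recognition.face_encodings(rgb, locs)
        for (top, right, bottom, left), enc in zip(locs, encs):
            name = best_match(enc, known_encodings, known_names)
            cv2.rectangle(frame, (left, top), (right, bottom), (0, 255, 0), 2)
            cv2.putText(frame, name, (left, top - 8),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
        cv2.imshow('Faces', frame)
        if cv2.waitKey(1) == ord('q'):
            break
    cam.release()
    cv2.destroyAllWindows()

if __name__ == '__main__':
    run()
```

The `tolerance` value trades false matches against missed matches; smaller values are stricter about declaring a face "known."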