In this lesson we learn how to incorporate a push-button switch into our Jetson Nano projects. We explain the concept of a pull-up resistor, and show how to configure the GPIO pins as inputs. This will allow you to take your NVIDIA Jetson Nano projects to new heights. Enjoy!
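To make the pull-up idea concrete, here is a minimal sketch of reading a button with the Jetson.GPIO library. The pin number and wiring are assumptions for illustration (button between the pin and ground, with a pull-up resistor to 3.3 V), not the exact circuit from the lesson; call `main()` on the Nano itself.

```python
def is_pressed(level):
    """With a pull-up resistor the idle line reads HIGH (1);
    pressing the button shorts the pin to ground, so it reads LOW (0)."""
    return level == 0

def main():
    import Jetson.GPIO as GPIO   # ships with the Nano's JetPack image
    BUTTON_PIN = 18              # assumption: any free BOARD-numbered pin works
    GPIO.setmode(GPIO.BOARD)
    GPIO.setup(BUTTON_PIN, GPIO.IN)  # the external pull-up holds the line HIGH
    try:
        while True:
            if is_pressed(GPIO.input(BUTTON_PIN)):
                print('button pressed')
    finally:
        GPIO.cleanup()
```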
Category Archives: Jetson Nano
AI on the Jetson Nano LESSON 56: Using the GPIO Pins on the Jetson Nano
In this lesson we show how to interact with the GPIO pins on the NVIDIA Jetson Nano. The GPIO pins on the Jetson Nano have very limited current capability, so you must learn to use a PN2222 BJT transistor in order to control things like LEDs or other components. In this lesson we show how the Jetson Nano can be used to control a standard LED.
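A quick bit of arithmetic shows why the transistor trick works. The component values below (3.3 V GPIO level, ~0.7 V base-emitter drop, 1 mA of base current) are illustrative assumptions, not values stated in the lesson:

```python
def base_resistor(v_gpio=3.3, v_be=0.7, i_base=0.001):
    """Ohm's law across the base resistor: R = (V_gpio - V_BE) / I_base.
    An NPN transistor like the PN2222 has a current gain well over 100,
    so 1 mA from the GPIO pin at the base can switch far more collector
    current through the LED than the pin could ever source directly."""
    return (v_gpio - v_be) / i_base

print(round(base_resistor()))  # about 2600 ohms
```

In practice a standard 1 kΩ or 2.2 kΩ resistor is a typical choice for the base.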
AI on the Jetson Nano LESSON 53: Object Detection and Recognition in OpenCV
In this video lesson we learn how to use the NVIDIA Jetson Inference tools to detect objects in live video. The software developed in this lesson is included below for your convenience.
```python
import jetson.inference
import jetson.utils
import time
import cv2
import numpy as np

timeStamp=time.time()
fpsFilt=0
net=jetson.inference.detectNet('ssd-mobilenet-v2',threshold=.5)
dispW=1280
dispH=720
flip=2
font=cv2.FONT_HERSHEY_SIMPLEX

# Gstreamer code for improved Raspberry Pi Camera Quality
#camSet='nvarguscamerasrc wbmode=3 tnr-mode=2 tnr-strength=1 ee-mode=2 ee-strength=1 ! video/x-raw(memory:NVMM), width=3264, height=2464, format=NV12, framerate=21/1 ! nvvidconv flip-method='+str(flip)+' ! video/x-raw, width='+str(dispW)+', height='+str(dispH)+', format=BGRx ! videoconvert ! video/x-raw, format=BGR ! videobalance contrast=1.5 brightness=-.2 saturation=1.2 ! appsink'
#cam=cv2.VideoCapture(camSet)
#cam=jetson.utils.gstCamera(dispW,dispH,'0')
cam=cv2.VideoCapture('/dev/video1')
cam.set(cv2.CAP_PROP_FRAME_WIDTH, dispW)
cam.set(cv2.CAP_PROP_FRAME_HEIGHT, dispH)
#cam=jetson.utils.gstCamera(dispW,dispH,'/dev/video1')
#display=jetson.utils.glDisplay()
#while display.IsOpen():
while True:
    #img, width, height = cam.CaptureRGBA()
    _,img = cam.read()
    height=img.shape[0]
    width=img.shape[1]
    frame=cv2.cvtColor(img,cv2.COLOR_BGR2RGBA).astype(np.float32)
    frame=jetson.utils.cudaFromNumpy(frame)
    detections=net.Detect(frame, width, height)
    for detect in detections:
        #print(detect)
        ID=detect.ClassID
        top=detect.Top
        left=detect.Left
        bottom=detect.Bottom
        right=detect.Right
        item=net.GetClassDesc(ID)
        print(item,top,left,bottom,right)
    #display.RenderOnce(img,width,height)
    dt=time.time()-timeStamp
    timeStamp=time.time()
    fps=1/dt
    fpsFilt=.9*fpsFilt + .1*fps
    #print(str(round(fps,1))+' fps')
    cv2.putText(img,str(round(fpsFilt,1))+' fps',(0,30),font,1,(0,0,255),2)
    cv2.imshow('detCam',img)
    cv2.moveWindow('detCam',0,0)
    if cv2.waitKey(1)==ord('q'):
        break
cam.release()
cv2.destroyAllWindows()
```
AI on the Jetson Nano LESSON 52: Improving Picture Quality of the Raspberry Pi Camera with Gstreamer
In this lesson we want to pause and work on improving the image quality of the video stream coming from the Raspberry Pi camera. Right now, we are using a boilerplate Gstreamer string to launch the Raspberry Pi camera. In the video above we show how image quality can be drastically improved by tweaking the Gstreamer launch string.
Based on the video above, we achieve greatly improved image quality by adjusting the Gstreamer launch string. Below, for your enjoyment, is the code that will optimize picture quality.
First, this is the key line that results in excellent video quality:
```python
# Gstreamer code for improved Raspberry Pi Camera Quality
camSet='nvarguscamerasrc wbmode=3 tnr-mode=2 tnr-strength=1 ee-mode=2 ee-strength=1 ! video/x-raw(memory:NVMM), width=3264, height=2464, format=NV12, framerate=21/1 ! nvvidconv flip-method='+str(flip)+' ! video/x-raw, width='+str(dispW)+', height='+str(dispH)+', format=BGRx ! videoconvert ! video/x-raw, format=BGR ! videobalance contrast=1.5 brightness=-.2 saturation=1.2 ! appsink'
```
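Because the launch string is long and easy to mistype, it can help to build it from the tunable values instead of editing the string by hand. The helper name below is my own, not from the lesson; it assembles the same pipeline shown above:

```python
def rpi_cam_pipeline(dispW=1280, dispH=720, flip=2,
                     contrast=1.5, brightness=-0.2, saturation=1.2):
    """Assemble the enhanced-quality GStreamer launch string with the
    tunable display size, flip method, and videobalance settings
    exposed as parameters."""
    return (
        'nvarguscamerasrc wbmode=3 tnr-mode=2 tnr-strength=1 '
        'ee-mode=2 ee-strength=1 ! '
        'video/x-raw(memory:NVMM), width=3264, height=2464, '
        'format=NV12, framerate=21/1 ! '
        'nvvidconv flip-method=' + str(flip) + ' ! '
        'video/x-raw, width=' + str(dispW) + ', height=' + str(dispH) +
        ', format=BGRx ! videoconvert ! video/x-raw, format=BGR ! '
        'videobalance contrast=' + str(contrast) +
        ' brightness=' + str(brightness) +
        ' saturation=' + str(saturation) + ' ! appsink'
    )

camSet = rpi_cam_pipeline()   # then: cam = cv2.VideoCapture(camSet)
```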
And here is the overall code for running and displaying from the camera with the enhanced quality:
```python
import cv2
print(cv2.__version__)
dispW=1280
dispH=720
flip=2
# Use these next two lines for the Pi Camera
camSet='nvarguscamerasrc wbmode=3 tnr-mode=2 tnr-strength=1 ee-mode=2 ee-strength=1 ! video/x-raw(memory:NVMM), width=3264, height=2464, format=NV12, framerate=21/1 ! nvvidconv flip-method='+str(flip)+' ! video/x-raw, width='+str(dispW)+', height='+str(dispH)+', format=BGRx ! videoconvert ! video/x-raw, format=BGR ! videobalance contrast=1.5 brightness=-.2 saturation=1.2 ! appsink'
cam=cv2.VideoCapture(camSet)
# Or, if you have a web cam, uncomment the next line
#cam=cv2.VideoCapture('/dev/video1')
while True:
    ret, frame = cam.read()
    cv2.imshow('nanoCam',frame)
    cv2.moveWindow('nanoCam',0,0)
    if cv2.waitKey(1)==ord('q'):
        break
cam.release()
cv2.destroyAllWindows()
```
Now, once we have optimized the Gstreamer launch string, we need to consider which path to take moving forward. In Lesson 50 we saw that we could either control the camera using the NVIDIA Jetson Utilities, or control the camera normally from OpenCV. The advantage of our old OpenCV method is that it gives us more control of the camera. The advantage of the Jetson Utilities method is that it appears to run faster and, for the rPi camera, to have less latency. Below are code examples for the two methods. In the video lesson above, we figure out the best strategy by tweaking the parameters in these two programs.
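One detail both programs share is how they measure speed: the raw per-frame FPS is noisy, so each listing smooths it with an exponential moving average (the `fpsFilter=.95*fpsFilter+.05*fps` line). A small sketch of that filter in isolation:

```python
def ema(prev, sample, alpha=0.05):
    """Exponential moving average: new = (1 - alpha)*prev + alpha*sample.
    A small alpha gives a steadier reading that responds slowly to
    changes; alpha=0.05 matches the .95/.05 weights in the listings."""
    return (1 - alpha) * prev + alpha * sample

# smoothing a stream of per-frame FPS measurements
fpsFilter = 0.0
for fps in [30.0, 31.0, 29.0, 30.0]:
    fpsFilter = ema(fpsFilter, fps)
```

Starting the filter at 0 means the displayed FPS ramps up toward the true value over the first several seconds, which you can see happen on screen when the programs launch.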
OPTION #1: Launch the cameras using OpenCV
```python
import jetson.inference
import jetson.utils
import cv2
import numpy as np
import time

width=1280
height=720
dispW=width
dispH=height
flip=2

camSet='nvarguscamerasrc ! video/x-raw(memory:NVMM), width=3264, height=2464, format=NV12, framerate=21/1 ! nvvidconv flip-method='+str(flip)+' ! video/x-raw, width='+str(dispW)+', height='+str(dispH)+', format=BGRx ! videoconvert ! video/x-raw, format=BGR ! videobalance contrast=1.5 brightness=-.3 saturation=1.2 ! appsink'
cam1=cv2.VideoCapture(camSet)
#cam1=cv2.VideoCapture('/dev/video1')
#cam1.set(cv2.CAP_PROP_FRAME_WIDTH,dispW)
#cam1.set(cv2.CAP_PROP_FRAME_HEIGHT,dispH)
net=jetson.inference.imageNet('alexnet')
font=cv2.FONT_HERSHEY_SIMPLEX
timeMark=time.time()
fpsFilter=0
while True:
    _,frame=cam1.read()
    img=cv2.cvtColor(frame,cv2.COLOR_BGR2RGBA).astype(np.float32)
    img=jetson.utils.cudaFromNumpy(img)
    classID, confidence = net.Classify(img, width, height)
    item=net.GetClassDesc(classID)
    dt=time.time()-timeMark
    fps=1/dt
    fpsFilter=.95*fpsFilter + .05*fps
    timeMark=time.time()
    cv2.putText(frame,str(round(fpsFilter,1))+' fps '+item,(0,30),font,1,(0,0,255),2)
    cv2.imshow('recCam',frame)
    cv2.moveWindow('recCam',0,0)
    if cv2.waitKey(1)==ord('q'):
        break
cam1.release()
cv2.destroyAllWindows()
```
OPTION #2: Control the Camera with the NVIDIA Jetson Utilities Library
```python
import jetson.inference
import jetson.utils
import time
import cv2
import numpy as np

width=1280
height=720
#cam=jetson.utils.gstCamera(width,height,'/dev/video1')
cam=jetson.utils.gstCamera(width,height,'0')
net=jetson.inference.imageNet('googlenet')
fpsFilter=0
timeMark=time.time()
font=cv2.FONT_HERSHEY_SIMPLEX
while True:
    frame, width, height = cam.CaptureRGBA(zeroCopy=1)
    classID, confidence = net.Classify(frame, width, height)
    item = net.GetClassDesc(classID)
    dt=time.time()-timeMark
    fps=1/dt
    fpsFilter=.95*fpsFilter+.05*fps
    timeMark=time.time()
    frame=jetson.utils.cudaToNumpy(frame,width,height,4)
    frame=cv2.cvtColor(frame, cv2.COLOR_RGBA2BGR).astype(np.uint8)
    cv2.putText(frame,str(round(fpsFilter,1))+' '+item,(0,30),font,1,(0,0,255),2)
    cv2.imshow('webCam',frame)
    cv2.moveWindow('webCam',0,0)
    if cv2.waitKey(1)==ord('q'):
        break
cam.Close()   # gstCamera is closed with Close(), not OpenCV's release()
cv2.destroyAllWindows()
```
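In Option #2 the frame comes back from `cudaToNumpy` as a float32 RGBA array, which OpenCV then converts to 8-bit BGR for display. As an illustrative sketch (not code from the lesson), the channel shuffle and cast can be spelled out in plain NumPy:

```python
import numpy as np

def rgba_float_to_bgr_uint8(frame):
    """The channel reorder cv2.cvtColor(frame, cv2.COLOR_RGBA2BGR)
    performs, plus the uint8 cast from the listing: drop the alpha
    channel, reverse R<->B, and clip into the 8-bit range."""
    bgr = frame[..., 2::-1]            # channels 2,1,0 -> B,G,R (alpha dropped)
    return np.clip(bgr, 0, 255).astype(np.uint8)

# dummy 2x2 RGBA frame with a pure-red pixel
frame = np.zeros((2, 2, 4), np.float32)
frame[..., 0] = 255.0                  # R channel
print(rgba_float_to_bgr_uint8(frame)[0, 0])  # B, G, R = 0, 0, 255
```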
AI on the Jetson Nano LESSON 51: Improving NVIDIA Jetson Inference Library for RPi Camera
Here are the lines of code I used in the video to fix the GStreamer command. You can copy them below; copy all the code, including the trailing semicolon.
```cpp
ss << "nvarguscamerasrc wbmode=3 sensor-id=" << mSensorCSI << " ! video/x-raw(memory:NVMM), width=(int)3264, height=(int)2464, framerate=21/1, format=(string)NV12 ! nvvidconv flip-method=" << flipMethod << " ! ";
ss << "video/x-raw , width=(int)" << mWidth << ", height=(int)" << mHeight << " ! appsink name=mysink";
```