In this lesson we learn how to incorporate a push-button switch into our Jetson Nano projects. We explain the concept of a pull-up resistor, and show how to configure the GPIO pins as inputs. This will allow you to take your NVIDIA Jetson Nano projects to new heights. Enjoy!
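As a minimal sketch of the idea: with a pull-up resistor, the input pin reads HIGH while the button is open and is pulled LOW when the button connects it to ground, so a reading of 0 means "pressed". The pin number (physical pin 18) and wiring here are illustrative assumptions, not taken from the lesson.

```python
# Sketch only: pin 18 and the wiring are assumptions, not from the lesson.
# With a pull-up resistor the input idles HIGH (1); pressing the button
# pulls the pin to ground, so a raw reading of 0 means "pressed".

def button_pressed(level):
    # Convert a raw GPIO input level into a pressed/released flag.
    return level == 0

# On the Jetson itself this pairs with the Jetson.GPIO library:
#
#   import Jetson.GPIO as GPIO
#   GPIO.setmode(GPIO.BOARD)      # use physical pin numbering
#   GPIO.setup(18, GPIO.IN)       # pin 18 as input (external pull-up)
#   while True:
#       if button_pressed(GPIO.input(18)):
#           print('Button pressed')
```

The hardware-dependent calls are shown as comments since they only run on the Jetson; the inversion logic itself is plain Python.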
Tag Archives: NVIDIA
Jetson Xavier NX Lesson 15: Training the Face Recognition Program to Recognize People
In this video lesson we show you a simple method to train our face recognizer on larger data sets. We use the Python os.walk function to step through, and train automatically on, all the training images in our folder. We then show how to store our training set to our SD card using the pickle utility. This allows us to train once, and use the trained model over and over.
For your convenience, the code below is what we developed to allow training our face recognition model.
import face_recognition
import cv2
import os
import pickle
print(cv2.__version__)

Encodings=[]
Names=[]
j=0
image_dir='/home/pjm/Desktop/pyPro/demoimages/known'
for root, dirs, files in os.walk(image_dir):
    print(files)
    for file in files:
        fullPath=os.path.join(root,file)
        print(fullPath)
        name=os.path.splitext(file)[0]
        print(name)
        person=face_recognition.load_image_file(fullPath)
        encoding=face_recognition.face_encodings(person)[0]
        Encodings.append(encoding)
        Names.append(name)
print(Names)

with open('train.pkl','wb') as f:
    pickle.dump(Names,f)
    pickle.dump(Encodings,f)
Then this is a simple program that loads the trained model, and uses it to recognize people in unknown images.
import cv2
print(cv2.__version__)
import face_recognition
import pickle

with open('train.pkl','rb') as f:
    Names=pickle.load(f)
    Encodings=pickle.load(f)

font=cv2.FONT_HERSHEY_SIMPLEX
# (the image path below is truncated in the source)
testImage=face_recognition.load_image_file('/home/pjm/Desktop/pyPro/demoimages/unkno$
facePositions=face_recognition.face_locations(testImage)
allEncodings=face_recognition.face_encodings(testImage,facePositions)
testImage=cv2.cvtColor(testImage,cv2.COLOR_RGB2BGR)

for (top,right,bottom,left), face_encoding in zip(facePositions, allEncodings):
    name='Unknown Life Form'
    matches=face_recognition.compare_faces(Encodings,face_encoding)
    if True in matches:
        first_match_index=matches.index(True)
        name=Names[first_match_index]
    cv2.rectangle(testImage,(left,top),(right,bottom),(255,0,0),2)
    cv2.rectangle(testImage,(left,top),(left+200,top+30),(0,255,255),-1)
    cv2.putText(testImage,name,(left,top+20),font,.75,(255,0,0),2)

cv2.imshow('mywindow',testImage)
cv2.moveWindow('mywindow',0,0)
if cv2.waitKey(0)==ord('q'):
    cv2.destroyAllWindows()
AI on the Jetson Nano LESSON 56: Using the GPIO Pins on the Jetson Nano
In this lesson we show how to interact with the GPIO pins on the NVIDIA Jetson Nano. The GPIO pins on the Jetson Nano have very limited current capability, so you must learn to use a PN2222 BJT transistor in order to control things like LEDs or other components. In this lesson we show how the Jetson Nano can be used to control a standard LED.
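A minimal sketch of the approach: the GPIO pin can only source a few milliamps, so it drives the PN2222 base through a current-limiting resistor, and the transistor switches the LED's larger current from the 5V rail. The pin number (physical pin 12) and the half-second blink rate are assumptions for illustration, not from the lesson.

```python
# Sketch only: pin 12 and the wiring are assumptions, not from the lesson.
# The GPIO pin drives the PN2222 base through a resistor; the transistor
# then switches the LED current, which the pin alone could not supply.

def blink_states(n):
    # Alternating LOW/HIGH levels for n half-cycles of a blink.
    return [i % 2 for i in range(n)]

# On the Jetson the levels would be written out with Jetson.GPIO:
#
#   import Jetson.GPIO as GPIO
#   import time
#   GPIO.setmode(GPIO.BOARD)
#   GPIO.setup(12, GPIO.OUT)
#   for level in blink_states(10):
#       GPIO.output(12, GPIO.HIGH if level else GPIO.LOW)
#       time.sleep(0.5)
#   GPIO.cleanup()
```

As in the button sketch, the hardware calls are left as comments since they require the Jetson's GPIO header to run.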
Jetson Xavier NX Lesson 13: Installing Face Recognition and Identification Libraries
In this video lesson we show you how to install the DLIB and face_recognition libraries on the NVIDIA Jetson Xavier NX. We take you through the installs step-by-step. These libraries will be foundational for future lessons on face recognition and deep learning.
Jetson Xavier NX Lesson 12: Intelligent Scanning for Objects of Interest
In this video tutorial we show how a camera on a pan/tilt control system can be programmed to search for an object of interest, and then track it when found. Our system has two independent camera systems, and each can track a separate item of interest independently. The code is written in Python, using the OpenCV library. The video takes you through the lesson step-by-step, and the code is included below for your convenience.
If you want to play along at home, we are using the Jetson Xavier NX, which you can pick up HERE. You will also need two of the bracket/servo kits, which you can get HERE, and two Raspberry Pi version 2 cameras, available HERE.
import cv2
import numpy as np
import time
from adafruit_servokit import ServoKit
print(cv2.__version__)

timeMark=time.time()
dtFIL=0
scanRight=True
scanLeft=True

def nothing(x):
    pass

cv2.namedWindow('TrackBars')
cv2.moveWindow('TrackBars',1320,0)
cv2.createTrackbar('hueLower','TrackBars',100,179,nothing)
cv2.createTrackbar('hueUpper','TrackBars',116,179,nothing)
cv2.createTrackbar('satLow','TrackBars',160,255,nothing)
cv2.createTrackbar('satHigh','TrackBars',255,255,nothing)
cv2.createTrackbar('valLow','TrackBars',150,255,nothing)
cv2.createTrackbar('valHigh','TrackBars',255,255,nothing)

cv2.namedWindow('TrackBars2')
cv2.moveWindow('TrackBars2',1100,0)
cv2.createTrackbar('hueLower2','TrackBars2',150,179,nothing)
cv2.createTrackbar('hueUpper2','TrackBars2',170,179,nothing)
cv2.createTrackbar('satLow2','TrackBars2',160,255,nothing)
cv2.createTrackbar('satHigh2','TrackBars2',255,255,nothing)
cv2.createTrackbar('valLow2','TrackBars2',150,255,nothing)
cv2.createTrackbar('valHigh2','TrackBars2',255,255,nothing)

kit=ServoKit(channels=16)
tilt1=90
pan1=90
tilt2=90
pan2=90
dPan1=1
dPan2=1
dTilt1=10
dTilt2=10
kit.servo[0].angle=pan1
kit.servo[1].angle=tilt1
kit.servo[2].angle=pan2
kit.servo[3].angle=tilt2

width=720
height=480
flip=2
font=cv2.FONT_HERSHEY_SIMPLEX
camSet1='nvarguscamerasrc sensor-id=0 ee-mode=1 ee-strength=0 tnr-mode=2 tnr-strength=1 wbmode=3 ! video/x-raw(memory:NVMM), width=3264, height=2464, framerate=21/1,format=NV12 ! nvvidconv flip-method='+str(flip)+' ! video/x-raw, width='+str(width)+', height='+str(height)+', format=BGRx ! videoconvert ! video/x-raw, format=BGR ! videobalance contrast=1.3 brightness=-.2 saturation=1.2 ! appsink drop=True'
camSet2='nvarguscamerasrc sensor-id=1 ee-mode=1 ee-strength=0 tnr-mode=2 tnr-strength=1 wbmode=3 ! video/x-raw(memory:NVMM), width=3264, height=2464, framerate=21/1,format=NV12 ! nvvidconv flip-method='+str(flip)+' ! video/x-raw, width='+str(width)+', height='+str(height)+', format=BGRx ! videoconvert ! video/x-raw, format=BGR ! videobalance contrast=1.3 brightness=-.2 saturation=1.2 ! appsink drop=True'
#camSet='nvarguscamerasrc sensor-id=0 ! video/x-raw(memory:NVMM), width=3264, height=2464, framerate=21/1,format=NV12 ! nvvidconv flip-method='+str(flip)+' ! video/x-raw, width='+str(width)+', height='+str(height)+', format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink'
#camSet='v4l2src device=/dev/video1 ! video/x-raw,width='+str(width)+',height='+str(height)+',framerate=20/1 ! videoconvert ! appsink'

cam1=cv2.VideoCapture(camSet1)
cam2=cv2.VideoCapture(camSet2)

while True:
    _, frame1 = cam1.read()
    _, frame2 = cam2.read()
    hsv1=cv2.cvtColor(frame1,cv2.COLOR_BGR2HSV)
    hsv2=cv2.cvtColor(frame2,cv2.COLOR_BGR2HSV)

    hueLow=cv2.getTrackbarPos('hueLower','TrackBars')
    hueUp=cv2.getTrackbarPos('hueUpper','TrackBars')
    Ls=cv2.getTrackbarPos('satLow','TrackBars')
    Us=cv2.getTrackbarPos('satHigh','TrackBars')
    Lv=cv2.getTrackbarPos('valLow','TrackBars')
    Uv=cv2.getTrackbarPos('valHigh','TrackBars')
    l_b=np.array([hueLow,Ls,Lv])
    u_b=np.array([hueUp,Us,Uv])

    hueLow2=cv2.getTrackbarPos('hueLower2','TrackBars2')
    hueUp2=cv2.getTrackbarPos('hueUpper2','TrackBars2')
    Ls2=cv2.getTrackbarPos('satLow2','TrackBars2')
    Us2=cv2.getTrackbarPos('satHigh2','TrackBars2')
    Lv2=cv2.getTrackbarPos('valLow2','TrackBars2')
    Uv2=cv2.getTrackbarPos('valHigh2','TrackBars2')
    l_b2=np.array([hueLow2,Ls2,Lv2])
    u_b2=np.array([hueUp2,Us2,Uv2])

    FGmask1=cv2.inRange(hsv1,l_b,u_b)
    FGmask2=cv2.inRange(hsv2,l_b2,u_b2)
    cv2.imshow('FGmask1',FGmask1)
    cv2.moveWindow('FGmask1',0,0)
    cv2.imshow('FGmask2',FGmask2)
    cv2.moveWindow('FGmask2',350,0)

    contours1,_ = cv2.findContours(FGmask1,cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)
    contours1=sorted(contours1,key=lambda x:cv2.contourArea(x),reverse=True)
    for cnt in contours1:
        area=cv2.contourArea(cnt)
        (x,y,w,h)=cv2.boundingRect(cnt)
        if area>=100:
            scanLeft=False
            cv2.rectangle(frame1,(x,y),(x+w,y+h),(0,255,255),3)
            objX=x+w/2
            objY=y+h/2
            errorPan1=objX-width/2
            errorTilt1=objY-height/2
            if abs(errorPan1)>15:
                pan1=pan1+errorPan1/40
            if abs(errorTilt1)>15:
                tilt1=tilt1-errorTilt1/40
            if pan1>180:
                pan1=180
                print('Pan Out of Range')
            if pan1<0:
                pan1=0
                print('Pan Out of Range')
            if tilt1>180:
                tilt1=180
                print('Tilt Out of Range')
            if tilt1<0:
                tilt1=0
            kit.servo[2].angle=pan1
            kit.servo[3].angle=tilt1
        break

    contours2,_ = cv2.findContours(FGmask2,cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)
    contours2=sorted(contours2,key=lambda x:cv2.contourArea(x),reverse=True)
    for cnt in contours2:
        area=cv2.contourArea(cnt)
        (x,y,w,h)=cv2.boundingRect(cnt)
        if area>=100:
            scanRight=False
            cv2.rectangle(frame2,(x,y),(x+w,y+h),(0,255,255),3)
            objX=x+w/2
            objY=y+h/2
            errorPan2=objX-width/2
            errorTilt2=objY-height/2
            if abs(errorPan2)>15:
                pan2=pan2+errorPan2/40
            if abs(errorTilt2)>15:
                tilt2=tilt2-errorTilt2/40
            if pan2>180:
                pan2=180
                print('Pan Out of Range')
            if pan2<0:
                pan2=0
                print('Pan Out of Range')
            if tilt2>180:
                tilt2=180
                print('Tilt Out of Range')
            if tilt2<0:
                tilt2=0
            kit.servo[0].angle=pan2
            kit.servo[1].angle=tilt2
        break

    if scanLeft==True:
        if pan1>=179:
            dPan1=abs(dPan1)*(-1)
        if pan1<=1:
            dPan1=abs(dPan1)
        if pan1>=179 or pan1<=1:
            if tilt1>=170:
                dTilt1=abs(dTilt1)*(-1)
            if tilt1<=10:
                dTilt1=abs(dTilt1)
            tilt1=tilt1+dTilt1
        pan1=pan1+dPan1
        kit.servo[2].angle=pan1
        kit.servo[3].angle=tilt1
    scanLeft=True

    if scanRight==True:
        if pan2>=179:
            dPan2=abs(dPan2)*(-1)
        if pan2<=1:
            dPan2=abs(dPan2)
        if pan2>=179 or pan2<=1:
            if tilt2>=170:
                dTilt2=abs(dTilt2)*(-1)
            if tilt2<=10:
                dTilt2=abs(dTilt2)
            tilt2=tilt2+dTilt2
        pan2=pan2+dPan2
        kit.servo[0].angle=pan2
        kit.servo[1].angle=tilt2
    scanRight=True

    frame3=np.hstack((frame1,frame2))
    dt=time.time()-timeMark
    timeMark=time.time()
    dtFIL=.9*dtFIL + .1*dt
    fps=1/dtFIL
    cv2.rectangle(frame3,(0,0),(150,40),(0,0,255),-1)
    cv2.putText(frame3,'fps: '+str(round(fps,1)),(0,30),font,1,(0,255,255),2)
    #cv2.imshow('myCam1',frame1)
    #cv2.imshow('myCam2',frame2)
    cv2.imshow('comboCam',frame3)
    cv2.moveWindow('comboCam',0,450)
    if cv2.waitKey(1)==ord('q'):
        break

cam1.release()
cam2.release()
cv2.destroyAllWindows()
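The tracking step in the program above is a simple proportional controller: the pixel error between the object's center and the frame's center is divided by a gain (40 in the code), used to nudge the servo angle, and the result is clamped to the servo's 0 to 180 degree range, with a deadband so small errors are ignored. A standalone sketch of that pan update, with the gain and deadband values taken from the code (the function name is ours):

```python
def update_pan(pan, obj_x, frame_width, gain=40, deadband=15):
    # Nudge the pan angle toward the object, as the tracking loop does.
    error = obj_x - frame_width / 2    # pixel error from frame center
    if abs(error) > deadband:          # ignore small errors (deadband)
        pan = pan + error / gain       # proportional correction
    return min(180, max(0, pan))       # clamp to the servo's range

print(update_pan(90, 560, 720))   # object right of center: prints 95.0
```

Dividing the error by the gain keeps each correction small, so the servo converges on the target over several frames instead of overshooting.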