Co-processor for vision for the upcoming season

Hello everyone, I have been doing a lot of reading about AprilTags for the upcoming season. The goal is to get vision added to our robot. With RPis being difficult to get hold of (so I hear; I haven’t tried), and with some people’s concerns about them being underpowered for AprilTags, what would everyone recommend as an alternative co-processor?

6 Likes

In case you can’t find anything, keep in mind it’ll be runnable on the roboRIO; it just won’t be fast.

I have already begun some preliminary tests of AprilTag detection on a Raspberry Pi 3 B+. I have not started finding the pose of the tags for robot localization yet, but I gather the processing speed is OK, and I think it will be good enough for competition with a Pi 4. Supplies of new RPis are limited (chip shortage), but there are so many in circulation that it shouldn’t be hard to find one. I suggest trying a Jetson Nano if you want even faster speeds, but they are pretty expensive for me. It would definitely be a good idea to use a separate processor to take the heavy computing load off the RIO. I will report back with further findings.

2 Likes

Thanks for the quick response. Luckily, my team actually already has a Jetson Nano 2GB. Do you know how easy that would be to use compared to a Pi?

I have never actually worked with the Jetson Nano, but I know it was designed as a powerful single-board computer for real-time applications like machine learning models, camera feeds, and vision processing, so it is a good match for this use. I understand it has a lot of community support in Python, like the Raspberry Pi. Since both have first-class Python support, they are very similar software-wise, but the Jetson Nano’s specs are far more suitable for vision applications; I would even be willing to say it can handle two camera streams effectively. You would be able to hook it up to the RIO over Ethernet and share data through NetworkTables, like the RPi.

This is just a very preliminary finding.
I am testing AprilTag detection on an RPi CM4 using a Pi Camera v2.
Currently I am running a “multi-threaded” process: one thread to acquire images, and one to detect tags in the images.
Since I am using Python, true parallelism is limited by the GIL, so the threads mostly help by overlapping camera I/O with detection.
Detection is not an issue, but the FPS appears to be REALLY LOW. Currently I am seeing only approximately 3.5 FPS.
I will update these findings as I progress with testing.

Two questions:

  • Have you measured FPS separately for capture and detection, to make sure the low FPS isn’t on the capture side? E.g., disable detection and measure capture-only FPS, as in the sketch after this list.
  • How do you have your detector set up? Is it searching for only a single tag family? What are your decimate, sigma, and nthreads settings, etc.?
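
Not from the code in question; just a minimal sketch of what a capture-only measurement could look like, assuming an OpenCV-readable camera at index 0:

import time
import cv2

# measure capture-only FPS: read frames but skip detection entirely
cap = cv2.VideoCapture(0)   # camera index 0; adjust for your setup
count = 0
start = time.perf_counter()
while count < 200:
    ok, frame = cap.read()  # grab a frame and do nothing else with it
    if not ok:
        break
    count += 1
elapsed = time.perf_counter() - start
cap.release()
if count:
    print("capture-only: {:.1f} FPS".format(count / elapsed))

If this also reports ~3.5 FPS, the bottleneck is acquisition rather than the detector.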

I don’t think it would be hard to find one. A Pi 3 with a Pi Cam v1 works best in PhotonVision. We use a Pi 3 with a Microsoft HD-3000 camera to detect the retroreflective tape, and at low enough resolution and exposure it does the job. However, I wouldn’t guarantee you won’t need deep learning for the Charged Up game.

Mind posting a code snippet?

Good questions.
I am measuring the FPS ONLY inside the processing loop. The detector is currently configured for only the tag family identified by FIRST, “tag36h11”.

That said, I have already achieved roughly a 10x increase in processed FPS simply by moving the construction of the AprilTag detector outside the image-processing loop; a sketch of the change follows.
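
For anyone following along, a minimal sketch of that change (the process function name is mine, not from the full listing below):

import cv2
import apriltag as ap

options = ap.DetectorOptions(families="tag36h11")
detector = ap.Detector(options)  # construct once, outside the frame loop

def process(frame):
    # reuse the same detector for every frame; re-creating ap.Detector(options)
    # per frame is what was costing roughly 10x in throughput
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return detector.detect(gray)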

Currently I am seeing ~30 FPS detection regardless of the framerate configured in the acquisition thread (30, 60, or 90).

Quite honestly, I don’t know how to answer this question; it is not part of my programming knowledge. (SEE EDIT BELOW FOR ANSWER.) Seriously, I am just a hobby hacker, so if I can achieve these results, practically anyone can.

[EDIT]

OK, a bit more digging and I found out what your question was actually asking. I left those values at their default settings by not including them in the detector definition; a sketch of the explicit equivalent follows.
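
For reference, a sketch of what the explicit form could look like with the apriltag package; the values shown are what I believe the defaults to be, so check your package version:

import apriltag as ap

# equivalent to DetectorOptions(families="tag36h11") with the common tuning
# knobs spelled out at (what I believe are) their default values
options = ap.DetectorOptions(
    families="tag36h11",
    nthreads=4,         # worker threads used by the detector
    quad_decimate=1.0,  # >1.0 downsamples before quad detection: faster, shorter range
    quad_blur=0.0,      # Gaussian sigma applied before quad detection
    refine_edges=True,  # snap quad edges to strong image gradients
)
detector = ap.Detector(options)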

2 Likes

Snippet?? Nah, here is the whole banana so far. As you will see, this is very rough and has a bunch of options commented out, since it has been used as a basic starting point for various other projects.

# import the necessary packages
from picamera.array import PiRGBArray
from picamera import PiCamera
import cv2
import apriltag as ap
import threading
from threading import Thread
import time 
import math
from networktables import NetworkTables

t0 = 0.0 #Used for testing performance
t1 = 0.0 #Used for testing performance
tLast = 0.0
FlushRate = 100

cv2.namedWindow('Camera', cv2.WINDOW_AUTOSIZE )

cond = threading.Condition()
notified = [False]


def connectionListener(connected, info):
    print(info, '; Connected=%s' % connected)
    with cond:
        notified[0] = True
        cond.notify()


class PiVideoStream:
	def __init__(self, resolution=(640,480), framerate=60):
		# initialize the camera and stream
		self.camera = PiCamera()
		self.camera.resolution = resolution
		#self.camera.framerate = framerate # note: framerate is never applied while this line is commented out
		self.rawCapture = PiRGBArray(self.camera, size=resolution)
		self.stream = self.camera.capture_continuous(self.rawCapture,
			format="bgr", use_video_port=True)
		# initialize the frame and the variable used to indicate
		# if the thread should be stopped
		self.frame = None
		self.stopped = False
		
	def start(self):
		# start the thread to read frames from the video stream
		Thread(target=self.update, args=()).start()
		return self
		
	def update(self):
		# keep looping infinitely until the thread is stopped
		for f in self.stream:
			# grab the frame from the stream and clear the stream in
			# preparation for the next frame
			self.frame = f.array
			self.rawCapture.truncate(0)
			# if the thread indicator variable is set, stop the thread
			# and release camera resources
			if self.stopped:
				self.stream.close()
				self.rawCapture.close()
				self.camera.close()
				return
				
	def read(self):
		# return the frame most recently read
		return self.frame
		
	def stop(self):
		# indicate that the thread should be stopped
		self.stopped = True

#Start Camera Acquisition Thread
#cam = WebcamVideoStream(src=0).start() ##Used for USB camera
cam = PiVideoStream().start()  #Used with Pi Camera

#Begin NetworkTables and wait until connected
NetworkTables.initialize(server='192.168.1.128') #Set NT server IP (the roboRIO IP)
NetworkTables.addConnectionListener(connectionListener, immediateNotify=True)

with cond:
    print("Waiting")
    if not notified[0]:
        cond.wait()

# Insert your processing code here
print("Connected!")

vs = NetworkTables.getTable('Vision') # Get the Vision NetworkTable

# define the AprilTags detector options and then detect the AprilTags
# in the input image
#print("[INFO] detecting AprilTags...")
options = ap.DetectorOptions(families="tag36h11")
detector = ap.Detector(options)

#Begin Image processing loop
while True:
	t1 = time.perf_counter()#Used for testing performance
	#FPS = (1/(t1-t0))#Used for testing performance
	#vs.putNumber("FPS", FPS)
	Raw = cam.read() ## For USB camera or Pi Camera
	if Raw is None: # the capture thread may not have delivered a frame yet
		continue
	gray = cv2.cvtColor(Raw, cv2.COLOR_BGR2GRAY)
	

	results = detector.detect(gray)
	print("[INFO] {} total AprilTags detected".format(len(results)))
	
	
	# if ((t1-tLast) >= (1/FlushRate)): 
		# NetworkTables.flush()
		# tLast = t1
	
	
	
	cv2.putText(Raw, str(1/(t1 - t0)), (25,25), cv2.FONT_HERSHEY_PLAIN, .8, (255,255,255)) # overlay instantaneous FPS on the frame
	#vs.putNumber("FPS", FPS)#Used for testing performance
	
	
	cv2.imshow('Camera', Raw)
	t0 = t1
	k = cv2.waitKey(1) # wait xx ms for specified key to be pressed
	if k % 256 == 27: # "27" is the "Esc" key
		break # end the while loop

##When everything is finished, release capture and kill windows
cam.stop()
cv2.destroyAllWindows() # Close the display window

3 Likes

While playing around with the detector options, I noted some minor differences with various settings for nthreads. I set up a running average for FPS and monitored the differences with different values of nthreads. I did see small but discernible differences when increasing the value, though there is a point of diminishing returns above a certain level. With my configuration, a value of 3 seemed optimal. YMMV. A sketch of the measurement follows.
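
For anyone who wants to reproduce the measurement, a minimal sketch of a running-average FPS meter (an exponential moving average; the class and names are mine, not from the code above):

import time

class FpsMeter:
    # exponential moving average of loop rate; alpha trades smoothness for lag
    def __init__(self, alpha=0.1):
        self.alpha = alpha
        self.fps = 0.0
        self._t_prev = time.perf_counter()

    def tick(self):
        # call once per processed frame
        t_now = time.perf_counter()
        inst = 1.0 / (t_now - self._t_prev)
        self._t_prev = t_now
        self.fps = self.alpha * inst + (1.0 - self.alpha) * self.fps
        return self.fps

meter = FpsMeter()
for _ in range(100):
    time.sleep(0.01)  # stand-in for capture + detect work
    fps = meter.tick()
print("smoothed FPS: {:.1f}".format(fps))

To sweep nthreads, rebuild the detector with each candidate value and compare the smoothed FPS after it settles.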

Think there’s a chance Limelight will add AprilTag support? Or, if reformatted, do you think a Limelight could run vision code effectively?

1 Like

PhotonVision, I believe, runs on that hardware and has some AprilTag support in the “pipeline”.

1 Like

PhotonVision supports Limelight and it runs well. No clue what Brandon is planning.

2 Likes

Just my $.02.
If I were to own Limelight the Co., I would not attempt to recode it for AprilTag use. The current Limelight is a system optimized for retroreflective targets.
If it were my call, I would develop a new product built on the same hardware and just leave out the LEDs and their drivers.
That said, it would be fairly simple to code the Limelight to do both retroreflective and fiducial targeting, with one pipeline for each type. I have done exactly that with a JeVois and ArUco tags; a sketch of the fiducial side follows.
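
For reference, a fiducial pipeline with ArUco can be quite small; a sketch using OpenCV’s contrib module (this is the pre-4.7 API; 4.7+ moved to cv2.aruco.ArucoDetector):

import cv2

# build the dictionary and default parameters once, outside any frame loop
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
params = cv2.aruco.DetectorParameters_create()

def detect_aruco(gray):
    # returns marker corners and ids found in a grayscale image
    corners, ids, rejected = cv2.aruco.detectMarkers(gray, dictionary, parameters=params)
    return corners, ids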

3 Likes

The JeVois was intriguing when it came out, but it never seemed to get much traction in FRC. Do you think it’s more likely to be a viable choice for FRC vision with the shift to AprilTags?

Really good question!
Honestly, not likely. The easiest way to think about it is as an RPi 3 without the Ethernet connection.
Will it work? ABSOLUTELY. It is a low-cost solution that will really teach you how to code for best results.
Is it optimal? Not really.
Now, if FIRST starts seriously using targets that lend themselves to TensorFlow, the JeVois Pro might be a real winner.

2 Likes

The biggest reason, IMO, the JeVois never took off in FRC is that it came out around the same time as the Limelight, but with no clear or easy way for teams to track retroreflective targets that “just worked”.

See my post over in the Visual Fiducials thread. Even I, a lowly mechanical/design person, was able to have the JeVois tracking the actual AprilTags intended to be used by FIRST in a couple of hours (and most of that was waiting on downloads and flashing the SD card). The biggest barrier for it right now is the lack of an easy interface with the rest of the robot. If someone were to develop a library/class that easily interfaced with the JeVois over the RIO USB ports (or a tutorial for a robust serial cable/connection), I think the JeVois could be quite palatable, especially with the $50 price tag. A sketch of what that glue code could look like follows.
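
A hypothetical sketch of that glue code, reading line-oriented detection messages over the JeVois serial-over-USB port with pyserial; the message format parsed here is illustrative only, not the actual JeVois protocol, so consult the JeVois docs for the real one:

import serial  # pyserial

# device name is typical for serial-over-USB on Linux; yours may differ
port = serial.Serial("/dev/ttyACM0", 115200, timeout=0.02)

def poll_detection():
    # return (tag_id, x, y) parsed from the newest line, or None
    line = port.readline().decode("ascii", errors="ignore").strip()
    parts = line.split()
    if len(parts) >= 3:
        try:
            return int(parts[0]), float(parts[1]), float(parts[2])
        except ValueError:
            pass
    return None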

There’s also the question of whether it is powerful enough to track AprilTags, but that is a question for a more knowledgeable soul.

1 Like

Not sure why, but I am having a really hard time finding documentation for AprilTags. Where can I find some?