Vision processing loop is intermittent

We have tested our code and can get the robot to the peg lift, though it jerks around a little while doing so. Once we add in the distance calculation, it takes almost 30 seconds to go 10 feet. We are trying to figure out what is causing this lag: is it the math or the thread?

I have attached a gist of our code.

Gist

I don’t see any sleeps (like Timer.delay(0.025);) in your autonomous while loop. Without a delay, your autonomous loop will chew up a lot of CPU time that could be used for vision processing, and there isn’t much point in executing the same code thousands of times between frame updates.
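
As a minimal sketch (assuming WPILib Java and a SampleRobot-style autonomous method; the class name and loop body are placeholders, not your actual code):

```java
import edu.wpi.first.wpilibj.SampleRobot;
import edu.wpi.first.wpilibj.Timer;

// Placeholder robot class: the only point here is the Timer.delay call inside the loop.
public class Robot extends SampleRobot {
    @Override
    public void autonomous() {
        while (isAutonomous() && isEnabled()) {
            // ... read the latest vision result and command the drive train here ...

            // Sleep ~25 ms per pass. The camera only produces a new frame every ~33 ms,
            // so spinning any faster just starves the vision thread of CPU time.
            Timer.delay(0.025);
        }
    }
}
```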

It might also be helpful to put the following counts out to the dashboard during autonomous (a rough sketch follows the list):

  • Number of frames processed.
  • Number of times two targets were found (resulting in power to the drive train).
  • Number of times less than two targets were found (stopping the drive train).
  • Number of times more than two targets were found (stopping the drive train).
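
Something like this, assuming the WPILib SmartDashboard (the class name and dashboard keys are just placeholders):

```java
import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard;

// Call recordFrame(targetCount) each time a camera frame is processed; the counters
// then show up on the dashboard so you can see how often vision actually updates.
public class VisionCounters {
    private int framesProcessed = 0;
    private int twoTargets = 0;
    private int fewerThanTwo = 0;
    private int moreThanTwo = 0;

    public void recordFrame(int targetCount) {
        framesProcessed++;
        if (targetCount == 2) {
            twoTargets++;       // power applied to the drive train
        } else if (targetCount < 2) {
            fewerThanTwo++;     // drive train stopped
        } else {
            moreThanTwo++;      // drive train stopped
        }
        publish();
    }

    private void publish() {
        SmartDashboard.putNumber("Vision/frames processed", framesProcessed);
        SmartDashboard.putNumber("Vision/two targets (driving)", twoTargets);
        SmartDashboard.putNumber("Vision/fewer than two (stopped)", fewerThanTwo);
        SmartDashboard.putNumber("Vision/more than two (stopped)", moreThanTwo);
    }
}
```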

Finally, it looks like your code stops the robot if the vision filter picks up more than two targets. Instead, would it be worth looking for the two best targets in case the filter picks up some noise?

Good luck,
Paul

Thank you, that is very helpful. We will look for an example of filtering for the two best contours.

First, find the 5 biggest particles. Then form every unique pair, so that no pair is duplicated or left out, and get the left/top/right/bottom bounds of each particle in the pair.
Use this information to calculate 5 different “scores” for each pair, then average the scores to get an overall score for that pair. Use the center and width/height of the pair with the highest overall score for the rest of your processing (distance and direction).
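
A rough sketch of the pairing and best-pair selection (not from your gist; this assumes your filter gives you an OpenCV bounding box per contour, and the score function is stubbed out here and sketched after the score list below):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import org.opencv.core.Rect;

// Placeholder class: finds the highest-scoring pair among the 5 biggest particles.
public class BestPairFinder {

    /** Returns the two highest-scoring boxes, or null if fewer than two particles were found. */
    public static Rect[] findBestPair(List<Rect> boxes) {
        // Sort a copy by area so the 5 biggest particles come first.
        List<Rect> sorted = new ArrayList<>(boxes);
        sorted.sort(Comparator.comparingDouble((Rect r) -> r.area()).reversed());
        int n = Math.min(5, sorted.size());

        Rect[] best = null;
        double bestScore = -1.0;

        // Every unique pair: (0,1), (0,2), ..., (3,4) -- no duplicates, none left out.
        for (int i = 0; i < n; i++) {
            for (int j = i + 1; j < n; j++) {
                double score = pairScore(sorted.get(i), sorted.get(j));
                if (score > bestScore) {
                    bestScore = score;
                    best = new Rect[] { sorted.get(i), sorted.get(j) };
                }
            }
        }
        return best;
    }

    // Average of the five scores described below (see the scoring sketch after the list).
    private static double pairScore(Rect a, Rect b) {
        return 0.0; // placeholder
    }
}
```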

Examples of “scores”:

  • Correctness of overall aspect ratio
  • Particle height similarity
  • Particle width similarity
  • Particle height similarity to group height
  • Particle width to overall width ratio
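
Here is one way those scores could be computed. Each score maps a measured ratio onto a 0–100 scale. The “ideal” constants are assumptions based on the two-strip peg target (roughly a 2:1 wide group made of two narrow strips); measure your own target to be sure.

```java
// Placeholder class: example score functions for one candidate pair.
public class PairScores {

    private static final double IDEAL_GROUP_ASPECT   = 2.0; // group width / group height (assumed)
    private static final double IDEAL_WIDTH_FRACTION = 0.2; // one strip's width / group width (assumed)

    // Maps a ratio to a 0..100 score: a ratio of exactly 1.0 scores 100,
    // and the score falls off linearly as the ratio moves away from 1.
    static double ratioToScore(double ratio) {
        return Math.max(0.0, Math.min(100.0 * (1.0 - Math.abs(1.0 - ratio)), 100.0));
    }

    /** Average of the five scores, given each particle's bounds and the pair's overall bounds. */
    static double overallScore(double leftW, double leftH, double rightW, double rightH,
                               double groupW, double groupH) {
        double aspect    = ratioToScore((groupW / groupH) / IDEAL_GROUP_ASPECT);  // correctness of overall aspect ratio
        double heightSim = ratioToScore(leftH / rightH);                          // particle height similarity
        double widthSim  = ratioToScore(leftW / rightW);                          // particle width similarity
        double heightGrp = ratioToScore(leftH / groupH);                          // particle height vs. group height
        double widthFrac = ratioToScore((leftW / groupW) / IDEAL_WIDTH_FRACTION); // particle width to overall width ratio
        return (aspect + heightSim + widthSim + heightGrp + widthFrac) / 5.0;
    }
}
```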