Is there a way to use a Pixy Cam 2 and a Raspberry Pi 4 to make a vision tracking system that, when turned on, would auto-align your robot? I think to do this you also need a gyro? If it is possible, what is needed, and can someone walk me through how to code it or provide examples? This would be greatly appreciated.
You would use a NavX gyro connected to the roboRIO, with the Raspberry Pi 4 as the vision co-processor: plug the Pi into the robot radio over Ethernet, then plug the Pixy Cam 2 into the Raspberry Pi 4. Hope that helps.
Generic answer: Yes.
I’ve never done it before though, so I don’t have the exact process or examples you were asking for.
Leading questions to help further define your scope:
- What’s the purpose of the Raspberry Pi 4? The Pixy has some onboard vision detection, and the roboRIO traditionally does the vision-to-drivetrain control logic. It’s definitely possible to do additional processing on the Pi, but I don’t know what that processing would be.
- As to the “need a gyro” question - my general answer is “probably yes”. One discussion on the topic is here.
Typical pipeline for vision-to-drivetrain rotation:
- Get the target tracked via vision (Limelight, Pixy cam, Raspberry Pi)
- Calculate the rotation required to center the target
- PID control your drivetrain to rotate toward the target
Remember, the rotation is based on angles, not on the live camera feed. Don’t try to PID using the camera as the sensor - the frame rate and latency are too poor to close a control loop on. Snapshot the target angle once, then let the gyro provide the PID feedback.
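A minimal sketch of that pattern in Java, assuming a recent WPILib, a gyro behind WPILib’s Gyro interface, and a vision side that publishes the target’s horizontal offset in degrees to a NetworkTables entry I’m calling vision/targetYaw (the entry name, gains, and tolerance are all placeholders, and sign conventions may need flipping for your robot):

```java
import edu.wpi.first.math.controller.PIDController;
import edu.wpi.first.networktables.NetworkTableInstance;
import edu.wpi.first.wpilibj.drive.DifferentialDrive;
import edu.wpi.first.wpilibj.interfaces.Gyro;

public class AutoAlign {
    // Untuned placeholder gains - tune on your own drivetrain.
    private final PIDController turnPid = new PIDController(0.02, 0.0, 0.002);
    private final DifferentialDrive drive;
    private final Gyro gyro; // e.g. a NavX wrapped in WPILib's Gyro interface

    public AutoAlign(DifferentialDrive drive, Gyro gyro) {
        this.drive = drive;
        this.gyro = gyro;
        turnPid.setTolerance(1.0); // "aligned" means within 1 degree
    }

    /** Call once when alignment starts: latch a gyro setpoint from the camera. */
    public void start() {
        double targetYaw = NetworkTableInstance.getDefault()
                .getTable("vision").getEntry("targetYaw").getDouble(0.0);
        // Setpoint = current heading plus the camera's reported offset.
        turnPid.setSetpoint(gyro.getAngle() + targetYaw);
    }

    /** Call every ~20 ms; note the PID measurement is the gyro, not the camera. */
    public void execute() {
        drive.arcadeDrive(0.0, turnPid.calculate(gyro.getAngle()));
    }

    public boolean aligned() {
        return turnPid.atSetpoint();
    }
}
```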
As a side note, thread titles like “HOW does this work?” are very non-descriptive and don’t attract the people knowledgeable in the topics you’re asking about. A title like “How to auto-align with pixy and rpi” tells everyone else what your thread is about, so people who know the solution will come to help.
For a good intro, see Integrating Computer Vision with Motion Control from 254.
I agree with @gerthworm above: you don’t really need the pixy AND the pi.
One option would be a USB cam plugged into the Pi; that should give you all you need. You’ll then need an OpenCV-based image processor on the Pi (see the WPILib docs), which would send target offset angles and heights back to the roboRIO via NetworkTables. A rough sketch of that pipeline is below.
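Very roughly, the Pi-side loop could look like this - a sketch only, assuming the WPILibPi-style Java libraries. The team number, table/entry names, HSV thresholds, and field of view are placeholders, and the NetworkTables client-connection call differs a bit between WPILib versions:

```java
import edu.wpi.first.cameraserver.CameraServer;
import edu.wpi.first.cscore.CvSink;
import edu.wpi.first.cscore.UsbCamera;
import edu.wpi.first.networktables.NetworkTableEntry;
import edu.wpi.first.networktables.NetworkTableInstance;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

import java.util.ArrayList;
import java.util.List;

public class VisionMain {
    public static void main(String[] args) {
        // Connect to the roboRIO's NetworkTables server (team number is a placeholder).
        NetworkTableInstance nt = NetworkTableInstance.getDefault();
        nt.startClientTeam(1234);
        NetworkTableEntry yawEntry = nt.getTable("vision").getEntry("targetYaw");

        UsbCamera camera = CameraServer.startAutomaticCapture();
        camera.setResolution(320, 240);
        CvSink sink = CameraServer.getVideo();

        final double HFOV_DEG = 60.0; // horizontal field of view - camera-specific
        Mat frame = new Mat();
        Mat hsv = new Mat();
        Mat mask = new Mat();

        while (!Thread.interrupted()) {
            if (sink.grabFrame(frame) == 0) continue; // timed out, try again

            // Threshold for a bright green target - tune these HSV bounds.
            Imgproc.cvtColor(frame, hsv, Imgproc.COLOR_BGR2HSV);
            Core.inRange(hsv, new Scalar(50, 100, 100), new Scalar(90, 255, 255), mask);

            List<MatOfPoint> contours = new ArrayList<>();
            Imgproc.findContours(mask, contours, new Mat(),
                    Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
            if (contours.isEmpty()) continue;

            // Take the largest contour as the target.
            MatOfPoint biggest = contours.get(0);
            for (MatOfPoint c : contours) {
                if (Imgproc.contourArea(c) > Imgproc.contourArea(biggest)) biggest = c;
            }
            Rect box = Imgproc.boundingRect(biggest);

            // Convert the pixel offset from image center into an angle estimate.
            double centerX = box.x + box.width / 2.0;
            double yawDeg = (centerX - frame.width() / 2.0) * (HFOV_DEG / frame.width());
            yawEntry.setDouble(yawDeg);
        }
    }
}
```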
Another option is to let the Pixy (or a JeVois) do the vision processing, but then you need to get the info back to the roboRIO, and I haven’t dealt with that part enough to advise you (there are plenty of threads on the subject here on CD, though).
By far the simplest option, though, is a Limelight. We made the switch a couple of years ago and can’t imagine going back.
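For what it’s worth, reading a Limelight from robot code is just a couple of NetworkTables lookups; “tv” and “tx” are keys from the Limelight documentation (target visible, and horizontal offset in degrees):

```java
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

public class LimelightReader {
    private final NetworkTable table =
            NetworkTableInstance.getDefault().getTable("limelight");

    /** True if the Limelight currently sees a target ("tv" is 0 or 1). */
    public boolean hasTarget() {
        return table.getEntry("tv").getDouble(0.0) >= 1.0;
    }

    /** Horizontal offset from crosshair to target, in degrees ("tx"). */
    public double targetYawDegrees() {
        return table.getEntry("tx").getDouble(0.0);
    }
}
```

That tx value can feed directly into a gyro-setpoint routine like the one sketched earlier in the thread.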
You don’t need one. You can simply turn until lined up.
There are some more complex algorithms that use the gyro to make alignment faster and more robust, but you should focus on getting the basic thing running before adding extra abilities to it.
This is what we did in 2017: we had a Pixy (connected to an Arduino, connected to the roboRIO - I don’t remember why), and we would just rotate the robot until the target was in the middle of the camera view. This was our first real attempt at using vision targets, and it worked ok…
Improvements could certainly have been made in hindsight.
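For anyone wondering what that rotate-until-centered approach looks like in code, here’s a minimal bang-bang sketch (not the actual 2017 code - the entry name, image width, deadband, turn speed, and sign conventions are all made-up placeholders):

```java
import edu.wpi.first.networktables.NetworkTableInstance;
import edu.wpi.first.wpilibj.drive.DifferentialDrive;

public class BangBangAlign {
    private static final double IMAGE_CENTER_PX = 160.0; // half of a 320 px wide image
    private static final double DEADBAND_PX = 10.0;      // "close enough" window
    private static final double TURN_SPEED = 0.3;        // fixed rotation command

    /** Call every loop: turn toward the target until it's near image center. */
    public static void align(DifferentialDrive drive) {
        // Assumed to be published by the vision side; -1 means no target seen.
        double targetX = NetworkTableInstance.getDefault()
                .getTable("vision").getEntry("targetX").getDouble(-1.0);

        if (targetX < 0) {
            drive.arcadeDrive(0.0, 0.0); // no target: don't spin blindly
        } else if (targetX < IMAGE_CENTER_PX - DEADBAND_PX) {
            drive.arcadeDrive(0.0, -TURN_SPEED); // target left of center: turn left
        } else if (targetX > IMAGE_CENTER_PX + DEADBAND_PX) {
            drive.arcadeDrive(0.0, TURN_SPEED);  // target right of center: turn right
        } else {
            drive.arcadeDrive(0.0, 0.0);         // centered: stop
        }
    }
}
```

This is exactly the “works ok” level: it tends to oscillate and is slower than the gyro-PID version sketched above, but it’s a fine first step.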
Definitely! It isn’t anywhere close to optimal, but over the last few years I have found that getting something to work “ok” before making it work “great” is extremely important. Often people have great dreams of “great” but can never implement them within the time limit, so they always have to fall back on less than “ok”.
Can someone provide example code for how to rotate so the target moves to the center of the picture? That’s where I’m confused.