Re: Laptops: Will You Guys Use Them On The Robot?
If your only goal is to build a robot that works well, off-board image processing (i.e., off the cRIO) is unnecessary for most tasks.
Vision processing works phenomenally well with high-contrast targets or self-lit objects; hence the white-on-black rings last year, the 2007 lights, etc. The pink-over-green targets of Lunacy were problematic much of the time because of lighting, and the colored tubes, whose apparent shape changed with viewing angle, were even worse.
I love the idea of a robot that uses vision processing heavily to automate tasks. However, I believe you will struggle to consistently track game pieces and the like across the varying lighting levels seen on FIRST fields.
Last year you could tune the robot to see the targets from close to 30 feet away at a frame rate more than high enough to aim and shoot balls. You only need ONE FRAME to calculate your angle and distance to the target, then use the gyro to turn and fire the ball. Your target wasn't moving.
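To make the "one frame is enough" point concrete, here is a minimal sketch of that math. The camera FOV, image size, target width, and pixel values are all hypothetical; the idea is just that a single frame gives you a heading error for the gyro and a range for the shooter.

```java
// Minimal sketch: a single frame of a stationary target gives angle and range.
// Assumed/hypothetical values: ~47-degree horizontal FOV, 320x240 image,
// target of known 2 ft physical width. Names are illustrative, not from any library.
public class SingleFrameAim {
    static final double H_FOV_DEG = 47.0;      // horizontal field of view (assumed)
    static final double IMAGE_WIDTH_PX = 320.0;
    static final double TARGET_WIDTH_FT = 2.0; // known target width (assumed)

    /** Heading error in degrees from the target's horizontal pixel position. */
    static double angleToTarget(double targetCenterXPx) {
        double offsetPx = targetCenterXPx - IMAGE_WIDTH_PX / 2.0;
        return offsetPx * (H_FOV_DEG / IMAGE_WIDTH_PX);
    }

    /** Approximate range in feet from the target's apparent pixel width. */
    static double distanceToTarget(double targetWidthPx) {
        double halfFovRad = Math.toRadians(H_FOV_DEG / 2.0);
        // Pinhole-camera approximation: apparent size shrinks linearly with range.
        return (TARGET_WIDTH_FT * IMAGE_WIDTH_PX)
                / (2.0 * targetWidthPx * Math.tan(halfFovRad));
    }

    public static void main(String[] args) {
        // From one processed frame: target centered at x = 210 px, 24 px wide.
        double angle = angleToTarget(210);   // turn this many degrees with the gyro
        double range = distanceToTarget(24); // pick shooter speed from this
        System.out.printf("turn %.1f deg, range %.1f ft%n", angle, range);
    }
}
```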
That is a far cry from Lunacy, where you needed at least several images and their timestamps to calculate the trailer's motion relative to your robot and work out some fairly simple lead angles and distances.
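For comparison, a rough sketch of that Lunacy-style calculation: two timestamped sightings give a relative velocity, and a first-order flight-time estimate gives a lead angle. All numbers, units, and names here are hypothetical, and the lead estimate is deliberately simple.

```java
// Hedged sketch: lead angle on a moving trailer from two timestamped sightings.
// Positions are robot-relative feet (x = right, y = forward); values are made up.
public class LeadAngle {
    public static void main(String[] args) {
        // Two observations of the trailer, 0.2 s apart.
        double x1 = 2.0, y1 = 10.0, t1 = 0.0;
        double x2 = 2.5, y2 = 10.0, t2 = 0.2;

        double dt = t2 - t1;
        double vx = (x2 - x1) / dt;   // relative velocity from the two frames
        double vy = (y2 - y1) / dt;

        double ballSpeed = 25.0;                        // assumed ball speed, ft/s
        double range = Math.hypot(x2, y2);
        double flightTime = range / ballSpeed;          // first-order flight time
        double xLead = x2 + vx * flightTime;            // predicted trailer position
        double yLead = y2 + vy * flightTime;

        double aimDeg = Math.toDegrees(Math.atan2(xLead, yLead)); // degrees right of center
        System.out.printf("lead angle %.1f deg, range %.1f ft%n", aimDeg, range);
    }
}
```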
Likewise, this year the rack is not moving. You need only one image to calculate your position and distance and get the angles you need to drive. The cRIO is more than powerful enough to do that.