A thread last night mentioned that driving through the tunnels will not be trivial, since it will be difficult to see the robot. Later I thought of an even more important concern: just how difficult is it to see this year’s game pieces behind the bumps and platforms?
From a few rough calculations, it seems there is a 3 foot zone behind the farther bump and a 1 foot zone behind the closer bump where balls are basically not visible. A test with a cardboard box and tape measures seemed to confirm this, though driver height was a significant factor (between me at 5’6" and my Dad at 5’10"). More concerning, though, are the estimates I made for the platforms: I found a 4.5 foot dead zone behind the closer platform and a 9 foot dead zone behind the farther one (half the opponent’s zone!).
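For anyone who wants to sanity-check those numbers, the dead zone follows from similar triangles: a driver with eye height H looking over an obstacle of height h at horizontal distance d can’t see the floor for a further h·d/(H − h) behind it. Here’s a quick sketch of that calculation — note that the eye height, bump height, and distance in the example are made-up numbers for illustration, not official field measurements.

```java
// Sketch: line-of-sight dead zone behind an obstacle, by similar triangles.
// All dimensions are hypothetical examples, not official field numbers.
public class DeadZone {
    /**
     * Length of floor hidden behind an obstacle, in the same units as the inputs.
     * eyeHeight      - driver's eye height above the floor
     * obstacleHeight - height of the bump or platform
     * distance       - horizontal distance from the driver's eyes to the obstacle
     */
    public static double deadZoneLength(double eyeHeight, double obstacleHeight, double distance) {
        return obstacleHeight * distance / (eyeHeight - obstacleHeight);
    }

    public static void main(String[] args) {
        // e.g. a 5.5 ft eye height looking over a 1 ft bump 15 ft away:
        System.out.println(deadZoneLength(5.5, 1.0, 15.0)); // roughly 3.3 ft hidden
    }
}
```

Notice the formula also explains why driver height mattered so much in the cardboard-box test: raising the eye height shrinks the denominator’s effect and the hidden strip gets shorter.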
Couple this with the area directly in front of the player’s station where balls will be difficult to see because of the wall up front, and camera and/or autonomous ball finding capabilities suddenly seem much more important than I had first anticipated.
I hope I’m overestimating the impact this will have, because if game pieces are actually that hard to find, then scoring a ball will be much harder than I anticipated. For any team that has built a mock-up field, or was at a kickoff with a full field: how hard was it to actually see balls in the middle and opposing zones?
Even easier than using the vision system for the tunnel though… get a small photodetector and an LED or two, and mount them on the bottom middle of the robot. You should then be able to detect when the photodetector can “see” the white line when compared to the carpet, and a little semi-autonomous programming can keep you running straight down that line. Plus a nice bonus from getting this working - you can work the detection into your autonomous program so you never violate <G28> and get a penalty for completely crossing the line during autonomous!
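A minimal sketch of that idea: threshold the photodetector reading to decide "white line" vs. "carpet," and use simple bang-bang steering to stay on it. The threshold, motor values, and sensor scale below are all invented for illustration — real values would come from calibrating your own photodetector against the actual carpet.

```java
// Sketch: bang-bang line following with a single photodetector.
// Threshold and steering constants are hypothetical; calibrate on your own field.
public class LineFollower {
    // Assumes the detector reads higher over the reflective white line.
    static final double WHITE_THRESHOLD = 0.6;

    /** True when the detector reading suggests we're over the white line. */
    public static boolean onLine(double detectorReading) {
        return detectorReading > WHITE_THRESHOLD;
    }

    /**
     * Very simple correction: drive straight while on the line, and nudge
     * back toward the side we last saw it on when we lose it.
     * Returns {leftMotor, rightMotor} in the range [-1, 1].
     */
    public static double[] steer(double detectorReading, boolean lastSeenOnLeft) {
        if (onLine(detectorReading)) {
            return new double[] {0.5, 0.5};   // straight ahead
        } else if (lastSeenOnLeft) {
            return new double[] {0.3, 0.5};   // slow the left side to veer left
        } else {
            return new double[] {0.5, 0.3};   // slow the right side to veer right
        }
    }
}
```

The same `onLine` check could gate your autonomous routine so the robot stops or reverses before completely crossing the line.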
Last year, my team had a turreting shooter, and we used the camera to automatically aim it at the opposing trailers. When we were in range, an LED would light up on the drivers’ station. The driver could then fire with a good chance of scoring. It worked great once we got the shooter to work right. I feel that the same method could easily be applied to the vision targets in this year’s game.
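One way to drive that "in range" LED, sketched below: estimate distance from the target’s apparent width in the camera image using the pinhole camera model, then light the LED when the distance falls inside the shooter’s window. The focal length, target width, and range limits here are placeholders, not real field or camera specs — you’d measure them for your own camera and shooter.

```java
// Sketch: distance-from-apparent-width plus an "in range" check for a shooter.
// All constants are hypothetical; calibrate against your own camera and shooter.
public class RangeCheck {
    static final double FOCAL_LENGTH_PX = 400.0;  // camera focal length in pixels (assumed)
    static final double TARGET_WIDTH_FT = 2.0;    // real width of the vision target (assumed)
    static final double MIN_RANGE_FT = 8.0;       // shooter's usable window (assumed)
    static final double MAX_RANGE_FT = 14.0;

    /** Pinhole model: distance = focalLength * realWidth / apparentWidth. */
    public static double distanceFt(double targetWidthPx) {
        return FOCAL_LENGTH_PX * TARGET_WIDTH_FT / targetWidthPx;
    }

    /** True when the driver-station LED should light. */
    public static boolean inRange(double targetWidthPx) {
        double d = distanceFt(targetWidthPx);
        return d >= MIN_RANGE_FT && d <= MAX_RANGE_FT;
    }
}
```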
I was going to post to contradict you by saying we need some way to see the balls with the camera… but what’s worse: if you are a scoring bot and you score in the goal opposite where you and your team are positioned in the alliance station (and it might be hard to see from the middle, too… maybe less so), there seems to be some sort of wall there as well, based on footage of teams showing the official field in NH. It’ll be hard for the drivers, just from looking to the side, to reconcile their perspective with the robot’s perspective of where the robot is relative to that wall (bumping into that wall when trying to score will suck :-/ ).
I would like live video footage, but as some of my team members have pointed out, this year, like last year, the bandwidth on the field is not capable of handling that much communication data between the robots, the field, and each team’s driver station. Even if a team did try it, it would cause delays in the field system.
But if FIRST has been able to solve this issue somehow, oh, how I wish I was one year younger than I am right now. :rolleyes: