2011 "Light Sensor"
cooldude8181
12-01-2011, 18:40
Has anyone been trying to program their light sensors for this year yet? Are there any examples for LabVIEW that are relevant besides the "simple digital input" one?
Mark McLeod
12-01-2011, 19:44
That's all the light sensor is, a simple digital input.
Did you have a more complex task in mind?
davidthefat
12-01-2011, 20:01
I assume it returns a boolean. I have not looked at the libraries yet; I am currently teaching the other programmers last year's library (it's technically the same as this year's). How many do they give you? Make it so that 3 of them point at different spots in front of the robot: the outer 2 sensors form an angle (how big depends on where you place them) and the middle one sits on the angle bisector. The loop checks the middle sensor; if it triggers true, you're on track. If the left is triggered, you are too far right, so turn left a little, and the same goes for the right side. That should be the basic loop. If they are all triggered, you are at a crossing point or a stop point.
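Something like this rough sketch of that loop, written in WPILib Java since LabVIEW is graphical (the DIO channels, PWM channels, and speeds are made-up example values, and it assumes a sensor reads true over the line; flip the logic and turn signs for your hardware):

import edu.wpi.first.wpilibj.DigitalInput;
import edu.wpi.first.wpilibj.RobotDrive;
import edu.wpi.first.wpilibj.SimpleRobot;

public class ThreeSensorTracker extends SimpleRobot {
    DigitalInput left = new DigitalInput(1);     // placeholder DIO channels
    DigitalInput middle = new DigitalInput(2);
    DigitalInput right = new DigitalInput(3);
    RobotDrive drive = new RobotDrive(1, 2);     // placeholder PWM channels

    public void autonomous() {
        while (isAutonomous() && isEnabled()) {
            boolean l = left.get(), m = middle.get(), r = right.get();
            if (l && m && r) {
                drive.arcadeDrive(0.0, 0.0);     // all three lit: cross or stop point
            } else if (l) {
                drive.arcadeDrive(0.3, -0.2);    // too far right, nudge left
            } else if (r) {
                drive.arcadeDrive(0.3, 0.2);     // too far left, nudge right
            } else {
                drive.arcadeDrive(0.4, 0.0);     // middle lit (or nothing): keep straight
            }
        }
    }
}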
Will your robot be able to strafe? If so, you may want to consider the camera. The way I look at it, the line sensors are best if you can't strafe, but the camera is better if you can.
If you want more details, just ask...
davidthefat
12-01-2011, 20:25
Will your robot be able to strafe? If so, you may want to consider the camera. The way I look at it, the line sensors are best if you can't strafe, but the camera is better if you can.
If you want more details, just ask...
Details por favor. I mean details on why you have that belief. I don't see why the line sensors would not be as efficient on a strafing bot as on a robot that can't strafe. Is it because the strafing bot has to change the rotation of its motors (I think that's called crab drive) or reverse the motors (mecanum or omni) to strafe? The robot has to do those actions even with a camera.
Charlie675
12-01-2011, 20:29
How do you follow the line with the camera? That sounds pretty awesome. You would probably have to go slow though because the camera's refresh rate is terrible.
Joe Ross
12-01-2011, 20:31
Has anyone been trying to program their light sensors for this year yet? Are there any examples for LabVIEW that are relevant besides the "simple digital input" one?
Have you looked at the autonomous vi in the framework with game specific code?
Mark McLeod
12-01-2011, 20:37
NI has published a 2011 line tracking paper (http://decibel.ni.com/content/docs/DOC-14730) too.
davidthefat
12-01-2011, 20:39
Have you looked at the autonomous vi in the framework with game specific code?
Talking to me? No sir, I have not looked at anything 2011 programming related. Even with last year's library it would not be hard to make basic line tracking software using the camera. The NI vision library is very comprehensive; it has everything from edge detection to object and motion detection. They make it very easy for programmers to just get up and get going.
How do you follow the line with the camera? That sounds pretty awesome. You would probably have to go slow though because the camera's refresh rate is terrible.
I would have to disagree with you on that. If done correctly, the camera would be significantly faster than the line tracking hardware. The line trackers only see a single "pixel" compared to what the camera can see. Using trigonometric functions and maybe some probability, it would be easy to navigate along the line. The lines aren't going anywhere, so the software can make decisions ahead of time.
Charlie675
12-01-2011, 21:17
Using trigonometric functions and maybe some probability, it would be easy to navigate along the line. The lines aren't going anywhere, so the software can make decisions ahead of time.
If someone gets this to work effectively I have to see it.
davidthefat
12-01-2011, 21:20
If someone gets this to work effectively I have to see it.
Now apparently our team flushed the whole idea of autonomous drive down the toilet. They do not trust me anymore. Last year our autonomous mode was absolutely hideous; it just barely worked and 90% of the time it missed. Long story short: the team captain came to me literally the day before shipping with "David, I want you to get the autonomous to work," and then I saw the IR sensor he had installed in front of the kicker. So I coded and debugged the autonomous code AT the competition... So that person might not be me. But I will certainly bring it up to my team that they can indeed trust me. I already have the drive working in 3 ways: linear, exponential, and tank drive. They might even want a logarithmic one; that's not hard at all and could be finished in 10 minutes. Now the autonomous might take a couple of days, but sure, check back with me next week.
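(In case anyone wonders what I mean by linear vs. exponential drive, here is a rough WPILib Java sketch, not our actual code; the joystick port, PWM channels, and the cubing curve are just example choices:)

import edu.wpi.first.wpilibj.Joystick;
import edu.wpi.first.wpilibj.RobotDrive;
import edu.wpi.first.wpilibj.SimpleRobot;

public class CurvedDrive extends SimpleRobot {
    Joystick stick = new Joystick(1);           // example joystick port
    RobotDrive drive = new RobotDrive(1, 2);    // example PWM channels

    // Cubing keeps the sign but softens small stick inputs, which is one
    // common way to get an "exponential" feel; return x unchanged for linear.
    double expo(double x) {
        return x * x * x;
    }

    public void operatorControl() {
        while (isOperatorControl() && isEnabled()) {
            drive.arcadeDrive(expo(-stick.getY()), expo(stick.getX()));
        }
    }
}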
Details por favor. I mean details on why you have that belief. I don't see why the line sensors would not be as efficient on a strafing bot as on a robot that can't strafe. Is it because the strafing bot has to change the rotation of its motors (I think that's called crab drive) or reverse the motors (mecanum or omni) to strafe? The robot has to do those actions even with a camera.
No, our chassis right now is all omni wheels, so we can strafe at no disadvantage. The camera returns the x, y, and size of the target, so for a strafing bot centering the target is easy, and then you just drive straight while keeping the target in the center. For a non-strafing bot you have to correct all the way to the target. Granted, I still think the camera is the best choice overall; line trackers will work almost as well as the camera on a non-strafing bot. On a strafing bot, I think the camera is the better choice.
How do you follow the line with the camera? That sounds pretty awesome. You would probably have to go slow though because the camera's refresh rate is terrible.
With the camera I look at the target, not the lines. While the refresh rate is not great, capping with the camera will be more accurate. Even their pretty slick line tracker in the video was jerking around quite a bit. If I were to attempt to cap with a line tracker I would need a gyro for it to be accurate (75% success rate is typically my bar); with the camera you do not need a gyro because you are looking right at the target.
I believe the average automated camera cap will be faster than a human cap, and more accurate than a line tracker cap.
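Roughly what I mean for a strafing bot, as a Java sketch (getTargetX() and getTargetSize() are stand-ins for whatever your vision code reports, not a real API; the gains and the stop threshold are made up, and it assumes your WPILib RobotDrive has mecanumDrive_Cartesian):

import edu.wpi.first.wpilibj.RobotDrive;

public class CameraCap {
    RobotDrive drive = new RobotDrive(1, 2, 3, 4);   // example mecanum PWM channels

    // Stand-ins for vision results: x in [-1, 1] with 0 meaning centered,
    // size growing as the target gets closer.
    double getTargetX()    { return 0.0; }
    double getTargetSize() { return 0.0; }

    void capOnPeg() {
        while (getTargetSize() < 0.5) {              // made-up "close enough" threshold
            double strafe = 0.5 * getTargetX();      // strafe to keep the target centered
            drive.mecanumDrive_Cartesian(strafe, -0.4, 0.0, 0.0);   // creep toward the peg
        }
        drive.mecanumDrive_Cartesian(0.0, 0.0, 0.0, 0.0);           // stop at the peg
    }
}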
Personally, for this task I would rather use dead reckoning than a line tracker. If you threw a gyro and a range finder on your robot, figured out the distance, and told it to drive straight, I think you could make the 75% cutoff for autonomous.
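For example, something along these lines in WPILib Java (the channels, gain, and 30-inch stop distance are all placeholders; flip the correction sign if it steers the wrong way):

import edu.wpi.first.wpilibj.Gyro;
import edu.wpi.first.wpilibj.RobotDrive;
import edu.wpi.first.wpilibj.Ultrasonic;

public class DeadReckonCap {
    Gyro gyro = new Gyro(1);                     // example analog channel
    Ultrasonic ranger = new Ultrasonic(3, 4);    // example ping/echo DIO channels
    RobotDrive drive = new RobotDrive(1, 2);     // example PWM channels

    void driveToPeg() {
        ranger.setAutomaticMode(true);
        gyro.reset();
        // Drive straight, steering against gyro drift, until the range
        // finder says we are roughly at scoring distance.
        while (ranger.getRangeInches() > 30.0) {
            double correction = -0.03 * gyro.getAngle();   // simple P correction to 0 degrees
            drive.arcadeDrive(0.4, correction);
        }
        drive.arcadeDrive(0.0, 0.0);
    }
}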
davidthefat
12-01-2011, 21:48
No, our chassis right now is all omni wheels, so we can strafe at no disadvantage. The camera returns the x, y, and size of the target, so for a strafing bot centering the target is easy, and then you just drive straight while keeping the target in the center. For a non-strafing bot you have to correct all the way to the target. Granted, I still think the camera is the best choice overall; line trackers will work almost as well as the camera on a non-strafing bot. On a strafing bot, I think the camera is the better choice.
With the camera I look at the target, not the lines. While the refresh rate is not great, capping with the camera will be more accurate. Even their pretty slick line tracker in the video was jerking around quite a bit. If I were to attempt to cap with a line tracker I would need a gyro for it to be accurate (75% success rate is typically my bar); with the camera you do not need a gyro because you are looking right at the target.
I believe the average automated camera cap will be faster than a human cap, and more accurate than a line tracker cap.
Oh! I totally misunderstood what you meant about camera tracking; I thought you meant tracking the line with it. But with your method, be careful to watch out for other robots so you don't run into them... I know from experience: I was looking straight ahead and did not see the bench right in front of me...
To address the OP, the light sensors are pretty easy to use... almost too easy.
They're purely digital. If you power them, you can use the indicator LED to calibrate them. The knob sets the sensitivity, so you don't have to set it in code. Two of the wires are for power, the other two are output. One is normally-open, the other is normally-closed. You only have to wire up one: I'd suggest normally-open, but do whichever makes more sense for you.
Once it's on your robot, you'll get a certain boolean value over light areas, and another over dark ones.
Theoretically, you only need one sensor to track a line: dark means go left, light means go right, and you'll end up following the right edge of the line. With two, you could have them straddle the line and react accordingly, giving you a bit more hysteresis.
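A minimal sketch of that one-sensor idea in WPILib Java (channel numbers and speeds are placeholders; swap the turn directions if your sensor or drivetrain reads the other way):

import edu.wpi.first.wpilibj.DigitalInput;
import edu.wpi.first.wpilibj.RobotDrive;

public class EdgeFollower {
    DigitalInput lineSensor = new DigitalInput(1);   // example DIO channel
    RobotDrive drive = new RobotDrive(1, 2);         // example PWM channels

    // Bang-bang follower for the right edge of the line: over the light
    // tape drift right, off it drift left, so the robot rides the edge.
    void followStep() {
        if (lineSensor.get()) {
            drive.arcadeDrive(0.35, 0.2);            // on the line, drift right
        } else {
            drive.arcadeDrive(0.35, -0.2);           // off the line, drift left
        }
    }
}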
If your robot has an unconventional drive system, it will work on the same exact principle. I wouldn't think you would run into any additional problems unless your drive system itself fails.
Strangely enough, I think KISS might apply here. If you can calibrate your encoders correctly, I wouldn't be surprised if you could forgo the line sensors entirely. Encoders aren't sensitive to light/field conditions, and they're extremely simple to mount. Also, the lines on the field only help so much: the two end lines put you in the middle of two columns of pegs, and you'll have to maneuver a bit more anyway.
Going back to '04, Buzz Robotics used a line tracker. Note that they had 5+ sensors on the front of their robot, and in that game accuracy was much less of a requirement, which is why line tracking was a good strategy.
The way I am doing it is a variant of the software they give us: basically, using the basic digital input, keep going straight until the input returns false, then read a value from a gyroscope and use it to auto-correct.
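Very roughly, that logic could look like this WPILib Java sketch (channels and the 0.05 gain are placeholders, and your sensor's polarity may be the opposite):

import edu.wpi.first.wpilibj.DigitalInput;
import edu.wpi.first.wpilibj.Gyro;
import edu.wpi.first.wpilibj.RobotDrive;

public class GyroCorrect {
    DigitalInput lineSensor = new DigitalInput(1);   // example DIO channel
    Gyro gyro = new Gyro(1);                         // example analog channel
    RobotDrive drive = new RobotDrive(1, 2);         // example PWM channels

    void step() {
        if (lineSensor.get()) {
            gyro.reset();                            // on the line: remember this heading
            drive.arcadeDrive(0.4, 0.0);             // and keep going straight
        } else {
            // Off the line: steer back toward the heading we had on the line.
            // This corrects heading drift, not a sideways offset.
            drive.arcadeDrive(0.3, -0.05 * gyro.getAngle());
        }
    }
}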
MagiChau
24-01-2011, 05:58
I am planning on using the light sensors as confirmation sensors, possibly if I want the robot to go diagonally in autonomous. I do not know, however, if the light sensor detects the red/blue and yellow pro gaff tape.
The sensors would confirm which "path" you are on, along with an angle measurement and data from the accelerometer/encoder; then use trigonometry to calculate the distance the robot needs to be from the peg. Possibly use the camera then to once again confirm the distance, and possibly the height of the arm.
This is more for fun though; at the least we are going to have a mode that goes straight, using accelerometer data and the camera to confirm the distance from the scoring peg. I guess the line can be used to stay on the path if the robot isn't straight, but I have confidence in my team to align the robot.
This really helps, being boolean, but which wiring diagram do I use?
"Dual NPN and PNP outputs" or "NPN Outputs"?
Yes, this is straight from the included instructions.
This really helps, being boolean, but which wiring diagram do I use?
"Dual NPN and PNP outputs" or "NPN Outputs"?
Yes, this is straight from the included instructions.
Sorta neither.
The diagrams provided include a "load", a resistor to pull up the signal lines so they are high when not connected. The Digital Sidecar already provides those pull-ups, so you just need to connect brown to +12 (red) on the PD board, blue to ground (black) on the PD board, and white or black to a SIG pin on one of the DIO ports on the Digital Sidecar. You don't need both white and black; pick one. White will tell you when you see light, black will tell you when you see dark, so if you have one, you know the other.
PSHRobotics
04-02-2011, 00:08
Since there seems to be some confusion about this, the Autonomous Independent VI from the "Robot Code with Game Framework" in LabVIEW for 2011 actually does the line tracking using the light sensors. My team created our own code using two sensors instead of the three in the provided code, partly so we can use the third sensor for something else, partly because we see no use for the 3rd sensor, and partly because we had some trouble making sense of the provided code (because of the multiple values in the Steering Gain and Y Power controls). We finally got our code working correctly today. The most difficult part was stopping at the T, because our sensors are not lined up perfectly (attached with duct tape at the moment), so we had to adapt the code to stop if both sensors became true within half a second of each other.
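The idea behind our T detection, sketched here in Java rather than LabVIEW just so it can be written out (our real code is a VI; the channels are placeholders and only the half-second window matches what we actually did):

import edu.wpi.first.wpilibj.DigitalInput;
import edu.wpi.first.wpilibj.Timer;

public class TeeDetector {
    DigitalInput leftSensor = new DigitalInput(1);    // example DIO channels
    DigitalInput rightSensor = new DigitalInput(2);
    double leftTime = -1.0;                           // -1 means "not triggered yet"
    double rightTime = -1.0;

    // Returns true once both sensors have gone true within half a second
    // of each other, which we treat as the T at the end of the line.
    boolean atTee() {
        double now = Timer.getFPGATimestamp();
        if (leftSensor.get() && leftTime < 0) leftTime = now;
        if (rightSensor.get() && rightTime < 0) rightTime = now;
        // Forget a trigger the other sensor never matched in time
        // (e.g. clipping a line at an angle earlier in the run).
        if (leftTime >= 0 && rightTime < 0 && now - leftTime > 0.5) leftTime = -1.0;
        if (rightTime >= 0 && leftTime < 0 && now - rightTime > 0.5) rightTime = -1.0;
        return leftTime >= 0 && rightTime >= 0
                && Math.abs(leftTime - rightTime) <= 0.5;
    }
}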
If anyone has any questions about programming for the sensors I'll help as much as possible.
I decided to code our line tracking using a state machine. Just use a bunch of switch statements.
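For example, the skeleton of that pattern might look like this (Java sketch; the states and transitions here are placeholders, not any team's actual code):

public class LineStateMachine {
    // Placeholder states for a line-following autonomous.
    static final int FOLLOWING = 0;
    static final int AT_FORK   = 1;
    static final int AT_TEE    = 2;
    static final int STOPPED   = 3;

    int state = FOLLOWING;

    void step(boolean left, boolean middle, boolean right) {
        switch (state) {
            case FOLLOWING:
                if (left && middle && right) {
                    state = AT_TEE;          // all sensors lit: end of the line
                } else if (left && right) {
                    state = AT_FORK;         // outer sensors lit: the Y split
                }
                // steer using the sensor readings here
                break;
            case AT_FORK:
                // pick the left or right branch, then resume following
                state = FOLLOWING;
                break;
            case AT_TEE:
                // slow down and stop at the peg
                state = STOPPED;
                break;
            case STOPPED:
            default:
                break;
        }
    }
}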
Shruikan27
05-02-2011, 14:23
Sorta neither.
The diagrams provided include a "load", a resistor to pull up the signal lines so they are high when not connected. The Digital Sidecar already provides those pull-ups, so you just need to connect brown to +12 (red) on the PD board, blue to ground (black) on the PD board, and white or black to a SIG pin on one of the DIO ports on the Digital Sidecar. You don't need both white and black; pick one. White will tell you when you see light, black will tell you when you see dark, so if you have one, you know the other.
Hello, this is my first year in FRC, and my mentor set me on the task of fixing an issue we were having with the light sensor. I'm sorry if I sound like a complete newb, but where exactly do the blue/black and brown/red wires attach on the PD Board? This forum has been very helpful to me and I thank you all for your input.