What sensors are used to detect the hot LED lights on the FRC field?


Ramtech59
12-02-2014, 16:31
My team and I are trying to use sensors on our robot to help us in autonomous, but we were wondering which sensor could be used to detect the hot LED lights on the field and where we could buy them.

cglrcng
12-02-2014, 16:40
Look in the kit of parts you received on kickoff day for the grey section of retro-reflective tape. Hook up the camera properly so you can see its image on your driver station screen, attach a light source near the lens of the camera, and aim it directly at the reflective tape (there are different sections of that type of tape on each alliance wall, including the hot-side flip boxes). Look at the image on your screen; the tape will reflect the light back based on the color of the light source you aim at it. Then read up on programming a bunch.

Good Luck!

DonRotolo
12-02-2014, 19:52
My team and I are trying to use sensors on our robot to help us in autonomous, but we were wondering which sensor could be used to detect the hot LED lights on the field and where we could buy them.

We are using the Axis camera to detect the yellow LEDs surrounding the high goal during autonomous.

Ramtech59
13-02-2014, 12:36
Thanks for the replies; my team and I appreciate the help given to us.

tr6scott
13-02-2014, 13:01
We are using the Axis camera to detect the yellow LEDs surrounding the high goal during autonomous.

Don,

I was thinking of doing this too, did you buy the lights the field uses, or did you have access to a field that did? Once the bot is in the bag, we are working on hot goal detection, anything you can share?

Lightfoot26
13-02-2014, 13:56
We are using a photoelectric IR sensor. We will dead reckon before the match; it acts as a digital input and reads true when it sees the retro-reflective tape, false otherwise. Process of elimination: if false, one goal is hot; if true, the other is. Simple, elegant solution. If we don't have to use vision, we find every way not to, although some years we can't get around it (i.e. 2012).
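
For reference, a rough sketch of what this looks like in Java, assuming the sensor is wired as a digital input (the channel number here is made up, and your sensor's output polarity may be inverted):

import edu.wpi.first.wpilibj.DigitalInput;

public class HotGoalSensor {
    // Hypothetical DIO channel; use whatever channel you actually wired the sensor to.
    private final DigitalInput photoSensor = new DigitalInput(1);

    // True when the sensor sees the retro-reflective tape it was aimed at.
    public boolean seesTarget() {
        return photoSensor.get();
    }

    // Process of elimination: if we don't see our target, the other goal is the hot one.
    public boolean otherGoalIsHot() {
        return !seesTarget();
    }
}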

Bpk9p4
13-02-2014, 14:05
How are you using the photoelectric sensor to detect if the goal is hot or not? Also how can you tell if you are detecting the vertical strip vs the actual hot goal?

Bruceb
13-02-2014, 14:20
Yes, remember there is an always "hot" reflective vertical strip adjacent to the variable horizontal one.

NotInControl
13-02-2014, 14:33
We are using a photoelectric IR sensor. We will dead reckon before the match; it acts as a digital input and reads true when it sees the retro-reflective tape, false otherwise. Process of elimination: if false, one goal is hot; if true, the other is. Simple, elegant solution. If we don't have to use vision, we find every way not to, although some years we can't get around it (i.e. 2012).

I am interested in this as well.

What sensor are you using? The hot goal is determined by a vertical and a horizontal rectangle being present.

Based on section 2.2.4 of the game manual: "Before the MATCH starts and throughout TELEOP, both dynamic VISION TARGETS are positioned such that the reflective material faces the FIELD."

So before a match, both the left and right sides are "hot." When autonomous starts, one side has its horizontal bar rotated away, indicating it is not hot, and the other side is hot.

How can you determine that both the horizontal and vertical targets are present with an IR sensor? I believe the IR sensor will always see the vertical bar and return true, no?

-Kevin

Caleb Sykes
13-02-2014, 15:04
My team and I are trying to use sensors on our robot to help us in autonomous, but we were wondering which sensor could be used to detect the hot LED lights on the field and where we could buy them.

sensor we are using: human eyeballs...
.
.
.
.
.
By using a Kinect on the driver's side during AUTO.

Now the question is, how many posts will there be before someone tells me that this is illegal? My guess: 4

DjScribbles
13-02-2014, 16:52
Don,

I was thinking of doing this too, did you buy the lights the field uses, or did you have access to a field that did? Once the bot is in the bag, we are working on hot goal detection, anything you can share?

I'm also quite curious about this. It had crossed my mind, but I didn't get any further than that with the idea.

Are the individual LEDs close enough together and bright enough that they don't simply appear as individual spots of light without making the filtering stage too permissive?

NotInControl
13-02-2014, 17:17
sensor we are using: human eyeballs...
.
.
.
.
.
By using a Kinect on the driver's side during AUTO.

Now the question is, how many posts will there be before someone tells me that this is illegal? My guess: 4


I believe what you are attempting to do IS perfectly legal.

brennonbrimhall
13-02-2014, 17:22
I believe what you are attempting to do IS perfectly legal.

Confirmed in Q&A #55 (https://frc-qa.usfirst.org/Question/55/are-we-allowed-to-use-the-kinect-as-part-of-our-driver-station-during-autonomous-mode-this-year).

DonRotolo
13-02-2014, 19:30
I was thinking of doing this too, did you buy the lights the field uses, or did you have access to a field that did?

No, we got LEDs from SuperBrightLEDs.com; they were a lot less expensive. Also see my PM to you, Scott.

Are the individual LEDs close enough together and bright enough that they don't simply appear as individual spots of light without making the filtering stage too permissive?

They're 2 or 3 inches apart, but the camera sees them as a 'blob' and not pinpoints of yellow. I don't have the details, but they tell me that the filter is relatively permissive because it's not expected to tell us anything more than "Yep, the LEDs are on." We're not using the LEDs for aiming or anything like that, just "Is it hot?"
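
In rough code terms, a permissive check like that might boil down to something like this (a sketch against the 2014-era WPILib image classes; the HSV numbers and area cutoff are placeholders you would tune on real images):

import edu.wpi.first.wpilibj.camera.AxisCamera;
import edu.wpi.first.wpilibj.image.BinaryImage;
import edu.wpi.first.wpilibj.image.ColorImage;
import edu.wpi.first.wpilibj.image.ParticleAnalysisReport;

public class HotGoalLeds {
    // Returns true if a big enough yellow blob is visible ("Yep, the LEDs are on").
    public static boolean ledsOn() {
        try {
            ColorImage frame = AxisCamera.getInstance().getImage();
            // Permissive yellow threshold in HSV; these numbers are placeholders.
            BinaryImage yellow = frame.thresholdHSV(20, 50, 100, 255, 100, 255);
            // Particles come back ordered largest-first.
            ParticleAnalysisReport[] blobs = yellow.getOrderedParticleAnalysisReports();
            boolean hot = blobs.length > 0 && blobs[0].particleArea > 200; // tuned cutoff
            yellow.free();
            frame.free();
            return hot;
        } catch (Exception e) {
            return false; // camera hiccup: fail safe and assume not hot
        }
    }
}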

Joe Ross
13-02-2014, 19:34
No, we got LEDs from SuperBrightLEDs.com; they were a lot less expensive. Also see my PM to you, Scott.
They're 2 or 3 inches apart, but the camera sees them as a 'blob' and not pinpoints of yellow. I don't have the details, but they tell me that the filter is relatively permissive because it's not expected to tell us anything more than "Yep, the LEDs are on." We're not using the LEDs for aiming or anything like that, just "Is it hot?"

They should be sure to test with these images: http://firstforge.wpi.edu/sf/go/projects.wpilib/frs.2014_vision_images

RaxusPrime
13-02-2014, 19:48
Confirmed in Q&A #55 (https://frc-qa.usfirst.org/Question/55/are-we-allowed-to-use-the-kinect-as-part-of-our-driver-station-during-autonomous-mode-this-year).


My mind=blown

safiq10
13-02-2014, 21:45
sensor we are using: human eyeballs...
.
.
.
.
.
By using a Kinect on the driver's side during AUTO.

Now the question is, how many posts will there be before someone tells me that this is illegal? My guess: 4

This is interesting! Could you explain this? We might just take this idea, but I want to fully understand it before we use it. Would it be similar to the hybrid mode we had in 2012?

Travis Hoffman
13-02-2014, 22:16
We are using a photoelectric IR sensor. We will dead reckon before the match; it acts as a digital input and reads true when it sees the retro-reflective tape, false otherwise. Process of elimination: if false, one goal is hot; if true, the other is. Simple, elegant solution. If we don't have to use vision, we find every way not to, although some years we can't get around it (i.e. 2012).

^^^ THIS.

Whippet
13-02-2014, 22:39
Confirmed in Q&A #55 (https://frc-qa.usfirst.org/Question/55/are-we-allowed-to-use-the-kinect-as-part-of-our-driver-station-during-autonomous-mode-this-year).

I never thought I would see my q&a question referenced so many times throughout the build season, especially since we're not even planning on taking advantage of it for now...

Caleb Sykes
13-02-2014, 22:40
This is interesting! Could you explain this? We might just take this idea, but I want to fully understand it before we use it. Would it be similar to the hybrid mode we had in 2012?

Exactly the same as hybrid mode in 2012. Plus, all of the code from that year is still incorporated in WPILib. Literally the only difference is that you have to provide your own Kinect instead of having a set station for it.
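
For anyone digging in, a rough sketch of reading the Kinect during autonomous with WPILib's KinectStick class from 2012 (the arm IDs and the threshold here are illustrative, not a spec; the default gesture set roughly maps arm positions to virtual joystick axes):

import edu.wpi.first.wpilibj.KinectStick;

public class KinectHotGoalSpotter {
    private final KinectStick leftArm = new KinectStick(1);   // player's left arm
    private final KinectStick rightArm = new KinectStick(2);  // player's right arm

    // Example mapping: a raised arm drives that virtual stick's Y axis,
    // so the human player behind the glass can signal which goal is hot.
    public boolean leftGoalHot() {
        return leftArm.getY() > 0.5;
    }

    public boolean rightGoalHot() {
        return rightArm.getY() > 0.5;
    }
}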

z_beeblebrox
13-02-2014, 22:44
For the people who said they were using a photoelectric sensor, what model are you using? How has it been working for you? My team is considering adding one and I'm curious about where to start.

yash101
14-02-2014, 12:25
AXIS camera with OpenCV. We also get distance measurements and goal coordinates. It works well!

Lightfoot26
15-02-2014, 00:53
For the people who said they were using a photoelectric sensor, what model are you using? How has it been working for you? My team is considering adding one and I'm curious about where to start.

these (http://www.alliedelec.com/search/productdetail.aspx?SKU=70167284#tab=specs) are working great!

tgross35
15-02-2014, 23:44
It looks like most options have been covered, but I'll throw in what we use. Retro-reflective tape directs reflected light straight back to its source, rather than off at an angle like a mirror, so this is how most teams (including ours) work it out. We use the Axis camera (available from AndyMark) with an LED ring around it (we got ours here and it works well: http://www.superbrightleds.com/moreinfo/led-headlight-accent-lights/led-angel-eye-headlight-accent-lights/49/). Just wire the camera into the 5V built-in wago on the PDB and hook the ethernet up to the D-LINK. We hooked the LEDs up to a spike with a 5A breaker that would turn on when the robot is enabled. From there, the programmers can use vision processing to detect the location of the robot and which goal is hot, based on the proportions of each rectangle and whether a horizontal rectangle is visible or not. Good luck!
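
To make that last step concrete: the hot-goal part reduces to an aspect-ratio test on the bounding boxes your pipeline finds. A sketch (the BoundingBox type and the ratio cutoff are assumptions to tune, based on the nominal 23.5"x4" horizontal and 4"x32" vertical targets):

public class HotGoalVision {
    public static class BoundingBox {
        public final int width, height;
        public BoundingBox(int width, int height) { this.width = width; this.height = height; }
    }

    // The horizontal target is wide and short, the vertical one tall and
    // narrow, so aspect ratio separates them cleanly.
    public static boolean isHorizontalTarget(BoundingBox box) {
        double ratio = (double) box.width / box.height;
        return ratio > 3.0; // cutoff is a placeholder; tune on the sample images
    }

    // The goal is hot if any detected particle looks like the horizontal target.
    public static boolean goalIsHot(BoundingBox[] particles) {
        for (BoundingBox b : particles) {
            if (isHorizontalTarget(b)) return true;
        }
        return false;
    }
}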

geomapguy
15-02-2014, 23:53
We're using the Kinect and it's working really well.

Lightfoot26
16-02-2014, 05:28
^^^ THIS.

I hope that's good!?...haha... If you have any questions feel free to ask!

Travis Hoffman
16-02-2014, 05:30
I hope that's good!?...haha... If you have any questions feel free to ask!

It is.

Also, garage door opener websites are your friend.

NotInControl
17-02-2014, 12:08
Just wire the camera into the 5V built-in wago on the PDB and hook the ethernet up to the D-LINK. We hooked the LEDs up to a spike with a 5A breaker that would turn on when the robot is enabled. From there, the programmers can use vision processing to detect the location


How was this electrical setup working for you?

Am I correct in assuming you are running the LEDs at 12V off a Spike, which is plugged into the regular (non-regulated) WAGO terminals of the Power Distribution Board?

I was curious to see how well this was working for you and whether your LED voltage dropped out during use of the robot. If no regulator is inline, then at the points when you drive, the voltage to your light rings will drop and they will lose intensity, which reduces lumens and the quality of the light returned from the retro-reflective tape.



-Kevin

NotInControl
17-02-2014, 12:12
these (http://www.alliedelec.com/search/productdetail.aspx?SKU=70167284#tab=specs) are working great!


Still curious about how these IR/laser sensors are being employed in this application.

How are teams determining a hot goal is present without a vision system, if two separate objects need to be seen (vertical and horizontal), and one object will always be in view (vertical)?

-Kevin

Alan Anderson
17-02-2014, 12:54
How are teams determining a hot goal is present without a vision system, if two separate objects need to be seen (vertical and horizontal), and one object will always be in view (vertical)?

You only need to check for the presence or absence of reflections from the horizontal target. A Class 1 laser sensor can be pointed at the dynamic vision target before the match begins, and it'll tell you whether or not the target is still there a moment after autonomous mode starts.
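
A minimal sketch of that check, assuming the laser sensor reads true on a reflection and is wired as a digital input (the channel number and the settle delay are made up):

import edu.wpi.first.wpilibj.DigitalInput;
import edu.wpi.first.wpilibj.Timer;

public class LaserHotCheck {
    private final DigitalInput laserSensor = new DigitalInput(2); // hypothetical channel

    // Call at the start of autonomous, with the sensor aimed at the dynamic
    // (horizontal) target before the match.
    public boolean aimedGoalIsHot() {
        Timer.delay(0.25); // give the field a moment to rotate the target away
        return laserSensor.get(); // still reflecting => target still there => hot
    }
}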

NotInControl
17-02-2014, 13:21
You only need to check for the presence or absence of reflections from the horizontal target. A Class 1 laser sensor can be pointed at the dynamic vision target before the match begins, and it'll tell you whether or not the target is still there a moment after autonomous mode starts.

Thanks, Alan,

I guess I can buy that it would work in theory... but my question is more along the lines of: how do you know you are pointing ONLY at the horizontal target when placing the robot on the field?

What does the person placing the robot on the field see/use to confirm the light is on the horizontal target, if the light is reflected back to the sensor?

In the case of a laser, maybe the person can place their eye near the sensor every time to confirm the location, but how do you do this with IR, which is not in the visible spectrum?

It seems like this is a cumbersome way to position the robot, especially if you have to position the bot and continuously place your eye near the sensor for confirmation. Am I overlooking something? Somehow you need to guarantee you are not looking at the vertical target, so you don't get false responses.

Does this system require another method for positioning the robot, to increase the likelihood that you are looking at the right point?

Regards,
Kevin

Lightfoot26
17-02-2014, 13:53
You only need to check for the presence or absence of reflections from the horizontal target. A Class 1 laser sensor can be pointed at the dynamic vision target before the match begins, and it'll tell you whether or not the target is still there a moment after autonomous mode starts.

You hit the nail on the head, Alan! :)

The sensor I posted (http://www.alliedelec.com/search/productdetail.aspx?SKU=70167284#tab=specs) actually has a built-in sensing light that illuminates when "the circuit is complete," for lack of a better term. This is all internal to the sensor. If the target is present, both a green and an amber light on top of the sensor illuminate; if there is power to the sensor but the target is not present, only the green light illuminates. That's the elegance behind a sensor of this design: it is very easy to recognize the presence of the target without any additional coding or alignment mechanism. We weren't sure this was a route we could take pre-Team Update 1... but after we learned that pre-match setup would have both targets revealed, we went for it!

Like I said before, camera stuff consumes a lot of resources (at least on our team), e.g. coding time, debug time, and bandwidth, among others, and we only use vision once we've ruled out all other possibilities. Five extra points didn't seem worth consuming those resources to us, but in true 1625 nature... we still wanted those points! haha. So we opted for the laser. Team 2451 (formerly 2949) did a similar laser setup in 2012, from which we drew inspiration.

EDIT: In regard to how we know we are pointing ONLY at the horizontal target when placing the robot on the field: I guess it is a mixture of what I said above, the retro-reflective properties of the tape, and "trial and error," really... The retro-reflectivity of the tape helps us ensure there is no "spillage" of light onto the vertical segment of tape, and "trial and error" improves our drive team's ability to position the sensor accurately. This "trial and error" methodology seems cumbersome, yes, but most dead-reckoned auton systems have some degree of this process.

NotInControl
17-02-2014, 15:31
You hit the nail on the head, Alan! :)

The sensor I posted (http://www.alliedelec.com/search/productdetail.aspx?SKU=70167284#tab=specs) actually has a built-in sensing light that illuminates when "the circuit is complete," for lack of a better term. This is all internal to the sensor. If the target is present, both a green and an amber light on top of the sensor illuminate; if there is power to the sensor but the target is not present, only the green light illuminates.

I agree with "don't use vision if you don't have to." In 2012, we were a team that had no vision on our robot. Our drivers used a line marker at the tail of the robot to line up with the center of the key to make key shots. It was actually a point of pride after a while to have the performance we did without vision. We ranked 1st seed at both regionals we attended that year, so it worked out well for us.

This year, we have vision running on a BeagleBone using OpenCV and FFmpeg. It was a system we received from our friends on 118, and we have been modifying it for our specific use.

I was curious about placement of the robot because your initial post mentioned it was an IR sensor. I didn't see a wavelength specified in the link you provided, but I do honestly hope this system works out for you.

It will be pretty cool to see such a simple system work reliably in auton. My fear would be the initial lineup, and shooting prematurely because I was looking at the horizontal target instead of the vertical, but it sounds like you have a plan to overcome that.

Good luck.

Regards,
Kevin

Lightfoot26
17-02-2014, 16:38
Sounds like a pretty neat setup! I hope it works out for you! I am anxious to see how well our system performs. It's hard to tell at the moment because I am at college 300 mi away from my former team (the one I now mentor, sorta, lol)! We shall see! :)

I'll provide follow-up and/or perpetuate this conversation when I know more! Good luck!!

nxtmonkeys
18-02-2014, 00:23
What else could be done to detect the hot high goals? My team doesn't have a Kinect device.

Yipyapper
18-02-2014, 01:12
I set up an Axis camera at the front of the bot and sent the video feed into RoboRealm to find where the saturated points are in my field of view. Then, since the threshold range is fairly open to avoid missing the LEDs (just a precaution), I blob them into objects, take out all but the biggest one, and put a crosshair value at the centre of the object. This way, despite lights above the field, white banners lying around, etc., we can find where the largest object on the field is. The angle of our camera also allows us to see both sides of the field; even if they aren't 100% in view, the object is filled large enough to outdo any other possible source of light.

The X and Y values of the crosshair are then sent through NetworkTables to the cRIO, and a final global variable in the command-based code I use tells me which side starts out hot: left or right. If it's hot on the side I'm on, I shoot and move forward while retracting our arm (maybe a video will come soon!). If it isn't, then a timer I made waits 5.5 seconds until our goal inevitably goes hot, then fires. If it's the centre I'm shooting at, then I use that global variable (isRight) to go forward, spin a bit with a PID loop, and fire.
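
Roughly, the decision logic looks something like this on the cRIO side (a sketch; the table name, key, and image midpoint are assumptions standing in for my actual setup):

import edu.wpi.first.wpilibj.Timer;
import edu.wpi.first.wpilibj.networktables.NetworkTable;

public class HotSideAuto {
    private static final double IMAGE_MIDPOINT_X = 160; // half of a 320px frame (assumed)

    public static void run(boolean startingOnRight) {
        NetworkTable table = NetworkTable.getTable("RoboRealm"); // hypothetical table name
        double crosshairX = table.getNumber("COG_X", -1);        // hypothetical key

        boolean hotIsRight = crosshairX > IMAGE_MIDPOINT_X;
        if (hotIsRight == startingOnRight) {
            shoot(); // our side is hot: fire immediately
        } else {
            Timer.delay(5.5); // wait for our goal to become hot, then fire
            shoot();
        }
    }

    private static void shoot() { /* fire the shooter; robot-specific */ }
}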

The 2-ball auto does some other hoomawazits, and our three-ball will only focus on whether the goal is hot after everything else is done. It would be nifty to have a 3-ball hot auto, but one can only dream.

I think I went out of vision processing and into full autonomous. Oh well, it's late and six weeks of programming really changes how you function ;)

Lightfoot26
27-02-2014, 20:51
I hate to say this, but I received an email regarding the photoelectric sensor I posted, and upon further review of the links, I noticed I gave the incorrect sensor! The sensor I linked was the one we originally attempted to use with no luck. The correct sensor can be found here (http://www.alliedelec.com/search/productdetail.aspx?SKU=70167439). I am sorry for the confusion I may have caused anyone trying to use a similar setup. I wish you all luck!

Ed Law
13-03-2014, 00:37
We're using the Kinect and it's working really well.
What is your experience with using the Kinect at the regional you attended? How did you set it up at the driver station, and who was the person (driver, operator, drive coach?) the Kinect was sensing? Was there any problem with 3 people standing close together behind the driver station?