Finding Distance From Driver Station

Right now I’m playing with an idea for finding position based on the time it takes for a WLAN message to make a round trip.

My goal is to do it in a way that would be legal, specifically with respect to R58.

The goal is to ping both the OPERATOR CONSOLE and the Field Management System. By knowing the positions of both of those and the distance from your robot to each, you should be able to estimate your position.
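
Roughly, the math behind that is intersecting range circles around two known anchors. Here’s a rough Python sketch of the geometry (the anchor coordinates and distances are made-up numbers, not real field data):

```python
# Sketch: estimate position from distances to two known anchors.
# Anchor coordinates and measured distances are made-up values.
import math

def circle_intersections(p1, r1, p2, r2):
    """Return the 0, 1, or 2 points where two range circles cross."""
    (x1, y1), (x2, y2) = p1, p2
    d = math.hypot(x2 - x1, y2 - y1)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []  # no intersection (or concentric circles)
    a = (r1 ** 2 - r2 ** 2 + d ** 2) / (2 * d)
    h = math.sqrt(max(r1 ** 2 - a ** 2, 0.0))
    xm = x1 + a * (x2 - x1) / d
    ym = y1 + a * (y2 - y1) / d
    return [(xm + h * (y2 - y1) / d, ym - h * (x2 - x1) / d),
            (xm - h * (y2 - y1) / d, ym + h * (x2 - x1) / d)]

# Hypothetical anchor positions (metres) and range estimates:
fms_ap     = (0.0, 4.1)   # assumed field AP position
op_console = (0.0, 0.0)   # assumed OPERATOR CONSOLE position
print(circle_intersections(fms_ap, 10.0, op_console, 9.0))
```

Two range circles generally cross at two points, so a third reference (or knowing which half of the field you started on) would be needed to pick the right one.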

I was wondering if anyone has any idea how to ping the OC or the FMS.

I am hoping that if this proves to be successful, beacons (access points) could be put on the field to make it more accurate. Similar to 2004, but using WLAN technology.

One KOP wireless bridge (either model WGA600N or WET610N) is the only permitted mechanism for communicating to and from the ROBOT during the MATCH. The signal output from the wireless bridge must be directly connected to Port 1 of the cRIO-FRC with an Ethernet cable. All signals must originate from the OPERATOR CONSOLE and/or the Field Management System, and be transmitted to the ROBOT via the official ARENA hardware. No other form of wireless communications shall be used to communicate to, from or within the ROBOT (e.g. radio modems from previous FIRST competitions and Bluetooth devices are not permitted on the ROBOT during competition).

Before you go very far with this, try something:

Figure out how long it would take an electromagnetic wave to travel from one corner of the field to the other. I believe you will find that it is a tiny fraction of the total ping time you would observe.
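
For a rough sense of scale (treating the field as roughly 27 ft by 54 ft):

```python
# Back-of-the-envelope: propagation time across the field vs. a ping.
import math

c = 299_792_458.0                             # speed of light, m/s
diagonal_m = math.hypot(27, 54) * 0.3048      # ~60 ft diagonal -> ~18.4 m

one_way = diagonal_m / c
print(f"one-way propagation: {one_way * 1e9:.0f} ns")      # ~61 ns
print(f"round trip:          {2 * one_way * 1e9:.0f} ns")  # ~123 ns
```

So the quantity you’d be trying to measure is on the order of tens of nanoseconds, while a typical WLAN ping is on the order of milliseconds, and its jitter alone is thousands of times larger than the signal you’re after.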

I don’t follow what useful data you are looking to collect.

Are you looking for the latency of the robot’s wireless bridge to FMS Access Point to derive the robot’s distance from the AP?
Sounds like a step towards Zigbee triangulation…

All the rest of the communications are over a wired network with near-identical lengths of Ethernet cable, regardless of the Driver Station’s physical location. They’ll all be at identical (wired) distance from the field Access Point, so I don’t see that data as being useful.
It seems like any single ping (FMS or DS) would produce the same information, i.e. distance to the AP, or maybe the length of the field Ethernet cables, the speed of the network switches, the frequency of collisions if you stretch it…
Assuming network collisions and retransmitted TCP/IP packets don’t mess up your timings entirely…

http://www.tkn.tu-berlin.de/publications/papers/tkn_04_16_paper3.pdf

Using this technique they were able to get accuracy to within 4 m.
Not great, but it would be something.
Also, if you had multiple APs to reference, you could probably get better accuracy by looking at where their range estimates overlap. That would eliminate some outliers.
Also, based on what area you’re in, you could look for points of reference: bumps, field borders, tunnels, that kind of stuff.
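
Something like this sketch is what I mean by looking at the overlaps; the AP positions and ranges are made up, and it just grid-searches the field for the point that best fits all the range estimates at once:

```python
# Sketch: combine noisy range estimates from several APs by grid-searching
# the field for the point whose distances best match the measurements.
# AP positions, ranges, and field size (~54 ft x 27 ft) are illustrative.
import math

def locate(aps, ranges, field_w=16.5, field_l=8.2, step=0.1):
    """Return the grid point minimizing the sum of squared range errors."""
    best, best_err = None, float("inf")
    steps_x = int(field_w / step) + 1
    steps_y = int(field_l / step) + 1
    for i in range(steps_x):
        for j in range(steps_y):
            x, y = i * step, j * step
            err = sum((math.hypot(x - ax, y - ay) - r) ** 2
                      for (ax, ay), r in zip(aps, ranges))
            if err < best_err:
                best, best_err = (x, y), err
    return best

aps    = [(0.0, 0.0), (16.5, 0.0), (8.25, 8.2)]  # hypothetical AP positions (m)
ranges = [9.3, 8.1, 4.6]                         # hypothetical range estimates (m)
print(locate(aps, ranges))
```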

This is only a proof of concept; as Mark pointed out, the ideal solution would be to use Zigbee, which is much more accurate. I would also like to see robots use Zigbee for robot-to-robot comms.

Zigbee is definitely what I want. I am hoping that if this even sort of works, it will strengthen the argument to allow us to use Zigbee devices on the robots.

And yes, I would like to use the latency to estimate the robot’s position. I think it could give me enough accuracy to estimate what zone it is in, which is all I would need for localization. Even something as bad as a 4 m radius would still help, i.e. knowing roughly what heading to point the robot to see a target.

In that paper, they were directly manipulating the WLAN card to send and receive raw WLAN frames, not IP-layer pings. In FRC you do not have that kind of control over the access point or the WLAN radio on the robot, so the method they employed would not be possible. Also, even with their method (which you can’t reproduce), they achieve 4 m accuracy, which is about 1/4 of the length of the field or 1/2 of the width, so even at that (unobtainable) accuracy it doesn’t seem very useful.

Also keep in mind that this method only gives you a distance, not a direction, so even if it worked you would only be able to locate your robot along a circle centered on the AP, with that distance as the radius. And to further complicate things, the access point used at the field is not located in a pre-defined position; it will vary between events, and possibly even during the day of an event if it gets moved a bit by field personnel.

It’d be kind of interesting. You’d only be able to derive the radius of an arc around the field AP with this method, probably a distance of 10 to 60 feet, so even with a best-case error of +/-12 feet it won’t place you in one of this year’s zones.

You’d only want to ping the AP (a 10.0.0.x address), if you can, to keep other network latency from throwing off your results. For your tests at home, ping your wireless router at 10.xx.yy.4. That’ll be a nice experiment that you don’t have to confine to the dimensions of an FRC field.
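
For the home experiment, a quick-and-dirty sampler along these lines (it assumes a Unix-style ping command, and the router address is a placeholder) will show how much the round-trip time jitters compared to the nanoseconds of actual propagation delay:

```python
# Quick-and-dirty RTT sampler for home testing. Assumes a Unix-like
# system where "ping -c 1" works; ROUTER is a placeholder address.
import re
import statistics
import subprocess

ROUTER = "192.168.1.1"   # replace with your router, e.g. 10.xx.yy.4
samples = []

for _ in range(50):
    out = subprocess.run(["ping", "-c", "1", ROUTER],
                         capture_output=True, text=True).stdout
    match = re.search(r"time=([\d.]+) ms", out)
    if match:
        samples.append(float(match.group(1)))

if samples:
    print(f"min {min(samples):.2f} ms, "
          f"mean {statistics.mean(samples):.2f} ms, "
          f"max {max(samples):.2f} ms, "
          f"stdev {statistics.pstdev(samples):.2f} ms")
```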

I guess my goal of trying to do it without breaking any rules isn’t going to work. :frowning:

Also, I would be trying it with multiple access points, and this phase would just be to get the architecture in place for this kind of triangulation. I don’t want to do it via WLAN, but the cRIO Zigbee module is $700, and I don’t have $2100 kicking around to build a Zigbee triangulation setup, as much as I would love to (anyone have grant money for such a spike?? haha). Maybe I’ll purchase some Zigbee chips and try it that way. http://www.trossenrobotics.com/bioloid-zigbee-wireless-module-set.aspx?feed=Froogle ($25 a sensor)

http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=04463608

Thanks for all the feedback everyone!

(Hence why I switched from EE to CS :slight_smile: )

I don’t think this qualifies as thread-jacking as I think it still applies…

What about putting a beacon light on your driver station and using the camera to locate it? If you talked to your alliance partners and got the teams at stations 1 and 3 to each have a light, you could triangulate position using two cameras.
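
A rough sketch of the math, with made-up beacon positions and bearings (in practice you’d combine the camera angles with a gyro heading to get bearings in field coordinates):

```python
# Sketch: triangulate position from absolute bearings to two beacons at
# known positions (e.g. lights at stations 1 and 3). All numbers are
# illustrative.
import math

def triangulate(beacon1, bearing1, beacon2, bearing2):
    """Intersect the two bearing rays (bearings in radians, field frame)."""
    # Each sighting constrains the robot to a line: -sin(t)*x + cos(t)*y = c
    a1, b1 = -math.sin(bearing1), math.cos(bearing1)
    c1 = a1 * beacon1[0] + b1 * beacon1[1]
    a2, b2 = -math.sin(bearing2), math.cos(bearing2)
    c2 = a2 * beacon2[0] + b2 * beacon2[1]
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-9:
        return None  # bearings nearly parallel -> no unique fix
    return ((c1 * b2 - c2 * b1) / det,
            (a1 * c2 - a2 * c1) / det)

# Beacons 6 m apart on the driver station wall, robot actually at (3, 4):
print(triangulate((0.0, 1.0), math.radians(-135),
                  (0.0, 7.0), math.radians(135)))   # -> (3.0, 4.0)
```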

–Ryan

Think we could use the beacons that already exist??

The red vs. blue would be really nice, because you could look at either side.

Good idea!

Why not just software localization? Just keep track of where you are through software.

Ooh… Sounds like I have a new offseason project :smiley:

Good idea, yourself

Maybe I don’t understand what you’re suggesting; you would be using software to detect the position of the lights from the camera images and triangulate your position.

If you’re suggesting (the fairly standard method of) dead reckoning using encoders/accelerometers/gyros, the advantage of the methods talked about in this thread is that you can determine absolute position on the field. Relative localization, on the other hand, has accumulated error: at each point in time you’re only calculating the change in position, so successive errors in the sensor readings lead to increasing errors in the position estimate.

–Ryan

Take this year’s game, for instance: all sorts of disorientation would occur when you cross the bump. So by having an absolute reference, you could re-initialize your position based on that reference. At that point you could switch to a sensor-driven localization similar to what you’re suggesting.

It’s kind of like zeroing a gyro based on a compass when you suspect drift.
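
In code, that “zeroing” idea might look something like this sketch (the names and numbers are made up): dead-reckon between sightings, then correct whenever the absolute reference is available.

```python
# Sketch of the "zeroing" idea: dead-reckon between beacon sightings and
# correct toward the absolute estimate whenever one arrives.
import math

class Localizer:
    def __init__(self, x=0.0, y=0.0):
        self.x, self.y = x, y

    def update_odometry(self, distance, heading_rad):
        """Dead-reckon: advance by encoder distance along the gyro heading.
        Small errors here accumulate into position drift over time."""
        self.x += distance * math.cos(heading_rad)
        self.y += distance * math.sin(heading_rad)

    def absolute_fix(self, x, y, trust=1.0):
        """Pull the estimate toward an absolute fix; trust=1.0 snaps fully."""
        self.x += trust * (x - self.x)
        self.y += trust * (y - self.y)

loc = Localizer()
loc.update_odometry(2.0, math.radians(10))  # drifting dead-reckoned estimate
loc.absolute_fix(1.9, 0.5)                  # e.g. from a beacon sighting
print(loc.x, loc.y)
```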

Thinking in a different direction: If you were on the field, how would you find your location?

If it were me, I’d just look around, find objects I recognize, and estimate the distance to two or three of them by their relative size in my field of vision.

Turning that into a machine function, you can have your camera rotate until it finds your home driver station wall (this assumes you are able to see it). Hand each of your alliance partners a battery-powered flashing LED, each with a different blink pattern. Your camera sees the blinks, knows which one is where, and calculates location based on the separation between the LEDs (how close or far apart they appear) and the angle of camera rotation.
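
The separation-to-distance part could be as simple as this sketch (the focal length and pixel values are made up, not calibration numbers for any particular camera):

```python
# Sketch: range from how far apart two known markers appear in the image.
import math

def range_from_separation(real_sep_m, pixel_sep, focal_px):
    """Two markers real_sep_m apart subtend a smaller angle the farther away they are."""
    angle = 2.0 * math.atan(pixel_sep / (2.0 * focal_px))  # subtended angle, rad
    return (real_sep_m / 2.0) / math.tan(angle / 2.0)

# LEDs 0.5 m apart that appear 40 px apart with a 400 px focal length:
print(range_from_separation(0.5, 40, 400))   # ~5 m
```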

Sure, there are a few issues to address. For example, what to do when you’re up against the driver station wall. Perhaps four MaxBotix distance sensors can help there. Whatever, these issues can be solved.

The point is, consider how nature does this, and emulate. Nature usually has a pretty good algorithm.

This was the idea behind looking at the driver station lights.
Also, these lights are on either side, so you would just have to look to one side or the other; it wouldn’t be side-dependent. By seeing all three beacons in one scan you could tell where you were.

I think it’s basically the same idea.

Have you considered trying a sonar sensor to find your distance?
That might not have a narrow enough beam though.

(Not to pick on you specifically, several people mentioned similar ideas)

I know lights are low-tech, but this would still be deemed “wireless communication,” which is prohibited by R58, because your robot would be receiving information from a device not through wires. There is also the logistical problem that your allies might not (or might not be able to) put the lights right where you want them. I really like the “triangulate based on lights” approach, but you’ll have to do it using field lights, both for the sake of consistency and for legality.

I think this could be a very interesting exercise in deep reading the rules.

Design a system, and ignore the rules for the moment. Make certain that all teams can use your system at the same time. Think about what percent of teams it would be accessible to. Then, look at what rules would need to change to allow the system, and what type of impact that would have on the big picture.

Partially beam size, partially signal loss over distance (the two are related): most hobby sonar sensors I’ve used have an effective range of about 15 ft. I’ve seen models advertised with greater range, but that’s my experience. You also have the problem that with just a distance sensor you can’t tell what you’re measuring, so it would be impossible to know whether you’re measuring the distance to a field element, another robot, or a game piece. Distance sensors are very useful for measuring proximity, for example, but for ubiquitous absolute positioning on a FIRST field, it seems like you wouldn’t get the kind of accuracy I would want. IMHO.

–Ryan

Exactly. My goal is to have a solution that works similarly to NorthStar localization. The idea is that you can identify a beacon and then determine your position relative to it. Granted, NorthStar requires a ceiling, but the idea is still the same.

http://www.evolution.com/products/northstar/