I need any help I can get. I'm willing to put in the work and learn what to do, but I need someone who could possibly help me set up a Limelight in LabVIEW. This is our team's first year with a Limelight and I have absolutely no clue where to start. I'm the only programmer on our team and I have no mentor with any programming experience. I've updated the firmware and connected it to the robot. I have seen some of the examples, but I don't know how some of it works or what it means. Any help and advice you can give me is greatly appreciated.
Hey, it would seem your teammate posted about the same thing over here…
Would it be better to keep the conversation in one spot? Or was there a distinct and separate problem you were looking to solve?
Keeping the conversation here would be the best. I'm just trying to learn anything I can, because I know nothing.
Ok, I think my first question is going to be the same:
To start: What are you looking to accomplish by adding a limelight to your robot?
Or, equivalently, what new functionality do you hope your limelight will enable for the robot?
One of the big things I would like to be able to do is have the camera lock on to the high goal and tell the robot how to adjust the drivetrain and shooter to score the ball, if that is possible; from videos I've seen, it looks like it is. To start, though, another thing I would like to learn (hopefully something easier) is how to get the camera to track a ball and/or the reflective tape.
This could be two different things.
- Target left and right movement. It depends on how your robot physically targets the goal. Is your shooter mechanism on a turret or do you spin the entire robot? I believe that the Limelight, once configured, publishes horizontal and vertical offsets. You can use the horizontal offset to spin your robot towards the target. This is a “position control” problem. You are trying to control the error to zero. Here is a similar post showing a sample program that does this control. Some concepts from this might apply to your situation too. Maintain Same Drivetrain Speed While Following Limelight Target - NI LabVIEW - Chief Delphi
Also there is a chapter here that talks more about position control. Secret Book of FRC LabVIEW (and control logic) version 2.07 - Programming - Chief Delphi
- If your shooter can move up and down, you might be able to use data from the limelight to set the height. This is a little more complex. You could use the height offset to control the height. You might need to add a constant to the value returned from the limelight, depending on how the limelight is mounted. This still may not work, since it doesn't account for the ball's flight trajectory at different distances. If the limelight returns a distance value, you might be able to populate a table of height correction adjustments for different distances (a rough text sketch of that table-lookup idea is below). Hope this helps.
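Something like this, in text form. In practice this would be LabVIEW array/interpolation blocks; the Python below and every number in it are just placeholders to show the table-lookup idea, not values from any real robot.

```python
# Placeholder sketch of the "table of height corrections" idea.
# Every number here is made up - you would fill the table in by test-shooting
# at several known distances and recording the setting that worked.

# (distance_reading, shooter height/hood setting) pairs, sorted by distance_reading.
# The distance_reading can be anything that changes with distance (ty, target area, etc.)
CALIBRATION_TABLE = [
    (5.0, 20.0),
    (10.0, 35.0),
    (15.0, 45.0),
    (20.0, 52.0),
]

def height_setting_for(distance_reading):
    """Linearly interpolate a shooter height setting from the calibration table."""
    if distance_reading <= CALIBRATION_TABLE[0][0]:
        return CALIBRATION_TABLE[0][1]        # clamp below the table
    if distance_reading >= CALIBRATION_TABLE[-1][0]:
        return CALIBRATION_TABLE[-1][1]       # clamp above the table
    for (d0, h0), (d1, h1) in zip(CALIBRATION_TABLE, CALIBRATION_TABLE[1:]):
        if d0 <= distance_reading <= d1:
            frac = (distance_reading - d0) / (d1 - d0)
            return h0 + frac * (h1 - h0)
```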
So for this, the limelight docs will be your friend. For example, take a look here - they have instructions on how to set up the camera to track a retroreflective tape target.
This can be done independently of anything in the labview robot code, and is probably the best place to start.
For sure, definitely possible.
The key is to continue to refine the request further - you’re not yet at the point where you can design software to solve the problem, at least as stated.
Building off of Jim’s comments above, you’ll want to answer - in what way will the drivetrain and shooter need to be adjusted? Perhaps think through, how does your driver adjust them today? Some possibilities could include:
- Turning to the left or to the right
- Driving forward or backward
- Changing the shooter motor speeds
- Changing a hood position
Perhaps it’s some combination of the above. Regardless, I believe all of them can be handled separately - I’d definitely recommend biting them off one at a time and getting each to work before moving to the next.
There will be some nuances to each, but all will be solved in fundamentally the same way: Given some measurement from the camera, and some notion of what you want the robot to be doing, generate a command that emulates what the human driver would have done.
This is just a high-level description of some of the stuff Jim has referenced - hopefully an alternate take to help get you in an effective mindset as you start to look through examples and documentation.
So, first things first - See if you can follow the limelight docs to get your camera up and identifying a vision tape target.
In parallel, do some thinking about the types of things your driver does today to line up for shooting toward the high goal. See if you can frame it using those examples above (or something similar). From there, pick which aspect you think is most important and valuable, and we can start to bite that one off first.
Again, I'm not familiar with the Limelight or your robot. As mentioned above, you need to consider what you need for your robot. Here is a simple vertical position control example. It is similar to the horizontal spin position control, except it uses a vertical adjustment curve based on a pseudo distance measurement, or object size. (As long as the value changes with distance, you don't really care what it is.) I'm assuming the Limelight can pass something similar back to you through network tables. Note that control is stopped if anything appears to be wrong. This assumes up/down control is done with a motor; a rough text sketch of the same idea follows.
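For clarity, here is the same idea written out as a text sketch. All of the names, the gain, and the limits below are placeholders, not values taken from the actual example.

```python
# Rough sketch of the vertical position control described above.
# desired_height comes from a curve/table keyed off a pseudo distance measurement;
# measured_height is whatever feedback you have for the up/down mechanism.

def vertical_motor_command(target_seen, desired_height, measured_height, kp=0.02):
    """Return a command for the up/down motor, or 0.0 if anything looks wrong."""
    if not target_seen:
        return 0.0                      # stop control on missing/bad data
    error = desired_height - measured_height
    cmd = kp * error
    return max(-0.3, min(0.3, cmd))     # clamp so a bad reading can't slam the mechanism
```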
Whoa, whoa, before we even get close to the limelight: do you have your drivetrain working with basic manual controls, and do you understand that code? Is the code currently set up for tank drive style or arcade style controls? If you're not sure, or if you're past that point, that's fine, but just so we start on the same page.
Additional questions, which of the following are you familiar with?
- Case structure
- Network tables
- Why you should use a while loop and delay function in the Periodic Tasks VI and not in the TeleOp VI
If you’re looking for basic target tracking, my team ended up with this. I’m assuming you’re in LabVIEW because that’s what the title says.
Sorry about the broken cluster, that’s just RobotDriveDevRef.
Basically, the way we have it programmed is to read the offset angle on the X axis (tx) and subtract 1 until the offset angle hits zero. The nested case structure basically just says if the offset angle is equal to zero, stop moving, else, if the angle is not zero, keep moving until it hits zero.
The 0.65 is just the power applied to the drive base so it doesn’t run into the wall or flip over.
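For anyone reading along, here is roughly that logic in text form. This is only my reading of the description above, not the actual LabVIEW code.

```python
# Text-form sketch of the tracking logic described above (not the actual LabVIEW).
# tx is the Limelight's horizontal offset to the target, in degrees.

TURN_POWER = 0.65   # fixed rotation power so the robot doesn't slam into the wall

def tracking_command(tx):
    """Turn toward the target until the offset angle reaches zero."""
    if tx == 0:
        return 0.0            # offset is zero: stop moving
    elif tx > 0:
        return TURN_POWER     # target to the right: keep turning right
    else:
        return -TURN_POWER    # target to the left: keep turning left
```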
I am set up on tank drive with basic controls and I do understand how that works, along with case structures, but as for the Periodic Tasks VI, I have not done much or any of that, so I'm not familiar with that section.
I was asking about the drive of the robot because I started looking at the 2019 Deep Space examples Limelight put out and was wondering how to switch that to tank drive, or even if I should use that example at all. Sorry for the misunderstanding.
Okay, thank you for laying out a method of thinking about this. I guess my next question for you would be: where do you recommend starting? Should I start with getting the camera to recognize the reflective tape? Or is there something else I should do first?
If I could suggest (but not dictate) a plan of attack, start to work toward just getting the robot to, on command, turn left or right to face toward the target.
Some concrete substeps to accomplish this:
Part 1 - Gather Sensor Data
- Spend some time working on just the camera to get it identifying the vision tape
- Add enough labview code and camera configuration to print out the angle the target is at relative to the camera (this should be some output from the limelight, and not require much real calculation at all; a minimal text sketch of what's being read is shown after this list)
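If it helps to see the idea in text form, this is roughly all that step amounts to. The sketch below uses pynetworktables rather than LabVIEW, assumes the standard Limelight table and key names ("limelight", "tx"), and the server address is a placeholder for your team number.

```python
# Minimal sketch: read and print the Limelight's horizontal offset angle.
# In LabVIEW you would do the equivalent with the NetworkTables read VIs
# inside Periodic Tasks.

import time
from networktables import NetworkTables

NetworkTables.initialize(server="10.TE.AM.2")   # placeholder: your roboRIO address
limelight = NetworkTables.getTable("limelight")

while True:
    tx = limelight.getNumber("tx", 0.0)   # horizontal offset to target, in degrees
    print("Target angle (tx):", tx)
    time.sleep(0.1)                       # ~10 Hz is plenty for a printout
```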
Part 2 - Design a Control Strategy
This is where knowing why the examples are set up like they are is probably more important than just seeing an example. Since we’re constrained to just talking about rotating the robot left and right, I can walk you through some specifics.
The algorithm I’ll describe is the “P” component of a PID controller, and should suffice to start.
The algorithm takes two inputs:
- The angle between the camera centerline, and where the target was actually detected at
→ This should be coming from the limelight, specifically the work you did in part 1.2
→ This will be denoted as \theta_{act}, or “Actual Angle”
- The angle you want the target to be at, relative to the camera centerline, for your shots to go into the goal
→ This is most likely “0 degrees”, unless your camera and/or shooter aren’t centered.
→ This will be denoted as \theta_{des}, or “Desired Angle”
The algorithm generates one output: A left/right rotation command, just like your left/right joystick. We’ll call this cmd.
For the sake of the following math, assume that positive angles mean “target is to the right” and negative angles mean “target is to the left”. Additionally, assume that positive rotation commands mean the robot will turn toward the right, and negative commands mean the robot will turn toward the left.
Assume as well that all angles are in units of degrees (not radians), and the output command is in the usual FRC scaling (i.e., between -1.0 and 1.0).
Finally, there is one tuned parameter, called the proportional gain (which we'll denote K_p). This is just a positive, constant number. I don't know exactly what it has to be, as it will depend on many parameters of your drivetrain. The least-academically-intense method to find an appropriate value for it is to guess and check, though I'll talk a bit more about it later.
The control law is the math equation that converts all inputs and parameters into the output value. For a simple proportional controller in your use case, use the following control law:
cmd = K_p * (\theta_{act} - \theta_{des})
Just one multiplication and one subtraction - that’s all.
Spend a bit of time thinking about how this works - I believe you'll be able to convince yourself it's reasonable. For example, what value does cmd take in the following scenarios:
- Target is slightly to the right
- Target is slightly to the left
- Target is waaayy off to the left
- Target is dead on centered.
Do each of those cmd values roughly reflect what a driver would have done themselves?
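If you want to check your answers, here is the control law in text form with some made-up example angles plugged in. K_p = 0.033 is just the starting guess discussed further down, and \theta_{des} is assumed to be 0.

```python
# The P-only control law from above, with made-up example angles.
# Positive angle = target to the right; positive command = turn right.

K_P = 0.033       # starting guess, see the tuning discussion below
THETA_DES = 0.0   # want the target centered

def rotation_cmd(theta_act):
    return K_P * (theta_act - THETA_DES)

print(rotation_cmd(5.0))    # slightly right:      +0.165 -> gentle right turn
print(rotation_cmd(-5.0))   # slightly left:       -0.165 -> gentle left turn
print(rotation_cmd(-40.0))  # way off to the left: -1.32  -> hard left (clamp to -1.0 in practice)
print(rotation_cmd(0.0))    # dead-on centered:     0.0   -> no rotation
```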
Part 3 - Integration
Add a switch block, and drive it using some button on the driver controller. When the button is held down, the switch should select the output of the control law from Part 2. When not held down, the switch block should use the command from the left/right joystick. Route the output of the switch to your existing drivetrain logic.
The goal here is that the button changes whether the joystick or the camera is in control of the drivetrain's rotation. This, in turn, causes the button to act as your "go line up" button.
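In text form, the switch logic is nothing more than this sketch (names are placeholders; in LabVIEW it's a Select node or a case structure):

```python
# Sketch of the Part 3 switch: the button decides who controls rotation.

def rotation_output(align_button_held, joystick_rotation, camera_rotation_cmd):
    """Camera command while the button is held, joystick command otherwise."""
    if align_button_held:
        return camera_rotation_cmd   # output of the Part 2 control law
    return joystick_rotation         # normal manual driving
```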
Part 4 - Tune
Start with K_p equal to zero, deploy your code, and attempt to auto-align with your button. Verify that the left/right joystick doesn't do anything while the button is held down. As long as K_p is zero, no motion should occur while the button is held down (why? Quickly run through the math in the control law - it should hopefully be pretty obvious).
Bump K_p up to a small positive value and try again. Observe robot behavior.
Does it still not move at all? Or maybe creep slightly in the correct direction, but not quite get there? Make K_p bigger.
Does it spin way too fast and wildly out of control? Be careful! Make K_p smaller and try again.
Your goal should be to arrive at a value of K_p which gets you close enough to the target to make the goal, as quickly as possible, but without spinning completely out of control.
Additional Notes
Signs and units matter! Be sure your code aligns with the above assumptions, or modify the control laws to match your code’s assumptions.
LabVIEW supports PID blocks that do all this math for you. You can certainly use them, though I won't be able to advise on any specifics. For the P-only algorithm above, since it's just subtraction and multiplication, I felt it was worthwhile to show you the internals of how and why it works, rather than hide it from you.
For picking a starting value of K_p, I suggest a value of 0.033. How did I get this number? I made some assumptions. I guessed that a driver might start off being 15 degrees misaligned from the target. Furthermore, I assumed that at 15 degrees of misalignment, starting with a motor command of 0.5 would be reasonable to quickly (but not violently) correct for that error. I plugged these assumptions into the above control law and solved for K_p. Again, these are assumptions - I don't know for sure if they'll work on your robot. But it should give you a ballpark of where to start.
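Written out, that back-of-the-envelope calculation (with \theta_{des} = 0) is just:

K_p = cmd / \theta_{act} = 0.5 / 15 \approx 0.033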
Furthermore, this line of thinking can give bounds on reasonable values of K_p. If you told me "I picked K_p=0.001", I'd say that's probably unreasonable - 75 degrees of error only yields a rotation command of 0.075 - way too tiny to meaningfully move the robot. On the other end, if you said "I picked K_p=1.0", I'd also say unreasonable, as that means we're at full motor command with only one degree of error. Way too hammer-fisted.
Finally, spend some time considering what should happen if the limelight does not actually see a target. What should the motor command output be? Add logic to account for this.
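One common approach, assuming the standard Limelight "tv" key (1 when a valid target is in view, 0 otherwise), looks something like this sketch:

```python
# Sketch: only run the control law when the Limelight reports a valid target.

def safe_rotation_cmd(tv, theta_act, kp=0.033, theta_des=0.0):
    if tv < 1.0:
        return 0.0                       # no target: don't chase stale data
    return kp * (theta_act - theta_des)  # normal P-only control law
```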
Editorial
Hopefully this point has been proven :D. Best of luck, and poke back here if you have questions!
Looking for an even deeper dive? I've got a small series of blog posts on the topic. There's an awesome free textbook specifically for high schoolers, the WPI docs have a pretty darn good section on it, NI's got a bit of documentation on using PID, and YouTube isn't a half-bad resource, though you'll often find those resources are more targeted at the college crowd.
It’s mathy, and the work to prove it functions properly is even mathier. Don’t let that scare you too much - lots of good libraries exist. But study as much as you can and ask lots of questions: It’s more important to understand the mindset than know how to crunch the numbers exactly. A good mechanic knows the tools in their toolbox, and can use them well, even if they don’t know how to fabricate that tool from scratch.
If you are interested in a training presentation on position control, this might help. Module_11 - Google Drive
The presentation wasn’t really meant to stand on its own, but it still might be useful.