Well, this year is my first year doing robotics, and I was actually quite surprised… They did pretty much half the work for you: they made you a library and did the edge detection for you. I was hoping we'd have to build it from scratch and they'd just provide a library for the very low-level stuff, but I noticed it's really high-level programming you need to do. I haven't really started, I've just looked through the libraries, but I think it takes away from the experience… You can argue that it's only 6 weeks and done by high school students, but still…
Well, in the past, very few teams even attempted any sort of vision tracking, let alone got it to work properly. I'm guessing that FIRST would rather have more teams with some sort of autonomous/vision tracking than only a handful of teams with the capability to develop the code themselves.
Also, I would like to mention that although the FIRST code works, there is definite room for improvement.
Edit: The hardest part each year (in my experience) isn’t actually programming the autonomous mode, but getting the camera to lock onto the target reliably. Since FIRST solved that issue for us this year (actually I haven’t tested the default code yet so I don’t know how well they “solved” it), teams can focus on the rest of the autonomous mode.
I'm contemplating whether I want to make my own edge detection class. The idea is quite simple, but I don't know if I'm up to the challenge.
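If I do try it, my understanding is that the core of an edge detector is just a pair of convolution kernels. Here's the kind of minimal Sobel sketch I have in mind (assuming the image is already unpacked into a grayscale 2-D array; nothing here is from the FIRST library):

    // Minimal Sobel edge detector sketch -- assumes a grayscale image
    // already unpacked into an int[height][width] array (values 0-255).
    public class SobelEdge {
        // Sobel kernels for horizontal and vertical gradients
        private static final int[][] GX = {{-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1}};
        private static final int[][] GY = {{-1, -2, -1}, {0, 0, 0}, {1, 2, 1}};

        public static int[][] detect(int[][] img, int threshold) {
            int h = img.length, w = img[0].length;
            int[][] edges = new int[h][w];
            for (int y = 1; y < h - 1; y++) {
                for (int x = 1; x < w - 1; x++) {
                    int gx = 0, gy = 0;
                    for (int ky = -1; ky <= 1; ky++) {
                        for (int kx = -1; kx <= 1; kx++) {
                            gx += GX[ky + 1][kx + 1] * img[y + ky][x + kx];
                            gy += GY[ky + 1][kx + 1] * img[y + ky][x + kx];
                        }
                    }
                    // Gradient magnitude; mark as an edge if it beats the threshold
                    int mag = (int) Math.sqrt(gx * gx + gy * gy);
                    edges[y][x] = (mag > threshold) ? 255 : 0;
                }
            }
            return edges;
        }
    }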
How well does the default code work? My team will try it out tomorrow. If it works reasonably well, I wouldn’t waste time making a custom one. But if you feel that there’s room for improvement (like our team felt last year), you can ditch the default stuff.
I was hoping to try it out tomorrow, but for some reason the programming mentor is making us practice on the EduBots first… I personally think that's a waste of time, so I just downloaded the library myself, without the full documentation, so I don't know how well it will work.
Your team is lucky to have someone like you, passionate and willing to write things from scratch. Many teams do not have the student body, talent pool, or time to work on advanced programming like you mentioned. You're certainly welcome to reinvent many of the pieces (for instance, the vision algorithm probably isn't the most optimized and accurate there is). However, there's something that's not provided: how to tie in all the sensors, crunch the math, and do a slick autonomous. Sure, they tell you where the target is, they tell you how fast you're moving, they tell you the G's the robot is pulling in 3 axes, but you really have to work and test to TIE all that information together, make sense of it, and get the robot to respond to it.
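To make that concrete, here's a minimal sketch of what "tying it together" might look like: steering at the camera target while using the gyro rate to damp the turn. Every name and gain here is a hypothetical placeholder, not anything from the kit libraries.

    // Hypothetical sketch: combine camera heading error with gyro rate.
    // Gains are made-up and would need tuning on a real robot.
    public class TargetSteering {
        private static final double KP = 0.03;   // degrees of error -> motor output
        private static final double KD = 0.005;  // gyro damping term

        // targetOffsetDeg: how far off-center the camera says the target is.
        // gyroRateDegPerSec: how fast the robot is already turning.
        public double steerCommand(double targetOffsetDeg, double gyroRateDegPerSec) {
            double turn = KP * targetOffsetDeg - KD * gyroRateDegPerSec;
            // Clamp so a huge error can't slam the drivetrain
            return Math.max(-0.5, Math.min(0.5, turn));
        }
    }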
For instance, our first year, we had to write our own quadrature encoder drivers (I have no clue WHY…), and that took our really awesome mentor and a genius student (at MIT now) a LONG time to make. Even then it was really shaky: there were significant time-synchronization problems, noise in the data, etc. I'd say that you really shouldn't need to deal with that right now. I found that just trying to use encoder data in the grand scheme of things was really hard last year with traction control.
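For the curious, the core decoding logic behind a quadrature driver is a small state table. A rough sketch of the idea (not our actual driver; real code runs this in an interrupt or the FPGA):

    // Sketch of quadrature decoding: track the 2-bit (A,B) state and
    // count +1/-1 on legal Gray-code transitions, 0 on noise.
    public class QuadratureDecoder {
        private int lastState = 0; // 2-bit state: (A << 1) | B
        private long count = 0;

        // Index = (lastState << 2) | newState. Legal steps count +1 or -1;
        // no-change and illegal (two-bits-flipped) jumps count 0.
        private static final int[] DELTA = {
             0, +1, -1,  0,
            -1,  0,  0, +1,
            +1,  0,  0, -1,
             0, -1, +1,  0
        };

        public void sample(boolean chanA, boolean chanB) {
            int state = ((chanA ? 1 : 0) << 1) | (chanB ? 1 : 0);
            count += DELTA[(lastState << 2) | state];
            lastState = state;
        }

        public long getCount() { return count; }
    }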
Just food for thought.
Keehun
Well, our team has done some preliminary tests, and it was locking onto squares and triangles too. It will take some tinkering to get it to lock onto only the target, and it will probably take some time to get it fully operational.
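If the default code hands back a list of candidate shapes, one way to reject the squares and triangles would be to score each candidate on how "ellipse-like" it is. This is just a hypothetical sketch; the Candidate type is a stand-in for whatever the detection step actually returns:

    // Hypothetical shape record -- stand-in for the detection output
    class Candidate {
        int width, height, area;
        double fillRatio; // blob pixels / bounding-box pixels
    }

    public class TargetFilter {
        // An ellipse fills about pi/4 (~0.785) of its bounding box; a square
        // fills ~1.0 and a triangle ~0.5, so the fill ratio separates them.
        public static Candidate pickTarget(java.util.List<Candidate> candidates) {
            Candidate best = null;
            for (Candidate c : candidates) {
                double aspect = (double) c.width / c.height;
                boolean roundish = aspect > 0.75 && aspect < 1.33
                        && c.fillRatio > 0.70 && c.fillRatio < 0.85;
                if (roundish && (best == null || c.area > best.area)) {
                    best = c; // prefer the biggest plausible target
                }
            }
            return best; // null means nothing passed the filter
        }
    }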
If you find the coding too easy, may I suggest optimizing the code that is provided, or developing your own from scratch to make it better? Remember, using the libraries is an option you have. I don’t see anything requiring you to use them.
The others have explained it pretty well:
The reason is because many teams lack the coding prowess to do a lot of that by themselves.
However, in my experience, truly implementing the code they give you well is the more challenging part. Sure, they give you the code that runs the encoders and camera, but doing something with that data (and doing it well) requires a lot of time, patience, and know-how.
For example, last year they gave everyone the two-color camera tracking code. Yet even with that, very few teams were able to effectively follow the trailers.
And once you get some of that working, ask yourself what else you can do. The next step could involve PID loops. Some teams have even included ultrasonic or IR sensors, GPS devices, and more. There is always more you can do in your code (with some electronic assistance) to make your job more challenging.
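For example, a bare-bones PID loop of the kind mentioned above might look like this (the gains are placeholders you would tune on the real robot, and the sensor could be anything: an encoder, a gyro, an ultrasonic rangefinder):

    // Minimal PID controller sketch. Call update() at a fixed rate.
    public class SimplePid {
        private final double kP, kI, kD;
        private double integral = 0, lastError = 0;

        public SimplePid(double kP, double kI, double kD) {
            this.kP = kP; this.kI = kI; this.kD = kD;
        }

        // setpoint: where you want to be; measured: where the sensor says
        // you are; dt: loop period in seconds.
        public double update(double setpoint, double measured, double dt) {
            double error = setpoint - measured;
            integral += error * dt;                      // accumulate steady-state error
            double derivative = (error - lastError) / dt; // react to rate of change
            lastError = error;
            return kP * error + kI * integral + kD * derivative;
        }
    }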
The default vision code is actually rather thin. It is built on top of NI IMAQ and then does some normalization and simple scaling.
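In spirit, that normalization step is like a contrast stretch: remapping the intensity range the frame actually uses onto the full 0-255 scale. A rough sketch of the idea only (not the actual IMAQ implementation):

    public class Normalize {
        // Remap the frame's actual intensity range onto the full 0-255 scale
        public static void stretchContrast(int[] pixels) {
            int min = 255, max = 0;
            for (int p : pixels) {          // find the range actually in use
                if (p < min) min = p;
                if (p > max) max = p;
            }
            if (max == min) return;         // flat image, nothing to stretch
            for (int i = 0; i < pixels.length; i++) {
                pixels[i] = (pixels[i] - min) * 255 / (max - min);
            }
        }
    }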
The PID drive should work on a given set of robots that are built like the ones in the video, but weight, wheel, or even center of gravity changes will require the code to be retuned. Also keep in mind that the default code will simply point, no kicking, no pushing, no scoring.
So, the default code will score at most zero points. Depending on where you place the robot on the field, it could even cause penalties.
Hopefully it will provide inspiration.
Greg McKaskle
The code FIRST gives you is really not as useful as some teams would like. Unfortunately, especially for newer teams, it cannot be used without some sort of customization. Last year, our team tried to use the color recognition software, but we weren’t able to implement it.
That being said, it is rather nice to have some sort of working template to refer to.
On a different note, I was wondering where the default code is available. Our team is going to be using Java, but we haven’t been able to find any default code.
I felt that way once - in 2005, my first year in FRC on a rookie team. I was pretty confident in my C programming skills, and when they talked about how much code was being prepackaged, I was a little disappointed.
Now, as someone who’s been involved with the programming since 2005, trust me, it’s harder than it looks. They’ve given us some “it should work out of the box” code nearly every year, and almost no one uses it. Almost no one does the great things the GDC hopes for. No matter what they package for us, programming the robot is still hard.
Does giving the teams large libraries take away from “the experience”? I don’t think so. Think about it: suppose you’re writing a simple GUI for some program. You don’t look up how to write to the screen memory buffer, or write font handling code from scratch. You use a library that’s already been written. That’s the awesome thing about Java, Python, PHP, MATLAB, and LabVIEW (to name just a few). All of these languages have extensive built-in libraries to do all kinds of things. It makes us programmers more productive, and lets us spend our time working on the fun problems, rather than slogging through the swamp of low-level code.
Don’t worry that you’ll run out of interesting coding work to do. If you like playing around with low-level stuff, try adding some new sensors, such as an optical mouse. Or devise an automated system to assist the drivers in possessing balls. Or create an autonomous mode program that will make everyone’s jaw drop. The options are truly limitless.
While the ellipse-finding code works reasonably well, it does need to be retuned a bit.
Keep in mind that you also have no pan/tilt tracking code provided… try writing a feed-forward servo tracking algorithm; it should give you a bit of a challenge ;).
Kevin Watson's code could easily be adapted for the servo tracking (http://kevin.org, see tracking.c from 2007). It basically just keeps adding the current error (position of target - center) to the servo values.
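In code, that incremental scheme is only a few lines. A sketch of the same idea (the names and the 0-255 servo range are assumptions, not Kevin's exact code):

    // Each loop, nudge the servo by a fraction of the pixel error so it
    // gradually walks onto the target instead of jumping.
    public class ServoTracker {
        private double panServo = 127;           // servo midpoint
        private static final double GAIN = 0.05; // pixels of error -> servo ticks

        public int update(int targetX, int imageCenterX) {
            double error = targetX - imageCenterX;
            panServo += GAIN * error;            // accumulate, don't overwrite
            panServo = Math.max(0, Math.min(255, panServo)); // stay in range
            return (int) panServo;
        }
    }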
I was a programmer for two FRC seasons.
I know what you mean; at one point, I was hoping every team would have to build their programs "from scratch"; that way, my team would stand out (because of my "superior" programming). The GDC included these libraries and significantly elevated the level of competition.
Of course, some maturity comes with time. The way I've looked at it since then is that the "prepackaged" code is there for you to "unpackage" and "repackage" with added components. Not only is this a great way to learn, but you always end up writing a better program than what was provided. As mentioned above, the provided libraries challenge the programmer to be more creative. In this sense, good programmers can still take their team to the next level and feel very accomplished.
Although LabVIEW has PID and Vision classes ready to go, I ended up creating my own for both of them.
-Pie Man
Where does FIRST give the target tracking code for LabVIEW? Are you just talking about the libraries or is there default code in the autonomous VI for vision?
I haven’t looked at the new LabVIEW yet since we just finished installing it on our computers. Thanks.
Our team gave up on the two-color tracking demo. As for the basic Java programs, you have to update the plugins.
A man of my own mind… they can still be optimized, though… I remember making some extensive modifications to them in 2007, including running 2 CMUcams at the same time.
Any chance of you posting those optimizations somewhere? I'm doing a side project involving tracking things with servos and would be interested in seeing what you changed.
To answer your question: in the new LabVIEW architecture (not sure about WR or Java), they implement the vision tracking program for you (it's in the vision code). However, there is room for improvement depending on your application.
The libraries are good because they let you skip the housekeeping. To learn effectively, it's better to get right to the mashed potatoes and dig into your own logic.