Team 254 Presents: CheesyVision
Jared Russell
08-04-2014, 23:45
Like many teams this season, Team 254 was surprised when we got to our first competition and found out that the Hot Goal vision targets were not triggering right at the start of autonomous mode (http://www.chiefdelphi.com/forums/showthread.php?p=1356815). There seem to have been some improvements over the weeks, but there is still anywhere from 0.5 to 1.5 seconds of delay.
We originally had planned on using a sensor on board the robot - an infrared photosensor from Banner - but our problem was that (a) you can't move the robot until the hot goal triggers or you'll miss the target and (b) it meant our drive team spent a lot of time lining up the sensors to be juuuust right (as Karthik and Paul often pointed out at Waterloo). Onboard cameras may be more tolerant of movement, but introduce new hardware and wiring onto the robot.
We were intrigued by the Kinect, but thought: Why use the Kinect when our Driver Station already has a built-in webcam?
Introducing CheesyVision, our new laptop-based webcam system for simple gesture control of our robot. 254 ran this software at SVR and drove to the correct goal every single time. In eliminations, we installed it on 971 and it worked perfectly as well. We wanted to share it with all of FRC prior to the Championship because, even though the field timing issue will probably never be fully fixed this season, nobody should have to suffer for it.
CheesyVision is a Python program that runs on your Driver Station and uses OpenCV to process a video stream from your webcam.
There are three boxes on top of the webcam image:
-A calibration box (top center)
-Two boxes for your hands (left and right)
Basically, if the left and right boxes are similar in color to the calibration box, we assume your hand is not there. Before each match, our operator puts his hands in the left and right boxes, and then drops the one that corresponds with the goal that turns hot. The result is sent over a TCP socket to the cRIO - example Java code for a TCP server and robot that uses this data in autonomous mode is provided, and doing the same thing in C++ or LabView should be easy (If you implement one of these, please share it with the community!).
There are tuning instructions in the source code, but we have found the default settings work pretty reliably under most lighting conditions as long as your shirt color and skin color are different enough (because the algorithm is self-calibrating). Of course, you could use virtually anything else besides skin and clothing if the colors are different.
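For the curious, the core of the approach boils down to something like this (a rough sketch, not the actual CheesyVision source; the box coordinates, threshold, and loop details here are made-up illustration values):

import cv2
import numpy as np

# Sketch of the CheesyVision idea: compare the average color inside the
# left/right boxes against a calibration box. Values below are placeholders.
cap = cv2.VideoCapture(0)
CAL_BOX = (300, 40, 340, 80)      # (x1, y1, x2, y2) of the calibration box
LEFT_BOX = (100, 200, 180, 280)
RIGHT_BOX = (460, 200, 540, 280)
THRESHOLD = 60.0                  # color distance that counts as "hand present"

def mean_color(img, box):
    x1, y1, x2, y2 = box
    return np.array(cv2.mean(img[y1:y2, x1:x2])[:3])  # average B, G, R in the box

while True:
    ok, frame = cap.read()
    if not ok:
        continue
    cal = mean_color(frame, CAL_BOX)
    left_covered = np.linalg.norm(mean_color(frame, LEFT_BOX) - cal) > THRESHOLD
    right_covered = np.linalg.norm(mean_color(frame, RIGHT_BOX) - cal) > THRESHOLD
    # The real script packs these two flags into one byte and sends it to the
    # robot over TCP roughly every 25 ms.
    cv2.imshow("sketch", frame)
    if cv2.waitKey(25) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()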
Here are some screenshots:
http://i.imgur.com/ktUE1rt.png
http://i.imgur.com/XARctmm.png
http://i.imgur.com/UWHum6k.png
http://i.imgur.com/9gRMQFv.png
To download and install the software, visit:
https://github.com/Team254/CheesyVision
Good luck!
DampRobot
08-04-2014, 23:49
I saw this in person at SVR, and it is very cool. Great job 254, and thanks for sharing!
Now if only someone would use this same technology to block their 3 ball auto...
Yipyapper
08-04-2014, 23:51
This is absolutely phenomenal, and since 781 had to remove their camera for weight, I needed a new method for hot goal detection. I have not been this happy about programming for a while--whether this works for us or not, I am incredibly grateful.
Too bad I can't give rep more than once.
Thad House
08-04-2014, 23:53
This really is cool. I like the method. The only problem is that Wildstang couldn't use it :D
In all seriousness, I think this is an excellent way of detecting hot goals. Very simple, and most laptops have a camera on them nowadays. I'll keep it in mind for championships this weekend.
instantcake
08-04-2014, 23:54
Thank you so much, we were just looking at how to implement our hot goal detection for champs, and this is an amazing solution. We also plan on extending it to tell the robot where to go while blocking during autonomous. Thank you so much for sharing this with the FIRST community!
JohnFogarty
08-04-2014, 23:54
We currently use the Kinect method, but I might be inclined to implement this instead. I didn't develop something like this because the Kinect Java classes already existed and were fairly easy to use. I do like how this required some original work, though.
Nice work.
PayneTrain
08-04-2014, 23:58
I can't wait to tell the beleaguered crew working on Kinect programming there may be another way!
It is a real shame 254 isn't using the Kinect after its rousing success with it in 2012. (https://www.youtube.com/watch?v=ZaOiaC0I8pY)
akoscielski3
08-04-2014, 23:59
This weekend at the Windsor-Essex Great Lakes Regional I heard of 1559 using a very similar program for their Hot Goal detection. Instead they used cards that had symbols on them, and I believe they had this all season long, though I cannot confirm. Because of this they won the Innovation in Control Award.
It's pretty cool seeing that another team came up with a very similar way to detect the Hot Goal.
Good luck at Champs Poofs!
alex.lew
09-04-2014, 00:05
2468 (Team Appreciate) used a system like this at Bayou last week. This never occurred to us - it's so simple and elegant. This will be pretty cool to show kids at demos.
RyanCahoon
09-04-2014, 00:34
This weekend at the Windsor-Essex Great Lakes Regional I heard of 1559 using a very similar program for their Hot Goal detection. Instead they used cards that had symbols on them, and I believe they had this all season long, though I cannot confirm. Because of this they won the Innovation in Control Award.
We (1708) used a similar method at both NYTV (we got it working about halfway through the competition) and Pittsburgh (where we won Innovation in Control as well). We used the Fiducial module (http://www.roborealm.com/help/Fiducial.php) built into RoboRealm (http://www.roborealm.com/FRC2014/).
I've attached our RoboRealm script file for anyone who's curious. To use, first double click on the Fiducial line in the script, then click the Train button, then click Start. You may need to change the path to the directory that the fiducials are stored in if you're not on 64-bit Windows or you installed in a non-default directory. You'll also have to modify the Network Tables configuration to match your team number.
If we can get a more comprehensive paper written on it, I'll post it on CD.
Nice work, Poofs and Devil-Tech (and others). Cool to see other teams using this method as well.
billbo911
09-04-2014, 01:01
I LOVE IT!!
This year 2073 used a USB webcam on our bot to track the balls. It was implemented to assist the driver with alignment to balls when they were obstructed from his view or just too far away to easily line up.
We won the Innovation in Control Award at both Regionals we attended because of it. If 254 can share their code, we can share the LabVIEW receiver we used to help any team that can take advantage of it.
Set the IP of the receiver to that of your DS, the Port number to that set on line 72 of the code 254 provided, and set number of bytes to read to whatever you are sending. In the case of 254's code, that should be 1.
A quick look at the block diagram will make it obvious what to do.
Please ask any questions here so I can publicly answer them.
Thad House
09-04-2014, 01:17
So something that might be helpful to add would be to make it SmartDashboard compatible. That might make it a lot more accessible to teams because it can easily be added as just a variable on the dashboard. You can get Python bindings for SmartDashboard here:
http://firstforge.wpi.edu/sf/frs/do/viewRelease/projects.robotpy/frs.pynetworktables.2014_4
I don't have a cRIO on me, but I attached a version that uses the exact same method of communicating as we were doing earlier in the season, so it should work. It just has two bool variables (right_in and left_in) and should just use the standard SmartDashboard VIs or functions and be compatible with all versions.
EDIT: Attaching the file wouldn't work for some reason, so here is a SkyDrive link:
https://onedrive.live.com/redir?resid=D648460250CFE566!3187&authkey=!AJN3X-AJMsh70Hc&ithint=file%2c.zip
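For anyone who wants to go the NetworkTables route in Python, the publishing side would look roughly like the snippet below. This is a sketch assuming the current pynetworktables API - the 2014 installer linked above exposes different class names - and the robot address is a placeholder:

from networktables import NetworkTables

# Sketch: publish the two hand flags over NetworkTables instead of a raw TCP socket.
NetworkTables.initialize(server='10.2.54.2')   # placeholder 10.TE.AM.2 robot address
sd = NetworkTables.getTable('SmartDashboard')

def publish(left_covered, right_covered):
    # Read these on the robot with the standard SmartDashboard boolean getter/VI.
    sd.putBoolean('left_in', left_covered)
    sd.putBoolean('right_in', right_covered)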
Wouldn't it be easier just to hold up a light or large board of a specific color to discern between the two?
Basically we are just concerned about answering a boolean question here. As in: "Is the left goal hot?" If no, don't hold up the board/light and assume the right goal is hot. If yes, hold up your indicator.
I LOVE IT!!
This year 2073 used a USB webcam on our bot to track the balls. It was implemented to assist the driver with alignment to balls when they were obstructed from his view or just too far away to easily line up.
We won the Innovation in Control Award at both Regionals we attended because of it.
Cool -- we actually did the same thing, and fed back to the driver station which balls were detected in the field of view, and also their distance + offset angle from our collector. Seemed to work pretty well but when we were out on the field we didn't use the autonomous control for it just because of the nature of defensive and high-speed gameplay. Unfortunately we didn't win a Controls award at either of our regionals. Would love to compare code though!
MrTechCenter
09-04-2014, 01:33
Wouldn't it be easier just to hold up a light or large board of a specific color to discern between the two?
Basically we are just concerned about answering a boolean question here. As in: "Is the left goal hot?" If no, don't hold up the board/light and assume the right goal is hot. If yes, hold up your indicator.
This is kind of like what we did for our hot goal tracking. We lined our robot up with the middle of the high goal so that it could only see the targets for the side that we were on. The camera looked at the targets for the first second and a half of autonomous. If that side was hot first, the robot quickly drove forward and shot. If the other side was hot first, the robot very slowly moved forward so that by the time it was in position to shoot, the goals had flipped.
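In rough pseudocode form, that decision looks something like this (the 1.5-second window and the speeds are illustrative values, not the team's real numbers):

import time

LOOK_TIME = 1.5     # seconds to watch the visible target at the start of auto

def run_auto(target_is_hot, drive, shoot):
    # target_is_hot() samples the camera; drive() and shoot() are robot actions.
    start = time.time()
    hot_first = False
    while time.time() - start < LOOK_TIME:
        hot_first = hot_first or target_is_hot()
    if hot_first:
        drive(fast=True)    # our side is hot: get there quickly and shoot
    else:
        drive(fast=False)   # creep forward so the goals flip before we arrive
    shoot()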
Wouldn't it be easier just to hold up a light or large board of a specific color to discern between the two?
Basically we are just concerned about answering a boolean question here. As in: "Is the left goal hot?" If no, don't hold up the board/light and assume the right goal is hot. If yes, hold up your indicator.
Well, the problem right now is that with the high variance in the timing of the hot goals, that kind of methodology leads to the robot assuming the right goal is hot when neither goal has lit yet.
The way 254 has done it, if both hands are in their respective boxes, the robot knows that neither hot goal has lit up, and therefore won't start its autonomous routine until it receives data from the laptop saying that there is a hot goal to shoot balls into.
Wouldn't it be easier just to hold up a light or large board of a specific color to discern between the two?
Basically we are just concerned about answering a boolean question here. As in: "Is the left goal hot?" If no, don't hold up the board/light and assume the right goal is hot. If yes, hold up your indicator.
The goal was to have zero external devices that could run out of batteries, get left in the pit/cart/etc, be dropped, or otherwise malfunction at the worst possible moment. Holding your hands in the box on a static background is a pretty darn repeatable action and satisfied that requirement.
waialua359
09-04-2014, 05:29
This is easily the simplest, most innovative control method this season.
Kudos to whoever came up with the idea, and I can see this becoming the standard in subsequent seasons.
Tottanka
09-04-2014, 05:30
Team 3211, The Y Team from Israel, did the same thing at the Israeli regional; it worked in 100% of our matches.
We, however, used facial recognition libraries, so when the camera recognizes a face, it knows there's a hot goal in front of it.
We later tried printing pictures of Dean and Woodie to use, but they turned out not 3-D enough for the face recognition...
We weren't sure if it was 'legal', so we asked the head ref, who approved the use of it. The only relevant Q&A states that a Kinect may be used; we didn't know if a webcam was OK too...
I'll talk to our programmers and try to post the code here later on; we used LabVIEW on the robot and Python with OpenCV for the image recognition.
Also, we were told by the FTA that he noticed us sending a lot of info through the field's bandwidth, and that it might cause problems.
We decided to have the drivers shut down the image recognition at the beginning of teleop, to avoid any possible problems or delays (which we didn't have, but just to be sure).
Thanks Poofs! It's an honor seeing that our idea is used by you guys too =]
Also, we were told by the FTA that he noticed us sending a lot of info through the field's bandwidth, and that it might cause problems.
How/why would this use a lot of bandwidth?
eddie12390
09-04-2014, 06:01
This is awesome, we currently have the Kinect set up but this wouldn't require all of the extra equipment.
Michael Hill
09-04-2014, 06:40
We actually just put on a banner sensor at the last competition, but made it slightly rotatable so we just moved the sensor. Our drive team was able to consistently find the target in about 10 seconds. Maybe it was your placement/mount of the sensor that made you guys take a while?
Tottanka
09-04-2014, 07:38
How/why would this use a lot of bandwidth?
No idea. I'm more of a mechanics guy, but the FTA came to us saying that he noticed it, and said that if it disrupts the field somehow he will shut us down. Never happened.
Billfred
09-04-2014, 07:59
How many times can the Poofs (https://www.youtube.com/watch?v=RpSgUrsghv4) blow our minds (https://www.youtube.com/watch?v=aFZy8iibMD0) in one season?
Thank you for sharing this with teams--I bet it's going to see a lot of play at Championship. Maybe even district championships too, for teams on the stick.
Yipyapper
09-04-2014, 08:06
No idea. I'm more of a mechanics guy, but the FTA came to us saying that he noticed it, and said that if it disrupts the field somehow he will shut us down. Never happened.
The Kinect might use a lot of resources, but if it's on the driver station and connected via USB, there should be no data sent to the robot save a value or two, which is negligible. If it's on the robot and the video is streamed over Wi-Fi for processing, then you most likely have a significant bandwidth problem.
Ben Martin
09-04-2014, 08:33
Big thanks to 254. You've given us Cheesy Drive code (which we're running an implementation of right now) and now this little gem.
Thanks for everything you guys do to build teams up.
Tottanka
09-04-2014, 08:34
The Kinect might use a lot of resources, but if it's on the driver station and connected via USB, there should be no data sent to the robot save a value or two, which is negligible. If it's on the robot and the video is streamed over Wi-Fi for processing, then you most likely have a significant bandwidth problem.
We originally planned on doing retroreflective target recognition with a Kinect + Raspberry Pi, but when we saw all the problems we switched to the laptop's webcam and driver interaction.
We even turned the RPi off, so there would be no problems there.
I too don't see how it should push that many packets through the field, but I do remember the FTA coming to talk to us about it. There were no problems, but I just wanted to give teams a heads-up that, if not implemented correctly, this code might be problematic.
That said, it's not that hard to implement.
On another note, maybe the teams who have tried this method can do some kind of 'help all teams' stand at the championship, where we could put together premade code for C++, Java and LabVIEW, and just go to teams and help them get those 5 more points. Sounds easy enough, most laptops already have webcams - so why not. It's kinda like what 254 did for 971 at SVR, isn't it?
Ideas?
Niezrecki
09-04-2014, 10:17
Wow! Thank you so much for sharing. This is a wonderful means of hot goal detection that will be great to use. Impressive as usual, Cheesy Poofs!
billbo911
09-04-2014, 10:36
How/why would this use a lot of bandwidth?
It uses FAR LESS BW than a camera on the robot sending video TO the DS.
It only sends 1 byte every 25 ms, and the flow is FROM the DS to the cRIO.
If the processing were done on the cRIO (with the video streamed across the field), then the FTA would have a point, but it is not. All processing is done on the DS and only one byte is sent. How the cRIO uses that byte is up to the team.
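For a rough sense of scale (a back-of-the-envelope estimate, assuming one payload byte per packet plus typical header overhead): 1 byte every 25 ms is 40 bytes/s of payload, roughly 320 bit/s. Even counting ~50-60 bytes of Ethernet/IP/TCP headers on each of those 40 packets per second, the total is on the order of a couple of kilobytes per second - a tiny fraction of the roughly 7 Mbit/s per-team bandwidth limit on the field.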
nlknauss
09-04-2014, 10:52
Wow, awesome work here Jared and 254! Thank you for sharing the work and looking to improve the FRC community.
Kevin Sheridan
09-04-2014, 11:07
We actually just put on a banner sensor at the last competition, but made it slightly rotatable so we just moved the sensor. Our drive team was able to consistently find the target in about 10 seconds. Maybe it was your placement/mount of the sensor that made you guys take a while?
It takes a while because we have to position 3 robots and 2 sensors :rolleyes:
Brandon Zalinsky
09-04-2014, 11:31
I absolutely love the simplicity and out-of-the-box thinking in this hot goal tracking system. I was thinking about it, though, and wondered to myself how it's legal. I checked the rules, and according to G16, it's legal:
During AUTO, TEAM members in the ALLIANCE STATION must remain behind the STARTING LINE and may not contact the OPERATOR CONSOLE.
But I then looked at the definition of AUTO and was a little more confused.
AUTO (aka Autonomous): the first ten (10) seconds of the MATCH in which ROBOTS operate without direct DRIVER control.
It's pretty clear that the LRIs have ruled it legal, as 254 has gotten inspected, kicked butt, and won many times with it. I figured that CheesyVision was pretty much direct control, so how is it legal?
Again, good job, 254 does it again.
I think the rules themselves do not disallow it, but Q431 (https://frc-qa.usfirst.org/Question/431/questionlink) and Q446 (https://frc-qa.usfirst.org/Question/446/questionlink) further clarify the use of non-contact communication in the driver station during autonomous mode.
Completely missed out on the chance to call it "Hot or Not". Just saying :D
JamesTerm
09-04-2014, 11:45
It uses FAR LESS BW than a camera on the robot sending video TO the DS.
It only sends 1 byte every 25 ms, and the flow is FROM the DS to the cRIO.
If the processing were done on the cRIO (with the video streamed across the field), then the FTA would have a point, but it is not. All processing is done on the DS and only one byte is sent. How the cRIO uses that byte is up to the team.
The idea of sending one byte every 25 ms cannot by itself be assumed to be low bandwidth unless it is sent with favorable options in the socket setup.
I was surprised to find how much bandwidth can accrue using UDP and the DO_NOT_WAIT option in a similar test of sending two doubles every 33 ms. In short, I took out the DO_NOT_WAIT and the bandwidth went down significantly.
Jared Russell
09-04-2014, 11:53
Wouldn't it be easier just to hold up a light or large board of a specific color to discern between the two?
Basically we are just concerned about answering a boolean question here. As in: "Is the left goal hot?" If no, don't hold up the board/light and assume the right goal is hot. If yes, hold up your indicator.
In our case, there are three states we are interested in:
-Neither goal hot
-Left goal hot
-Right goal hot
The "neither" state is useful because you can watch for the transition from neither to one of the other states to indicate that the goal has flipped. This requires 2 bits of information to discern, hence separate left and right boxes. Other use-cases may not need the third state and could only use one detection area.
Tottanka
09-04-2014, 11:57
I absolutely love the simplicity and out-of-the-box thinking in this hot goal tracking system. I was thinking about it, though, and wondered to myself how it's legal. I checked the rules, and according to G16, it's legal:
But I then looked at the definition of AUTO and was a little more confused.
It's pretty clear that the LRIs have ruled it legal, as 254 has gotten inspected, kicked butt, and won many times with it. I figured that CheesyVision was pretty much direct control, so how is it legal?
Again, good job, 254 does it again.
Your second quote doesn't refer to a direct rule, but to a more general statement - which is not a rule.
There are also a few Q&As making it legal.
JamesTerm
09-04-2014, 12:01
I LOVE IT!!
This year 2073 used a USB webcam on our bot to track the balls. It was implemented to assist the driver with alignment to balls when they were obstructed from his view or just too far away to easily line up.
Cool -- we actually did the same thing, and fed back to the driver station which balls were detected in the field of view, and also their distance + offset angle from our collector.
Did it work something like this (https://www.dropbox.com/s/6ic2gkw9r2b8mza/00015.MTS)?
JamesTerm
09-04-2014, 12:10
Like many teams this season, Team 254 was surprised when we got to our first competition and found out that the Hot Goal vision targets
Wow! This thread has so many interesting posts in it... now on Team 254 using Python... wow, I'll want to chat with you about the language choice at some point... and wow, what a clever out-of-the-box idea... kudos to you guys, and to quote one of our engineers... "That is a great idea and they are real champs for sharing".
And the final wow goes to all the rules breakdown of what we *can* do... just think of the possibilities... heck, why not voice commands (tell your alliance mates to be quiet hehe). :)
billbo911
09-04-2014, 12:15
Did it work something like this (https://www.dropbox.com/s/6ic2gkw9r2b8mza/00015.MTS)?
I would prefer not to hijack this thread, but here is a short description of what we did. If you would like to discuss this further, please PM me or maybe I can create a new thread.
Yes and no. We never fed video back to the driver. We just used the value of the ball's "x center" to steer the robot whenever the driver needed assistance. One button on the steering wheel overrode the wheel position and replaced it with "((image x center - ball x center) * k)", where "k" was a gain value used to bring the error to a useful level for steering the robot.
All image acquisition and processing were done on a PCDuino on-board the robot. None of the network traffic for this crossed the WiFi network, it all stayed local to the robot.
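For anyone curious, that steering term boils down to something like this (the names and the gain are made up for illustration):

# Rough sketch of the proportional alignment assist described above.
IMAGE_X_CENTER = 320    # half of an assumed 640-pixel-wide image
K = 0.005               # made-up gain; tune until the correction feels smooth

def steering_correction(ball_x_center):
    error = IMAGE_X_CENTER - ball_x_center
    return K * error    # substituted for the wheel input while the assist button is held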
Team 329 used a barcode scanner to decode a barcode that populated a field on the SmartDashboard, indicating that we would shoot immediately (the goal we were looking at was now hot) or delay for 5 seconds if no barcode was scanned.
No additional bandwidth, no camera, no additional processing - simple and effective.
JamesTerm
09-04-2014, 12:22
please PM me or maybe I can create a new thread.
I think this would be a great thread. It would be cool to know if any other teams tried it and are willing to share how they did it.
DjParaNoize-
09-04-2014, 14:29
#pewpew
brennonbrimhall
09-04-2014, 15:26
Thanks for sharing!
#veryvision #muchGP #wow
s1900ahon
09-04-2014, 15:33
2468 (Team Appreciate) used a system like this at Bayou last week. This never occurred to us - it's so simple and elegant.
Our version was developed by our students with contributions from Greg McKaskle. Written in LabVIEW using the vision libraries. Instead of recognizing hand position, it uses a sign our drivers carry with them. The sign is initially held at an angle (neutral position) and turned to a horizontal position to shoot.
Tom Bottiglieri
09-04-2014, 15:38
In elims at SVR, Brian from 971 and I coded up their robot to use this app (we just let them use our backup driver station laptop to make the install process easier).
They only needed 1 bit of data: whether or not the goal directly in front of them was hot. They used this bit of data to determine whether to wait 3 or 7 seconds from when auton started before shooting. We just used the driver's right hand to signal this bit; the left side was a don't-care.
PayneTrain
09-04-2014, 15:51
In elims at SVR, Brian from 971 and I coded up their robot to use this app (we just let them use our backup driver station laptop to make the install process easier).
They only needed 1 bit of data: whether or not the goal directly in front of them was hot. They used this bit of data to determine whether to wait 3 or 7 seconds from when auton started before shooting. We just used the driver's right hand to signal this bit; the left side was a don't-care.
If I recall correctly, it only ever missed once, and that was due to the spectacular new and exciting failure of the FMS switching between teleop and auto at random, correct? This is super neat stuff.
Tom Bottiglieri
09-04-2014, 15:57
If I recall correctly, it only ever missed once, and that was due to the spectacular new and exciting failure of the FMS switching between teleop and auto at random, correct? This is super neat stuff.
Yes. In QF1-1 you can clearly see both our robots double clutch in auto.
This is really cool. We were planning on using the kinect, but we haven't had spectacular results in testing when we try it with people walking around in the background.
After playing around with it, I found it really useful to be able to lock the calibration color value so that I could hold a green index card in front of the calibration square, save that calibration value, then use both hands to hold up two cards in the boxes so that I can drop one hand out of the way to signal.
To add the lock:
Above the while loop:
locked = 0
After the statement in the while loop beginning with cal, left, right:
if locked == 1:
    cal = lastCal
lastCal = cal
At the bottom where the keys are checked:
elif key == ord('l'):
    locked = 1
elif key == ord('u'):
    locked = 0
Pressing l locks the current calibration value, and u resets it to normal.
Jared Russell
09-04-2014, 19:03
This is really cool. We were planning on using the kinect, but we haven't had spectacular results in testing when we try it with people walking around in the background.
After playing around with it, I found it really useful to be able to lock the calibration color value so that I could hold a green index card in front of the calibration square, save that calibration value, then use both hands to hold up two cards in the boxes so that I can drop one hand out of the way to signal.
To add the lock:
Above the while loop:
locked = 0
After the statement in the while loop beginning with cal, left, right:
if locked == 1:
    cal = lastCal
lastCal = cal
At the bottom where the keys are checked:
elif key == ord('l'):
    locked = 1
elif key == ord('u'):
    locked = 0
Pressing l locks the current calibration value, and u resets it to normal.
This is in fact how our original prototype worked, but we switched to continuous on-line calibration because we found that bumping the laptop could throw things off.
billbo911
09-04-2014, 19:30
This is in fact how our original prototype worked, but we switched to continuous on-line calibration because we found that bumping the laptop could throw things off.
I imagine changes in lighting would create a problem as well. The LED strip right in front of the DS is green prior to a match, then off during Autonomous. That alone would change the calibration. So, a real-time cal is very important.
This is in fact how our original prototype worked, but we switched to continuous on-line calibration because we found that bumping the laptop could throw things off.
We found the same thing with our test tonight, but we saw a lot of improvement with a really bright green index card. We're now just comparing the amount of green in the two squares and ignoring the calibration one. The side with the most green in it becomes the hot side, and the other is the cold one. This seems to work the best for us.
DjScribbles
10-04-2014, 01:15
I haven't seen any talk of a C++ port, so I started a thread in the C++ sub forum here to avoid derailing this thread:
http://www.chiefdelphi.com/forums/showthread.php?p=1372026#post1372026
My prototype code is linked in the thread, it is completely untested, but any contributions are welcome.
Thanks Poofs, very awesome implementation; looking forward to trying this out.
Thad House
10-04-2014, 01:19
It wouldn't let me edit my original post, but I did some testing today and got a version of this that uses NetworkTables to work. It worked on my simulator setup and should work exactly the same on a real robot. It just uses two bools, one for each hand. I attached the file to this post, and my post on page 1 has the link to the pynetworktables installer for Windows.
I plan on bringing this to the regional championship in case anybody needs help with the hot goal. I really like this method, and if we hadn't already coded the Kinect we would most likely use it.
DjScribbles
10-04-2014, 07:09
I plan on bringing this to the regional championship in case anybody needs help with the hot goal. I really like this method, and if we hadn't already coded the Kinect we would most likely use it.
In case anybody needs a helping hand with pynetworktables, I believe this would be the dependency you need: https://github.com/robotpy/pynetworktables
Is this correct Thad?
Thad House
10-04-2014, 09:30
In case anybody needs a helping hand with pynetworktables, I believe this would be the dependency you need: https://github.com/robotpy/pynetworktables
Is this correct Thad?
That's if you want to build it from source. If you use this link it gives you a Windows installer, so you do not need to install any of the build stuff.
http://firstforge.wpi.edu/sf/frs/do/viewRelease/projects.robotpy/frs.pynetworktables.2014_4
DjScribbles
10-04-2014, 19:57
I just wanted to post back and say I got the C++ port up and running (with minimal changes).
https://github.com/FirstTeamExcel/Robot2014/blob/CheesyVision/CheesyVisionServer.cpp
Feel free to shamelessly steal the code, but I'd love to hear if it helps anyone out.
phurley67
11-04-2014, 09:23
Wanted to say thank you. I helped our team get it working with LabVIEW yesterday at the Michigan Championship. I made a couple of minor changes to the Python script: the Classmate laptop already flipped the image, so I removed the flip logic and fixed left/right; I also switched to UDP and slowed down the send frequency. UDP made the reconnect stuff unnecessary and simplified the LabVIEW interface as well.
While there I also helped 107 with a copy of the code and while I did not touch base to see if they got everything working, I know in testing they also had it working in auton (controlling wheels for easy testing).
The whole team got a real kick out of playing with the code. Thanks again for an elegant and cheesy solution.
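For anyone making the same change, the UDP swap on the Python side looks roughly like this (a sketch, not the exact code used above; the robot IP and port are placeholders):

import socket
import struct

# Sketch of replacing CheesyVision's TCP send with connectionless UDP.
ROBOT_ADDR = ('10.2.54.2', 1130)   # placeholder: use your own 10.TE.AM.2 address and port
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_state(state_byte):
    # Each datagram stands alone, so no connect/retry logic is needed; if the
    # robot isn't listening yet, the packet is simply dropped.
    sock.sendto(struct.pack('B', state_byte), ROBOT_ADDR)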
Jared Russell
11-04-2014, 10:23
UDP made the reconnect stuff unnecessary and simplified the LabVIEW interface as well.
Yeah, UDP is definitely a more straightforward way to do this, but we had already implemented a TCP server on our robot and decided to repurpose it.
Tom Bottiglieri
11-04-2014, 11:27
Yeah, UDP is definitely a more straightforward way to do this, but we had already implemented a TCP server on our robot and decided to repurpose it.
Also, the Squawk JVM that runs on the cRIO doesn't support UDP listen sockets.
:confused:
Tom Bottiglieri
11-04-2014, 15:37
I just wanted to post back and say I got the C++ port up and running (with minimal changes).
https://github.com/FirstTeamExcel/Robot2014/blob/CheesyVision/CheesyVisionServer.cpp
Feel free to shamelessly steal the code, but I'd love to hear if it helps anyone out.
Have you tested this on a robot? If so I can add it to the repo.
Also it doesn't look like you have a runner thread within the object. Are you running it externally? If so could you post that code as well?
Thank you very much for posting this. Within an hour of showing this to our programmer, we had it fully operational with our 1 ball.
DjScribbles
11-04-2014, 22:53
Have you tested this on a robot? If so I can add it to the repo.
Also it doesn't look like you have a runner thread within the object. Are you running it externally? If so could you post that code as well?
Yes, we've got the code up and running, and it's working great. I can try to put together an example project to wrap the vision server code on Monday or so.
The object implements a thread for reading from the IO stream by inheriting from jankyTask; the Run method is the wrapped threading function.
TravSatEE
12-04-2014, 18:55
I think it is very kind of your team to post this publicly. The use of a built-in camera on the driver station is a very good choice for getting a human in the loop and you've sparked some thinking for Chief Delphi that will last for seasons to come.
After looking through the posted code repository, I have to ask: what is Team 254's philosophy on student involvement? The two contributors on github appear to be your mentors and the level of programming skill is also not commonly found in high school students. Have I missed the student involvement in this?
I'm not making any sort of accusation that Team 254 has done something wrong or is not following rules. I am just surprised that for a high school competition the high-visibility work from your team seems to be mentor-only. I believe the announcer at Silicon Valley said that Team 254 won the regional for 15 of the last 16 years. This is impressive and clearly your team is doing something that ensures a solid victory record.
After looking through the posted code repository, I have to ask: what is Team 254's philosophy on student involvement? The two contributors on github appear to be your mentors and the level of programming skill is also not commonly found in high school students. Have I missed the student involvement in this?
I'm not making any sort of accusation that Team 254 has done something wrong or is not following rules. I am just surprised that for a high school competition the high-visibility work from your team seems to be mentor-only. I believe the announcer at Silicon Valley said that Team 254 won the regional for 15 of the last 16 years. This is impressive and clearly your team is doing something that ensures a solid victory record.
Having had the opportunity to interact with Team 254 over the years, and especially this season, I can assure you that both high and low visibility work on their team is far from mentor only. The students are involved and integrated throughout their entire process. The strength of their partnerships is not only evidenced by their unparalleled victory record, but also by their place in the FIRST Hall of Fame as a Championship Chairman's Award Winning team.
TravSatEE
12-04-2014, 21:49
I can assure you that both high and low visibility work on their team is far from mentor only. The students are involved and integrated throughout their entire process.
I asked the Team 254 mentors what the students' involvement was on this particular project, with interest in their broader philosophy on the matter. Indeed, in their other public repositories the students have contributed, and from seeing them in competition I know that the students are involved. However, I don't see any obvious indicators in CheesyVision that students were involved. The workmanship makes it apparent to a casual observer as to who did the work. I believe that I asked an earnest question that they can answer. I am just surprised that the student involvement was not overwhelmingly apparent on this one project given they are such a substantial team (as you also indicated).
Jared Russell
13-04-2014, 00:53
This was forked from our team's FRC 2014 repository just for public release. It is different from what we competed with last weekend (it removed some team-specific features, streamlined some quick hacks, and added a ton of comments). Mentors went over the code with a fine-toothed comb before making it public. This was deliberate.
While our students are intricately involved in our team's software (more on this below), we are talking about releasing code to the entire FIRST community DURING the competition season. A fairly high bar is required for teams to be able to understand, use, and trust the code in time for their next competition - we certainly don't want to be breaking other teams' robots. I personally made (and stand behind) the decision to go mentor heavy on this particular project for this reason. (To be clear, I fully believe that our students could have made just as polished a product, but I thought that an expedient release would be ultimately more important.)
It might be software, but this is just another COTS module that you can choose to use (or ignore). Like an AM Shifter or a VEXpro VersaPlanetary, I believe that putting a high quality component in the hands of a student is a vehicle for inspiration.
However, I don't see any obvious indicators in CheesyVision that students were involved. The workmanship makes it apparent to a casual observer as to who did the work.
This is a dangerous line of thinking for two reasons.
First, never judge a book by its cover. Every year I am amazed at what students are capable of. This year, there are some very gifted programmers on 254. They wrote a RESTful webserver on our cRIO (that ultimately provided the TCP server part of CheesyVision). One of them - and this still absolutely blows my mind to think about - designed and implemented a quintic spline trajectory planner for our autonomous driving routine. I explained the basic concept, then sat back as he did the math, derived the differential equations, and gave me working code. Just awesome.
Second reason: An anecdote. One of my earliest posts on Chief Delphi was in this thread (http://www.chiefdelphi.com/forums/archive/index.php/t-20273.html). It was 2003, and WildStang had just posted about StangPS, a really sophisticated navigation system that I was sure had to be engineer-built (just look at my posts!). I was a senior in high school at the time. I thought my gyro-based autonomous mode was pretty nifty, but was blown away by StangPS. I watched their video dozens of times, enthusiastically emailed it to my programming mentor at the time, and was just totally fascinated with it. I ended up reading about odometry and dead reckoning, using interrupts to read optical encoders, Kalman filters, and all sorts of other concepts that I didn't fully understand as a high schooler, but found really, really cool.
While at the time I was a little peeved that here I was, a high school student writing all of 341's code while these other teams had teams of engineers, in hindsight I cannot thank 111 enough for raising the bar and for sharing what they did. I was inspired and, in some permanent and positive way, my life was shaped by it. While a little Python script for processing a webcam image is by no means as impressive as a complete robot navigation system, my hope is that at least a few students will give it a look and see something they think is cool and want to learn more about later.
TravSatEE
13-04-2014, 05:11
First, never judge a book by its cover.
I did not judge the book by the cover. At the Madera regional, my team stayed at the same hotel as Team 254. On Friday night, one of your students interacted with my students and said that being on your team isn't as much fun because of the work done by the mentors. This was told to me the next day and I was surprised by a comment like that about such a highly regarded team. Seeing your CD post this weekend, I decided to investigate for myself.
I had expected that your project was forked and that was why I asked for clarification as to what the students did. Instead, your answer wasn't completely clear to me as to exactly what the students did for CheesyVision. I do understand that it was "mentor heavy." Though you couldn't tell the differences between student and mentor effort when you were in high school, I trust my judgment because I have done programming for 18 years and know the subtle differences in programming skills at all levels. I do think very highly of the work you released to all teams. I am sure students also do.
It might be software, but this is just another COTS module that you can choose to use (or ignore). Like an AM Shifter or a VEXpro VersaPlanetary, I believe that putting a high quality component in the hands of a student is a vehicle for inspiration.
Your analogy to a COTS part is not equivalent to this situation: several mentors appear to have worked exclusively on a project that was used to give a competitive advantage to the game performance given limitations of the Field Management System. Albeit it was not an overwhelming advantage and any team could have done the exact same thing. Again, I am not saying Team 254 has broken any rules. But I find it interesting that a NASA sponsored (funded?) team, and the team with the best winning record of FIRST, needs to have mentors do exactly what you have done for a high school competition. Of course you stand by your decision to do CheesyVision the way that you did -- it's easy to stand by a decision that has no consequences.
I am eminently fortunate to always have mentored teams that were student run and each team has students just as impressive as the ones you described. From what I have learned today, I think the difference between your team and my teams is that other mentors keep it students vs students.
I do not intend for any of my posts to put you on the defensive nor to diminish your students' hard work. I am trained to speak my mind and your reply has been informative. Thank you for answering.
Your analogy to a COTS part is not equivalent to this situation: several mentors appear to have worked exclusively on a project that was used to give a competitive advantage to the game performance given limitations of the Field Management System.
How can you possibly claim that a public release of this code gives 254 a competitive advantage? Anybody can use it now. Team 11 won the Mid-Atlantic Region Championship with CheesyVision.
Nick Lawrence
13-04-2014, 10:01
<snip>
I am just surprised that for a high school competition the high-visibility work from your team seems to be mentor-only.
<snip>
Who cares who truthfully does the work? Are the students inspired? Are they motivated to be just like their mentors?
If yes, mission accomplished. It doesn't matter who builds the robot. What matters is what the students get out of it. You don't have to turn a wrench or write software to be inspired to do so.
-Nick
Jay O'Donnell
13-04-2014, 10:05
Let's get this thread back on track everyone...
Cheesyvision is really innovative! Way to think outside the box!
Adam Freeman
13-04-2014, 10:15
Team 254, thank you!
We ran CheesyVision any time we were doing a 1 ball auto at MSC and it worked perfectly.
You guys are awesome.
scottandme
13-04-2014, 10:54
Thanks to 254 for helping to patch the (still broken) field/FMS. We're still running a 1-second delay at the start of auton to avoid the timing issues, which was still not enough in at least one of our qualification matches at MAR champs.
Thanks to 254 for giving us more awesome stuff to look through and use.
Our competition season was over before this release but I think we will be trying to implement this for any offseasons we go to.
Huge thanks to 254 for releasing this. We showed it to our programmers on Wednesday and had a working hot goal auton before lunch Thursday. You guys saved us a huge amount of time and finally let us get rid of the annoying green LED ring on our robot. Now if only our shirts weren't tie-dye...
Jay Meldrum
14-04-2014, 09:03
Team 254, thank you!
We ran CheesyVision any time we were doing a 1 ball auto at MSC and it worked perfectly.
You guys are awesome.
I second this! Thanks again 254!
I haven't seen any talk of a C++ port, so I started a thread in the C++ sub forum here to avoid derailing this thread:
http://www.chiefdelphi.com/forums/showthread.php?p=1372026#post1372026
My prototype code is linked in the thread, it is completely untested, but any contributions are welcome.
Thanks Poofs, very awesome implementation; looking forward to trying this out.
Also, thank you to DjScribbles for the base of our code in C++. Just had to change a few things and it worked great.
Thanks!
Mike Copioli
14-04-2014, 14:28
We added it this weekend to our practice bot. Looks like we will have two-ball hot goal detection at Champs.
Thanks so much guys.
I heard a lot of teams at MSC were using CheesyVision this weekend with great success. Kudos for releasing such a nice product in-season.
Class Act.
CheesyVision is much better than Teh CheeZViSHUN (which apparently just tinted all camera inputs blue).
DjScribbles
14-04-2014, 15:22
As a heads-up, I found that our netbook's webcam would cause an exception when the CheesyVision script started. To resolve this, I added a delay between initiating the camera connection and grabbing the first image.
I don't have the script handy to share, but as someone who's never used python before, I'm confident that someone else could improve on the implementation anyway.
I also saw some weird connection issues between the client and server on Saturday on the field (all other systems were normal, just a lack of CheesyVision). I left more details on the issue here (http://www.chiefdelphi.com/forums/showpost.php?p=1373668&postcount=3).
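For reference, the warm-up delay described above amounts to something like this (the two-second pause is an arbitrary value):

import time
import cv2

# Minimal illustration of the camera warm-up delay workaround described above.
cap = cv2.VideoCapture(0)
time.sleep(2)            # give the webcam a moment to initialize before the first grab
ok, frame = cap.read()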
connor.worley
14-04-2014, 16:19
I learned to write software for FRC bots by reading 254's 2010 code. It's great that releases are still coming out each year.
Andrew Schreiber
14-04-2014, 16:42
Your analogy to a COTS part is not equivalent to this situation: several mentors appear to have worked exclusively on a project that was used to give a competitive advantage to the game performance given limitations of the Field Management System.
Uh, what?
You were around in the days back before off-the-shelf shifters. You should recall that before AM started selling their shifters there were few teams that could reliably shift. This is EXACTLY the same scenario. Only instead of only rich teams having access to it (a set of shifters will run you what, $700?), anyone with an internet connection can use this.
Comments like this make me question whether I should open source anything lest I be accused of 'cheating'.
JamesTerm
14-04-2014, 17:43
Comments like this make me question whether I should open source anything lest I be accused of 'cheating'.
Please don't let this one person's comment make you second-guess or prevent you from posting code... look at all the good that has come from Jared releasing this code.
If we give in to comments like this... and good code stops being shared, everyone will lose... Keep the good code flowing! Exposure to continuously well-written code for students to view can inspire them. Now that I think about it, well-written code still inspires me! ;)
billbo911
14-04-2014, 18:32
... Exposure to continuously well-written code for students to view can inspire them. Now that I think about it, well-written code still inspires me! ;)
I could not have said it any better!
Dunngeon
14-04-2014, 23:59
I had expected that your project was forked and that was why I asked for clarification as to what the students did. Instead, your answer wasn't completely clear to me as to exactly what the students did for CheesyVision. I do understand that it was "mentor heavy." Though you couldn't tell the differences between student and mentor effort when you were in high school, I trust my judgment because I have done programming for 18 years and know the subtle differences in programming skills at all levels. I do think very highly of the work you released to all teams. I am sure students also do.
I don't understand why the Poofs have to prove to you that students built it. I'm pretty sure your team used gearboxes from VEX or AndyMark this year, which were designed by mentors/paid engineers. Should those also be exclusively designed and built by students? The idea of COTS is to allow easier entry into FRC and raise the level of play. I would argue that it has, because the level of play is exponentially higher than in the early 2000s, when teams were required to build (almost) everything. The same concept applies here: a team that built something amazing is sharing it with the greater community in an effort to increase the competition level. Beyond that, they could have waited until the end of the season to release this vision program, keeping a competitive edge over most teams. Instead they have released it, and had mentors comb through it so that it is easy to implement. I think this release speaks volumes about the character of the members of Team 254.
Your analogy to a COTS part is not equivalent to this situation: several mentors appear to have worked exclusively on a project that was used to give a competitive advantage to the game performance given limitations of the Field Management System. Albeit it was not an overwhelming advantage and any team could have done the exact same thing. Again, I am not saying Team 254 has broken any rules. But I find it interesting that a NASA sponsored (funded?) team, and the team with the best winning record of FIRST, needs to have mentors do exactly what you have done for a high school competition. Of course you stand by your decision to do CheesyVision the way that you did -- it's easy to stand by a decision that has no consequences. Again, you wouldn't have known this even existed if they hadn't released it.
I am eminently fortunate to always have mentored teams that were student run and each team has students just as impressive as the ones you described. From what I have learned today, I think the difference between your team and my teams is that other mentors keep it students vs students.
Either team type can have benefits and drawbacks; it's all in the implementation. Our team is fully student run, but sometimes I wish we had more mentors, because then other students and I could learn so much more.
I do not intend for any of my posts to put you on the defensive nor to diminish your students' hard work. I am trained to speak my mind and your reply has been informative. Thank you for answering.
I'm gonna speak my mind here: everything you wrote above diminishes the work students have put into Team 254's robot this year. I've no doubt that much of the robot was mentor driven, either directly or indirectly, but if mentors built the entire robot I greatly doubt that top students would stay around for long. The students of 254 make it what it is, just like the students of 955 make our team what it is. Mentors add capabilities to teams because of the knowledge they bring. One of our mentors brought our CNC to life and revolutionized our build process, something a student who is only around for 4 years would have trouble achieving. Team 254's mentors bring knowledge to the table as well, and I'm very glad they decided to share it. The debate over mentor domination really shouldn't pollute this generous gesture from the Poofs.
Also, thanks for CheesyVision! 955 used it with 1- and 2-ball hot goal autos at PNWCMP last weekend :)
Ryan
notmattlythgoe
15-04-2014, 09:02
2363 worked on integrating this into our system last night in preparation for the Championship. We were able to get it working with our one ball and will be working on integrating it into our 2 ball tonight.
Thank you for this innovative out of the box system. It is amazing the simple things teams come up with each and every year and how much you can learn by just looking at what other teams have done.
Coach Norm
15-04-2014, 10:34
2468 (Team Appreciate) used a system like this at Bayou last week. This never occurred to us - it's so simple and elegant. This will be pretty cool to show kids at demos.
Alex, thanks for the shout out on our system including pictures and the code. Kylar has posted an explanation of our system here: http://www.chiefdelphi.com/forums/showthread.php?threadid=128785
Kylar worked with Greg McKaskle from NI on this implementation. Kylar will be at Championships if you have any questions for him regarding this programming technique.
rwood359
16-04-2014, 15:53
Please ask any questions here so I can publicly answer them.
Thanks for posting your receive routine and thanks 254 for the original post. Is there something that we are missing?
We copied your loop into Periodic Tasks and changed the IP and port. We installed the three routines and changed the IP in CheesyVision. We can't get a connection between the ports. CheesyVision says no connection; LabVIEW says error 63 or 65 - connection refused.
Any ideas as to what we are doing wrong?
Thanks
TikiTech
16-04-2014, 16:14
A big MAHALO (Hawaiian thank you)!!!
Our team does not have any programming mentors and is entirely student coded. They have been working with the camera tracking the hot goal with some success.
Our students do not use Python but were very intrigued by this and implemented it into their C++ code in a very short time.
In fact our programming cadre has now become INSPIRED to learn more of the language.
THIS IS AWESOME.
Without your gracious sharing of your code, I doubt the students would have looked at another programming language, especially this late in the season.
Without a doubt this is what I really love about FIRST: causing inspiration across the world by sharing. Keep it up!
Good luck to all teams attending the championship!
We will be rocking CheesyVision at St Louis.. See many of you there.
Aloha!
Hi, I have been trying to use CheesyVision with LabVIEW, but I can't find the correct functions to use it. Is it possible? If so, how can I do it?
Thank you!
billbo911
18-04-2014, 19:18
Hi, I have been trying to use CheesyVision with LabVIEW, but I can't find the correct functions to use it. Is it possible? If so, how can I do it?
Thank you!
As much as I would like to say I can help, I can't.
The solution I posted here (http://www.chiefdelphi.com/forums/showpost.php?p=1371529&postcount=11) will not work! What I posted is a Socket Requester. What is needed is a Socket Receiver.
I have tried dozens of variations based on tutorials, and on-line NI help, but so far have not been able to find anything that will work with CheesyVision and LabView. I know it "should" be simple, but so far I have not found anything that will work.
That said, it could quite easily be my setup. I do not have access to a cRio, so I am using one laptop to run CheesyVision and another running the "receiver" vi in a standalone configuration.
If anyone has any insight in how to resolve this, PLEASE SPEAK UP!!
Thad House
18-04-2014, 19:31
As much as I would like to say I can help, I can't.
The solution I posted here (http://www.chiefdelphi.com/forums/showpost.php?p=1371529&postcount=11) will not work! What I posted is a Socket Requester. What is needed is a Socket Receiver.
I have tried dozens of variations based on tutorials, and on-line NI help, but so far have not been able to find anything that will work with CheesyVision and LabView. I know it "should" be simple, but so far I have not found anything that will work.
That said, it could quite easily be my setup. I do not have access to a cRio, so I am using one laptop to run CheesyVision and another running the "receiver" vi in a standalone configuration.
If anyone has any insight in how to resolve this, PLEASE SPEAK UP!!
I was able to get it to work in LabVIEW by converting it to use pynetworktables.
http://www.chiefdelphi.com/forums/showpost.php?p=1372029&postcount=54
Thats the link to my post with the download, and the pynetworktables can be found here
http://firstforge.wpi.edu/sf/frs/do/viewRelease/projects.robotpy/frs.pynetworktables.2014_4
The variables can just be read using the SmartDashboard ReadBoolean VIs.
We got it to work in LV by switching to a UDP socket instead of a TCP socket (on port 1130).
CheesyVision side, we removed the retry and connection code and use sendto instead of send to send a UDP socket. A quick google search (on my phone while at MSC) helped with this.
On the LV side, we used a UDP Open and UDP Listen with a timeout of 0 in a while loop. When UDP Listen returns an error (timed out), we have some logic to use the last good byte received as the CheesyVision byte, timestamp it, then calculate its age (dt of the timestamp), and report the byte and age to our code.
I don't have the exact code, I'll see if I can get it.
Total coding time was under 10mins in the pits. This was after an hour or so of fooling around with TCP.
billbo911
19-04-2014, 11:48
We got it to work in LV by switching to a UDP socket instead of a TCP socket (on port 1130).
CheesyVision side, we removed the retry and connection code and use sendto instead of send to send a UDP socket. A quick google search (on my phone while at MSC) helped with this.
On the LV side, we used a UDP Open and UDP Listen with a timeout of 0 in a while loop. When UDP Listen returns an error (timed out), we have some logic to use the last good byte received as the CheesyVision byte, timestamp it, then calculate its age (dt of the timestamp), and report the byte and age to our code.
I don't have the exact code, I'll see if I can get it.
Total coding time was under 10mins in the pits. This was after an hour or so of fooling around with TCP.
Excellent!
I will try to replicate this approach this morning.
Please post your code when you can! It will help tremendously if we can't get it dialed in.
billbo911
20-04-2014, 18:50
OK, here is a LabVIEW TCP receiver.
I can't believe how easy it was!
All my struggles came from a minor misunderstanding of how my editor (Notepad++) was interacting with the CheesyVision code, and from Windows 8 security settings that were preventing me from testing this receiver. The CheesyVision code is solid, and this VI now works just as reliably with it.
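If anyone else needs to test without a cRIO, a little Python receiver like this rough sketch can stand in for the robot end (it assumes one status byte per update, which is what the stock CheesyVision sends; set PORT to whatever you configured in cheesyvision.py):

# Stand-in for the robot end: accept a connection from CheesyVision and
# print each status byte as it arrives.
import socket

PORT = 1180   # match the port configured in cheesyvision.py

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("", PORT))
server.listen(1)
print("Waiting for CheesyVision to connect on port %d" % PORT)

conn, addr = server.accept()
print("Connected from %s" % str(addr))
while True:
    data = conn.recv(1)
    if not data:
        break
    print("status byte: %d" % ord(data))
conn.close()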
alexander.h
21-04-2014, 17:20
OK, here is a LabVIEW TCP receiver.
I can't believe how easy it was!
All my struggles came from a minor misunderstanding of how my editor (Notepad++) was interacting with the CheesyVision code, and from Windows 8 security settings that were preventing me from testing this receiver. The CheesyVision code is solid, and this VI now works just as reliably with it.
If I understand correctly, this is the exact same code with the exact same functionality ... just in LabVIEW?
alexander.h
21-04-2014, 17:29
I LOVE IT!!
This year 2073 used a USB webcam on our bot to track the balls. It was implemented to assist the driver with alignment to balls when they were obstructed from his view or just too far away to easily line up.
We won the Inspiration in Control Award at both Regionals we attended because of it. If 254 can share their code, we can share the LabVIEW receiver we used to help any team that can take advantage of it.
Set the IP of the receiver to that of your DS, the Port number to that set on line 72 of the code 254 provided, and set number of bytes to read to whatever you are sending. In the case of 254's code, that should be 1.
A quick look at the block diagram will make it obvious what to do.
Please ask any questions here so I can publicly answer them.
The way I understand this: the LabVIEW code tracks the ball according to shape and colour? If so, does that mean you're able to define a sequence of operations to follow once it tracks down the ball? For example: track the ball, calculate the distance to the ball, move forward the correct distance, and collect the ball when the robot reaches the right distance from it?
billbo911
21-04-2014, 19:10
If I understand correctly, this is the exact same code with the exact same functionality ... just in LabVIEW?
This code just receives the output from CheesyVision. You still need CheesyVision running on the DS.
alexander.h
21-04-2014, 19:14
This code just receives the output from CheesyVision. You still need CheesyVision running on the DS.
And how would one be able to do this?
billbo911
21-04-2014, 19:22
And how would one be able to do this?
Read the very first post in this thread. Everything you need to know is there.
billbo911
21-04-2014, 19:34
The way I understand this: the LabVIEW code tracks the ball according to shape and colour? If so, does that mean you're able to define a sequence of operations to follow once it tracks down the ball? For example: track the ball, calculate the distance to the ball, move forward the correct distance, and collect the ball when the robot reaches the right distance from it?
LabVIEW is not involved except for driving the robot.
We used a USB webcam attached to a pcDuino. It would track the balls based on color and shape. We also had a switch on the DS that allowed us to select whether to track blue or red.
We only used the "x axis" center of the ball to assist the driver with aligning to the ball. We never used the distance to the ball.
We fed the "x" value to LabVIEW to help the driver align.
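For anyone curious, the core of that kind of color tracking is only a handful of OpenCV calls. This is just a bare-bones sketch, not our actual DoubleVision code, and the HSV range is a made-up example you would have to tune:

# Bare-bones ball tracker: threshold on color, take the biggest blob, and
# report the x coordinate of its center for driver alignment.
import cv2
import numpy as np

LOWER = np.array([0, 120, 70])     # example HSV range for a red ball -- tune it
UPPER = np.array([10, 255, 255])

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    # [-2] keeps this working across the OpenCV 2.x/3.x return signatures
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    if contours:
        biggest = max(contours, key=cv2.contourArea)
        x, y, w, h = cv2.boundingRect(biggest)
        center_x = x + w // 2      # the value we hand off for alignment
        print("ball center x: %d" % center_x)
    cv2.imshow("mask", mask)
    if cv2.waitKey(1) == 27:       # Esc to quit
        break
cap.release()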
alexander.h
21-04-2014, 19:47
LabVIEW is not involved except for driving the robot.
We used a USB webcam attached to a pcDuino. It would track the balls based on color and shape. We also had a switch on the DS that allowed us to select whether to track blue or red.
We only used the "x axis" center of the ball to assist the driver with aligning to the ball. We never used the distance to the ball.
We fed the "x" value to LabVIEW to help the driver align.
OK ... so for CheesyVision, there is almost nothing going on in LabVIEW. As for the ball tracker, couldn't I just get the alliance colour and send that as the colour to recognize instead of using a switch on the Driver Station?
JamesTerm
22-04-2014, 12:51
As for the ball tracker, couldn't I just get the alliance colour and send that as the colour to recognize instead of using a switch on the Driver Station?
There has been some talk about that... on this (http://www.chiefdelphi.com/forums/showthread.php?p=1376109#post1376109) thread.
BTW... I like the .h in your name... I thought you were a C++ programmer. :)
alexander.h
22-04-2014, 18:13
There has been some talk about that... on this (http://www.chiefdelphi.com/forums/showthread.php?p=1376109#post1376109) thread.
Thanks for the link!
BTW... I like the .h in your name... I thought you were a C++ programmer. :)
Ha ha ha ... No, it just symbolizes the initial of my last name: Hassler.
Skragnoth
23-04-2014, 21:34
Has anyone had any luck using CheesyVision on the playing fields at champs? It works perfectly for us in the pit, but it is not able to connect to the robot on the Newton playing field or the Newton practice field. We submitted a question to the FTA regarding whether they have port 1180 blocked, but they haven't gotten back to us yet.
Has anyone had any luck using CheesyVision on the playing fields at champs? It works perfectly for us in the pit, but it is not able to connect to the robot on the Newton playing field or the Newton practice field. We submitted a question to the FTA regarding whether they have port 1180 blocked, but they haven't gotten back to us yet.
We ran it fine on our practice match on Galileo field.
DjScribbles
24-04-2014, 12:56
Has anyone had any luck using CheesyVision on the playing fields at champs? It works perfectly for us in the pit, but it is not able to connect to the robot on the Newton playing field or the Newton practice field. We submitted a question to the FTA regarding whether they have port 1180 blocked, but they haven't gotten back to us yet.
If you (or anyone else) are using my original C++ port, there were a few issues in the original code that could be causing your problems. See this post for the details: http://www.chiefdelphi.com/forums/showpost.php?p=1375871&postcount=5
Skragnoth
26-04-2014, 06:14
If you (or anyone else) are using my original C++ port, there were a few issues in the original code that could be causing your problems. See this post for the details: http://www.chiefdelphi.com/forums/showpost.php?p=1375871&postcount=5
It turns out that port 1180 was being blocked on the Newton field. They unblocked it and CheesyVision worked all day Thursday and the first half of Friday. Then it was no longer able to connect for two of our matches after lunch on Friday. We asked the FTA about it and they wouldn't admit it was their fault, but it magically worked in our next match. :)
JamesTerm
26-04-2014, 21:15
Kudos to you guys, and to quote one of our engineers... "That is a great idea and they are real champs for sharing".
Haha Robert... they are real Champs. :)
billbo911
19-08-2014, 15:18
OK, here is a LabVIEW TCP receiver.
I can't believe how easy it was!
All my struggles came from a minor misunderstanding of how my editor (Notepad++) was interacting with the CheesyVision code, and from Windows 8 security settings that were preventing me from testing this receiver. The CheesyVision code is solid, and this VI now works just as reliably with it.
After several days of testing, we found that even this code would not work reliably. What we found was that the cRIO would max out its CPU utilization and crash immediately after it started receiving data from CheesyVision. No matter how we modified the VI, the cRIO just wouldn't handle the stream of data being sent to it.
Previously, all of our testing and development had been done between two quad-core PCs, so horsepower was not an issue. Once we deployed it to the cRIO, we found that it simply could not handle the amount of data being streamed to it.
So, back to the drawing board.
The approach we took then was to rewrite CheesyVision so that it would still continuously sample the image data, but not send the current status to the cRIO until the cRIO requested it. BTW, this is exactly the same approach we used with our 3X award-winning DoubleVision (http://www.chiefdelphi.com/forums/showthread.php?t=128682&highlight=2073).
This has been tested (THANK YOU 3250!!) and proved to be stable when deployed to the cRIO running under LabVIEW.
https://github.com/EagleForce/Cheesy-Eagle-Vision
Please follow the instructions and link on this page to install CheesyVision. Then, once you are able to run CheesyVision on your DS, download the modified version, CheesyEagleVision, from our GitHub to your DS and simply run it instead of CheesyVision. No modifications to CheesyEagleVision are needed.
Also download CheesyEagle Receiver.vi. Place the VI into your code wherever you would like it to execute. The only modifications needed are to set the IP of your DS in it and to decide how you would like to use the data it outputs.
This has not been tested on the new roboRIO yet, so if one of the beta teams can do that, it would be greatly appreciated!
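To make the request/response idea concrete, the DS-side loop boils down to something like the sketch below. This is only an illustration of the pattern, not the actual CheesyEagleVision code (see the GitHub link above for that), and the port and status byte are placeholders:

# Sketch of the poll-on-demand pattern: keep sampling the webcam
# continuously, but only send a status byte when the robot asks for one.
import socket
import threading
import time

PORT = 1180              # placeholder -- whatever port your receiver VI uses
latest_status = b"\x00"

def sample_loop():
    # Placeholder for CheesyVision's image-processing loop; it just keeps
    # latest_status up to date as fast as it likes.
    global latest_status
    while True:
        latest_status = b"\x03"   # replace with the real box-detection result
        time.sleep(0.02)

worker = threading.Thread(target=sample_loop)
worker.daemon = True
worker.start()

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", PORT))
while True:
    _, robot_addr = sock.recvfrom(1)        # block until the cRIO sends a request
    sock.sendto(latest_status, robot_addr)  # answer with only the newest byte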
thinker&planner
19-08-2014, 21:16
Don't get me wrong, I think this is an amazing "thinking out of the robot" solution, but there is a part of me that just doesn't think this is really in the spirit of the whole "Auto" period. You don't even need vision tracking, one of the things that can distinguish an amazing robot from a better-than-average robot. (Yes, vision tracking usually makes robots amazing in teleop too, but it (was) almost the only way to distinguish "hot" targets in auto)
As far as next year, I would imagine that the rules will either have a "hybrid" period or not allow this type of thing.
Once again, great job! I can't say enough how I admire solutions (and loopholes) like this.
MrTechCenter
19-08-2014, 22:37
Don't get me wrong, I think this is an amazing "thinking out of the robot" solution, but there is a part of me that just doesn't think this is really in the spirit of the whole "Auto" period. You don't even need vision tracking, one of the things that can distinguish an amazing robot from a better-than-average robot. (Yes, vision tracking usually makes robots amazing in teleop too, but it (was) almost the only way to distinguish "hot" targets in auto)
As far as next year, I would imagine that the rules will either have a "hybrid" period or not allow this type of thing.
Once again, great job! I can't say enough how I admire solutions (and loopholes) like this.
They did have a hybrid mode back in 2012 and hardly anybody used it. Also, it's worth noting that one of the main reasons CheesyVision was created was that the hot goal lights and retroreflective targets were not properly synced with the field timer, which broke hot-goal detection for teams using sensors or cameras (this was the case for both us and 254 at CVR). FIRST said that a software fix would be implemented after week two to correct this issue, although I don't believe it was ever truly fixed.
AdamHeard
20-08-2014, 01:27
It's not a loophole.... FIRST clarified such things were legal in QnA. They were legal in 2013 as well.
Don't get me wrong, I think this is an amazing "thinking out of the robot" solution, but there is a part of me that just doesn't think this is really in the spirit of the whole "Auto" period. You don't even need vision tracking, one of the things that can distinguish an amazing robot from a better-than-average robot. (Yes, vision tracking usually makes robots amazing in teleop too, but it (was) almost the only way to distinguish "hot" targets in auto)
As far as next year, I would imagine that the rules will either have a "hybrid" period or not allow this type of thing.
Once again, great job! I can't say enough how I admire solutions (and loopholes) like this.
JamesTerm
20-08-2014, 09:48
Don't get me wrong, I think this is an amazing "thinking out of the robot" solution, but there is a part of me that just doesn't think this is really in the spirit of the whole "Auto" period. You don't even need vision tracking, one of the things that can distinguish an amazing robot from a better-than-average robot. (Yes, vision tracking usually makes robots amazing in teleop too, but it (was) almost the only way to distinguish "hot" targets in auto)
As far as next year, I would imagine that the rules will either have a "hybrid" period or not allow this type of thing.
Once again, great job! I can't say enough how I admire solutions (and loopholes) like this.
I have to agree with thinker and planner, as I felt the spirit of the game was to encourage and promote vision processing (the way it was intended with the reflectors) to teach students this awesome technology. I should clarify that "spirit of the game" should not be associated with terms like loophole, or with whether or not it is legal. As for the issue with the delay... I believe there are at least a few teams that would account for this, as it is trivial to solve in code... even if it meant less time to finish. After all... the art of writing a control system is error management, and if you write one... you know what I'm talking about.
I really hope future games will indeed target the need for vision processing the way it has been laid out (i.e. have a good ROI in points to use it)... Vision processing is an arduous task and I'd love to see more teams master it!
billbo911
20-08-2014, 10:31
Don't get me wrong, I think this is an amazing "thinking out of the robot" solution, but there is a part of me that just doesn't think this is really in the spirit of the whole "Auto" period. You don't even need vision tracking, one of the things that can distinguish an amazing robot from a better-than-average robot. (Yes, vision tracking usually makes robots amazing in teleop too, but it (was) almost the only way to distinguish "hot" targets in auto)
As far as next year, I would imagine that the rules will either have a "hybrid" period or not allow this type of thing.
Once again, great job! I can't say enough how I admire solutions (and loopholes) like this.
I have to agree with thinker and planner, as I felt the spirit of the game was to encourage and promote vision processing (the way it was intended with the reflectors) to teach students this awesome technology. I should clarify that "spirit of the game" should not be associated with terms like loophole, or with whether or not it is legal. As for the issue with the delay... I believe there are at least a few teams that would account for this, as it is trivial to solve in code... even if it meant less time to finish. After all... the art of writing a control system is error management, and if you write one... you know what I'm talking about.
Thank you both for sharing your opinions on this. While I may not agree 100%, I truly see your point and respect your position.
All I did with this modification was to close the loop on its use under LabVIEW.
I really hope future games will indeed target the need for vision processing the way it has been laid out (i.e. have a good ROI in points to use it)... Vision processing is an arduous task and I'd love to see more teams master it!
James,
THIS!!
I could not have said it any better. If games could be made that have a large ROI in points for the use of vision, I would be thrilled!
As for the issue with the delay... I believe there are at least a few teams that would account for this, as it is trivial to solve in code... even if it meant less time to finish. After all... the art of writing a control system is error management, and if you write one... you know what I'm talking about.
We did exactly what you said. The delay was unacceptable and was something FIRST was either unwilling or unable to fix. It was impossible for a 3-ball auto to work consistently under those conditions. That is the only reason we made CheesyVision.
JamesTerm
20-08-2014, 13:05
We did exactly what you said. The delay was unacceptable and was something FIRST was either unwilling or unable to fix. It was impossible for a 3-ball auto to work consistently under those conditions. That is the only reason we made CheesyVision.
Ah, that is good to know... I'm glad you guys went through this path first! We had a 3-ball auton in code and could not fit it within 10 seconds... so kudos for being able to do that. (Our winch took too long to load.)
FWIW: they could have easily appended more time to account for the delay, as that would have been a software solution... but yeah... probably easier said than done... probably some consequences if they had gone down that path. I do wonder, though, how they came up with 10 seconds in the first place.
JamesTerm
21-08-2014, 00:31
We did exactly what you said. The delay was unacceptable and was something FIRST was either unwilling or unable to fix. It was impossible for a 3-ball auto to work consistently under those conditions. That is the only reason we made CheesyVision.
OK, I got to thinking about this and something is not adding up to me; perhaps you can explain. How can CheesyVision make up time for the delay? Grant Imahara once said that it takes an average person 200 ms to react to a sudden change. So to me, anything a human could do *in this context*, vision processing could do as well, and even better, since it has a faster reaction time.
Kevin Sheridan
21-08-2014, 01:18
OK, I got to thinking about this and something is not adding up to me; perhaps you can explain. How can CheesyVision make up time for the delay? Grant Imahara once said that it takes an average person 200 ms to react to a sudden change. So to me, anything a human could do *in this context*, vision processing could do as well, and even better, since it has a faster reaction time.
So for our three-ball auto, we determine which goal is hot, drive to the opposite goal, and wait for the goals to switch before shooting. With our old detection system, we used a 1-second delay to see which goal was hot before we started driving to the opposite goal. Since the goals were extremely inconsistent in switching (sometimes the lights/targets were delayed by up to 1.5 seconds), we implemented CheesyVision. Using CheesyVision, we can determine the hot goal as we drive forward because we are no longer reliant on sensors looking at the retroreflective targets. I believe we determined which goal was hot around the 2.5-3 second mark, giving our operator a large window to tell the robot which goal is hot.
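To spell out the timing, the decision logic looks roughly like this. Python is used purely for illustration and the drive/shoot calls are stubs; it is not our actual robot code:

# Rough illustration of the timing, with stubbed-out robot actions so the
# sequence can be read (and run) on its own.
import time

def drive_forward():
    print("driving toward the goals...")

def shoot_three_balls():
    print("shooting!")

def operator_says_left_is_hot():
    # Placeholder for the CheesyVision status: the operator drops the hand
    # on the side that lights up at the start of auto.
    return True

start = time.time()
drive_forward()                     # start moving right away

# Latch the operator's call around the 2.5-3 second mark, while still driving.
left_hot_at_start = operator_says_left_is_hot()
while time.time() - start < 3.0:
    left_hot_at_start = operator_says_left_is_hot()
    time.sleep(0.02)

# The goals swap partway through auto, so shoot at the side that was NOT hot
# at the start, once the swap has happened (around the 5 second mark in 2014).
time.sleep(max(0.0, 5.0 - (time.time() - start)))
target = "right" if left_hot_at_start else "left"
print("aiming at the %s goal" % target)
shoot_three_balls()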
JamesTerm
21-08-2014, 08:01
Using CheesyVision, we can determine the hot goal as we drive forward because we are no longer reliant on sensors looking at the retroreflective targets. I believe we determined which goal was hot around the 2.5-3 second mark, giving our operator a large window to tell the robot which goal is hot.
Ah, I get it... as you drive forward your FOV is too narrow to view both reflectors, and yeah, I wouldn't want to use a fish-eye lens and mess with the geometry of recognizing rectangles either. Thanks for the explanation. :)
I do want to throw out, for the good of the group, the idea of streaming two video feeds at around 3 Mbps using H.264... of course, the 3 Mbps is only necessary if one is doing vision processing on the PC driver station... I'm looking forward to seeing what kind of processing power the roboRIO will offer. We were planning on streaming two video feeds, but found the human ability to pick up balls did not need a rear-view camera.
Abhishek R
11-02-2015, 11:08
Here I thought this thread was revived to discuss possible merits of CheesyVision for this year's game. There seem to be a couple...
I thought there was a rule specifically outlawing the use of webcams and Kinects to communicate with the robot during autonomous?
Andrew Schreiber
11-02-2015, 11:15
I thought there was a rule specifically outlawing the use of webcams and Kinects to communicate with the robot during autonomous?
There is.
Caleb Sykes
11-02-2015, 11:27
Here I thought this thread was revived to discuss possible merits of CheesyVision for this year's game. There seem to be a couple...
It might be feasible to put a laptop with a webcam right on the robot, and then have the HP use CheesyVision to line up the robot with the chutes.
I thought there was a rule specifically outlawing the use of webcams and Kinects to communicate with the robot during autonomous?
It's 100% illegal, but the thought exercise opens up some creativity in robot control / silly antics.
I've moved all the "mentor vs student" posts that weren't discussing CheesyVision into their own thread. Please continue the discussion over there:
http://www.chiefdelphi.com/forums/showthread.php?t=134357