#16 · 12-10-2014, 20:20
yash101
Curiosity | I have too much of it!
AKA: null
no team
 
Join Date: Oct 2012
Rookie Year: 2012
Location: devnull
Posts: 1,191
yash101 is an unknown quantity at this point
Re: Vision: what's state of the art in FRC?

Quote:
Originally Posted by Abhishek R View Post
Answering question 4, the Kinect is a viable means of vision sensing. I'd recommend checking out this paper from Team 987, who used the Kinect very effectively as a camera in 2012's FRC challenge, Rebound Rumble. I believe one of the major advantages of the Kinect is that its depth perception is much better than a standard camera's, though I'm not really a vision expert.
That is quite an old document. OpenKinect has changed significantly since then and is much harder to use now! The documentation kind of sucks, as the examples are all in C, which is very difficult (for me)!

The greatest problem with the Kinect was getting it to work. I have never succeeded in opening a Kinect stream from OpenCV!

The depth map of the Kinect is surprisingly accurate and powerful!

As of last year, thresholding was the easy part! Just create a simple OpenCV program to run on your PC that connects to the camera and grabs video. Create sliders for each of the HSV values, and keep adjusting one bar until the target just barely starts fading. Do this for all three sliders. You want to end with the target as white as possible! It is OK if there are tiny holes, or if 1-4 pixels in the target are not highlighted. Next, perform a GaussianBlur. Play around with the kernel size until the target is crisp and clear!
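A minimal sketch of that slider workflow, assuming a USB webcam at index 0 and OpenCV's HighGUI trackbars (the window name, starting values, and the 5x5 blur kernel are placeholders to tune):

Code:
#include <opencv2/opencv.hpp>

int hMin = 0, sMin = 0, vMin = 0;
int hMax = 180, sMax = 255, vMax = 255;

int main()
{
    cv::VideoCapture cap(0);                     // USB camera index is an assumption
    if (!cap.isOpened()) return 1;

    cv::namedWindow("threshold");
    cv::createTrackbar("H min", "threshold", &hMin, 180);
    cv::createTrackbar("H max", "threshold", &hMax, 180);
    cv::createTrackbar("S min", "threshold", &sMin, 255);
    cv::createTrackbar("S max", "threshold", &sMax, 255);
    cv::createTrackbar("V min", "threshold", &vMin, 255);
    cv::createTrackbar("V max", "threshold", &vMax, 255);

    cv::Mat frame, hsv, mask;
    while (true) {
        cap >> frame;
        if (frame.empty()) break;
        cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);
        cv::inRange(hsv, cv::Scalar(hMin, sMin, vMin),
                         cv::Scalar(hMax, sMax, vMax), mask);
        // Blur after thresholding to close the tiny holes mentioned above;
        // the (odd) kernel size is something you tune by eye.
        cv::GaussianBlur(mask, mask, cv::Size(5, 5), 0);
        cv::imshow("threshold", mask);
        if (cv::waitKey(30) == 27) break;        // Esc to quit
    }
    return 0;
}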

Last year, I used std::fstream to write configuration files. It is a good approach, unless you find a library with a much better configuration parser! Just write the HSV values to the file and push it onto your processor. Voilà! You have your perfect HSV inRange values!

Hunter mentioned to me last year that at competitions you should, as soon as possible, ask the field staff whether there will be a time when you can calibrate your vision system. At the Phoenix regional, this was during the first lunch break. USE THAT PERIOD! Take the bot onto the field and take a gazillion pictures USING THE VISION PROCESSOR'S CAMERA, so that later, when you aren't under as much stress, you can go through a bunch of them from random locations and find the best values!

As I mentioned before, and will again in caps lock, underline and bold:
SET UP A CONFIGURATION FILE!

This way, you can change your program's behavior without actually changing code!
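A minimal sketch of that kind of configuration file, assuming a plain-text format with one "key value" pair per line (the file layout and key names are made up):

Code:
#include <fstream>
#include <map>
#include <string>

// Read pairs like "hMin 40" until end of file.
std::map<std::string, double> loadConfig(const std::string &path)
{
    std::map<std::string, double> cfg;
    std::ifstream in(path.c_str());
    std::string key;
    double value;
    while (in >> key >> value)
        cfg[key] = value;
    return cfg;
}

// Write the pairs back out, one per line.
void saveConfig(const std::string &path,
                const std::map<std::string, double> &cfg)
{
    std::ofstream out(path.c_str());
    for (std::map<std::string, double>::const_iterator it = cfg.begin();
         it != cfg.end(); ++it)
        out << it->first << " " << it->second << "\n";
}

Re-tune at the event, write the new numbers into the file, and the vision program picks them up at the next start without a rebuild.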

Last edited by yash101 : 12-10-2014 at 20:30.
#17 · 12-10-2014, 20:50
controls weenie
Registered User
FRC #2973
Team Role: Mentor
 
Join Date: Oct 2014
Rookie Year: 2011
Location: United States
Posts: 23
controls weenie is an unknown quantity at this point
Re: Vision: what's state of the art in FRC?

Quote:
Originally Posted by marshall View Post
We're using OpenCV in C++ so example code is plentiful around the web. Our students have just now started to use it so we don't have anything to share just yet. If we make progress to the point where we can share it then we will, probably towards the end of build season.

The big deal with the TK1 is that it has the ability to use the GPU to assist with offloading work. To my knowledge, there is no method to use the GPU assisted functions for OpenCV with Python currently but that might be changing with the 3.x code release around the corner. We're using the 2.4.x code right now.

C++ is what we are using for the GPU integration as of right now because you have to manually manage the memory for the GPU and shuffle images onto it and off of it as you need them. Nvidia has a decent amount of resources out there for the Jetson but it is definitely not a project for those unfamiliar with Linux. It's not a Raspberry Pi and not anywhere near as clean as a full laptop. To get it working you have to do a bit of assembly. It's a nice computer, just not as straightforward as a Pi or a PCDuino or any of the others that have larger user bases. There are also problems running X11 on it so you really need to run it headless (Nvidia writes binary blob graphics drivers for Linux that are not super stable).

We're aiming for full 1080 but depending on the challenge we will likely have to down sample to 720 to get it to work with the frame rates we need.

Granted, this is all off-season right now and we have a lot of testing to do between now and the events before any of this is guaranteed to go on the robot. For all I know FIRST is going to drop vision entirely... I mean, cameras don't work under water do they?
Oh yeah...I forgot about the water issue

I see an issue getting a USB camera driver to read images at more than 30 Hz. This was an issue with the PCDuino and our webcam last year: the Ubuntu USB driver would not feed the processor more than 30 Hz. Dumping images from RAM to the GPU could also be a bottleneck because of the huge frame buffers.

I used Python bindings at work to copy data to (and from) the GPU queue. Python might be easier for the kids to use if it is available. I wonder if you can use OpenCL on the TK1 dev kit? OpenCL might give you OpenCV/Python bindings on that OS.
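For reference, a rough sketch of that RAM-to-GPU shuffle using OpenCV 2.4's cv::gpu module in C++ (requires a CUDA-enabled OpenCV build; the camera index and the grayscale threshold are placeholder choices):

Code:
#include <opencv2/opencv.hpp>
#include <opencv2/gpu/gpu.hpp>

int main()
{
    cv::VideoCapture cap(0);                  // camera index is an assumption
    if (!cap.isOpened()) return 1;

    cv::Mat frame, result;
    cv::gpu::GpuMat d_frame, d_gray, d_mask;  // device-side buffers

    while (true) {
        cap >> frame;
        if (frame.empty()) break;

        d_frame.upload(frame);                // RAM -> GPU: the copy that can become a bottleneck
        cv::gpu::cvtColor(d_frame, d_gray, cv::COLOR_BGR2GRAY);
        cv::gpu::threshold(d_gray, d_mask, 200, 255, cv::THRESH_BINARY);
        d_mask.download(result);              // GPU -> RAM

        cv::imshow("mask", result);
        if (cv::waitKey(1) == 27) break;      // Esc to quit
    }
    return 0;
}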

I hope FIRST continues to have image processing during the games. Some of the kids enjoy that more than any other task. Good luck with the TK1.
#18 · 12-10-2014, 21:02
faust1706
Registered User
FRC #1706 (Ratchet Rockers)
Team Role: College Student
 
Join Date: Apr 2012
Rookie Year: 2011
Location: St Louis
Posts: 498
faust1706 is infamous around these parts
Re: Vision: what's state of the art in FRC?

Quote:
Originally Posted by marshall View Post
For all I know FIRST is going to drop vision entirely... I mean, cameras don't work under water do they?
AUVs (autonomous underwater vehicles) are gradually developing vision systems. A big problem is correcting the color distortion caused by the water. A good friend of mine is working in a lab at Cornell, detecting and retrieving different-colored balls at the bottom of a swimming pool.

The task of finding your local position (i.e., GPS-denied localization) becomes dramatically more complex when you do it in three dimensions (think quadcopters or AUVs).

Quote:
Originally Posted by yash101 View Post
The greatest problem with the Kinect was getting it to work. I have never succeeded in opening a kinect stream from OpenCV!

The depth map of the Kinect is surprisingly accurate and powerful!

Next, perform a GaussianBlur transformation. Play around with the kernel size until the target is crisp and clear!

Hunter mentioned to me, last year, that when at the competitions, as soon as possible, ask field staff if there will be time where you will be able to calibrate your vision systems!
Over half the battle is getting everything to work, in my opinion. You have to compile from source and sometimes edit CMakeLists files (for example, if you want to compile OpenCV with OpenNI support).

For those of you interested in what the depth map looks like for the Kinect: [depth map]

You can do a lot of cool things with a depth map, but that's for another discussion.

I personally am not a fan of blurring an image unless I absolutely have to, or unless my calculation requires the center of a contour rather than its corners.
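If you do need a contour's center, here is a small sketch using image moments; the contour-extraction flags are typical defaults, not any team's actual code:

Code:
#include <opencv2/opencv.hpp>
#include <vector>

// Returns the centroid of the largest contour in a binary mask,
// or (-1,-1) if no contour was found.
cv::Point2f largestContourCenter(const cv::Mat &binaryMask)
{
    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(binaryMask.clone(), contours,
                     cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    double bestArea = 0.0;
    cv::Point2f center(-1.0f, -1.0f);
    for (size_t i = 0; i < contours.size(); ++i) {
        double area = cv::contourArea(contours[i]);
        cv::Moments m = cv::moments(contours[i]);
        if (area > bestArea && m.m00 > 0) {
            center = cv::Point2f(float(m.m10 / m.m00), float(m.m01 / m.m00));
            bestArea = area;
        }
    }
    return center;
}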

You should be asking when you can calibrate vision, to the point that it is borderline harassment, until you get an answer. A lot of venues are EXTREMELY poor environments due to window locations, but there isn't much you can do about it. As an example: [uhhhh]
By lunch on Thursday, I got it working like it did in St. Louis: [stl]

Here is a short video a student and I made during calibration at St. Louis: [video] We tweaked some parameters and got it to work nearly perfectly. As you can guess, we tracked the tape and not the LEDs for hot goal detection. I somewhat regret that decision, but it's whatever now.

[final]
__________________
"You're a gentleman," they used to say to him. "You shouldn't have gone murdering people with a hatchet; that's no occupation for a gentleman."

Last edited by faust1706 : 12-10-2014 at 21:20.
#19 · 12-10-2014, 21:57
Joe Ross (Unsung FIRST Hero)
Registered User
FRC #0330 (Beachbots)
Team Role: Engineer
 
Join Date: Jun 2001
Rookie Year: 1997
Location: Los Angeles, CA
Posts: 8,600
Joe Ross has a reputation beyond repute
Re: Vision: what's state of the art in FRC?

Quote:
Originally Posted by MrRoboSteve View Post
Our team is wanting to get serious about vision this year, and I'm curious what people think is the state of the art in vision systems for FRC.

Questions:

1. Is it better to do vision processing onboard or with a coprocessor? What are the tradeoffs? How does the RoboRIO change the answer to this question?

2. Which vision libraries? NI Vision? OpenCV? RoboRealm? Any libraries that run on top of any of these that are useful?

3. Which teams have well developed vision codebases? I'm assuming teams are following R13 and sharing out the code.

4. Are there alternatives to the Axis cameras that should be considered? What USB camera options are viable for 2015 control system use? Is the Kinect a viable vision sensor with the RoboRIO?
I don't think that most teams fail at vision processing because of any of the items listed. FIRST provides vision sample programs for the main vision task that generally work well. Here's what I think teams need to work on to be successful with vision processing:
  1. You need to have a method to tweak constants fairly quickly, to help with initial tuning and also to tweak based on conditions at competition.
  2. You need to have a method to view, save, and retrieve images which can help tune and tweak the constants.
  3. You need to have a way to use the vision data, for example accurately turning to an angle or driving to a distance (see the sketch after this list).
  4. You need to understand exactly what the vision requirements are for the game. Most of the time, there are one or more assumptions you can make which will greatly simplify the task.
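As a minimal illustration of point 3, one common approach (not any particular team's code) is to convert the target's pixel offset into a bearing and close a simple proportional loop on it; the field of view, image width, and gain below are assumptions you would measure and tune yourself:

Code:
// Bearing to the target in degrees; positive means the target is to the right.
// Uses a simple linear pixel-to-angle mapping, which is adequate for small angles.
const double kImageWidthPx     = 320.0;  // processing resolution (assumed)
const double kHorizontalFovDeg = 47.0;   // depends on the camera (assumed)

double targetBearingDeg(double targetCenterXPx)
{
    double offsetPx = targetCenterXPx - kImageWidthPx / 2.0;
    return offsetPx * (kHorizontalFovDeg / kImageWidthPx);
}

// Tiny proportional controller on the bearing error, clamped to a safe output.
// Feed the result as +turn to the left side and -turn to the right side.
double turnOutput(double bearingErrorDeg)
{
    const double kP = 0.03;              // tuned on the robot (assumed)
    double out = kP * bearingErrorDeg;
    if (out >  0.5) out =  0.5;
    if (out < -0.5) out = -0.5;
    return out;
}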

As for your third question, Team 341's 2012 vision sample program is probably the most popular: http://www.chiefdelphi.com/media/papers/2676

As for us, we've used LabVIEW/NI Vision on the dashboard PC. This makes it much easier to tweak constants and view and save images.

Last edited by Joe Ross : 12-10-2014 at 22:12.
#20 · 14-10-2014, 00:07
yash101
Curiosity | I have too much of it!
AKA: null
no team
 
Join Date: Oct 2012
Rookie Year: 2012
Location: devnull
Posts: 1,191
yash101 is an unknown quantity at this point
Re: Vision: what's state of the art in FRC?

I have plans for an OpenCV Codegen, where I basically make a drag-and-drop (more like click-to-add) interface that writes the C++ code for you. It won't be 100% efficient, because it really is just putting together known bits of code that work. It is then up to you to rename variables and optimise the code. I am trying to learn how to thread HighGUI at the moment, so hopefully everything will be 100% threaded!

This is meant to help beginner (and adept) programmers get OpenCV code down in no time!

I will also try to add two network options -- DevServer-based, and native C socket calls (Windows TCP/UDP, UNIX TCP/UDP).

I have been slowly working on this project since last year. I am thinking about making it 100% web-based. Hopefully, this will make getting started with OpenCV a no-brainer!

It is my goal this year to get my vision code completed as soon as possible!
#21 · 14-10-2014, 00:23
Tom Bottiglieri
Registered User
FRC #0254 (The Cheesy Poofs)
Team Role: Engineer
 
Join Date: Jan 2004
Rookie Year: 2003
Location: San Francisco, CA
Posts: 3,188
Tom Bottiglieri has a reputation beyond repute
Re: Vision: what's state of the art in FRC?

While CV is a neat field and is definitely worth learning more about, planning to use a solution before you know the problem is probably a bad idea. Our team looks at vision as a last resort as it introduces extra points of failure to an already complicated robot.

I recommend using simple sensors to get your robot the perception it needs, then tuning your control loops/operator interface code until you run into a brick wall. If you really can't achieve the task you want without a ton of driver practice, then look into adding a vision system to give you that last little bit of performance.
#22 · 14-10-2014, 00:59
billbo911
I prefer you give a perfect effort.
AKA: That's "Mr. Bill"
FRC #2073 (EagleForce)
Team Role: Mentor
 
Join Date: Mar 2005
Rookie Year: 2005
Location: Elk Grove, Ca.
Posts: 2,384
billbo911 has a reputation beyond repute
Re: Vision: what's state of the art in FRC?

Quote:
Originally Posted by Tom Bottiglieri View Post
While CV is a neat field and is definitely worth learning more about, planning to use a solution before you know the problem is probably a bad idea. Our team looks at vision as a last resort as it introduces extra points of failure to an already complicated robot.

I recommend using simple sensors to get your robot the perception it needs, then tuning your control loops/operator interface code until you run into a brick wall. If you really can't achieve the task you want without a ton of driver practice, then look into adding a vision system to give you that last little bit of performance.
Tom,
I've heard you make these comments before, and although I don't agree 100%, I fully understand your reasoning and logic. Since I like to look to the Poofs as a team to learn from, I would like to know: was there ever a game where 254 needed to use vision to overcome an obstacle?
__________________
CalGames 2009 Autonomous Champion Award winner
Sacramento 2010 Creativity in Design winner, Sacramento 2010 Quarter finalist
2011 Sacramento Finalist, 2011 Madtown Engineering Inspiration Award.
2012 Sacramento Semi-Finals, 2012 Sacramento Innovation in Control Award, 2012 SVR Judges Award.
2012 CalGames Autonomous Challenge Award winner ($$$).
2014 2X Rockwell Automation: Innovation in Control Award (CVR and SAC). Curie Division Gracious Professionalism Award.
2014 Capital City Classic Winner AND Runner Up. Madtown Throwdown: Runner up.
2015 Innovation in Control Award, Sacramento.
2016 Chezy Champs Finalist, 2016 MTTD Finalist
#23 · 14-10-2014, 02:20
yash101
Curiosity | I have too much of it!
AKA: null
no team
 
Join Date: Oct 2012
Rookie Year: 2012
Location: devnull
Posts: 1,191
yash101 is an unknown quantity at this point
Re: Vision: what's state of the art in FRC?

I am doing vision less because it is so powerful and more for the experience and learning. There is a lot to learn through it. There's a good amount of math involved behind the scenes, which I am slowly catching up on. It also depends on your algorithm-development skills. If you are able to figure out exactly what you want out of your program and draft how you are going to do it, the code is actually quite simple. It only took me a couple of hours to write my actual vision processing code. What took me the longest was (a) optimizations, (b) features (yes, I overcomplicate things), and (c) calibration, testing, and getting people to listen and get out of your way when testing.

Think of the entire program like a cake. What makes a cake a cake is the bread inside; it may be covered with frosting, or it may be bare. The preparation for baking the cake is your testing protocols and your test bench. This step also involves making sure that what you are trying to solve is feasible, and worth the pain. The bread (the cake itself) is your processing loop. This should be your first priority. After you have written your basic processing code, and have a test bench and some testing protocols to ensure it works, you can proceed to the decoration stage: optimizing the code and making it run at peak efficiency. This stage is just like putting the frosting on the cake. Next, you can start writing on the cake -- adding features and Easter eggs! Now that you have successfully made your cake, it is time to inspect it -- make sure you don't have any errors or bugs. Use the testing protocols you should have created before even starting this project to ensure everything you want is working. Finally, it is time to eat the cake! Om nom nom! Eating the cake happens at competition, when your software is working perfectly and you are doing much better than the other robots.

I came up with this model last year, when I failed to get my vision program completed in time. I started writing on the frosting before I had even baked the cake, so there was nothing supporting my excessive features and everything broke. I also did not have a proper testing protocol last year, so my first demo to my mentors was a flop. I didn't know about threading back then, and the camera buffers were overflowing, so I was getting 20+ seconds of lag. Not a very great first impression.
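A minimal sketch of the threaded-grabber fix for exactly that buffering problem, using C++11 threads; the grabber keeps only the newest frame so the processing loop never falls behind (camera index and window name are placeholders):

Code:
#include <opencv2/opencv.hpp>
#include <atomic>
#include <mutex>
#include <thread>

int main()
{
    cv::VideoCapture cap(0);                      // camera index is an assumption
    if (!cap.isOpened()) return 1;

    cv::Mat latest;
    std::mutex frameMutex;
    std::atomic<bool> running(true);

    // Grabber thread: drains the driver's buffer continuously so only
    // the most recent frame is ever kept.
    std::thread grabber([&]() {
        cv::Mat frame;
        while (running) {
            cap >> frame;
            if (frame.empty()) continue;
            std::lock_guard<std::mutex> lock(frameMutex);
            frame.copyTo(latest);
        }
    });

    while (running) {
        cv::Mat work;
        {
            std::lock_guard<std::mutex> lock(frameMutex);
            if (!latest.empty()) latest.copyTo(work);
        }
        if (!work.empty()) {
            // ... vision processing on `work` goes here ...
            cv::imshow("latest", work);
        }
        if (cv::waitKey(10) == 27) running = false;  // Esc to quit
    }
    grabber.join();
    return 0;
}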

Because of the problems I faced last year, I have been working on some software, also open source, to help me get to the cake faster. I have some grabber templates, and I am working on a small OpenCV extension, as I mentioned before.

My two cents: if you want to pursue anything complex in the next build season, get started right now. Create your development/testing platform so that when you start coding, you have one step out of the way!

Last edited by yash101 : 14-10-2014 at 02:22.
#24 · 14-10-2014, 10:37
MrRoboSteve
Mentor
AKA: Steve Peterson
FRC #3081 (Kennedy RoboEagles)
Team Role: Mentor
 
Join Date: Mar 2012
Rookie Year: 2011
Location: Bloomington, MN
Posts: 582
MrRoboSteve has a reputation beyond repute
Re: Vision: what's state of the art in FRC?

Thanks a lot everyone for your comments. Here are my notes from the thread.

General notes

Many teams are using vision, but it's not required to be successful. Oftentimes there is a simpler control strategy than vision for a particular task.

Teams using vision can't expect a lot of troubleshooting support from the CSA at the event.

Processing

There are two main strategies for performing vision processing.

1. Event -- perform a specific task
a. Aim -- e.g., robot is driven into position, and an aiming command is given.
b. Autonomous scoring -- moving robot into known position on field
c. Ramp -- robot automatically drives over 2012 ramp
2. Continuous -- the vision subsystem runs continuously, identifying one or more objects and feeding image telemetry as an input to the robot program
a. Drive to known position
b. Create HUD-style display (image with overlay) to show the driver (see the sketch after this list)
c. Indicate when robot is in scoring position to driver
Note that most of these are using telemetry as input to an autonomous process.
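A small sketch of the overlay idea from item 2b, drawing the detected target and a status line on the frame before it goes to the driver station (the rectangle, distance, and text are placeholders):

Code:
#include <opencv2/opencv.hpp>
#include <sstream>

// Draw a box around the target and a one-line status readout.
void drawHud(cv::Mat &frame, const cv::Rect &target,
             double distanceMeters, bool onTarget)
{
    cv::Scalar color = onTarget ? cv::Scalar(0, 255, 0)   // green when aligned
                                : cv::Scalar(0, 0, 255);  // red otherwise
    cv::rectangle(frame, target, color, 2);

    std::ostringstream label;
    label << "dist: " << distanceMeters << " m"
          << (onTarget ? "  ON TARGET" : "");
    cv::putText(frame, label.str(), cv::Point(10, 25),
                cv::FONT_HERSHEY_SIMPLEX, 0.6, color, 2);
}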


You have four main choices for where vision processing runs, each of which has benefits and drawbacks.

Driver station
+ have full power of PC
+ NI libraries available
+ fairly easy to interface
- Communications limits between robot and driver station prevent certain algorithms from working. This can be a big limitation
+ easy to display telemetry to drive team
+ can use DS software to move telemetry to robot

cRIO
+ NI libraries available
+ simplest interfacing between vision program and robot programs
- Running vision on separate thread/process makes programming more complicated.
- Easier to crash robot program (e.g., memory management issues)
- Limited CPU power. Current WPILib with cRIO at 100% CPU exhibits unpredictable behavior
+ Easier to move images to DS than coprocessor option
- IP camera support only -- no USB camera support

roboRIO
+ NI libraries available
+ potential for OpenCV to work, but some questions about whether NI Linux Real-Time has necessary libraries
+ simplest interfacing between vision program and robot programs
- Running vision on separate thread/process makes programming more complicated.
- Easier to crash robot program (e.g., memory management issues)
+ USB support allows direct interfacing to cameras
+ Much more CPU power than cRIO
CPU 4-10x faster
Has NEON instruction support, which looks like it's supported in OpenCV. Unclear on NI Vision.

External single-board computer (SBC or coprocessor)
+ Many choices of hardware available, some more powerful than roboRIO.
Popular examples include Arduinos, Raspberry Pi, PCDuino, GHI Fez Raptor.
Nvidia Jetson TK1 looks like a monster board -- 2GB of RAM, 192 GPU cores, Tegra K1. OpenCV 2.4 doesn't appear to support the GPU, though.
SBC with a video output is easier to troubleshoot than one without.
+ Some hardware supports hardware graphics speedup (vector instructions, GPU)
+ Many SBCs have USB support, allowing direct camera interfacing
- No NI library support
- Requires ability to do UDP packet processing (see the sketch below)
- Display of image on DS is more difficult
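A bare-bones sketch of the UDP piece of the coprocessor option: send the vision result to the robot controller as a small text packet (the IP address, port, and message format are made-up placeholders):

Code:
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

int main()
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) return 1;

    sockaddr_in robot;
    std::memset(&robot, 0, sizeof(robot));
    robot.sin_family = AF_INET;
    robot.sin_port   = htons(5800);                    // port is an assumption
    inet_pton(AF_INET, "10.0.0.2", &robot.sin_addr);   // robot controller IP (assumed)

    double angleDeg = 3.7, distanceM = 2.4;            // stand-ins for real vision results
    char msg[64];
    std::snprintf(msg, sizeof(msg), "%.2f %.2f", angleDeg, distanceM);

    sendto(sock, msg, std::strlen(msg), 0,
           reinterpret_cast<sockaddr*>(&robot), sizeof(robot));
    close(sock);
    return 0;
}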

Software

NI Vision is generally considered to be easier to set up.

If you want the option of using a single board computer (vision coprocessor), you probably want to code in C++ or Java, as code can run in any of the three locations.

Running a web server on your coprocessor can make things easier. http://code.google.com/p/mongoose/ is one.
http://ndevilla.free.fr/iniparser/ is one of many free configuration file parsers written in C

Camera

Camera calibration is an essential part of the process. Ensure that the camera you select can be calibrated and that its settings persist through reboot/power cycles.
Mounting location is also essential.
Need to make sure your software library can acquire images from your camera. UVC is the standard for USB cameras. UVC 1.5 supports H.264 video, which can be faster to process in certain ways if your vision processor can take advantage of it.
Some question about whether USB cameras can support frame rates above 30 Hz

Cameras
Axis cameras (from KOP) are good choices for people just starting out. There is good built-in WPILib support, and they maintain their settings through reboots (see the sketch after this list).
Kinect works too. Depth map can be very useful. Driver support in OSS world seems rough.
Other interesting cameras: Asus Xtion, Playstation Eye
Future: Pixy
LED ring lights (typically green, don't use white) are considered essential
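For reading the Axis stream directly into OpenCV rather than through WPILib, a hypothetical sketch, assuming an OpenCV build with FFmpeg; the address follows the usual 10.TE.AM.11 convention and /mjpg/video.mjpg is the Axis default path, so check your camera's settings:

Code:
#include <opencv2/opencv.hpp>

int main()
{
    // Address and path are assumptions; substitute your camera's actual URL.
    cv::VideoCapture cap("http://10.0.0.11/mjpg/video.mjpg");
    if (!cap.isOpened()) return 1;

    cv::Mat frame;
    while (cap.read(frame)) {
        // ... process frame ...
        cv::imshow("axis", frame);
        if (cv::waitKey(10) == 27) break;   // Esc to quit
    }
    return 0;
}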

Vision programming tactics

. Need to be able to modify parameters at runtime
Driver station dashboard parameter setting
Config file on robot filesystem
Config file is more flexible because you could have named presets, selected via the DS dashboard, that combine several parameter settings
. OpenCV is very popular. NI Vision is also viable. No commenter supported RoboRealm; one felt it was too simple (but is that bad?!) and another was held back by fears about licensing issues
. It's debatable whether having an FRC-specific library on top of a vision library has any use.
Lower the resolution if you need to run at a higher frame rate
Should have a calibration procedure that you use at competitions, which includes moving the robot around the competition field and taking a bunch of pictures through the webcam to use back in the pit for calibration.
Some venues are really bad: https://www.dropbox.com/s/j8ju2ttvx7...et..png?dl=0

Resources

Team 2073 Vision Code from 2014: http://www.chiefdelphi.com/forums/sh...d.php?t=128682
pcDuino 3: http://www.pcduino.com/pcduino-v3/
roboRIO OS whitepaper: http://www.ni.com/white-paper/14627/en/
Team 987 Kinect Vision whitepaper from 2012: http://www.chiefdelphi.com/media/papers/2698
openCV camera calibration: http://docs.opencv.org/doc/tutorials...libration.html
Team 3847 Whitepaper on Raspberry Pi: http://www.chiefdelphi.com/media/papers/2709
Team 341 sample vision program from 2012: http://www.chiefdelphi.com/media/papers/2676
__________________
2016-17 events: 10000 Lakes Regional, Northern Lights Regional, FTC Burnsville Qualifying Tournament

2011 - present · FRC 3081 Kennedy RoboEagles mentor
2013 - present · event volunteer at 10000 Lakes Regional, Northern Lights Regional, North Star Regional, Lake Superior Regional, Minnesota State Tournament, PNW District 4 Glacier Peak, MN FTC, CMP
http://twitter.com/MrRoboSteve · www.linkedin.com/in/speterson
#25 · 14-10-2014, 11:49
marshall
My pants are louder than yours.
FRC #0900 (The Zebracorns)
Team Role: Mentor
 
Join Date: Jan 2012
Rookie Year: 2003
Location: North Carolina
Posts: 1,337
marshall has a reputation beyond repute
Re: Vision: what's state of the art in FRC?

Quote:
Originally Posted by MrRoboSteve View Post
Nvidia Jetson TK1 looks like a monster board -- 2GB of RAM, 192 GPU cores, Tegra K1. OpenCV 2.4 doesn't appear to support the GPU, though.
To be clear, OpenCV is supported on the GPU for the Jetson, just not with Python (to my knowledge). C++ definitely works on the GPU on the Jetson. We've had some awesome early success with it processing 640x480 images at like 60-80fps.
#26 · 14-10-2014, 13:56
Tom Bottiglieri
Registered User
FRC #0254 (The Cheesy Poofs)
Team Role: Engineer
 
Join Date: Jan 2004
Rookie Year: 2003
Location: San Francisco, CA
Posts: 3,188
Tom Bottiglieri has a reputation beyond repute
Re: Vision: what's state of the art in FRC?

Quote:
Originally Posted by billbo911 View Post
Tom,
I've heard you make these comments before, and although I don't agree 100%, I fully understand your reasoning and logic. Since I like to look to the Poofs as a team to learn from, I would like to know: was there ever a game where 254 needed to use vision to overcome an obstacle?
We used vision in 2012 to align with the basket from the key. You really had to be tight on the target that year and our long 8wd was not super easy to align by hand. We were able to pull out the angle of the robot relative to a line between the center of the robot and the center of the backboard, as well as the angle of said line relative to the field. This allowed us to aim a bit left or right of the center of the backboard based on whether we were on the right/left/center side of the key.
#27 · 15-10-2014, 02:10
billbo911
I prefer you give a perfect effort.
AKA: That's "Mr. Bill"
FRC #2073 (EagleForce)
Team Role: Mentor
 
Join Date: Mar 2005
Rookie Year: 2005
Location: Elk Grove, Ca.
Posts: 2,384
billbo911 has a reputation beyond repute
Re: Vision: what's state of the art in FRC?

Quote:
Originally Posted by Tom Bottiglieri View Post
We used vision in 2012 to align with the basket from the key. You really had to be tight on the target that year and our long 8wd was not super easy to align by hand. We were able to pull out the angle of the robot relative to a line between the center of the robot and the center of the backboard, as well as the angle of said line relative to the field. This allowed us to aim a bit left or right of the center of the backboard based on whether we were on the right/left/center side of the key.
Yes, I remember seeing it in action at CVR that year.
We used a similar process, but only aligning our turret, not the entire robot. We had fairly decent success that season using the cRIO to do the processing. Fortunately we were able to relegate the cRIO to only processing the image and determining alignment and distance, without having to do anything else during the process.
__________________
CalGames 2009 Autonomous Champion Award winner
Sacramento 2010 Creativity in Design winner, Sacramento 2010 Quarter finalist
2011 Sacramento Finalist, 2011 Madtown Engineering Inspiration Award.
2012 Sacramento Semi-Finals, 2012 Sacramento Innovation in Control Award, 2012 SVR Judges Award.
2012 CalGames Autonomous Challenge Award winner ($$$).
2014 2X Rockwell Automation: Innovation in Control Award (CVR and SAC). Curie Division Gracious Professionalism Award.
2014 Capital City Classic Winner AND Runner Up. Madtown Throwdown: Runner up.
2015 Innovation in Control Award, Sacramento.
2016 Chezy Champs Finalist, 2016 MTTD Finalist
#28 · 20-10-2014, 20:39
controls weenie
Registered User
FRC #2973
Team Role: Mentor
 
Join Date: Oct 2014
Rookie Year: 2011
Location: United States
Posts: 23
controls weenie is an unknown quantity at this point
Re: Vision: what's state of the art in FRC?

Quote:
Originally Posted by marshall View Post
To be clear, OpenCV is supported on the GPU for the Jetson, just not with Python (to my knowledge). C++ definitely works on the GPU on the Jetson. We've had some awesome early success with it processing 640x480 images at like 60-80fps.
Marshall, how did you get the camera to output more than 30 Hz? What camera and OS did you use on the Jetson? The PCDuino/Linux driver would only read at 30 Hz. We would crop our image to get higher frame rates.
#29 · 20-10-2014, 20:54
techhelpbb
Registered User
FRC #0011 (MORT - Team 11)
Team Role: Mentor
 
Join Date: Nov 2010
Rookie Year: 1997
Location: New Jersey
Posts: 1,624
techhelpbb has a reputation beyond repute
Re: Vision: what's state of the art in FRC?

Quote:
Originally Posted by MrRoboSteve View Post
...
Processing
...
You forgot a laptop or Android device on the robot.

Also, do not assume that a CSA will not try to help you, but it could be asking quite a lot.
I know I've always offered to help as a CSA, but if you do something very complicated it's hard to justify the time to fix that over, say, getting a totally immovable team running.
So I will say... help us... help you.
The CSA can't, and probably the FTA can't, turn the field inside out for your team's robot vision.
Then again, if I notice it is not plugged in, I might suggest you fix that.

Quote:
Originally Posted by controls weenie View Post
Marshall, how did you get the camera to output more than 30 Hz? What camera and OS did you use on the Jetson? The PCDuino/Linux driver would only read at 30 Hz. We would crop our image to get higher frame rates.
I would guess they used a PS3 Eye camera, which can go to 100 fps with the right setup.
Be aware that I am only guessing based on my experience with that camera on a Linux laptop.

Last edited by techhelpbb : 20-10-2014 at 21:02.
#30 · 20-10-2014, 22:10
marshall
My pants are louder than yours.
FRC #0900 (The Zebracorns)
Team Role: Mentor
 
Join Date: Jan 2012
Rookie Year: 2003
Location: North Carolina
Posts: 1,337
marshall has a reputation beyond repute
Re: Vision: what's state of the art in FRC?

Quote:
Originally Posted by controls weenie View Post
Marshall, how did you get the camera to output more than 30 Hz? What camera and OS did you use on the Jetson? The PCDuino/Linux driver would only read at 30 Hz. We would crop our image to get higher frame rates.
We got our frame rate higher by dropping the resolution. Most webcams from Logitech/Microsoft seem to support dropping the resolution to push the frame rate above 30 FPS. I think it's a Logitech C920 we are using, but I'm not certain. The OS is the stock Ubuntu that comes on the board. It has been updated to the latest versions, and we are using the latest stable OpenCV and the latest CUDA for Jetson from Nvidia (6.0, not 6.5 yet).
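A short sketch of that resolution/frame-rate trade using OpenCV 2.4's VideoCapture properties; whether the camera honors the request depends on the driver, and the numbers here are just examples:

Code:
#include <opencv2/opencv.hpp>
#include <cstdio>

int main()
{
    cv::VideoCapture cap(0);                    // camera index is an assumption
    if (!cap.isOpened()) return 1;

    cap.set(CV_CAP_PROP_FRAME_WIDTH,  320);     // drop the resolution...
    cap.set(CV_CAP_PROP_FRAME_HEIGHT, 240);
    cap.set(CV_CAP_PROP_FPS,          60);      // ...to request a higher rate

    std::printf("now %.0f x %.0f @ %.0f fps\n",
                cap.get(CV_CAP_PROP_FRAME_WIDTH),
                cap.get(CV_CAP_PROP_FRAME_HEIGHT),
                cap.get(CV_CAP_PROP_FPS));
    return 0;
}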

Honestly though, going beyond 30 FPS doesn't seem to help as much as higher resolution does. We're happy to trade one for the other, at least for what we are working on. We want more precise targeting and distance calculations.

Right now the team is working on just basic object tracking using a crappy video I shot with the webcam of a co-worker juggling some red balls. The lighting was complete crap so the students are having to do a lot of processing to figure out the lighting, which is good because that will hopefully make them better at figuring it out come competition.

I had no idea about the PS3 Eye. I own one of those, so maybe I'll bring it in for the students to experiment with, but as I mentioned, FPS isn't a big deal.

Last edited by marshall : 20-10-2014 at 22:12.