Camera Code

So I worked out some camera code, and it is able to tell me the coordinates of the target when it is present. When the target isn't present, it is supposed to display on the dashboard that there is no target. This does not happen, because once it finds the target it will say "target found," and that field never gets reset on the driver station. So my question is: how can you get the dashboard to say no target is present?

Here is my sample code.

public void Camera()
    {
            cam.freshImage();
         try
            {
              ColorImage image = cam.getImage();
              try
              {
                BinaryImage firstColor = image.thresholdRGB(
                120, 160, 170, 210, 240, 260);
                ParticleAnalysisReport[] firstColorHits = firstColor.getOrderedParticleAnalysisReports(3);
                firstColor.free();

                for (int i = 0; i < firstColorHits.length; i++)
                {
                   ParticleAnalysisReport firstTrackReport = firstColorHits[i];
                   if (firstTrackReport.particleToImagePercent < FRC_PARTICLE_TO_IMAGE_PERCENT)
                   {
                       
                       imagepres = false;
                      
                      
                   }
                   else if(firstTrackReport.particleToImagePercent > FRC_PARTICLE_TO_IMAGE_PERCENT)
                   {
                       imagepres = true;
                       foundhorcenter = firstTrackReport.center_mass_x;
                       foundvertcenter = firstTrackReport.center_mass_y; 
                   }
                   if (imagepres == true)
                   {

                       display.println(DriverStationLCD.Line.kUser2, 1, "X coordinate " + Integer.toString(foundhorcenter));
                       display.println(DriverStationLCD.Line.kUser3, 1, "Y coordinate: " + Integer.toString(foundvertcenter));
                       display.println(DriverStationLCD.Line.kUser4, 1, "Target found");
                       display.updateLCD();
                    }
               else
               {
                   display.println(DriverStationLCD.Line.kUser2, 1, "X coordinate na");
                   display.println(DriverStationLCD.Line.kUser3, 1, "Y coordinate: na");
                   display.println(DriverStationLCD.Line.kUser4, 1, "no image");
                   display.updateLCD();  // update after the lines are set, not before
               }
                }
              }
              catch (Exception e) {
                e.printStackTrace();
            } finally {
                if (image != null) {
                    image.free();
                }
                image = null;
            }
            }
            catch (AxisCameraException e)
            {
                e.printStackTrace(); // don't swallow camera errors silently
            }
            catch (NIVisionException e)
            {
                e.printStackTrace();
            }
        }

Thanks for your help

imagepres = false; only gets set if there are color hits, because the assignment is inside the for loop.

If firstColorHits.length == 0, it should get set to false too.
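The logic of that fix can be sketched outside of WPILib. The class name, method name, and constant below are stand-ins for the fields in the original code, not the actual sample:

```java
// Sketch of the fix: decide presence from the particle reports as a whole,
// so an empty array (no color hits) yields "no target" instead of a stale value.
public class TargetCheck {
    // Stand-in for FRC_PARTICLE_TO_IMAGE_PERCENT from the sample code
    static final double PARTICLE_TO_IMAGE_PERCENT = .0001;

    // percents: particleToImagePercent for each report, possibly empty
    static boolean targetPresent(double[] percents) {
        boolean present = false;          // reset every frame -- the key fix
        for (double p : percents) {
            if (p > PARTICLE_TO_IMAGE_PERCENT) {
                present = true;           // any large-enough particle counts
            }
        }
        return present;                   // stays false when the array is empty
    }

    public static void main(String[] args) {
        System.out.println(targetPresent(new double[] {}));        // false: no hits
        System.out.println(targetPresent(new double[] {0.5}));     // true
        System.out.println(targetPresent(new double[] {0.00001})); // false: too small
    }
}
```

The same pattern works in the original Camera() method: set imagepres = false once, before the loop, and only ever set it to true inside the loop.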

:slight_smile:

Where is FRC_PARTICLE_TO_IMAGE_PERCENT coming from? I can copy and paste most of your code and figure out the undeclared variables, but I'm not sure how to resolve that one.

Thanks!

It's a static variable provided in the sample code, probably in WPILib. Look at the tracker code from '09.

private static final double FRC_PARTICLE_TO_IMAGE_PERCENT = .0001;

A shame :frowning: I was kind of hoping not to have to tackle most of the camera code myself, and the CircleTrackerDemo is extremely hard to figure out. It took me three days to work out the Dashboard example :eek:

His code is a good example of how to get started with the camera, though. It is just not the full class. Many teams have single-class autonomous code.

I edited my comment after realizing this; my claim was incorrect. This function is open source. I guess you grabbed it before I removed it…

Also, secrecy is not uncommon among FIRST teams, so teams are not expected to open-source their code. As a programming mentor, I feel ethically compelled to do so. If Jimmy is a student, he is welcome to keep his code secret :). My work is COTS; his isn't.

PS. If you want to take a look at my code, it is all open source and comes with video tutorials :). (It's missing good comments though, so don't be too harsh, Dell.)

just shoot me an email [email protected]

I kind of think the spirit of FIRST ethically compels me to share my code, if it will help someone else. That being said, I’m a teacher, so if it can help someone learn I’ll share it with them.

I use multiple classes to break the code down into reusable pieces, and spin off threads to allow things like the driver station to take care of themselves. I use a separate class to define autonomous routines (go straight, turn left, turn right, back up, etc.), and then call those methods in autonomous() in a script like fashion, i.e.


public void autonomous() {
    autodrive.goStraight(8);  // go straight ahead for 8 seconds
    autodrive.turnRight(90);  // turn right 90 degrees
    autodrive.backUp(3);      // back up for 3 seconds
}

When the AutoDrive class is complete anyone on my team should be able to program an autonomous routine using simple commands, without having to worry about how the program actually accomplishes the task.
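A timed maneuver method of that kind might be sketched as follows. This is an illustration under assumed names (the Drive callback, the 0.5 power level, the busy-wait loop), not the actual AutoDrive class:

```java
// Hypothetical sketch of an AutoDrive-style wrapper: each method runs one
// maneuver to completion, so autonomous() reads like a script.
public class AutoDrive {
    // Minimal stand-in for whatever drive base the real class wraps
    public interface Drive { void set(double left, double right); }

    private final Drive drive;
    public AutoDrive(Drive drive) { this.drive = drive; }

    // Drive straight for the given number of seconds, then stop.
    public void goStraight(double seconds) {
        long end = System.currentTimeMillis() + (long) (seconds * 1000);
        while (System.currentTimeMillis() < end) {
            drive.set(0.5, 0.5);   // equal power to both sides
        }
        drive.set(0.0, 0.0);       // always end the maneuver stopped
    }

    // Back up for the given number of seconds, then stop.
    public void backUp(double seconds) {
        long end = System.currentTimeMillis() + (long) (seconds * 1000);
        while (System.currentTimeMillis() < end) {
            drive.set(-0.5, -0.5); // reverse both sides
        }
        drive.set(0.0, 0.0);
    }
}
```

Because each method blocks until its maneuver is done, the calls in autonomous() naturally run in sequence, which is what makes the script style work.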

The whole thing is modular that way. I use SimpleRobot instead of Iterative: it allows finer control of a multi-threaded model, without the interference of the automatic looping in Iterative. Iterative would eliminate threads, but would also be more monolithic, and therefore less reusable.

Off topic :o I’m sketching out my camera class, so I’m looking for better (meaning easier to follow) code than the stock examples. Jimmy’s looks good, and I can actually read it (well, mostly :slight_smile: )

Dell, if you look at my code, I use a similar structure. Reusability is a critical element. However, our team has found that a simple series of maneuvers is not sufficient, so I have created a similar structure that utilizes a tree (pass, fail, timeout).
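The tree idea can be sketched like this: each maneuver reports one of three outcomes, and each outcome selects the next maneuver. All names here are illustrative, not the actual team code:

```java
import java.util.function.Supplier;

// Sketch of a pass/fail/timeout decision tree for autonomous routines.
public class AutoTree {
    public enum Result { PASS, FAIL, TIMEOUT }

    // A node runs its action, then hands off to the branch for its result.
    public static class Node {
        final Supplier<Result> action;
        Node onPass, onFail, onTimeout;   // null branch = routine finished
        public Node(Supplier<Result> action) { this.action = action; }
    }

    // Walk the tree until a branch is null; return how many maneuvers ran.
    public static int run(Node node) {
        int steps = 0;
        while (node != null) {
            steps++;
            switch (node.action.get()) {
                case PASS:    node = node.onPass;    break;
                case FAIL:    node = node.onFail;    break;
                default:      node = node.onTimeout; break;
            }
        }
        return steps;
    }
}
```

A plain series of maneuvers is just the special case where every node's onPass, onFail, and onTimeout all point to the same next node, so the series style stays available inside the tree.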

Also, thank you for joining the initiative. Autonomous has been lacking among FIRST teams, so I agree it is the responsibility of successful teams to lead the endeavor and stay open source.

647 has essentially had NO autonomous capability in the past, so for now a series will work. A decision tree is a great idea, though; I'll definitely take a look and see what I can implement in the future.

I found in the past that the camera processing is often unnecessary. A lot of the time it is just easier to use delays, especially since the example camera code is so hard to follow, and that is because they added things that weren't needed. I think if we can get very simple camera code that anyone can follow and modify, more teams will use the camera in auton.

what are you tracking with this program?


Gianna Simmons
-Rookie Team 3567

It tracks the reflective tape that is on the end of the scoring pegs.

Right now it is tracking just the threshold values (aka the color we want to track), but if you know the threshold values of a different target, then you can plug them in and use them. Also, the threshold values that I was using were for reflective tape with an LED light shining at it.
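For anyone new to thresholding: a call like thresholdRGB(rMin, rMax, gMin, gMax, bMin, bMax) keeps only the pixels whose three channels all fall inside their windows. The per-pixel test can be sketched as follows (an illustration using the values from the post, not the NIVision implementation; the blue max of 260 is clipped to 255 since 8-bit channels top out there):

```java
public class Threshold {
    // True when each channel is inside its [min, max] window, matching the
    // argument order of thresholdRGB(rMin, rMax, gMin, gMax, bMin, bMax).
    static boolean inWindow(int r, int g, int b) {
        return r >= 120 && r <= 160   // red window from the post
            && g >= 170 && g <= 210   // green window
            && b >= 240 && b <= 255;  // blue window (260 clipped to 255)
    }

    public static void main(String[] args) {
        System.out.println(inWindow(140, 190, 250)); // inside all three windows
        System.out.println(inWindow(100, 190, 250)); // red too low, rejected
    }
}
```

Pixels passing the test become the particles that getOrderedParticleAnalysisReports measures, which is why tight windows around the LED-lit tape color matter so much.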