Go Back   Chief Delphi > Technical > Programming > Java
09-01-2013, 21:13
RufflesRidge
Re: Vision Processing with cRIO

To understand the interaction between the two commands, it is necessary to understand at least the basics of how the command scheduler works. Ideally, each time your robot receives a packet from the DS (roughly every 20 ms), the scheduler goes through each command currently scheduled to run (I don't recall the logic for the ordering; for this purpose assume it's arbitrary) and runs its execute() method, then its isFinished() method. This means that for ideal operation, the total execution time of the execute() methods of all the commands you would ever expect to run at once should be under 20 ms.
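To make the loop above concrete, here is a minimal sketch of the idea (this is NOT the real WPILib Scheduler, just a toy model in modern Java; the real one has requirements, interruption, initialize()/end(), etc.). Each "pass" represents one DS packet: every scheduled command gets one execute() call, then its isFinished() is checked and it is removed if done.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Toy model of the scheduler loop described above -- not WPILib's
// actual Scheduler, just the concept for illustration.
public class MiniScheduler {
    public interface Command {
        void execute();
        boolean isFinished();
    }

    private final List<Command> scheduled = new ArrayList<>();

    public void add(Command c) { scheduled.add(c); }

    public int size() { return scheduled.size(); }

    // One pass, as would happen roughly every 20 ms on a real robot:
    // run each command's execute(), then drop it if isFinished().
    public void runOnePass() {
        Iterator<Command> it = scheduled.iterator();
        while (it.hasNext()) {
            Command c = it.next();
            c.execute();
            if (c.isFinished()) {
                it.remove();
            }
        }
    }
}
```

The key takeaway is that a single slow execute() delays every other command in the same pass, which is exactly why long-running vision code inside execute() is a problem.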

Understanding the scheduler at this basic level is critical to figuring out how to get vision code to play nicely with your other commands, so if you have any questions about the above, I would recommend stopping here and asking them before proceeding.

An assumption I have made in a few of the examples below (particularly the timing of the examples) is that the vision processing code is the primary consumer of CPU time in the system. For most FRC robots this assumption is either true, or close enough that the point of the examples still holds.

So now you want to integrate vision processing code into a command. I have not yet had a chance to benchmark the 2013 Vision Example code, but based on previous experience it is extremely unlikely that processing even a single frame takes under 20 ms, which means that if you do complete processing of even a single frame in the execute() method of a command, you will overrun the next DS packet.

So what do we do about that?

Option 1: Depending on how long it actually takes to process a frame, you may be able to just "ignore" the problem (you're not really ignoring it; you're deciding that the result of the overrun is acceptable). I would suggest that many FRC drivers would hardly notice if their robot only responded to every other packet (a 40 ms update rate). To benchmark how long it takes to process a frame, use the getMsClock or getUsClock methods in the Timer class to compare the system time before and after processing the image. This option is particularly palatable if you are only processing a single image on demand (e.g. when a driver presses a button), then acting on it.
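The benchmarking idea looks something like the sketch below. On the robot you would use the Timer class methods mentioned above; here System.nanoTime() stands in so the example runs anywhere, and processFrame() is a placeholder for your actual vision code.

```java
// Sketch of timing one frame of processing. processFrame() is a
// placeholder (busy-work) so the example is self-contained; swap in
// your real vision call, and on the cRIO swap the clock for the
// Timer class methods discussed above.
public class FrameBenchmark {
    // Placeholder "vision" work.
    static void processFrame() {
        double x = 0;
        for (int i = 0; i < 100000; i++) x += Math.sqrt(i);
        if (x < 0) System.out.println(x); // defeat dead-code elimination
    }

    // Returns elapsed wall time in milliseconds.
    static double timeOneFrame() {
        long start = System.nanoTime();
        processFrame();
        long end = System.nanoTime();
        return (end - start) / 1.0e6;
    }

    public static void main(String[] args) {
        System.out.println("processFrame took " + timeOneFrame() + " ms");
    }
}
```

If the measured time is not far over 20 ms, Option 1 (accepting the occasional overrun) may be good enough; if it is much larger, look at the options below.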

Option 2: If you would like to constantly process images, but find that the robot performs sluggishly when the command runs constantly, you may wish to limit the rate at which images are processed, resulting in a sequence like: process image (50 ms), respond to DS packet and idle (20 ms), respond to DS packet and idle (20 ms), respond to DS packet and idle (20 ms) = ~9 fps processed. I can think of two ways of doing this. One is to implement the timer inside the command itself to decide when it should process images. The other, probably better, way is to make a subclass of Trigger that triggers the command to run every X ms (every 100 ms would result in approximately my sample timing above).
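The rate-limiting logic itself is simple either way. Here is a standalone sketch of it with an injected clock value so it can be tested without robot hardware; in WPILib this check would live in your Trigger subclass's get() method (feeding the current Timer reading in) or at the top of the command's execute().

```java
// Sketch of the Option 2 gate: only allow the expensive work to start
// once every 'periodMs' milliseconds. The caller passes in the current
// time (from whatever clock source the platform provides).
public class PeriodicGate {
    private final long periodMs;
    private long lastRunMs;

    public PeriodicGate(long periodMs, long nowMs) {
        this.periodMs = periodMs;
        this.lastRunMs = nowMs - periodMs; // allow an immediate first run
    }

    // Returns true (and arms the next period) if a full period elapsed.
    public boolean shouldRun(long nowMs) {
        if (nowMs - lastRunMs >= periodMs) {
            lastRunMs = nowMs;
            return true;
        }
        return false;
    }
}
```

With periodMs = 100 you get roughly the sample timing above: one frame processed per 100 ms window, with the DS packets in between handled normally.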

Option 3: The third option is to break the image processing up into steps that return faster, and turn FindTargets into a CommandGroup that calls a command for each of these chunks. Depending on how the processing time is split between the different methods, this option may or may not be effective. Note that this will result in a lower framerate, similar to option 2, but if the processing code can be divided appropriately, it will avoid the potential "lag spikes" you may see with that method. A sequence using this method to process the same frame as option 2 would look something like: respond to DS packet and process step 1 (20 ms), respond to DS packet and process step 2 (20 ms), respond to DS packet and process step 3 (20 ms), respond to DS packet and process step 1 of image 2 (20 ms), etc.

As you can see, option 3 takes the same 50 ms of processing time and spreads it into the idle time while the robot is waiting for the next packet (the processing time for other commands responding to the packet is likely very low). If the processing can be broken out this way, this is the approach I would recommend. You can again use the methods in the Timer class to time the various steps of the image processing to determine whether it can be broken into chunks of 20 ms or less.
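Option 3 amounts to a small state machine: each tick does one chunk of the pipeline and reports when a full frame is done. A minimal sketch (the three stage methods are hypothetical placeholders for, say, threshold / filter / measure stages; in command-based code each stage would be its own Command inside the CommandGroup):

```java
// Sketch of Option 3 as a state machine. Call step() once per DS
// packet; it performs one chunk of the (placeholder) pipeline and
// returns true when a complete frame has been processed.
public class SteppedPipeline {
    private int stage = 0;
    public int framesDone = 0;

    private void stepThreshold() { /* chunk 1: e.g. color threshold */ }
    private void stepFilter()    { /* chunk 2: e.g. particle filter */ }
    private void stepMeasure()   { /* chunk 3: e.g. target measurements */ }

    public boolean step() {
        switch (stage) {
            case 0:  stepThreshold(); stage = 1; return false;
            case 1:  stepFilter();    stage = 2; return false;
            default: stepMeasure();   stage = 0; framesDone++; return true;
        }
    }
}
```

Three 20 ms-or-less chunks per frame gives the one-frame-per-three-packets cadence described above, with no single tick overrunning its packet.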

Option 4 would be threading, but my general recommendation is to avoid threading unless you are familiar with it and know what kind of trouble you can get into with it.

Option 5 is to offload the vision processing to another processor, either a coprocessor located on the robot or the DS laptop (using RoboRealm, the SmartDashboard extension, or the LabVIEW dashboard with the LabVIEW vision example added).


