Quick Question about Threads

I’m currently writing some robot code, and I’ve been wondering about using multiple threads in Java. As far as I know, it’s the same as using threads in any other Java application, but some people on my team think differently. Here’s what I think should work. I’m not sure that starting the thread in the constructor is the best idea, and I’m also not sure that I’m using Thread.sleep() the right way. I know that I could start the thread from somewhere else with new Thread(new Vision()).start();, but what works best with Java for FRC?


public class Vision implements Runnable {
    // flag controlling the processing loop; volatile so another thread can stop it
    private volatile boolean processImages = true;

    public Vision() {
        new Thread(this).start();
    }

    public void run() {
        while (processImages) {
            // image stuff here
            try {
                Thread.sleep(100);
            } catch (InterruptedException ex) {
                ex.printStackTrace();
            }
        }
    }
}

It would work. But I am very sceptical of starting a thread in a constructor. I can’t imagine a situation where that would be a good idea (in FRC coding, at least). But yes, the Thread class works just like it does in Java SE.

As to Thread.sleep(), I would refer you here: http://stackoverflow.com/questions/3956512/java-performance-issue-with-thread-sleep

You’re right, starting a thread in the constructor is wrong. It should be moved into a method, say startVision(), instead of the constructor. Also, we’re using Thread.sleep() instead of Thread.yield() because it seems to give us more consistent vision loop timings, though I’m not really sure why. It also keeps the CPU usage away from 100%. We know that the vision loop will slow down when other stuff starts up, but it’s nice to be able to watch the CPU usage and see which functions use the most resources. What do you think is the best way to free up the CPU?

Might be because of this?

Thread.sleep(100);

Never used threading for FRC, but I have for game programming, where my sleep time varies based on a few factors (in other words, it’s not just a while loop with a constant sleep time). Not sure how to implement this in FRC, just throwing out the possibility.

Also, you’re sleeping for a tenth of a second. Try making the value 1000 (1 second) and see if the usage goes down.

If neither of these works, you might want to use a debugger and pinpoint what specifically is maxing out the CPU.

Sorry, my last post was a bit confusing. What I was trying to say was that using Thread.yield() caused the CPU usage to go to 100% but didn’t cause any lag. The disadvantage was that it made the vision loop timing unpredictable. What we used instead was Thread.sleep(), which works for us because waiting 100ms doesn’t increase the CPU usage very much. 6-10 fps seems like a good target frame rate for our camera. I don’t really see how the Thread.sleep() approach can be improved without a much more complicated threading scheme. I am aware that the sleep duration tends to be off because of the way thread scheduling works, but the difference is so small that for us it won’t really matter; we’re not waiting for something to happen, we’re just trying to reduce CPU usage. I’m trying to keep it simple: we only have three threads right now, including the main thread, and they interact in such a way that they don’t need to be synchronized with each other, so I don’t see why I’d have to do anything differently.

Thread.yield() hints to the scheduler that the current thread is willing to pause and let other threads run. See: http://stackoverflow.com/questions/4827429/whats-the-difference-between-thread-yield-and-thread-sleep0-in-java

Unless that is what you are going for, my guess is that yield() doesn’t find any other threads to run, so the current thread resumes immediately, making the loop spin incredibly fast (hence the 100% usage).

Also, if you are feeling lazy, WPILibJ has a class called Timer with a static method delay(double seconds). It just uses Thread.sleep(), but converts the double correctly and catches the exception.
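For instance, a minimal sketch of the loop from the top of the thread using it (Timer here is WPILib’s edu.wpi.first.wpilibj.Timer, and processImages is the same flag as before):

import edu.wpi.first.wpilibj.Timer;

public class Vision implements Runnable {
    private volatile boolean processImages = true;

    public void run() {
        while (processImages) {
            // image stuff here
            Timer.delay(0.1); // ~100ms; converts seconds for you and
                              // catches the InterruptedException internally
        }
    }
}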

I don’t know whether this works on the robot, but this is how we plan to implement it on our driver station this year. We may add an interrupt timer that re-processes a new image if the old image hasn’t been processed yet, but we don’t expect the driver station laptop to bog down enough that it can’t keep up. This method doesn’t depend on a sleep() or a timer; it simply processes images as they come in chronologically (even if they arrive asynchronously due to network lag).

The big issue with doing threading like this comes when you go to fold the results back into whatever other robot logic you have. You MUST make sure your data is thread-safe (i.e., use synchronized blocks on critical-path items OR make sure that only one single thread can access/modify a specific set of critical data). With GUIs it’s easy: SwingUtilities.invokeLater() will update data on a GUI using the Event Dispatch Thread. For your own internal robot stuff, though, it isn’t quite as clear-cut.


import java.awt.image.BufferedImage;
import java.util.concurrent.Executor;
import java.util.concurrent.Executors;

/**
 * IUpdate<T> is just a type-safe interface that I commonly use
 * for event-driven items that only need 1 input parameter:
 *
 *   public interface IUpdate<T>
 *   {
 *     public void update(T pObject);
 *   }
 */
public abstract class ImageProcessor implements IUpdate<BufferedImage>
{
  // single worker thread; frames are processed one at a time, in order
  private final Executor mExecutor = Executors.newSingleThreadExecutor();

  public ImageProcessor(IImageProvider pProvider)
  {
    // IImageProvider is our own interface; it calls update() for each new frame
    pProvider.addListener(this);
  }

  @Override public void update(final BufferedImage pObject)
  {
    // hand the frame off to the worker thread and return immediately,
    // so the camera/network thread is never blocked by processing
    mExecutor.execute(new Runnable()
    {
      @Override public void run()
      {
        processImpl(pObject);
      }
    });
  }

  protected abstract void processImpl(BufferedImage pImage);
}
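A concrete subclass then only needs to implement processImpl(). Something like this, where TargetFinder is just an illustrative name:

import java.awt.image.BufferedImage;

public class TargetFinder extends ImageProcessor
{
  public TargetFinder(IImageProvider pProvider)
  {
    super(pProvider); // registers this processor as a listener for new frames
  }

  @Override protected void processImpl(BufferedImage pImage)
  {
    // runs on the executor's worker thread, never on the camera/GUI thread
    // ... find the target, publish coordinates to the rest of the code ...
  }
}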

My idea, which is my attempt to keep things simple, was to have a thread that would process the image, store the resulting coordinates in a variable, then wait 100ms. This would keep the processor happy. If we wanted, we could time the processing and subtract that time from the 100ms to make sure we got exactly 10fps. Then, our PID loop for targeting would read the most recently calculated image coordinates through a getter method on the vision class. I realize that the PID loop may sometimes be getting stale information; for instance, it may grab the same coordinates twice before the cRIO can process another image. We used LabVIEW to do the same thing on the cRIO, and it worked fine. If my logic is flawed, I think I could just call a method that processes the image from within the PID loop, which is itself threaded. What is the easiest way to get the image stuff working in a thread?
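To make that concrete, here is roughly what I mean as a minimal sketch; processImage() is a placeholder for our actual image processing, and the volatile fields are how the PID loop would read the latest result:

public class Vision implements Runnable {
    // latest result: written only by the vision thread, read by the PID loop.
    // Note: two separate volatiles aren't atomic as a pair; if X and Y must
    // stay consistent with each other, store them in one immutable object.
    private volatile double targetX;
    private volatile double targetY;

    public void run() {
        while (true) {
            double[] coords = processImage(); // placeholder for the real work
            targetX = coords[0];
            targetY = coords[1];
            try {
                Thread.sleep(100); // ~10 fps, keeps the processor happy
            } catch (InterruptedException ex) {
                return; // treat interruption as a shutdown request
            }
        }
    }

    // called from the PID loop; always returns the most recent result
    public double getTargetX() { return targetX; }
    public double getTargetY() { return targetY; }

    private double[] processImage() {
        // stand-in for grabbing a frame and finding the target
        return new double[] { 0.0, 0.0 };
    }
}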

What are threads used for, and do you have any resources for using them?

Multitasking and keeping certain functions of the robot isolated from others.

General information on Java threads can be found with Google.

FRC-specific information can be found using the handy “search” bar up at the top of the page…

Why 100ms?
What happens if your image processor takes 10ms to process (for a total loop time of 110ms), but the images still come in at 10Hz (every 100ms)?
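For what it’s worth, one common way to handle the loop-time half of that is to subtract the measured processing time from the period, rather than always sleeping a fixed amount. A sketch, reusing the processImages flag from the code above:

public void run() {
    final long periodMs = 100; // target loop period: 10 Hz
    while (processImages) {
        long start = System.currentTimeMillis();
        // ... image stuff here ...
        long remaining = periodMs - (System.currentTimeMillis() - start);
        if (remaining > 0) {
            try {
                Thread.sleep(remaining); // sleep only the leftover time
            } catch (InterruptedException ex) {
                break; // treat interruption as a shutdown request
            }
        }
        // if processing took longer than the period, skip the sleep:
        // the loop is already behind and new frames are queueing up
    }
}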

Threads are used to keep critical code (like your robot drive code) from being interfered with by errant processing algorithms that augment, but do not supersede, the data that the critical code uses (like image processing). Separating the threads allows the critical code to always have enough processor time to run, so the robot remains responsive. On a multi-core processor, even if the image rate is increased to something obscene like 100fps, the robot drive code should still remain responsive. As far as I know, the cRIO is not multi-core, so image rate does still matter; even so, 2-3 non-intensive threads are still a great way to ensure the non-critical data is processed separately. If there’s an error in the data that causes the thread to die (uncaught exceptions, anyone?), the thread that died should not be the main robot code thread.

However, there is no free lunch: dedicating more threads than you have CPU cores will still cause CPU scheduling conflicts. For example, if I dedicate 8 threads to heavy processing and expect my 9th, main critical thread to remain responsive (on a quad-core hyper-threaded system), I’m really just fooling myself: the system only has 8 hardware threads, so at most 7 are available for heavy (100% CPU) processing if the critical thread is to keep one to itself.
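On the uncaught-exception point, here is a sketch of how a worker thread can be set up so that a crash in it gets logged instead of silently killing the work (Vision is the class from the top of the thread; the handler body is just illustrative):

Thread visionThread = new Thread(new Vision(), "vision");
visionThread.setDaemon(true); // don't keep the JVM alive just for this thread
visionThread.setUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
    public void uncaughtException(Thread t, Throwable e) {
        // log it; the main robot thread keeps running either way
        System.err.println("Vision thread died: " + e);
    }
});
visionThread.start();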

A much simpler explanation of threading is this:

Being able to do more than one thing at a time. It basically lets you do two (or more) things simultaneously, without any expectation that one of them finishes before the other. Keep in mind there are a lot of concurrency problems you need to deal with if you use multiple threads (see synchronized blocks, atomic variables, and volatile variables).
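As a tiny example of the volatile point: a stop flag shared between threads should be volatile (or accessed under synchronization), or the worker thread may never see the change. A minimal sketch:

public class Worker implements Runnable {
    // without volatile, the worker may cache 'running' and loop forever
    private volatile boolean running = true;

    public void stop() { running = false; } // called from another thread

    public void run() {
        while (running) {
            // ... do work ...
        }
    }
}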

Well, we chose 100ms because it is what is used in LabVIEW for the vision loop wait time. Also, when we use the AxisCamera.getInstance().getImage() method, it gets the most recent image. Since images come in faster than every 100ms, there’s no way for the loop to grab the same image twice. I’m not sure this is the best way; I’d like to know if there is a better approach.

Disclaimer: I have little to no experience with vision tracking apart from casual discussions with people who do.

I’m not sure there is any need to periodically fetch and analyse an image. Personally, I would only grab an image when it is needed, and take and analyse just that one. I know for a fact that most of the teams that did well at champs did exactly that. Processing multiple images just wastes valuable processing power. If it takes 2-5 seconds to process one image well, versus doing 10 images badly in the same amount of time, it seems like an obvious decision to do it once, thoroughly.

Kind of like this:

1. User presses the "track" button (mapped somewhere on the controller)
2. Stop the robot, take a picture
3. Send the image to a new vision processing thread (or executor)
4. Wait (keep the robot stopped) until the thread sends the speed (or whatever data you need) back to the main thread (use Thread.join() or Object.wait(); see the sketch below)
5. Shoot using the data

This is far more efficient than constantly looking for the target. Running at 100% CPU all of the time can literally slow your robot down (the Watchdog may not get fed, and PWM signals get sent less often). Unless you are using the target to dynamically line yourself up (which would be impressive), I can’t think of a good reason to track it periodically.
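Here is a sketch of steps 3 and 4 using an ExecutorService and a Future instead of raw join()/wait(); it is the same wait-for-the-result handoff. All names are illustrative, and processImage() stands in for whatever analysis you do:

import java.awt.image.BufferedImage;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class OneShotVision {
    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    // called when the driver presses "track"; blocks until the result is ready
    public double aimAndGetSpeed(final BufferedImage image) throws Exception {
        Future<Double> result = executor.submit(new Callable<Double>() {
            public Double call() {
                return processImage(image); // runs on the worker thread
            }
        });
        return result.get(); // main thread waits here, robot stays stopped
    }

    private double processImage(BufferedImage image) {
        // placeholder for the actual target analysis
        return 0.0;
    }
}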

I can honestly say that properly processing an image well doesn’t take even 2 seconds. However, if it did, a good engineer would find the balance of quality versus time. The strategy of a given game may not allow a 2-second waiting sequence if the goal is a top-notch robot that uses vision tracking as part of scoring (auto-aim in '12, 2-tube auton in '11, 5-ball auton in '10, etc.).

For Rebound Rumble, we used LabVIEW to program our robot. We actually found it much easier to process about 8-10 images per second and just use a PID loop. The tracking wasn’t perfect, but it was simpler to implement than doing the math to figure out the change in angle our turret needed. It was also easier to build because our turret never had position feedback and used the image coordinates instead. Using this method, we were able to successfully drive back, tip the bridge, then drive forward, line up, and make both baskets. My goal is to do something similar in Java, which we will use this year.

Sorry if it sounded like I was using 2 seconds as some sort of specific number. Like I mentioned, I have no real experience with vision processing and just used 2 for the sake of the point.