Since we managed to get the Rectangular Target Processing VI functioning, we’re now trying to use the Distance information to move the robot.
The problem is, with the teleop code used in the attachment, the robot doesn’t move at all. I can’t imagine that there’s anything wrong with the drivetrain, since the robot had been driving a lot before we modified the code.
At first glance, I would say that .1 is a fairly low drive value and may not be enough to move a robot. Have you tried it on blocks to see if the wheels turn?
I also think you will find you are introducing a turn with those values. Arcade drive uses X and Y input. X will cause the robot to turn.
Lastly, the VI is using the following logic:
If distance is greater than 6, drive at .1 and .1.
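Written out as text, it amounts to something like the sketch below (Python-style pseudocode only, since the real code is a LabVIEW diagram and the names here are made up for illustration):

    # Rough pseudocode of the VI logic as I read it.
    def follow_target(distance, arcade_drive):
        if distance > 6:
            # Both inputs at 0.1: in arcade drive the X value introduces a turn,
            # and 0.1 may be too small to overcome drivetrain friction at all.
            arcade_drive(x=0.1, y=0.1)
        else:
            arcade_drive(x=0.0, y=0.0)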
Is that what you are thinking as well?
We have found that the Rectangular Target Processing VI takes up too much of the cRIO’s memory and processing to run our drive code at the same time. We are looking into running the targeting code off of the cRIO, but we’re still in the early stages.
Can you be more specific? Is it RAM or CPU? Processing on the laptop is certainly an option, and discussed to a degree in the Example and white paper. If you drop the framerate, and perhaps even lower the priority of the processing, you should be able to control how much CPU is used.
I can confirm that the code logic is correct already, since when I wire the Joystick inputs instead of constants, the robot can move when the condition is met.
Could there be a problem with the input values I’m using? I noticed through probing wires that the values the joystick gives aren’t scaled from -1 to 1…
EDIT: Actually, no, I was reading the numbers wrong. They should be the same as the ones the joystick uses…
After fiddling around with the targeting code we got running, I think it’d be best if we did this. What sort of modifications would we have to make to the Rectangular Target Processing code to do this, do you think?
The Send Axis Camera Signal Directly to Dashboard says the Read MJPG VI is what’s used, but I’m not sure whether that translates to this situation.
Another potential hiccup: whenever something obstructs the camera view, or even when the camera is simply stationary, the index of a particular target in the Target Info array changes. That could be a problem if we’re trying to narrow in on a particular target. My mentor says we should be able to make the code remember certain position coordinates and assign them to a specific index, so that if a target at index 1 has a position of (-0.5, 0.5), for argument’s sake, and we obstruct the camera view and then unobstruct it, that target is still assigned to index 1. Any idea how to implement this?
The dashboard code already has independent loops that do UDP. For example, the loop that reads the Kinect Server data is towards the top of the Dashboard diagram. The important part is shown below.
It reads from port 1166 about once a second, or whenever data arrives. It reads at most 1018 bytes as a string, and then interprets it as the agreed datatype. In the situation we are considering, a similar loop would be placed on your robot and run in parallel with everything else – I’d suggest doing it in Periodic Tasks.
The second image shows the code that needs to run on the dashboard to send the data. You need to change the team number, and you need to make the data constant your own data, either formatted or flattened to a string. The final piece is to pick the UDP port to use and use the same port for both the read and the write.
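In text form, the pattern on both ends is roughly the following Python sketch. The port number, robot address, and message format are placeholders rather than the values from the Dashboard project; the real code would of course use the LabVIEW UDP Open/Read/Write VIs.

    import socket

    UDP_PORT = 1160  # placeholder; pick a free port and use the same one on both ends

    # Robot side (e.g. in Periodic Tasks): read whatever has arrived, about once a second.
    def robot_udp_loop(update_setpoints):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("", UDP_PORT))
        sock.settimeout(1.0)                       # wake up roughly once a second
        while True:
            try:
                data, _addr = sock.recvfrom(1018)  # read at most 1018 bytes as a string
            except socket.timeout:
                continue
            update_setpoints(data.decode())        # interpret as the agreed-upon datatype

    # Dashboard side: send your data, formatted or flattened to a string.
    def dashboard_send(message, robot_ip):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.sendto(message.encode(), (robot_ip, UDP_PORT))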
As for the index problem, I’m pretty sure that is currently based on particle size. I’d probably try to sort them by location and label them as top, left, right, and bottom. You could then store them in a cluster or an array with a unique cell for each of top, left, right, and bottom. You should be able to identify them with any simple sorting technique.
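As a sketch of that sorting idea (pseudocode again; the real code would be LabVIEW array operations, and the field names and the sign of the y axis are assumptions):

    # Label targets by the position of their centers so their identity doesn't
    # depend on array order. Assumes each target has normalized "x" and "y"
    # center values; flip the min/max comparisons if your y axis points down.
    def label_targets(targets):
        labeled = {}
        if targets:
            labeled["top"]    = max(targets, key=lambda t: t["y"])
            labeled["bottom"] = min(targets, key=lambda t: t["y"])
            labeled["left"]   = min(targets, key=lambda t: t["x"])
            labeled["right"]  = max(targets, key=lambda t: t["x"])
        return labeled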
So then, would we have to run the Vision Processing code on the dashboard, write to the robot using UDP, then have the robot read the UDP packets using code in the Periodic Tasks VI?
Never mind that, I wasn’t reading it correctly.
I guess the challenge is finding out how to convert the image to a string (the Get Image Data String VI requires a CameraDevRef input).
Provided the camera is connected to the D-Link, you don’t need the cRIO to do anything with the image.
The dashboard reads the image directly from the camera.
The dashboard processes it.
The dashboard sends any target info to the robot via UDP string.
The robot reads the UDP string and updates setpoints, which ultimately move the robot (a rough sketch of this last step follows below).
You don’t have to do it this way, but if you want to use the laptop to do the processing, to allow the cRIO CPU to do other things, this is the way I’d approach it.
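For the last two steps, the flatten-and-parse part could look something like this sketch. The field names, the string format, and the setpoint mapping are all assumptions made just to show the idea:

    # Dashboard side: flatten the target info you care about into a short string.
    def encode_target_info(distance, x_center):
        return f"{distance:.2f},{x_center:.3f}"

    # Robot side: parse the string back and turn it into drive setpoints.
    def decode_to_setpoints(message):
        distance, x_center = (float(v) for v in message.split(","))
        forward = 0.1 if distance > 6 else 0.0   # same threshold as the earlier logic
        turn = x_center                          # steer toward the target center
        return forward, turn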
Yes, the dashboard is quite capable of doing vision processing. The Classmate and typical laptops are quite a bit more powerful than the cRIO: less capable at I/O, but with a more powerful CPU.
I’ll look at the other thread as well. Thanks for bringing it to my attention.