I’ve seen a bunch of teams use Raspberry Pis. Has anyone heard of or used an APC as an off-board processor?
It seems better than a Pi, though you may like the CubieBoard or the ODROID a lot more. The APC’s processor is fairly weak, but it will work. Just make sure to get a good 5 V PSU. I suggest running off the robot’s 12 V rail through a regulator, since that will protect you from the voltage spikes that can easily damage your board or corrupt your code.
I have a similar thread, SoC power, but I’m now convinced it may not be possible. Hopefully FRC changes some rules, because more and more teams are using these development boards to offload demanding tasks from the cRIO.
There’s really no reason to put something like a Raspberry Pi on the robot. More complicated vision tasks can be offloaded to the driver station laptop. Look at Team 341 in 2012: they were one of the best teams that year because of their great vision routines, and they did the processing on the driver station laptop.
I challenge you to come up with a task that a Raspberry Pi/ODROID/BeagleBone can do that can’t be done with a cRIO or a driver station laptop.
Lag! At competitions, if you rely heavily on vision processing, you may see a 500 ms response time on your robot even with the most expensive i7 Extreme available, because the network connection is unreliable. Processing on the robot and communicating with the cRIO over I2C, SPI, or some other hardwired link avoids that; nothing is more dependable. I also think the 7 Mbit/s bandwidth cap is arbitrary and unnecessary.
I think it would be wisest to process the images onboard, send the results to the cRIO, and have a background process update the driver station. Note that the driver station view won’t be very accurate at high speeds because of the lag.
I agree that the bandwidth cap is rather silly, since the whole point of using 802.11 is to get higher connection speeds than that, but there really isn’t lag if you’re doing it right. Our camera stream managed 20 fps at only 3 Mbit/s and worked really well for processing. If you’re seeing a 500 ms response time between your robot and your driver station, then something else is really wrong and you need to look at your software again. A robot isn’t really drivable with that sort of lag.
In fact, I’d be willing to bet that our vision setup (processing on the driver station laptop) has way less lag than your onboard processor. The driver station laptop is an order of magnitude faster than an onboard processor, and the time from an image being sent from the robot to the laptop, the laptop processing it, and the laptop sending the coordinates of the target back to the robot was well under 100 ms every time.
If you use a PID loop for alignment, you can compensate for lag fairly easily, at the cost of some response time.
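To make that concrete, here’s a minimal PID sketch for steering toward a vision target. The gains and loop period are made-up illustrative values, not anything a real team tuned; the point is just that the loop keeps converging even when the error measurement it’s fed is a frame or two old.

```python
# Minimal PID controller sketch for aligning to a vision target.
# Gains (kp, ki, kd) and the loop period dt are illustrative, not tuned values.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        """Return a steering output given the current heading error (degrees)."""
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

Each cycle you’d feed `update()` the target offset from the latest vision frame; if that frame arrives ~100 ms late, the correction is a little stale, so you trade some response time, but the error still closes.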
When we used vision, we could align our robot with the target in under 1 second every single time. Also, watch videos from 341 in 2012. They were one of the fastest to get aligned, and they processed images on their driver station laptop. If you think that this process could be improved with lower lag, then prove it.
You can say (and be right) that a hardwired connection will be faster than the Wi-Fi connection, but unless you’ve actually run tests with well-written, complete setups in both arrangements, you can’t comment on how the extra latency affects actual image processing. What you have above is just speculation based on incorrect information.
You’ll easily get the frame rate; that depends on the connection speed. But even if onboard processing gives you a lower frame rate, 3 fps can be enough to outmatch the driver station at times. With the driver station approach, the laptop also has to process the images and then send data back to the cRIO. You can use UDP if you’re going for speed, but it has no error correction, so there’s a chance your robot could act on a bad packet and do something stupid.
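One common guard against UDP’s lack of delivery guarantees is to tag every packet with a sequence number so the receiver can drop stale or out-of-order data instead of acting on it. A rough sketch, where the packet layout (sequence number plus two target coordinates) is entirely made up for illustration:

```python
import struct

# Hypothetical UDP payload: a 4-byte sequence number plus two floats
# (e.g. target pixel coordinates). The receiver ignores any packet whose
# sequence number isn't newer than the last one it accepted.

def pack_target(seq, x, y):
    return struct.pack("!Iff", seq, x, y)

def unpack_target(data, last_seq):
    seq, x, y = struct.unpack("!Iff", data)
    if seq <= last_seq:        # stale or duplicated packet: ignore it
        return last_seq, None
    return seq, (x, y)
```

On the sending side you’d just `sock.sendto(pack_target(...), robot_addr)`; a dropped packet is simply superseded by the next one, so the robot never rewinds to old data.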
Suppose the robot uses vision tracking to make sure it doesn’t bump into anything, it’s driving at top speed, and you accidentally walk in front of it. With processing on the driver station and a 0.5 s total delay, the robot would cover a minimum of 5 meters before it even starts reacting, not counting braking distance and other factors. Having the processing on the robot speeds things up a lot. Plus, in the driver station setup, the lag increases as the robot gets farther from the AP/laptop.
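The arithmetic behind that figure is just distance = speed × delay; a quick sketch (the speeds and delays here are illustrative, not measured):

```python
def reaction_distance(speed_mps, delay_s):
    """Distance covered before the robot can even begin to react."""
    return speed_mps * delay_s

# A 5 m blind distance with a 0.5 s delay implies a speed of 10 m/s:
print(reaction_distance(10.0, 0.5))   # 5.0 m

# Cutting the delay to, say, 50 ms shrinks the blind distance accordingly:
print(reaction_distance(10.0, 0.05))  # ≈ 0.5 m
```

The blind distance scales linearly with the delay, so whichever setup has the lower total latency wins proportionally here.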
Where are you getting this 0.5 seconds from?
At competition, our total delay for both the image transfer and the processing was ALWAYS less than 100 ms, and usually less than 50 ms.
You don’t have evidence showing the total response time of either setup, and you also don’t have any evidence of how response time actually affects the robot’s ability to function. Before making your decision, I strongly suggest you test both methods!
(Also, if your robot can travel 5 meters in 0.5 seconds, you’re going 32 feet per second, more than triple what you normally see in FRC).
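If you do want to test it, a crude round-trip-time probe is easy to write: send a small timestamped packet to an echo peer and time the reply. Everything below (packet contents, the echo helper) is made up for illustration; in practice you’d point the client at the robot’s address and run the echo side on the robot.

```python
import socket
import time

def measure_rtt(sock, addr, n=10):
    """Average round-trip time over n ping/echo exchanges, in seconds."""
    total = 0.0
    for _ in range(n):
        t0 = time.monotonic()
        sock.sendto(b"ping", addr)
        sock.recv(64)                      # block until the echo returns
        total += time.monotonic() - t0
    return total / n

def echo_n(server, n=10):
    """Echo peer: bounce n packets straight back to their senders."""
    for _ in range(n):
        data, src = server.recvfrom(64)
        server.sendto(data, src)
```

Run it over both the field Wi-Fi and a hardwired link and you have actual numbers to argue with instead of guesses.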
I’m not going to argue with you, but if you are driving at high speed, the connection gets slower and slower as the robot moves farther from the access point. Also, if you want the robot to be semi-autonomous, Wi-Fi isn’t really viable: every millisecond counts when there are so many variables in calculating a trajectory and making sure the robot follows it in closed loop. You’d have to be the world’s best programmer to pull that off. Something like a rotating turret could easily use the network, though, because you have the time to make those measurements.
I will have to agree that using the network relieves many electrical hardships, because you don’t have to worry about a separate computer PSU (regulator and shutdown watchdog). Also, depending on the board you’re using, an interface like I2C can be difficult; the APC seems much like the RPi in that respect, which is fairly simple. Other than that, you’ll typically want the ability to edit and debug the code remotely, so you’ll need to put some work into integrating a web interface, RDP forwarding, or the like.