#1
Re: Worried about cRIO CPU usage when using vision
Isn't there a way to offload the vision processing onto the driver station during a match?
#2
Re: Worried about cRIO CPU usage when using vision
Quote:
As with most engineering problems, there are tradeoffs to be evaluated when choosing between the various options.
#3
Re: Worried about cRIO CPU usage when using vision
I'm not completely sure, but can't we put something else on the robot to co-process the image data?
#4
Re: Worried about cRIO CPU usage when using vision
Quote:
#5
Re: Worried about cRIO CPU usage when using vision
When discussing using laptops for vision, here are a few things to keep in mind.

The camera and cRIO are Ethernet devices, and they are already on a network with the dashboard. A laptop on the robot will be on the network too. The networks are different, but it is not as black and white as it may seem.

As discussed in other threads, you will gain some additional CPU, but you will soon use that up too. Yesterday, I ran a team's dashboard that pegged my Core 2 Duo laptop with a 640x480 image stream. I could only process 20 fps and they were sending 30. This resulted in seconds of lag.

Think about this as a budgeting exercise. Can you determine what is possible with the CPU resources in the cRIO? If you add more CPU, what will it be used for? What will it cost in dollars and pounds? There are certainly tasks that will benefit from adding a laptop, but it is not magic. It should really follow the same process as adding a motor or pneumatics.

Greg McKaskle
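The fps mismatch described above can be sketched as simple arithmetic. This is a hypothetical helper, not anything from the post itself; only the 30 fps stream rate and 20 fps processing rate are taken from the example:

```python
def lag_after(seconds, incoming_fps=30, processed_fps=20):
    """Estimate display lag when frames arrive faster than they are processed.

    If frames are queued rather than dropped, the backlog grows by the fps
    deficit every second, and the lag is that backlog divided by the rate
    at which the processor drains it.
    """
    deficit = max(0, incoming_fps - processed_fps)   # frames added to the queue per second
    backlog = deficit * seconds                      # total queued frames
    return backlog / processed_fps                   # seconds behind real time

# With a 30 fps stream and a 20 fps processor, only 4 seconds of streaming
# already puts the display 2 seconds behind the camera:
print(lag_after(4))  # 2.0
```

This is why the "seconds of lag" in the post accumulate so quickly: any sustained deficit grows without bound unless frames are dropped or the stream rate is reduced to fit the CPU budget.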
#6
Re: Worried about cRIO CPU usage when using vision
What if the laptop were only used for processing the image to find the distance to the target rectangle, and you did not stream anything back to your driver station?
#7
Re: Worried about cRIO CPU usage when using vision
Quote:
For example, I am looking at the rectangle as two parallel lines. The distance to the hoop can be determined based on the lines and your robot's heading. In my case I'm using the Kinect, but that's more to limit the amount of field calibration that needs to be done.
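One common way to get distance from a detected target rectangle, in the spirit of the approach above, is a pinhole-camera estimate from the target's apparent width. This is a hedged sketch, not the poster's actual method: the 0.61 m target width and the 700 px focal length are placeholder assumptions that would need to be calibrated for a real camera and field:

```python
def distance_to_target(pixel_width, target_width_m=0.61, focal_len_px=700.0):
    """Pinhole-camera distance estimate from a target's apparent width.

    distance = real_width * focal_length / apparent_width_in_pixels

    target_width_m and focal_len_px are ASSUMED calibration values for
    illustration only; measure your own target and calibrate your camera.
    """
    if pixel_width <= 0:
        raise ValueError("target not detected (non-positive pixel width)")
    return target_width_m * focal_len_px / pixel_width

# A target spanning 70 px under these assumed constants:
print(distance_to_target(70))  # 6.1 (meters)
```

Note the tradeoff the thread is circling: this arithmetic is cheap, but finding `pixel_width` reliably (edge detection, filtering out glare, tracking between frames) is where the CPU budget actually goes.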