Re: Using the AXIS camera (M1011) with OpenCV
So now that I have a working stream address and am able to do basic processing, how do I separate the goals from everything else? The camera is saturated by the reflected green, so I just need an algorithm to separate that from the rest of the image. Where should I start? If anyone has some sample code, that would be appreciated, even if you want to PM me so no one else sees it!
So currently, I am able to convert the colors with CV_BGR2HSV and CV_BGR2GRAY. I am also able to imshow the images throughout the entire process!
----Different topic----
So many of us have claimed that WiFi interference can cause a ton of lag in robot communication. To see what would happen, I will try this wacky test:
step 1: Ask everyone to turn off their phones and electronics, and isolate the equipment from as much interference as possible. I'll get a rough estimate of the lag with no bandwidth restrictions enabled (similar to what you'd get if you were doing onboard processing without any network usage except feedback).
step 2: Ask everyone to turn on their phones, Bluetooth, WiFi, and any other radios possible. I will also run aircrack-ng on another computer to send a ton of packets and cause as much disturbance as possible, on the same channel as the robot communications. For consistency, no bandwidth restrictions will be enabled here either. I'd then check the lag times and see whether vision will be a possibility. I'll most likely publish this data to show what my experimentation came up with.
step 3: Analyze and publish the results, then choose the pathway to follow from there: either onboard or offboard processing!
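For steps 1 and 2, the lag numbers could be collected the same way in both conditions with a simple round-trip timer. Here's a sketch using UDP; the robot address and port are placeholders, and it assumes something on the robot echoes packets back:

```python
import socket
import statistics
import time

# Hypothetical echo endpoint on the robot; replace with the real address/port.
ROBOT_ADDR = ("10.0.0.2", 5800)

def measure_lag(addr=ROBOT_ADDR, samples=50, timeout=1.0):
    """Return (mean_ms, max_ms, lost) for UDP round trips to addr."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    times, lost = [], 0
    for i in range(samples):
        start = time.monotonic()
        sock.sendto(str(i).encode(), addr)
        try:
            sock.recvfrom(64)
            times.append((time.monotonic() - start) * 1000.0)
        except socket.timeout:
            lost += 1  # count dropped packets separately from latency
    sock.close()
    if not times:
        return None, None, lost
    return statistics.mean(times), max(times), lost
```

Running this before and after you flood the channel would give you comparable mean/worst-case numbers, and the lost-packet count matters as much as the latency for deciding whether offboard vision is viable.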
---Tell me if I should add another step to this. I will try to run BackTrack Linux off a pendrive ISO image.
Also, this is quite overkill. You'll probably never see this much interference at a competition, so this will be more of a worst-case scenario!
What do you guys think?