#1 | 08-09-2018, 04:13 PM
Team6928
FRC #6928 | Joined Jan 2018 | Greeneville, Tennessee | Posts: 60
Vision on a Pi

We just finished our rookie season and we can see how important visioning is. Our cameras never worked at competition (they did work at school), so we were wondering how to run visioning on a Pi. Also, how do we get started in doing visioning for targets? What camera, etc., would we need?
#2 | 08-09-2018, 04:39 PM
Chadfrom308 (Chad Krause)
FRC #7226 (Error 404) | College Student | Joined Jan 2013 | Rookie Year: 2011 | East Lansing | Posts: 323
Re: Vision on a Pi

Pis are typically frowned upon because of the lack of horsepower.

Many teams use the Nvidia Jetson TK1 or similar mini computers. They have CUDA which helps a lot with processing speed.

Maybe the newer Pis are faster, I haven't used one in a while.

Anyway, many teams use OpenCV for their tracking system. GRIP (https://github.com/WPIRoboticsProjects/GRIP) is a good tool to help you get started with vision tracking.

A popular camera is the Microsoft LifeCam HD-3000.
__________________
//TODO: make signature
#3 | 08-09-2018, 05:25 PM
solomondg (Solomon)
FRC #2898 (Flying Hedgehogs) | Leadership | Joined Aug 2016 | Rookie Year: 2016 | Portland, Oregon | Posts: 108
Re: Vision on a Pi

I don't agree with the lack of horsepower thing. Pis are more than fast enough to run your standard FRC OpenCV vision stack, especially if you downscale the image to something like 320x240, which is frankly all you need.

The Jetsons are nice, but they can be difficult to set up and power on the robot. Also, CUDA is generally unnecessary for the vision tasks found in FRC, and it is pretty difficult to utilize in code, especially if you're less familiar with C++.

+1 on GRIP, it's a very nice tool.

The LifeCam 3000 is always a good bet; some teams like the Logitech C920 or C310. I'm a fan of the ELP USB cameras, myself.

The JeVois camera is worth looking at for sure. It provides an integrated package of a low-power Linux computer and a camera. You have to do serial communication, which isn't as easy as NetworkTables, but it's a great, compact piece of hardware.
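For reference, the JeVois serial messages are plain text, so the parsing side is straightforward. A minimal sketch, assuming the two-value "T2 x y" message style (check your module's serstyle setting; the port name and baud rate below are typical guesses, not guaranteed):

```python
def parse_t2(line):
    """Parse a JeVois 'T2 x y' serial message into (x, y), or None if malformed."""
    parts = line.strip().split()
    if len(parts) != 3 or parts[0] != "T2":
        return None
    try:
        return float(parts[1]), float(parts[2])
    except ValueError:
        return None

def read_loop(port="/dev/ttyACM0", baud=115200):
    """Hypothetical read loop; requires pyserial and an attached JeVois."""
    import serial  # imported here so the parser above works without pyserial
    with serial.Serial(port, baud, timeout=1) as ser:
        while True:
            target = parse_t2(ser.readline().decode("ascii", errors="ignore"))
            if target is not None:
                print("target at", target)

print(parse_t2("T2 120 -45"))  # (120.0, -45.0)
```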

At least in my (probably controversial) opinion, and as someone who's done a _lot_ of vision work: vision isn't that important in FRC. It was near-necessary for the 2017 boiler, and nice to have in 2016, but other than that it's far from essential. While it's a great goal to work towards, I wouldn't sweat it if you need to prioritize something like motion profiling or getting control loops running instead of a vision stack.

However, definitely see if you can figure out your competition camera issue; driver cameras are very useful. What sort of issues were they? Did you use a USB webcam, an Axis IP cam, or something else?
#4 | 08-09-2018, 06:29 PM
deslusionary (Christopher Tinker)
FRC #7093 (Veritas Valiants) | Programmer | Joined Mar 2018 | Rookie Year: 2018 | Austin, TX | Posts: 95
Re: Vision on a Pi

I agree that vision isn't priority number 1. Focus first on developing your programming skills in the following:

- Control loops: PID control, feedforward control
- Utilizing WPILib's command-based robot framework. This is really important: it's hard to write high-level code without some sort of framework like command-based.
- Code architecture: separating your code into subsystems, commands, utility code, and so forth
- Motion profiling and path following

If you don't know what I'm talking about, ask questions here! People will be glad to help out and provide resources. IMO these are the topics you should concentrate on before vision.
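To make the control-loops bullet concrete, here is a bare-bones PID-plus-feedforward step function; the gains are placeholders you would tune on your own mechanism:

```python
class PIDF:
    """Minimal PID + feedforward controller; call update() once per loop tick."""
    def __init__(self, kp, ki, kd, kf, dt=0.02):
        self.kp, self.ki, self.kd, self.kf, self.dt = kp, ki, kd, kf, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        # Feedforward anticipates the output needed to hold the setpoint;
        # the PID terms correct the remaining error.
        return (self.kf * setpoint + self.kp * error
                + self.ki * self.integral + self.kd * derivative)

ctrl = PIDF(kp=0.1, ki=0.0, kd=0.0, kf=0.05, dt=0.02)
print(ctrl.update(setpoint=100.0, measurement=90.0))  # kf*100 + kp*10 = 5 + 1, about 6.0
```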

Edit: it's usually referred to as 'vision', not 'visioning'

Last edited by deslusionary : 08-09-2018 at 06:30 PM. Reason: note about visioning
#5 | 08-09-2018, 06:53 PM
billbo911 ("Mr. Bill")
FRC #2073 (EagleForce) | Mentor | Joined Mar 2005 | Rookie Year: 2005 | Elk Grove, CA | Posts: 2,638
Re: Vision on a Pi

Quote:
Originally Posted by Team6928
We just finished our rookie season and we can see how important visioning is. Our cameras never worked at competition (they did work at school), so we were wondering how to run visioning on a pi machine. Also, how do we get started in doing visioning for targets? What camera/etc. would we need?
Many teams have been very successful using an RPi with either a web camera or a Pi camera. Examples of code to do this have been posted several times, so do some searching and you are sure to find what you need to get started.

A couple things to consider:
Vision has two basic outputs, the image stream and targeting information.
Setting up the camera optimally to do both can be a bit of a challenge.

Quote:
Originally Posted by solomondg
I don't agree with the lack of horsepower thing. Pis are more than fast enough to run your standard FRC OpenCV vision stack, especially if you downscale the image to something like 320x240, which is frankly all you need.
Absolutely correct!


Quote:
Originally Posted by solomondg
+1 on GRIP, it's a very nice tool.

...Lifecam 3k is always a good bet, some teams like things like the Logitech C920 or C310...

The JeVois camera is worth looking at for sure. It provides an integrated package of a low power linux computer and a camera. You have to do serial communication, which isn't as easy as networktables, but it's a great and compact piece of hardware.
I can vouch for the JeVois being a fantastic alternative! In fact, it is even a bit more economical to set up than a Pi.
Honestly, it is a bit more challenging to program if you are not already familiar with vision coding. That said, JeVois is introducing a GUI (JeVois Inventor) that addresses a lot of the difficulties and makes coding it much easier. In fact, it already has several code examples built in that can be modified for use in FRC.

Quote:
Originally Posted by solomondg
At least in my - probably controversial - opinion (and as someone who's done a _lot_ of vision stuff), vision isn't too important in FRC, it seems....
This point needs a HUGE caveat! The necessity of vision is completely dependent on the game and field that FIRST releases. With careful inspection of the rules, and a bit of creativity, it turns out even 2018 had a really excellent use for vision. (Think ArUco)
__________________
CalGames 2009 Autonomous Champion Award
2012 Sacramento Semi-Finals, 2012 Sacramento Innovation in Control Award, 2012 SVR Judges Award.
2012 CalGames Autonomous Challenge Award winner ($$$).
2014 2X Rockwell Automation: Innovation in Control Award (CVR and SAC).
Curie Division Gracious Professionalism Award.
2014 Capital City Classic Winner AND Runner Up. Madtown Throwdown: Runner up.
2015 Innovation in Control Award, Sacramento.
2016 Chezy Champs Finalist, 2016 MTTD Finalist
2017 Utah Regional Winner!, Sacramento Finalist
Innovation in Control Newton/Carver Divisions , Newton #5 Captain
2018 WFFA Sacramento, Creativity Award Galileo-Robling Divisions
#6 | 08-09-2018, 06:56 PM
keco185
FRC #0484 (Roboforce) | Programmer | Joined Jan 2015 | Rookie Year: 2012 | United States | Posts: 27
Re: Vision on a Pi

We did vision this past year (for finding cubes) on the roboRIO. The best way to use vision is typically to take a single frame from the camera, analyze it to find your target, and then use a gyro and encoders to move to the target. If you do this, you won't need much processing power, since you're processing one frame rather than a continuous video stream. You also won't have to worry about latency, because you're closing the loop on the gyro and/or encoders instead of a delayed video stream.
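A sketch of the math behind this single-frame approach: convert the target's pixel offset into a camera-relative angle, add the gyro heading from when the frame was captured, and drive to that absolute heading with your normal gyro loop (the 60-degree FOV below is an example value; use your camera's actual horizontal FOV):

```python
import math

def target_heading(px, image_width, fov_deg, gyro_deg):
    """Absolute heading to a target seen at pixel column px."""
    # Focal length in pixels, derived from the horizontal field of view
    focal = (image_width / 2) / math.tan(math.radians(fov_deg) / 2)
    camera_angle = math.degrees(math.atan((px - image_width / 2) / focal))
    return gyro_deg + camera_angle

# A target dead-center needs no turn; one at the right edge is FOV/2 away.
print(target_heading(160, 320, 60.0, gyro_deg=45.0))  # 45.0
print(target_heading(320, 320, 60.0, gyro_deg=45.0))  # about 75.0
```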
#7 | 08-09-2018, 07:25 PM
wgorgen
FRC #1533 (Triple Strange) | Mentor | Joined Apr 2018 | Rookie Year: 2014 | Greensboro, NC | Posts: 115
Re: Vision on a Pi

Quote:
Originally Posted by solomondg
At least in my - probably controversial - opinion (and as someone who's done a _lot_ of vision stuff), vision isn't too important in FRC, it seems. It was near necessary for the 2017 boiler, and nice to have in 2016, but other than that it's far from essential. While it's a great goal to work towards, I wouldn't sweat it if you need to prioritize something like motion profiling or getting control loops running instead of a vision stack.
I agree.

I was actually surprised that this year's game had so few tasks that could be done better with vision. After 2016 and 2017, and with the way the FTC game has placed increasing emphasis on vision and Vuforia, I was expecting this year to take things up another notch in terms of vision related tasks being a part of higher level game play. I saw a few teams use vision well this year, but few gained any real advantage from it.

Having said that, I still feel that vision will be a good tool to have in your toolkit of skills. It may not be the most basic tool and if you have not mastered the more basic stuff, you should not distract your efforts with vision, but it is probably something you want to have as part of your long term to-do list. I expect we will see games that have vision elements in the future.

But more than that, I think it is a great programming challenge for students. It involves both a highly technical sub-task (processing the image and extracting key information) and integration with the rest of the robot program and the overall strategy (what information do you need to extract from the image, and what is the robot going to do with it?). If the programmers in your group want to give it a try, especially during the off-season, I say go for it.
#8 | 08-09-2018, 09:11 PM
Prateek M
FRC #5190 (Green Hope Falcons) | Programmer | Joined May 2018 | Rookie Year: 2018 | Cary, North Carolina | Posts: 27
Re: Vision on a Pi

Quote:
Originally Posted by billbo911

I can vouch for the JeVois being a fantastic alternative! In fact, it is even a bit more economical to set up than a Pi.
I completely agree. We got a prototype vision program that identifies cubes working on the JeVois this year, but we never used it at competition because we found that using splines to get to the two fence cubes was very accurate.

One thing to keep in mind is how you're going to use the vision information to perform a task on the robot. Integrating this data is often considered the hardest part of a vision project. We are considering some experiments over the next few months to integrate vision data to correct error while path following, so we'll see how that goes.

Last edited by Prateek M : 08-09-2018 at 09:13 PM. Reason: Grammar
#9 | 08-10-2018, 02:02 AM
solomondg (Solomon)
FRC #2898 (Flying Hedgehogs) | Leadership | Joined Aug 2016 | Rookie Year: 2016 | Portland, Oregon | Posts: 108
Re: Vision on a Pi

Quote:
Originally Posted by billbo911
This point needs a HUGE caveat! The necessity of vision is completely dependent on the game and field that FIRST releases. With careful inspection of the rules, and a bit of creativity, it turns out even 2018 had a really excellent use for vision. (Think ArUco)
For sure! Completely game dependent.

I'm curious about the ArUco thing, though. I was under the impression ArUco was solely for AprilTag-style fiducial markers?

If you're just talking about SIFT/SURF/ORB/FAST/BRIEF/the flavor-of-the-day image homography algorithm, those can for sure be useful, especially with a Perspective-n-Point algorithm. The issue is that they're pretty darn finicky: you need a sharp, reasonably high-res image from not too extreme an angle, or Bad Things Will Happen. Compute horsepower is another issue: Jetson CUDA acceleration is a must if you want a usable framerate, and even then calling it fast would be inaccurate, especially with the larger image sizes you need for the algorithm to work over longer ranges. While it certainly can have its uses, it's not very fun to integrate onto a robot. As someone who integrated that sort of thing into our 2017 vision stack, you'd have to do some serious convincing (or bribery) to get me to do it again.

Though, if you guys managed to get it effectively integrated into your vision stack this year, I'd love to hear about it; saying that 2017 me was clueless is an understatement, haha.
#10 | 08-10-2018, 09:15 AM
jtrv (Justin)
FRC #0340 (GRR), FRC #5254 (HYPE) | Mentor | Joined Jan 2013 | Rookie Year: 2012 | Rochester, NY | Posts: 474
Re: Vision on a Pi

Quote:
Originally Posted by Chadfrom308
Pis are typically frowned upon because of the lack of horsepower.

Many teams use the Nvidia Jetson TK1 or similar mini computers. They have CUDA which helps a lot with processing speed.

Maybe the newer Pis are faster, I haven't used one in a while.
I am not entirely sure what kind of vision processing you're trying to do on a Pi, but I don't know if I would recommend a Jetson (up to $500!) for a team that has never done vision processing before. It's simply overkill.

OP: I recommend you get a Pi and look into setting up GRIP. Once you have the image processing part down, you can start sending data from the Pi to the Rio over NetworkTables using PyNetworkTables. For example, this year on 340 we processed cube images on the Pi, calculated where in the image the cube was (on the X axis, 0-360, where 180 is the center), and sent that over NetworkTables to the Rio. Then we had a simple "rotate until the cube is between 170ish and 190ish." While we never wound up integrating it into our autos, we had it working in a day or so during build season. You can see our code on GitHub.
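A minimal sketch of that setup (the NetworkTables calls assume the pynetworktables package; the mDNS address and table/key names are illustrative, not 340's actual code):

```python
def steer(cube_x, center=180.0, tol=10.0):
    """Bang-bang turn command from the cube's x position:
    -1 = rotate left, +1 = rotate right, 0 = close enough."""
    if cube_x < center - tol:
        return -1
    if cube_x > center + tol:
        return 1
    return 0

def publish_loop():
    """Hypothetical Pi-side publisher; requires pynetworktables and a roboRIO
    at the mDNS address below (substitute your own team number)."""
    from networktables import NetworkTables
    NetworkTables.initialize(server="roborio-340-frc.local")
    table = NetworkTables.getTable("vision")
    while True:
        x = 123.0  # placeholder: replace with the pipeline's cube x position
        table.putNumber("cubeX", x)

print(steer(150))  # -1: cube is left of center, rotate left
print(steer(185))  # 0: within the 170-190 window, stop turning
```

The Rio side reads `vision/cubeX` each loop and feeds `steer()`'s output to the drivetrain.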

This was absolutely nowhere near full utilization of the computing power on a Pi. It's not the most advanced vision code, but it was relatively fast and it worked. For a camera, we just used a Raspberry Pi Camera Module.

There may be an easier workflow to this by now, but this is particularly easy and it lets your programmers start to learn Python as well as Java, without being too overwhelming.

Last edited by jtrv : 08-10-2018 at 01:25 PM.
#11 | 08-10-2018, 10:13 AM
Waz (Steve)
FRC #2357 (System Meltdown) | Mentor | Joined Feb 2013 | Rookie Year: 2009 | Raymore, MO | Posts: 100
Re: Vision on a Pi

Quote:
Originally Posted by keco185
We did vision this past year (for finding cubes) on the roboRIO. The best way to utilize vision is typically to take a single frame from the camera, analyze it to find your target, and then use a gyro and encoders to move to the target. If you do this, you won't need much processing power since you don't need to process a continuous video stream, just 1 frame. Additionally, you won't have to worry about latency since you will be using the gyro and/or encoders instead of a delayed video stream.
This is essentially what we do too, except that we have a background thread analyzing frames at a rate too low to cause performance issues (and consequently too low to serve as the PID input). However, it produces new targeting information quickly enough for us to chain multiple gyro- and PID-controlled turns and course corrections into a single movement. We did not use it in 2018, but we did use it for hanging gears in 2017, during both autonomous and teleop.

Steve
#12 | 08-10-2018, 04:49 PM
Peter Salisbury
FRC #5811 (The BONDS) | Tactician | Joined Jan 2017 | Rookie Year: 2010 | Ohio | Posts: 59
Re: Vision on a Pi

Quote:
Originally Posted by billbo911
This point needs a HUGE caveat! The necessity of vision is completely dependent on the game and field that FIRST releases. With careful inspection of the rules, and a bit of creativity, it turns out even 2018 had a really excellent use for vision. (Think ArUco)
I agree with this. The importance of vision varies a lot from year to year.

In 2016, scoring a high goal after crossing one of the terrain obstacles in autonomous was extremely difficult without vision. Even teams with excellent autonomous driving software that could get near, or even locked against, the face of the tower (like 330) still used vision once they got there.

In 2018 and 2015, vision was not very important, because scoring didn't need to be very precise.

It is important to determine whether vision is needed or whether there is an easier way to aim at the target. Games like 2012 and 2013 are examples where only certain strategies needed vision.

In 2013, for example, full-court shooters filled a niche role that benefited from vision alignment because of the long range (though many full-court shooters were still driver-aimed). The vision system also didn't need to be fast: once aligned, a full-court shooter could keep shooting without moving again. Having an amazing vision application like 987's from 2013 is one of the coolest things in FRC, but in their case it wasn't a huge advantage over a solid cycling robot. I will always think of 610 as one of the ultimate KISS robots to win the championship. Like many others, they would drive straight to the back of the pyramid, spend zero time on line-up or vision lock, unload four discs, and zoom back across the field.

Vision is a great tool, but it is important to evaluate critically whether it is valuable to use.

As for the Pi: my team developed vision on a Pi running OpenCV, streaming the detected points to the Rio over USB, and we would recommend the setup, though we didn't use it this year. From our experience, one important factor is having a heatsink and/or fan on the Pi. Another tip is to run each stage of the operation on a separate core of the Pi. This threading dramatically improves framerate by letting the camera capture the next image without waiting for the previous one to be processed.
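The threading tip boils down to a capture thread feeding a processing thread through a small queue, so grabbing frame N+1 overlaps with processing frame N. A sketch with stand-in functions for the camera grab and the OpenCV work:

```python
import queue
import threading

def run_pipeline(grab, process, n_frames):
    """Capture on one thread, process on another, joined by a bounded queue."""
    frames = queue.Queue(maxsize=2)  # small buffer keeps latency low
    results = []

    def capture():
        for _ in range(n_frames):
            frames.put(grab())
        frames.put(None)  # sentinel: no more frames

    def worker():
        while True:
            frame = frames.get()
            if frame is None:
                break
            results.append(process(frame))

    threads = [threading.Thread(target=capture), threading.Thread(target=worker)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

# Stand-ins: "grab" returns a frame number, "process" doubles it
counter = iter(range(5))
print(run_pipeline(lambda: next(counter), lambda f: f * 2, 5))  # [0, 2, 4, 6, 8]
```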
#13 | 08-10-2018, 07:02 PM
cpapplefamily
FRC #3244 (Granite City Gearheads) | Mentor | Joined May 2015 | Rookie Year: 2015 | Minnesota | Posts: 706
Re: Vision on a Pi

I know the original post asks about the Pi. I will again +1 GRIP. In fact, start now, using your PC and whatever USB camera you have.

I have used GRIP on a Kangaroo mini PC with an IP camera and a separate LED ring, which is similar to a Raspberry Pi and camera system. The challenge was getting a reliable system. We had a routine that included relaunching the SmartDashboard after the robot connected, hoping the video feed would come up.

No mention yet of the Limelight. We used it this year with great success: camera, co-processor (Raspberry Pi Compute Module), LED lights, and web-based configuration, all in one, with GRIP support coming soon. It's a super powerful and fast vision system. Many of the teams here in the Central MN Robotics hub are moving to it. This winter, before kickoff, I will be hosting some Jumpstart seminars on the Limelight.
__________________
It makes sense in my mind.
#14 | 08-11-2018, 01:40 PM
asid61 (Anand Rajamani)
FRC #1072 (Harker Robotics) | Mentor | Joined Jan 2014 | Rookie Year: 2013 | Cupertino, CA | Posts: 2,972
Re: Vision on a Pi

Quote:
Originally Posted by jtrv
I am not entirely sure what kind of vision processing you're trying to do on a Pi, but I don't know if I would recommend a Jetson - up to $500!! - for a team who has never done vision processing before. It's simply overkill.

OP -- I recommend you get a Pi, look into setting up GRIP. Once you have the image processing part down, you can start to send data over NetworkTables to the Rio from the Pi using PyNetworkTables. For example, this year on 340, we processed cube images using the Pi, calculated where in the image the cube was (on the X axis, 0-360, where 180 is the center), and sent that over the NetworkTables to the Rio. Then we had a simple "rotate until the cube is between 170ish and 190ish". While we never wound up integrating it into our autos, we had it working in a day or so during build season. You can see our code on GitHub.

This was absolutely nowhere near full utilization of the computing power on a Pi. It's not the most advanced vision code, but it was relatively fast and it worked. For a camera, we just used a Raspberry Pi Camera Module.

There may be an easier workflow to this by now, but this is particularly easy and it lets your programmers start to learn Python as well as Java, without being too overwhelming.
What kind of framerates were you getting with the Pi?
I liked using the JeVois over the Pi. We tried Pi-based vision for 2017 and it was very slow, on the order of 10-20fps maximum. By contrast, the $50 JeVois I bought last December was able to effectively process a 320x240 image at 61fps right out of the box. I have a (somewhat verbose) guide to porting GRIP code onto the JeVois in my list of white papers.
__________________
Team 1072 2017-present
Team 299 2017
Team 115 2013-2016 (student)

2018 Davis Finalists (w/ 6474 and 3880), 2018 Roebling Winners (w/ 3476, 1323, and 1778)

#15 | 08-13-2018, 03:28 PM
Chadfrom308 (Chad Krause)
FRC #7226 (Error 404) | College Student | Joined Jan 2013 | Rookie Year: 2011 | East Lansing | Posts: 323
Re: Vision on a Pi

Quote:
Originally Posted by asid61
What kind of framerates were you getting with the Pi?
I liked using the JeVois over the Pi. We tried Pi-based vision for 2017 and it was very slow, on the order of 10-20fps maximum. By contrast, the $50 JeVois I bought last December was able to effectively process a 320x240 image at 61fps right out of the box. I have a (somewhat verbose) guide to porting GRIP code onto the JeVois in my list of white papers.
I guess my original post about the Pi being slow was a little unfair. I used a first-generation Pi, so of course my code was slower.

I attempted 640x480 and got terrible framerates. I can't remember the exact numbers, but the blob detection and contour outlining were just killing the Pi.