1D/2D Barcodes with Camera (Micro QR, Semacode, etc.)

OOC, has anyone investigated the possibility of interpreting and/or tracking BARCODE labels (1D or 2D) with the new Camera, in NI Vision, OpenCV, or any other vision processor that’ll run on the cRIO? (…or, can we do it NOW, and I just don’t KNOW about it?)

We HAVE the hardware for it now, so I believe developing a library for this capability would open up MANY game possibilities. E.g.: Give various colored ball bins a set of shuffle-able barcode labels (or make robots go to a Reading Station first), so ONLY the ROBOTS know which goals score for THEM in any given round, and NOT the Drivers. They now have to work TOGETHER to score DURING Teleop, in a semi-autonomous fashion…:smiley:

Heck… Various 2D scanning apps have already been developed for camera cell phones, to allow them to read QR Codes. Open source sites like sourceforge.net already have QR libraries. Sites like http://mech-warfare.com/default.aspx report that THEIR systems can ALREADY track 2D barcodes for targeting.

This means the apps ARE out there!! It’s now probably just a matter of finding them, and porting them to the cRIO and Axis camera set…

Along with horizontal color bars (as in 2009 Lunacy) to ID signposts, I believe an enlarged “Micro QR Code” might make a GREAT candidate for a FIRST Signpost label, to instruct an Autonomous Robot to do something.

So, IS there 1D or 2D/matrix barcode software already available, that could be made to run on our hardware?

If not, are there any camera code jockeys out there willing to take a crack at finding and porting a 1D/2D Barcode Reader or Tracker to the cRIO?

Where are the Camera Jocks here who can take this on???

  • Keith

The NI Imaq libraries handle several types of barcodes. If you have LabVIEW installed, you can open up and run an example from Program Files/National Instruments/LabVIEW 8.5/examples/vision. I suspect there are C examples installed as well, but I’m less certain of where – probably in the Program Files/National Instruments/Vision folder.
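
Just to show the general shape of such a read call in text form, here is a minimal sketch using the open-source zbar bindings for Python. This is emphatically not the NI API, and the file name is made up:

```python
# A rough sketch with the open-source zbar library (pyzbar), not the
# NI Imaq API. "snapshot.png" is a placeholder file name.
import cv2
from pyzbar.pyzbar import decode

frame = cv2.imread("snapshot.png", cv2.IMREAD_GRAYSCALE)
for symbol in decode(frame):   # finds 1D and 2D symbologies in the frame
    print(symbol.type, symbol.data.decode("ascii"), symbol.rect)
```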

Finally, perhaps the best way to experiment is to launch Vision Assistant, bring in an image with the barcode, and use the menu system to experiment with the barcode blocks to see how well they work. If you get it to work there, you can either recode it by hand in LV or C, or you can generate code.

As for whether these make good target codes: the idea has potential, and such codes are certainly used in the real world to give computers a way of identifying things moving by. On the other hand, most of those systems have a good guess as to where in the scene the barcode will appear.

Greg McKaskle

Thanks for the hints. (I’ll have the new Vision Student check them out when the team reconvenes.)

Hmmm… Are you sure we still need to know where in the scene it resides? The http://www.roborealm.com/ site is the one cited in the Resources area of the mech-warfare.com site (link corrected from my original post, sorry): http://mech-warfare.com/parts.aspx

Looking it over, I now see they are referring to Fiducials, not Barcodes. A Fiducial is a simpler object, but it accomplishes exactly what I’m talking about, so let’s shift the conversation there.

The stills on the Roborealm page show them finding and tracking MULTIPLE “Fiducials” in the scene… http://www.roborealm.com/help/Fiducial.php

Not having access to our vision setup right now (nor a student… we lost our current Vision Expert in June), let me put the question to you: IS it possible for the NI Vision system to accomplish the same thing?

IOW, how can our NI Vision package be made to emulate the Roborealm Fiducial Tracking system?

Thanks!

  • Keith

All of the algorithms that search will benefit from a smaller search area: they will work faster and are less likely to find false positives in the scene. I believe that one of the 2D barcodes is commonly used for full-scene searches, but I don’t have experience with how robust it is.

The fiducial dialog in that package is a bit more specialized to robotics than what NI Vision contains, but I think that you can use the Characterization functions with skewed templates in the same way. If you have access to a computer with last year’s files on it, flip through the NI Vision Concepts manual. It was installed in Program Files/National Instruments/Vision/Documentation. It contains a chapter on each of the major techniques implemented in NI Imaq. Each of the search functions has its strengths and weaknesses, and it is useful to know a bit about your options.
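
For flavor, here is the pre-warped-template idea sketched with OpenCV's generic matchTemplate in Python rather than NI's Characterization functions. The angle list is arbitrary, and a real version would warp perspective as well, not just rotate:

```python
import cv2
import numpy as np

def best_match(scene_gray, template_gray, angles=(-20, -10, 0, 10, 20)):
    """Pre-warp the template through a few rotations and keep the
    rotation/location with the best normalized correlation score."""
    best = (-1.0, None, None)
    h, w = template_gray.shape
    for a in angles:
        m = cv2.getRotationMatrix2D((w / 2, h / 2), a, 1.0)
        warped = cv2.warpAffine(template_gray, m, (w, h))  # corners fill black
        res = cv2.matchTemplate(scene_gray, warped, cv2.TM_CCOEFF_NORMED)
        _, score, _, loc = cv2.minMaxLoc(res)
        if score > best[0]:
            best = (score, loc, a)
    return best  # (score, top-left location, angle)
```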

Characterization uses mask comparisons and I’m pretty sure it supports scaling and rotation variation. The aspect skew is accomplished by adding skewed templates. If this were going to be used extensively for FRC, we’d add a training dialog that’d generate the skewed images automatically. If the characterization doesn’t support rotation or scaling, then those would need to be generated too, or we’d move to other techniques such as geometric edge detection.

Greg McKaskle

Replies:

<[1]> Unfortunately, I don’t have a system available to me PERSONALLY with last year’s package installed. My main team (I coach several) can’t find their 2009 KoP NI distribution disk either (which is a separate issue)… :frowning:

However, technically, let me take a crack at this, from a theoretical level… <cracks fingers>

As long as the Fiducial block has unique recognizable perimeter features, it should be quick to isolate in a random image. The example given above was a “white square around a black square, with data in the middle”. In our case, since we have a color camera, IMO it would be much smarter to try something like “a Blue square around a Red square, with Green/Black [or White/Black] data bits in the middle”. This unique boundary should be easily found in any scene (it’s similar to the Flag ID problem this season) and characterized quickly with simple operations, to create the bounding box mask for further processing (e.g. scaling). You can then directly pluck the data bits from the projected image and do a quick table lookup (which I’ll describe below).
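
To make that concrete, here is the kind of boundary search I have in mind, sketched in OpenCV/Python since that's easy to show in text. The HSV range for "blue" is a pure guess that would need tuning for the Axis camera, and the contour API is per OpenCV 4.x:

```python
import cv2
import numpy as np

def find_candidate_quads(bgr):
    """Threshold the blue border in HSV, then keep large 4-sided
    contours as candidate fiducial boundaries."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    blue = cv2.inRange(hsv, (100, 100, 50), (130, 255, 255))  # guessed range
    contours, _ = cv2.findContours(blue, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    quads = []
    for c in contours:
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(approx) == 4 and cv2.contourArea(approx) > 400:
            quads.append(approx.reshape(4, 2))
    return quads
```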

As long as the data encoding scheme within the framework gives a unique pattern for all 4 rotations of the square, you’re golden. If I’m given just 15 unique Fiducials that can be quickly decoded and tracked by NI Vision, that should be MORE than enough for ANY game I’m contemplating.

15 possible Fiducials translates to four data bits (sixteen values, reserving the all-zero pattern). Accounting for rotations (and with no ECC), that’s only sixteen cells max in the data image. A sixteen-cell optical matrix is simply a 4x4 blob block, where 4 bits are significant, and the 3 rotated copies of each data bit are always ZERO.
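
To show that such an encoding exists, here is a minimal sketch in Python (my own construction, nothing standard): the 4x4 grid has exactly four orbits under 90-degree rotation, so put one data bit in each orbit, leave the orbit's other three cells zero, and decode by testing all four rotations. Any nonzero code then reads back correctly from any orientation:

```python
import numpy as np

# One representative cell (row, col) from each of the four rotational
# orbits of a 4x4 grid; the other three cells of each orbit stay 0.
DATA_CELLS = [(0, 0), (0, 1), (0, 2), (1, 1)]

def encode(code):                       # code in 1..15
    grid = np.zeros((4, 4), dtype=int)
    for bit, (r, c) in enumerate(DATA_CELLS):
        grid[r, c] = (code >> bit) & 1
    return grid

def decode(grid):
    """Try all four rotations; only the upright one can have all of
    its 1s sitting on DATA_CELLS, so a nonzero code is unambiguous."""
    for k in range(4):
        g = np.rot90(grid, k)
        code = sum(g[r, c] << bit for bit, (r, c) in enumerate(DATA_CELLS))
        rest = g.copy()
        for r, c in DATA_CELLS:
            rest[r, c] = 0
        if code != 0 and not rest.any():
            return code, k              # (ID, rotation)
    return None, None                   # not a valid fiducial

# Sanity check: every code survives every rotation.
assert all(decode(np.rot90(encode(c), k))[0] == c
           for c in range(1, 16) for k in range(4))
```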

The bounding bicolor parallelogram directly IDs the data bit XY positions within it, regardless of projection: think of a cross-hatch defined by drawing lines from equal divisions of each of the four sides to the matching divisions of its opposite side. (Your optical projection and distance limit is reached when the parallelogram is squished or small enough that adjacent intersections come within one pixel of each other…)
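
Sketching that cross-hatch in Python (my own illustration; this bilinear version is exact for affine skews, while under strong perspective you would want a full homography instead):

```python
import numpy as np

def cell_centers(corners, n=4):
    """corners: four (x, y) points ordered TL, TR, BR, BL.
    Returns an n x n grid of sample points by bilinearly
    interpolating across the quad."""
    tl, tr, br, bl = [np.asarray(p, float) for p in corners]
    pts = np.empty((n, n, 2))
    for i in range(n):             # rows, top to bottom
        v = (i + 0.5) / n
        left = tl + v * (bl - tl)
        right = tr + v * (br - tr)
        for j in range(n):         # cols, left to right
            u = (j + 0.5) / n
            pts[i, j] = left + u * (right - left)
    return pts

# Example: a slightly skewed quad
print(cell_centers([(0, 0), (40, 2), (42, 44), (-2, 40)]))
```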

Now, given the XY positions of each bit, you populate the sixteen-bit table, then look up the 4-bit answer. ( TAA DAA! )
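
Putting the last two sketches together (this assumes the decode() and cell_centers() functions above; "dark cell = 1" is an arbitrary convention):

```python
import numpy as np

def read_fiducial(gray, corners, thresh=128):
    """Sample the grayscale image at the 16 cell centers, threshold
    each sample to a bit, and hand the 4x4 grid to decode()."""
    grid = np.zeros((4, 4), dtype=int)
    for i, row in enumerate(cell_centers(corners)):
        for j, (x, y) in enumerate(row):
            grid[i, j] = 1 if gray[int(y), int(x)] < thresh else 0
    return decode(grid)    # (code, rotation) or (None, None)
```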

In my mind’s eye, this OUGHT to be quick to do, IF the library has sufficient image primitives. (…and I WISH I had a library to look at…) Maybe you can comment on whether or not the current library already has sufficient primitives written to try this…

<[2]> Next: I can’t speak for FIRST GDC. I was interested in this for two reasons:

A) Possibly designing some SIMPLE but FUN off-season games for everyone to try next summer, using what we already have available. If we can jointly solve the Fiducial ID and Tracking part of this problem, many cool things become possible…

B) Opening The Door for the GDC, by suggesting games that COULD use this feature (again assuming we somehow figure out how to get it to work). I’d love to make a demo of a simple game based on Fiducials, but to do that I need to figure out how to create and track a few Fiducials with our cRIO hardware and Axis 206 camera system.

Preferably, I’d like to be able to ID and track more than one Fiducial in any scene, but that’s not necessary for Proof of Concept, as many games can be defined where you only see one Fiducial at a time. (BTW… An alternative to multi-track is to have a fast mechanism to MASK a Fiducial once it’s understood, so another CAN be sought and found in the same scene; see the sketch below.)
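
That masking step is essentially a one-liner in most packages; for example, in the OpenCV terms of the earlier sketches:

```python
import cv2
import numpy as np

def mask_out(image, quad):
    """Once a fiducial is decoded, paint over its quad so a second
    search pass on the same frame can't re-find it (one way to fake
    multi-fiducial tracking with a single-target search)."""
    cv2.fillConvexPoly(image, np.asarray(quad, dtype=np.int32), 0)
    return image
```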

As to having to skew images for training, proper definition of the Fiducials (as described above) should eliminate the need for ANY kind of a training system. It’s a simple feature extraction, conversion of the chosen pixels to bits to make a word, then a table lookup.

Of course, you probably want to add a few redundant data bits in the Fiducial for error correction, to make sure you ARE looking at a Fiducial and not “a robot’s guts with all the pretty colored wires in it” that HAPPENS to look like a Fiducial (or a partially occluded Fiducial). But THAT is a second subject, to be addressed once “BASIC Fiducial tracking” is cracked… :smiley: (BTW… Robot wiring is why I said “Red and BLUE” boundary, not “Red and BLACK”… )
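
A toy illustration of that redundancy idea (parity arithmetic only; an extra bit wouldn't actually fit in the 4x4 layout sketched above, so a real version would need a larger grid):

```python
def add_parity(code):
    """Append one even-parity bit to the 4 data bits so a single
    flipped cell is caught at read time."""
    parity = bin(code).count("1") & 1
    return (code << 1) | parity

def check_parity(word):
    """Return the code if the parity bit checks out, else None."""
    code, parity = word >> 1, word & 1
    return code if (bin(code).count("1") & 1) == parity else None
```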

Does any of this make sense? I’m not sure if I explained this well or not.

There may also be many OTHER or BETTER ways to do this that use tougher techniques already WRITTEN (e.g.: pattern recognition training?), but I’m trying to take the direct approach and throw a Straw Man out there, just to get everyone’s brains started and get the ball rolling…

Given those criteria, is there anyone out there who’s a Vision Jock, HAS access to the 2009 hardware and software, and wants to take a crack at finding a Fiducial in a scene? Unfortunately here, until we can find our disk, get our software up on another system AND get back up to speed, my hands are tied. But I’d be happy to correspond with anyone who wants to take a crack at it!

  • Keith

I follow what you are asking for, and I think it can be written on top of Imaq to work relatively well. You will not find a single block or call that does this, unless it is a subset of what the 2D barcode stuff can do. Again, I’m not that familiar with those.

If you want to do it as described, I’d say that the first task is to find the quadrilateral defined by the two-color edge. You can do this with color or with edge detection. Imaq doesn’t have a red-near-blue or X-near-Y convolution, but you can build one by grabbing the pixels, or you can look for the blobs of the first color and search in the appropriate place for the second color – which is what the target examples did last year (at least the LV one did it this way).
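
Roughly, in OpenCV/Python terms rather than Imaq calls (the thresholds are placeholders, and red hue actually wraps around 0, which this ignores):

```python
import cv2
import numpy as np

def bicolor_candidates(bgr):
    """Find blobs of the first color, then check for enough of the
    second color inside each blob's bounding box."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    blue = cv2.inRange(hsv, (100, 100, 50), (130, 255, 255))
    red = cv2.inRange(hsv, (0, 100, 50), (10, 255, 255))  # low-hue red only
    contours, _ = cv2.findContours(blue, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    hits = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        inner = red[y:y + h, x:x + w]
        # A real blue border should have plenty of red just inside it.
        if w * h > 400 and cv2.countNonZero(inner) > 0.2 * w * h:
            hits.append((x, y, w, h))
    return hits
```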

Once you have a blob that scores high on the bicolor test and looks like a valid quad, you can extract a line at as many places within the quad as you need, and decide how you sample each line to call it a 0 or 1. You can also at this point use some of the binary characterization functions to see which scores highest. All of these should be fast, but I couldn’t say which would be most robust without trying them.
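
A rough sketch of that line-sampling step (plain numpy, not Imaq; thresholding at the midpoint of the observed range is just one option):

```python
import numpy as np

def sample_line(gray, p0, p1, n=16):
    """Read n evenly spaced pixels between two points and threshold
    at the midpoint of the observed range to call each sample 0 or 1."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    pts = p0 + np.linspace(0, 1, n)[:, None] * (p1 - p0)
    vals = np.array([gray[int(y), int(x)] for x, y in pts])
    thresh = (vals.min() + vals.max()) / 2.0
    return (vals < thresh).astype(int)   # dark pixels read as 1
```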

I’d love to hear how it goes, and I’ll help out a bit, but I can’t really spend much time on this, especially for the next week – NIWeek and the robotics summit, etc.

Good luck.
Greg McKaskle