Outputting Data Onto Computer Screen

OK, anyone have an idea how to do this? How do I draw an image from the robot sensors to the screen? Not through the dashboard, but through a separate program. I know that if I print something out, it only shows up in the IDE output window. So theoretically I could just put OpenGL code on the robot and the window would pop up on the screen, but even if that worked, it would take away processing power and memory. And the cRIO doesn't have a video card, so it can't do OpenGL anyway… unless I just render it in software.

What do you mean by image from the robot sensors? Do you mean you want to generate an image from sensors on the robot?

The sensors would take in an array of distance data, which could then be turned into an “image”: each distance maps to a color, which can easily be put on the screen with OpenGL.
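The distance-to-color idea can be sketched in a few lines. This is a hypothetical example, not anything from WPILib: the sensor range limits are made-up values you would tune to your actual sensor, and the near-is-bright mapping is just one choice.

```python
# Hypothetical sketch: map a distance reading to a grayscale RGB triple.
# MIN_DIST/MAX_DIST are assumed sensor limits (cm); adjust for your hardware.
MIN_DIST, MAX_DIST = 10.0, 400.0

def distance_to_color(d):
    """Clamp the reading into range, then map near -> bright, far -> dark."""
    d = max(MIN_DIST, min(MAX_DIST, d))
    level = int(255 * (1.0 - (d - MIN_DIST) / (MAX_DIST - MIN_DIST)))
    return (level, level, level)  # grayscale (R, G, B)
```

Run `distance_to_color` over the whole distance array and you have a row of pixels ready for whatever drawing API you end up using.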

This seems like a job for the dashboard. If you decide to go with another program, my suggestion would be to pipe the array to standard out and then read it in from a C++/Python/Haskell/C#/language-of-choice program that then outputs the image to the screen.
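The reading side of that pipe can be very small. A minimal sketch, assuming the robot-side program writes one frame per line as comma-separated distances (the frame format here is an assumption, not any FRC standard):

```python
import sys

def parse_frame(line):
    """One frame per line: comma-separated distance readings -> list of floats."""
    return [float(v) for v in line.strip().split(",")]

if __name__ == "__main__":
    # Usage:  robot_output_program | python viewer.py
    for line in sys.stdin:
        distances = parse_frame(line)
        # hand `distances` off to whatever actually draws them
        print(f"frame with {len(distances)} readings")
```

Because the viewer only ever reads the stream, there is no shared memory between the two programs, which answers the leak/overflow worry below: each process owns its own copy of the data.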

I was thinking of doing that, but won’t that lead to memory leaks or buffer overflows, since the robot and the computer would be modifying the same data all at once?

Why would your display program be modifying data?

So it can alter reality… LOL, yeah, you’re right, why would it modify anything?

I don’t understand the question, but I suspect that you don’t understand it either. The dashboard is a separate program, and it is exactly the right way to “draw” data on the screen.

If all you want to do is output text, the “LCD” User Messages area on the Driver Station is available.

I am saying I wouldn’t output to the dashboard, but to a custom-made program.

The real question is WHY?

I am not familiar with LabVIEW (no one is). I’d rather do it in C++ with OpenGL; I think it has more potential than LabVIEW.

It looks like I was right. You don’t fully understand what you’re asking for.

“The dashboard” is just a program that receives data on a certain port. You don’t have to use the one that is installed by default alongside the Driver Station application. It is perfectly reasonable to replace it with a custom program, either by replacing the default one with a custom one of the same name or by changing the Driver Station configuration file to start the one you want instead. See the forum discussion of ZomB, for example.
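Since “the dashboard” is just a program listening on a port, a replacement can start as a bare UDP receiver. A minimal sketch, with the caveat that the port number here is an assumption (check your Driver Station documentation for the actual dashboard port), and `max_packets` exists only so the loop can terminate:

```python
import socket

# Assumed Driver Station -> dashboard UDP port; verify against your DS docs.
DASHBOARD_PORT = 1165

def receive_packets(handler, port=DASHBOARD_PORT, max_packets=None):
    """Bind a UDP socket and pass each datagram payload to `handler`."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    seen = 0
    try:
        while max_packets is None or seen < max_packets:
            data, _addr = sock.recvfrom(4096)
            handler(data)
            seen += 1
    finally:
        sock.close()
```

Your `handler` would then unpack the payload (the packet layout is defined by whatever is sending it) and draw with OpenGL, or anything else.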

Speaking from personal experience, LabVIEW is not incredibly hard to learn. (Disregard: additionally, it would be more beneficial to you, as you could use it during the competition on your dashboard.) The alternative would be to figure out how to use OpenGL inside LabVIEW… I have a hunch there is a way.

Edit: I’m an idiot at times. :\

Well, if I am understanding you right, you want a non-LabVIEW dashboard to display something (i.e. a computer-generated image) on the Classmate?

As far as I know that wouldn’t be legal for competition use under this year’s rules, though you can certainly pipe the dashboard data out to another computer (via the remote dashboard) and read the incoming values in whatever programming language you like (Processing sounds like fun). I haven’t tried it, but it might work.

-Tanner

It’s specifically allowed by <R60>.

Then it all sounds good for something like this.

-Tanner

I CAN put an array of LEDs on the robot and light up each LED according to the sensor input… That would be pretty cool.

Yup, if you are in Atlanta, check out Team 33; they do just this. They have three colors of light bars on their bot to serve as visual feedback.

EDIT: 2337 uses the vibration of the xbox controller to provide feedback to their driver, just another way of doing it.

Any data visualization should be done off-board the target (cRIO). The robot doesn’t care what you see on the screen, so why should it do any extra work to put it there? You just need to set up your robot to pipe the required data to an external viewer.

The dashboard classes built into WPILib are probably the easiest way to pull data off of the robot. The Labview example dashboard is probably the easiest way to view this data. You don’t need to worry about any of the transport methods, and can just concentrate on doing things that matter. We used a custom Labview dashboard quite frequently to plot sensor inputs/motor outputs over time to help tune our control loops.

If you really are opposed to using free functionality, you can set up your own socket connection to the target and pump the data through manually. Or run an HTTP server on the robot and fetch selected data with HTTP requests, returning JSON/XML. Or whatever you want, really.
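The HTTP/JSON route on the viewer side is just a GET plus a decode. A minimal sketch, where the robot address and `/sensors` path are invented placeholders for whatever server you would actually run on the robot:

```python
import json
from urllib.request import urlopen

# Hypothetical endpoint: address and path are assumptions, not an FRC standard.
ROBOT_URL = "http://10.0.0.2/sensors"

def decode_sensor_payload(raw):
    """Decode a JSON response body (bytes) into Python objects."""
    return json.loads(raw.decode("utf-8"))

def fetch_sensor_data(url=ROBOT_URL):
    """GET the robot's data endpoint and return the parsed JSON."""
    with urlopen(url) as resp:
        return decode_sensor_payload(resp.read())
```

Polling this from a loop in your display program gives you fresh sensor data without the robot ever knowing or caring how it gets drawn.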

Edit: I forgot about this. Seems pretty cool. http://code.google.com/p/webdma/

As has already been said, you can run Notepad as your dashboard if you like. The key is that it listens on the DB port and pulls the data from the data streams. As was also said, you don’t need to build bitmaps or renderings on the cRIO. Sure, you can do it if you try hard enough, but the fact that the cRIO doesn’t have a video card might be a clue that it was designed for something else.

If you are interested in doing OpenGL programming, you can find many frameworks and toolkits to assist you in the language you choose, and yes, the scene graph, sometimes called the 3D Picture Control, will do 3D in LV.

Honestly, I don’t think you need OpenGL. LV has a display called the intensity graph which maps a 2D numeric field into color-coded display. It doesn’t do contours and fancier stuff, but it does fast display, typically of spectrograms, but other times data from 2D microphone arrays, radio telescopes, etc.

Greg McKaskle