There were some rumors going around about FIRST opening up the FPGA of our cRIOs this year. Does anybody know if all or part of this is true? It would be nice if they did, because it would give teams a lot more room to expand.
I don’t know if it’s true or not.
However, I’d lean towards not if the FPGA contains stuff that is important (like communications outside of the robot). More room to expand also means more room to make a huge mistake.
If it is opened up and there’s stuff in there about “DO NOT MODIFY THIS SECTION”, take that seriously.
I can’t remember where I heard this, but I was under the impression that the FPGA is full.
Having said that, I could easily be wrong or have misheard.
What does the FPGA do right now?
It handles all of the IO directly. This includes:
Reading analog channels when you poll them
Reading digital channels when you poll them
Integrating gyros/accelerometers (there are only 2 in the current FPGA code)
Counting encoder clicks/determining rate
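The encoder counting listed above is the kind of edge-level work the FPGA does in hardware. A minimal software sketch of the idea (a toy 1x quadrature decoder, not the actual FPGA implementation) looks like this:

```python
# Toy sketch of quadrature decoding, the kind of edge counting the
# FPGA performs in hardware. Illustrative only, not the real FPGA code.

def quadrature_count(samples):
    """Count encoder ticks from a sequence of (A, B) channel samples.
    A rising edge on A while B is low counts forward; while B is high,
    reverse (one simple 1x decoding convention)."""
    count = 0
    prev_a = samples[0][0]
    for a, b in samples[1:]:
        if a and not prev_a:          # rising edge on channel A
            count += -1 if b else 1   # direction comes from channel B
        prev_a = a
    return count

# Forward rotation: channel A leads channel B
forward = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0), (1, 0)]
print(quadrature_count(forward))  # 2 rising edges on A with B low -> 2
```

The point of doing this in the FPGA is that it never misses an edge, no matter what the processor is busy with.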
It also includes more dangerous stuff, like:
PWM output generation
Relay output generation (not directly, but it controls the chips on the DSC that actually drive the relay outputs)
Solenoid output
And most importantly…
The enable line. This is a single digital channel on each of the DSCs which, when disabled (due to loss of the cRIO, a watchdog trip, or the field disabling you), will prevent the DSC from passing the digital signals to the PWM and Relay ports.
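The gating described above can be sketched as a simple software analogue (all names here are hypothetical; the real behavior lives in the FPGA and DSC hardware, which is exactly why teams can't break it):

```python
import time

# Toy model of the DSC enable line: commanded outputs pass through only
# while the enable signal is asserted. Class and method names are
# illustrative, not the real FPGA/DSC interface.

class SafetyGate:
    def __init__(self, watchdog_timeout):
        self.timeout = watchdog_timeout
        self.last_feed = time.monotonic()
        self.field_enabled = True

    def feed(self):
        """Robot code must call this periodically, like the FRC watchdog."""
        self.last_feed = time.monotonic()

    def enabled(self):
        watchdog_ok = (time.monotonic() - self.last_feed) < self.timeout
        return watchdog_ok and self.field_enabled

    def pwm_output(self, commanded):
        # When disabled, the DSC passes neutral instead of the command.
        return commanded if self.enabled() else 0.0

gate = SafetyGate(watchdog_timeout=0.1)
gate.feed()
print(gate.pwm_output(0.75))   # enabled -> 0.75
gate.field_enabled = False     # field disables the robot
print(gate.pwm_output(0.75))   # disabled -> 0.0
```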
This seems like stuff we don’t want to mess with. If they do open it up, they will not open the code which handles this action. Guaranteed.
What they have done is release the source code for the Cypress board, so we can read the communication protocol between the Cypress board and the DS (assuming they don’t modify the protocol, it would be trivially easy to modify the code in the board to use whatever it contains that you want and feed it into the existing datatypes). Since they have done this, I would expect them to open up the Cypress board next year; now that they have given us the code for it (and the tools to modify it), it seems difficult to expect nobody will change the code. Since the board is on the Driver Station, has limited current-sourcing potential (USB power is not much), and has to go through the common protocol to communicate with the robot, it seems reasonable for them to open it up.
As a beta test team doing LabVIEW, we haven’t heard about any “opening up the FPGA to teams.”
However, there are some extra FPGA-related things that are now included for use in LabVIEW, though they do not control the FPGA itself. They are quite advanced, but if you’re inquiring about the FPGA, then maybe this applies to you. They are:
-3 more external timers for DMA for use with the External Sample Clock DMA Open.
-A bit handshake mode has been added to I2C - the CompatabilityMode input on I2C Read and I2C Write.
-The FPGA timer output now has a signed count.
I agree with apalrd; the FPGA is an advanced thing that you don’t really need to mess with, and probably shouldn’t.
THIS. I am building an FPGA image for the NI 9074 for work, and I can tell you that it is complex. Plus, if you think compile times stink now… I hit compile, then walk laps around the building for the next 30 minutes (I have to keep my figure somehow).
Just reminded me of this:
http://www.thinkgeek.com/images/products/additional/large/dac0_compiling.jpg
http://www.thinkgeek.com/tshirts-apparel/xkcd/dac0/
Now can’t they add another FPGA onto the board, or at least provide an extra FPGA to use? I honestly think that exposing students to harder and more complex “things” is the wise thing to do. As we speak right now, the world of computing is zooming by us. We are in the age of multicore/processor parallel processing. Due to the physical limitations and power consumption levels computers are now reaching, the trends are leveling out. There is a whole lecture about this; the bottom line is that if we want to produce engineers of the future, expose them to the technologies of the new, not the old.
This has been a common rumor for the past two years. The FPGA contains, among other things, the ability to stop all robot motion under field control. It is unlikely that it will become open for team modification until someone can ensure that a wayward robot could be disabled before it did damage to personnel, the field, or the venue.
Actually, that is pretty close to reality.
Now can’t they add another FPGA onto the board, or at least provide an extra FPGA to use? I honestly think that exposing students to harder and more complex “things” is the wise thing to do. As we speak right now, the world of computing is zooming by us. We are in the age of multicore/processor parallel processing. Due to the physical limitations and power consumption levels computers are now reaching, the trends are leveling out. There is a whole lecture about this; the bottom line is that if we want to produce engineers of the future, expose them to the technologies of the new, not the old.
I’m sure there is a whole lecture about it but that doesn’t change the facts that FRC is not about teaching and students can’t work with the latest technologies without understanding the basics. Learning the basics on complex hardware is difficult.
If you really want to play around with an FPGA a quick google search should turn up some hardware you can order and use.
I think there is enough crazy advanced stuff we can do without getting into the advanced programming of an FPGA. Oh, if only we had an 8 or 9 week build season. Teams would be making amazing things and pushing the limits of what is possible with radio-controlled robots.
I also remember the rookie teams who struggle with the complexity and detail of our current control system. I think it’s a great system, but if you’re not a strong team, or if you’re just trying to get a rolling robot, sometimes the extra advancement doesn’t matter, IMHO.
As we speak right now, the world of computing is zooming by us.
Wouldn’t this be even more true if you weren’t involved in FIRST? If FIRST provides an approachable and motivating task that exposes you to the real-world problems and some of the real-world tools, then you are better prepared to continue learning and tackle bigger and better things. If it is still your goal, you’ll then be prepared to get into one of the fast-lanes of computing. Programming robots and real-time embedded systems is also quite different from databases or programming in general, and you are also being exposed to those differences.
We are in the age of multicore/processor parallel processing … the bottom line is that if we want to produce engineers of the future, expose them to the technologies of the new, not the old.
FIRST is trying to inspire the engineers of the future, but it is not fully responsible for producing them. If your resume only lists experiences such as robot programmer for FIRST team and auditing a few lectures from Stanford, and the woman next to you has a four year degree from a local state college, she is way ahead in the process. You will have to knock my socks off to get the job instead of her. She is better prepared, even if you are more inspired. FIRST is an awesome program, but I don’t believe it is a replacement for higher level education and training.
If you can learn to apply the enthusiasm, team-skills, and GP you have seen in FIRST towards high school studies, college, and then work challenges, FIRST has accomplished its goals and you will accomplish yours too.
Back to the technical stuff. Multi-tasking doesn’t require multiple cores, and you have access to multi-tasking concepts in a variety of tools and approaches. C++ and Java have threads, tasks, processes, mutexes, semaphores, critical sections, and others. LV chooses not to directly expose threads, but uses dataflow specification to capture what can and cannot be run simultaneously, then uses threads and such to carry out the execution. This limits the need for synchronization of data access. For synchronizing access to other resources, it exposes semaphores, notifiers, events, rendezvous, RT FIFOs, and other synchronization primitives. All tools allow you to do multi-tasking and learn about the issues.
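A minimal example of the multi-tasking concepts above, in Python (threads plus a mutex protecting shared data; the same pattern maps directly onto C++/Java threads, and LabVIEW expresses the equivalent constraints through dataflow):

```python
import threading

# Several tasks increment a shared counter; the Lock (mutex) serializes
# access to the critical section so no updates are lost. This is the
# same thread/mutex pattern available in C++ and Java.

counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:            # critical section: one thread at a time
            counter += 1

threads = [threading.Thread(target=worker, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # always 40000 with the lock held around each update
```

Note that this runs (and teaches the same lessons) on a single core: the scheduler interleaves the threads, which is exactly why the lock is needed.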
What I’m getting at is that the concepts and abstractions for multi-tasking are equally valid on a single core with a multi-tasking scheduler – like the cRIO. More cores act as an accelerant, but aren’t fundamentally different. If you learn these tools on the FIRST controller, they will be very similar on a PlayStation 3, on high-end workstations, and other MIMD architectures. SIMD requires additional tools, and while it is currently somewhat rare, that may change in the future provided GPUs and BIG FPGAs come down in cost. Again, the tools you are using are already being enhanced to support SIMD, and it is a continuation of what you are learning now.
A comment on the sudden issue with clock speeds – it wasn’t really that sudden. Supercomputers hit the wall twenty years ago and tried many things, including more CPUs. It is now impacting consumer devices, so more magazines write about it and more people talk about it, but it is the same problem with similar solutions, just different price constraints.
Finally, I suspect that your “need for speed” is related to vision processing. As I’ve mentioned in other threads, vision is a hungry hungry beast – it will consume whatever CPU resources you have and demand more. Successful vision systems always benefit from and often require controlling the lighting, controlling the content in the scene, and skillful application of simple processing approaches. Oh, and lots of trial and error. Sometimes a bigger processor, or more processors is the right approach, but it also reaches diminishing returns, and rather quickly.
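The “simple processing approaches” mentioned above often come down to something like brightness thresholding on a well-lit target. A toy sketch on a fake grayscale image (not a real camera pipeline; the image and threshold are made up for illustration):

```python
# Toy example of the simplest vision trick: threshold the bright pixels
# and take their centroid. This works far better when you control the
# lighting so the target is the brightest thing in the scene.
# Fake 5x5 grayscale "image" with a bright 2x2 target blob.

image = [
    [ 10,  12,  11,  13,  10],
    [ 12, 250, 252,  11,  10],
    [ 11, 251, 249,  12,  13],
    [ 10,  12,  11,  10,  12],
    [ 13,  10,  12,  11,  10],
]

def find_target(img, threshold=200):
    """Return the centroid (row, col) of pixels above threshold, or None."""
    hits = [(r, c) for r, row in enumerate(img)
                   for c, v in enumerate(row) if v > threshold]
    if not hits:
        return None
    return (sum(r for r, _ in hits) / len(hits),
            sum(c for _, c in hits) / len(hits))

print(find_target(image))  # centroid of the bright 2x2 blob -> (1.5, 1.5)
```

Even this trivial pass over every pixel hints at why vision is hungry: a real 640x480 camera at 30 fps is over nine million pixels per second before you do anything clever.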
In terms of computing power, FIRST has enhanced the control system rather dramatically in the last few years. This will most likely continue, but it has to be balanced with cost. And on a robot, you don’t only want fast processing; you want lots of sensors with precise timing, lots of outputs, tight control timing, and flexibility in how you combine those. Oh, and simplicity of setup is nice too. I’m not going to claim that the current control system is best in all these areas, but it scores well in many, and I’m sure that FIRST will continue to upgrade it to improve things. Meanwhile, it is not uncommon to find teams adding FRC-legal coprocessors to enhance areas such as vision. I knew of two last year that had a dedicated image processor capable of handling multiple cameras. If your team wants to lead the charge with an Atom-based or Beagle-based image-processing array, go for it. I’m not sure it will win the game, but it will satisfy many of the FIRST goals, and almost certainly win you some awards.
Greg McKaskle
Apparently the bottleneck right now is power consumption, due to the electricity lost every cycle because of the very thin gate stacks. They are becoming so small that they are now only a few atomic layers thick, which leads to more leakage. Another bottleneck is not in the processor itself but in the time it takes to access memory.
I got that from this lecture:
http://dl.dropbox.com/u/15093997/Lecture%2001_%20Course%20introduction%20(par.mp4
(Right click, save as)
Yeah, I will be adding another processor onto the robot, but I really have a problem asking people for money or to buy anything for me. I’m just not that comfortable asking. Of course it’s for the robot, but if we are low on money I would want the essentials bought first, like the metal for the chassis, etc. I don’t know the budget, but I don’t think $200 for the cameras and the processor will affect us that much.
Like I said in my previous posts, I am very interested in the PS3 (whether it’s legal or not, I am still interested for my personal programming endeavors). Since it has multiple cores and is a distributed-memory system where each core has its own memory, I won’t have to deal with locking and such as much.