Motor Drivers

Would it be possible to create a slave circuit for the cRIO to control the small servos or even the motors, to prevent lag? I have an eight-core processor that I could use to do all the dirty work. Just note, this would be for next year, not this year; no time.

The processor is a P8X32A, in the form of a DIP-40 chip. I will try to migrate this to a QFP or QFN package.

I don’t think it would be legal (see the quoted rule below), though it would make for a neat project.

Every relay module, servo, and PWM motor controller shall be connected via PWM cable to the Digital Sidecar and be controlled by signals provided from the cRIO via the Digital Sidecar. They shall not be controlled by signals from any other source.

I don’t understand why you want to do this. The cRIO has more than enough power to handle whatever you can throw at it, except maybe intensive vision processing, and the FPGA can handle timing down to the microsecond.

This sounds like a terrible idea, even if it were legal (which it is not).

The Propeller is one of the last processors I would want to use as a robot controller. There are far better micros which have far more useful peripherals.

The cRIO’s I/O is really quite slow, since it’s read through the FPGA, which itself reads it from the modules over some high-speed bus (unsure exactly what it is), and the layering and OS overhead aren’t insignificant; but it’s still a 400 MHz 32-bit PowerPC with hardware floating point and many megabytes of RAM. The Propeller runs at a maximum of 80 MHz with an external clock, has 32 I/O lines with no analog, and has no peripherals to speak of other than the 8 cores (each of which has 512 words of RAM) plus 32K of shared system memory (RAM and ROM).

For some comparison, some of the MPC5500/5600 processors I deal with could be configured to run the real-time control loops of a robot entirely through peripherals and DMA (without any actual processor action or interrupts). That is the processor I would like to see in FRC.

I wanted to make it possible for a microcontroller with parallel-processing capabilities to take input, talk to the cRIO, and then send motor output. If possible, I would like the microcontroller to take over while the cRIO is processing an image. This microcontroller costs $8 from the manufacturer and $12 from stores, and can operate twice as fast as the cRIO: with a clock running at 5 MHz and a PLL multiplier of 16, the 8 cores can give 160 MIPS (160,000,000 instructions per second). That is pretty nice for this. Also, the cRIO needs to continuously interrupt, reducing performance. Sending a pulse through a PWM means that the cRIO has to wait for the signal to finish before it can do the next thing; this processor can have a core doing it in real time. The microcontroller has I2C, so it can be easily interfaced through the WPILib C++.

I am working on a schematic for a control board with two MCP3204 chips for 12-bit ADC, 3.3 V and 5 V regulators, and 256 Kb of EEPROM. I have also programmed this chip in ASM before. Hard, but the performance was great.

Interrupts don’t exactly work that way.

There are many ways to do something. The MPC5500/5600 I mentioned earlier has dedicated hardware to time outputs (eMIOS and eTPU), so the processor only has to set them up at init and write a memory-mapped register to set the value. The eTPU can even run timing-critical code on its internal processor, without interaction with the PowerPC. The cRIO uses an FPGA to do something similar; neither method requires an interrupt at all.

On a normal processor supporting interrupts, it would be common to generate PWM using a timer interrupt. You could set up the timer to interrupt when you next need to service some I/O, go on and do other things, and interrupt back when it’s time to do something again. All of this assumes the micro doesn’t have PWM generation in hardware; most can do it using timers, but the Propeller has nothing resembling hardware PWM generation.

The purpose of an interrupt on a micro without a pre-emptive OS is to provide a method to pre-empt the running task to deal with an important event (usually I/O- or clock-related), allowing the running task to go on with its business on a single faster core. There is nothing very inherently inefficient about an interrupt if the code is written well. When you can prioritize interrupts, it’s common to run fast code in a low-priority ISR as a basic pre-emptive system.

An embedded microprocessor is frequently surrounded by peripherals which can do a LOT of things in hardware and only signal interrupts when they are needed or wanted. The Propeller has none of these and has to do everything in software. It will spend most of its time doing mundane I/O tasks which are usually done in hardware, including waiting for events instead of using interrupts, which is exactly how it is designed to work. This means it will spend many clock cycles waiting on timing or spinning until it’s ready, where an interrupt-driven processor could use those clock cycles to do other useful work and interrupt back when it needs service again.

In addition to all the issues that Andrew has raised, I should point out that drawing the schematic is the “easy” part. Laying out a circuit board for a processor like this would require that you have experience designing (properly) multi-layer, high-speed circuit boards with multiple power and ground layers. There are no secrets or black art involved, just years of study. Learning to use the free Altium package only makes you a CAD operator, not a PCB designer. If you do not understand the difference, you are unlikely to design the circuit board properly, leading to “anomalous” operation that will make you tear your hair out. It is also likely that you will have a lot of trouble properly soldering the QFP and QFN packages.

It may be best if you do some research and find out what external processor boards other teams have been using to do their vision processing and buy one of them. It is likely that you will pay much more for just a bare PCB than you will for one of these fully functioning, mass produced processor boards.

Excuse my bluntness, but the Propeller doesn’t come within spitting distance of the cRIO. The PPC in the cRIO runs at 400 MHz and can execute up to 3 instructions per clock, for a theoretical peak of 1.2 billion instructions per second. The Propeller has 8 cores at 80 MHz but takes at least 4 clock cycles to execute a single instruction, for a theoretical peak of 160 million instructions per second. Note: those are theoretical peaks; neither will sustain that throughput with actual code.

Then look at the quality of the instructions the two processors provide: the Propeller doesn’t even have hardware support for multiply, while the PPC has a full-fledged floating-point unit and multiple integer units.

Once you add the cRIO’s FPGA, it is a “brought a twisty straw and wadded up paper to a battleship fight” scenario.

Adding a propeller to your robot will provide no measurable benefit.

I am in a STEM program and I have teachers that help me with these problems.

I am pretty sure that VxWorks (on the cRIO) uses a lot of the processing power itself, just like how a slow netbook might show 50% processor usage while running nothing but the operating system.

Even if you cripple the cRIO to only a quarter of the specs Eric provided*, it’s still no contest. 1/4 of the cRIO’s specs still beats the Propeller, and that’s without the FPGA, which does a lot of the heavy lifting in FRC. I still don’t understand what you expect to gain.

*I’m not an expert, but that sounds unrealistic under normal circumstances or even heavy FRC usage.

The system I am currently designing is supposed to be modular, kind of like the cRIO. An ADC breakout and a PWMPAL breakout are what I want to create. The modules can easily be swapped, removed, or reordered.

By the way, what I was asking is whether the Propeller could do continuous processing, especially when the cRIO is busy. I was thinking about using a Raspberry Pi, but the problem I see is that it draws a lot of power, and you cannot just remove the power plug to shut it down.

That sounds like a great personal project, but in order to match the cRIO you would need something far faster and more advanced. If you really wanted to do this, you could use the DIO pins and set them to correspond to a single motor controller output.

I am basing it on the Arduino, but with a much better processor in place of the Atmel AVR. Currently I am designing around the P8X32A-D40.

Which can’t come close to the cRIO. Unless you’re trying to do very intensive vision processing or process multiple images simultaneously, the cRIO can handle whatever you throw at it. I still don’t know what you’re trying to accomplish.

With regards to (this year’s*) rules: Coprocessors are allowed, but the cRIO has to control the robot. [R67, R55] Altering PWM pathways is expressly disallowed. [R54, R67] You have to go through the cRIO anyway, so I don’t see what you’d gain.

*I (not an expert nor insider) do not anticipate any of the relevant rules changing significantly for (near) future seasons.

This is to continuously drive the motors. When the cRIO is busy with another task, it cannot output another signal to the Jaguar or Talon. Also, I wanted all the sensors to be connected to this. The final output to the cRIO could possibly be LEFT | RIGHT | BACK | FORW, plus the driver input. For example, the cRIO could send this chip a command like RETRIEVE [sensor name], and the chip could give back the data. Using a laser rangefinder, for example, would allow you to get an accurate distance from a point, like the goal in this year’s challenge. The cRIO would only have to make the final decisions. Also, I could integrate a CAS into this chip so that when the cRIO needs to calculate large numbers, the coprocessor could do it. I really like Parallax products because I find them robust, powerful, efficient, and high quality. Parallax has a nice support team, even though they are such a small corporation. Their chips are also quite resistant to dangerous things like electrostatic discharge. When there is only one processor doing a ton of work, the whole system becomes laggy; it is a common complaint that the robot will stop in place while processing an image.

As others have told you, this isn’t true.