Recording times from a timer

I am working on a project for school, apart from the robotics team. I am using an Arduino Uno with a LabVIEW interface, and I am trying to use a magnetic switch as an RPM sensor. The problem I am having is that I do not know how to record the time between true reads from the switch and then use that number to calculate RPM. I was wondering if anyone would know how I could make this work.

Here is a link to some LabVIEW examples. Scroll down to the tachometer.

I’m not too familiar with LabVIEW, so I can’t help you with the technical details. However, I do see a potential problem. If (and I may not be thinking correctly here) your program sits there and continually reads the sensor, you could simply count cycles until a state change. The problem is that this ties up the processor. The way I think you should approach this problem (and it may be beyond your capability) is to use a hardware interrupt to handle the timing. The idea is that every time the sensor causes a hardware interrupt, the interrupt handler stores the time at which the interrupt occurred in a variable, and also stores the difference between that interrupt and the one before it. So have two vars (prevTime and interval): every interrupt takes the current time, subtracts prevTime from it, sticks the result in interval, then sticks the current time in prevTime.

What this does is free up the board to do other stuff while waiting for reads, instead of sitting there polling for state changes. Hope this helps.

Interrupts are a good way to approach this.

When the interrupt is triggered (the switch closes), you save the current time to a variable and exit the interrupt routine. Then you subtract the previous time from the current time, and that difference is used to calculate the wheel speed. Copy the current time into the previous time, and repeat.

Don already mentioned using an interrupt. With that method, you’ll also need a counter in your ISR so you know how many pulses the timestamp corresponds to.

Here’s another way to use an interrupt that gives you a bit more flexibility.

  • Set up a ring buffer1

  • Set up an interrupt to trigger on your sensor pulses

  • In your interrupt service routine, just add the new timestamp2 to your ring buffer and return from interrupt.

  • When you need a speed reading in your control code, do the following:
    – disable interrupts3
    – grab a copy of the ring buffer
    – enable interrupts

  • Now you have a history of timestamps of the past N pulses (N being the size of your ring buffer). If it’s not too noisy4, you can just use the difference between the latest two. Or you can use an average of the past n samples, where n<=N.

The other common way to compute speed is to have an interrupt service routine which just counts pulses. Then in your control code, you grab that counter (atomically), subtract the previous counter value, and divide by the elapsed time between the two readings (of the system timer). This works better at high speeds. As the speed gets slower the change in the counter gets smaller and the resolution becomes unacceptable.

1ask if you’d like more explanation

2e.g. read the microsecond system timer.

3operation needs to be protected to preserve data integrity

4at high speeds, the signal-to-noise ratio may suffer, in which case use more samples from the ring buffer.

i am still rather confused on how to put the whole tachometer together. Is there an easier way to make the switch into a tachometer?

Rather than just saying you’re confused, could you pick one of the responses you got and state what part you don’t understand? Make this a dialog.

A tachometer is a device that measures the rotational speed of something. The switch is the sensor you’re using to (try to) measure that speed.

Let’s go ahead slowly with this, and let us know what part of Ether’s post isn’t completely clear. We’ll be happy to post a wall of text for you with great detail - we really really want you to understand this completely - but to save both of us time, just tell us where you need to start and we’ll walk you through the whole thing. Honest. We like to help, because this is both cool stuff and important to understand.

OK, maybe to get started: Write a sketch that will enable interrupts, and then increment a variable when the interrupt gets serviced.

Enabling interrupts is simple, but you have to learn what that one line should look like and type it in yourself.

Then, you write a ‘subroutine’ that, when the interrupt takes place, runs the formula N=N+1 (where N is your variable; you need to declare it, I suggest globally for the moment, and also set N=0, but put that where it will only run one time).

Is this completely understood? If not, tell us what isn’t, even if you think it is a very dumb or simple question. It is OK, we all started somewhere…

You would do it the same way that you would program a time-based autonomous function. Determine how fast your processor runs through your code (every 50 milliseconds, every 10 milliseconds, etc.) by establishing a count and timing how long it takes to reach 100, for example. Let’s say you did this and found that it takes 3.5 seconds to run through your code 100 times. That means your code runs once every 0.035 seconds. Now let’s say that the count between the first reading and the second reading was 200. You would then multiply 200 by 0.035, which gives you the time between readings. (I fudged the numbers, so don’t assume that this is anything like what you will get.) :slight_smile: