I’m trying to make a DTMF decoder with LabVIEW. However, I want it to be able to handle the possibility of several signals overlapping, instead of just assuming it’s the first one it finds.
To do that, I need to find the amplitudes of seven predefined frequencies.
My question is, what is the most efficient way of doing this?
Should I separately filter out each frequency with strict tolerances, and then analyze each filtered waveform to find the frequency of greatest amplitude?
(Doesn’t sound very efficient)
Or should I use “Extract single tone information.vi” without filtering, and just enter the frequency I’m looking for in the “advanced” cluster? (I’m assuming a ± 5% band would be tight enough)
Or should I look for the highest peak in the waveform, generate a waveform of the desired frequency, and somehow correlate the two waveforms? (say, subtract or divide, and then look at the remaining variation in frequency)
I think you are looking for a “Fourier Transform”. It converts a time-domain signal to a frequency-domain signal. Basically, it is the efficient method of “separately filter out each frequency with strict tolerances, and then analyze each filtered waveform to find the frequency of greatest amplitude” you mentioned.
You may see it referred to as “FFT”, for “Fast Fourier Transform”. For whatever reason, that extra F got into the lexicon.
Apply an FFT to your signal, and look at the amplitudes of the frequencies you care about. Remember to take the absolute value of the amplitude, because you don’t care about phase information.
Marshall,
You will find in DTMF that searching for frequency by amplitude will get you unusual results. The DTMF signaling system uses eight distinct tones, mixed in groups of two, to represent 16 states. The tones are divided into a high group and a low group, centered around 1 kHz. The high group is 1209 Hz, 1336 Hz, 1477 Hz, and 1633 Hz; the low group is 697 Hz, 770 Hz, 852 Hz, and 941 Hz. These tones were specifically chosen to sit in the passband of the phone line, which has a high-frequency limit of 3 kHz and a low-frequency limit of 300 Hz.

When a DTMF signal is generated, one high tone and one low tone are mixed. The resulting waveform varies in amplitude as the high-frequency signal is modulated by the low-frequency signal; effectively, the two signals are algebraically summed.

Simple DTMF decoders use Phase Locked Loops to recognize the frequencies present and output a one-of-sixteen tally signal. The PLL method minimizes the effects of noise, amplitude variations, and distortion in the signal, resulting in fewer errors. You will likely need some analog circuitry ahead of the digital implementation to bring the original signal to a level and shape that the control device can use.

There is a ton of info available on this subject. You might try looking into Ham Radio books for various discussions and implementations. Hams have been using DTMF since it was invented for station control, remote signaling, and phone dialing from radios.
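For anyone who wants to experiment, the mixing described above can be sketched in a few lines. This is plain Python rather than LabVIEW (easier to post as text); the function name and the default sample rate and duration are my own:

```python
import math

def dtmf_samples(f_low, f_high, fs=8000, duration=0.05):
    """Generate a DTMF burst: one low-group tone and one high-group tone,
    algebraically summed (not multiplied), as described above."""
    n = int(fs * duration)
    return [math.sin(2 * math.pi * f_low * i / fs) +
            math.sin(2 * math.pi * f_high * i / fs)
            for i in range(n)]

# e.g. the digit '5' is 770 Hz (low group) + 1336 Hz (high group)
burst = dtmf_samples(770, 1336)
```

Because the two tones beat against each other, the summed waveform’s peak swings well above the amplitude of either tone alone, which is exactly the amplitude variation being described.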
The Fast Fourier Transform is a specific implementation, based on powers of two and something called a “butterfly” operation. It’s very well optimized for digital computers.
For DTMF decoding, there is an even more specific implementation called the Goertzel algorithm. Instead of computing the whole spectrum, it evaluates the “DFT” (Discrete Fourier Transform) at just the seven (or eight) frequencies of interest, one pass per tone. That’s a lot faster than doing a full FFT over the entire range of frequencies.
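In case it helps to see the Goertzel recurrence as text, here is a minimal pure-Python sketch (a LabVIEW user would wire up the equivalent, or use the MathScript function); the function name is my own:

```python
import math

def goertzel_power(samples, fs, f_target):
    """Goertzel algorithm: squared magnitude of a single DFT bin,
    computed with one multiply-accumulate recurrence per sample
    instead of a full FFT."""
    n = len(samples)
    k = round(f_target * n / fs)       # nearest DFT bin
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    # squared magnitude of bin k; for an exact-bin, unit-amplitude
    # sine over a full record this equals (n/2)**2
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2
```

Run it once per DTMF frequency and compare the eight powers; the two largest identify the digit.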
A quick Google search shows that there’s a goertzel MathScript function in LabVIEW. The book Digital signal processing system-level design using LabVIEW appears to have a complete DTMF decoder as one of its lab exercises.
This is why I love this forum. I can read what I thought to be a question that I understood, go to bed, wake up, and get two answers that remind me that I still have lots of cool things to learn.
As to the FT vs. FFT statement: what I’m trying to say is that people often say FFT when they mean FT. It’s sort of like calling all rectangles squares because you happen to be more used to squares.
Well, I guess I should follow up on this.
For now I’m using a modified “Extract single tone information.vi” to look at the amplitudes of multiple frequencies using a single Fourier transform. (Apparently the FFT gives me a Hann spectrum; I’m probably not going to take the time to determine why that particular windowing function is used over others.)
I was unable to find any Discrete Fourier Transform functions - they seem to be available only in MathScript, and we do not get MathScript with the FRC version of LabVIEW.
Phase Locked Loops are an excellent method at the hardware level, because of the ease of signal generation and comparison. At the software level, however, it seems very inefficient to be continuously subtracting and integrating a waveform.
You may be wondering why I’d care to do it in software:
I’m planning a simple communication protocol between robots, and I’d like to choose the frequencies for their transmission and noise characteristics, not be limited to frequencies that are commonly used (and run the risk of excessive sound output to ensure the message can be received).
Also, software is easier to distribute than hardware.
Well, a Fourier transform is a Fourier Transform, the only difference is the method used. As Alan noted, the FFT is a method well suited to use by computers. In today’s world, the overwhelming majority of the Fourier Transforms actually done use the FFT.
Oh, and it’s a ton faster than the “original” way developed by Fourier.
What does it do? It converts between the Frequency domain and the Time domain.
Time domain is what you see on an oscilloscope: The X axis is time.
Frequency domain is what you see on a spectrum analyzer. X axis is frequency.
In both cases, the Y axis is amplitude.
I could certainly be wrong, but I believe the FFT size you specify, which can be different from the size of the array, determines whether you have an FFT, a zero-padded FFT, or a DFT.
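To make the bookkeeping concrete: the bin spacing is the sample rate divided by the transform length, so zero-padding the same record onto a longer FFT puts the spectrum on a finer grid (it interpolates between bins, but doesn’t add true resolution). A trivial sketch, with a made-up helper name:

```python
def bin_spacing_hz(fs, fft_size):
    """Frequency spacing between FFT bins for a given transform length."""
    return fs / fft_size

# 400 samples at 8 kHz give 20 Hz bins; zero-padding that same record
# out to a 1024-point FFT narrows the grid to about 7.8 Hz.
```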
I don’t have access to the laptop with the FRC LV, but I thought that the FRC LV included Advanced Analysis, and therefore the MathScript node. If the Functions menu contains the blue script node, you can use the goertzel function. Mine has it, but as I said, it isn’t a strict FRC installation. Be sure you aren’t simply looking in the simple palettes.
Marshall,
I can’t help but feel you are barking up the wrong tree here. Noise spectra are easy to deal with and control with DSP functions, but you will find looking for amplitude variations in an acoustic signal heart-wrenching. In a sterile (anechoic) environment, detecting the presence of multi-frequency tones is pretty easy. In a real-world environment, reflections cause significant signal distortion.

FFT techniques can help if you first know the arrival time of the initial sound: by “windowing” in the time domain, you can effectively block reflections that are not close (in time) to the original signal. However, reflective surfaces have a nasty habit of diffracting the signal, causing multi-frequency components to arrive at slightly different times and with varying signal levels. Two signals arriving in near coincidence at a receiver then add slightly out of phase, and the result is a very distorted waveform.

Distortion generates additional frequencies that you may also be looking for: as the resulting waveform becomes more square in shape, odd harmonics of the fundamental frequency start to be generated. Although they decrease in amplitude as the frequency increases, they are generated nonetheless. In audio this results in a rather confusing spectrum. If you consider that two signals exactly in phase can only add to +6 dB, but 180 degrees out of phase can cancel to a complete null, you can imagine the resulting waveforms for phase alignments of less than 180 degrees.
Speaker designers fought this phenomenon for years until they finally came to realize that multi-driver speakers had overlapping output due to the lack of sharp cutoff in the crossover filters. The individual drivers have different acoustic centers, so two drivers emitting the same frequencies did so at different times/phases, which resulted in signal degradation at the listener’s ears. A search on Time-Aligned Speakers, or Don Davis’s book on sound system design, will be an eye-opener.
Further, diffraction will take place as the transmitted signal passes through other structures much like light is diffracted as it passes through a slit or a prism. In audio this results in the phase distortion described above.
However, I’m curious what you see as viable options for a beacon system. Here were my limitations:
simple and inexpensive hardware
must not eat up processor time
every robot must be able to transmit their team number at least twice a second
must not be prone to collisions