#10 | 10-12-2009, 22:32
Unsung FIRST Hero
Al Skierkiewicz
Broadcast Eng/Chief Robot Inspector
AKA: Big Al WFFA 2005
FRC #0111 (WildStang)
Team Role: Engineer
 
Join Date: Jun 2001
Rookie Year: 1996
Location: Wheeling, IL
Posts: 10,770
Re: finding amplitude of a specific frequency

Marshall,
I can't help but feel you are barking up the wrong tree here. Noise spectra are easy to deal with and control with DSP functions. You will find looking for amplitude variations for an acoustic signal heart wrenching. In a sterile (anechoic) environment, looking for the presence of multi-frequency tones is pretty easy. In a real world environment, reflections cause significant signal distortions. Using FFT techniques can help if you first know the arrival of the initial sound. By "windowing" in the time domain, you can effectively block reflections that are not close (in time) to the original signal. However, reflective surfaces have a nasty habit of diffracting the signal causing multi-frequency signals to arrive at slightly different times and with varying signal levels. Two signals arriving in near coincidence at a receiver will then add slightly out of phase. The result is a very distorted waveform. Distortion generates additional frequencies that you may be also looking for. i.e. as the resulting waveform become more square in shape, odd harmonics of the fundamental frequency start to be generated. Although they decrease in amplitude as the frequency increases, they are generated none the less. In audio this results in a rather confusing spectra. If you consider that two signals in phase can only add to 6dB but 180 degrees out of phase can add to infinite null you can imagine the resulting waveforms for phase alignments of less than 180.
Speaker designers fought this phenomenon for years until they finally came to realize that multi-driver speakers overlap their output because the crossover filters lack a sharp cutoff. The individual drivers have different acoustic centers, so two drivers emitting the same frequencies did so at different times/phases, which degraded the signal at the listener's ears. A search on "time-aligned speakers," or Don Davis's book on sound system design, will be an eye-opener.
Further, diffraction will take place as the transmitted signal passes around other structures, much as light is diffracted by a slit or dispersed by a prism. In audio this produces the phase distortion described above.
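The time-domain "windowing" mentioned earlier can be sketched like this (again an illustration, not your rig: the 5 ms direct arrival, 30 ms reflection, and 20 ms gate are all hypothetical). The capture is gated to a short window starting at the direct sound's arrival, so the later reflection never enters the amplitude estimate:

```python
import math

def time_gated_amplitude(samples, fs, freq, arrival_s, gate_s):
    """Keep only a short gate starting at the direct sound's arrival,
    then estimate the tone's amplitude with a single-bin DFT."""
    start = int(arrival_s * fs)
    gated = samples[start:start + int(gate_s * fs)]
    n = len(gated)
    # correlate against sin/cos at the target frequency (one DFT bin)
    re = sum(x * math.cos(2 * math.pi * freq * i / fs) for i, x in enumerate(gated))
    im = sum(x * math.sin(2 * math.pi * freq * i / fs) for i, x in enumerate(gated))
    return 2.0 * math.hypot(re, im) / n      # scale to peak amplitude

# Hypothetical capture: direct 1 kHz tone arrives at 5 ms, a reflection at 30 ms.
fs, f0 = 48000, 1000.0
sig = [0.0] * int(0.060 * fs)
for t0, level in ((0.005, 1.0), (0.030, 0.8)):        # (arrival s, level)
    start = int(t0 * fs)
    for i in range(start, len(sig)):
        sig[i] += level * math.sin(2 * math.pi * f0 * (i - start) / fs)

# A 20 ms gate starting at the direct arrival closes before the reflection,
# so the estimate recovers the direct level alone.
gated = time_gated_amplitude(sig, fs, f0, 0.005, 0.020)
print(f"gated estimate: {gated:.3f}")                  # → 1.000
```

The catch, as noted above, is that this only works if you know the arrival time of the direct sound, and real reflections smeared by diffraction rarely leave as clean a gap between arrivals as this toy signal does.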
__________________
Good Luck All. Learn something new, everyday!
Al
WB9UVJ
www.wildstang.org
________________________
Storming the Tower since 1996.

Last edited by Al Skierkiewicz : 10-12-2009 at 22:37.