FFT for quick, educational demonstration?

DSP, Plugin and Host development discussion.

Post

In ten minutes one can only say "you know we had seen this in course x, and that in course y, and sure you remember it all, and these formulas are obvious (sure they are)" and then have a coffee break, which is at least useful for something.
~stratum~

Post

What I want to do is capture a video of processing a file, showing the signal before and after the transform, and embed that video in the PowerPoint presentation.

Let’s say you are using an EEG to measure, in real time, changes in brain state in patients under anesthesia. This is an example of time-series data. The goal is, generally, to be able to make statistical inferences from this data, and specifically in this case to determine whether distinct anesthetic states are associated with distinct EEG signatures. A person is under anesthesia for a certain length of time. An EEG makes measurements at a certain sampling rate, i.e. a certain number of measurements per second, e.g. 250 Hz. Thus, these brain-state signals are localized in time.
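To make that concrete, here is a minimal numpy sketch (the rhythms and durations are made up for illustration, not taken from any real EEG) of a signal sampled at 250 Hz whose content changes over time:

```python
import numpy as np

fs = 250                      # sampling rate in Hz, as in the EEG example
t = np.arange(0, 4, 1 / fs)   # 4 seconds of "recording"

# Toy nonstationary signal: a 10 Hz alpha-like rhythm for the first
# 2 seconds, then a 2 Hz delta-like rhythm for the last 2 seconds.
x = np.where(t < 2,
             np.sin(2 * np.pi * 10 * t),
             np.sin(2 * np.pi * 2 * t))

print(len(x))  # 1000 samples = 4 s * 250 Hz
```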

While the Fourier transform can represent the signal power in terms of frequency, it carries no information about how these features vary over time. This limits the usefulness of Fourier analysis for signals that are localized in time, like transients or any signal of finite extent, such as the EEG measurements.
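A quick way to show this: two signals containing the same tones in opposite temporal order have (numerically) identical magnitude spectra. A small numpy sketch, with arbitrary example frequencies:

```python
import numpy as np

fs = 250
t = np.arange(0, 2, 1 / fs)

# Two signals with the same two tones, but in opposite temporal order.
a = np.concatenate([np.sin(2 * np.pi * 5 * t), np.sin(2 * np.pi * 40 * t)])
b = np.concatenate([np.sin(2 * np.pi * 40 * t), np.sin(2 * np.pi * 5 * t)])

# b is a circular shift of a, so the magnitude spectra match: the Fourier
# transform says *which* frequencies are present, not *when* they occur.
spec_a = np.abs(np.fft.rfft(a))
spec_b = np.abs(np.fft.rfft(b))
print(np.allclose(spec_a, spec_b, atol=1e-6))  # True
```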

Post

In theory the Fourier transform uses the whole time domain of a function (from -infinity to +infinity); that's why time information is lost. But engineering and math are not the same thing. Engineers can use and abuse math, and for that reason there is a version called the STFT (short-time Fourier transform). You can compare results from adjacent windows and even process the data in those windows to achieve some useful purpose. There is a limit to the accuracy of simultaneous time and frequency localization known as the uncertainty principle; search for it on Wikipedia, but skip the paragraphs about quantum mechanics.
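A minimal hand-rolled STFT in numpy illustrates the idea (window length, hop, and test frequencies are arbitrary demo choices, not anything canonical):

```python
import numpy as np

def stft(x, win_len, hop):
    """Minimal STFT: Hann-windowed frames, one FFT per frame."""
    win = np.hanning(win_len)
    frames = [x[i:i + win_len] * win
              for i in range(0, len(x) - win_len + 1, hop)]
    return np.array([np.fft.rfft(f) for f in frames])  # (n_frames, n_bins)

fs = 1000
t = np.arange(0, 1, 1 / fs)
# 100 Hz tone for the first half second, 300 Hz for the second half.
x = np.where(t < 0.5, np.sin(2*np.pi*100*t), np.sin(2*np.pi*300*t))

S = np.abs(stft(x, win_len=128, hop=64))
freqs = np.fft.rfftfreq(128, 1 / fs)
# Unlike one big FFT over the whole file, adjacent windows reveal the change:
print(freqs[S[0].argmax()], freqs[S[-1].argmax()])  # ~100 Hz, then ~300 Hz
```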
~stratum~

Post

stratum wrote:In ten minutes one can only say "you know we had seen this in course x, and that in course y, and sure you remember it all, and these formulas are obvious (sure they are)" and then have a coffee break, which is at least useful for something.
You’re right, that’s what I have to do, hence wanting to demonstrate the frequency-domain limitation using sound. It’s immediate and obvious, takes a second, and it’s engaging (especially since these very young teen-to-twenty-something students don’t like to read and prefer multimedia; I myself am an older student).

The presentation is on Big Data and Fourier. It has two parts:

1) Sparse Fourier Transform - since audio/video/image data is sparse, it can speed up data compression and save money on energy costs and wireless broadband costs by improving on O(n log n) complexity.

2) SSMT algorithm - to provide a unified statistical inference framework to analyze time-series data. Fourier alone can only be used for frequency, which limits its usefulness.

It is high level - there is no explanation of the mathematics involved.

When I discussed this topic with classmates, many wondered why Fourier wouldn’t work on its own to include time-domain estimates. I showed them the references in the book and online, and realized they didn’t really understand, in a tangible way, what the frequency meant. They only understood in an abstract sense, enough to perform on assignments and test questions (which don’t even discuss it in terms of frequency; they discuss it in terms of real and complex numbers, scalar products, etc.). This is because the time/frequency aspects of the Discrete Fourier Transform (the only transform we covered, with its FFT implementation) are somewhat glossed over: it is emphasized as a tool for multiplying polynomials and large integers.

They got it when I showed them a video of Spectral Conquest in a second, versus trying to explain in words... hence this KVR topic.

Post

stratum wrote:In theory the Fourier transform uses the whole time domain of a function (from -infinity to +infinity); that's why time information is lost. But engineering and math are not the same thing. Engineers can use and abuse math, and for that reason there is a version called the STFT (short-time Fourier transform). You can compare results from adjacent windows and even process the data in those windows to achieve some useful purpose. There is a limit to the accuracy of simultaneous time and frequency localization known as the uncertainty principle; search for it on Wikipedia, but skip the paragraphs about quantum mechanics.
Yes, that is what I have read. Thank you for your description. Very helpful and informative.

Post

So they don't understand the word "frequency", but you are trying to explain data analysis methods involving the Sparse Fourier Transform and the SSMT algorithm (whatever they are, I don't know). How is this going to work?
~stratum~

Post

stratum wrote:So they don't understand the word "frequency", but you are trying to explain data analysis methods involving the Sparse Fourier Transform and the SSMT algorithm (whatever they are, I don't know). How is this going to work?
Indeed!

SFT - in audio/image/video signals most Fourier coefficients are small or equal to zero, so the DFT output is approximately sparse. For sparse signals, the lower-bound complexity of the DFT no longer applies. When a signal has a small number k of non-zero coefficients, the DFT output can be represented using only those k coefficients. Thus one can achieve o(n log n) instead of O(n log n) for k = o(n).
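A toy numpy illustration of the sparsity idea (the signal and k are contrived so the spectrum is exactly k-sparse; this only demonstrates the representation, not a sublinear-time algorithm):

```python
import numpy as np

n = 1024
t = np.arange(n) / n
# A signal that is exactly k-sparse in the frequency domain:
# three sinusoids -> 2 * 3 = 6 nonzero DFT bins.
x = (np.sin(2*np.pi*50*t) + 0.5*np.sin(2*np.pi*120*t)
     + 0.25*np.sin(2*np.pi*333*t))

X = np.fft.fft(x)
k = 6
keep = np.argsort(np.abs(X))[-k:]   # indices of the k largest bins
X_sparse = np.zeros_like(X)
X_sparse[keep] = X[keep]            # k coefficients describe the signal

x_rec = np.fft.ifft(X_sparse).real
print(np.max(np.abs(x - x_rec)))    # near zero: k bins suffice
```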

SSMT = State Space + Multitaper. SS - an inference framework for analyzing systems with properties that evolve over time. Multitaper = Fourier + tapering. Tapering increases frequency resolution, and MT optimizes the bias-variance tradeoff.
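A sketch of the multitaper recipe using SciPy's DPSS (Slepian) tapers; the test signal, NW, and M here are arbitrary demo values, not anything from the SSMT paper:

```python
import numpy as np
from scipy.signal.windows import dpss

fs, n = 250, 1000
t = np.arange(n) / fs
x = np.sin(2*np.pi*10*t) + 0.5*np.random.default_rng(0).standard_normal(n)

# M DPSS tapers; NW is the time-bandwidth product.
M = 5
tapers = dpss(n, NW=3, Kmax=M, norm=2)   # shape (M, n), unit-energy tapers

# One periodogram per tapered copy, then average: the multitaper estimate.
spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
mt_spectrum = spectra.mean(axis=0)

freqs = np.fft.rfftfreq(n, 1 / fs)
print(freqs[mt_spectrum.argmax()])       # peak near the 10 Hz tone
```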

Post

stratum wrote:So they don't understand the word "frequency" but you are trying explain data analysis methods involving Sparse Fourier Transform and SSMT Algorithm (whatever they are, I don't know). How is this going to work?
From what I could tell, they understood abstractly. They didn’t understand concretely, or intuitively.

And by this I mean they understood the transformation process from one “domain” to the other, but the context was polynomial multiplication and large-integer multiplication. There was literally one sentence about the time domain in the book, because the class is about algorithms: how to implement them in code, etc. And like I said, most of these young students don’t read the book. They go directly to YouTube to watch a 5-minute video. Personally, I can’t learn that way, but I’m an old student. I have to read.

All they needed was an example...

Post

The time/frequency-domain relationship is about eigenfunctions of LTI systems, i.e. complex signals can be decomposed into sinusoids, and the steady-state behavior of LTI systems can be analyzed with respect to how they react to those sinusoids separately. I don't know how it ended up being used in polynomial multiplication and large-integer multiplication, so I'm not surprised that they didn't see an immediate relationship either.

The most straightforward way to understand what 'frequency' means is to take a guitar and tune it by ear. If that concept doesn't directly translate into its usage in data-analysis problems, there is something missing in between, and I guess it's not the understanding of the word 'frequency'.
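In that spirit, a tiny numpy sketch: "tuning" a simulated A string by finding the FFT peak (the string model here is just a fundamental plus two harmonics, nothing physical):

```python
import numpy as np

fs = 8000
t = np.arange(0, 1, 1 / fs)
# Crude "plucked A string": 110 Hz fundamental plus a couple of harmonics.
x = (np.sin(2*np.pi*110*t) + 0.5*np.sin(2*np.pi*220*t)
     + 0.25*np.sin(2*np.pi*330*t))

spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), 1 / fs)
print(freqs[spectrum.argmax()])   # 110.0 -- the pitch you tune by ear
```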

p.s. that 'multitapering' approach reminds me of this: https://en.wikipedia.org/wiki/Signal_averaging (not that I have understood what you meant by it, but it seems similar, the "noise" being the uncertainty of the frequency measurement as found by a windowed DFT).
~stratum~

Post

Apparently it's about a statistical random process generating various events with certain frequencies, i.e. those events are not periodic but random, but nevertheless I guess the DFT is supposed to give a measure of that probability. The underlying concept in this case seems to be 'probability' instead of 'frequency', and apparently that's one possible source of confusion.

In an audio signal processing context people don't work like that. They take overlapping subsequent windows from the same signal, multiply each by a hat-shaped smoothing 'window function', and further assume that the spectral content between those windows changes continuously; i.e. they don't work on 'probabilities', they work on actual frequencies that are physically meaningful, i.e. you can hear that thing; it's not something abstract or a loosely related concept like probability.
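The overlapping-window scheme can be sketched in a few lines: with 50% overlap, Hann analysis windows tile the signal almost uniformly, so every sample gets roughly equal weight across frames. A numpy check (sizes are arbitrary):

```python
import numpy as np

win_len, hop = 512, 256            # 50% overlap
win = np.hanning(win_len)

# Overlap-add the window itself: with hop = win_len / 2, Hann windows
# sum to (nearly) a constant, so the frames tile the signal evenly.
total = np.zeros(win_len * 4)
for start in range(0, len(total) - win_len + 1, hop):
    total[start:start + win_len] += win

middle = total[win_len:-win_len]   # ignore the ramp-up/ramp-down edges
print(middle.min(), middle.max())  # both close to 1.0
```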
~stratum~

Post

stratum wrote:
Apparently it's about a statistical random process generating various events with certain frequencies, i.e. those events are not periodic but random, but nevertheless I guess the DFT is supposed to give a measure of that probability. The underlying concept in this case seems to be 'probability' instead of 'frequency', and apparently that's one possible source of confusion.

In an audio signal processing context people don't work like that. They take overlapping subsequent windows from the same signal, multiply each by a hat-shaped smoothing 'window function', and further assume that the spectral content between those windows changes continuously; i.e. they don't work on 'probabilities', they work on actual frequencies that are physically meaningful, i.e. you can hear that thing; it's not something abstract or a loosely related concept like probability.
Indeed, you are correct.

It is my understanding from the paper that state-space modeling is common practice in analyses of nonstationary time series; it assumes a minimal interval length and that the data are stationary on intervals of this minimal length. The Fourier transform represents this observational model in the frequency domain.

Here, the number of tapers M is chosen to minimize the local bias-variance tradeoff for spectrum estimation on each stationary interval using the standard MT formula. Since the tapers are orthonormal, the Fourier transform of each tapered series has the same probability distribution and spectral representation as the non-tapered SS model.
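The orthonormality claim is easy to check numerically with SciPy's DPSS tapers (n, NW, and M are demo values here, not the paper's):

```python
import numpy as np
from scipy.signal.windows import dpss

# M DPSS tapers on a stationary interval of n samples.
n, M = 500, 5
tapers = dpss(n, NW=3, Kmax=M, norm=2)   # shape (M, n), unit L2 norm

# Orthonormality: the Gram matrix tapers @ tapers.T is the M x M identity.
gram = tapers @ tapers.T
print(np.allclose(gram, np.eye(M), atol=1e-8))  # True
```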

Therefore, the tapers used to achieve the desired frequency resolution, given the assumed length of the local stationary intervals, transform the original time series into M independent time series with M independent state-space models and their corresponding Fourier-transformed frequency domains.


Post

You can use the STFT to demonstrate the simultaneous time-frequency resolution restriction (i.e. the uncertainty principle), but I don't know how well it would map to the ideas above. A shorter STFT window means that the observed data is taken from the time range of that shorter window (therefore better localized in time), but it gives worse frequency resolution. If you increase the window length, you get better frequency resolution, but then the data is taken from a longer window, so time localization is worse. As far as audio signal processing is concerned, I guess this is as close as it gets to the idea you are trying to demonstrate by example.
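This tradeoff can be shown in a few lines: two tones 10 Hz apart blur into one spectral peak with a short window and separate with a long one (the frequencies, window lengths, and peak heuristic are arbitrary demo choices):

```python
import numpy as np

fs = 1000
x = lambda t: np.sin(2*np.pi*100*t) + np.sin(2*np.pi*110*t)  # 10 Hz apart

def n_peaks(win_len):
    """Count spectral peaks a win_len-sample Hann window resolves."""
    t = np.arange(win_len) / fs
    spec = np.abs(np.fft.rfft(np.hanning(win_len) * x(t)))
    # a bin is a peak if it exceeds both neighbours and 20% of the maximum
    is_peak = ((spec[1:-1] > spec[:-2]) & (spec[1:-1] > spec[2:])
               & (spec[1:-1] > 0.2 * spec.max()))
    return int(is_peak.sum())

print(n_peaks(64))    # short window: the two tones blur into one peak
print(n_peaks(1024))  # long window: two peaks, but worse time localization
```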
~stratum~

Return to “DSP and Plugin Development”