Oversampling LPF question

DSP, Plugin and Host development discussion.

Post

Z1202 posted two images, both showing the same step-transition (presumably from a square wave), but sampled at two different sampling rates.

The whole premise of Shannon-Nyquist sampling is that we can perfectly reconstruct a band-limited signal after sampling. Hence, it makes sense to reason that for all intents and purposes, it remains very much the same signal, no matter where and how many sampling points we are placing, as long as the sampling is dense enough to avoid aliasing.

So the "perfectly" interpolated curve is really what matters and any change in the ripples of the interpolated curve when changing the sampling rate is always the result of an error term in the resampling process. On the other hand, if the interpolated curve remains the same, it really doesn't matter what the samples look like. For example, fractional delays will change the "ripple" in the samples, but if you add more fractional delay until it becomes an integer again, you'll be back to the original (because the underlying signal didn't change, the sampling grid just moved in time), except for potential error terms of a practical implementation.

You might want to treat the samples as impulses during the resampling process, there is nothing wrong with that. But when a WAV file is loaded into a DAW, the samples contained in the wavefile are very much not impulses, but rather just values of a band-limited signal that we have stored and which we intend to reproduce at playback (even though the whole DAC process will probably go through a couple of steps, changing the signal on the way, but that's an implementation detail; eventually we still want to get as close as possible to the original signal)... and I would assume that is also what Z1202 is trying to point out.

Post

mystran wrote: Fri Feb 21, 2020 2:23 am Z1202 posted two images, both showing the same step-transition (presumably from a square wave), but sampled at two different sampling rates. [...]
You've lost the context a bit. Z1202 was giving me a case where a user/musician gets confused by seeing "lollipops", and said that explaining it by saying "an interpolation curve of sufficiently high interpolation quality" should be good enough, while explaining it the way I do would be tougher. I don't see how, but again, I'm talking about explaining theory, not telling a musician how to understand his DAW. He said, "Maybe you should try doing that sawtooth experiment yourself to know what I'm talking about." I asked him to explain the experiment, which was initially stated as a 100 Hz sawtooth...etc.

So your comments don't apply to the exchange.
You might want to treat the samples as impulses during the resampling process, there is nothing wrong with that. But when a WAV file is loaded into a DAW, the samples contained in the wavefile are very much not impulses, but rather just values of a band-limited signal that we have stored and which we intend to reproduce at playback (even though the whole DAC process will probably go through a couple of steps, changing the signal on the way, but that's an implementation detail; eventually we still want to get as close as possible to the original signal)... and I would assume that is also what Z1202 is trying to point out.
That's a conversation abandoned earlier in the thread (pre-"Wanted to share an example..."), but I'll say again: I disagree with your statement about what the samples are. You 1) say they aren't impulses, but 2) they are samples of the band-limited waveform. Samples of a band-limited waveform are, for mathematical purposes, impulses. It's the very reason we need a lowpass filter—because the samples aren't band-limited. It's why aliasing happens, and it's why sample rate conversion requires lowpass filters. Obviously, we disagree. I don't need to change your mind, I'm fine with it, but I'm not going to agree with your assertion.

PS—Remember, the samples contained in that WAV file are PCM. That's a pulse-code-modulated signal, not a band-limited one.
My audio DSP blog: earlevel.com

Post

earlevel wrote: Samples of a band-limited waveform are, for mathematical purposes, impulses. It's the very reason we need a lowpass filter—because the samples aren't band-limited. It's why aliasing happens, and it's why sample rate conversion requires lowpass filters.
Samples are just numbers indicating the value of the analog waveform at particular times. If the waveform was band-limited, then the sampling theorem says that you can perfectly reconstruct the analog waveform from the samples. But the samples are not impulses, because impulses are (idealised) analog signals and the samples are just numbers.

You can construct a stream of impulses using the samples, and you can subsequently filter the impulses to make the analogue waveform. Or you can construct a set of sinc signals (essentially pre-filtered impulses) using the samples and use them to recreate the same waveform. But the samples themselves are neither impulses nor sinc waveforms, they are just numbers, measurements of the analog waveform at particular times.
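As a numerical illustration of that sinc construction, a small Python/NumPy sketch (the 13 Hz tone and the grid sizes are arbitrary, and the infinite sum is necessarily truncated):

```python
import numpy as np

fs = 100.0                          # sample rate, Hz
n = np.arange(-200, 200)            # sample indices; long tails tame the truncation
x_n = np.sin(2*np.pi*13.0*n/fs)     # samples of a 13 Hz sine (Nyquist is 50 Hz)

t = np.linspace(-0.5, 0.5, 1001)    # dense time grid standing in for continuous time
# Whittaker-Shannon: the sum of scaled cardinal sines through the data points
x_t = np.sum(x_n[:, None] * np.sinc(fs*t[None, :] - n[:, None]), axis=0)

err = np.max(np.abs(x_t - np.sin(2*np.pi*13.0*t)))
print(f"max reconstruction error: {err:.1e}")   # small; only the truncated tails remain
```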

Post

kryptonaut wrote: Sun Feb 23, 2020 6:13 pm [...] But the samples themselves are neither impulses nor sinc waveforms, they are just numbers, measurements of the analog waveform at particular times.
Kind of stating the obvious here, k; pretty sure no one thinks we have somehow coerced computer memory into maintaining an analog impulse.

What I said, if you scroll back to the comment that started this, was, "I don't want to start a fight, but samples represent impulses". The math works for impulses, exactly, and that is the reason it's often referred to as "ideal sampling": it represents the mathematical ideal.

And the math works for sampled analog as impulse trains too. Obviously, it works as well as you can make impulses, which is virtually perfect for something as low-bandwidth as audio. (Where you see the effect of a non-ideal pulse is mainly in the spectrum of the sample train: the images fall off as you go to very high frequencies.) But there are more economical/easier ways to reconstruct, we don't need to generate pulses—that doesn't change what the samples represent. And even using a sample-and-hold on the input is effectively multiplying by a pulse train, since we discard everything but the measurement (there is no difference between multiplying by a pulse and perfectly capturing the instantaneous peak, versus performing a s&h and measuring more leisurely, except for latency, which we don't care much about, and that the latter is easily realizable).

But you go too far to say samples are just numbers. They are numbers that represent something. Otherwise we wouldn't know what to do with them.
My audio DSP blog: earlevel.com

Post

As long as we're looping on the thread, I'll point out a couple of things:

"Samples aren't [or don't represent] impulses"—OK, then why do we test an LTI system's impulse response by feeding it a single non-zero sample? You feed an impulse to obtain an impulse response, no?

"It can't be an impulse because perfect impulses don't exist in the real world"—that's like saying we can't have digital audio because the signal must be bandlimited, yet that's impossible with a finite sequence (a bandlimited signal cannot also be time limited). Pedantry.
My audio DSP blog: earlevel.com

Post

LOL, revisiting this thread...because I'm in the process of making a video or two, and want to try to understand why I wasn't getting through. See if I can think of a different spin...
mystran wrote: Fri Feb 21, 2020 2:23 am You might want to treat the samples as impulses during the resampling process, there is nothing wrong with that.
OK...
But when a WAV file is loaded into a DAW, the samples contained in the wavefile are very much not impulses, but rather just values of a band-limited signal...
Yes, we collected samples. What you are missing here is that the process of collecting the individual, time-spaced samples is a modulation (pulse code modulation). That process creates sidebands. When we do something that makes the sidebands spread into the audio band, we call it aliasing. This cannot happen if all we have is a "band-limited signal". The samples aren't a band-limited signal. They are a modulated version of a band-limited signal.

Another hint that it's not a band-limited signal is that we can't get our analog signal back without using a lowpass filter to remove those sidebands.

And even in the digital domain, another hint is that we can't downsample to a smaller bandwidth without first filtering to remove the excess; otherwise we get aliasing. And what is aliasing? It's the sidebands from the modulation overlapping into the audio band.

All of this is super easy to confirm with your ears by generating a signal with a small bandwidth (say, 6k), and upsampling it without lowpass filtering (insert zeros to get it to 48k).
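Here's a minimal Python/NumPy sketch of exactly that experiment (a 1 kHz tone at a 6k rate, zero-stuffed by 8 up to 48k; the tone and lengths are arbitrary):

```python
import numpy as np

fs_low, L = 6000, 8                  # 6k source, zero-stuffed x8 up to 48k
n = np.arange(fs_low)                # one second of signal, for 1 Hz FFT bins
x = np.sin(2*np.pi*1000*n/fs_low)    # 1 kHz tone, well inside the 3 kHz band

up = np.zeros(len(x)*L)
up[::L] = x                          # insert zeros; no lowpass, so nothing removes the images

spec = np.abs(np.fft.rfft(up * np.hanning(len(up))))
for f in (1000, 5000, 7000, 11000, 13000, 17000, 19000, 23000):   # n*6000 +/- 1000
    print(f"{f:5d} Hz: {20*np.log10(spec[f]/spec.max()):5.1f} dB")
# every line prints ~0 dB: the sidebands are all there, at full strength
```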

Now, I realize you may say that I mucked things up by inserting zeros—I created a modulation—but I guarantee you that the same thing would happen if you had a 6k SR DAC and disabled the filter. The exact same sidebands in the audio band. Now, practical DACs use the rectangular function (zero-order hold) instead of impulses, but the resulting frequency response deviates from the impulse case exactly as expected when impulses are converted to ZOH—the sinc rolloff.

Analog sampling (BBD delays, etc.) has pretty much the same story. But that reminds me, I set about doing a video on simulating analog sampling with a modular synth: the signal times a narrow-pulse-width VCO (it must be DC-offset, 0 to max) through a balanced modulator. You can see that the result follows what I described, on a spectrum analyzer, within the limits of the electronics. Or, if you limit the bandwidth, you can hear it in the sidebands.

I'm just tossing out ideas, I'll have to think about this more. But maybe I need to understand what people are thinking, and why they have a problem with the notion that samples are impulses. As I've said before, those classic images of the repeating spectrum imply impulses; that's the only way you can have the repeating sidebands. An impulse train is the only way the math works out...
... that we have stored and which we intend to reproduce at playback (even though the whole DAC process will probably go through a couple of steps, changing the signal on the way, but that's an implementation detail; eventually we still want to get as close as possible to the original signal)... and I would assume that is also what Z1202 is trying to point out.
Implementation detail...the thing is, I can tell you what those details are and what they do mathematically. And they align with the math of the data representing an impulse train. You guys have said things, but you have not given me that.

For instance, "me" on the implementation details: Ideally the samples in digital memory would be converted to an impulse train. A lowpass filter would follow, and remove the modulation products, returning to the analog signal. But impulses aren't practical—it's far easier to use zero-order hold. ZOH causes the frequency response to follow the sinc function, though. That's not a problem for the sidebands, we intend to remove them later with the lowpass filter. But the sinc function rolls off before half the sample rate. It's still more practical to fix the audio-band rolloff with a filter, and follow with the lowpass. (Of course, these filters can be combined, and in fact it's pretty easy to account for the rolloff in the digital domain and as part of oversampling converters.)

How about "you"?
My audio DSP blog: earlevel.com

Post

earlevel wrote: Sun Feb 23, 2020 9:21 pm "Samples aren't [or don't represent] impulses"—OK, then why do we test an LTI system's impulse response by feeding it a single non-zero sample? You feed an impulse to obtain an impulse response, no?
Well, this is sort of philosophical in a sense, but when those samples are used to represent band-limited signals, they are strictly speaking not impulses, but rather data-points ("samples") on the unique band-limited signal that passes through those data-points... and if we replace all those data-points with scaled cardinal sines that we add together, we obtain said band-limited signal, though we could also obtain the same signal by fitting a polynomial (of infinite degree) through all the data-points (with infinite zero-padding, since "band-limited" implies compact support of the spectrum which means we can't have compact support in time).

But it turns out it does not matter in LTI cases, because LTI systems can be represented by convolution and convolution is commutative, so the result is the same whether we convolve by the cardinal sine before (samples are data-points of a band-limited signal) or after (samples are impulses and we band-limit afterwards when we reconstruct into continuous time) an LTI system.
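That commutativity is easy to check numerically. A small Python/NumPy sketch, with an arbitrary random FIR standing in for the LTI system and a truncated sinc standing in for the reconstruction kernel:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(256)                 # arbitrary sample stream
h = rng.standard_normal(31)                  # arbitrary stand-in for an LTI system's IR
g = np.sinc(np.arange(-15, 16) - 0.5)        # truncated sinc, stand-in "reconstruction"

a = np.convolve(np.convolve(x, g), h)        # "reconstruct" first, then filter
b = np.convolve(np.convolve(x, h), g)        # filter first, then "reconstruct"
print(np.allclose(a, b))                      # True: the order is irrelevant for LTI
```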

Once we do something non-linear (or time-varying... but that's basically the same thing) this is no longer true, and treating the samples as impulses predicts the aliasing we observe, because of the periodicity of the spectrum of a sampled signal treated as impulses... and this is where things get slightly philosophical, because now we need to choose what the "correct" behaviour should look like: are we willing to accept aliasing as the natural result of the "samples as impulses" view, or are we going to go to extra lengths to obtain results similar to what we would have obtained had we reconstructed the band-limited signal, applied the non-linear process in continuous time, then band-limited and sampled the result?

I'd generally argue that for musical purposes you (usually) would prefer the latter and take some extra pains to make it happen... but for LTI purposes (eg. linear filter design) it's all irrelevant, because convolution is commutative.
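To make that concrete, a Python/NumPy sketch of the simplest nonlinearity, squaring; the periodic-spectrum view predicts exactly where the folded product lands:

```python
import numpy as np

fs, f0, N = 48000, 15000, 48000              # one second, so FFT bins are 1 Hz
n = np.arange(N)
x = np.sin(2*np.pi*f0*n/fs)

y = x*x                                       # naive squaring at the base rate
spec = np.abs(np.fft.rfft(y * np.hanning(N)))
# x^2 contains DC and 2*f0 = 30 kHz, but 30 kHz cannot exist at fs = 48 kHz:
# it folds to 48000 - 30000 = 18000 Hz, exactly where the periodic-spectrum view puts it
print(f"level at 18 kHz: {20*np.log10(spec[18000]/spec[0]):.1f} dB relative to DC")  # -6.0
```

Doing the same squaring oversampled (say at 96 kHz), band-limiting, and then decimating would keep the 30 kHz product out of the audio band, which is the "extra pains" route described above.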

Post

mystran wrote: Mon Mar 25, 2024 2:40 am
earlevel wrote: Sun Feb 23, 2020 9:21 pm "Samples aren't [or don't represent] impulses"—OK, then why do we test an LTI system's impulse response by feeding it a single non-zero sample? You feed an impulse to obtain an impulse response, no?
[...]
Not sure if you intended to answer my statement from four years ago instead of the one I posted yesterday? Either way is OK, but I was hoping you'd address the new statements, mainly because I'm going to disagree with your reply for exactly the same reasons I disagreed four years ago, and nothing is learned.

But, I'll say it briefly...

"...when those samples are used to represent band-limited signals, they are strictly speaking not impulses"—No. Digital samples are, strictly speaking, the relative amplitude of impulses. A sampled audio signal is not the signal, it's the pulse code modulated signal.

"if we replace all those data-points with scaled cardinal sines that we add together, we obtain said band-limited signal"—Another way to say that is if we lowpass filter the PCM signal, we obtain the audio signal. Sure, that's the basis of digital audio. But it can only be done in the continuous time (analog) domain, because in the discrete time (sampled) domain, it's inherently PCM, which has infinite sidebands.

On that last point: To avoid someone twisting my words, of course you can lowpass filter in the digital domain, it's just that it will always have the sidebands. In other words, if you have 100 Hz sampled at 48k, in the digital domain it's not possible to not have 47900 and 48100...and 95900, 96100, 143900, 144100, 191900, 192100...No matter what filtering you do in the digital domain, you can't get rid of those without getting rid of 100.
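That claim is easy to verify by evaluating the DTFT of the samples directly at those frequencies; a small Python/NumPy sketch:

```python
import numpy as np

fs, N = 48000, 4096
n = np.arange(N)
x = np.sin(2*np.pi*100*n/fs)               # 100 Hz sampled at 48 kHz

def dtft_mag(x, f, fs):
    # evaluate the discrete-time Fourier transform of the samples at frequency f
    return abs(np.sum(x * np.exp(-2j*np.pi*f*np.arange(len(x))/fs)))

for f in (100, 47900, 48100, 95900, 96100):
    print(f"{f:6d} Hz: {dtft_mag(x, f, fs):.3f}")
# identical magnitudes: the sampled data cannot distinguish 100 Hz from its images
```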

I don't think you really answered that old question, though; you just restated your views. We use a single sample to return the impulse response of an LTI system because a single sample is an ideal impulse in the sampled domain. If that were not true, why does it result in the exact impulse response of the system? I won't belabor it further, I'm not trying to beat on this point; I wrote the new post because I wanted to see what you think is wrong with any of those points...
My audio DSP blog: earlevel.com

Post

Let's start with the basics, and you can tell me where it goes wrong. I think I have several books that give a similar explanation of the sampling theorem, but I pulled up one that works. The explanation is very similar to mine, and equivalent—I'll explain the difference when I get to it. This was a relatively important advanced textbook of its time, and the authors are well known:

Multirate Digital Signal Processing, Crochiere and Rabiner, 1983

The book shows a drawn continuous waveform, then with the "lollipops" drawn on (imagine it, I'll not include it). Next, it states, "One convenient way of interpreting the sampling process is as a modulation or multiplication process, as shown in Fig. 2.2(a). The continuous signal xc(t) is multiplied (modulated) by the periodic impulse train (sampling function) s(t) to give the pulse amplitude modulated (PAM) signal xc(t)s(t). The PAM signal is then discretized in time to give x(n), that is [equations I won't reproduce]."
[Attachment: Fig. 2.2 from Crochiere & Rabiner, the sampling process and its modulation interpretation]
This shows that the conceptual sampling process, regular sampling via a momentary switch at the sample rate, produces our sequence of samples, x(n). Further, that process is equivalent to multiplying (modulating) the source signal with an impulse train, also resulting in x(n).

I think that's pretty basic—both the mechanical view (a) and the modulation view (b-d) produce the same mathematical result. But let me know if you think otherwise, and why. Specifically, I'd want to know where the mathematical view produces different results than the mechanical view.

Next, the book notes that multiplication in the time domain is equivalent to convolution in the frequency domain. In Fig. 2.3, the authors illustrate the spectra of the time-domain signals of Fig. 2.2, (b) the continuous signal and (c) the pulse train, and the spectrum of the result of their convolution—and therefore the spectrum of Fig. 2.2(d)—in Fig. 2.3(c):
[Attachment: Fig. 2.3 from Crochiere & Rabiner, spectra of the continuous signal, the impulse train, and their convolution]
The end. Let me know if you agree with this so far.

But I'll note briefly where my description, giving the identical result, differs: my goal was to use the simplest math I could think of that is mathematically accurate. Discussing convolution can cause blank stares. I thought it was easier to describe the simple rules of amplitude modulation in the time domain and what it produces in the frequency domain (sum and difference frequencies—regular people can do that math). Crochiere and Rabiner describe the modulation as PAM, just as I do, but instead of describing what that means in terms of the frequency domain, they just translate the components (signal and pulse train) to the frequency domain and do a convolution of the two spectra to get the spectrum of x(n). Great for people who understand convolution intuitively, because for them the resulting image is apparent and no actual math needs to be performed. Both are exactly equivalent, just slightly different presentations.
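For anyone who wants to watch the sum-and-difference products appear, a Python/NumPy sketch using a dense grid as a stand-in for continuous time (all values arbitrary):

```python
import numpy as np

fs_dense = 192000                     # dense grid standing in for continuous time
N = fs_dense                          # one second, so FFT bins are 1 Hz
t = np.arange(N) / fs_dense
x = np.sin(2*np.pi*100*t)             # the "analog" signal: 100 Hz

s = np.zeros(N)
s[::fs_dense // 1000] = 1.0           # impulse train at a 1 kHz sampling rate

spec = np.abs(np.fft.rfft(x * s))
for f in (100, 900, 1100, 1900, 2100, 2900, 3100):
    print(f"{f:5d} Hz: {spec[f]:.1f}")
# equal-strength components at 100 Hz and at n*1000 +/- 100: the sum and
# difference products of modulating by the impulse train
```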
My audio DSP blog: earlevel.com

Post

earlevel wrote: Mon Mar 25, 2024 5:38 am "...when those samples are used to represent band-limited signals, they are strictly speaking not impulses"—No. Digital samples are, strictly speaking, the relative amplitude of impulses. A sampled audio signal is not the signal, it's the pulse code modulated signal.
The Nyquist-Shannon theorem says that if we sample a band-limited signal, we can reconstruct the same signal (theoretically) exactly. Therefore it is perfectly reasonable to treat the band-limited analog signal and its sampled representation as exactly equivalent. In fact, this exact equivalence is the whole reason Nyquist-Shannon sampling works.

Mathematically you do not need impulses and filters to reconstruct the continuous signal. You just need to interpolate the points. In fact, most of the time you'd probably first interpolate (perhaps using zero-order hold) in order to obtain some practical bounded continuous-time signal you can then filter (to correct for less than ideal interpolation)... and yes, you can treat the zero-order hold as a boxcar if your sampling rate is uniform, but interpolation works even if that's not the case and extended Nyquist-Shannon says we'll still have a unique signal band-limited to half the average sampling rate.

This is a viewpoint thing and I'm not trying to argue that one viewpoint is better than the other. The thing about viewpoints is that different viewpoints are useful in different situations.

Post

earlevel wrote: Mon Mar 25, 2024 6:58 am The book [...] states, "One convenient way of interpreting the sampling process is as a modulation or multiplication process, as shown in Fig. 2.2(a). The continuous signal xc(t) is multiplied (modulated) by the periodic impulse train (sampling function) s(t) to give the pulse amplitude modulated (PAM) signal xc(t)s(t). The PAM signal is then discretized in time to give x(n),...
Well... the authors say that this is a "convenient way of interpreting the sampling process" - but as for what these signal samples actually are, their inherent nature - I think I would also take the position that this is a somewhat philosophical question.
mystran wrote:Once we do something non-linear (or time-varying.. but that's basically the same thing)
Could you explain why these two things are equivalent? I can understand this view if the modulation signal is derived from the input signal (like in a compressor, say) - but what if the modulation signal is some external signal (like in a tremolo effect with an external LFO)? Can this situation still be recast as a nonlinear process? I can't see how. I picked that example because, if I'm not mistaken, amplitude modulation satisfies the formal criteria for linearity (homogeneity and additivity) but is clearly time-variant.

As a prototypical example of a nonlinear process, I'd pick waveshaping. In this case, it can be recast as a time-varying gain where the gain signal is derived from the input itself. But I don't see how this generalizes to all systems. Perhaps all autonomous nonlinear systems can be re-expressed as such time-varying gains with signals derived from the input - yeah... I guess we could do something with a Volterra series. But the other way around? Maybe we need to distinguish between autonomous and non-autonomous systems. The tremolo/amp-mod is non-autonomous (it has this external LFO).
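For what it's worth, the tremolo case is easy to check numerically. A Python/NumPy sketch with an arbitrary 5 Hz LFO: additivity and homogeneity hold, shift-invariance does not:

```python
import numpy as np

fs, N = 48000, 4800
n = np.arange(N)
lfo = 0.5 + 0.5*np.sin(2*np.pi*5*n/fs)       # external 5 Hz tremolo LFO

def tremolo(x):
    return lfo[:len(x)] * x                   # gain comes from an external signal

rng = np.random.default_rng(2)
x1, x2 = rng.standard_normal(N), rng.standard_normal(N)

# linearity: additivity and homogeneity both hold
print(np.allclose(tremolo(x1 + x2), tremolo(x1) + tremolo(x2)))   # True
print(np.allclose(tremolo(3.0 * x1), 3.0 * tremolo(x1)))          # True

# time invariance fails: delaying the input does not just delay the output
d = 1000
print(np.allclose(tremolo(np.roll(x1, d)), np.roll(tremolo(x1), d)))   # False
```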
My website: rs-met.com, My presences on: YouTube, GitHub, Facebook

Post

mystran wrote: Mon Mar 25, 2024 7:34 am The Nyquist-Shannon theorem says that if we sample a band-limited signal, we can reconstruct the same signal (theoretically) exactly.
Yes.
Therefore it is perfectly reasonable to treat the band-limited analog signal and its sampled representation as exactly equivalent.
This is a bad inference.

Sampled audio shares significant traits with AM radio—both are an amplitude modulation resulting in something unlistenable, but which can be demodulated back into the original. Say you record a minute of spoken word. We can take that, modulate it for AM radio, and send it over RF, along with other similar programs, to a radio that demodulates it.

The radio waves are not the original audio. The radio waves are not equivalent to audio. You can't play radio waves on your stereo, and you can't send audio over the air for 50 miles.
In fact, this exact equivalence is the whole reason Nyquist-Shannon sampling works.
No, of course not. We can't send audio over the airwaves, so we changed it (via modulation) into something with a high-frequency carrier that could be sent over the air. We have to change it back for it to be audio again.

Similarly, the problem with continuous audio is that it doesn't fit in computer memory. So we modulated it into something that can fit into computer memory. We must demodulate it to listen to it.

I don't understand how you think they are "exactly equivalent". Is it a language issue? You emphasize exact equivalence twice. Do you consider an encrypted file as exactly equivalent to the original file? It's not. The encrypted file contains the information of the original file, but is not equivalent to it. For one thing, the original file doesn't contain the information of the encrypted file, which must be true if they are equivalent. If A = B, then B = A.

You have a fundamental misunderstanding of the sampling theorem. You're a smart guy, whom I respect, but I can't mince words on this.
My audio DSP blog: earlevel.com

Post

Music Engineer wrote: Mon Mar 25, 2024 7:49 am
earlevel wrote: Mon Mar 25, 2024 6:58 am [...]
Well... the authors say that this is a "convenient way of interpreting the sampling process" - but as for what these signal samples actually are, their inherent nature - I think I would also take the position that this is a somewhat philosophical question.
Wait...they are referring to Fig. 2.2 there, saying that the sampling process of Fig. 2.2(a) is mathematically equivalent to (b, c, d). Further, they say that mathematical interpretation is convenient to proceeding with a mathematical discussion.

To put it another way, Fig. 2.2(a)—the initial description of sampling involving a switch click, click, clicking at the sample rate—is very inconvenient to talk about mathematically, because it's not a mathematical description, it's a mechanical one. So they chose an equivalent mathematical restatement.

That's all they are saying. So, you're essentially saying, "we can't discuss the rest—the part about the spectra—because they chose to change from their first, mechanical representation to a mathematical one."

That's a cop-out.

Or maybe you just misinterpreted their statement. They were not saying, "we're going to wax poetic here instead of addressing reality." This is a serious signal processing text; they were addressing it with mathematical rigor, as they should. They weren't "telling a story".
My audio DSP blog: earlevel.com

Post

Music Engineer wrote: Mon Mar 25, 2024 7:49 am
mystran wrote:Once we do something non-linear (or time-varying.. but that's basically the same thing)
Could you explain why these two things are equivalent?
Not necessarily equivalent (that's.. another philosophical thing I guess), but rather in both cases LTI theory falls apart.

Post

earlevel wrote: Mon Mar 25, 2024 8:28 am I don't understand how you think they are "exactly equivalent". Is it a language issue? You emphasize exact equivalence twice. Do you consider an encrypted file as exactly equivalent to the original file? It's not. The encrypted file contains the information of the original file, but is not equivalent to it. For one thing, the original file doesn't contain the information of the encrypted file, which must be true if they are equivalent. If A = B, then B = A.
Encryption is a poor example, as there is additional secret information (the key) in the mix. A better example would be lossless compression: even though the files (in the sense of byte streams) are not the same, the "signal" (ie. the logical file stored) is still the same.

Does a file become something else because you put it inside a zip? No. Does it become something else when you download it over the internet (so it's split and encapsulated into TCP or UDP packets)? No. For all intents and purposes it is the same file... unless you are concerned by the implementation of the codec or the networking protocol.

And.. that's the point I'm trying to make: if we are not concerned about the actual dirty details of converting between the digital and analog domains, but rather are only interested in the band-limited signal that we can (theoretically) convert back and forth in a lossless fashion, then we can treat the signals as being equivalent. While we can't store every possible time-instant of a continuous signal in finite memory, we can (theoretically, in the limit, and given exact arithmetic) obtain the value of the band-limited signal at any arbitrary point in time from only the Nyquist samples. There are no mathematical difficulties here, only practical ones.
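A tiny Python/NumPy sketch of that last claim, recovering the value at an arbitrary off-grid time from nothing but the samples (the sum is truncated, so only approximately):

```python
import numpy as np

fs = 1000.0
n = np.arange(-2000, 2000)
x_n = np.cos(2*np.pi*170.0*n/fs)           # Nyquist-rate samples of a 170 Hz cosine

t = 0.01234567                              # an arbitrary time, nowhere near a sample instant
value = np.sum(x_n * np.sinc(fs*t - n))     # evaluate the band-limited signal there
print(value, np.cos(2*np.pi*170.0*t))       # agree closely; only truncation error remains
```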

Of course, in practice we can't do this perfectly, so it would be closer to the situation with a lossy codec. If we store a picture as a JPEG or a movie as an MPEG, then certainly there is some generation loss in practice... but from the logical point of view it's still the same picture or the same movie, just slightly degraded due to (what I previously called) "implementation details."

ps. Also.. for the most part, when I say "discrete-time" or "band-limited continuous-time" I am not concerned with actual "digital" or "analog" signals, but rather two signals that exist in the realm of pure mathematics. We can define a bijection between the two domains and declare said bijection an equivalence relation. All I'm trying to say is that whether or not this is helpful or counter-productive depends on the situation.
