- KVRist
- 56 posts since 15 Feb, 2010

I'm trying to learn DSP, but I have a problem getting through digital signal decomposition. I thought perhaps I could get some friendly nudges in the right direction here. It's not advanced. I'm just a bit stuck, I think.

I'm learning DSP in the hope of writing my own EQ and distortions (my dream ones, whose properties I've already tested and mapped out). I'm no school kid trying to cheat on exams or anything.

I'm using 'The Scientist and Engineer's Guide to Digital Signal Processing' by Steven W. Smith, available online (http://www.dspguide.com). The chapter I'm on (and referring to) is Chapter 5, common decompositions. (http://www.dspguide.com/ch5/7.htm)

I guess my problem is that I don't understand how a continuous wave can be decomposed into the waveforms that are demonstrated.

For example, I do understand this example ..

.. that adding all three signals y0[n]+y1[n]+y2[n] point by point will result in the waveform at the lower right of the graphic (x[n]).

But then decomposition schemes like even/odd and the Fourier decomposition are covered in the text, and I can no longer understand how the x[n] waveform can be decomposed into the waveforms illustrated in the pics (5-14), neither by reading the text nor by looking at the pics.

Maybe I'm asking the wrong question here, but my spontaneous question is this: what kind of filtering process is used to disperse the data in x[n] into xe[n] and xo[n]? (illustration 5-14)

I can't even imagine that adding xe[n] and xo[n] together will reproduce the original x[n] wave (which is a requirement of a linear system). Will it?

Thank you kindly, from a striving newbie

- KVRer
- 10 posts since 12 May, 2016

At least for me, the equations for computing xe[n] and xo[n] are not visible on the website. However, the PDF version shows them:

http://www.dspguide.com/CH5.PDF

If you check the equation for the even signal, it shows that its nth value is computed as the average of the nth sample from the start and the nth sample from the end of the signal:

xe[n] = 1/2 * (x[n] + x[N-n]),

where N is the length of the signal. The odd signal is computed from the difference of the samples:

xo[n] = 1/2 * (x[n] - x[N-n]).

You can probably see from the equations that xe[n] + xo[n] = x[n]. That is, the even and odd signals sum to form the original.
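
If you want to convince yourself numerically, here's a quick sketch (Python/NumPy). One assumption to note: the index N-n is taken modulo N, so that x[N] wraps around to x[0], treating the signal as circular.

```python
import numpy as np

x = np.array([3.0, -1.0, 4.0, 1.0, -5.0, 9.0, 2.0, 6.0])  # any test signal
N = len(x)
n = np.arange(N)

x_rev = x[(N - n) % N]        # x[N-n], with x[N] wrapping around to x[0]

xe = 0.5 * (x + x_rev)        # even (symmetric) part
xo = 0.5 * (x - x_rev)        # odd (antisymmetric) part

print(np.allclose(xe + xo, x))  # True: they sum back to the original
```

No filtering is involved: each output sample is just a weighted sum of two input samples, so the decomposition is trivially undone by adding.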

- KVRAF
- 1845 posts since 29 May, 2012

DSP is about linear and time invariant systems. Think about the second part, "time invariant" a bit, and you'll see.

~stratum~

- KVRist
- 56 posts since 15 Feb, 2010

iiphii wrote:At least for me the equations for computing xe[n] and xo[n] are ...

stratum wrote:DSP is about linear and time invariant systems ...

Aha. Now I understand it.

By having a strong coffee, looking at the PDF, and pushing through a few rather equivocal angles of explanation ... I got it.

Thanks guys. Really appreciate it!

I'm on to the FFT now, it's a quagmire, the first decomposition type that has the potential to make the student decompose. Z-transform and Laplace coming next, phew.

Man I just wanted to write good songs

- KVRian
- 789 posts since 13 Mar, 2012

se729 wrote:I'm on to FFT now, it's a quagmire, the first decomposition type that has the potential to make the student decompose.

The FFT is really a strange thing.

It had the same decomposing effect on my brain when I first started to read about the DFT. But when you finally arrive at the point where you understand the FFT, your brain suddenly switches to "holy shit, that's so f**king stupidly simple .. hello butterfly. Gauß, Cooley and Tukey are damn geniuses"

~~ ॐ http://soundcloud.com/mfr ॐ ~~

- KVRAF
- 3764 posts since 8 Mar, 2004, from Berlin, Germany

se729 wrote:I'm on to FFT now, it's a quagmire, the first decomposition type that has the potential to make the student decompose.

i wouldn't worry so much about understanding the inner workings of an FFT algorithm. just notice that it's an efficient algorithm to compute the DFT and if you want to use it, use one of the many readily available "white-box" implementations (i like this one: http://www.kurims.kyoto-u.ac.jp/~ooura/fft.html). from a conceptual point of view, understanding the DFT itself is much more important - how it computes its coefficients by correlating the signal with the various sinusoids, and how the signal can be reconstructed by summing those sinusoids (using the coeffs as weights).
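
To illustrate that conceptual view, here's a brute-force sketch (Python/NumPy, arbitrary test signal): the DFT computed literally as correlations with sinusoids, the signal rebuilt as their weighted sum, and a check that the library FFT gives the same coefficients.

```python
import numpy as np

x = np.array([1.0, 2.0, 0.0, -1.0, 1.5, -0.5, 0.0, 1.0])
N = len(x)
n = np.arange(N)

# analysis: correlate the signal with each complex sinusoid exp(-j*2*pi*k*n/N)
X = np.array([np.sum(x * np.exp(-2j * np.pi * k * n / N)) for k in range(N)])

# synthesis: sum the sinusoids, weighted by the coefficients (and scaled by 1/N)
x_rec = np.sum(X[:, None] * np.exp(2j * np.pi * np.outer(np.arange(N), n) / N),
               axis=0) / N

print(np.allclose(x_rec.real, x))     # True: the weighted sum reconstructs the signal
print(np.allclose(X, np.fft.fft(x)))  # True: the FFT computes exactly these coefficients
```

The FFT is just a faster route to the same X; nothing in the result depends on the butterfly machinery.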

- KVRian
- 954 posts since 11 Apr, 2002

Music Engineer wrote:i wouldn't worry so much about understanding the inner workings of an FFT algorithm. just notice that it's an efficient algorithm to compute the DFT ...

I'd say it's even more important to understand the continuous-time Fourier series and transform, because they are simpler, since they don't have to deal with time quantization. Then one can quickly understand which effects time quantization adds on top of that. But one doesn't need to go into much detail on those, just get the basic ideas and properties (most importantly the convolution property and the fact that real signals have Hermitian spectra). The rest can be skipped (you can return to the details much later, if you wish). Similar considerations apply to the Laplace and Z-transforms. Whether one needs to formally study discrete-time difference systems is another big question. IMHO, from the practical (and possibly even didactical) perspective it might make more sense to start right away with TPT (aka ZDF) filters. In that case the formal theory of differential (not difference!) systems might be more useful, but again it's not a must.

- KVRist
- 56 posts since 15 Feb, 2010

Thanks guys. I took your advice and am waiting with the FFT until I know more in general. I was losing my hair trying to wrap my head around those series anyway

So back onto convolution. And I drove into a snow pile right away that I could use help with. The text and also the videos mess things up something awful, using the same terms for different things, and describing things as if isolated from each other, even within the same principles. It's a maze! (http://www.dspguide.com/ch6/1.htm and http://www.dspguide.com/ch6/2.htm)

Can I ask you:

In the simplest of systems, is there a difference between the delta function δ[n] (also known as the 'unit impulse') and the system input x[n]?

If so, what is that difference?

If not, why do they carry different names, x[n] vs δ[n]?

(assuming a fully linear system here)

In the simplest of systems, is there a difference between the "impulse response h[n]" and the "system output y[n]"?

If so, what is that difference?

If not, why do they carry different names, y[n] vs h[n]?

(assuming a fully linear system here too)

Have a look at the above image. It doesn't make sense to me. Assuming that the delta function is the same thing as what is elsewhere called the input (x[n]), and the impulse response is the same thing as what is elsewhere called the output (from running the delta through a linear system, y[n]), then the impulse response should be identical to the delta in the pic above. But the image shows otherwise. Unless the impulse response is NOT the output for a unit impulse input, and then the picture would be right?

Many things in the text support the notion that it is not. I don't know what to think.

Furthermore, the impulse response is described as something other than the linear system output, with different characteristics. The author writes "If we know a system's impulse response, then we can calculate what the output will be for any possible input signal. This means we know everything about the system." That can only be true if the term 'impulse response' refers to something ELSE than the system output. Because you simply can't calculate the output from an unknown input, if there is manipulation involved, and you only have the output (and not the input) from a previous process to compare with.

Then later, the convolution process is described as x[n] * h[n] = y[n], which contradicts some of the above.

I'm going nuts here. Now I'm into shifting the delta in time and amplitude, and the impression from the text is that 'these changes are reflected in the impulse response as well, which will create a different result in the output'. I get the feeling that the delta and the input are the same thing, but that the impulse response and the output are not always the same thing. And this same confusion happens when you look at other learning sources as well. It's incredible. I need to be sure at this stage, because otherwise I'll screw up badly. Might have to call CSI here.

- KVRian
- 954 posts since 11 Apr, 2002

Didn't read all of your question in full detail, so my apologies if I misunderstood something.

System input and output signals are what they are called: the input and the output

Similarly, delta function (sometimes referred to as delta sequence in discrete time) is what it is called: a function.

The confusion might originate from the fact that it's common to "test" the system by sending the delta function to its input (that is, using an input signal equal to the delta function). The system output obtained in this case is referred to as the impulse response of the system (because the delta function can also be referred to as "the impulse", although the word "impulse" can sometimes mean other signal forms).

Why is this important? Because we can represent any discrete-time signal, particularly the system's input signal, as a linear combination of time-shifted delta functions, which can be then written as a convolution of the input signal with the delta function. Then, from the LTI property of the system it follows that the output signal will be the same linear combination, but of impulse responses instead of delta functions, that is the convolution of the input signal with the impulse response of the system.
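
A small numerical sketch of that last paragraph (Python/NumPy; the 3-tap system below is made up purely for illustration): we "test" the system with the delta sequence to obtain h[n], and then check that the output for an arbitrary input equals the convolution of that input with h[n].

```python
import numpy as np

def system(x):
    """An example LTI system: y[n] = 0.5*x[n] + 0.3*x[n-1] + 0.2*x[n-2]."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = 0.5 * x[n]
        if n >= 1: y[n] += 0.3 * x[n - 1]
        if n >= 2: y[n] += 0.2 * x[n - 2]
    return y

# "test" the system with the delta sequence; the output is the impulse response h[n]
delta = np.zeros(8)
delta[0] = 1.0
h = system(delta)                  # [0.5, 0.3, 0.2, 0, 0, 0, 0, 0]

# the output for ANY input equals the convolution of that input with h[n]
x = np.array([1.0, -2.0, 3.0, 0.5, 0.0, 1.0, -1.0, 2.0])
print(np.allclose(system(x), np.convolve(x, h)[:len(x)]))  # True
```

So x[n] and δ[n] play the same role (signals at the input), and y[n] and h[n] play the same role (signals at the output); h[n] is simply the *particular* output you get when the input happens to be δ[n].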

PS. Personally, I really doubt that this is the best way to start learning digital filters, at least as long as music DSP application is assumed. The impulse response paradigm is not the one of primary importance, unless you want to do FIR filters (which are primarily used in music instrument/effect software for linear-phase EQing). And even then, you don't really need to be explicitly aware of this paradigm, knowing the convolution property of the Fourier transform suffices. So while I wouldn't call this concept fully esoteric, you can do a lot without knowing it at all, and other concepts (such as the transfer function) are much more important in practice and can be learned independently (also without involving the concept of Z-transform etc). But this is AFAIK what all classical texts on DSP traditionally do.

Edit: if one really wants a formal proof of the "transfer function theory", control theory might be a better place to look, where the state-space form is discussed and analyzed. Because this has more connection with how a discrete-time system "really works", rather than formal abstract proofs using the impulse responses and the Z-transform. But again, this is not really necessary for a beginner IMHO.

- KVRian
- 954 posts since 11 Apr, 2002

If you really want to learn classical digital filter theory (why this isn't necessarily the best idea, I'll explain later), then I would suggest the following "milestones":

- Learn the concept of discrete-time block diagrams (consisting of multipliers, summators and unit delays).

- Understand the idea of the LTI (linearity and time-invariance). It should be intuitively clear that for the mentioned block diagrams the LTI property holds as long as the multiplier gains are constant (summators and unit delays are always LTI, since they don't have any parameters which can vary with time). The LTI property holds not only for the outputs of such diagrams but for signals picked up at any arbitrary point, since any such point can be considered as an output.

- According to the Fourier theory, any discrete-time signal can be represented as a linear combination of complex sinusoids exp(j*w*n), where n is the sample index (learn Fourier theory to the point where you can understand this statement).

- Letting a linear combination of complex sinusoids through an LTI system is equal to letting each sinusoid through the system separately, and computing the linear combination of the results (this is the definition of linearity).

- Let z = exp(j*w). Then each sinusoid in question can be written as exp(j*w*n) = z^n

- Given a signal x[n]=z^n at an input of a unit delay, the output signal is obviously y[n]=exp(j*w*(n-1))=z^(n-1)=z^-1 * z^n. That is y[n] = z^-1 * x[n]

- For a multiplier y[n] = g*x[n], where g is the multiplication coefficient

- For a summator y[n] = x1[n]+x2[n]

- Using these relationships, all signals in the system can be written in the form H*x[n], where H is a coefficient which doesn't depend on n and where we still assume x[n]=z^n. This should be obvious.

- The signals have the same form if feedback is involved (this piece of knowledge needs to be taken for granted unless you want to go into control theory). Using this knowledge, we can find the unknown coefficients H of such signals from simple algebraic equations (simply pretend we know that all signals are equal to H[i]*x[n] (where i is the index of the signal), write the equation for all elements of the block diagram and cancel the common factor x[n] from all equations).

- One of these signals is the output signal of the system, and it also has the form H*x[n]. The coefficient H clearly depends on z and thus is a function of z: H=H(z). The function H(z) is called the transfer function. Obviously |H(z)| defines the change of the sinusoidal signal's amplitude and arg H(z) defines the change of the phase.

- Now, given a block diagram we know how to find the respective transfer function and therefore its amplitude and phase responses

- In order to construct a filter we need to know either the block diagram (but then we have already constructed the filter, so there is kind of no question to be answered) or only the transfer function. How do we get the transfer function? A common approach is to take an analog filter's transfer function and apply the bilinear transform to it, so you need to study what the bilinear transform is (in the context of DSP).

- Where do you get the analog transfer function from? A good source is RBJ's Audio EQ Cookbook; it already contains the results of the bilinear transform too (IIRC).

- Now that we have the digital transfer function, how do we find the corresponding block diagram? Now this is the big question. The classical textbook approach is to use a direct form (study direct forms). However, direct forms have precision loss issues, plus they don't respond well to the changes of filter's parameters. So we need to find some other block diagrams with the same transfer function (one says we need to find a good topology for the filter).

- The topologies which are almost always good are analog filter topologies. Unfortunately, when we performed the bilinear transform we discarded the topology information from the filter and kept only the transfer function, replacing the topology with the direct form. The trick is therefore to perform the bilinear transform, while preserving the filter's topology (so called topology-preserving transform, aka TPT aka ZDF). In this case you don't need to really understand the discrete-time filter theory, it's more important to understand the analog filter theory, so that you can choose a good analog topology. The TPT will provide a routine to build its digital model. So much for the need to understand digital filter theory (I don't want to say that it's useless, but I want to say that it's not of primary use, you probably can do quite well without it, while you can't do quite well knowing only the classical digital filter theory).
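
The H(z) derivation in the steps above can be checked numerically on a trivial one-pole block diagram (Python sketch; the coefficient values are arbitrary): write every signal as H*x[n] with x[n]=z^n, cancel x[n], and compare the resulting H(z) against what the system actually outputs in steady state.

```python
import numpy as np

a, b = 0.9, 0.1   # arbitrary coefficients for y[n] = b*x[n] + a*y[n-1]

def one_pole(x):
    y = np.zeros(len(x), dtype=complex)
    state = 0.0
    for n in range(len(x)):
        y[n] = b * x[n] + a * state   # one multiplier, one summator, one unit delay
        state = y[n]
    return y

# writing y[n] = H*x[n] with x[n] = z^n and cancelling x[n] gives H(z) = b / (1 - a*z^-1)
w = 0.3                # some normalized angular frequency
z = np.exp(1j * w)
H = b / (1 - a / z)

# feed the complex sinusoid z^n; after the transient dies out, the output is H * z^n
n = np.arange(400)
x = z ** n
y = one_pole(x)
print(np.allclose(y[-1], H * x[-1]))   # True: |H| and arg H give the amplitude/phase change
```
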

Edit: as for the FIR filters, simply notice that the respective block diagrams implement a convolution of the input signal with the sequence of the filter's coefficients (which are also the coefficients of the transfer function). Now use the convolution property of Fourier transform (the spectrum of the convolution is the product of spectra of convolved signals). Therefore the Fourier transform of the coefficient sequence gives you the information about amplitude and phase changes. It can be noticed that the Fourier transform of the coefficient sequence is equal to H(z) evaluated at z=exp(j*w) (you simply get identical expressions), corresponding to the fact that H(z) encodes amplitude and phase responses. Thus, given a desired H(exp(j*w)), perform the inverse Fourier transform to obtain the coefficients. Usually this would produce an infinitely long sequence of coefficients, this sequence needs to be windowed (study the windowing technique) to end up with a finite coefficient sequence.
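
That windowing recipe can be sketched as well (Python; the cutoff, tap count and window choice below are arbitrary): inverse-transform an ideal brickwall response into a sinc sequence, window it to a finite length, and check the realized magnitude response in the passband and stopband.

```python
import numpy as np

wc = 0.4 * np.pi                   # desired cutoff of an ideal lowpass H(e^jw)
M = 51                             # odd tap count -> symmetric, linear-phase FIR
n = np.arange(M) - (M - 1) // 2    # center the (truncated) infinite impulse response

# inverse Fourier transform of the ideal brickwall response is a sinc sequence
h = (wc / np.pi) * np.sinc(wc * n / np.pi)
h *= np.hamming(M)                 # window it to tame the truncation ripple

def mag(w):
    """|H(e^jw)| of the finite filter: Fourier transform of the coefficient sequence."""
    return abs(np.sum(h * np.exp(-1j * w * np.arange(M))))

print(abs(mag(0.1 * np.pi) - 1) < 0.01)  # True: near unity gain in the passband
print(mag(0.8 * np.pi) < 0.01)           # True: strong attenuation in the stopband
```
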

- KVRist
- 56 posts since 15 Feb, 2010

Z1202 wrote:Didn't read all of your question in full detail, so my apologies if I misunderstood something.

System input and output signals are what they are called: the input and the output

Similarly, delta function (sometimes referred to as delta sequence in discrete time) is what it is called: a function.

Thanx for helping.

Aha, so the delta is to be thought of as a function. That constitutes a concept or idea, to me. The 'input' can refer to the content being sent in, or to the physical input facility itself, or both, without separating which one you're referring to at the moment. This wasn't clear in the writing and pics, and I had nothing to reference-check it against.

And I thought Fourier Transform sucked. But now I know what it means. Thx for that.

Z1202 wrote:Why is this important? Because we can represent any discrete-time signal, particularly the system's input signal, as a linear combination of time-shifted delta functions, which can be then ...

Well, it's important to learn it, right? Otherwise whatever you learn on top of this will be as warped as your misunderstanding of the underlying base.

It seems that you are using concepts outside of this knowledge to back-engineer this in your mind, to prove it. You know 'the wider landscape'. Me, I didn't know anything - up until this moment when you verified things for me. I can't inherently tell what is right and what is wrong. I've got nothing to reference-check it against.

Z1202 wrote:PS. Personally, I really doubt that this is the best way to start learning digital filters, at least as long as music DSP application is assumed. The impulse response paradigm is not the one of primary importance, unless you want to do FIR filters (which are primarily used in music instrument/effect software for linear-phase EQing). And even then, you don't really need to be explicitly aware of this paradigm, knowing the convolution property of the Fourier transform suffices. So while I wouldn't call this concept fully esoteric, you can do a lot without knowing it at all, and other concepts (such as the transfer function) are much more important in practice and can be learned independently (also without involving the concept of Z-transform etc). But this is AFAIK what all classical texts on DSP traditionally do.

Ah, going for the heart of things. You're probably right about this too. And I just wasn't sure what other route to take.

I don't even know the proper term for the subject area I want to learn. And even if I do learn such a term, I'm still not sure it's the right thing, leading to what I want. I mean, it's ... you don't even know what to ask for. If I'm not careful, I might start down a road that will make me an excellent tailor, by mistake.

No, Music DSP is the only thing for me, at this time. I'm mostly interested in EQ and distortion now, IIR.

Initially, I was trying to figure out how to process and combine coefficients to make a filter. And to be able to adjust the filter according to taste etc. But I found that the technique was too far above my head. So to get there, I need to learn some DSP.

FIR isn't a favorite: the latency that comes with steep cutoff slopes, pre-ringing, etc.

It's like with anyone's musical progress, I guess. I started playing music when I was about 4, fumbled around on my own, local music school taught me some organizing things, years go by, own experiences, own thinking, playing, years of lessons, things gradually bloom, until I'm going through athlete-type music training in a 5-year music university program.

That is kind of how I see this programming thing developing for me too, in the long term. So, gotta start with some fumbling around.

I don't have any guidance, mentor, or friends/family who can help. And I'm NOT embarrassing myself doing the tearful, needy, pleady freebie thing with online ppl who may know what I want to know. I hate that behavior.

A university course? Sure, but those are usually so general, all-encompassing and over-arching. I would save so much mental willpower, time (and money) by going for what I personally need to know, at this time. Then, things will gradually bloom out with time anyway.

So then what to do? I was recommended that book, Dspguide, and through it figured that digital signal processing would probably be a good angle to approach this from. I know programming reasonably well, and I can follow advanced mathematical descriptions and equations as long as I understand the traditional symbolic meanings (they are slightly different in DSP than in pure math). I'm filling in the holes with Youtube videos, Wikipedia and Google, and hoping to throw out a few balls here and see what comes back. I didn't know better than to just follow that book, chapter by chapter from the beginning, hoping to get wise some day.

I guess I don't have the overview to know if this is a good way to learn what I hope to learn, or not. You may know.

- KVRist
- 56 posts since 15 Feb, 2010

Z1202 wrote:Try this. If you don't have problems reading complex numbers and calculus notation, there shouldn't be any major obstacles. Some people even claim the book can be read in one day

Oh? Hm. Unknown paper to me. Ok, thanks. I'll have a go at it.

Wait, 148 pages of technical explanation, in 1 day??

Naaah? Whoever does that must clearly know 95% of the technical principles already. I doubt your eyes could even read every word of 148 pages in 1 day, much less stop and think. But .. I'll give it a shot.

No problems with complex calculus, well ehm .. that is of course a matter of where you set the threshold

N.I.? Are there any other interesting publications at this address, perhaps?

- KVRist
- 249 posts since 7 Feb, 2017

First, the continuous wave function is assumed to be bandlimited, so that a sum of weighted, uniformly delayed sinc functions can perfectly reconstruct the original function; hence only a discrete set of digital samples needs to be recorded.
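A rough Python sketch of that idea (Shannon/Whittaker interpolation), with made-up numbers: sample a 1 Hz sine well below the Nyquist frequency, then rebuild a value between two samples from weighted, uniformly delayed sincs. With a finite number of samples the sum is only approximate near the edges, so the test point is chosen mid-stream:

```python
import math

def sinc(x):
    """Normalized sinc: sin(pi*x)/(pi*x), with sinc(0) = 1."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

fs = 8.0          # sample rate, well above twice the 1 Hz signal frequency
T = 1.0 / fs
N = 400           # plenty of samples so truncation error stays small mid-stream
samples = [math.sin(2 * math.pi * 1.0 * k * T) for k in range(N)]

def reconstruct(t):
    # Sum of weighted, uniformly delayed sinc functions
    return sum(samples[k] * sinc((t - k * T) / T) for k in range(N))

t = 200.3 * T     # a point between samples, far from both edges
print(reconstruct(t), math.sin(2 * math.pi * t))  # nearly identical values
```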

Second, the wave function is assumed to be the output of a linear time-invariant system, which lends itself to being fully represented by an impulse response or transfer function. I.e. output signal = convolution of input signal and impulse response; more concretely, output = sum of delayed and scaled copies of the impulse response, one per input sample. To see why, decompose the input signal into a sum of weighted and delayed pulses before applying the linearity and time-invariance properties.
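That convolution view can be sketched in a few lines of Python (the signals here are made-up examples): each input sample launches a delayed, scaled copy of the impulse response, and the output is their sum:

```python
def convolve(x, h):
    """y[n] = sum_k x[k] * h[n-k]: each input sample x[k] contributes
    a copy of the impulse response h, delayed by k and scaled by x[k]."""
    y = [0.0] * (len(x) + len(h) - 1)
    for k, xk in enumerate(x):
        for m, hm in enumerate(h):
            y[k + m] += xk * hm
    return y

x = [1.0, 2.0, 3.0]    # input signal
h = [0.5, 0.5]         # impulse response of a 2-tap averaging filter
print(convolve(x, h))  # [0.5, 1.5, 2.5, 1.5]
```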

- KVRian
- 1044 posts since 9 Jan, 2006

For someone starting out I would recommend a few things:

The Scientist and Engineer's Guide to DSP, as it is freely available and the material should be accessible to most people. The downside, as pointed out, is that it doesn't really cover typical audio plugin filter design.

Designing Audio Effect Plugins in C++ is great for quickly getting up to speed with things like analysing transfer functions using complex sinusoids, the z-transform, why multiplying by z^-1 results in a sample delay, etc. It gets a hard time for having rubbish C++ code examples, but the theory side of things is well done IMHO.

Depending on your maths background, a book on engineering mathematics (though it seems like you have decent math skills).

The Art of VA Filter Design, of course; if you have the maths chops, it's great.

After all that you should be able to direct your own further learning.
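As a taste of one point mentioned above (why multiplying by z^-1 is a one-sample delay): on a complex sinusoid e^{jwn}, delaying by one sample is exactly the same as multiplying by e^{-jw}, which is z^-1 evaluated on the unit circle. A small Python sketch with an arbitrary frequency:

```python
import cmath, math

w = 0.3 * math.pi  # an arbitrary normalized frequency (radians/sample)
x = [cmath.exp(1j * w * n) for n in range(8)]        # x[n] = e^{jwn}

# One-sample delay: y[n] = x[n-1] = e^{jw(n-1)}
y = [cmath.exp(1j * w * (n - 1)) for n in range(8)]

# On this input, the delay acts as multiplication by z^-1 = e^{-jw}:
z_inv = cmath.exp(-1j * w)
for n in range(8):
    assert abs(y[n] - z_inv * x[n]) < 1e-12
print("one-sample delay == multiplication by e^{-jw} on a complex sinusoid")
```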
