
Andrew Souter wrote:

earlevel wrote:

2048 or 4096 are good choices for audio rate, the main reason to go lower is so you can sweep a bright waveform (sawtooth) to sub audio without the bandlimited nature becoming apparent.

...however a "bright" waveform such as a raw saw or square has a 1/f amplitude ratio, so the 1024th partial is down ~60dB anyway... and one could project that the average waveform has even less energy at the 1024th partial.

2048 samples, 1024 partials, is enough 99% of the time...

I'm not sure whether you got my point—you seem to have misinterpreted it, but I might be wrong.

What I meant was...play this on your good monitors or headphones (unfortunately, it's mp3 for better browser compatibility, but good enough to get the point across):

http://www.earlevel.com/main/wp-content/uploads/2012/06/saw-sub-audio-test-20-20k-2048-20s.mp3

By 5 seconds you're already losing apparent harmonics; by 8, 9, 10 it sounds like you're sweeping a lowpass filter down with it... A real sawtooth would not get dull as you dropped.

But here's with 32k tables:

http://www.earlevel.com/main/wp-content/uploads/2012/06/saw-sub-audio-test-20-20k-2048-+-32768-ramp-20s.mp3

As I said, 2048 is a good minimum number. I like going another octave with 4096, which gives full bandwidth to 20 Hz and allows going a bit lower before crapping out.

Statistics: Posted by earlevel — Sat Aug 18, 2018 9:10 am


earlevel wrote:

2048 or 4096 are good choices for audio rate, the main reason to go lower is so you can sweep a bright waveform (sawtooth) to sub audio without the bandlimited nature becoming apparent.

...however a "bright" waveform such as a raw saw or square has a 1/f amplitude ratio, so the 1024th partial is down ~60dB anyway... and one could project that the average waveform has even less energy at the 1024th partial.

2048 samples, 1024 partials, is enough 99% of the time...
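That ~60dB figure is easy to verify; for an ideal sawtooth the partial amplitudes fall off as 1/n (a quick sketch):

```python
import math

def sawtooth_partial_db(n):
    """Level of the nth partial of an ideal sawtooth (1/n amplitude law),
    in dB relative to the fundamental."""
    return 20.0 * math.log10(1.0 / n)

print(round(sawtooth_partial_db(1024), 1))  # -60.2
```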

Statistics: Posted by Andrew Souter — Sat Aug 18, 2018 3:15 am


ps. while the Wikipedia text works in continuous time, in order to do the same in discrete time you (quite literally) just substitute z for s and the basic idea is exactly the same.
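In code terms, that substitution amounts to evaluating the continuous transfer function on the imaginary axis (s = j2πf) and the discrete one on the unit circle (z = e^(j2πf/fs)). A sketch with a one-pole lowpass in each domain (the particular filter is just for illustration):

```python
import cmath
import math

def analog_mag(f, fc):
    # H(s) = 1 / (1 + s/wc), evaluated at s = j*2*pi*f
    s = 1j * 2.0 * math.pi * f
    return abs(1.0 / (1.0 + s / (2.0 * math.pi * fc)))

def digital_mag(f, fs, a):
    # H(z) = (1 - a) / (1 - a*z^-1), evaluated at z = exp(j*2*pi*f/fs)
    z = cmath.exp(1j * 2.0 * math.pi * f / fs)
    return abs((1.0 - a) / (1.0 - a / z))

print(round(analog_mag(1000.0, 1000.0), 4))  # 0.7071 (-3 dB at cutoff)
print(digital_mag(0.0, 48000.0, 0.9))        # 1.0 (unity at DC)
```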

Statistics: Posted by mystran — Sat Aug 18, 2018 2:12 am


Andy's doc contains a Bode plot for the filter types which is exactly what I'm looking for (page 11).

On page 12 it looks like it has all the formulas for each individual filter type which are worked out using 'Solve'. They get fed into the BodePlotSZ function for plotting the charts in the doc. But I'm finding it difficult to turn that function into code.

The table code looks like this:

- Code:
`Table[LogLinearPlot[{db[(responseS) /. {g -> 2 $pi 2^wc, k -> damp, s -> 2 $pi i w, A -> Power[10, gaindb/40]}], db[(responseZ) /. {g -> Tan[$pi 2^wc], k -> damp, z -> Exp[2 $pi i w], A -> Power[10, gaindb/40]}]}, {w, 0.01, 0.5}, PlotLabel -> title, PlotRange -> {-30, 20}, GridLines -> Automatic, GridLinesStyle -> LightGray], Evaluate[wcrange]]`

I'm not sure what 'damp' is, what the 'i' symbol is...I think the 'wc' is a complex?

Anyhow, feeding the solution for a low pass into it using the following transfer function should give the magnitude for the Bode plot (z is the frequency):

- Code:
`lp = g^2 * (1 + z)^2 / ((-1 + z)^2 + g^2 * (1 + z)^2 + g * k * (-1 + z^2))`
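A sketch of turning that into code: substitute z = e^(j2πf/fs), take the absolute value, and convert to dB. This assumes g = tan(π·fc/fs) and a g²·(1+z)² denominator term matching the numerator:

```python
import cmath
import math

def svf_lowpass_mag(f, fs, fc, k):
    g = math.tan(math.pi * fc / fs)        # prewarped cutoff
    z = cmath.exp(2j * math.pi * f / fs)   # evaluate on the unit circle
    num = g**2 * (1 + z)**2
    den = (-1 + z)**2 + g**2 * (1 + z)**2 + g * k * (-1 + z**2)
    return abs(num / den)

def svf_lowpass_db(f, fs, fc, k):
    return 20.0 * math.log10(svf_lowpass_mag(f, fs, fc, k))

print(svf_lowpass_mag(0.0, 48000.0, 1000.0, 1.0))               # 1.0 (unity at DC)
print(round(svf_lowpass_mag(1000.0, 48000.0, 1000.0, 0.5), 6))  # 2.0 (1/k at fc)
```

A quick sanity check for the plot: the gain at fc comes out as 1/k for this formulation.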

Even then, what's really needed is a solution given m0, m1, m2 and v0, v1, v2 as per the final specification on page 31.

If there's no way of easily doing that then I guess it's back to drawing the biquad.

Statistics: Posted by JustinJ — Sat Aug 18, 2018 1:35 am


You can use the equivalent biquad equation as you've noted. Or, as someone pointed out to me, it may be simpler to use the prototype analog transfer function. You just need to take the frequency warping into account.
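A sketch of the analog-prototype route for a 2nd-order lowpass; the warping is the usual bilinear-transform tan() mapping, and the prototype H(s) = g²/(s² + k·g·s + g²) is an assumption matching the SVF discussed in this thread:

```python
import math

def lp2_warped_mag(f, fs, fc, k):
    """Magnitude of the analog prototype H(s) = g^2 / (s^2 + k*g*s + g^2),
    evaluated at the bilinear-warped frequency s = j*tan(pi*f/fs)."""
    g = math.tan(math.pi * fc / fs)      # prewarped cutoff
    s = 1j * math.tan(math.pi * f / fs)  # warped evaluation frequency
    return abs(g**2 / (s * s + k * g * s + g * g))

print(lp2_warped_mag(0.0, 48000.0, 1000.0, 1.0))               # 1.0 at DC
print(round(lp2_warped_mag(1000.0, 48000.0, 1000.0, 1.0), 6))  # 1.0 at fc (= 1/k)
```

Because the bilinear transform maps frequencies exactly through tan(), this gives the same magnitude as evaluating the discretized filter's H(z) directly.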

Statistics: Posted by matt42 — Fri Aug 17, 2018 9:59 pm


So far, so good. The SVF filters are implemented and work/sound great. Not that familiar with Matlab language so that's been a learning curve.

I'm having difficulty with the z-transfer function to get the amplitude, given a frequency, in order to draw the filter response. Maths isn't a strength, computer languages are. Every implementation I've found of these filters doesn't include a function to get amplitude given frequency.

I could use the SVF for audio but render using the biquad as the SVF formulas are designed to match them anyhow. But I'd rather implement it properly.

Any pointers to code or pseudo code for SVF amplitude given frequency?

Statistics: Posted by JustinJ — Fri Aug 17, 2018 6:32 pm


S0lo wrote:

That's why I'm saying "Variable Slope" not "Variable Asymptotic Value" if that makes any sense. There is no notion of a constant slope over a long range of frequencies in this situation. I'm aware of that.

Yes the ultimate slope of a lowpass filter above a couple of octaves above the cutoff frequency is its order * -6 dB per oct. -6 for 1st order, -24 for 4th order, etc.

If you want to carve out "interesting" slopes over a few octaves in the middle then if your synth architecture will allow it you can do it by feeding filter blocks in series. And you can do it by feeding filter blocks in parallel and then mixing the filter outputs together, possibly also mixing-in some of the original input signal. There are many ways to carve out interesting slopes in the middle. When you start parallel-mixing-in some of the original input signal, the responses can sometimes be various "shelf" shapes.

If your synth architecture will allow it, when parallel mixing it gives more freedom if you can mix each branch either inverted or non-inverted. In simple mixing code, it is convenient to think of positive and negative gain-- for instance something like Gain_1 = 1.0, Gain_2 = -0.5, etc-- the mixing of each sample would be Output = (Signal_1 * 1.0) + (Signal_2 * -0.5), which is basically subtracting half of Signal_2 from Signal_1.

Dealing with a "common" mixer GUI calibrated in dB, positive vs negative gain tends to "mean something else": +6 dB multiplies the signal by 2.0 and -6 dB multiplies the signal by 0.5. Both cases are "positive gain" in the multiplier sense, but positive dB just multiplies to bigger results and negative dB multiplies to smaller results. So if using a "dB oriented" mixer GUI, maybe the easiest way to include the option of "subtracting rather than adding" is to give each channel strip a phase invert button. Push the phase invert button and then a setting of +6 dB SUBTRACTS [ signal * 2.0 ].
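In code, the dB convention plus the phase-invert trick look like this (a sketch; the function names are made up for illustration):

```python
def mixer_gain(db, invert=False):
    """Linear gain for a dB fader setting; invert flips the polarity."""
    g = 10.0 ** (db / 20.0)
    return -g if invert else g

def mix(s1, g1, s2, g2):
    """Two-channel mix: each sample is weighted and summed."""
    return s1 * g1 + s2 * g2

# Gain_1 = 1.0, Gain_2 = -0.5: subtract half of Signal_2 from Signal_1
print(mix(1.0, 1.0, 1.0, -0.5))                # 0.5
# +6 dB with phase invert engaged: subtract roughly twice the signal
print(round(mixer_gain(6.0, invert=True), 2))  # -2.0
```

So a fader at +6 dB with invert engaged contributes a gain of about -2.0, i.e. it subtracts twice the signal, matching the description above.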

Mixing parallel filters can have lots of shapes, some hard to intuitively predict, because if the filters are tuned differently, they have different phase shifts, and when you mix them together the effect of phase shifts can make some frequencies louder or quieter than you may have "intuitively expected". Also why "mixer channel phase invert" for subtraction can be useful, because if the phase differences are radical enough, then subtracting signals may give louder (or more interesting) results than adding the signals!

Apologies for belaboring simple obvious points.

The parameter "Q" is most meaningful with 2nd order filters. If series-connecting or parallel-connecting higher order filters, or any filter with something like a "resonance" or "bandwidth" knob rather than a Q knob, then unless you are just tuning by ear it is helpful to watch a spectrum analyzer, because it is hard to predict what "middle frequency shape" certain resonance or bandwidth settings would give, and a resonance setting of 0.5 in one synth might not do exactly the same thing as the same resonance setting in some other synth. Each synth designer might have something different in mind when defining his terms.

But if implemented correctly, Q of 2nd order filters we can talk about purt reliably. For lowpass and highpass 2nd order filters, often the value of Q is also the filter gain at the center frequency. A Q of 0.707 is 2nd order butterworth with "sharpest rolloff which is also smoothly falling into the stopband" or something like that. The gain at the center frequency is -3 dB and for the 2nd order lowpass filter, as you trace up from low freq to high freq, the curve steadily decreases all the way up.

Below Q = 0.707 the transition in the middle is more gradual before it settles down to -12 dB per octave. At Q = 0.5, gain = -6 dB at Fc. It takes a couple of octaves below Fc to fall to -6 dB, and it will take a couple of octaves above Fc before the response settles down into -12 dB / oct.

For "carving interesting midrange responses" the Q range from 0.707 to 1.414 deserves attention. I suspect people often either use "little or no resonance" or immediately jump the Q significantly bigger than 1.414, not noticing the subtlety available in that 0.707 < Q < 1.414 range.

At Q = 1.0, Gain at Fc is 1.0. There is a smooth little "bump" approaching from below up to Fc, then it falls off a little faster above Fc, but not a lot.

Here is a dumb example-- If you have two series LP 2nd order filters-- If you set the second LP Q = 1.414 then there is a fairly wide gentle +3 dB gain bump at Fc. There will be SOME noticeable gain going down at least one octave below Fc and gain will stay positive for awhile above Fc before it starts falling off initially "a little steeper" than -12 dB / oct, eventually settling at -12 dB / oct in the high freqs.

So if you tune the Fc of the second LP filter to the -9 dB point of the first LP filter, then the curve is made "more gradual" in the couple of octaves around LP_1's -9 dB point, because of that gentle +3 dB gain bump added by LP_2. But eventually up in the higher frequencies the ultimate falloff is steeper with the two series filters rather than just one.
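Those Q figures, and the series example, can be checked against the textbook 2nd-order lowpass magnitude (a sketch using the normalized analog response; for this form the gain at Fc is simply Q):

```python
import math

def lp2_gain_db(f, fc, q):
    """Gain in dB of an ideal 2nd-order lowpass H(s) = 1/(s^2 + s/Q + 1),
    with frequency normalized to the cutoff Fc."""
    x = f / fc
    return -10.0 * math.log10((1.0 - x * x) ** 2 + (x / q) ** 2)

# gain at Fc equals Q (in dB: 20*log10(Q)):
for q in (0.5, 0.707, 1.0, 1.414):
    print(q, "->", round(lp2_gain_db(1.0, 1.0, q), 1), "dB at Fc")

# series LP_1 (Q = 1/sqrt(2)) and LP_2 (Q = sqrt(2)) at the same Fc:
# responses multiply, so dB values add, and the +3 dB bump cancels the -3 dB dip
cascade = lp2_gain_db(1.0, 1.0, 2**-0.5) + lp2_gain_db(1.0, 1.0, 2**0.5)
print(round(abs(cascade), 6))  # 0.0 at Fc; far above Fc it heads toward -24 dB/oct
```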

Higher Q's can also sculpt responses of course, but higher Q's also get narrower and more drastic in effect, so they might not be as useful to sculpt up to a couple of octaves in a cumulative frequency response.

Also of note, if the filters are NOT significantly distorting then the result would be the same feeding [LP_1 -> LP_2 -> Out] OR [LP_2 -> LP_1 -> Out]. So it is only a matter of "easier convenient thinking" to consider that you are using LP_2 to further modify LP_1. OTOH if you distort the filters then different filter routings may give significantly different-sounding results.

Statistics: Posted by JCJR — Fri Aug 17, 2018 11:54 am


earlevel wrote:

Sure—you omitted, from the paragraph you quoted, "In my website (multi-part) article and code I allow for experimentation with constant or proportional table, mainly for educational purposes. A constant table size is the way unless you're on memory-constrained hardware and not a regular computer." Just want to make sure you noticed that.

Yeah I suppose I might have failed to notice that. I did (sort of) read the whole thing, but I admit I wasn't very thorough.

But mainly, I think it's good to think about (and experiment with) what's acceptable and why—that with constant tables we're essentially saying we can accept little or no oversampling at low frequencies, but need more as we go higher. For instance, I think many (including myself) consider 2k tables with linear interpolation as the minimum but perfectly acceptable solution for an audio-range oscillator for arbitrary waveforms in a synth. Yet if you think about the highest harmonics for the low fundamental frequency range, "in theory" it's unusable, even if fine in practice.

I feel like there are good arguments why 2k with linear ends up as a sort of "sweet spot" overall. It's obviously not perfect, but if you wanted to spend twice the memory, oftentimes doubling the number of mips is a better idea... and while you could use something like cubic Hermite interpolation... that's only really going to make a difference for the high-frequency content of low-fundamental sounds, which quite often gets masked further by low-pass filtering anyway.
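For reference, the lookup being discussed, as a sketch: a phase accumulator reading a 2k table with linear interpolation (a full wavetable oscillator would additionally select a band-limited mip table per pitch; the sine content here is just a stand-in):

```python
import math

TABLE_SIZE = 2048
# single-cycle sine as the stand-in waveform
table = [math.sin(2.0 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def osc(freq, fs, num_samples):
    """Phase-accumulator oscillator with linear table interpolation."""
    phase = 0.0      # phase in cycles, kept in [0, 1)
    inc = freq / fs
    out = []
    for _ in range(num_samples):
        pos = phase * TABLE_SIZE
        i = int(pos)
        frac = pos - i
        a = table[i]
        b = table[(i + 1) % TABLE_SIZE]    # wrap at the end of the cycle
        out.append(a + frac * (b - a))     # linear interpolation
        phase = (phase + inc) % 1.0
    return out
```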

Statistics: Posted by mystran — Fri Aug 17, 2018 9:30 am


mystran wrote:

earlevel wrote:

One way to minimize memory that seemed to work well was use minimum tables (inversely proportional to the base wavetable frequency, as you noted), but only up to a point—keep the top tables constant size (64 samples, for instance) so they get progressively better resolution for higher frequencies.

I feel like the situations where this would really be useful are somewhat limited though. If you have something like 2 tables per octave over 10 octaves with 2k samples in a 1D array of 256 slices, then that's about 40MB of data in single precision. While that might not be an entirely trivial amount, it's not usually a huge problem these days either. But then if you want another dimension (e.g. let's say the user can pick some spectral effect to be calculated) with another 256 steps for example, suddenly we're talking about 512MB even without any mipmaps... so now you're probably stuck computing the stuff on the fly anyway.

So I feel like most of the time the memory is either not a problem, or it's such a huge issue that some minor trimming is not going to save it. I suppose there are some scenarios where such memory footprint optimizations could be useful, but it still feels to me like it's way more limited in practice than what it might sound at first thought and really might not be worth the quality loss (or additional CPU to combat that) in practice.

YMMV.

Sure—you omitted, from the paragraph you quoted, "In my website (multi-part) article and code I allow for experimentation with constant or proportional table, mainly for educational purposes. A constant table size is the way unless you're on memory-constrained hardware and not a regular computer." Just want to make sure you noticed that.

Yes, little reason to skimp on memory, constant table sizes are convenient, and the increased oversampling is what you want. Still, it's good to understand if you need to do it in a dedicated module. Also, the memory savings can be large if you decide you want to stick with wavetables solely (obviously, there are other ways), but be able to run the oscillator down to sub-audio (love the sound of a sawtooth that retains all harmonics as you sweep it down, annoying when it doesn't). You could have 16k or more for the lowest, and scale back.

But mainly, I think it's good to think about (and experiment with) what's acceptable and why—that with constant tables we're essentially saying we can accept little or no oversampling at low frequencies, but need more as we go higher. For instance, I think many (including myself) consider 2k tables with linear interpolation as the minimum but perfectly acceptable solution for an audio-range oscillator for arbitrary waveforms in a synth. Yet if you think about the highest harmonics for the low fundamental frequency range, "in theory" it's unusable, even if fine in practice.

Statistics: Posted by earlevel — Fri Aug 17, 2018 9:01 am


earlevel wrote:

One way to minimize memory that seemed to work well was use minimum tables (inversely proportional to the base wavetable frequency, as you noted), but only up to a point—keep the top tables constant size (64 samples, for instance) so they get progressively better resolution for higher frequencies.

I feel like the situations where this would really be useful are somewhat limited though. If you have something like 2 tables per octave over 10 octaves with 2k samples in a 1D array of 256 slices, then that's about 40MB of data in single precision. While that might not be an entirely trivial amount, it's not usually a huge problem these days either. But then if you want another dimension (e.g. let's say the user can pick some spectral effect to be calculated) with another 256 steps for example, suddenly we're talking about 512MB even without any mipmaps... so now you're probably stuck computing the stuff on the fly anyway.
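Checking those figures (single precision = 4 bytes per sample):

```python
# 2 tables per octave * 10 octaves = 20 mip levels, 2k samples, 256 slices
mip_bytes = (2 * 10) * 2048 * 256 * 4
print(mip_bytes / 2**20)  # 40.0 (MB)

# add a second 256-step dimension, no mipmaps:
flat_bytes = 2048 * 256 * 256 * 4
print(flat_bytes / 2**20)  # 512.0 (MB)
```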

So I feel like most of the time the memory is either not a problem, or it's such a huge issue that some minor trimming is not going to save it. I suppose there are some scenarios where such memory footprint optimizations could be useful, but it still feels to me like it's way more limited in practice than what it might sound at first thought and really might not be worth the quality loss (or additional CPU to combat that) in practice.

YMMV.

Statistics: Posted by mystran — Fri Aug 17, 2018 2:33 am


karrikuh wrote:

Thanks, never thought of it that way. So it seems using a constant number of samples per cycle independent of playback frequency is the way to go. Right now, I'm using cycle sizes inversely proportional to frequency (in steps of 1/2 octave), that is, with a constant oversampling factor. Switching to constant size would greatly simplify my code and potentially make it faster, too...

One way to minimize memory that seemed to work well was use minimum tables (inversely proportional to the base wavetable frequency, as you noted), but only up to a point—keep the top tables constant size (64 samples, for instance) so they get progressively better resolution for higher frequencies. In my website (multi-part) article and code I allow for experimentation with constant or proportional table, mainly for educational purposes. A constant table size is the way unless you're on memory-constrained hardware and not a regular computer.

I'll elaborate on one point, for the sake of completeness. In my earlier explanation I said that for the low table, no oversampling is fine because you have such good resolution there. That is completely true for the low harmonics, but of course resolution drops for higher harmonics, halved for every octave. For instance, if you create a waveform that's only the 500th harmonic...well, you could only play or hear it in the very low end of the keyboard so it would be lame, but it would be fairly noisy too—lack of resolution. That's not realistic, but the same is true of the upper harmonics of a sawtooth, when played in the lowest octaves. However, that error is masked by the stronger (and clean) lower harmonics in a musically useful waveform and in practice it can be ignored. Most smart people (Urs mentioned it in this thread, rbj has mentioned it to me in the past) happily use a single cycle for the lowest octave. Of course there's always 16k or 32k tables for the paranoid! Memory is cheap, and your bank of waves can be stored as a single cycle table and expanded as needed.
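A sketch of how such a bank might be expanded: additive band-limited sawtooth tables, one per octave, with harmonics capped at Nyquist and at the table's own resolution (the parameters are illustrative; a real implementation would more likely use an inverse FFT than a direct sine sum):

```python
import math

def make_sawtooth_table(size, max_harmonic):
    """One cycle of a band-limited sawtooth: partials 1..max_harmonic, 1/n amplitudes."""
    table = []
    for i in range(size):
        phase = 2.0 * math.pi * i / size
        table.append(sum(math.sin(n * phase) / n for n in range(1, max_harmonic + 1)))
    return table

def make_mips(size, fs, lowest_f0, octaves):
    """One table per octave; each octave up halves the usable harmonic count."""
    mips = []
    f0 = lowest_f0
    for _ in range(octaves):
        max_h = int(fs / 2.0 / f0)             # cap at Nyquist
        max_h = max(1, min(max_h, size // 2))  # and at the table's resolution
        mips.append(make_sawtooth_table(size, max_h))
        f0 *= 2.0
    return mips
```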

Statistics: Posted by earlevel — Thu Aug 16, 2018 9:35 am


The way these work is by taking the currently held keys, assigning them to pattern rows (eg. by sorting) and then triggering the notes based on the pattern.
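A minimal sketch of that idea (the pattern format, a list of row indices per step, is made up for illustration):

```python
def arp_step(held_notes, pattern, step):
    """Assign currently held keys to pattern rows by sorting, then return
    the note the current step's row points at (None = row unassigned)."""
    rows = sorted(held_notes)
    row = pattern[step % len(pattern)]
    return rows[row] if row < len(rows) else None

held = {60, 64, 67}        # C, E, G currently held
pattern = [0, 1, 2, 1]     # row index per step
print([arp_step(held, pattern, s) for s in range(4)])  # [60, 64, 67, 64]
```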

Statistics: Posted by mystran — Thu Aug 16, 2018 3:29 am


juha_p wrote:

Richard_Synapse wrote:

Mathematica costs like $4000 so they won't give you the same functionality online for free.

...

Hmm... Home Edition starting from 320€ ... and it includes all functionality found in pro version.

Having the same feature set is quite usual for home or student editions. But such versions are almost always for non-commercial use only, so there's a reason they are much more affordable.

Richard

Statistics: Posted by Richard_Synapse — Thu Aug 16, 2018 3:21 am


kryptonaut wrote:

This combination produces a rounded response in the shoulder-overlapping-with-linear region. Somewhere in this region, the slope might be -9dB/octave but the asymptotic value will still be -12dB/octave if you go far enough (although once past the notch, the response will then theoretically head back towards -6dB/oct)

I agree. That's why I'm saying "Variable Slope" not "Variable Asymptotic Value" if that makes any sense. There is no notion of a constant slope over a long range of frequencies in this situation. I'm aware of that.

I've more or less mentioned before that my aim was to approximate the sound, not necessarily the frequency response and the math behind it, as the OP himself was probably just curious from a sound perspective. And I doubt that any sound designer would notice the difference between a straight and precise fractional (non-multiple-of-6) X dB filter and a roughly approximated, round and curvy one. And even if he does, he probably won't care. The sound of such a filter, or the sound of varying the slope smoothly, isn't really that interesting IMHO, to say the least. One could probably get a similar sound by simple EQing. The whole effort and CPU usage one would incur to come up with a really good approximation isn't really worth the resultant sound in this case. And even if it is, who's to say that musicians would like it better than a wacky, flawed but inexpensive design? The Moog filter sound itself came from a miscalculation by Moog engineer Jim Scott, who had inadvertently overdriven the filter by up to 15dB.

For someone who considers this a mathematical challenge, and is just motivated to solve the problem from a theoretical perspective and considers it an achievement, I can understand that.

kryptonaut wrote:

By chaining several appropriately-tuned 6dB/oct low-pass and high-pass (or shelving) filters together, for example, it's possible to arrange all the rounded shoulder regions in such a way that the result approximates, say, a -3dB/oct rolloff. But it will only be an approximation, and only valid over a certain frequency range. The final asymptotic slope will still be an integer multiple of 6dB/oct

Point made.

Statistics: Posted by S0lo — Thu Aug 16, 2018 2:59 am
