Shortest pulse width to support

DSP, Plugin and Host development discussion.

Post

One more thing I want to add.

For performance, it is absolutely critical that you reorder your BLEP lookups so that each branch (ie. all the samples used for a given sub-sample offset) sits together in memory, one branch after another. This has several benefits.

First, it means that you need the minimum number of cache lines to read a whole BLEP branch. For example, linear interpolation between two branches of 16 taps each, stored in single precision, reads exactly two (full) cache lines (assuming the common 64 bytes per line). Unless your whole lookup table already sits in the cache, this is WAY faster than jumping around reading one sample at a time.

Second, it means that the whole process maps perfectly to SIMD: you fetch full vectors, you interpolate full vectors, you mix full vectors. The output buffer access is necessarily unaligned, but this isn't much of a problem with modern processors. The 16-tap kernel example in SSE1 packed-singles is a matter of 4 lerps, 4 multiplies to scale and then 4 adds to the output buffer. In fact it's so simple that your compiler can probably auto-vectorise it.

Third, it means that you can store an extra branch or two, so you can ignore slight inaccuracies in floating-point rounding (ie. no need to check for lookup table bounds) and correctly interpolate over the discontinuity (ie. the trivial step is subtracted from each branch separately when the table is generated; this way your lerps never see the discontinuity). This way the whole BLEP-mixing function can be branchless. Trying to do this correctly any other way is just a horrible mess.
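As a rough numpy sketch of that layout and the branchless mix (table sizes and names are made up; a real SIMD version would do the 16-wide lerp with vector loads, but numpy's array arithmetic shows the same structure):

```python
import numpy as np

TAPS = 16          # samples per BLEP branch (illustrative)
BRANCHES = 64      # sub-sample resolution (illustrative)

# blep[b] holds the whole residue branch for sub-sample offset b / BRANCHES,
# one branch after another, so each lookup touches contiguous memory.
# Two extra guard branches mean the runtime never needs a bounds check.
blep = np.zeros((BRANCHES + 2, TAPS), dtype=np.float32)

def mix_blep(out, pos, frac, scale):
    """Branchless BLEP mix: lerp two adjacent branches, scale, add to out."""
    f = frac * BRANCHES
    b = int(f)                       # branch index (guard rows cover b+1)
    t = np.float32(f - b)            # interpolation fraction between branches
    branch = blep[b] + t * (blep[b + 1] - blep[b])   # full-width lerp
    out[pos:pos + TAPS] += np.float32(scale) * branch
```

The whole inner operation is a handful of full-vector fetches, lerps, scales and adds, with no branching on the data path.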

Post

S0lo wrote: Fri Nov 08, 2019 12:01 am Would be interesting to elaborate on "Why is that so?". As I'm aware of more than a few well-known devs/synths that use minimum-phase. Synths like Hive, DUNE 2, Lush-101, Softube Modular, Bazille, Karma FX to name a few all use some form of either minBLEP or minimum-phase filters for waveform generation. At least that's what's clearly apparent when you look at the waveforms.
Bazille uses PD with heavy oversampling, Hive uses a mix of pre-computed and dynamically bandlimited wavetables. Neither have any BLEP or minBLEP.

Of all our synths, only ACE uses minBLEPs. It was the first time I tried the concept and we had to let it go for Diva because it was impossible to create an accurate behaviour with minBLEPs for the reasons Mystran mentioned. The DC build up would change the tone in unpredictable ways, so we couldn't tweak stuff to sound right. Diva therefore uses BLEPs.

To avoid latency issues with feedback loops, we have had good results setting up two concurrent audio paths: One for the signal that's heard (BLEPs) and one for modulation signals (PD, wavetable etc.) which uses a compromise of low latency and good band limiting. This helped overcome my litmus test for analogue oscillator emulations: PWM an oscillator with its own, inverted output.
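A minimal sketch of that two-path idea (purely illustrative, not u-he's code: the audible path's BLEP correction is not implemented here, only the latency it would add is modelled with a plain delay line):

```python
class DualPathSaw:
    """Toy oscillator with two outputs: a delayed 'audible' path (standing
    in for a linear-phase BLEP path) and a zero-latency modulation path."""

    def __init__(self, freq, fs, latency):
        self.inc = freq / fs
        self.phase = 0.0
        self.delay = [0.0] * latency   # would match half the BLEP length
        self.idx = 0

    def tick(self):
        self.phase = (self.phase + self.inc) % 1.0
        naive = 2.0 * self.phase - 1.0       # low-latency path for modulation
        audible = self.delay[self.idx]       # delayed path for listening
        self.delay[self.idx] = naive
        self.idx = (self.idx + 1) % len(self.delay)
        return audible, naive
```

The point is structural: feedback loops (e.g. self-PWM) tap the low-latency output, while the listener only ever hears the strictly band-limited one.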

Post

Urs wrote: Fri Nov 08, 2019 7:26 am To avoid latency issues with feedback loops, we have had good results setting up two concurrent audio paths: One for the signal that's heard (BLEPs) and one for modulation signals (PD, wavetable etc.) which uses a compromise of low latency and good band limiting. This helped overcome my litmus test for analogue oscillator emulations: PWM an oscillator with its own, inverted output.
This type of scheme actually has an additional benefit: the filter ripple from the (strict) band-limiting does not affect the modulation path, where it could otherwise become a problem depending on the modulation target (eg. it's probably not a good idea for FM or filter modulation).

Post

Interesting talk. Thanks mystran. I did implement minBLEPs before and ran into the DC issue with the sawtooth, but managed to almost completely fix it without a DC blocker. It was actually a one-line fix. Can't remember what it was.

I didn't say minBLEP is the best there is. Sure, it has its issues. The thing is, I process audio internally sample by sample (not in blocks). And I know for a fact some devs do so in modular. This means that discontinuities can happen at the sample-time level due to sync, without any pre-knowledge of that at the module level.

So at the time (like 3 years back), it seemed like a reasonable solution for doing sync.

I never tried linear-phase BLEPs. Maybe I'll try them and see what is what.
mystran wrote: Fri Nov 08, 2019 3:15 am
First of all, you should usually not draw too many conclusions about the exact oscillator waveforms by looking at the output of some plugin, because there are any number of reasons why the phase of the oscillators might have undergone some phase-distortion later in the signal chain. As an obvious example, if oversampling is involved and minimum-phase filters are used for resampling (and that's pretty common), then those will tend to give you ripple that looks like minBLEPs no matter what. It is entirely possible that some of these use minBLEPs, but it is also entirely possible that none of them do.
Sure, but I have to say that they do very much look like they have used some sort of minimum-phase "thing", as the ripple is entirely on one side of the discontinuity. I could be mistaken of course, nothing is for sure. But here is an example from two different synths:

ExportedImage_min.png


ExportedImage_lin.png
Last edited by S0lo on Fri Nov 08, 2019 9:27 am, edited 1 time in total.
www.solostuff.net
Advice is heavy. So don’t send it like a mountain.

Post

Urs wrote: Fri Nov 08, 2019 7:26 am Bazille uses PD with heavy oversampling, Hive uses a mix of pre-computed and dynamically bandlimited wavetables. Neither have any BLEP or minBLEP.
That is good to know and I apologize for jumping to conclusions too quickly.

Post

Thanks everybody for the great input. I need to take it a little slower ...
mystran wrote: Thu Nov 07, 2019 6:49 pm
synthpark wrote: Thu Nov 07, 2019 1:49 pm Is there any paper which describes BLEPs in the most straightforward way?
It cannot be a simple filter, rather some sort of replacement technique.
Then I wonder whether the algorithm must look somewhat into the future,
which can turn out to be problematic together with certain modulation.
Not sure if there are any really great papers, but the basic idea is that only the discontinuities cause aliasing, so if we can solve the exact (sub-sample) position and magnitude of the discontinuities, we can subtract a pre-computed "BLEP residue" (scaled appropriately) from the naive waveform to cancel out the aliasing.

The "BLEP residue" can be computed by subtracting the trivial step from a band-limited step. The band-limited step can be obtained by integrating a highly oversampled brick-wall FIR low-pass, which can then be decomposed into branches similar to a poly-phase interpolator to obtain non-oversampled residues at different sub-sample offsets (which can then be further interpolated at run-time to obtain the intermediate position). This is all normally done as a preprocess, with the final results stored in a lookup table.

At run-time when generating the waveform, you then generate the naive waveform and check for discontinuities. For every detected discontinuity (and you certainly should detect them all), you solve the position and magnitude, then fetch and scale the relevant BLEP and mix it into the output buffer. Since you usually want the BLEPs to be linear-phase, you mix the naive waveform into the buffer with half the BLEP length worth of delay. This essentially gives you some "lookahead" but the algorithm itself can still process one sample at a time and even though several BLEPs might overlap, you can simply mix them into the output one at a time as you come across the discontinuities (which means it's relatively easy to do things like hard-sync oscillators with master and slave modulated separately, etc).

Since there are a lot of moving parts (ie. the BLEP residue computation, residue interpolation, the output buffer mixing, the actual oscillator logic) you might find it useful to first look at my old PolyBLEP tutorial (viewtopic.php?t=398553) which uses simple 2-sample polynomial "BLEPs" to avoid most of the complexity with the actual residues (and just tries to show how to solve the discontinuities), while still demonstrating the general structure of the algorithm (ie. how to solve the position) in such a way that longer BLEPs can be integrated in a fairly easy way... and even as is provides WAY better quality than naive oscillators. ;)
Thanks! I will need the weekend to understand everything and study your code. Probably I'll try it in Matlab and make an analysis. Interesting what the implications of the delay are. I understand that you have to know the exact time when the transition takes place in order to position the BLEP. The BLEP itself seems quite clear. But the implications ... if you want to do some feedback processing it could be prohibitive ...
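mystran's preprocessing step (oversampled windowed-sinc lowpass → integrate → subtract the trivial step → decompose into per-offset branches, polyphase-style) might look roughly like this in numpy. All sizes are made up, and the guard branches and runtime mixing are omitted:

```python
import numpy as np

def make_blep_table(taps=16, branches=64):
    """Precompute linear-phase BLEP residues as contiguous branches."""
    n = taps * branches
    t = np.arange(n) - n / 2 + 0.5                 # half-sample-centred grid
    # highly oversampled brick-wall FIR lowpass (windowed sinc)
    kernel = np.blackman(n) * np.sinc(t / branches) / branches
    step = np.cumsum(kernel)                       # integrate -> band-limited step
    step /= step[-1]                               # normalise to a unit step
    residue = step - (t > 0).astype(float)         # subtract the trivial step
    # polyphase-style decomposition: row b is the non-oversampled residue
    # for sub-sample offset b / branches, ready for run-time interpolation
    return residue.reshape(taps, branches).T
```

At run time you would lerp between adjacent rows for intermediate offsets, scale by the discontinuity magnitude, and mix into the (half-BLEP-length delayed) output buffer.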

Post

You can actually also build pulse-waves by subtracting two saw-waves, where one of them has its phase offset by the pulse-width. You get a square where the offset is half a cycle and you can take this all the way to 0 where the two saws simply cancel each other. The saw-waves can then be wavetables or whatever. You can also mix the result with the PWM control signal, if you want the "high" and "low" levels of the pulse to stay constant, like they would in an analog comparator design.

I would not necessarily recommend this approach for fast audio-rate modulations (since varying the phase-offset of the secondary saw will start to cause some problems), but for typical reasonable PWM modulation (especially if you're not interested in mixing it with hard-sync) it works quite fine.
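A sketch of the saw-subtraction trick (function and names are mine; the 2*pw - 1 term is the mix with the PWM control signal that keeps the high/low levels at plus/minus 1):

```python
import numpy as np

def pulse_from_saws(phase, pw):
    """Pulse wave as the difference of two saws, one offset by the pulse
    width pw in [0, 1]. Adding (2*pw - 1) pins the output levels to +-1,
    like an analog comparator design would."""
    saw = lambda p: 2.0 * (p % 1.0) - 1.0
    return saw(phase) - saw(phase + pw) + (2.0 * pw - 1.0)
```

With pw = 0.5 this is a square; as pw goes to 0 the two saws cancel and the output settles at a constant level. Each saw can be a wavetable or any other band-limited source.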

Post

I know this approach, thanks! I may consider it for LFOs: you want the LFO signal to have a smoothing parameter, and as you decrease the feedback the LFO waveform becomes less sharp, but it must remain strictly increasing and decreasing.

By the way, I am doing a "private" FPGA synth project. The sampling rate for the saw could be as low as 1.5 MHz and as high as 200 MHz. I am comparing the aliasing of naive waveform generation with things like BLEP to see if BLEP offers any advantage. In the world of FPGAs, resources are a little different compared to processor technology.

For lower sampling rates one can realize more voices on the same chip, of course. The RAM blocks are at least 512 addresses deep. For a "low" sampling rate like 1.5 MHz, you may have only 128 cycles per sample to read and write data (dual port). So you only use 1/4 of the available RAM. Still much better than 6 MHz and higher. The naive waveform generation for sawtooth is critical, as the aliasing at 32x (=1.5 MHz) is too high. When going through a high-resonance filter, the aliasing may be increased by 40 dB. There is also the choice to use bandlimited OSCs with 4 different stored waveforms. But this is quite traditional.
mystran wrote: Sat Nov 09, 2019 9:47 pm You can actually also build pulse-waves by subtracting two saw-waves, where one of them has its phase offset by the pulse-width. You get a square where the offset is half a cycle and you can take this all the way to 0 where the two saws simply cancel each other. The saw-waves can then be wavetables or whatever. You can also mix the result with the PWM control signal, if you want the "high" and "low" levels of the pulse to stay constant, like they would in an analog comparator design.

I would not necessarily recommend this approach for fast audio-rate modulations (since varying the phase-offset of the secondary saw will start to cause some problems), but for typical reasonable PWM modulation (especially if you're not interested in mixing it with hard-sync) it works quite fine.

Post

With such high oversampling rates you might have some luck generating integrated waveforms (eg. parabola for saw-tooth) and then differentiating those. This paper should give you a good idea how this works:

https://www.researchgate.net/publicatio ... _Waveforms

While these techniques don't perform very well near the Nyquist limit (and their spectral masking stuff is pure bullshit), with enough oversampling you might be able to get something decent out of them.
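As a rough illustration of the integrate-then-differentiate idea, here is the common 2nd-order DPW (differentiated parabolic waveform) saw, a sketch rather than anything from the paper specifically:

```python
import numpy as np

def dpw_saw(freq, fs, n):
    """2nd-order DPW sawtooth: square the trivial saw (one parabola per
    cycle, continuous across the phase wrap), then differentiate and
    rescale. Since d(saw^2)/dt = 4*freq*saw on the ramps, the discrete
    difference needs a gain of fs / (4 * freq)."""
    phase = (freq / fs * np.arange(n + 1)) % 1.0
    saw = 2.0 * phase - 1.0          # trivial saw in [-1, 1)
    parab = saw * saw                # piecewise parabola
    return np.diff(parab) * fs / (4.0 * freq)
```

The frequency-dependent gain is exactly the catch discussed below: the differentiator scaling must track the oscillator frequency.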

Post

mystran wrote: Sun Nov 10, 2019 1:26 pm With such high oversampling rates you might have some luck generating integrated waveforms (eg. parabola for saw-tooth) and then differentiating those. This paper should give you a good idea how this works:

https://www.researchgate.net/publicatio ... _Waveforms

While these techniques don't perform very well near the Nyquist limit (and their spectral masking stuff is pure bullshit), with enough oversampling you might be able to get something decent out of them.
Hey thank you so much. An interesting paper, also summarizes some ideas from previous works.

I tried the simplest approach, not going for higher-order polynomials. A quick demo is obtained via the MATLAB code

function scr_diff_saw_v1

freq = 1e3;
ft = 44.1e3;
n_ov = 16;
n = 1e6;

p = modc(freq/(n_ov*ft)*(0:n_ov*n-1),0.5);
y1 = 2*p;
y2 = diff(2*p.*p*350);

coeff = lowpass_fir_coeff(50*n_ov,1/n_ov,'blackman');

y1 = conv(y1,coeff);
y1 = y1(1:n_ov:end);

y2 = conv(y2,coeff);
y2 = y2(1:n_ov:end);

figure; grid on; hold on;
plot(y1);
plot(y2, 'r--');
legend('direct', 'diff');

figure; grid on; hold on;
plot(20*log10(abs(fft(y1))));
plot(20*log10(abs(fft(y2))), 'r--');
legend('direct', 'diff');

lowpass_fir_coeff is just a windowed lowpass filter and modc a modulo-complement operation mapping to ±0.5. The spectrum is shown below. The gain was matched manually. An impressive improvement of 30-35 dB is seen for 16x oversampling and a 1 kHz test tone.

Still there is a fundamental problem: the waveform depends on the input frequency, and the gain of the differentiator has to be adjusted as a function of the input frequency.
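For what it's worth, the manually matched gain agrees with what the DPW scaling predicts. Assuming modc wraps the phase p to ±0.5, the saw y1 = 2*p has continuous-time slope 2*freq, so d(2*p^2)/dt = 2*freq*y1 and recovering y1 from diff(2*p.*p) needs a gain of fs_ov/(2*freq):

```python
# Sanity check on the hand-tuned differentiator gain of 350 in the
# MATLAB listing (assumes modc wraps the phase to +-0.5):
freq = 1e3                      # test-tone frequency from the script
fs_ov = 16 * 44.1e3             # 16x oversampled rate, n_ov * ft
gain = fs_ov / (2.0 * freq)     # theoretical gain: 352.8, close to 350
```

This also makes the frequency dependence explicit: the gain is inversely proportional to the oscillator frequency.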

Now if you want this oscillator to accept linear phase modulation (with the wildest modulation index; that's why there is the FPGA), the modulator can have any frequency. So the best is to use some scheme where the waveform shape is generated independently of the "carrier" frequency. The naive approach using a counter and simply adding some additional phase component works, of course. This requirement also seems to disqualify a simple solution like choosing a bandlimited waveform out of a set depending on the octave.
Last edited by synthpark on Mon Nov 11, 2019 12:21 pm, edited 4 times in total.

Post

mystran wrote: Sat Nov 09, 2019 9:47 pm I would not necessarily recommend this approach for fast audio-rate modulations (since varying the phase-offset of the secondary saw will start to cause some problems), but for typical reasonable PWM modulation (especially if you're not interested in mixing it with hard-sync) it works quite fine.
Hmmm, what's the difference between the two options? At first sight they'd seem to generate 100% identical results (up to the DC offset, but you mentioned the correction).

Post

Z1202 wrote: Mon Nov 11, 2019 11:24 am
mystran wrote: Sat Nov 09, 2019 9:47 pm I would not necessarily recommend this approach for fast audio-rate modulations (since varying the phase-offset of the secondary saw will start to cause some problems), but for typical reasonable PWM modulation (especially if you're not interested in mixing it with hard-sync) it works quite fine.
Hmmm, what's the difference between the two options? At first sight they'd seem to generate 100% identical results (up to the DC offset, but you mentioned the correction).
What I mean is that when the saw-waves are band-limited in advance (eg. using wavetables or something), this band-limiting will be wrong for the secondary saw (notably around its band-limited discontinuity) where the PWM slope is high enough. So ideally you should band-limit the secondary saw-wave after both the phase-modulation and the compensation (since the compensation cancels out some of the aliasing), but then why would you bother with this phase-shifting business in the first place if you can band-limit arbitrary waves on the fly?

While BLEPs are certainly not immune to various inconsistencies when it comes to high-frequency modulation, the worst that could happen in this particular case is that your pulse-transitions happen at a slightly wrong time where linear interpolation gives a poor approximation of the continuous-time PWM signal.

Whether any of this is a practical concern is an entirely different matter. The failure-cases tend to be such that there is a high practical chance that even the "correct" oscillator output is rather noisy, but it is possible to construct artificial scenarios where one approach works better than the other.

Post

synthpark wrote: Mon Nov 11, 2019 10:00 am Still there is a fundamental problem: the waveform depends on the input frequency, and the gain of the differentiator has to be adjusted as a function of the input frequency.
I should probably also warn you that with any method based on integration, rounding errors tend to accumulate which means that in practice you need to make your integrators lossy... and the rate of error accumulation usually depends on the frequency too. I just wanted to point out the method anyway in case you can turn it into something useful.
Now if you want this oscillator to accept linear phase modulation (with the wildest modulation index, thats why there is the FPGA), the modulator can have any frequency. So the best is to use some scheme where the waveform shape is generated independent of "carrier" frequency.
Band-limiting this stuff tends to be a mess no matter what. You can convert the phase modulation problem into a frequency modulation problem by differentiating the modulation signal (and then smooth it out slightly, taking advantage of the high sampling rate), but most methods of band-limited waveform generation tend to assume that frequency is changing slowly enough that the effects can be neglected, because this makes the problems much more tractable.
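A toy illustration of the lossy-integrator point above (the leak coefficient is arbitrary): a leak just below 1 bounds the accumulated state, whereas a perfect integrator would let rounding and DC errors grow without limit.

```python
def leaky_integrate(x, leak=0.999):
    """Lossy integrator y[n] = leak*y[n-1] + x[n]. With leak < 1 the
    state stays bounded by max|x| / (1 - leak) instead of drifting."""
    y, out = 0.0, []
    for v in x:
        y = leak * y + v
        out.append(y)
    return out
```

With a constant input of 1.0 the output converges to 1/(1 - leak) = 1000 rather than growing forever; the same mechanism bleeds off accumulated rounding error, at the cost of a slight low-frequency droop.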

Post

Thanks once again.

Actually I think one can also get rid of the noise using slight clipping. As the waveform touches the borders, the noise is simply squeezed away.

Meanwhile I have made the decision to use a brute-force 50 MHz sample rate with a simple overflowing counter, as originally planned. Before that I wanted to use a 200 MHz sample rate, but I get too much latency from down- and upsampling; I am planning to enable some nasty feedback loops, and they need to be stable over a wide range.

The next challenge is oscillator pitch range. Following a design like the Nord Lead, if OSC2 can be tuned to plus/minus 5 octaves, the pitch range is as wide as 1 Hz ... 160 kHz, not counting all the additional pitch modulations on top. There are two approaches how to limit the frequency range...
Such an OSC could be acting as an FM modulator, so it cannot simply be neglected outside the hearing range.
mystran wrote: Mon Nov 11, 2019 4:56 pm
I should probably also warn you that with any method based on integration, rounding errors tend to accumulate which means that in practice you need to make your integrators lossy... and the rate of error accumulation usually depends on the frequency too. I just wanted to point out the method anyway in case you can turn it into something useful.

Band-limiting this stuff tends to be a mess no matter what. You can convert the phase modulation problem into a frequency modulation problem by differentiating the modulation signal (and then smooth it out slightly, taking advantage of the high sampling rate), but most methods of band-limited waveform generation tend to assume that frequency is changing slowly enough that the effects can be neglected, because this makes the problems much more tractable.

Post

mystran wrote: Mon Nov 11, 2019 4:56 pm but most methods of band-limited waveform generation tend to assume that frequency is changing slowly enough that the effects can be neglected, because this makes the problems much more tractable.
I think the BLEP method is immune to audio-rate modulation as long as the segments between the transients (after the modulation) are of polynomial shape (linear, parabolic etc.). This keeps the segments themselves "bandlimited" and simultaneously keeps the number of derivative discontinuities finite. So in the case of PWM I think the BLEP method works perfectly (save for the time-limiting of the BLEPs), provided you identify the subsample positions of the transitions correctly.

With other segment shapes (e.g. a synced sine or a piecewise exponential wave) you'd probably have an infinite number of discontinuities, and simultaneously there is the question of the bandlimitedness of the segments. For a sine we know the criterion of bandlimitedness. For an exponential it's a really interesting question. To the best of my knowledge this topic hasn't been properly researched, since the Fourier transform of the exponential function doesn't exist, so we need a more generic definition of bandlimitedness. I have a feeling that a signal can be seen as "generically bandlimited" iff the sequence of BLEPs fixing the derivative discontinuities converges as we fix more and more derivatives, but I don't have a proof. Looking around, I found the Paley-Wiener-Schwartz theorem, which might be closely related to this question.
