BBD delay line compander

DSP, Plugin and Host development discussion.

Post

Hi Itoa. Dunno.

An interesting tidbit in the article-- The proposition that a 1024-sample BBD would add about 1 percent distortion, regardless of signal level.

The article said that it is difficult to get a constant amount of distortion over a wide dynamic range, because most nonlinear processing relies on level-sensitive gain mapping.

So here is an idea which may or may not work--

Sometimes people would build fuzz boxes based on VCAs rather than static distortion networks. You can make a VCA compressor with the desired fuzz transfer function, except that you don't smooth the rectifier. Snip out the envelope smoothing capacitors! A compressor without envelope smoothing becomes a fuzztone. Adjust the threshold and ratio however you desire the fuzz to behave.

So usually a VCA-based fuzztone would have a threshold and ratio, adjusted to emulate tube or diode clipping or whatever. Linear at low levels, becoming more distorted along with increased signal level. Just a different way to skin the cat.

But if you make the compressor active over a very wide amplitude range, the way a compander compressor is built-- Perhaps make a feedforward compander-style compressor. Full-wave rectify the input signal but don't smooth the envelope. Set this wide-dynamic-range compressor to a tiny ratio barely above or below unity. Maybe start with a ratio of 1.01 : 1.

Perhaps this would apply about 1 percent distortion at all dynamic levels? Emulating the claimed wide-range constant distortion behavior of a BBD?

Regardless of the signal level, the output would have about 1 percent linearity error compared to the input. But maybe 1 percent THD would require some other percent of linearity error. Dunno at the moment how to calculate that, but the ratio could be experimentally tweaked until it yields 1 percent THD or whatever is desired.

****

If that would provide the level-insensitive 1 percent distortion, I suppose the distortion would be added before the feedback loop, so that regeneration would add more distortion every cycle thru the delay.

If asymmetrical clipping is also added, dunno if it would be better to do the asymmetrical clipping before or after the 1 percent distortion. I'd be inclined to add the asymmetrical clipping after the 1 percent distortion, but maybe that is the wrong guess.

Not very scientific, but in the past I assumed the asymmetrical clipping had to do with all those FETs in the delay line. But just because a FET class A amp stage can sometimes clip asymmetrically, similar to a class A tube, probably doesn't mean that the long FET capacitor-switching chain would have to work the same.

However, lacking opportunity to probe a BBD with a scope, perhaps applying a FET-like (or even tube-like) class A asymmetrical clipping would work about as well as any other wild guess? :)

Post

This 1% may be important in delay units, but definitely not in choruses. As it's so small and the signal is compressed, this can be just roughly approximated.
giq

Post

JCJR wrote:Hi Itoa. Dunno.

An interesting tidbit in the article-- The proposition that a 1024 sample BBD would add about 1 percent distortion, regardless of signal level. Interesting.

The article said that it is difficult to get a constant amount of distortion over a wide dynamic range, because most nonlinear processing relies on level-sensitive gain mapping.

So here is an idea which may or may not work--

Sometimes people would build fuzz boxes based on VCA's rather than static distortion networks. You can make a VCA compressor with the desired fuzz transfer function, except that you don't smooth the rectifier. Snip out the envelope smoothing capacitors! A compressor without envelope smoothing becomes a fuzztone. Adjust the threshold and ratio however you desire the fuzz to behave.
That's what I initially thought about: y += fabsf(y)*0.01f is level-insensitive

But I would like to know the true nature of this phenomenon. I suspect it's due to bucket loss during transition. Could this be asymmetrical for some reason? E.g. 0.03% when increasing, 0.01% when decreasing voltage?
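One cheap way to test that hunch in code-- a per-sample loss that depends on whether the signal is rising or falling. This is only a sketch; bucket_loss, the 0.0003/0.0001 coefficients, and the rising/falling test are all guesses to be tuned by ear, not measurements:

```c
#include <math.h>

/* Hypothetical asymmetric "bucket loss": subtract a slightly larger
   level-proportional loss when the voltage is rising than when it
   is falling. *prev holds the previous input sample. */
float bucket_loss(float x, float *prev)
{
    float loss = (x > *prev) ? 0.0003f : 0.0001f;
    float y = x - loss * fabsf(x);
    *prev = x;
    return y;
}
```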

I would love to have a sample of square wave passed through bbd in the loop. Somebody pleeease :)
giq

Post

Hi Itoa

I haven't any guess what would cause the level-insensitive constant amount of distortion as reported in that article. Just superstitious thoughts such as: "there sure are a lot of FETs in that chip" which hardly qualifies as scientific analysis. :)

What I recall of long-ago scope-probing raw BBD output, it just looked real nasty. But maybe some of the better or later-designed chips were cleaner.

So far as I recall the chip wouldn't pass a very loud signal, and it was picky to set the DC bias right in the center of that tiny linear range. If the DC bias was set too high or too low, then no signal at all would get thru. Sometimes it was a bit of a hunt tweaking the DC bias trimmer until you begin to see any signal at all coming out of the chip. I preferred 10 or 20 turn trimmer resistors for such situations.

Various chorus and flanger kits sold back then-- The merchants couldn't assume that the customer would have an oscilloscope. So kit instructions would say, "Adjust the trimmer until you can hear sound, and then adjust the trimmer for the cleanest sound." Which could take awhile doing it by ear.

The signal would only start looking "relatively clean" if the signal amplitude was less than half of the max level before clipping. Which was discouraging because the chip noise level plus the noise of surrounding circuitry was typically so high, that ideally you would want to pump through as much signal possible, so that the music isn't completely drowned in noise.

Another visual artifact which probably wouldn't be feasible to model except at stupid-high sample rates-- If you zoom in the scope display to see the stairstep of the individual samples coming out-- There would be a brief little voltage spike separating each sample of the stairstep. Like little Dirac impulses between each sample. I assumed they were clocking artifacts of some kind.

Those brief inter-sample spikes would presumably be attenuated by the anti-alias filter, though the anti-alias filters on many of the old circuits were not exactly what one would describe as elaborate. I think the intersample spikes were one reason that it was desirable to differentially drive two BBD segments in parallel, so that differential amplification of the two outputs would help cancel out those voltage spikes. Perhaps differential drive could also un-do some of the asymmetrical distortion present in a single delay line.

I hope those are reasonably accurate recollections.

Post

I assume bias and range are fine-tuned, and the only effects that I hear in recordings are noise, lowpass filtering (kind of a box filter) and this mysterious nonlinearity.

In long delays the signal gets kind of gritty (more and more even harmonics) and kind of blurred (tilt/box style filter?) - and these effects are the most important when it comes to bbd character imo
giq

Post

I'm not very good at electronics, but I'm trying to investigate this.

The charging curve in a bucket is not a clean exponential, but it is asymptotic-- the transfer never fully completes. Assuming limited transfer time, this may be the reason.
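A crude numerical way to try that idea-- integrate the charge transfer with a current that is not a simple linear RC pull, so the settling is asymptotic and the leftover error is, mildly and nonlinearly, dependent on the step size. The square-law current, transfer_bucket, and all the constants here are guesses for experiment, not device physics:

```c
#include <math.h>

/* One clock phase of a single bucket: v_dst is pulled toward v_src
   by a made-up square-law current, integrated in small steps. The
   settling is asymptotic, so a residual error always remains, and
   that residue is a larger fraction of small steps than big ones. */
float transfer_bucket(float v_src, float v_dst)
{
    const int   steps = 32;
    const float gain  = 0.5f;   /* arbitrary "transconductance" */
    for (int i = 0; i < steps; ++i) {
        float dv = v_src - v_dst;
        v_dst += gain * dv * fabsf(dv) / (float)steps;
    }
    return v_dst;
}
```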
giq

Post

Hi itoa

Maybe your idea about the charging curve makes sense.

Yes the details about bias and such would be irrelevant to modeling the device. Unless maybe one also wishes to emulate devices which are not properly calibrated. :) There may be a limit as to which quirks in old devices are worth modeling.

Likely useless thoughts--

Those brief clocking spikes between samples-- Doesn't a single Dirac impulse have a white noise spectrum? Would a train of Dirac impulses also make a white noise spectrum?

Assuming the little clocking glitches are all about the same amplitude, and the output anti-alias filter is smoothing the glitch power along with the adjacent samples-- Could the filtered contribution from the glitches be a source of some of the noise in the circuit?

Could the inter-sample glitches be a source of some of the nonlinearity? If each glitch is about the same level, and the power of the glitch is being added and smoothed into the adjacent samples-- Could the percentage error added by the power of the glitch be greater at low signal levels?

Or maybe the imperfectly-filtered glitches would merely contribute a small amount of noisy DC offset to the entire signal?

Or perhaps none of the above. :)

Post

So far as I understand, sampling theory should consider a sampled waveform as a train of smoothed instantaneous pulses, rather than a stair-step. However, the BBD actually outputs a stair-step. Could this cause some extra high-frequency loss on the output?

****

I googled a few chorus schematics yesterday to refresh memory. So far as I could see, some devices apparently had no input anti-aliasing except perhaps a single stage of first-order lowpass filtering, and other devices had better input filters.

The loosest output filter I saw was second-order. The fanciest output filter I noticed was 6th order. But I only sampled a few ancient stompbox circuits. A common parts-miserly cheat was to add a low impedance first-order passive section ahead of a second-order active filter, for a purt good-performing 18 dB per octave filter. The 36 dB output filter was made thataway, 2 * 18 dB implemented with two opamps.

When trying to get fast-as-possible rolloff in the octave above filter cutoff, it was common to adjust the filter Q a bit high in order to get Chebyshev-like response-- Some ripple in the passband and a bump at Fc, as a tradeoff for better performance in the octave above Fc. Also, analog filters didn't work very precisely except with selected precision caps and resistors one might be unlikely to find in a stompbox.

So a variety of non-optimal anti-alias filters might be emulated, including stacks of a couple of second-order sections, with at least one section optionally having a Q maybe as high as 1.0 to 1.4, giving a bump at Fc.

An echo unit with a 4096 chip, set for delay time > 200 ms, would result in an equivalent sample rate significantly smaller than 44.1 k. OTOH a chorus with about 25 ms delay, with an SAD 512 or 1024, would be a sample rate roughly in the ballpark of 44.1 k.
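The arithmetic, using the rough convention above that an N-stage chip holds N samples (some datasheets count two clock phases per sample, which would halve these figures):

```c
/* Equivalent sample rate of an N-stage BBD at a given delay time,
   treating one stage as one sample per the rough convention above. */
float bbd_sample_rate(int stages, float delay_seconds)
{
    return (float)stages / delay_seconds;
}
```

So 4096 stages at 200 ms works out to roughly 20.5 kHz, well under 44.1 k, while 1024 stages at 25 ms is roughly 41 kHz.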

A designer would want to tune his input and output filters at least below Nyquist at the slowest clock rate he expects to see, with some generous slop added to account for loose component value matching and the low-order, slow rolloff of the cheap filters.

Which may account for the typical "dull" high freq response of even chorus and flanger units which can use a fairly fast clock rate, and especially long-delay echo pedals where even 4096 samples of delay ain't nearly enough.

So far as I know, most inexpensive devices had fixed-frequency filters, so you would want to design them according to the slowest expected clocking rate.

But possibly some units had tunable anti-alias filters which could track the clock rate. Dunno how many. I recall long ago repairing a fairly-expensive ADA chorus or echo stomp box which had tracking filters, cunningly implemented with home-brew switched-capacitor lowpass filters, using CMOS switches and an ultrasonic clock to modulate the filter cutoff. Couldn't google a schematic of that device. May be remembering wrong, but fairly certain it was real and not imaginary. :)

Post

Apologies for rambling in such a dull fashion.

* Might want to emulate input aliasing for pedals that didn't have very good input filters.

* If you re-clock noise, it modulates the noise spectrum similar to a filter. Which was quite effective for instance in the old SID sound chips.

* The input and output filters would be tuned fairly low, with gradual rolloff. Or alternately tune them higher and emulate the resultant aliasing.

Maybe a signal flow--

* Input filtering
* Compression
* Add a little bit of possibly white noise to emulate noise in the BBD
* Possibly slightly distort the signal
* Possibly asymmetrically clip loud signals
* Delay (time modulated for flange or chorus)
* Output filtering
* Expansion
* Tap the output here for any global feedback, mixed-in ahead of the input filtering.
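The bullet list above, as a one-sample-per-call sketch. Every stage here is a trivial stand-in (the names and the 0.01 distortion amount are placeholders, not anyone's measured design); only the ordering, and the feedback tap after expansion, are the point:

```c
#include <math.h>

/* Trivial stand-ins so the sketch compiles; real versions would
   implement the filtering, companding and noise described above. */
static float input_filter(float x)  { return x; }
static float compress(float x)      { return x; }
static float bbd_noise(void)        { return 0.0f; }
static float soft_distort(float x)  { return x + 0.01f * x * fabsf(x); }
static float output_filter(float x) { return x; }
static float expand(float x)        { return x; }
static float delay_tick(float x)    /* 1-sample stub for the BBD */
{
    static float held = 0.0f;
    float out = held;
    held = x;
    return out;
}

typedef struct { float feedback, fb_amount; } BBDChain;

/* One sample through the whole chain: feedback is tapped after
   expansion and mixed back in ahead of the input filter. */
float bbd_chain_tick(BBDChain *c, float in)
{
    float x = in + c->fb_amount * c->feedback;
    x = input_filter(x);
    x = compress(x);
    x += bbd_noise();
    x = soft_distort(x);
    x = delay_tick(x);
    x = output_filter(x);
    x = expand(x);
    c->feedback = x;   /* global feedback tap */
    return x;
}
```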

The time modulation would probably add some variable filtering effects to the injected noise + the noise present in the input signal. Dunno if such a time-varying filter effect would be audible, but seems possible.

****

The BBD was a fixed-length delay, constantly being resampled at different clock rates in the case of chorus/flange. That is slightly different than digitally emulating it with a variable-length computer memory delay.

Maybe it would behave "better" or more realistic using a fixed delay buffer in the computer, and resampling the input and output to the fixed-size buffer. With crappy stairstep resampling and crappy output filters. But programming it thataway sounds like it would be a lot of work. :)

Post

I use fixed delay buffer and resample at fractional "tick" position both on input and output. Works perfectly.

btw. seems my theory with the loading curve is right. I'm getting the described kind of distortion and it's almost level-independent... but it's very expensive since every cell is modelled separately.
I also modelled 2 stage cell. So transition is done in 2 cycles.
giq

Post

itoa wrote:I use fixed delay buffer and resample at fractional "tick" position both on input and output. Works perfectly.

btw. seems my theory with the loading curve is right. I'm getting the described kind of distortion and it's almost level-independent... but it's very expensive since every cell is modelled separately.
I also modelled 2 stage cell. So transition is done in 2 cycles.
Kewl!

Post

JCJR, I need to understand more deeply what is happening.

Could you help me with this on an idealised model?
bbd-basic.png
1. Say C0 is loaded with value 0.3 V, clk1 is down, clk2 is up (say 1 V).
2. Now we switch clocks: clk1 is up (1 V) and clk2 is down (0 V).

So C0 and C1 are connected now. C0 is loaded to 0.3 V, but C1 is connected to the clock (1 V here).
What voltages will be on C0 and C1 when the transition is done? Will C1 be loaded to 0.7 V?
giq

Post

Hi Itoa

Been a long time since I paid attention and wasn't very expert ever. Here are some wild guesses of which you can be very suspicious.

Here is a SAD 1024 appnote that looks like the one I recall-- http://thmq.mysteria.cz/em1022/SAD1024.pdf

It is a little different from that ElectroSmash illustration, but the difference probably isn't significant. There are "dummy FETs" inserted between each S&H section, but those dummy FETs are "always on" so probably act exactly like resistors between each section. Below I'll reference the circuit in that SAD 1024 appnote rather than the one you posted.

The chip runs on 10 or 15 volts Vdd and ground, single ended power supply. The clocks are expected to be about the same level as the Vdd so far as I can see. That Vbb can be equal to Vdd but they say its better that Vbb be about 1 volt lower than Vdd.

These are most likely N-channel MOSFETs. N-channel MOSFETs are smaller and therefore cheaper than P-channel. Because there is no arrow on the gate terminal of the transistors (like JFET symbols have), that usually means that either of the non-gate terminals is just as happy to be drain or source. The deal is, with an N-channel FET, if the gate is several volts BELOW the source voltage, it pinches off the transistor and it has such high resistance it's turned off like a switch. If the gate voltage is EQUAL to or ABOVE the source terminal voltage, it opens up the FET, lowering the resistance. A fairly large positive voltage turns the FET on as hard as it will go, minimum resistance.

The switching polarity is opposite with P channel FETs. Low voltage turns em on, and high voltage turns em off.

OK that SAD 1024 appnote has the clock wiring reversed compared to your posted circuit, though it probably doesn't matter. The appnote shows the input fet controlled by clock 1, and the appnote says that the input FET turns on and charges the first capacitor when clock 1 goes high. So that sounds like an N channel device if it turns on when the clock goes high.

The intermediate "buffer" FETs between each S&H stage are always turned on by that Vbb which is equal or nearly equal to the main Vdd positive power input. Another place in the appnote, it says the chip's output FETs have about 400 ohm impedance. That is no guarantee that all those "buffer" FETs also have a turned-on equivalent resistance of 400 ohms, but maybe somewhere in the same ballpark. Maybe Reticon wanted resistors between each stage, and it was cheaper to put always-on FETs in those locations, than to etch actual resistors on the chip die. Or maybe they have some other purpose, dunno. So far as I can tell they ought to be acting just like fixed resistors. Maybe some characteristic of improved linearity is imparted by those "buffer" FETs, dunno. Maybe the resistance keeps the chip from dumping so much current due to clocking, so that the chip will run cooler.

I didn't notice any hint in the appnote about what value all those on-chip S&H capacitors happen to be. Probably not very big.

The appnote says typical audio input bias is half Vdd, and you can get 1 V peak-to-peak at 1 percent distortion, or 2 V peak-to-peak if you just don't care how nasty it gets. Overall the specsheet paints a rather rosy optimistic picture, which was common on many chips back in the day, when they really were not so impressive in actual use. :)

Post

OK, imagine we are clocking the chip but no signal is playing. Just a silent input. The input is capacitively coupled into a couple of resistors and a trimmer resistor so the input is at optimum bias. Let's say the chip is running on 12 volts, the input bias is 6 volts, and clock 1 & clock 2 are 12 volt square waves 180 degrees out of phase.

Clock 1 goes high and clock 2 goes low. Clock 1 turns on the input FET and maybe all the FETs have a turned-on resistance of 400 ohms. Clock 2 goes low, turning off the second S&H stage. So the input voltage doesn't have anywhere to go except charging that first input capacitor thru 400 ohms or whatever is the impedance of the turned-on input FET. Since it is probably a tiny capacitor it probably gets purt close to the input voltage before the clocks cycle and the input FET turns off.

Clock 2 goes high when clock 1 goes low. The first cap is only connected to that second-stage S&H cap, maybe thru 400 ohms of the buffer FET, and another 400 Ohms of the second stage switching FET.

Notice that the clocks are not only connected to the FET gates. The clocks are also connected to the subsequent S&H capacitors! That second capacitor doesn't have a discharge path to ground. It is disconnected from the third S&H stage and it is disconnected from the input.

So when clock 2 jumps to +12 volts, both terminals of the second capacitor will jump to +12 volts, because there is no hipass discharge path for the second capacitor.

Because the first capacitor is at 6 volts, when the second capacitor jumps to 12 volts, I think that will make the second capacitor want to "suck dry" the first capacitor. The second capacitor will suck all the charge out of the first capacitor and pull the first capacitor up to 12 volts as well. Or real close.

So when the clocks toggle again, clock 2 goes down and the second S&H switch turns off-- The second cap will get hit with a 12 volts - to - zero volts transition, but since it sucked off the first cap's charge, this will drop cap 2 down to the same 6 volts that was on cap 1. And at the same time clock 1 turns on S&H switch 3 and spikes cap 3 with +12 volts, so cap 3 starts sucking cap 2 dry!

Maybe something else is happening, but thats the best I can figger ATM.

If there is signal on the input, then each subsequent cap would be "sucking dry" voltages not exactly the 6 volt input bias. So the signal would get pulled along for the ride when each cap sucks its preceding cap dry. Or something like that. :)

Post

It is more than slightly confusing.

For instance, if cap #2 never gets connected to cap #1-- If cap #2 happens to be at 6 volts when clock 2 is at ground-- Then when clock 2 toggles to +12, with no load present in the circuit to discharge cap #2-- Then cap #2 would jump to 18 volts. But then when clock 2 toggles back to ground, cap #2 would jump back to 6 volts again. Or whatever voltage that cap #2 happened to have, the last time its clock was at ground.

If both cap #1 and cap #2 are at 6 volts-- When clock 2 goes high and also turns on the second stage switching FET-- Initially cap #2 would rise to 18 volts, and there would be charging of cap #1, pulling electrons out of cap #1, initially a difference of 12 volts pulling electrons thru the resistance of the switching and buffer FETs.

Maybe an initial 12 volt differential thru maybe 800 ohms, into the grounded cap #1. The initial 6 volts at cap #1 would rise and the voltage of cap #2 would fall, at whatever time constants of the caps and the resistance of the FETs.

And then whatever voltage cap #2 has attained when the clock cycles again, that voltage would drop -12 volts when the clock 2 goes back to ground. Subtracting 12 volts from whatever voltage that cap #2 has attained when the clock toggles.

Then cap #3 (and all the other caps) would be in the same situation-- 12 volts added to whatever voltage that cap #3 happened to be at, and that voltage pulling electrons thru the resistance of the FETs, from the now ground-referenced cap #2.
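The back-and-forth above can be sanity-checked numerically. With two equal capacitors connected through the FETs' on-resistance, charge just flows until the voltages meet at the average; the 800 ohms below is the ballpark guess from the post, and the capacitance is invented for illustration:

```c
#include <math.h>

/* Two equal capacitors tied together through resistance r:
   integrate the sharing current in small time steps. With equal
   caps the voltages settle at the average, and the total charge
   (here, the voltage sum) is conserved. */
void share_charge(float *v1, float *v2, float r, float c,
                  float t, int steps)
{
    float dt = t / (float)steps;
    for (int i = 0; i < steps; ++i) {
        float cur = (*v2 - *v1) / r;   /* current from cap 2 to cap 1 */
        *v1 += cur * dt / c;
        *v2 -= cur * dt / c;
    }
}
```

So if cap #1 sits at 6 V and cap #2 gets bounced up to 18 V by the clock edge, the pair settles toward 12 V each, which then drops by 12 V on the opposite clock edge-- consistent with the "suck dry" picture above.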
