- KVRist
- 132 posts since 12 May, 2012

does anyone here know what the consequences might be if one were to sample frequency bins at a subharmonic frequency instead of twice per bin or whatever?

i have a guess that it would alias, but that the aliasing would be consonant and thus not so disruptive for analysis?

is that similar to the aliasing you get with wavelet transforms?

thanks so much.

- KVRAF
- 4952 posts since 11 Feb, 2006, from Helsinki, Finland

kamalmanzukie wrote: does anyone here know what the consequences might be if one were to sample frequency bins at a subharmonic frequency instead of twice per bin or whatever?

Judging by the fact that there are 52 views and no replies, I suspect I might not be the only one who has trouble understanding what exactly you mean by "sample frequency bins at a subharmonic frequency"?

I mean... I'm not even sure if you're thinking about modifying the DFT or just trying to do some spectral processing. As far as spectral aliasing goes, it works the same as regular aliasing: the folding just happens in the "other domain". Since the DFT is (essentially) its own inverse, the choice of "spectrum" vs. "time domain" is mathematically just a matter of definition really.
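To make the "regular aliasing" part concrete: decimating a tone without filtering first folds it straight back below the new Nyquist. A throwaway numpy sketch (the rates and tone are arbitrary, purely illustrative, not from the post):

```python
import numpy as np

fs = 1000.0                         # original sample rate (Hz), arbitrary
t = np.arange(1000) / fs
x = np.sin(2 * np.pi * 400.0 * t)   # a 400 Hz tone

# decimate by 2 with no anti-alias filter: new rate 500 Hz, Nyquist 250 Hz
y = x[::2]
Y = np.abs(np.fft.rfft(y))
alias_hz = np.argmax(Y) * 500.0 / len(y)
print(alias_hz)                     # the tone folds to 500 - 400 = 100 Hz
```

The same folding rule applies in the spectrum when you subsample bin outputs, just with the roles of the two domains swapped.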

- KVRAF
- 1852 posts since 29 May, 2012

If we leave the confusing words aside and try to make sense of that post, this is what I would say:

If you want to increase the frequency-domain resolution of a DFT result, you take a larger window of samples in the time domain, and if more samples are not available, you fill the rest of the window with zeros. This doesn't result in aliasing and gives better resolution in the frequency domain. It's like upsampling in the time domain, i.e. you can always figure out the intersample values, because the samples are assumed to be taken from a band-limited continuous signal.
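As a quick illustration of the zero-padding idea (my own numpy sketch; the tone and window length are arbitrary): the padded transform has finer bin spacing, so the spectral peak lands closer to the true tone frequency.

```python
import numpy as np

fs = 1000.0                         # sample rate (Hz), assumed for illustration
n = 64                              # original window length
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 210.0 * t)   # a 210 Hz tone

# Plain DFT: bins are spaced fs/n = 15.625 Hz apart.
X = np.fft.rfft(x)

# Zero-pad to 8x the length: bin spacing becomes fs/(8n) ~ 1.95 Hz.
Xp = np.fft.rfft(x, n=8 * n)

peak_plain = np.argmax(np.abs(X)) * fs / n
peak_padded = np.argmax(np.abs(Xp)) * fs / (8 * n)
print(peak_plain, peak_padded)      # the padded estimate lands closer to 210 Hz
```

Note this interpolates the spectrum rather than adding new information: the underlying resolution is still set by the window length.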

~stratum~

- KVRist
- 132 posts since 12 May, 2012

excuse the vagueness. the problem exactly is i'm working with something kind of along the lines of a constant q transform, so the bins at the top get sampled more often than the ones at the bottom. the rate the ones at the top need to be sampled at to keep everything kosher gets pretty high up there, to the point that it causes problems for the analysis i'm doing and it's not great for cpu either

so what i was wondering is what i might expect if the bins at the top were sampled at a rate that was a subharmonic of the required sampling frequency - specifically some number of octaves lower. would the aliasing that was generated be 'masked', so to speak? would it be better to keep them filtered below half the actual bin sample rate?

all i am doing is comparing amplitudes, so i'm not worried about there being subharmonics in any inverse transform, cause im not even doing one.

as far as zero padding goes, i'm not even sure how that would work. i only very recently was able to conceptualize how windowing for constant q looks

- KVRAF
- 4952 posts since 11 Feb, 2006, from Helsinki, Finland

kamalmanzukie wrote: so what i was wondering is what i might expect if the bins at the top were sampled at a rate that was a subharmonic of the required sampling frequency - specifically some number of octaves lower. would the aliasing that was generated be 'masked', so to speak? would it be better to keep them filtered below half the actual bin sample rate?

Oh, I see... so what you probably want is known as "multi-rate filter banks", which should return at least a metric ton of Google results. The basic idea is to split the signal into multiple bands of equal bandwidth (eg. most commonly halves) and then decimate each band. You can then do this recursively until each of your bands is at a resolution suitable for what you want to do.
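A rough numpy sketch of that split-and-decimate recursion (the function names and the simple windowed-sinc halfband filter are my own illustration, not anything mystran specified):

```python
import numpy as np

def halfband_fir(numtaps=63):
    """Windowed-sinc lowpass with cutoff at fs/4 (a crude halfband filter)."""
    n = np.arange(numtaps) - (numtaps - 1) / 2
    h = np.sinc(n / 2.0) / 2.0          # ideal cutoff at 0.25 * fs
    h *= np.hamming(numtaps)
    return h / h.sum()                  # normalize DC gain to 1

def octave_split(x, levels):
    """Recursively split off the top octave and decimate the remainder.

    Returns bands ordered high to low; each successive band covers half
    the bandwidth at half the sample rate of the one above it.
    """
    h = halfband_fir()
    bands = []
    for _ in range(levels):
        lo = np.convolve(x, h, mode="same")
        hi = x - lo                     # complementary highpass (top half-band)
        bands.append(hi)
        x = lo[::2]                     # decimate: lower half now fills the spectrum
    bands.append(x)                     # residual lowpass band
    return bands

# toy usage: split some noise into 4 octave bands plus a residual
x = np.random.default_rng(0).standard_normal(4096)
bands = octave_split(x, levels=4)
print([len(b) for b in bands])          # lengths halve: 4096, 2048, 1024, 512, 256
```

The point of the recursion is exactly the CPU issue above: the low bands run at ever lower sample rates instead of all bands running at the full rate.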

- KVRist
- 132 posts since 12 May, 2012

that's like sub-band coding, right? i know there are also ways to start out with a regular old fourier transform and warp it into constant q bandwidths, which i hate

i'm just using old fashioned heterodyning with proper filters and sines/cosines. i just decided to filter below half of whatever rate i'm subsampling at and it seems to work fine. it definitely seems that subsampling at some lower octave is generally friendlier than using some other random frequency, in that it may still add a low frequency image but at least the amplitude measurement is about what it should be
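A small numpy sketch of one heterodyne "bin" along those lines (all parameter values are made up for illustration, and the moving-average lowpass is just a stand-in for a proper filter): multiply by a complex oscillator at the bin center, lowpass below half the subsample rate, then subsample.

```python
import numpy as np

fs = 48000.0                          # assumed sample rate, purely illustrative
f0 = 6000.0                           # center frequency of one 'bin'
decim = 32                            # subsample the bin's output by this factor

t = np.arange(4096) / fs
x = 0.5 * np.sin(2 * np.pi * f0 * t)  # test tone sitting right on the bin center

# heterodyne: a complex oscillator shifts the bin center down to 0 Hz
baseband = x * np.exp(-2j * np.pi * f0 * t)

# crude moving-average lowpass, keeping only content below ~fs / (2 * decim)
kernel = np.ones(2 * decim) / (2 * decim)
filtered = np.convolve(baseband, kernel, mode="same")

# now taking every decim-th sample is safe (no aliasing into the measurement)
envelope = np.abs(filtered[::decim])
env_mid = envelope[len(envelope) // 2]
print(env_mid)                        # ~0.25, i.e. half the tone's 0.5 amplitude
```

(The factor of a half comes from the complex heterodyne keeping only one of the real sine's two spectral lines; it's a fixed scale, so amplitude comparisons between bins still work.)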

- KVRist
- 132 posts since 12 May, 2012

some misgivings and anti-hunches about sampling structure for the freq domain led me back to trace their source. there was an old reaktor instrument i had made titled 'constant q(uestion mark)'

the file knew what it wanted to do, but in hashing out the structure the 'how' part got very muddled. the most generous thing a person evaluating it might say is 'bless their heart'

there was a bandpass bank where every band went through an allpass array afterward to try to get some quadrature-esque effect in the mix. these 'sub bands' were then sampled and held (at audio rate!) to downsample. not sure that this even conveyed any usable phase information since it was introduced afterward?

after being brutally and pointlessly subsampled, the bands were then subjected to an entirely new set of sine/cosine oscillators for 'resynthesis', me of course having no idea that this was pointless since the bandpassed bands were already oscillating around their center frequencies

trying the 'reverse' transform required things being hooked up backward or else the phase went the wrong way. also at this time i realized that conventional linear sampling would only produce what a fourier transform looks like if you throw away the phase, or The Wavetable for Terrible Sound, totally flouting the (logarithmic) bands they were originally filtered into

the workaround for this was also categorically, quantifiably wrong, but graciously ended up being right enough for a dopamine hit for someone on whom even the reasons why it now 'worked' were completely lost. it is fun to look back and have the perspective of 'look how stupid i was', and also to have the eventual insight, after getting signs flipped and observing how things go interestingly haywire, when at a point much later you finally do understand how and why something works