i'm testing a CICDecimator i've found online, which seems to "sound" pretty cool as a decimator for an oversampling routine.

here's the complete code:


```
#include <cassert>
#include <cmath>
#include <cstdint>

struct CICDecimator {
    static constexpr int64_t scale = ((int64_t)1) << 32;
    int mStages;
    int mFactor = 0; // initialized, so the comparison in the constructor is well-defined
    float mGainCorrection;
    int64_t *pIntegrators;
    int64_t *pCombs;

    CICDecimator(int stages = 4, int factor = 8) {
        assert(stages > 0);
        assert(factor > 0);
        mStages = stages;
        pIntegrators = new int64_t[mStages + 1]{};
        pCombs = new int64_t[mStages]{};
        if (mFactor != factor) {
            mFactor = factor;
            // CIC DC gain is factor^stages; divide it back out
            mGainCorrection = 1.0f / (float)(pow(mFactor, mStages));
        }
    }
    ~CICDecimator() {
        delete[] pIntegrators;
        delete[] pCombs;
    }
    // consumes mFactor input samples, returns one decimated output sample
    float process(const float *buffer) {
        for (int i = 0; i < mFactor; i++) {
            pIntegrators[0] = buffer[i] * scale;
            for (int j = 1; j <= mStages; j++) {
                pIntegrators[j] += pIntegrators[j - 1];
            }
        }
        int64_t s = pIntegrators[mStages];
        for (int i = 0; i < mStages; i++) {
            int64_t t = s;
            s -= pCombs[i];
            pCombs[i] = t;
        }
        return mGainCorrection * (s / (float)scale);
    }
};
```

i think it converts the float to int64_t (multiplying by scale, and then dividing the comb output s by scale back to float) to improve precision when summing/subtracting in the integrators/combs. is that correct?

but why does it need int64_t and not (for example) int32_t? would precision/performance suffer? something like:


```
static constexpr int32_t scale = ((int32_t)1) << 16;
...
int32_t *pIntegrators;
int32_t *pCombs;
...
```

thanks to all