Noise/shaders for Audio

DSP, Plugin and Host development discussion.

Post

Hi,
Are there any examples of noise shaders that have been used for audio? The only one I know of is Perlin noise. Could you use cellular noise like Worley for audio? I'm interested in noise both as an audio signal and as a modulator of other waveforms.

In graphics, some noise functions can give interesting results like mountains or liquid textures. Are there any examples for audio where you can get something that sounds more interesting than rain?

Post

Noise is noise; I'm not sure why you'd want a shader to do it massively in parallel?
There are shaders that have been used, but they are not real-time yet. I have done quite a few, including filtering - where you have to reproduce a large waveform history on every pixel, which gets quite expensive...

Post

Xenobioz, striving for the lead at this game I see.

Noise is primarily going to be either Perlin or Voronoi. However, if you are going to create a sound texture oscillator, I suggest you make heavy use of saws and squares. All the textures you are going to find use scalar recursion or repetition to find their patterns.

Divide a square into a grid with whatever number of points along both x and y (and as many dimensions as you want).

For Perlin:
For each point, assign a random value, then subdivide between these points and repeat the process at a smaller scale; this will build up complexity. You can use single points per value or two points (diminishing the range until it falls below a specified value).
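A minimal Python sketch of that layering idea, with random values at integer grid points blended and summed at progressively smaller scales (the constants and helper names here are illustrative, not from any particular implementation):

```python
import math
import random

def value_noise_1d(x, seed=0):
    """A random value at each integer grid point, smoothly blended between."""
    def rand_at(i):
        # deterministic per-point value via a seeded RNG (a stand-in for a hash)
        return random.Random(i * 1_000_003 + seed).uniform(-1.0, 1.0)
    i = math.floor(x)
    t = x - i
    t = t * t * (3.0 - 2.0 * t)  # smoothstep blend between neighbours
    return rand_at(i) * (1.0 - t) + rand_at(i + 1) * t

def fractal_value_noise(x, octaves=4):
    """Repeat the process at progressively smaller scales and sum the layers."""
    total, amp, freq, norm = 0.0, 1.0, 1.0, 0.0
    for _ in range(octaves):
        total += amp * value_noise_1d(x * freq)
        norm += amp
        amp *= 0.5    # each finer layer contributes half as much...
        freq *= 2.0   # ...at twice the detail
    return total / norm  # keep the result roughly in [-1, 1]
```

Sampling `fractal_value_noise(phase)` along a moving phase gives a random signal directly usable as audio or as a modulator, and the same layering works in any number of dimensions.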

For Voronoi:
Offset each point in the grid by a random angle and a random amount, limiting the radius so as not to cross other points. After each offset is created, use the distance function (dT^2 = d1^2 + d2^2 + ...). Use the minimum to determine which point any given position is closest to.
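A rough Python version of that scheme: jitter one feature point per grid cell, then take the minimum distance over the cell and its neighbours (the hash constants are arbitrary, chosen only for illustration):

```python
import math
import random

def worley_2d(x, y, seed=0):
    """Distance to the nearest jittered feature point (Worley/cellular noise)."""
    def feature_point(cx, cy):
        # one random point per cell; keeping it inside its own cell is the
        # "limit the radius" rule -- points can never cross into other cells
        rng = random.Random((cx * 73856093) ^ (cy * 19349663) ^ seed)
        return cx + rng.random(), cy + rng.random()
    cx, cy = math.floor(x), math.floor(y)
    best = float("inf")
    for dx in (-1, 0, 1):        # the nearest point may sit in a neighbour cell
        for dy in (-1, 0, 1):
            px, py = feature_point(cx + dx, cy + dy)
            d = math.hypot(x - px, y - py)   # the distance function above
            best = min(best, d)
    return best
```

Tracing a path through `worley_2d` gives the characteristic ridged "cell border" texture; using the difference between the two smallest distances is a common variation.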

Most noise uses a variation of these two algorithms; it is essentially about scale and repetition, seeded by controlled noise. Where multiple dimensions interact, be sure there is at least one multiplication operation (or equivalent) to bind the dimensional data together, or use a bound 2D operation.

This guy has many tutorials.
http://www.iquilezles.org/www/index.htm

Additionally, here in the audio field we can utilize variations on the 2D water effect to create a number of analog phenomena; in many regards, it has similarities to z-transform family filters.

I find that the optimal concept, though, is a form of compositional pattern-producing networks, very similar to Funxion in some regards, but with expansion into more dimensions (input parameters A, B, C, D... any number of input dimensions). This is nothing more than expanding a synth to be purely made of functions, specifically limited to the bounds of 0 to 1, or -1 to +1.
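As a toy illustration of the "synth made purely of bounded functions" idea, here is a tiny fixed network in Python; the node choices are entirely made up (a real CPPN would evolve this graph), but every stage maps bounded inputs to a bounded output:

```python
import math

def cppn_sample(t, a, b):
    """CPPN-style patch: t is time/phase, a and b are extra input dimensions
    (e.g. macro knobs). Every node output stays within [-1, 1]."""
    h1 = math.sin(math.pi * (t + a))          # periodic node
    h2 = math.tanh(3.0 * (t * b - h1))        # squashing node keeps bounds
    h3 = abs(h1 * h2) * 2.0 - 1.0             # fold back into [-1, 1]
    return math.tanh(h1 + 0.5 * h3)           # final bounded mix
```

Sweeping `t` at audio rate yields a waveform; sweeping `a` or `b` slowly morphs its timbre, which is the "move freely within an audioscape" idea in miniature.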

Additionally, I love the idea of an animated convolution filter, with an oscillator made specifically to generate animated impulse responses to replace the textbook filter. It would consist of nothing but an oscillator and a single convolution filter, creating multiple effects in a single convolution, including the best filters around.
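A naive sketch of that idea in Python: an "oscillator" regenerates a short impulse response every block, and the signal is convolved with whichever IR is current. Direct-form convolution is used for clarity; a real implementation would use partitioned FFT convolution and crossfade between IRs to avoid clicks. All parameter values are invented for illustration:

```python
import math

def make_impulse_response(frame, length=32, sr=48000.0):
    """Oscillator-generated IR: a decaying sine whose pitch drifts per frame."""
    freq = 300.0 + 200.0 * math.sin(frame * 0.1)   # the animated parameter
    return [math.exp(-4.0 * n / length) * math.sin(2.0 * math.pi * freq * n / sr)
            for n in range(length)]

def animated_convolve(signal, block=64, ir_length=32):
    """Convolve each block of the input with a freshly generated IR."""
    out = [0.0] * (len(signal) + ir_length)
    for frame, start in enumerate(range(0, len(signal), block)):
        ir = make_impulse_response(frame, ir_length)
        for i in range(start, min(start + block, len(signal))):
            for k, h in enumerate(ir):
                out[i + k] += signal[i] * h
    return out
```

Feeding in an impulse returns the current IR itself, which makes the behaviour easy to inspect while designing the IR oscillator.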

I've been studying the best oscillators out there, and I intend to create something good here. Ultrasonic frequencies are perceivable to the emotional body, per Patrick Flanagan's research, so there should not be a 20k cutoff where the frequencies are useful and beneficial to the person and their environment.

I might recommend looking at Filter Forge for scripting in the image domain and Blue Cat's Plug'n Script for quick prototyping in audio, or there is MATLAB. It's best to view as much information from both domains, with as many transforms and distortions as possible, to reveal hidden details and flaws to fix. Distortion and symmetry flips can create more complexity in textures.

I have a filter that I wrote that uses a set of hand-held waveshapes and uses operations to move them together. Additionally, you can create math functions that do not exist by creating a 2D image and keeping the bounds from 0 to 1 (or -1 to +1 if you want to get wild about it).

Use distortion maps (offset maps generated by noise) to add further complexity for some textures.

https://en.wikipedia.org/wiki/Compositi ... ng_network
An evolutionary method can quickly be used to discern which of two candidates, left or right, is better. Store this feedback and use it as a breadcrumb trail to quickly find favorable regions: functions that create not only better, far more detailed and complex waveforms in many dimensions to move within freely and really move within an audioscape, but full loops that pull everyone in with their natural savor, and eventually full songs that connect the human spirit with higher and higher forms of order.

One step at a time. :phones:
SLH - Yes, I am a woman, deal with it.

Post

:phones: :arrow:
Last edited by Vertion on Sat Jun 08, 2019 10:49 am, edited 1 time in total.

Post

Here are some examples of audio shaders using noise:
Value noise added over several fractal levels of motion to create wind:
https://www.shadertoy.com/view/Xls3D2
Used to filter a saw wave:
https://www.shadertoy.com/view/4sjSW1
For bird pitch sounds and wind in trees:
https://www.shadertoy.com/view/4tl3RM
You can look at the code on the Sound tab, and you'll probably need Chrome or Firefox to see/hear them.

Post

Thanks for the replies and links. I'll study those links and see where I get from there.

Post

It's all pretty cool that it's browser-based, and you can use the search field to find all the other shaders with 'sound' tags...
*edit* I mean like this: https://www.shadertoy.com/results?query ... oundoutput
...Otherwise you'll get soundCloud audio listed as well.
Another one well worth looking at:
https://www.shadertoy.com/view/ldfSW2
Last edited by DaveHoskins on Fri Nov 17, 2017 3:58 pm, edited 2 times in total.

Post

Thanks, there is some amazing stuff there including yours. Is the audio running on the CPU or GPU?

Post

Thanks, yeah, some of the guys on there have been in the demo scene for quite a while now and are very experienced in shader work. The site is written in JavaScript, and almost everything runs on the GPU. For the audio, every pixel calculates 4 samples (RGBA), so it's all in parallel. The problem with that is that you can't have variables that track through time (it's stateless), hence the need to do a loop for filtering and replicate the waveform's history for every 'pixel' rendered. It also means it's not real-time audio creation, because the audio is just rendered from the previously created textures, copied into main memory and then played using the JavaScript media stuff.
Last edited by DaveHoskins on Fri Nov 17, 2017 4:14 pm, edited 1 time in total.

Post

very cool!

Post

DaveHoskins wrote:Thanks, yeah some of the guys on there have been in the demo scene for quite a while now and are very experienced...
I love the demoscene. May I?

https://www.youtube.com/watch?v=sxW6CW0RWlg

Post

Vertion wrote: For Perlin:
For each point, assign a random value, then subdivide between these points and repeat the process at a smaller scale; this will build up complexity. You can use single points per value or two points (diminishing the range until it falls below a specified value).
That's not Perlin noise, even though it's often mistakenly called so. What you describe is just fractal value noise and doesn't really have any of the nice properties of Perlin noise.

Perlin noise is gradient noise. You assign a random gradient at every point on some grid (e.g. using a hash function, so you can evaluate it with random access), then use a polynomial fitted on the gradients to get the actual smooth noise function (see https://en.wikipedia.org/wiki/Perlin_noise). You can actually use the hashing approach for value noise too (to make it parallel), but there's more to Perlin.
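In 1D the gradient idea reduces to "a random slope at every integer, blended with a fade curve". A small Python sketch of that (the integer hash here is an arbitrary bit mixer for illustration, not Perlin's permutation table):

```python
import math

def perlin_1d(x, seed=0):
    """1-D gradient noise: random gradient per integer point, smooth blend."""
    def gradient(i):
        # arbitrary integer hash -> pseudo-random gradient in roughly [-1, 1]
        h = (i * 374761393 + seed * 668265263) & 0xFFFFFFFF
        h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
        return h / 0x7FFFFFFF - 1.0
    i = math.floor(x)
    t = x - i
    fade = t * t * t * (t * (t * 6.0 - 15.0) + 10.0)  # Perlin's quintic fade
    g0 = gradient(i) * t              # ramp from the left gradient
    g1 = gradient(i + 1) * (t - 1.0)  # ramp from the right gradient
    return g0 + (g1 - g0) * fade
```

Note that `perlin_1d` is exactly zero at every integer grid point, which is the regular zero-crossing, roughly band-pass behaviour that makes octave stacking work.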

Since the actual noise function goes through zero at regular intervals (the grid points, where only the gradients change), you'll get an approximately band-pass function where all the features are roughly the same size. You can then generate different "octaves" by using progressively smaller (or larger) grids to control the relative frequency balance of the result. By shaping the noise in various ways (before or after adding the octaves together) you can get a whole lot of different "textures" (and in fact it was invented for procedural texturing of graphics).

The original Perlin noise does the noise evaluation over a simple Euclidean grid, but in higher dimensions (which you might want even if your goal is 1D, see below) you probably want Perlin's later "simplex noise", which works on a simplex grid instead. It gives results that are closer to isotropic off-axis and is also cheaper to evaluate in multiple dimensions, although more complicated to implement: each simplex in d dimensions only needs d+1 points to evaluate, whereas a hypercube needs 2^d. If the math goes over your head, don't worry; there's free code on the web for both CPU and GPU if you search for "simplex noise".

Another cool thing about Perlin is that since it's random access, you can evaluate it over paths other than straight lines. So if you wanted to generate a 1D periodic function, you simply need to trace a circle on the 2D noise plane! If you then wanted to slowly morph this over time, you could use time as a third coordinate into a 3D noise volume. For 2D texture generation, you can get perfectly periodic textures in both directions by doing the circle-trace trick on both coordinates in a 4D noise hyper-volume (each source coordinate choosing 2 of the 4 coordinates).
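The circle trick is easy to try with any random-access 2D noise. Here is a Python sketch using simple smooth 2D value noise as a stand-in for Perlin (the trick itself doesn't care which smooth noise you use; the hash constants are arbitrary):

```python
import math
import random

def noise2(x, y, seed=0):
    """Smooth 2-D value noise (a stand-in for Perlin; any random-access
    smooth noise works for the circle-tracing trick)."""
    def rand_at(i, j):
        return random.Random(i * 2654435761 + j * 40503 + seed).uniform(-1, 1)
    i, j = math.floor(x), math.floor(y)
    tx, ty = x - i, y - j
    tx = tx * tx * (3.0 - 2.0 * tx)   # smoothstep in x
    ty = ty * ty * (3.0 - 2.0 * ty)   # smoothstep in y
    top = rand_at(i, j) + (rand_at(i + 1, j) - rand_at(i, j)) * tx
    bot = rand_at(i, j + 1) + (rand_at(i + 1, j + 1) - rand_at(i, j + 1)) * tx
    return top + (bot - top) * ty

def periodic_noise(phase, radius=1.5):
    """Trace a circle on the noise plane: exactly periodic in `phase`."""
    a = 2.0 * math.pi * phase
    return noise2(radius * math.cos(a), radius * math.sin(a))
```

A larger `radius` traverses more noise features per cycle, i.e. more harmonic content; adding time as a third coordinate (3D noise) gives the slow morph mentioned above.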

Besides Perlin, if you're going for periodic results and don't mind precalculating the function, another easy way to get fun results is to use an IFFT, where you pick some magnitude spectrum and then randomise the phases. This also works in multiple dimensions and can generate various randomised texture patterns that are hard to get by other means (search FFT on shadertoy and you should find something).
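A small Python illustration of the random-phase idea, using a naive inverse DFT for self-containedness (a real implementation would use an FFT; the period `n` and the magnitudes are arbitrary choices):

```python
import cmath
import math
import random

def random_phase_signal(magnitudes, n=64, seed=0):
    """One period of a waveform with the chosen harmonic magnitudes and
    randomised phases (the IFFT texture trick, via a naive inverse DFT)."""
    rng = random.Random(seed)
    spectrum = [0j] * n
    for k, mag in enumerate(magnitudes, start=1):   # harmonics 1..len(mags)
        phase = rng.uniform(0.0, 2.0 * math.pi)
        spectrum[k] = mag * cmath.exp(1j * phase)
        spectrum[n - k] = spectrum[k].conjugate()   # mirror bin keeps it real
    return [sum(spectrum[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)).real / n
            for t in range(n)]
```

Loop the result as a wavetable, or regenerate with different seeds and crossfade for slowly evolving textures with the same spectral envelope.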

Post

mystran wrote: That's not Perlin noise, even though it's often mistakenly called so. What you describe is just fractal value noise and doesn't really have any of the nice properties of Perlin noise.
Yes, you are correct, and there are many variations on his original algorithm. You know, those who started as demo coders always look for ways to save one more cycle per instruction. But here we were talking about the original algorithm. My intent is to streamline knowledge and ideas for quicker absorption, and generalizing over common features between bodies of knowledge has proven an excellent method for conveying it in many of my own situations.

The first way I knew it, way back (from a very old computer graphics book... old old... decades ago), was as scalar 'octaves' that are then combined to make the final product. I eventually expanded on this and wrote my own variations to see what might happen. What happens when I do this? *zap* *computer smokes* ahh...

Side note: Mystran, what a wonderful name... since I name-dropped Renaissance. Might be a long shot, but it could be a Mystery: is Mystran really Tran from Renaissance? Oh, that would be awesome. If it's you, what ancient weapon and musical instrument go hand in hand?

Post

Vertion wrote: Side note: Mystran, what a wonderful name... since I name-dropped Renaissance. Might be a long shot, but it could be a Mystery: is Mystran really Tran from Renaissance? Oh, that would be awesome. If it's you, what ancient weapon and musical instrument go hand in hand?
Haha, no.. it's just a purely random fantasy name for an RPG character I came up with decades ago.. but at some point ages ago I started using it as a nickname and since it's more or less unique I just kept it.
