Fragment is an online collaborative additive/spectral (and soon granular) live-coding web synthesizer. Visuals are created by a live GPU script which accepts many sorts of inputs (webcam, images, etc.); the visuals are then fed into an additive or granular synthesis engine in real time.
This video demonstrates the upcoming granular synthesis engine (Fragment Audio Server only), which adapts Fragment's live-coding/visual workflow to granular synthesis.
Granular synthesis is a powerful sound synthesis method that operates on the microsound time scale.
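To make the microsound idea concrete, here is a minimal granular synthesis sketch in Python (not Fragment's actual engine): short windowed slices of a source sound are scattered and overlap-added into an output buffer. The sample rate, grain length, and sine source are illustrative choices.

```python
import math
import random

SR = 44100  # sample rate (Hz)

def hann(n):
    """Hann window of length n, used to avoid clicks at grain edges."""
    return [0.5 - 0.5 * math.cos(2 * math.pi * i / (n - 1)) for i in range(n)]

def make_grain(source, start, length):
    """Cut a windowed grain out of a source sample."""
    w = hann(length)
    return [source[(start + i) % len(source)] * w[i] for i in range(length)]

# Source: one second of a 220 Hz sine, standing in for a loaded sample.
source = [math.sin(2 * math.pi * 220 * t / SR) for t in range(SR)]

# Scatter short grains (1-50 ms, the range Fragment uses) into an output buffer.
out = [0.0] * SR
grain_len = int(0.025 * SR)  # 25 ms grain
random.seed(0)
for _ in range(200):
    pos = random.randrange(SR - grain_len)
    grain = make_grain(source, random.randrange(len(source)), grain_len)
    for i, s in enumerate(grain):
        out[pos + i] += s  # overlap-add
```

Because each grain is faded in and out by the window, hundreds of them can overlap without audible discontinuities; the density and position of grains shape the resulting texture.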
How does it work? The Fragment Audio Server loads all audio files found in a "grains" folder, tries to find their fundamental frequency, and uses them as synthesis sources. Each horizontal line is mapped to a different sample, the vertical axis represents frequency, and time runs along the horizontal axis. Grains of 1 to 50 ms are then played back by the synthesizer from the pixel data, which represents amplitude, all received in real time from the web application.
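The pixel-to-grain mapping described above can be sketched as follows. This is an illustrative guess at the scheme, not Fragment's actual code: the sample names, the row-cycling sample assignment, and the logarithmic frequency mapping (BASE_HZ, OCTAVES) are all hypothetical parameters.

```python
HEIGHT = 8                             # pixel rows in the score image (assumed)
SAMPLES = ["vocal.wav", "flute.wav"]   # hypothetical files from the "grains" folder
BASE_HZ, OCTAVES = 55.0, 7             # hypothetical vertical frequency range

def row_to_mapping(y):
    """Map a pixel row to (sample index, frequency): rows cycle through the
    loaded samples, and the vertical axis spans frequency logarithmically."""
    sample = y % len(SAMPLES)
    freq = BASE_HZ * 2 ** (OCTAVES * y / (HEIGHT - 1))
    return sample, freq

def column_to_events(column):
    """One column of pixel amplitudes (one time step) -> grain trigger events."""
    events = []
    for y, amp in enumerate(column):
        if amp > 0.0:  # a non-black pixel triggers a grain at that amplitude
            sample, freq = row_to_mapping(y)
            events.append({"sample": SAMPLES[sample], "freq": freq, "amp": amp})
    return events

# A column where only the bottom and top rows are lit:
events = column_to_events([1.0, 0, 0, 0, 0, 0, 0, 0.5])
```

Each incoming frame of pixels would be processed column by column this way, so the image effectively acts as a real-time score driving the grain scheduler.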
The granular synthesis engine is still in its infancy and this is an early work-in-progress prototype; many things might change.
Fragment Audio Server is enabled and its audio output is fed into a DAW (Renoise) running in the background, which applies some effects (reverb); many samples are loaded (vocals, flute, orchestra, etc.).
Renoise : https://www.renoise.com/