Fragment is a collaborative cross-platform audiovisual live coding environment with a pixel-based, real-time image-synth approach to sound synthesis: the audio is driven by pixel data produced on the graphics card by live GLSL code. Everything is based on pixels.
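To make the pixels-to-sound idea concrete, here is a minimal sketch of additive synthesis driven by a column of pixels, assuming (as an illustration, not as Fragment's exact mapping) that a pixel's vertical position selects an oscillator frequency and its intensity sets that oscillator's amplitude. The function name and parameters are hypothetical:

```python
import math

def pixels_to_samples(column, sample_rate=44100, duration=0.01,
                      base_freq=55.0, octaves=8):
    """Additive synthesis sketch: sum one sine oscillator per pixel row.

    `column` is a list of grayscale pixel intensities in 0.0-1.0;
    row index maps (logarithmically) to frequency, intensity to amplitude.
    """
    height = len(column)
    samples = []
    for n in range(int(sample_rate * duration)):
        t = n / sample_rate
        s = 0.0
        for row, amp in enumerate(column):
            if amp <= 0.0:
                continue  # dark pixels are silent
            # spread rows over `octaves` octaves above the base frequency
            freq = base_freq * 2.0 ** (octaves * row / height)
            s += amp * math.sin(2.0 * math.pi * freq * t)
        samples.append(s / max(1, height))  # crude normalization
    return samples

# a column lighting a single row produces a pure sine tone
tone = pixels_to_samples([0.0] * 32 + [1.0] + [0.0] * 31)
```

In Fragment itself this mapping runs on the GPU: the live GLSL shader writes the pixels, and the synth reads them back every frame to update the oscillator bank.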
Fragment can produce fast, high-quality additive and granular synthesis simultaneously, with re-synthesis support. Its many features make it a joy to produce any kind of sound or visual, and it is aimed at artists seeking a creative environment with few limitations to experiment with: a programmable noise-of-all-kinds machine.
This is a video of a one-hour jam on ambient soundscapes, made of (relatively simple) granular and additive synthesis patches, with live compositing visuals produced from two videos.
It demonstrates the granular synthesis features, video looping/compositing, and the mixing of sound synthesis methods, some of the new features of the massive one-year anniversary update that was just released.
In this video I also use the new distributed sound synthesis tool, which can split the sound synthesis computation across multiple machines on the network or across multiple cores. Four Fragment Audio Servers were running on the cores of an i7 CPU to provide polyphonic, multitimbral (4 timbres) additive + granular synthesis with more than 150 grains playing at the same time.
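The idea behind splitting the synthesis over several servers can be sketched as follows; this is a toy illustration, assuming each server renders a disjoint slice of the oscillator bank and a mixer sums the partial buffers (all names and the slicing scheme here are hypothetical, not Fragment's actual protocol):

```python
import math

SAMPLE_RATE = 44100

def render_slice(freqs_amps, n_samples):
    """Render the sum of one slice of sine oscillators (one 'server')."""
    out = [0.0] * n_samples
    for freq, amp in freqs_amps:
        for n in range(n_samples):
            out[n] += amp * math.sin(2.0 * math.pi * freq * n / SAMPLE_RATE)
    return out

def split(oscillators, k):
    """Deal the oscillator list round-robin into k roughly equal slices."""
    return [oscillators[i::k] for i in range(k)]

def distributed_render(oscillators, n_servers, n_samples):
    # each server computes its slice independently (in parallel in practice)
    parts = [render_slice(s, n_samples)
             for s in split(oscillators, n_servers)]
    # the mixer sums the partial buffers sample by sample
    return [sum(p[n] for p in parts) for n in range(n_samples)]

oscillators = [(110.0 * k, 1.0 / k) for k in range(1, 9)]  # 8 harmonics
mixed = distributed_render(oscillators, 4, 64)
reference = render_slice(oscillators, 64)
# splitting the work does not change the result (up to float rounding)
```

Because additive synthesis is a plain sum of independent oscillators, the workload partitions cleanly, which is what makes running several Fragment Audio Servers on separate cores or machines effective.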
Some reverb/delay is added inside the Renoise DAW, and the MIDI sequences also come from Renoise.