Fragment is an online collaborative additive / spectral / granular / subtractive / PM live-coding web platform: the spectrum is created by a live GPU script, and the generated visuals are then converted to audio.
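The visuals-to-audio idea can be sketched as additive synthesis driven by pixels: each image row is treated as one sine partial, with vertical position mapped to frequency and pixel brightness mapped to amplitude. This is a minimal illustration of the concept, not Fragment's actual engine; the logarithmic frequency mapping and all parameter names are assumptions.

```python
import math

def column_to_audio(column, base_freq=27.5, octaves=10,
                    sample_rate=44100, duration=0.01):
    """Render one image column as a sum of sine partials.

    Each row maps to a frequency on an assumed logarithmic scale
    (base_freq .. base_freq * 2**octaves); the pixel brightness
    (0.0-1.0) sets that partial's amplitude.
    """
    height = len(column)
    n = int(sample_rate * duration)
    lit = sum(1 for a in column if a > 0.0)
    samples = []
    for i in range(n):
        t = i / sample_rate
        s = 0.0
        for row, amp in enumerate(column):
            if amp <= 0.0:
                continue
            # top row = highest frequency, bottom row = lowest
            frac = 1.0 - row / max(1, height - 1)
            freq = base_freq * (2.0 ** (octaves * frac))
            s += amp * math.sin(2.0 * math.pi * freq * t)
        samples.append(s / max(1, lit))  # crude normalization
    return samples

# A column with a single lit pixel yields a single sine partial.
audio = column_to_audio([0.0, 1.0, 0.0, 0.0], duration=0.001)
```

Scrolling the image column by column and rendering each one this way is what turns a drawn or recorded picture into a continuous sound.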
Fragment is programmable noise-of-all-kinds software that excels at sound design by combining two of the most powerful synthesis processes with a live-coding aspect; Fragment also excels at visuals.
This is part 1 of a live additive + granular (and sometimes subtractive) ambient soundscapes session demonstrating audio-reactive visuals.
A MIDI sequence is played and recorded as an image inside Fragment. The resulting image is then drawn onto the canvas twice at different offsets, the images are scrolled, and a compositing function is applied between the images and the video. This mostly results in the audio sounding "bigger" when the brightness is high, such as on the antenna/blue-sky part; the effect can be heard most clearly on the antenna part @4:06. Different compositing functions are used throughout the video.
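A compositing function here is just a per-pixel blend between the recorded-sequence image and the video frame; since pixels become partial amplitudes, a blend that brightens pixels makes the sound "bigger". The mode names below are generic blend operations chosen for illustration; Fragment's actual compositing functions may differ.

```python
def composite(image_px, video_px, mode="add"):
    """Blend one image pixel with one video pixel (values in 0.0-1.0)."""
    if mode == "add":        # bright video lifts the pixel -> louder partial
        return min(1.0, image_px + video_px)
    if mode == "multiply":   # darkens unless both layers are bright
        return image_px * video_px
    if mode == "screen":     # inverse multiply, always brightens
        return 1.0 - (1.0 - image_px) * (1.0 - video_px)
    raise ValueError("unknown mode: " + mode)

# A bright video region (e.g. blue sky) pushes the composited
# pixel toward full brightness, raising the matching partial.
bright = composite(0.4, 0.8, "add")
```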
Part 1 has MIDI sequences running on top of the recorded sequence.
Part 2 has audio-reactive visuals, producing sounds directly from the video pixels with sporadic MIDI playing; it will be available soon.
The sound in this video is produced by 4 audio server instances (distributed synthesis) on a single i7 6700 machine; the overall CPU load can be seen in the top-left corner along with the number of "partials" for each channel.
Most of the sounds heard in this video have a delay applied, using a simple pixel-based delay.
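A pixel-based delay can be sketched as a feedback line operating on pixel data rather than on audio: each frame's brightness is mixed with a decayed copy of the frame some fixed number of frames earlier, and because pixels become partial amplitudes this is heard as an audio delay. The frame count and feedback amount below are illustrative assumptions.

```python
def pixel_delay(frames, delay_frames=3, feedback=0.5):
    """Mix each pixel value with a decayed copy from `delay_frames` earlier.

    `frames` is one pixel's brightness over time (0.0-1.0 per frame);
    the output is clipped to 1.0 like a pixel value.
    """
    out = []
    for i, px in enumerate(frames):
        delayed = out[i - delay_frames] if i >= delay_frames else 0.0
        out.append(min(1.0, px + feedback * delayed))
    return out

# A single bright frame produces echoes decaying by half every 3 frames.
echoes = pixel_delay([1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])
# → [1.0, 0.0, 0.0, 0.5, 0.0, 0.0, 0.25, 0.0]
```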
9 samples are loaded and used in this video; all of them are single sounds of real or virtual instruments.