Coalescence is a Max For Live instrument. It is a concatenative multi-sampler that uses machine learning (a SOM neural network) to organize similar sample slices into clusters based on a chosen spectral feature. There are three playback modes that take advantage of these clusters in various ways: Point, Rings, and Paths. Additionally, external audio input can be routed for modulation and to control which audio slices play based on their similarity to the input. All of this combined with a robust modulation system makes Coalescence one beast of a compact sampler for many different situations!
Comes with: The device, user manual, 42 presets, 14 samples, Strange Mod modulator device (for one of the presets)
Version Info: Works with Live 10 and up!
Features: Supports dropping multiple samples, individually or as folders (up to 2000), for concatenative sampling. Each sample has individual parameters for pitch, volume, direction, and transient sensitivity
Neural Network (SOM) that organizes sample slices based on a chosen spectral feature and visualizes them in a 2D circle
Three playback modes:
Point: choose a single sample slice point to play from; MIDI notes repitch the playback. Closest to a classic sampler. Option for external audio input to control the lookup point based on similarity
Rings: MIDI pitches trigger circular ranges called Rings; when a Ring is triggered it plays a random slice from within its range. Great for drum kits, or any kind of sample slicing. There are auto and manual Ring creation options (a rough sketch of the Point and Rings lookup ideas follows this list of modes)
Paths: MIDI pitches trigger individual playback paths that can glide or jump through the sample slices. Great for creating sequences or all kinds of movement! Alternatively there is a Single path mode in which only one path can be triggered and MIDI pitches instead repitch the playback
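For the curious, here is a minimal Python sketch of the kind of lookup the Point and Rings modes describe, purely as an illustration: it assumes slices have already been analyzed and placed on a 2D map, and every name and value in it (slice_positions, slice_features, the feature size, etc.) is made up rather than taken from the device.

    import numpy as np

    # Hypothetical data: each analyzed slice gets a position on the 2D map
    # and a feature vector describing its spectrum (purely illustrative).
    rng = np.random.default_rng(0)
    slice_positions = rng.uniform(-1.0, 1.0, size=(200, 2))   # 200 slices on the map
    slice_features = rng.normal(size=(200, 13))                # e.g. 13 coefficients per slice

    def point_lookup(input_features):
        """Point mode with external input: play the slice whose features are
        most similar to the incoming audio's features at this moment."""
        dists = np.linalg.norm(slice_features - input_features, axis=1)
        return int(np.argmin(dists))

    def ring_lookup(center, radius):
        """Rings mode: pick a random slice whose map position falls inside
        the triggered Ring's circular range."""
        dists = np.linalg.norm(slice_positions - np.asarray(center), axis=1)
        candidates = np.flatnonzero(dists <= radius)
        if candidates.size == 0:
            return None
        return int(rng.choice(candidates))

    print(point_lookup(rng.normal(size=13)))                   # index of the most similar slice
    print(ring_lookup(center=(0.2, -0.3), radius=0.25))        # random slice inside a Ring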
Various settings for the network including ones for training and previewing the network:
Four different spectral features to choose from for clustering the sample slices:
Chroma: describes the slice in terms of a 24-step chromatic scale; good for sorting based on tonality
Mel and Bark: these two describe the slice with intensities from low to high frequencies, based on psychoacoustic scales of perceptually equal steps. Each scales the frequencies differently; good for anything you want sorted from low to high frequencies, such as percussion
Speech: uses Mel-Frequency Cepstral Coefficients (MFCCs), which are commonly used for speech recognition; good for sorting vowels, etc.
Option to use only transients for the sample slices fed into the network, or to use every spectral frame
Cluster radius size and various training parameters (a rough sketch of this style of training follows this list)
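As a rough mental model of what "training the network" means here, the sketch below trains a tiny self-organizing map in Python on stand-in feature vectors. The map size, learning rate, radius schedule, and feature dimensionality are invented for the example and are not the device's actual settings.

    import numpy as np

    # Stand-in for the per-slice feature vectors (MFCC, mel, Bark or chroma frames);
    # in the real device these come from the audio analysis stage.
    rng = np.random.default_rng(1)
    features = rng.normal(size=(500, 13))            # 500 slices, 13-dimensional feature each

    # A small self-organizing map: a grid of nodes, each holding a weight vector.
    grid_w, grid_h = 10, 10
    weights = rng.normal(size=(grid_w * grid_h, 13))
    coords = np.array([(x, y) for x in range(grid_w) for y in range(grid_h)], dtype=float)

    def train(features, weights, epochs=20, lr0=0.5, radius0=4.0):
        for epoch in range(epochs):
            lr = lr0 * (1 - epoch / epochs)                    # learning rate decays over training
            radius = max(radius0 * (1 - epoch / epochs), 0.5)  # neighborhood shrinks over training
            for f in rng.permutation(features):
                bmu = np.argmin(np.linalg.norm(weights - f, axis=1))  # best matching unit
                d = np.linalg.norm(coords - coords[bmu], axis=1)
                influence = np.exp(-(d ** 2) / (2 * radius ** 2))     # neighborhood falloff
                weights += lr * influence[:, None] * (f - weights)
        return weights

    weights = train(features, weights)
    # After training, each slice is assigned to its best matching node; nearby nodes
    # hold similar-sounding slices, which is what the 2D visualization displays.
    assignments = np.argmin(
        np.linalg.norm(features[:, None, :] - weights[None, :, :], axis=2), axis=1)
    print(assignments[:10])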
Various sample playback settings:
Standard playback settings such as voices, playback direction, one shot or loop, loop size (can be in time, beats, or by transient length), pitch, and fade window
Various parameters for the different playback modes and sample slice lookup
A phase vocoder playback mode where you can time stretch. It also has a spectral attack and release for cheap blur effects or fading between spectra (a rough sketch of the idea follows this list)
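The phase vocoder itself runs inside the device; the following Python sketch only illustrates the general idea of per-bin attack/release smoothing of STFT magnitudes, which is what produces the "blur / fade between spectra" effect described above. The coefficient values and frame layout are assumptions for the example, not the device's parameters.

    import numpy as np

    def spectral_attack_release(mags, attack=0.2, release=0.9):
        """Per-bin one-pole smoothing of STFT magnitudes over time.
        A slow release smears (blurs) spectra into each other."""
        out = np.zeros_like(mags)
        state = np.zeros(mags.shape[1])
        for t in range(mags.shape[0]):
            frame = mags[t]
            coef = np.where(frame > state, attack, release)  # rise fast, fall slowly
            state = coef * state + (1 - coef) * frame
            out[t] = state
        return out

    # Toy input: 100 STFT frames x 513 bins of random magnitudes.
    rng = np.random.default_rng(2)
    blurred = spectral_attack_release(rng.random((100, 513)))
    print(blurred.shape)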
External audio input routing with various uses:
Option to have the sampler voices triggered by the input transient detector instead of by MIDI notes. This way you can trigger voices of the sampler with an external input!
As mentioned, the input can optionally control which sample slice is playing based on its similarity to the input at any moment. This opens the door to a pseudo style transfer and other effects (Note: the results are not the cleanest or most robust, but they are great for experimentation and can work well if dialed in and handled correctly). Some example uses: beatboxing with your voice to trigger drum sounds, having one sound 'mask' another for a style-transfer-esque effect (using the phase vocoder playback mode), creating a voice-controlled synth, etc.
An envelope follower and pitch detector that can be used for modulation (a rough sketch of the envelope follower and transient triggering follows this list)
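To make the external input routing concrete, here is a small illustrative Python sketch of an envelope follower plus threshold-based transient triggering; the attack, release, and threshold values are arbitrary examples, not the device's settings.

    import numpy as np

    def envelope_follower(x, sr, attack_ms=5.0, release_ms=80.0):
        """Rectify the input and smooth it with separate attack/release times,
        giving a control signal that tracks the input's loudness."""
        atk = np.exp(-1.0 / (sr * attack_ms / 1000.0))
        rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
        env = np.zeros_like(x)
        state = 0.0
        for i, v in enumerate(np.abs(x)):
            coef = atk if v > state else rel
            state = coef * state + (1 - coef) * v
            env[i] = state
        return env

    def transient_triggers(env, threshold=0.3):
        """Fire a trigger wherever the envelope crosses the threshold upward;
        such triggers could start sampler voices instead of MIDI notes."""
        above = env > threshold
        return np.flatnonzero(above[1:] & ~above[:-1]) + 1

    sr = 44100
    t = np.arange(sr) / sr
    x = np.sin(2 * np.pi * 220 * t) * (t % 0.25 < 0.05)   # short bursts every 250 ms
    env = envelope_follower(x, sr)
    print(transient_triggers(env))                        # one trigger per burst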
A modulation system; each modulation source has two mappable destinations:
Two LFOs with Perlin noise options (a rough noise-LFO sketch follows this list)
Two envelopes
Two random spray values created at the beginning of each voice
The routed external audio input's envelope follower and pitch detector
Standard MIDI sources: velocity, key pitch, aftertouch, pitch bend, mod wheel
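As an illustration of the "LFO with Perlin noise" idea, here is a small Python sketch of a smoothed-random LFO. Strictly speaking it is value noise rather than true gradient Perlin noise, and the rate and interpolation choices are made up for the example.

    import numpy as np

    def noise_lfo(n_samples, rate_hz, sr, seed=0):
        """A smoothed-random LFO: random values at regular intervals,
        smoothly interpolated between them."""
        rng = np.random.default_rng(seed)
        period = int(sr / rate_hz)                     # samples between random anchors
        n_anchors = n_samples // period + 2
        anchors = rng.uniform(-1.0, 1.0, n_anchors)
        out = np.zeros(n_samples)
        for i in range(n_samples):
            idx, frac = divmod(i, period)
            frac /= period
            smooth = frac * frac * (3 - 2 * frac)      # smoothstep interpolation
            out[i] = anchors[idx] * (1 - smooth) + anchors[idx + 1] * smooth
        return out

    mod = noise_lfo(n_samples=44100, rate_hz=2.0, sr=44100)   # one second of wandering modulation
    print(mod.min(), mod.max())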
Per-voice filter with a few modes:
Standard simple biquad filter with standard shapes (a generic biquad low-pass sketch follows this list)
Ladder filter mode
Vowel or Formant filter mode with formant options, frequecy shifting, spreading, and bandwidth sloping
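For the simple biquad mode, a generic example of "standard shapes" is the RBJ Audio EQ Cookbook low-pass sketched below in Python; the cutoff and Q values are arbitrary, and this only illustrates the filter family rather than the device's implementation.

    import numpy as np
    from scipy.signal import lfilter

    def biquad_lowpass(sr, freq, q):
        """RBJ 'Audio EQ Cookbook' low-pass biquad coefficients,
        the kind of second-order section behind a simple filter mode."""
        w0 = 2 * np.pi * freq / sr
        alpha = np.sin(w0) / (2 * q)
        cosw0 = np.cos(w0)
        b = np.array([(1 - cosw0) / 2, 1 - cosw0, (1 - cosw0) / 2])
        a = np.array([1 + alpha, -2 * cosw0, 1 - alpha])
        return b / a[0], a / a[0]

    sr = 44100
    rng = np.random.default_rng(3)
    noise = rng.normal(size=sr)                      # one second of white noise
    b, a = biquad_lowpass(sr, freq=800.0, q=0.707)
    filtered = lfilter(b, a, noise)                  # a per-voice filter would run this per note
    print(filtered[:5])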
NOTE OF TRANSPARENCY AND LIMITATIONS:
I do not claim this device to be the most robust neural sample library organizer, so if you try to overload the device by dropping in many hundreds of mid-sized samples, or very long samples, all at once, you will certainly hit limits. That said, it still analyzes and trains samples fast; details below
Dropping in, say, a hundred or two one-shot drum samples will be analyzed and trained by the network fast, but hundreds of loops over 10 seconds in length each may take some time depending on the details
Dropping in individual samples up to about thirty seconds long is fast (short samples analyze instantly, and samples up to thirty seconds take about 1-2 seconds depending on the sample rate); over that you may wait a few seconds or much longer depending on the length. If you drop in a thirty-minute sample you will be waiting a while (sample rate matters too)
An important note: although the device can handle up to 2000 samples, the neural network only has 2500 slots for holding sample slices, so unless the samples are one-shot percussion samples you will fill up the network long before you reach 2000 samples. Therefore this device is meant to work with a handful of samples, or many one-shot samples, rather than an entire library
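As a purely hypothetical back-of-the-envelope illustration of how quickly those 2500 slots can fill when every spectral frame is sent (the hop size below is invented for the example and is not the device's actual analysis setting):

    sample_rate = 44100
    hop_size = 1024                                # hypothetical analysis hop, not the device's value
    frames_per_second = sample_rate / hop_size     # ~43 spectral frames per second
    seconds_to_fill = 2500 / frames_per_second     # ~58 seconds of audio would fill every slot
    print(frames_per_second, seconds_to_fill)

In other words, under an assumption like this, frame-by-frame analysis of only a minute or so of audio could occupy the whole map, while transient-only one-shots use roughly one slot per hit.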
Similarly, the neural network training time will vary a lot depending on how much sample data there is, and whether you are only sending in transients or every spectral frame. Sending in transients only is generally very fast but it depends on how much sample content there is
The good news is that the neural network is saved in the device instance or preset, so if you do train a large amount of sample data, you won't have to train it again. Similarly, for the sample analysis there is an option to save the analysis files, so a preset or long sample can load instantly if its analysis file has been saved
CURRENT ISSUES WITH SAMPLE LOADING:
(1): Protected samples from Ableton Packs cannot be loaded. This is because the Live.Drop object, which is needed to decode protected samples, cannot be used due to the multisample nature of the device.
(2): Samples dropped into Coalescence that were recorded in a never-before-saved Live Set will not be recalled. This is because the path of the Live Set changes after it is saved for the first time, and there is no legitimate way of getting the path of the current Live Set in order to work with paths relatively (a huge flaw in Max For Live). This will also cause issues when changing Live Set names. In the meantime, always save your Live Set for the first time before dropping in samples that were recorded in that Live Set. If you accidentally drop them in first, just drop and swap the samples out after you've saved the set for the first time!
To Install And Use Presets: Drop the entire folder called 'Coalescence' (NOT the folder called 'Coalescence v.x.x.x') into the folder 'ableton/user library/presets/instruments/max instrument'. This is found in the 'Places' section in Live's browser or in your Finder/file browser. If you are going through Finder (Mac), the Ableton folder is typically in your 'Music' folder. If you are going through your file browser (Windows), it is typically in the 'My Music' folder. The path to the .amxd file should be: 'ableton/user library/presets/instruments/max instrument/Coalescence/Coalescence.amxd'
NOTE: There are two issues with certain sample loading situations at the moment; see the 'CURRENT ISSUES WITH SAMPLE LOADING' section above for more info.