Part II

Chapter 5 - Additive Synthesis

In 1822, the French mathematician Jean-Baptiste Joseph Fourier published a theory stating that any periodic signal can be described as a sum of pure sine waves. This is a very important statement for computer music. It means that we can recreate any sound we hear by adding a number of sine waves together, each with its own frequency, phase and amplitude. This was obviously a costly technique in the days of modular synthesis, as one would have to patch up multiple oscillators to get the desired sound. This has changed with digital sound, where innumerable oscillators can be added together with little cost. Here is a demonstration:

// we add 500 oscillators together and the CPU is less than 20% 
{({SinOsc.ar(4444.4.rand, 0, 0.005)}!500).sum}.play

Adding waves

Adding waves together seems simple, and indeed it is. By using the plus operator we can add two signals together: at each point in time their values sum to a combined value. In the following images we can see how simple sinusoidal waves add up:

Adding two waves of 440Hz together
{[SinOsc.ar(440), SinOsc.ar(440), SinOsc.ar(440)+SinOsc.ar(440)]}.plot
// try this as well
{a = SinOsc.ar(440, 0, 0.5); [a, a, a+a]}.plot
Adding a 440Hz and a 220Hz wave together
{[SinOsc.ar(440), SinOsc.ar(220), SinOsc.ar(440)+SinOsc.ar(220)]}.plot
Adding two 440Hz waves together, but one with inverted phase
{[SinOsc.ar(440), SinOsc.ar(440, pi), SinOsc.ar(440)+SinOsc.ar(440, pi)]}.plot

You can see that two waves of the same frequency added together become a wave with twice the amplitude. When two waves with an amplitude of 1 are added together we get an amplitude of 2, and in the graph the wave is clipped at 1. This clipping causes distortion, but it also results in a different wave form: heavy clipping turns a sine into something close to a square wave. You can explore this by giving a sine wave an amplitude of 10 and then clipping the signal at, say, -0.75 and 0.75.

{SinOsc.ar(440, 0, 10).clip(-0.75, 0.75)}.scope

Most instrumental sounds can be roughly described as a combination of sine waves. Those sinusoidal waves are called partials (the horizontal lines you see in a spectrogram when you analyse a sound). In the example below we mix ten sine waves with frequencies between 200 and 2000 Hz. You may well detect a pitch if you run the example many times, but since these are random frequencies they do not necessarily line up to give us a solid pitch.

{Mix.fill(10, {SinOsc.ar(rrand(200,2000), 0, 0.1)})}.freqscope
{Mix.fill(10, {SinOsc.ar(rrand(200,2000), 0, 0.1)})}.spectrogram // requires the Spectrogram quark

In harmonic sounds, like those of the piano, guitar, or violin, the partials are whole-number multiples of the fundamental (the lowest) partial. Such whole-number-multiple partials are called harmonics. The harmonics can vary in amplitude, phase, envelope form, and duration. A saw wave is a waveform in which all the harmonics are present, decreasing in amplitude as the harmonic number goes up:

{Saw.ar(880)}.freqscope

It is recommended that you play with adding waves together in various ways. Explore what happens when you add harmonics together (integer multiples of a fundamental frequency):

// adding two waves - the second is the octave (second harmonic) of the first
{(SinOsc.ar(440,0, 0.4) + SinOsc.ar(880, 0, 0.4))!2}.play
// here we add four harmonics (of equal amplitude) together
(
{	
var freq = 200;
SinOsc.ar(freq, 0, 0.2)   + 
SinOsc.ar(freq*2, 0, 0.2) +
SinOsc.ar(freq*3, 0, 0.2) + 
SinOsc.ar(freq*4, 0, 0.2) 
!2}.play
)

The harmonic series is something we all know intuitively and have heard many times (swing a flexible tube around your head and you will get a sound in the harmonic series). The Blip UGen in SuperCollider allows you to dynamically control the number of harmonics of equal amplitude:

{Blip.ar(440, MouseX.kr(1, 20))}.scope // using the Mouse
{Blip.ar(440, MouseX.kr(1, 20))}.freqscope
{Blip.ar(440, Line.kr(1, 22, 3) )}.play

Creating wave forms out of sinusoids

In SuperCollider you can create all kinds of wave forms out of a combination of sine waves. By adding SinOscs together, you can arrive at your own unique wave forms to use in your synths. In this section we will look at how additive synthesis can produce diverse wave forms.

// a) here is an array with 5 items:
Array.fill(5, {arg i; i.postln;});
// b) this is the same as (using a shortcut):
{arg i; i.postln;}.dup(5)
// c) or simply (using another shortcut):
{arg i; i.postln;}!5

// d) we can then sum the items in the array (add them together):
Array.fill(5, {arg i; i.postln;}).sum;
// e) we could do it this way as well:
sum({arg i; i.postln;}.dup(5));
// f) or this way:
({arg i; i.postln;}.dup(5)).sum;
// g) or this way:
({arg i; i.postln;}!5).sum;
// h) or simply this way:
sum({arg i; i.postln;}!5);

Above we created a Saw wave which contains harmonics up to the [Nyquist rate](http://en.wikipedia.org/wiki/Nyquist_rate), which is half of the sample rate SuperCollider is running at. The Saw UGen is “band-limited”, which means that it does not alias and mirror back into the audible range. (Compare with LFSaw, which will alias - you can both hear and see the harmonics mirror back into the audio range.)

{Saw.ar(MouseX.kr(100, 1000))}.freqscope
{LFSaw.ar(MouseX.kr(100, 1000))}.freqscope

We can now try to create a saw wave out of sine waves. There is a simple algorithm for this, where each partial is an integer multiple of the fundamental frequency and decreases in amplitude by the reciprocal of the partial's/harmonic's number (1/harmnum).

A ‘Saw’ wave with 30 harmonics:

(
f = {
        ({arg i;
                var j = i + 1;
                SinOsc.ar(300 * j, 0,  j.reciprocal * 0.5);
        } ! 30).sum // we sum the array of 30 oscillators
!2}; // and we make it a stereo signal
)

f.plot; // let's plot the wave form
f.play; // listen to it
f.freqscope; // view and listen to it

By inverting the phase (using pi), we get an inverted wave form.

(
f = {
        Array.fill(30, {arg i;
                var j = i + 1;
                SinOsc.ar(300 * j, pi,  j.reciprocal * 0.5) // note pi
        }).sum // we sum the array of 30 oscillators
!2}; // and we make it a stereo signal
)

f.plot; // let's plot the wave form
f.play; // listen to it
f.freqscope; // view and listen to it

A square wave is a type of pulse wave (if the length of the on time of the pulse is equal to the length of the off time - a 1:1, or 50%, duty cycle - then the pulse wave may also be called a square wave). The square wave can be created from sine waves if we skip all the even harmonics and only add the odd ones (with 1/n amplitudes, as in the code below).

(
f = {
        ({arg i;
                var j = i * 2 + 1; // the odd harmonics (1,3,5,7,etc)
                SinOsc.ar(300 * j, 0, 1/j)
        } ! 20).sum;
};
)

f.plot;
f.play;
f.freqscope;

Let’s quickly look at the regular Pulse wave in SuperCollider:

{ Pulse.ar(440, MouseX.kr(0, 1), 0.5) }.scope;
// we could also recreate this with an algorithm on a sine wave:
{ if( SinOsc.ar(122)>0 , 1, -1 )  }.scope; // a square wave
{ if( SinOsc.ar(122)>MouseX.kr(0, 1) , 1, -1 )  }.scope; // MouseX controls the pulse width (duty cycle)
{ if( SinOsc.ar(122)>MouseX.kr(0, 1) , 1, -1 ) * 0.1 }.scope; // amplitude down

A triangle wave is similar to the square wave in that it skips the even harmonics, but it uses a different recipe for the phase and the amplitude of each harmonic (cosine phase, and amplitudes falling with the square of the harmonic number):

(
f = {
        ({arg i;
                var j = i * 2 + 1;
                SinOsc.ar(300 * j, pi/2, 0.7/j.squared) // cosine wave (phase shift)
        } ! 20).sum;
};
)
f.plot;
f.play;
f.freqscope;

We have now created various wave forms using sine waves, and here is how to wrap them up in a SynthDef for future use:

SynthDef(\triwave, {arg freq=400, pan=0, amp=1;
	var wave;
	wave = ({arg i;
                	var j = i * 2 + 1;
                	SinOsc.ar(freq * j, pi/2, 0.6 / j.squared);
        	} ! 20).sum;
	Out.ar(0, Pan2.ar(wave * amp, pan));
}).add;

a = Synth(\triwave, [\freq, 300]);
a.set(\amp, 0.3, \pan, -1);
b = Synth(\triwave, [\freq, 900]);
b.set(\amp, 0.4, \pan, 1);
s.freqscope; // if the freqscope is not already running
b.set(\freq, 1400); // not band limited: the upper partials alias back, as we can see 

We have created various typical wave forms above in order to show how they are sums of sinusoidal waves. A good idea is to play with this further and create your own waveforms:

(
f = {
        ({arg i;
                var j = i * 2.cubed + 1;
                SinOsc.ar(MouseX.kr(20,800) * j, 0, 1/j)
        } ! 20).sum;
};
)
f.plot;
f.play;
(
f = {
        ({arg i;
                var j = i * 2.squared.distort + 1;
                SinOsc.ar(MouseX.kr(20,800) * j, 0, 0.31/j)
        } ! 20).sum;
};
)
f.plot;
f.play;

Bell Synthesis

Not all sounds are harmonic. Many musical instruments are inharmonic, for example timpani drums, xylophones, and bells. Here the partials of the sound are not in a harmonic relationship (i.e., they are not whole-number multiples of a fundamental frequency). This does not mean that we can’t detect pitch, as certain partials will have stronger amplitude and longer duration than others. Since we know bells are inharmonic, the first thing we might try is to generate a sound with, say, 15 partials:

{ ({ SinOsc.ar(rrand(80, 800), 0, 0.1)} ! 15).sum }.play

Try to run this a few times. What we hear is a wave form that might be quite similar to a bell at first, but the resemblance quickly disappears, because the partials do not fade out. If we add an envelope to each of these sinusoids, we get a different sound:

{
Mix.fill( 10, { 	
	SinOsc.ar(rrand(200, 700), 0, 0.1) 
	* EnvGen.ar(Env.perc(0.0001, rrand(2, 6))) 
});
}.play

Above we are using Mix.fill instead of creating an array with ! and then .summing it. These two examples do the same thing, but it is good for the student of SuperCollider to learn different ways of reading and writing code.
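
For comparison, here is the example above written both ways; the two lines produce the same kind of summed signal:

{ ({ SinOsc.ar(rrand(200, 700), 0, 0.1) * EnvGen.ar(Env.perc(0.0001, rrand(2, 6))) } ! 10).sum }.play;
{ Mix.fill(10, { SinOsc.ar(rrand(200, 700), 0, 0.1) * EnvGen.ar(Env.perc(0.0001, rrand(2, 6))) }) }.play;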

You will note that there is a “new” bell every time we run the code above. But what if we wanted the “same” bell? One way to do that is to “hard-code” the frequencies, durations, and amplitudes of the bell.

{
var freq = [333, 412, 477, 567, 676, 890, 900, 994];
var dur = [4, 3.5, 3.6, 3.1, 2, 1.4, 2.4, 4.1];
var amp = [0.4, 0.2, 0.1, 0.4, 0.33, 0.22, 0.13, 0.4];
Mix.fill( 8, { arg i;
	SinOsc.ar(freq[i], 0, amp[i]) 
	* EnvGen.ar(Env.perc(0.0001, dur[i])) 
});
}.play

Generating a SynthDef using a non-deterministic algorithm (such as random) in the SC-lang will also result in a SynthDef that plays the “same” bell every time. Why? Because the random values (430.rand) are fixed when the synth definition is compiled. Recompile the SynthDef and you get a new sound:

(
SynthDef(\mybell, {arg freq=333, amp=0.4, dur=2, pan=0.0;
	var signal;
	signal = Mix.fill(10, {
		SinOsc.ar(freq+(430.rand), 1.0.rand, 10.reciprocal) 
		* EnvGen.ar(Env.perc(0.0001, dur), doneAction:2) }) ;
	signal = Pan2.ar(signal * amp, pan);
	Out.ar(0, signal);
}).add
)
// let's try our bell
Synth(\mybell) // same sound all the time
Synth(\mybell, [\freq, 444+(400.rand)]) // new frequency, but same sound
// try to redefine the SynthDef above and you will now get a different bell:
Synth(\mybell) // same sound all the time

Another way of generating this bell sound would be to use the SynthDef from the last tutorial, here adding a duration argument for the envelope:

(
SynthDef(\sine, {arg freq=333, amp=0.4, dur=2, pan=0.0;
	var signal, env;
	env = EnvGen.ar(Env.perc(0.01, dur), doneAction:2);
	signal = SinOsc.ar(freq, 0, amp) * env;
	signal = Pan2.ar(signal, pan);
	Out.ar(0, signal);
}).add
);

(
var numberOfSynths;
numberOfSynths = 15;
Array.fill(numberOfSynths, {
	Synth(\sine, [	
		\freq, 300+(430.rand),
		\amp, numberOfSynths.reciprocal, // reciprocal here means 1/numberOfSynths
		\dur, 2+(1.0.rand)]);
});
)

The power of this style is that you can define all the parameters of the sound from the language, for example when sonifying complex information from gestural or other data.

The Klang UGen

Another interesting way of achieving this is to use the Klang UGen. Klang is a bank of sine oscillators that takes arrays of frequencies, amplitudes, and phases as arguments.

{Klang.ar(`[ [430, 810, 1050, 1220], [0.23, 0.13, 0.23, 0.13], [pi,pi,pi, pi]], 1, 0)}.play

And we create a SynthDef with the Klang UGen:

(
SynthDef(\saklangbell, {arg freq=400, amp=0.4, dur=2, pan=0.0; // we add a new argument
	var signal, env;
	env = EnvGen.ar(Env.perc(0.01, dur), doneAction:2); // doneAction gets rid of the synth
	signal = Klang.ar(`[freq * [1.2,2.1,3.0,4.3], [0.25, 0.25, 0.25, 0.25], nil]) * env;
	signal = Pan2.ar(signal, pan);
	Out.ar(0, signal);
}).add
)
Synth(\saklangbell, [\freq, 100])

Xylophone Synthesis

Additive synthesis is good for various types of sound, but it suits xylophones, bells and other metallic instruments (typically inharmonic sounds) particularly well, as we saw with the bell example above. Harmonic wave forms such as the saw, square, or triangle waves from the section above are less useful here, precisely because their partials are whole-number multiples of a fundamental.

In additive synthesis, people often analyse the sound they are trying to synthesise by generating a spectrogram of its frequencies.

A spectrogram of a xylophone sound

The information the spectrogram gives us is three dimensional. It shows us the frequencies present in the sound on the vertical y-axis, time on the horizontal x-axis, and amplitude as color (which we could imagine as the z-axis). We see that the partials don’t all have the same type of envelope: some have a strong attack, others come in smoothly; some have much amplitude, others less; some have a long duration whilst others are shorter; and some of them vibrate in frequency. These parameters can mix: a loud partial could die out quickly while a soft one can live for a long time.

{ ({ SinOsc.ar(rrand(180, 1200), 0.5*pi, 0.1) // the partial
		*
	// each partial gets its own envelope of 0.5 to 5 seconds
	EnvGen.ar(Env.perc(rrand(0.00001, 0.01), rrand(0.5, 5)))
} ! 12).sum }.play

Analysing the bell above we can detect the following partials:

* partial 1: xxx Hz, x sec. long, with amplitude of ca. x
* partial 2: xxx Hz, x sec. long, with amplitude of ca. x
* partial 3: xxx Hz, x sec. long, with amplitude of ca. x
* partial 4: xxx Hz, x sec. long, with amplitude of ca. x
* partial 5: xxx Hz, x sec. long, with amplitude of ca. x
* partial 6: xxx Hz, x sec. long, with amplitude of ca. x
* partial 7: xxx Hz, x sec. long, with amplitude of ca. x

We can now try to synthesize those partials:

{ SinOsc.ar(xxx, 0, 0.1)+
SinOsc.ar(xxx, 0, 0.1)+
SinOsc.ar(xxx, 0, 0.1)+
SinOsc.ar(xxx, 0, 0.1)+
SinOsc.ar(xxx, 0, 0.1)+
SinOsc.ar(xxx, 0, 0.1)
}.play

And we get a decent inharmonic sound (inharmonic meaning that the partials are not whole-number multiples of a fundamental frequency). We now need to set the right amplitudes as well. We are still guessing from the spectrogram we made, but more importantly we should be using our ears.

{ SinOsc.ar(xxx, 0, xxx)+
SinOsc.ar(xxx, 0, xxx)+
SinOsc.ar(xxx, 0, xxx)+
SinOsc.ar(xxx, 0, 0.1)+
SinOsc.ar(xxx, 0, 0.1)+
SinOsc.ar(xxx, 0, 0.1)
}.play

Some of the partials have a bit of vibration in them, and we can simply turn an oscillator into a ‘detuned’ oscillator by adding two slightly mistuned sines together:

// a regular 880 Hz wave at full amplitude
{SinOsc.ar(880)!2}.play
// a beating 880Hz wave (beating at 3 Hz), where each sine has amp 0.5
{SinOsc.ar([880, 883], 0, 0.5).sum!2}.play
// the above is the same as (note the .sum):
{(SinOsc.ar(880, 0, 0.5)+SinOsc.ar(883, 0, 0.5))!2}.play
{ SinOsc.ar([xxx, xxx], 0, xxx).sum+
SinOsc.ar([xxx, xxx], 0, xxx).sum+
SinOsc.ar([xxx, xxx], 0, xxx).sum+
SinOsc.ar([xxx, xxx], 0, xxx).sum+
SinOsc.ar([xxx, xxx], 0, xxx).sum+
SinOsc.ar([xxx, xxx], 0, xxx).sum
}.play

And finally, we need to create envelopes for each of the partials:

{ (SinOsc.ar([xxx, xxx], 0, xxx).sum *
EnvGen.ar(Env.perc(0.00001, xxx))) +
 (SinOsc.ar([xxx, xxx], 0, xxx).sum *
EnvGen.ar(Env.perc(0.00001, xxx))) +
 (SinOsc.ar([xxx, xxx], 0, xxx).sum *
EnvGen.ar(Env.perc(0.00001, xxx))) +
 (SinOsc.ar([xxx, xxx], 0, xxx).sum *
EnvGen.ar(Env.perc(0.00001, xxx))) +
 (SinOsc.ar([xxx, xxx], 0, xxx).sum *
EnvGen.ar(Env.perc(0.00001, xxx)))
}.play

And let’s listen to that. You will note that parentheses have been put around each sine wave and its envelope multiplication. This is because SuperCollider calculates binary operators from left to right, and does not give * and / precedence over + and -, as in common maths and many other programming languages.

TIP: Operator Precedence - explore how these equations result in different outcomes

2+2*8 // you would expect 18 as the result, but SC returns what?
100/2-10 // here you would expect to get 40, and you get the same in SC. Why?
// now, for this reason it's a good practice to use parenthesis, e.g.,
2+(2*8)
100/(2-10) // if that's what you were trying to do

We have now created a reasonable representation of the bell sound that we listened to. The next thing to do is to turn it into a synth definition and make it stereo. Note that we add a general envelope with a doneAction: 2, which removes the synth from the server when it has finished playing.

SynthDef(\bell, xxxx

// and we can play our new bell
Synth(\bell)

This bell has a specific frequency, but it would be nice to be able to pass a new frequency as a parameter. This could be done in many ways; one would be to pass the frequencies of each of the oscillators as arguments to the Synth. That would make the instrument quite flexible, but on the other hand it would weaken its unique character (since so many more types of bell sounds - with their respective harmonic relationships - could then be made with it). So here we decide to keep the same ratios between the partials, preserving this unique bell sound whilst letting it change in pitch. We find the ratios by dividing all the partial frequencies by the lowest frequency.

[xxx, xxx2, xxx3, xxx4]/xxx
// which gives us this array:
[xxxxxxxxxxxxxxxxxxxxxxxxx]
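
As a worked illustration, using the hard-coded bell frequencies from earlier (your own analysed values will differ):

[333, 412, 477, 567, 676, 890, 900, 994]/333
// which gives approximately:
// [1, 1.237, 1.432, 1.703, 2.03, 2.673, 2.703, 2.985]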

We can now use those ratios in our synth definition

SynthDef(\bell, xxxx
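
Here is one possible sketch of such a SynthDef, assuming the partial data of the hard-coded bell from earlier (the ratios are the ones just calculated; a bell you analyse yourself would use its own values):

(
SynthDef(\bell, {arg freq=333, amp=0.4, pan=0;
	var ratios = [1, 1.237, 1.432, 1.703, 2.03, 2.673, 2.703, 2.985]; // partial ratios
	var durs = [4, 3.5, 3.6, 3.1, 2, 1.4, 2.4, 4.1];
	var amps = [0.4, 0.2, 0.1, 0.4, 0.33, 0.22, 0.13, 0.4];
	var signal = Mix.fill(ratios.size, {arg i;
		SinOsc.ar(freq * ratios[i], 0, amps[i] * 0.1)
		* EnvGen.ar(Env.perc(0.0001, durs[i]))
	});
	// a general envelope with doneAction: 2 frees the synth when it is done
	signal = signal * EnvGen.ar(Env.linen(0, durs.maxItem, 0), doneAction: 2);
	Out.ar(0, Pan2.ar(signal * amp, pan)); // Pan2 makes it stereo
}).add;
)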

// and we can play the bell with different frequencies
Synth(\bell, [\freq, 440])
Synth(\bell, [\freq, 220])
Synth(\bell, [\freq, 590])
Synth(\bell, [\freq, 1000.rand])

Harmonics GUI

Below you find a Graphical User Interface that allows you to control the harmonics of a fundamental frequency (the slider on the right controls the fundamental frequency). Here we are also introduced to the Osc UGen, a wavetable oscillator that reads its samples from a waveform stored in a buffer.
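
Before the full patch below, here is a minimal sketch of the Osc idea (the buffer size and harmonic amplitudes are arbitrary choices):

b = Buffer.alloc(s, 4096, 1); // allocate a wavetable buffer
b.sine1([1, 0.5, 0.25], true, true, true); // fill it with three harmonics, in wavetable format
{ Osc.ar(b, 220, 0, 0.2) ! 2 }.play;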

// we create a SynthDef
SynthDef(\oscsynth, { arg bufnum, freq = 440, ts = 1; 
	var sig;
	sig = Osc.ar(bufnum, freq, 0, 0.2) * EnvGen.ar(Env.perc(0.01), timeScale: ts, doneAction: 2);
	Out.ar(0, sig ! 2);
}).add;

// and then we fill the buffer with our waveform and generate the GUI 
(
var bufsize, ms, slid, cspec, freq;
var harmonics;

freq = 220;
bufsize=4096; 
harmonics=20;

b=Buffer.alloc(s, bufsize, 1);

x = Synth(\oscsynth, [\bufnum, b.bufnum, \ts, 0.1]);

// GUI :
w = Window("harmonics", Rect(200, 470, 20*harmonics+140, 150)).front;
ms = MultiSliderView(w, Rect(20, 20, 20*harmonics, 100));
ms.value_(Array.fill(harmonics,0.0));
ms.isFilled_(true);
ms.valueThumbSize_(1.0);
ms.canFocus_(false);
ms.indexThumbSize_(10.0);
ms.strokeColor_(Color.blue);
ms.fillColor_(Color.blue(alpha: 0.2));
ms.gap_(10);
ms.action_({ b.sine1(ms.value, false, true, true) }); // set the harmonics
slid = Slider(w, Rect(20*harmonics+30, 20, 20, 100));
cspec= ControlSpec(70,1000, 'exponential', 10, 440);
slid.action_({	
	freq = cspec.map(slid.value); 	
	[\frequency, freq].postln;
	x.set(\freq, cspec.map(slid.value)); 
	});
slid.value_(0.3); 
slid.action.value;
Button(w, Rect(20*harmonics+60, 20, 70, 20))
	.states_([["Plot",Color.black,Color.clear]])
	.action_({	a = b.plot });
Button(w, Rect(20*harmonics+60, 44, 70, 20))
	.states_([["Start",Color.black,Color.clear], ["Stop!",Color.black,Color.clear]])
	.action_({arg sl;
		if(sl.value ==1, {
			x = Synth(\oscsynth, [\bufnum, b.bufnum, \freq, freq, \ts, 1000]);
			},{x.free;});
	});	
Button(w, Rect(20*harmonics+60, 68, 70, 20))
	.states_([["Play",Color.black,Color.clear]])
	.action_({
		Synth(\oscsynth, [\bufnum, b.bufnum, \freq, freq, \ts, 0.1]);
	});	
Button(w, Rect(20*harmonics+60, 94, 70, 20))
	.states_([["Play rand",Color.black,Color.clear]])
	.action_({
		Synth(\oscsynth, [\bufnum, b.bufnum, \freq, rrand(20,100)+50, \ts, 0.1]);
	});	
)

The “Play” and “Play rand” buttons on the interface allow you to trigger notes repeatedly whilst changing the harmonic energy of the sound. Can you synthesise a clarinet or an oboe this way? Can you find the sound of a trumpet? You can get close, but of course each of the harmonics would ideally have its own envelope and amplitude (as we saw in the xylophone synthesis above).

Some Additive SynthDefs with routines playing them

The examples above might have raised the question whether all the parameters of the synth could be set from the outside as arguments passed to the synth in the form of arrays. This is possible, of course, but it requires that the arrays are created as inputs when the SynthDef is compiled. In the example below, the partials and the amplitudes of 15 oscillators are set on compilation as the default arguments in respective arrays.

Note the # in front of the arrays in the arguments. It means that they are literal (fixed size) arrays.

(
SynthDef(\addSynthArray, { arg freq=300, dur=0.5, mul=100, addDiv=8, partials = #[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], amps = #[0.30, 0.15, 0.10, 0.07, 0.06, 0.05, 0.04, 0.03, 0.03, 0.03, 0.02, 0.02, 0.02, 0.02, 0.02]; 	
	var signal, env;
	env = EnvGen.ar(Env.perc(0.01, dur), doneAction: 2);
	signal = Mix.arFill(partials.size, {arg i;
				SinOsc.ar(
					freq * partials[i], 
					0,
					amps[i]	
				)});
	
	Out.ar(0, signal.dup * env)
	}).add
)

// a saw-like sound with 15 harmonics 
Synth(\addSynthArray, [\freq, 200])
Synth(\addSynthArray, [\freq, 300])
Synth(\addSynthArray, [\freq, 400])

This is because the synths are using the default arguments of the SynthDef. Let’s try to pass a partials array:

Synth(\addSynthArray, [\freq, 400, \partials, {|i| (i+1)+rrand(-0.2, 0.2)}!15])

What happened here? Let’s scrutinize the partials argument.

{|i| (i+1)+rrand(-0.2, 0.2)}!15
breaks down to
{|i|i}!15
or 
{arg i; i } ! 15
// but we don't want a frequency of zero, so we add 1
{|i| (i+1) }!15
// and then we add random values from -0.2 to 0.2
{|i| (i+1) + rrand(-0.2, 0.2) }!15
// resulting in frequencies such as 
{|i| (i+1) + rrand(-0.2, 0.2) * 440 }!15

We can now create a piece that sets new partial frequencies and their amplitudes on every note. As mentioned above, these could be carefully decided, or simply chosen randomly. If it is completely random, it might be worth looking into the Rand UGens, as they allow for a random value to be generated within every synth.
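
As a brief sketch of that idea (the SynthDef name here is made up): Rand picks its value on the server each time a synth is instantiated, so a single compiled SynthDef can produce a new spectrum on every note:

(
SynthDef(\randBell, {arg amp=0.5, dur=3;
	var signal = Mix.fill(15, {
		// Rand generates a new value for each synth instance, not at SynthDef compile time
		SinOsc.ar(Rand(100, 2000), 0, 15.reciprocal)
		* EnvGen.ar(Env.perc(0.0001, Rand(0.5, 1) * dur))
	});
	Out.ar(0, signal ! 2 * amp * EnvGen.ar(Env.linen(0, dur, 0), doneAction: 2));
}).add;
)
Synth(\randBell); // a different spectrum every time, unlike \mybell above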

// test the routine here below: uncomment and comment the variables f and a
(
fork {  // fork is basically a Routine
        100.do({
        		// partial frequencies:
         		// f = Array.fill(15, {arg i; i=i+1; i}).postln; // harmonic spectra (saw wave)
         		f = Array.fill(15, {10.0.rand}); // inharmonic spectra (a bell?)
         		// partial amplitudes:
         		// a = Array.fill(15, {arg i; i=i+1; 1/i;}).normalizeSum.postln; // saw wave amps
         		a = Array.fill(15, {1.0.rand}).normalizeSum.postln; // random amp on each harmonic
         	  	Synth(\addSynthArray, [\partials, f, \amps, a]);
            		1.wait;
        });
      }  
)
(
n = rrand(10, 15);
{ Mix.fill(n, { 
		SinOsc.ar( [67.0.rrand(2000), 67.0.rrand(2000)], 0, n.reciprocal)
		*
		EnvGen.kr(Env.sine(rrand(2.0, 10) ) )
	}) * EnvGen.kr(Env.perc(11, 6), doneAction: 2, levelScale: 0.75)
}.play;
)

fork {  // fork is basically a Routine
        100.do({
		n = rrand(10, 45);
		"Number of UGens: ".post; n.postln;
		{ Mix.fill(n , { 
			SinOsc.ar( [67.0.rrand(2000), 67.0.rrand(2000)], 0, n.reciprocal)
			*
			EnvGen.kr(Env.sine(rrand(4.0, 10) ) )
		}) * EnvGen.kr(Env.perc(11, 6), doneAction: 2, levelScale: 0.75)
		}.play;
		rrand(5, 10).wait;
		})
}

Using Control to set multiple parameters

There is another way to store and control arrays within a SynthDef: using the Control class. Controls are good for passing arrays into running Synths. In order to do this we use the Control UGen inside our SynthDef.

SynthDef("manySines", {arg out=0;
	var sines, control, numsines;
	numsines = 20;
	control = Control.names(\array).kr(Array.rand(numsines, 400.0, 1000.0));
	sines = Mix(SinOsc.ar(control, 0, numsines.reciprocal)) ;
	Out.ar(out, sines ! 2);
}).add;

Here we make an array of 20 frequency values inside a Control and pass this array to the SinOsc UGen, which performs a “multichannel expansion,” i.e., it creates a sine wave for each of the 20 frequencies. (If you sent them straight out on a sound card with 20 channels, you’d get a sine out of each channel.) But here we mix the sines into one signal. Finally, in the Out UGen we use “! 2”, a multichannel expansion trick that makes this a 2 channel signal (we could have used signal.dup).
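
To see that multichannel expansion in isolation, a trivial sketch:

{ SinOsc.ar([440, 550, 660], 0, 0.1) }.play; // three channels, one per adjacent bus
{ Mix(SinOsc.ar([440, 550, 660], 0, 0.1)) ! 2 }.play; // mixed down, then duplicated to stereo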

b = Synth("manySines");

And here below we can change the frequencies of the Control:

// our control name is "array"
b.setn(\array, Array.rand(20, 200, 1600)); 
b.setn(\array, {rrand(200, 1600)}!20); 
b.setn(\array, {rrand(200, 1600)}.dup(20));
// NOTE: All three lines above do exactly the same, just different syntax

Here below we use DynKlang (dynamic Klang) in order to change the synth at runtime:

(
SynthDef(\dynklang, { arg out=0, freq=110;
	var klank, n, harm, amp;
	n = 9;
	// harmonics
	harm = Control.names(\harm).kr(Array.series(4,1,4));
	// amplitudes
	amp = Control.names(\amp).kr(Array.fill(4,0.05));
	klank = DynKlang.ar(`[harm,amp], freqscale: freq);
	Out.ar(out, klank);
}).add;
)

a = Synth(\dynklang, [\freq, 230]);

a.set(\harm,  Array.rand(4, 1.0, 4.7))
a.set(\freq, rrand(30, 120))
a.set(\amp, Array.rand(4, 0.005, 0.1))

Klang and DynKlang

It can be laborious to build an array of synths and set the frequencies and amplitudes of each. For this we have a UGen called Klang. Klang is a bank of sine oscillators. It is more efficient than the DynKlang, but less flexible. (Don’t confuse with Klank and DynKlank which we will explore in the next chapter).

// bank of 12 oscillators of frequencies between 600 and 1000
{ Klang.ar(`[ Array.rand(12, 600.0, 1000.0), nil, nil ], 1, 0) * 0.05 }.play;
// here we create synths every 2 seconds
(
{
loop({
	{ Pan2.ar( 
		Klang.ar(`[ Array.rand(12, 200.0, 2000.0), nil, nil ], 0.5, 0)
		* EnvGen.kr(Env.sine(4), 1, 0.02, doneAction: 2), 1.0.rand2) 	
	}.play;
	2.wait;
})
}.fork;
)

Klang cannot receive updates to its frequencies, nor can it be modulated. For that we use DynKlang (Dynamic Klang).

(
{ 
	DynKlang.ar(`[ 
		[800, 1000, 1200] + SinOsc.kr([2, 3, 0.2], 0, [130, 240, 1200]),
		[0.6, 0.4, 0.3],
		[pi,pi,pi]
	]) * 0.1
}.freqscope;
)

// amplitude modulation
(
{ 
	DynKlang.ar(`[ 
		[800, 1600, 2400, 3200],
		[0.1, 0.1, 0.1, 0.1] + SinOsc.kr([0.1, 0.3, 0.8, 0.05], 0, [1, 0.8, 0.8, 0.6]),
		[pi, pi, pi, pi]
	]) * 0.1
}.freqscope;
)

The following patch shows how a GUI can be used to control the amplitudes of the DynKlang oscillator array:

(	// create controls directly with literal arrays:
SynthDef(\dynsynth, {| freqs = #[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0], 
	amps = #[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0], 
	rings = #[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]|
	Out.ar(0, DynKlang.ar(`[freqs, amps, rings]))
}).add
)

(
var bufsize, ms, slid, cspec, rate;
var harmonics = 20;
GUI.qt;

x = Synth(\dynsynth).setn(
				\freqs, Array.fill(harmonics, {|i| 110*(i+1)}), 
				\amps, Array.fill(harmonics, {0})
				);

// GUI :
w = Window("harmonics", Rect(200, 470, 20*harmonics+40,140)).front;
ms = MultiSliderView(w, Rect(20, 10, 20*harmonics, 110));
ms.value_(Array.fill(harmonics,0.0));
ms.isFilled_(true);
ms.indexThumbSize_(10.0);
ms.strokeColor_(Color.blue);
ms.fillColor_(Color.blue(alpha: 0.2));
ms.gap_(10);
ms.action_({
	x.setn(\amps, ms.value*harmonics.reciprocal);
}); 
)

Chapter 6 - Subtractive Synthesis

The last chapter discussed additive synthesis, where the idea is to start with silence and add partials together to arrive at the sound we are after. Subtractive synthesis works the opposite way: we start with a rich sound - a broadband signal either rich in partials/harmonics or noise - and then filter the unwanted frequencies out. WhiteNoise and Saw waves are typical sound sources, as noise has equal energy at all frequencies, whilst the saw wave has a natural-sounding harmonic structure with energy at every harmonic.

Noise Sources

Noise can be defined as an aperiodic signal, i.e., a signal with no periodic repetition of any form. If there were such repetition, we would talk about a wave form and a frequency of those repetitions, and that frequency becomes pitch or musical notes. Not so in the world of noise: there are no repetitions that we can detect, and thus we perceive it as the opposite of a signal; the antithesis of meaning. Most of us remember the white noise of a dead analogue TV channel. Although noise might have negative connotations for some, it is a very useful musical element, in particular as a rich input signal for synthesis.

// WhiteNoise
{WhiteNoise.ar(0.4)}.plot(1)
{WhiteNoise.ar(0.4)}.play
{WhiteNoise.ar(0.4)}.scope
{WhiteNoise.ar(0.4)}.freqscope

// PinkNoise 
{PinkNoise.ar(1)}.plot(1)
{PinkNoise.ar(1)}.play
{PinkNoise.ar(1)}.freqscope

// BrownNoise
{BrownNoise.ar(1)}.plot(1)
{BrownNoise.ar(1)}.play
{BrownNoise.ar(1)}.freqscope

// Take a look at the source file called Noise.sc (or hit Cmd+I/Ctrl+I on WhiteNoise to look up its implementation)
// You will find lots of interesting noise generators. For example these:

{ Crackle.ar(XLine.kr(0.99, 2, 10), 0.4) }.freqscope.scope;

{ LFDNoise0.ar(XLine.kr(1000, 20000, 10), 0.1) }.freqscope.scope;

{ LFClipNoise.ar(XLine.kr(1000, 20000, 10), 0.1) }.freqscope.scope;

// Impulse
{ Impulse.ar(80, 0.7) }.play
{ Impulse.ar(4, 0.7) }.play

// Dust (random impulses)
{ Dust.ar(80) }.play
{ Dust.ar(4) }.play

We can now start to sculpt sound with the use of filters and envelopes. For example, what would this remind us of?

{WhiteNoise.ar(1) * EnvGen.ar(Env.perc(0.001,0.3), doneAction:2)}.play

We can add a low pass filter (LPF) to the noise, so we cut off the high frequencies:

{LPF.ar(WhiteNoise.ar(1), 3300) * EnvGen.ar(Env.perc(0.001,0.5), doneAction:2)}.play

And here we use mouse movements to control the cutoff frequency (the x-axis) and the envelope duration (y-axis):

(
fork{
	100.do({
		{LPF.ar(WhiteNoise.ar(1), MouseX.kr(200,20000, 1)) 
			* EnvGen.ar(Env.perc(0.00001, MouseY.kr(1, 0.1, 1)), doneAction:2)}.play;
		1.wait;
	});
}
)

But what did that low pass filter do? LPF and HPF are named after what they let through: a low pass filter passes the low frequencies (cutting away the high ones), a high pass filter passes the high frequencies, and a band pass filter passes the frequencies within a band that you specify. We can view the functionality of the low pass filter with the use of a frequency scope. Note also the quality parameter in the resonant low pass filter:

{LPF.ar(WhiteNoise.ar(0.4), MouseX.kr(100, 20000).poll(20, "cutoff"))}.freqscope;
{RLPF.ar(WhiteNoise.ar(0.4), MouseX.kr(100, 20000).poll(20, "cutoff"), MouseY.kr(0.01, 1).p\
oll(20, "quality"))}.freqscope

Filter Types

Filters are algorithms that are typically applied in the time domain of an audio signal. This, for example, means adding a delayed copy of the signal to itself.

Here is a very primitive such filter:

{
var signal;
var delaytime = MouseX.kr(0.000022675, 0.001); // from one sample period (1/44100 sec) up to 1 ms
signal = Saw.ar(220, 0.5);
d =  DelayC.ar(signal, 0.6, delaytime); 
(signal + d).dup
}.play

Let us try some of the filter UGens of SuperCollider:

// low pass filter
{LPF.ar(WhiteNoise.ar(0.4), MouseX.kr(40,20000,1)!2) }.play;

// low pass filter with XLine
{LPF.ar(WhiteNoise.ar(0.4), XLine.kr(40,20000, 3, doneAction:2)!2) }.play;

// high pass filter
{HPF.ar(WhiteNoise.ar(0.4), MouseX.kr(40,20000,1)!2) }.play;

// band pass filter (the Q is controlled by the MouseY)
{BPF.ar(WhiteNoise.ar(0.4), MouseX.kr(40,20000,1), MouseY.kr(0.01,1)!2) }.play;

// Mid EQ filter attenuates or boosts a frequency band
{MidEQ.ar(WhiteNoise.ar(0.024), MouseX.kr(40,20000,1), MouseY.kr(0.01,1), 24)!2 }.play;

// what's happening here?
{
var signal = MidEQ.ar(WhiteNoise.ar(0.4), MouseX.kr(40,20000,1), MouseY.kr(0.01,1), 24);
BPF.ar(signal, MouseX.kr(40,20000,1), MouseY.kr(0.01,1)) !2
}.play;

Resonating filters

A resonant filter does what it says on the tin: it resonates certain frequencies. The bandwidth of this resonance can vary, so with a WhiteNoise input one can go from a very wide resonance (where the “quality” - the Q - of the filter is low) to a very narrow band resonance where the noise almost sounds like a sine wave. Let’s explore this with WhiteNoise and a band pass filter:

{BPF.ar(WhiteNoise.ar(0.4), MouseX.kr(100, 10000).poll(20, "cutoff"), MouseY.kr(0.01, 0.9999).poll(20, "rQ"))}.freqscope

Move your mouse around and explore how the Q factor, when increased, results in a narrower resonating bandwidth.

In low pass and high pass resonant filters, the energy at the cutoff frequency can be increased or decreased by setting the Q factor (or, in SuperCollider, the reciprocal (inverse) of Q).

// resonant low pass filter
{RLPF.ar(
	Saw.ar(222, 0.4), 
	MouseX.kr(100, 12000).poll(20, "cutoff"), 
	MouseY.kr(0.01, 0.9999).poll(20, "rQ")
)}.freqscope;
// resonant high pass filter
{RHPF.ar(
	Saw.ar(222, 0.4), 
	MouseX.kr(100, 12000).poll(20, "cutoff"), 
	MouseY.kr(0.01, 0.9999).poll(20, "rQ")
)}.freqscope;

There are bespoke resonance filters in SuperCollider, such as Resonz, Ringz and Formlet.

// resonant filter
{ Resonz.ar(WhiteNoise.ar(0.5), MouseX.kr(40,20000,1), 0.1)!2 }.play

// a short impulse won't resonate
{ Resonz.ar(Dust.ar(0.5), 2000, 0.1) }.play

// for that we use Ringz
{ Ringz.ar(Dust.ar(2, 0.6), MouseX.kr(200,6000,1), 2) }.play

// X is frequency and Y is ring time
{ Ringz.ar(Impulse.ar(4, 0, 0.3),  MouseX.kr(200,6000,1), MouseY.kr(0.04,6,1)) }.play

{ Ringz.ar(Impulse.ar(LFNoise2.ar(2).range(0.5, 4), 0, 0.3),  LFNoise2.ar(0.1).range(200, 3000), LFNoise2.ar(2).range(0.04, 6)) }.play

{ Mix.fill(10, {Ringz.ar(Impulse.ar(LFNoise2.ar(rrand(0.1, 1)).range(0.5, 1), 0, 0.1),  LFNoise2.ar(0.1).range(200, 12000), LFNoise2.ar(2).range(0.04, 6)) })}.play

{ Formlet.ar(Impulse.ar(4, 0.9), MouseX.kr(300,2000), 0.006, 0.1) }.play;

{ Formlet.ar(LFNoise0.ar(4, 0.2), MouseX.kr(300,2000), 0.006, 0.1) }.play;

Klank and DynKlank

Just as Klang is a bank of fixed frequency oscillators, i.e., additive synthesis, Klank is a bank of fixed frequency resonators: filters that ring at their given frequencies in response to an input signal.

{ Ringz.ar(Dust.ar(3, 0.3), 440, 2) + Ringz.ar(Dust.ar(3, 0.3), 880, 2) }.play

//  using only one Dust UGen to trigger all the filters:
(
{ 
var trigger, freq;
trigger = Dust.ar(3, 0.3);
freq = 440;
Ringz.ar(trigger, freq, 2, 0.3) 		+ 
Ringz.ar(trigger, freq*2, 2, 0.3) 	+ 
Ringz.ar(trigger, freq*3, 2, 0.3) !2
}.play
)

// but there is a better way:

// Klank is a bank of resonators like Ringz, but the frequency is fixed. (there is DynKlank)

{ Klank.ar(`[[800, 1071, 1153, 1723], nil, [1, 1, 1, 1]], Impulse.ar(2, 0, 0.1)) }.play;

// whitenoise input
{ Klank.ar(`[[440, 980, 1220, 1560], nil, [2, 2, 2, 2]], WhiteNoise.ar(0.005)) }.play;

// AudioIn input
{ Klank.ar(`[[220, 440, 980, 1220], nil, [1, 1, 1, 1]], AudioIn.ar([1])*0.001) }.play;

Let’s explore the DynKlank UGen. It does the same as Klank, but it allows us to change the values after the synth has been instantiated.

{ DynKlank.ar(`[[800, 1071, 1353, 1723], nil, [1, 1, 1, 1]], Dust.ar(8, 0.1)) }.play;

{ DynKlank.ar(`[[200, 671, 1153, 1723], nil, [1, 1, 1, 1]], PinkNoise.ar([0.007,0.007])) }.play;

{ DynKlank.ar(`[[200, 671, 1153, 1723]*XLine.ar(1, [1.2, 1.1, 1.3, 1.43], 5), nil, [1, 1, 1, 1]], PinkNoise.ar([0.007,0.007])) }.play;

SynthDef(\dynklanks, {arg freqs = #[200, 671, 1153, 1723]; 
	Out.ar(0, 
		DynKlank.ar(`[freqs, nil, [1, 1, 1, 1]], PinkNoise.ar([0.007,0.007]))
	)
}).add

a = Synth(\dynklanks)
a.set(\freqs, [333, 444, 555, 666])
a.set(\freqs, [333, 444, 555, 666].rand)

We know resonant filters when we hear them. The typical cry-baby wah wah guitar pedal is a band pass filter, for example. In the examples below we use a SinOsc to “move” the band pass frequency up and down the frequency spectrum. The SinOsc is here effectively working as a LFO (Low Frequency Oscillator - usually with a frequency below 20 Hz).

{ BPF.ar(Saw.ar(440), 440+(3000* SinOsc.kr(2, 0, 0.9, 1))) ! 2 }.play;
{ BPF.ar(WhiteNoise.ar(0.5), 1440+(300* SinOsc.kr(2, 0, 0.9, 1)), 0.2) ! 2}.play;

Bell Synthesis using Subtractive Synthesis

The desired sound that you are trying to synthesize can be achieved through different methods. As an example, we could explore how to synthesize a bell sound with subtractive synthesis.

(
{
var chime, freqSpecs, burst, harmonics = 10;
var burstEnv, burstLength = 0.001;
freqSpecs = `[
	{rrand(100, 1200)}.dup(harmonics), //freq array
	{rrand(0.3, 1.0)}.dup(harmonics).normalizeSum, //amp array
	{rrand(2.0, 4.0)}.dup(harmonics)]; //decay rate array
burstEnv = Env.perc(0, burstLength); //envelope times
burst = PinkNoise.ar(EnvGen.kr(burstEnv, gate: Impulse.kr(1))*0.4); //Noise burst
Klank.ar(freqSpecs, burst)!2
}.play
)

This bell will be triggered every second. This is because the Impulse UGen is triggering the opening of the gate in the EnvGen (envelope generator) that uses the percussion envelope defined in the ‘burstEnv’ variable. If we wanted this to happen only once, we could set the frequency of the Impulse to zero. If we add a general envelope that frees the synth after being triggered, we could run a task that triggers bells every second.

(
Task({
	inf.do({
		{
		var chime, freqSpecs, burst, harmonics = rrand(5, 30); // at least a few partials
		var burstEnv, burstLength = 0.001;
		freqSpecs = `[
			{rrand(100, 8000)}.dup(harmonics), //freq array
			{rrand(0.3, 1.0)}.dup(harmonics).normalizeSum, //amp array
			{rrand(2.0, 4.0)}.dup(harmonics)]; //decay rate array
		burstEnv = Env.perc(0, burstLength); //envelope times
		burst = PinkNoise.ar(EnvGen.kr(burstEnv, gate: Impulse.kr(0))*0.5); //Noise burst
		Klank.ar(freqSpecs, burst)!2 * EnvGen.ar(Env.linen(0, 4, 0), doneAction: 2) 
		}.play;
		[0.125, 0.25, 0.5, 1].choose.wait;
	})
}).play
)

Simulating the Moog

The much loved MiniMoog is a typical subtractive synthesis synthesizer. A few oscillator types can be mixed together and subsequently passed through its characteristic resonant low pass filter. We can try to simulate a setting on the MiniMoog using the MoogFF UGen, which models the Moog VCF (Voltage Controlled Filter) low pass filter, and choosing, say, a saw wave form (the MiniMoog also has triangle, square, and two pulse waves).

We would typically start by sketching our synth by hooking up the UGens in a .play or .freqscope:

{MoogFF.ar(Saw.ar(440), MouseX.kr(400, 16000), MouseY.kr(0.01, 4))}.freqscope

A common trick when simulating analogue equipment is to recreate the detuned oscillators of the analogue synth (they are typically slightly out of tune due to temperature differences within the synth itself). We can do this by adding another oscillator with a few Hz difference in frequency:

// here we add two Saws and split the signal into two channels
{ MoogFF.ar(Saw.ar(440, 0.4) + Saw.ar(442, 0.4), 4000 ) ! 2 }.freqscope
// like this:
{ ( SinOsc.ar(220, 0, 0.4) + SinOsc.ar(330, 0, 0.4) ) ! 2 }.play

// here we "expand" the input of the filter into two channels (the array)
{ MoogFF.ar([Saw.ar(440, 0.4), Saw.ar(442, 0.4)], 4000 )  }.freqscope
// like this - so different frequencies in each speaker:
{ [ SinOsc.ar(220, 0, 0.4), SinOsc.ar(330, 0, 0.4) ] }.play

// here we "expand" the saw into two channels, but sum them and then split into two
{ MoogFF.ar(Saw.ar([440, 442], 0.4).sum, 4000 ) ! 2 }.freqscope
// like this - and this is the one we'll use, although they're all fine:
{ SinOsc.ar( [220, 333], 0, 0.4) ! 2 }.play

We can then start to add arguments and prepare the synth graph for turning it into a SynthDef:

{ arg out=0, freq = 440, amp = 0.3, pan = 0, cutoff = 2, gain = 2, detune=2;
	var signal, filter;
	signal = Saw.ar([freq, freq+detune], amp).sum;
	filter = MoogFF.ar(signal, freq * cutoff, gain );
	Out.ar(out, Pan2.ar(filter, pan));
}.play

The two synth graphs above are pretty much the same, except we have removed the mouse input in the latter one. You can see that the frequency, amp, pan, and filter cutoff values are derived from the default arguments in the top line. There is little left for us to do in order to have a good working general synth: add an envelope, and wrap the graph up in a named SynthDef:

SynthDef(\moog, { arg out=0, freq = 440, amp = 0.3, pan = 0, cutoff = 2, gain = 2, gate=1;
	var signal, filter, env;
	signal = Saw.ar(freq, amp);
	env = EnvGen.ar(Env.adsr(0.01, 0.3, 0.6, 1), gate: gate, doneAction:2);
	filter = MoogFF.ar(signal * env, freq * cutoff, gain );	
	Out.ar(out, Pan2.ar(filter, pan));
}).add;

a = Synth(\moog);
a.set(\freq, 222); // set the frequency of the synth
a.set(\cutoff, 4); // set the cutoff (this would cut off at the 4th harmonic. Why?)
a.set(\gate, 0); // kill the synth

We can now hook up a MIDI keyboard and play the \moog synth that we’ve designed. The MiniMoog is monophonic (only one note at a time), and it could be written like this:

(
c = 4;
MIDIIn.connectAll;
MIDIdef.noteOn(\myOndef, {arg vel, key, channel, device;
	if(a.isNil, { // only create a new synth if the previous one has been released
		a = Synth(\moog, [\freq, key.midicps, \amp, vel/127, \cutoff, c]);
	});
	[key, vel].postln; 
});
MIDIdef.noteOff(\myOffdef, {arg vel, key, channel, device; 
	a.release; 
	a = nil;
	[key, vel].postln; 
});
)
c = 10; // change the cutoff frequency at a later point 
// the 'c' variable could be set from a GUI or a MIDI controller

The “a == nil” (or “a.isNil”) check is there to make sure that we don’t press another note and overwrite the variable ‘a’ with another synth. What would happen then is that the noteOff method would free the last synth put into variable ‘a’ and not the prior ones. Try to remove the condition and see what happens.

Finally, we might want to improve the MiniMoog and add a polyphonic feature. As we saw in an earlier chapter, we simply create an array for all the possible MIDI notes and turn them on and off:

a = Array.fill(127, { nil });
MIDIIn.connectAll;
MIDIdef.noteOn(\myOndef, {arg vel, key, channel, device; 
	// we use the key as index into the array as well
	a[key] = Synth(\moog, [\freq, key.midicps, \amp, vel/127, \cutoff, 4]);
});
MIDIdef.noteOff(\myOffdef, {arg vel, key, channel, device; 
	a[key].release;
});

We will leave it up to you to decide how you want to control the cutoff and gain parameters of the MoogFF filter UGen. This could be done through knobs or sliders on a MIDI interface, on a GUI, or you could even decide to explore mapping key press velocity to the cutoff frequency, such that the note sounds brighter (or dimmer?) the harder you press the key.
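
As a sketch of that last idea, assuming the polyphonic setup above, the velocity could be mapped linearly onto the cutoff argument:

MIDIdef.noteOn(\myOndef, {arg vel, key, channel, device; 
	// harder key presses open the filter further (velocity 0-127 mapped to cutoff 1-8)
	a[key] = Synth(\moog, [\freq, key.midicps, \amp, vel/127, \cutoff, vel.linlin(0, 127, 1, 8)]);
});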

Chapter 7 - Modulation

Modulating one signal with another is one of the oldest and most common techniques in sound synthesis. Here, any parameter of an oscillator can be modulated by the output of another oscillator. Filters, PlayBufs (sound file players) and other things can also be modulated. In this chapter we will explore modulation, and in particular amplitude modulation (AM), ring modulation (RM) and frequency modulation (FM).

LFOs (Low Frequency Oscillators)

As mentioned, most parameters or controls of an oscillator can be controlled by the output of another. Low frequency oscillators (LFOs) are oscillators that typically operate under 20 Hz, although in SuperCollider there is little point in strictly defining oscillators as LFOs, as we might always want to increase that frequency to 40 or 400 Hz!

Here below are examples of oscillators that have different controls modulated by the output of another UGen.

In the first example we have the frequency of one oscillator modulated by the output (amplitude) of another:

{ SinOsc.ar( 440 * SinOsc.ar(1), 0, 0.4) }.play

We hear that the modulation is 2 Hz, not one. This is because the output of the modulating oscillator goes up to 1 and down through -1 within one second, and a frequency argument with a negative sign is automatically treated as positive (negative frequency does not make sense here), so the sweep happens twice per second. For one cycle of modulation per second, you would have to give the modulator 0.5 as a frequency.
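
A quick check of that reasoning:

{ SinOsc.ar( 440 * SinOsc.ar(0.5), 0, 0.4) }.play // now one modulation cycle per second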

Let’s try the same for amplitude:

{ SinOsc.ar( 440, 0, 0.4 * SinOsc.ar(1)) }.play
// or perhaps using LFPulse (which outputs 1 and 0s if the amp is 1)
{ SinOsc.ar( 440, 0, 0.4 * LFPulse.ar(2)) }.play

We thus get the familiar effects of vibrato (modulation of frequency) and tremolo (modulation of amplitude), as they are commonly defined:

// vibrato
{SinOsc.ar(440+SinOsc.ar(4, 0, 10), 0, 0.4) }.play
// tremolo
{SinOsc.ar(440, 0, SinOsc.ar(3, 0, 1)) }.play

In modulation synthesis we talk about a “modulator” (the oscillator that does the modulation) and the “carrier” which is the main signal being modulated.

// mouseX is the power of the vibrato
// mouseY is the frequency of the vibrato
{
	var modulator, carrier;
	modulator = SinOsc.ar(MouseY.kr(20, 5), 0, MouseX.kr(5, 20)); 
	carrier = SinOsc.ar(440 + modulator, 0, 1);
	carrier ! 2 // the output
}.play

There are special Low Frequency Oscillators (LFOs) in SuperCollider. They are typically not band limited, which means that they start to alias (or mirror back) into the audible frequency range. Consider the difference between Saw (band-limited) and LFSaw (non-band-limited) here:

{Saw.ar(MouseX.kr(100, 10000), 0.5)}.freqscope
{LFSaw.ar(MouseX.kr(100, 10000), 0.5)}.freqscope

When you move your mouse, you can see how the band-limited Saw only gives you the harmonics above the fundamental frequency set by the mouse. On the other hand, with LFSaw, you get the harmonics mirroring back into the audible range at the Nyquist frequency (half the sampling rate, very often 22050 Hz).

But the LF UGens are good for modulation, and we can typically run them at control rate (using .kr rather than .ar). Control rate means roughly 64 times less calculation per second, as one value is computed per block, assuming the block size is set to 64 samples.
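
You can verify the numbers on your own server (assuming it is booted and uses the default block size of 64):

s.options.blockSize; // usually 64 samples per control period
s.sampleRate / s.options.blockSize; // the control rate, e.g. 44100/64 = 689.0625 values per second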

// LFSaw
{ SinOsc.ar(LFSaw.kr(4, 0, 200, 400), 0, 0.7) }.play

// LFTri
{ SinOsc.ar(LFTri.kr(4, 0, 200, 400), 0, 0.7) }.play
{ Saw.ar(LFTri.kr(4, 0, 200, 400), 0.7) }.play

// LFPar
{ SinOsc.ar(LFPar.kr(0.2, 0, 400,800),0, 0.7) }.play

// LFCub
{ SinOsc.ar(LFCub.kr(0.2, 0, 400,800),0, 0.7) }.play

// LFPulse
{ SinOsc.ar(LFPulse.kr(3, 1, 0.3, 200, 200),0, 0.7) }.play
{ SinOsc.ar(LFPulse.kr(3, 1, 0.3, 2000, 200),0, 0.7) }.play

// LFOs can also perform at audio rate
{ LFPulse.ar(LFPulse.kr(3, 1, 0.3, 200, 200),0, 0.7) }.play
{ LFSaw.ar(LFSaw.kr(4, 0, 200, 400), 0, 0.7) }.play
{ LFTri.ar(LFTri.kr(4, 0, 200, 400), 0, 0.7) }.play
{ LFTri.ar(LFSaw.kr(4, 0, 200, 800), 0, 0.7) }.play

Finally, we should note here at the end of this section on LFOs that the LFO frequency can of course go as high as you would like, but then it ceases being an LFO and starts to do a different type of synthesis, which we will look at below. In the examples here, you will start to hear strange artefacts appear when the modulation frequency goes up over 20 Hz (observe the post window).

{SinOsc.ar(440+SinOsc.ar(XLine.ar(4, 200, 10).poll(20, "mod freq:"), 0, 20), 0, 0.4) }.play
{SinOsc.ar(440, 0, SinOsc.ar(XLine.ar(4, 200, 10).poll(20, "mod freq:"), 0, 1)) }.play

Theremin

We have now obviously found the technique to create a Theremin using vibrato and tremolo:

// Using the MouseX to control amplitude
	{
		var f;
		f = MouseY.kr(4000, 200, 'exponential', 0.8);
		SinOsc.ar(
			freq: f+ (f*SinOsc.ar(7,0,0.02)),
			mul: MouseX.kr(0, 0.9)
		)
	}.play

// Using the MouseX to control vibrato speed
	{
		var f;
		f = MouseY.kr(4000, 200, 'exponential', 0.8);
		SinOsc.ar(
			freq: f+ (f*SinOsc.ar(3+MouseX.kr(1, 6),0,0.02)),
			mul: 0.3
		)
	}.play

Amplitude Modulation (AM synthesis)

In one of the examples above, the XLine UGen took the LFO frequency up over 20 Hz and we started to get some exciting artefacts in the sound. What was happening was that “sidebands” appeared, i.e., partials on either side of the sine. Amplitude modulation modulates the carrier with unipolar values (that is, values between 0 and 1 - not bipolar (-1 to 1)).

In amplitude modulation, the sidebands are the sum and the difference of the carrier and the modulator frequency. For example, a 300 Hz carrier and 160 Hz modulator would generate 140 Hz and 460 Hz sidebands. However, the carrier frequency is always present.

{
	var modulator, carrier;
	modulator = SinOsc.ar(MouseX.kr(2, 20000, 1), 0, mul: 0.5, add: 0.5); // unipolar: 0 to 1
	carrier = SinOsc.ar(MouseY.kr(300,2000), 0, modulator);
	carrier ! 2;
}.play

If there are harmonics in the wave being modulated, each of the harmonics will get sidebands as well; check the saw wave:

{
	var modulator, carrier;
	modulator = SinOsc.ar(MouseX.kr(2, 2000, 1), mul: 0.5, add: 0.5); // unipolar: 0 to 1
	carrier = Saw.ar(533, modulator);
	carrier ! 2 // the output
}.play

In digital synthesis we can apply all kinds of mathematical operators to the sound, for example using .abs to calculate absolute values of the modulator (this results in many sidebands; try also using .cubed and other unary operators on the signal).

{
	var modulator, carrier;
	modulator = SinOsc.ar(MouseX.kr(2, 20000, 1)).abs;
	carrier = SinOsc.ar(MouseY.kr(200,2000), 0, modulator);
	carrier!2 // the output
}.play

Ring Modulation

As mentioned above, ring modulation uses bipolar modulation values (-1 to 1) whereas AM uses unipolar modulation values (0 to 1). As a result, ordinary amplitude modulation outputs the original carrier frequency as well as the two sidebands for each of the spectral components of the carrier and modulation signals. Ring modulation, however, cancels out the carrier frequencies and simply outputs the sidebands.

{
	var modulator, carrier;
	modulator = SinOsc.ar(MouseX.kr(2, 200, 1));
	carrier = SinOsc.ar(333, 0, modulator);
	carrier!2;
}.play

Ring modulation was used a great deal in the early electronic music studios, for example in Cologne and at the BBC Radiophonic Workshop. The Barrons used the technique in the music for Forbidden Planet, and so did Stockhausen in his Mikrophonie II, where voices are modulated with the sound of a Hammond organ. Let’s try to ring modulate a voice:

b = Buffer.read(s, Platform.resourceDir +/+ "sounds/a11wlk01.wav");
{
	var modulator, carrier;
	modulator = SinOsc.ar(MouseX.kr(20, 200, 1));
	carrier = PlayBuf.ar(1, b, 1, loop:1) * modulator;
	carrier ! 2;
}.play;

Here a sine wave is modulating a voice saying “Columbia, this is Houston, over…”. We could also use one sound file to ring modulate the output of another:

b = Buffer.read(s, Platform.resourceDir +/+ "sounds/a11wlk01.wav");
c = Buffer.read(s, "yourSound.wav");
c.play
{
	var modulator, carrier;
	modulator = PlayBuf.ar(1, c, 1, loop:1);
	carrier = PlayBuf.ar(1, b, 1, loop:1) * modulator;
	carrier ! 2;
}.play;

Frequency Modulation (FM Synthesis)

FM synthesis is a popular synthesis technique that works well for a wide range of sounds. It became popular with the Yamaha DX7 synthesizer in the 1980s, but it was invented in the late 1960s when John Chowning, musician and researcher at Stanford University, discovered the power of FM synthesis. He was working in the lab one day when he accidentally plugged the output of one oscillator into the frequency input of another, and he heard a sound rich with partials (or sidebands, as we call them in modulation synthesis). It’s important to realise that at the time an oscillator was expensive equipment, and the possibility of getting so many partials out of only two oscillators was very exciting in musical, engineering, and economic terms.

Chowning’s famous FM synthesis piece is called Stria and can be found on the interwebs. The piece was an eye opener for many musicians, as its sounds were so unusual in timbre, rendering the texture of the piece surprising and novel. Imagine being there at the time and hearing these “unnatural” sounds for the first time!

1980s synth-pop music is of course full of the sounds of FM synthesis, with musicians typically using the DX7, but very often playing the pre-installed sounds of the synth rather than making their own. The reason for this could be that FM synthesis is quite hard to learn, as there are so many parameters at play in any sound. The story goes that the user interface of the DX7 prevented people from designing sounds in an effective and ergonomic way, hence the lack of new and exploratory sound design using that synth.

{SinOsc.ar(1400 + SinOsc.ar(MouseX.kr(2,2000,1), 0, MouseY.kr(1,1000)), 0, 0.5)!2}.freqscope

Using the frequency scope in the example above, you will see that when you move the mouse around, sidebands appear, spreading at an even distance from each other, and the more amplitude the modulator has, the more sidebands you get. Let’s explore the above example with comments, in order to get the terminology right:

// the same as above - with explanations:
{
SinOsc.ar(2000 	// the carrier and the carrier frequency
	+ SinOsc.ar(MouseX.kr(2,2000,1),  // the modulator and the modulator frequency
		0, 					  // the phase of the modulator
		MouseY.kr(1,1000) 		  // the modulation depth (index)
		), 
0,		// the carrier phase 
0.5)	// the carrier amplitude
}.play

What is happening is that we have a carrier oscillator (the first SinOsc) with a frequency of 2000 Hz. We then add to this frequency the output of another oscillator. Note that the amplitude of the modulator is very high: it goes up to 1000, which would become uncomfortable for your ears were you to play that on its own. So when you move the mouse across the x-axis, you notice that around the carrier frequency partial (of 2000Hz) there are appearing sidebands with the distance of the modulator frequency. That is, if the modulator frequency is 250 Hz, you get sidebands of 1750 and 2250; 1500 and 2500; 1250 and 2750, etc. The stronger the modulation depth, or the index, of the modulator (its amplitude basically), the louder the sidebands will become.

We could of course create all those sidebands with oscillators in an additive synthesis style, but note the efficiency of FM compared to Additive synthesis:

// FM
{PMOsc.ar(1000, 800, 12, mul: EnvGen.kr(Env.perc(0, 0.5), Impulse.kr(1)))}.play;
 // compared with additive synthesis:
{ 
Mix.ar( 
 SinOsc.ar((1000 + (800 * (-20..20))),  // we're generating 41 oscillators (see *)
  mul: 0.1*EnvGen.kr(Env.perc(0, 0.5), Impulse.kr(1))) 
)}.play 

TIP:

// * run this line: (1000 + (800 * (-20..20)))
// and see the frequency array that is mixed down with Mix.ar
// (I think this is an example from David Cope)

Below are two patches that serve well to explore the power of simple FM synthesis. In the first one, an LFNoise0 UGen is used to generate a new value, four times per second, in the range 40 to 80 (mul 20, add 60). This will be a floating point number, so it is rounded to an integer and then turned into a frequency value using .midicps (which converts a MIDI note value into cycles per second).

{ var freq, ratio, modulator, carrier;
freq = LFNoise0.kr(4, 20, 60).round(1).midicps; 
ratio = MouseX.kr(1,4); 
modulator = SinOsc.ar(freq * ratio, 0, MouseY.kr(0.1,10));
carrier = SinOsc.ar(freq + (modulator * freq), 0, 0.5);
carrier	
}.play

// let's fork it and create a perc Env!
{	
	40.do({
			{ var freq, ratio, modulator, carrier;
			freq = rrand(60, 72).midicps; 
			ratio = MouseX.kr(0.5,2); 
			modulator = SinOsc.ar(freq * ratio, 0, MouseY.kr(0.1,10));
			carrier = SinOsc.ar(freq + (modulator * freq), 0, 0.5);
			carrier * EnvGen.ar(Env.perc(0, 1), doneAction:2)
		}.play;
		0.5.wait;
	});
}.fork

The PMOsc - Phase modulation

Frequency modulation and phase modulation are pretty much the same thing. In SuperCollider we have the PMOsc (Phase Modulation Oscillator), and we can try to recreate the above example using it:

{PMOsc.ar(1400, MouseX.kr(2,2000,1), MouseY.kr(0,1), 0)!2}.freqscope

You will note one feature of phase modulation: when the modulating frequency is low (< 20 Hz), you don’t get the vibrato-like effect of the frequency modulation synth.
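
A rough comparison sketch, with the modulator at 4 Hz in both cases (the FM depth of 100 Hz gives an obvious vibrato, whilst a PM index of 2 only deviates the frequency by index times modulator frequency, here 8 Hz):

{ SinOsc.ar(440 + SinOsc.ar(4, 0, 100), 0, 0.3) ! 2 }.play; // FM: clear vibrato
{ PMOsc.ar(440, 4, 2, 0, 0.3) ! 2 }.play; // PM: hardly any pitch sweep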

The magic of the PMOsc can be studied if we look under the hood. PMOsc is a pseudo-UGen, i.e., it is not written in C and compiled as a plugin for the SC server, but rather defined in sclang when the class library of SuperCollider is compiled (on startup, or when you recompile the class library).

How does the PMOsc work? Let’s check the source file (Cmd+j or Ctrl+j). You will see that the PMOsc.ar method simply returns (with the ^ symbol) a SinOsc with another SinOsc in the phase argument slot.

PMOsc  {
	*ar { arg carfreq,modfreq,pmindex=0.0,modphase=0.0,mul=1.0,add=0.0; 
		^SinOsc.ar(carfreq, SinOsc.ar(modfreq, modphase, pmindex),mul,add)
	}	
	*kr { arg carfreq,modfreq,pmindex=0.0,modphase=0.0,mul=1.0,add=0.0; 
		^SinOsc.kr(carfreq, SinOsc.kr(modfreq, modphase, pmindex),mul,add)
	}
}

Here are a few examples for studying the PM oscillator:

{ PMOsc.ar(MouseX.kr(500,2000), 600, 3, 0, 0.1) }.play; // modulate carfreq
{ PMOsc.ar(2000, MouseX.kr(200,1500), 3, 0, 0.1) }.play; // modulate modfreq
{ PMOsc.ar(2000, 500, MouseX.kr(0,10), 0, 0.1) }.play; // modulate index

The SuperCollider documentation of the UGen presents a nice demonstration of the UGen that looks a bit like this:

e = Env.linen(2, 5, 2);
fork{
    inf.do({
        { LinPan2.ar(EnvGen.ar(e) 
			*
			PMOsc.ar(2000.0.rand,800.0.rand, Line.kr(0, 12.0.rand,9),0,0.1), 
			1.0.rand2)
			}.play;
        2.wait;
    })
}

Other examples of PM synthesis:

{ var freq, ratio;
freq = LFNoise0.kr(4, 20, 60).round(1).midicps; 
ratio = MouseX.kr(1,4); 
SinOsc.ar(freq, 				// the carrier and the carrier frequency
		SinOsc.ar(freq * ratio, 	// the modulator and the modulator frequency
		0, 					// the phase of the modulator
		MouseY.kr(0.1,10) 		// the modulation depth (index)
		), 
0.5)		// the carrier amplitude
}.play

The same patch without the comments, and with the modulator and carrier put into variables:

{ var freq, ratio, modulator, carrier;
	freq = LFNoise0.kr(4, 20, 60).round(1).midicps; 
	ratio = MouseX.kr(1,4); 
	modulator = SinOsc.ar(freq * ratio, 0, MouseY.kr(0.1,10));
	carrier = SinOsc.ar(freq, modulator, 0.5);
	carrier	
}.play

The use of Envelopes in FM synthesis

Frequency modulation is a complex technique and Chowning’s initial research paper shows a wide range of applications of this synthesis method. For example, in the patch below, we have a much lower modulation amplitude (between 0 and 1), but we multiply the modulator output by the carrier frequency before adding it to the carrier frequency.

(
var carrier, carFreq, carAmp, modulator, modFreq, modAmp; 
carFreq = 2000; 
carAmp = 0.2;		
modFreq = 327; 
modAmp = 0.2; 
{
	modAmp = MouseX.kr(0, 1); 	// choose normalized range for modulation
	modFreq = MouseY.kr(10, 1000, 'exponential');
	modulator = SinOsc.ar( modFreq, 0, modAmp);			
	carrier = SinOsc.ar( carFreq + (modulator * carFreq), 0, carAmp);
	[ carrier, carrier, modulator ]
}.play
)

And we can compare that technique with our initial FM example. In short, the carrier frequency is here used to scale the index (amplitude) of the modulator. These are design details, and there are multiple ways of using FM synthesis to arrive at the sound that you are after.

// current technique 
{ SinOsc.ar( 1400 + (SinOsc.ar( MouseY.kr(10, 1000, 1), 0, MouseX.kr(0, 1)) * 1400), 0, 0.5) ! 2 }.play
// our first example
{ SinOsc.ar(1400 + SinOsc.ar(MouseY.kr(10, 1000, 1), 0, MouseX.kr(1, 1000)), 0, 0.5) ! 2 }.play

One of the key techniques in FM synthesis is to use envelopes to control the parameters of the modulator. By changing the width and amplitude of the sidebands over time, we can get many interesting sounds, for example trumpet-, mallet-, or bell-like tones.

Let us first create a basic FM synthesis synth definition and try to play it with diverse arguments:

SynthDef(\fmsynth, {arg outbus = 0, freq=440, carPartial=1, modPartial=1, index=3, mul=0.2, ts=1;
	var mod, car, env;
	// modulator frequency
	mod = SinOsc.ar(freq * modPartial, 0, freq * index );
	// carrier frequency
	car = SinOsc.ar((freq * carPartial) + mod, 0, mul );
	// envelope
	env = EnvGen.ar( Env.perc(0.01, 1), doneAction: 2, timeScale: ts);
	Out.ar( outbus, car * env)
}).add;

Synth(\fmsynth, [ \outbus, 0, \freq, 600.0, \carPartial, 1.5, \ts, 1]);
Synth(\fmsynth, [ \outbus, 0, \freq, 600.0, \carPartial, 2.5, \ts, 2]);
Synth(\fmsynth, [ \outbus, 0, \freq, 600.0, \carPartial, 3.5, \ts, 2]);
Synth(\fmsynth, [ \outbus, 0, \freq, 600.0, \carPartial, 4.0, \ts, 2]);
Synth(\fmsynth, [ \outbus, 0, \freq, 300.0, \carPartial, 1.5, \ts, 2]);
Synth(\fmsynth, [ \outbus, 0, \freq, 600.0, \carPartial, 0.5, \ts, 2]);

Synth(\fmsynth, [ \outbus, 0, \freq, 600.0, \carPartial, 1.5, \modPartial, 1, \ts, 2]);
Synth(\fmsynth, [ \outbus, 0, \freq, 300.0, \carPartial, 1.5, \modPartial, 1, \ts, 2]);
Synth(\fmsynth, [ \outbus, 0, \freq, 400.0, \carPartial, 1.5, \modPartial, 1, \ts, 2]);
Synth(\fmsynth, [ \outbus, 0, \freq, 800.0, \carPartial, 1.5, \modPartial, 1, \ts, 2]);

Synth(\fmsynth, [ \outbus, 0, \freq, 600.0, \carPartial, 1.5, \modPartial, 1, \ts, 2]);
Synth(\fmsynth, [ \outbus, 0, \freq, 600.0, \carPartial, 1.5, \modPartial, 1.1, \ts, 2]);
Synth(\fmsynth, [ \outbus, 0, \freq, 600.0, \carPartial, 1.5, \modPartial, 1.15, \ts, 2]);
Synth(\fmsynth, [ \outbus, 0, \freq, 600.0, \carPartial, 1.5, \modPartial, 1.2, \ts, 2]);

SynthDef(\fmsynthenv, {arg outbus = 0, freq=440, carPartial=1, modPartial=1, index=3, mul=0.2, ts=1;
	var mod, car, env;
	var modfreqenv, modindexenv;
	modfreqenv = EnvGen.kr(Env.perc(0.1, ts/10, 0.125))+1; // add 1 so we're not starting from zero
	modindexenv = EnvGen.kr(Env.sine(ts, 1))+1;
	mod = SinOsc.ar(freq * modPartial * modfreqenv, 0, freq * index * modindexenv);
	car = SinOsc.ar((freq * carPartial) + mod, 0, mul );
	env = EnvGen.ar( Env.perc(0.01, 1), doneAction: 2, timeScale: ts);
	Out.ar( outbus, Pan2.ar(car * env))
}).add;

Synth(\fmsynthenv, [ \freq, 440.0, \ts, 10]);
Synth(\fmsynthenv, [ \freq, 440.0, \ts, 1]);
Synth(\fmsynthenv, [ \freq, 110.0, \ts, 2]);
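
As a final sketch before moving on (the inharmonic ratio and the decaying index below are arbitrary choices, not values from Chowning’s paper), an envelope on the index can give a bell-like sound:

(
{
	var freq = 500, ratio = 1.4, index, ampEnv, mod;
	index = EnvGen.kr(Env.perc(0.01, 6, 8));              // the index decays from 8 to 0
	ampEnv = EnvGen.kr(Env.perc(0.01, 6), doneAction: 2); // free the synth when done
	mod = SinOsc.ar(freq * ratio, 0, freq * index);
	SinOsc.ar(freq + mod, 0, 0.3 * ampEnv) ! 2
}.play
)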

Chapter 8 - Envelopes and shaping sound

In both analog and digital synthesis, we typically operate with sound sources that are constantly running, whether analog oscillators or digital unit generators. This is great fun of course, and we can delight in altering parameters by turning knobs or setting control values, sculpting the sound we are after. However, such a sound is not very musical. Hardly any musical instrument sounds forever; in instrumental sounds we typically get an initial burst of energy, after which the sound reaches some sort of equilibrium until it fades out.

The way we shape these sounds in both analog and digital synthesis is to use so-called “envelopes.” They wrap around our sound and give it the shape we’re after. Most people have, for example, heard of the ADSR envelope (Attack, Decay, Sustain, Release), which is one of the available envelopes in SuperCollider:

The shape of an ADSR envelope
The shape of an ADSR envelope

Envelopes in SuperCollider come in two types: sustaining (un-timed) and non-sustaining (timed) envelopes. A gate is a trigger (a positive number) that holds the envelope open until it gets a message to close it (such as 0 or less). This is like a finger pressing down a key on a MIDI keyboard. If we were using an ADSR envelope, when the finger presses the key we would run the A (attack) and the D (decay), but the S (sustain) would then last as long as the finger holds the key down. When the finger releases the key, the R (release) argument defines how long it takes for the sound to fade out. Synths with gated envelopes can therefore be of indefinite duration, i.e., their time is not set at the point of initialising the synth.

However, using a non-gated envelope, or a timed one, we set the duration of the sound at the time of triggering the synth. Here we don’t need to use a gate to trigger and release a synth.
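
As a quick illustration (a minimal sketch; Env.asr and Env.perc are introduced just below), compare a gated and a timed envelope:

// gated: the synth sounds until we close the gate
x = {arg gate=1; SinOsc.ar(440, 0, 0.2 * EnvGen.kr(Env.asr(0.1, 1, 1), gate, doneAction: 2)) ! 2}.play;
x.set(\gate, 0); // closing the gate runs the release stage and frees the synth

// timed: the duration is fixed at the moment the synth starts
{SinOsc.ar(440, 0, 0.2 * EnvGen.kr(Env.perc(0.01, 2), doneAction: 2)) ! 2}.play;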

Envelope types

Envelopes are powerful as we can define precisely the shape of a sound. This could be the amplitude of a sound, but it could also be a definition of frequency, filter cutoff, and so on. Let’s look at a few common envelope types in SuperCollider:

Env.linen(1, 2, 3, 0.6).test.plot;
Env.triangle(1, 1).test.plot;
Env.sine(1, 1).test.plot;
Env.perc(0.05, 1, 1, -4).test.plot;
Env.adsr(0.2, 0.2, 0.5, 1, 1, 1).test.plot;
Env.asr(0.2, 0.5, 1, 1).test.plot;
Env.cutoff(1, 1).test(2).plot;
// using .new you can define your own envelope with as many points as you like
Env.new([0, 1, 0.3, 0.8, 0], [2, 3, 1, 4],'sine').test.plot;
Env.new([0,1, 0.3, 0.8, 0], [2, 3, 1, 4],'linear').test.plot;
Env.new({1.0.rand}!10, {2.0.rand}!9).test.plot;
Env.new({1.0.rand}!100, {2.0.rand}!99).test.plot;

Different sounds require different envelopes. For example, if we wanted to synthesise a snare sound, we might choose to use the .perc method of Env.

{ LPF.ar(WhiteNoise.ar(0.5), 2000) * EnvGen.ar(Env.perc(0.001, 0.5)) ! 2 }.play

// And more bespoke envelopes can be created with the .new method:
{ Saw.ar(EnvGen.ar(Env.sine(0.3).range(140, 120))) * EnvGen.ar(Env.new([0, 1, 0, 0.5, 0], [0.3, 0, 0.1, 0])) ! 2 }.play

// Note that above we are using a .sine envelope to modulate the frequency argument of the Saw UGen.

Envelopes define points in time that have a target value, a duration, and a shape; we can define the value, length, and curvature of each segment. The .new method expects arrays for the value, duration, and shape arguments. This can be very useful, as through a very simple syntax you can create complex transitions of values through time:

Env.new([0, 1, 0.5, 1, 0], [1, 2, 3, 2], \welch).plot;
Env.new([0, 1, 0.5, 1, 0], [1, 2, 3, 2], \step).plot;
Env.new([0, 1, 0.5, 1, 0], [1, 2, 3, 2], \sqr).plot;
Env.new([0, 1, 0.5, 1, 0], [1, 2, 3, 2], [2, 0, 5, 3]).plot;
Env.new([0, 1, 0.5, 1, 0], [1, 2, 3, 2], [0, 0, 0, 0]).plot;
Env.new([0, 1, 0.5, 1, 0], [1, 2, 3, 2], [5, 5, 5, 5]).plot;
Env.new([0, 1, 0.5, 1, 0], [1, 2, 3, 2], [20, -20, 20, 20]).plot;

The last array defines the curve where 0 is linear, positive number curves the segment up, and a negative number curves it down. Check the Env documentation for further explanation.

The EnvGen - Envelope Generator

The envelope itself does nothing. It is simply a description of a form: of values in time and the shape of the line between those values. If we want to apply this envelope to a signal, we need to use the EnvGen UGen to play the envelope within a synth graph. Note that EnvGen has both .ar and .kr methods, so it works either at audio rate or control rate. Its arguments are the following:

EnvGen.ar(envelope, gate, levelScale, levelBias, timeScale, doneAction)

where the first argument is the envelope (for example Env.perc(0.1, 1)); the second is the gate (not used with timed envelopes, but since the gate argument defaults to 1 it triggers the envelope right away); the third is levelScale, which scales the levels (such as amplitude) of the envelope; the fourth is levelBias, which offsets the envelope’s breakpoints; the fifth is timeScale, which can shorten or stretch the envelope (so a one-second Env.sine(1) could become ten seconds long); and finally we have the doneAction, which defines what will happen to the synth instance after the envelope has done its job.
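
For example (a minimal sketch), the same percussive envelope can be reshaped through the levelScale and timeScale arguments without redefining it:

// one Env.perc, scaled in level and time by the EnvGen arguments
{SinOsc.ar(440, 0, EnvGen.kr(Env.perc(0.01, 1), levelScale: 0.2, doneAction: 2)) ! 2}.play;
{SinOsc.ar(440, 0, EnvGen.kr(Env.perc(0.01, 1), levelScale: 0.1, timeScale: 4, doneAction: 2)) ! 2}.play;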

doneActions

The doneActions are an important aspect of how the SC-server works. One of the key strengths of SuperCollider is how a synth can be created and removed very efficiently, making it useful for granular synthesis or the playback of notes. Here a grain or a note can be a synth that exists for 20 milliseconds or 20 minutes. Users of data flow languages, such as Pure Data, will appreciate how useful this is, as synths can be spawned at will and don’t need to be hard-wired beforehand.

When the synth has exceeded its lifetime through the function of the envelope, it will typically become silent. However, we don’t want silent synths to pile up after they have played; we want to free the server of them. Unused synths still run, use up processing power (CPU), and can eventually cause distortion in the sound, for example if hundreds of synths have not been freed from the server and are still consuming CPU.

The doneActions are the following:

  • 0 - Do nothing when the envelope has ended.
  • 1 - Pause the synth running, it is still resident.
  • 2 - Remove the synth and deallocate it.
  • 3 - Remove and deallocate both this synth and the preceding node.
  • 4 - Remove and deallocate both this synth and the following node.
  • 5 - Same as 3. If the preceding node is a group then free all members of the group.
  • 6 - Same as 4. If the following node is a group then free all members of the group.
  • 7 - Same as 3. If the synth is part of a group, free all preceding nodes in the group.
  • 8 - Same as 4. If the synth is part of a group, free all following nodes in the group.
  • 9 - Same as 2, but pause the preceding node.
  • 10 - Same as 2, but pause the following node.
  • 11 - Same as 2, but if the preceding node is a group then free its synths.
  • 12 - Same as 2, but if the following node is a group then free its synths.
  • 13 - Frees the synth and all preceding and following nodes.

The doneActions are used in the EnvGen UGen all the time, and it is important not to forget them. However, there are other UGens in SuperCollider that can also free their enclosing synth when some event has happened, such as a sample buffer finishing playing. These UGens are the following:

  • PlayBuf and RecordBuf - doneAction when the buffer has been played or recorded.
  • Line and XLine - doneAction when a line has ended.
  • Linen - doneAction when the envelope is finished.
  • LFGauss - doneAction after the completion of a cycle.
  • DemandEnvGen - Similar to EnvGen.
  • DetectSilence - doneAction when the UGen detects silence below a threshold.
  • Duty and TDuty - doneAction evaluated when a duty stream ends.
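
For instance (a minimal sketch), XLine frees its synth when the line has run its course:

// a two-second glissando; the synth frees itself when the XLine is done
{ SinOsc.ar(XLine.kr(880, 220, 2, doneAction: 2), 0, 0.2) ! 2 }.play
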
Let’s now define a simple gated synth so that we can watch the doneActions in practice:

SynthDef(\sine, {arg freq=440, amp=0.1, gate=1, dA = 2;
	var signal, env;
	signal = SinOsc.ar(freq, 0, amp);
	env = EnvGen.ar(Env.adsr(0.2, 0.2, 0.5, 0.3, 1, 1), gate, doneAction: dA);
	Out.ar(0, Pan2.ar(signal * env, 0));
}).add

s.plotTree // watch the nodes appearing on the server tree

In the examples below, when you add a node, it is always added at the head of the node tree. This is how the SC server does it by default. Synths can be added anywhere in the tree though, but that will be discussed later in the chapter on busses, nodes and groups. [xxx, 15. ]

// doneAction = 0
a = Synth(\sine, [\dA, 0])
a.release
a.set(\gate, 1)

// doneAction = 1
a = Synth(\sine, [\dA, 1])
a.release
a.set(\gate, 1)
a.run(true)

// doneAction = 2
a = Synth(\sine, [\dA, 2])
a.release
a.set(\gate, 1) // it's gone! (see server synth count)

// doneAction = 3
a = Synth(\sine, [\dA, 3])
b = Synth(\sine, [\freq, 660, \dA, 3])
a.release

// doneAction = 3
a = Synth(\sine, [\dA, 3])
b = Synth(\sine, [\freq, 660, \dA, 3], addAction:\addToTail)
b.release

// doneAction = 3
a = Synth(\sine, [\freq, 440, \dA, 3])
b = Synth(\sine, [\freq, 660, \dA, 3])
c = Synth(\sine, [\freq, 880, \dA, 3])
b.release // will release b and c

// doneAction = 4
a = Synth(\sine, [\freq, 440, \dA, 4])
b = Synth(\sine, [\freq, 660, \dA, 4])
c = Synth(\sine, [\freq, 880, \dA, 4])
b.release // will release a and b

// doneAction = 5
g = Group.new;
a = Synth(\sine, [\freq, 440, \dA, 0], target:g)
b = Synth(\sine, [\freq, 660, \dA, 0], target:g)
c = Synth(\sine, [\freq, 880, \dA, 5])
c.release // will only free c (itself)

// doneAction = 5
g = Group.new;
a = Synth(\sine, [\freq, 440, \dA, 0], target:g)
b = Synth(\sine, [\freq, 660, \dA, 0], target:g)
c = Synth(\sine, [\freq, 880, \dA, 5], addAction:\addToTail)
c.release // will free itself and the preceding group

// doneAction = 6
g = Group.new;
a = Synth(\sine, [\freq, 440, \dA, 0], target:g)
b = Synth(\sine, [\freq, 660, \dA, 0], target:g)
c = Synth(\sine, [\freq, 880, \dA, 6])
c.release // will free itself and the following group

// doneAction = 7
g = Group.new;
a = Synth(\sine, [\freq, 440, \dA, 0], target:g )
b = Synth(\sine, [\freq, 660, \dA, 0], target:g)
c = Synth(\sine, [\freq, 880, \dA, 7], target:g)
d = Synth(\sine, [\freq, 1100, \dA, 0], target:g)
e = Synth(\sine, [\freq, 1300, \dA, 0], target:g)
c.release // will free itself and preceding nodes in a group

// doneAction = 8
g = Group.new;
a = Synth(\sine, [\freq, 440, \dA, 0], target:g)
b = Synth(\sine, [\freq, 660, \dA, 0], target:g)
c = Synth(\sine, [\freq, 880, \dA, 8], target:g)
d = Synth(\sine, [\freq, 1100, \dA, 0], target:g)
e = Synth(\sine, [\freq, 1300, \dA, 0], target:g)
c.release // will free itself and following nodes in the group

// doneAction = 9
a = Synth(\sine, [\freq, 440, \dA, 9])
b = Synth(\sine, [\freq, 660, \dA, 0])
a.release // will free itself and pause the preceding node
b.run(true) // it was only paused

// doneAction = 10
g = Group.new;
a = Synth(\sine, [\freq, 440, \dA, 0], target:g)
b = Synth(\sine, [\freq, 660, \dA, 0], target:g)
c = Synth(\sine, [\freq, 880, \dA, 10])
d = Synth(\sine, [\freq, 1100, \dA, 0])
c.release // will free itself and pause the following node (here the group)
g.run(true) // it was only paused

// doneAction = 11
a = Synth(\sine, [\freq, 440, \dA, 11])
b = Synth(\sine, [\freq, 660, \dA, 0])
a.release // will free itself and the preceding node

// doneAction = 12
g = Group.new;
a = Synth(\sine, [\freq, 440, \dA, 0], target:g)
b = Synth(\sine, [\freq, 660, \dA, 0], target:g)
c = Synth(\sine, [\freq, 880, \dA, 12])
d = Synth(\sine, [\freq, 1100, \dA, 0])
c.release // will free itself and the synths of the following group

// doneAction = 13
g = Group.new;
a = Synth(\sine, [\freq, 440, \dA, 0], target:g)
b = Synth(\sine, [\freq, 660, \dA, 0], target:g)
c = Synth(\sine, [\freq, 880, \dA, 13])
d = Synth(\sine, [\freq, 1100, \dA, 0])
x = Synth(\sine, [\freq, 2100, \dA, 0])
e = Synth(\sine, [\freq, 1300, \dA, 0])
c.release // will free itself and all other nodes in its group

Triggers and Gates

The difference between gated and timed envelopes should have become clear in the above examples, but to put it in very simple terms: think of the piano as having a timed envelope (the note dies away by itself), and the organ as having a gated envelope (the note only stops when the key is released). For user input it is good to be able to keep the envelope open for as long as the user wants and free it at some event, such as the release of a key (or a person exiting a room in a sound installation).

Gates

Gates are typically used to start a sound that contains an envelope of some sort. They ‘open up’ for a flow of values to pass through for a period of time (timed or untimed). When a gate closes, it typically runs the release part of the envelope used.

d = Synth(\sine, [\freq, 1100]) // key down
d.release // key up

// compare with
d = Synth(\sine, [\freq, 840]) // key down
d.free // kill immediately

// gate holds the EnvGen open. Here using Dust (random impulses) to trigger a new envelope
{EnvGen.ar(Env.adsr(0.001, 0.8, 1, 1), Dust.ar(1)) *  Saw.ar(55)!2}.play

// Here using Impulse (periodic impulses)
{EnvGen.ar(Env.adsr(0.001, 0.8, 1, 1), Impulse.ar(2)) *  SinOsc.ar(LFNoise0.ar(2).range(200, 1000))!2}.play

// With a doneAction: 2 we kill the synth after the first envelope
{EnvGen.ar(Env.adsr(0.001, 0.8, 0.1, 0.1), Impulse.ar(2), doneAction:2) *  SinOsc.ar(2222)!2}.play

// but if we increase the release time of the envelope, it will be retriggered before the doneAction can kill it
{EnvGen.ar(Env.adsr(0.001, 0.8, 0.1, 1), Impulse.ar(2), doneAction:2) *  SinOsc.ar(1444)!2}.play

Triggers are similar to gates: they start a process, but they do not have the release function that gates have. So they are used to trigger envelopes.

trigger rate - Arguments that begin with “t_” (e.g. t_trig), or that are specified as \tr in the def’s rates argument (see below), will be made as a TrigControl. Setting the argument will create a control-rate impulse at the set value. This is useful for triggers.

Triggers

In the example above we saw how Dust and Impulse could be used to trigger an envelope. A trigger can be set from anywhere (code, a GUI, the system, etc.), but we need to prefix trigger arguments with “t_”.

(
a = { arg t_gate = 1;
	var freq;
	freq = EnvGen.kr(Env.new([200, 200, 800], [0, 1.6]), t_gate);
     SinOsc.ar(freq, 0, 0.2) ! 2 
}.play;
)

a.set(\t_gate, 1)  // try to evaluate this line repeatedly
a.free // if you observe the server window you see this synth disappearing

(
a = { arg t_gate = 1;
	var env;
	env = EnvGen.kr(Env.adsr, t_gate);
     SinOsc.ar(888, 0, 1 * env) ! 2 
}.play;
)

a.set(\t_gate, 1)  // repeat this
a.free // free the synth (since it didn't have a doneAction:2)

// If you are curious about what doneAction:2 would have done, try this:
(
a = { arg t_gate = 1;
	var env;
	env = EnvGen.kr(Env.adsr, t_gate, doneAction:2);
     SinOsc.ar(888, 0, 1 * env) ! 2 
}.play;
)

a.set(\t_gate, 1)  // why does this line not retrigger the synth?
// Now try the same with doneAction:0

If you want to keep the same synth on the server and trigger it from some other process (code, a GUI, incoming MIDI or OSC), you can use gates and triggers for the envelope. Use doneAction: 0 to keep the synth on the server before and after the envelope has run.

Let’s turn the examples above into SynthDefs and explore the concept of gates:

SynthDef(\trigtest, {arg freq, amp, dur=1, gate;
	var signal, env;
	env = EnvGen.ar(Env.adsr(0.01, dur, amp, 0.7), gate, doneAction:0); 
	signal = SinOsc.ar(freq) * env;
	Out.ar(0, signal);
}).add

a = Synth(\trigtest, [\freq, 333, \amp, 1, \gate, 0]) // gate is 0, no sound
a.set(\gate, 1)
a.set(\gate, 0)

// the synth is still running, even if it is silent
a.set(\freq, 788) // change the frequency

a.set(\gate, 1)
a.set(\gate, 0)

The example below does the same, but with a fixed-time envelope. Since a timed envelope runs through its stages and ends, it cannot be held open by a gate; we need a trigger to bring it back to life.

// here we use a t_trig to retrigger the synth
SynthDef(\trigtest2, {arg freq, amp, dur=1, t_trig;
	var signal, env;
	env = EnvGen.ar(Env.perc(0.01, dur, amp), t_trig, doneAction:0); 
	signal = SinOsc.ar(freq) * env;
	Out.ar(0, signal);
}).add

a = Synth(\trigtest2, [\freq, 333, \amp, 1, \t_trig, 1])

a.set(\freq, 788)
a.set(\t_trig, 1);
a.set(\amp, 0.28)
a.set(\t_trig, 1);

a.set(\freq, 588)
a.set(\t_trig, 1);
a.set(\amp, 0.8)
a.set(\t_trig, 1);

Exercise: Explore the difference between a gate and a trigger.

MIDI Keyboard Example

The techniques we’ve been exploring above are useful when creating user interfaces for your synths. As an example, we could create a synth definition to be controlled by a MIDI controller. Other uses could include networked communication, input from other software, or running musical patterns within SuperCollider itself. In the example below we build upon the example we did in chapter 4, but here we add pitch bend and vibrato.

MIDIIn.connectAll; // we connect all the incoming devices
MIDIFunc.noteOn({arg ...x; x.postln; }); // we post all the args

//First we create a synth definition for this example:
SynthDef(\moog, {arg freq=440, amp=1, gate=1, pitchBend=0, cutoff=20, vibrato=0;
	var signal, env;
	signal = LPF.ar(VarSaw.ar([freq, freq+2]+pitchBend+SinOsc.ar(vibrato, 0, 1, 1), 0, XLine.ar(0.7, 0.9, 0.13)), (cutoff * freq).min(16000));
	env = EnvGen.ar(Env.adsr(0), gate, levelScale: amp, doneAction:2);
	Out.ar(0, signal*env);
}).add;

a = Array.fill(127, { nil }); // create an array of nils, where the Synths will live
g = Group.new; // we create a Group to be able to set cutoff of all active notes
c = 6;
MIDIdef.noteOn(\myOndef, {arg vel, key, channel, device; 
	// we use the key as index into the array as well
	a[key] = Synth(\moog, [\freq, key.midicps, \amp, vel/127, \cutoff, 10], target:g);
});
MIDIdef.noteOff(\myOffdef, {arg vel, key, channel, device; 
	a[key].release;
	a[key] = nil; // we put nil back in the array as we use it in the if-statements below
});

// pitch bend arrives via MIDIdef.bend (14-bit values from 0 to 16383)
MIDIdef.bend(\myPitchBend, { arg val; 
	c = val.linlin(0, 16383, -10, 10); 
	"Pitch Bend : ".post; val.postln;
	a.do({arg synth; 
		if( synth != nil , { synth.set(\pitchBend, c ) }); 
	});	
});

// vibrato is mapped to the mod wheel (controller number 1)
MIDIdef.cc(\myVibrato, { arg val; 
	c = val.linlin(0, 127, 1, 20); 
	"Vibrato : ".post; val.postln;
	a.do({arg synth; 
		if( synth != nil , { synth.set(\vibrato, c ) }); 
	});	
}, 1);

Chapter 9 - Samples and Buffers

SuperCollider offers multiple ways of working with recorded sound. Sampling is one of the key techniques of computer music programming today, originating in tape-based instruments such as the Chamberlin or the Mellotron, and popularised in digital systems by samplers like the E-mu Emulator and the Akai S-series. Sampled sound is also the source of more recent techniques, such as granular and concatenative synthesis.

The first thing we need to know is that a sample is a collection of amplitude values in an array. At a 44.1 kHz sample rate, one second of mono sound gives us 44100 samples in the array, and twice that amount if the sound is stereo.

We could therefore generate 1 second of white noise like this:

Array.fill(44100, {1.0.rand2});

The interesting question then is: how do we play these samples? What mechanism will read this and send it to the sound card? For that we use Buffers and UGens that can read them, such as PlayBuf.

Buffers

In short, a buffer is a collection of values in the memory of the computer. In SuperCollider, buffers are loaded onto the server, not into the language, so in our white noise example above we would have to find a way to move our collection of values from the language to the server (as that is where they would be played). Buffers can contain all kinds of values in addition to sound, for example control data, gestural data from human movement, sonification data, and so on.

Allocating a Buffer

In order to create a buffer, we need to allocate it on the server. This is done through an .alloc method:

b = Buffer.alloc(s, 44100 * 4.0, 1); // 4 seconds of sound on a 44100 Hz system, 1 channel

// in the post window we get this information:
//  -> Buffer(0, 176400, 1, 44100, nil) // bufnum, number of frames, channels, sample rate, path

// If you run the line again, you will see that the bufnum has increased by 1.

// and we can get at this information from the Buffer object in the language:
b.bufnum;

c = Buffer.alloc(s, 44100 * 4.0, 2); // same but now 2 channels

// This means that we now have twice the amount of samples, but same amount of frames
b.numFrames;
c.numFrames;

// and the number of channels
b.numChannels;
c.numChannels;

// It's clear though that 'c' has twice the amount of samples, even if both buffers have an equal amount of frames

b.numFrames * b.numChannels;
c.numFrames * c.numChannels;

As mentioned, buffers are collections of values in the RAM (Random Access Memory) of the computer. This means that the playhead can jump back and forth in the sound, play it fast or slow, backwards or forwards, and so on. But it also means that, unlike sound file playback from disk (where sound is buffered at regular intervals), the whole sound is stored in the memory of the computer. Try opening a process monitor (for example, top in the Terminal) and then run this line:

a = Array.fill(10, {Buffer.alloc(s,44100 * 8.0, 2)});

// You will see how the memory of the process called scsynth increases
// (scsynth is the name of the SuperCollider server process)

// now run the following line and watch when the memory is de-allocated.
10.do({arg i; a[i].free;})

We have now allocated some buffers on the server, but they only contain zeros. Try playing one:

b.play
// We can load the samples from the server into an array ('a') in the language to check
// This means that the server will send the values to the language over OSC.
b.loadToFloatArray(action: {arg array; a = array; a.postln;})

a.postln // and we see lots of 0s.

If we wanted to listen to the noise we created above, we could simply load the array into the buffer.

a = Array.fill(44100, {1.0.rand2}); // 1 second of noise (in an array in the language)
b = Buffer.loadCollection(s, a); // this line loads the array into the buffer (on the server)
b.play // and now we have a beautiful noise!

// We can also fill an array with a sine wave and observe the samples like we did above:
a = Array.fill(44100, {arg i; i=i/10; sin(i)}); // fill an array with a sine wave
b = Buffer.loadCollection(s, a); // load the array onto the server
b.play // and now we have a beautiful sine!
b.loadToFloatArray(action: {arg array; a = array; Post << a}) // lots of samples

Reading a soundfile into a Buffer

We can read a sound file into a buffer simply by providing the path to it. This path is either absolute or relative to the SuperCollider application (so ‘hello.aif’ could be loaded if it sat next to the SuperCollider application). Note that the IDE allows you to drag a file from your file system into the code document, and the full path appears.

b = Buffer.read(s, "sounds/a11wlk01.wav");
b.bufnum; // let's check its bufnum

{ PlayBuf.ar(1, b) ! 2 }.play // the first argument is the number of channels

// We can wrap this into a SynthDef, of course
(
SynthDef(\playBuf,{ arg out = 0, bufnum;
	var signal;
	signal = PlayBuf.ar(1, bufnum, BufRateScale.kr(bufnum));
	Out.ar(out, signal ! 2)
}).add
)
x = Synth(\playBuf, [\bufnum, b.bufnum]) // we pass in either the buffer or the buffer number

x.free; // free the synth 
b.free; // free the buffer

// for many buffers, the typical thing to do is to load them into an array:
b = Array.fill(10, {Buffer.read(s, "sounds/a11wlk01.wav")});

// and then we can access it from the index in the array
x = Synth(\playBuf, [\bufnum, b[2].bufnum])

Since PlayBuf needs to know the number of channels when the SynthDef is created, users have to make sure the channel count of the file is clear, so people often come up with systems like this in their code:

b = Buffer.read(s, Platform.userAppSupportDir+/+"sounds/a11wlk01.wav");

SynthDef(\playMono, { arg out=0, buffer, rate=1;
	Out.ar(out, PlayBuf.ar(1, buffer, rate, loop:1) ! 2)
}).add;

SynthDef(\playStereo, { arg out=0, buffer, rate=1;
	Out.ar(out, PlayBuf.ar(2, buffer, rate, loop:1)) // no "! 2"
}).add;

// And then
if(b.numChannels == 1, {
	x = Synth(\playMono, [\buffer, b]) // we pass in either the buffer or the buffer number
}, {
	x = Synth(\playStereo, [\buffer, b]) // we pass in either the buffer or the buffer number
});

Note that we don’t need the “! 2” in the stereo version, as that would simply make the left channel expand into the right (adding to the right channel), whereas the right channel would expand into the next bus: the two [left, right] pairs would land overlapping on busses [1, 2, 3].

Let us play a little with Buffer playback in order to get a feel for the possibilities of sound stored in random access memory.

// Change the playback speed
{Pan2.ar(PlayBuf.ar(1, b, MouseX.kr(-1,2), loop:1))}.play

// Scratch around in the file
{ PlayBuf.ar(1, b, MouseX.kr(-1.5, 1.5), loop: 1) ! 2 }.play

// Or perhaps a bit more excitingly 
{
	var speed;
	speed = MouseX.kr(-10, 10);
	speed = speed - DelayN.kr(speed, 0.1, 0.1);
	speed = MouseButton.kr(1, 0, 0.3) + speed ;
	PlayBuf.ar(1, b, speed, loop: 1) ! 2;
}.play

// Another version
{BufRd.ar(1, b, Lag.ar( K2A.ar( MouseX.kr(0,1)) * BufFrames.ir(b), 1))!2}.play

// Jumping to a random location in the buffer using LFNoise0
{PlayBuf.ar(1, b, 1, LFNoise0.ar(12)*BufFrames.ir(b), loop:1)!2}.play

// And so on ….

Recording live sound into a Buffer

Live sound can of course be fed directly into a Buffer for further manipulation. This could be useful if you are recording sound, transforming it, overdubbing, cutting it up, scratching it, and so on. However, in many cases a simple SoundIn UGen might be sufficient (with no Buffers used at all).

b = Buffer.alloc(s, 44100 * 4.0, 1); // 4 second mono buffer
// Warning, you might get feedback if you're not using headphones
{ RecordBuf.ar(SoundIn.ar(0), b); nil }.play; // run this for at least 4 seconds
{ PlayBuf.ar(1, b) }.play; // play it back

SuperCollider really makes this simple. However, RecordBuf does more than simply record. Since it loops, you can also overdub onto the data already in the buffer using the preLevel argument: preLevel is the amount by which the existing buffer data is multiplied before the incoming sound is added to it. We can now explore this in a more SuperCollider way of doing things, with SynthDefs and Synths.

SynthDef(\recBuf, { arg buffer=0, recLevel=0.5, preLevel=0.5;
	var in;
	in = SoundIn.ar(0);
	RecordBuf.ar(in, buffer, 0, recLevel, preLevel, loop:1);
}).add;

// we record into the buffer
x = Synth(\recBuf, [\buffer, b, \preLevel, 0]);
x.free;

// and we can play it back using the playBuf synthdef we created above
z = Synth(\playMono, [\buffer, b])
z.free;

// We could also explore the overdubbing of sound (leave this running)
(
x = Synth(\recBuf, [\buffer, b]); // here preLevel is 0.5 by default
z = Synth(\playMono, [\buffer, b, \rate, 1.5]); 
)

// Change the playback rate of the buffer
z.set(\rate, 0.75);

// if we like what we have recorded, we can easily write it to disk as a soundfile:
b.write("myBufRecording.aif", "AIFF", 'int16');

It is clear that playing with the recLevel and preLevel of a buffer recording can create interesting layers of sound, where instrumentalists record on top of what they have already recorded. People could also engage in an “I’m Sitting in a Room” exercise à la Lucier.

Finally, as mentioned at the beginning of this chapter, buffers can contain any data and are not necessarily bound to audio content. In the example below we use the buffer to record mouse values at control rate (which is sample rate / block size) and write that mouse movement to disk in the form of an audio file.

b = Buffer.alloc(s, (s.sampleRate/s.options.blockSize) * 5, 1); // 5 secs of control rate
{RecordBuf.kr(MouseY.kr, b); SinOsc.ar(1000*MouseY.kr) }.play // recording the mouse
b.write("mouse.aif") // write the buffer to disk, aif is as good format as any

// play it back
b = Buffer.read(s, "mouse.aif")
{SinOsc.ar(1000*PlayBuf.kr(1, b))}.play

BufRd and BufWr

There are other UGens that can be helpful when playing back buffers. BufRd (buffer read) and BufWr (buffer write) are good examples, and so is LoopBuf (from the sc3-plugins distribution of SuperCollider extensions).

In the example below we use a Phasor to ‘drive’ the reading of the buffer. BufRd reads the buffer sample by sample, driven by an audio-rate phase signal, so we can specify exactly which frames to read, for example by providing the start and end samples:

{ BufRd.ar(1, b, Phasor.ar(0, 1, 0, BufFrames.kr(b))) }.play;

// This way we can easily use SinOsc to modulate the play rate
{ BufRd.ar(1, b, Phasor.ar(0, SinOsc.ar(1).range(0.5, 1.5), 0, BufFrames.kr(b))) }.play;

// And we can also use the mouse to drive the reading 
b = Buffer.read(s, "sounds/a11wlk01.wav");

// Move the mouse!
SynthDef(\scratch, {arg bufnum, pitch=1, start=0, end;
	var signal;
	signal = BufRd.ar(1, bufnum, Lag.ar(K2A.ar(MouseX.kr(start, end)), 0.4)); // mouse scrubs from start frame to end frame
	Out.ar(0, signal!2);
}).play(s, [\bufnum, b.bufnum, \end, b.numFrames]);

Streaming from disk

If your sound file is very long, it is probably a good idea to stream the sound from disk, just like popular digital audio workstations do. This is because long stereo files would quickly fill up your RAM if working with many sound files.

// We still need a buffer (but we are cueing it, i.e. not filling)
b = Buffer.cueSoundFile(s, Platform.resourceDir +/+ "sounds/a11wlk01-44_1.aiff", 0, 1);

SynthDef(\playcuedBuf,{ arg out = 0, bufnum;
	var signal;
	signal = DiskIn.ar(1, bufnum, loop:1);
	Out.ar(out, signal ! 2)
}).add;

x = Synth(\playcuedBuf, [\bufnum, b]);

Wavetables and wavetable look-up oscillators

Wavetables are a classic method of sound synthesis. The technique works similarly to reading a Buffer with BufRd above, but here we create a bespoke wavetable (which can often be visualised for manipulation) and use wavetable look-up oscillators to play back the content of the wavetable. In fact, many of SuperCollider’s oscillators use wavetable look-up under the hood, SinOsc being a good example.

Let’s start with creating a SynthDef with an Osc (which is a wavetable look-up oscillator). It expects to get a signal in the form of a SuperCollider Wavetable, which is a special format for interpolating oscillators.

(
SynthDef(\wavetable,{ arg out = 0, buffer;
	var signal;
	signal = Osc.ar(buffer, MouseX.kr(60,300)); // mouseX controlling pitch
	Out.ar(out, signal ! 2)
}).add
)

// we then allocate a Buffer of 512 samples (the buffer size must be a power of 2)
b = Buffer.alloc(s, 512, 1); 
b.sine1(1.0, true, true, true); // and we fill it with a sine wave

b.plot // notice something strange?
b.getToFloatArray(action: { |array|  { array[0, 2..].plot }.defer }); // check this

// let's listen to it
a = Synth(\wavetable, [\buffer, b])
a.free;

// You can hear that it sounds very different from a PlayBuf trying to play the same buffer (and here we get aliasing), since the PlayBuf is not band limited:

{PlayBuf.ar(1, b, MouseX.kr(-1, 10), loop:1)}.play;

// We can then create different waveforms
b.sine1(1.0/[1,2,3,4], true, true, true); // the first four harmonics with falling amplitudes
b.getToFloatArray(action: { |array|  { array[0, 2..].plot }.defer }); // view the wave
a = Synth(\wavetable, [\buffer, b])
a.free;

// A saw wave
b.sine1(0.3/Array.series(90,1,1)*2, false, true, true);
b.getToFloatArray(action: { |array|  { array[0, 2..].plot }.defer });
a = Synth(\wavetable, [\buffer, b])
a.free;

// Random numbers
b.sine1(Array.fill(50, {1.0.rand}), true, true, true);
b.getToFloatArray(action: { |array|  { array[0, 2..].plot }.defer });

a = Synth(\wavetable, [\buffer, b])
a.free;

// We can also use an envelope to fill a buffer
a = Env([0, 1, 0.2, 0.3, -1, 0.3, 0], [0.1, 0.1, 0.1, 0.1, 0.1, 0.1], \sin);
a.plot; // view this envelope 

// But we need to turn the envelope into a signal and then into a wavetable
c = a.asSignal(256).asWavetable;
c.size; // the size of the wavetable is twice the size of the signal... 512
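// (the Wavetable format stores two numbers per sample, derived from adjacent
// signal values, so that interpolating oscillators like Osc can read it cheaply)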

// now we need to put this wavetable into a buffer:
b = Buffer.alloc(s, 512);
b.setn(0, c);

// play it
a = Synth(\wavetable, [\buffer, b])
a.free;

// try to load the above without turning the data into a wavetable, i.e.,
a = Env([0, 1, 0.2, 0.3, -1, 0.3, 0], [0.1, 0.1, 0.1, 0.1, 0.1, 0.1], \sin);
c = a.asSignal(256);
b = Buffer.alloc(s, 512);
b.setn(0, c);
a = Synth(\wavetable, [\buffer, b])

// and you will hear aliasing where the partials of the sound mirror back into the audio range

Above we saw how an envelope was turned into a Signal, which was then converted to a Wavetable. Signals are a type of numerical collection in SuperCollider that allows for various math operations. They can be useful for FFT manipulation of data arrays or simply for writing data to a file, as in this example:

f = SoundFile.new;
f.openWrite( Platform.userAppSupportDir +/+ "sounds/writetest.wav");
d = Signal.fill(44100, { |i| // one second of sound  
	// 1.0.rand2;  // white noise
	// sin(i/10); // a sine wave
	sin(i/10).cubed;
});
f.writeData(d);
f.close;

Below we explore further how Signals can be used with wavetable oscillators.

x = Signal.sineFill(512, [0,0,0,1]);
// We can now operate in many ways on the signal
[x, x.neg, x.abs, x.sign, x.squared, x.cubed, x.asin.normalize, x.exp.normalize, x.distort].flop.flat.plot(numChannels: 9);

c = x.asWavetable;

b = Buffer.alloc(s, 512);
b.setn(0, c); // set the wavetable into the buffer so Osc can read it.

// play it
a = Synth(\wavetable, [\buffer, b])
a.free;

// And the following lines will load a different wavetable into the buffer
c = x.exp.normalize.asWavetable;
b.setn(0, c);
c = x.abs.asWavetable;
b.setn(0, c);
c = x.squared.asWavetable;
b.setn(0, c);
c = x.asin.normalize.asWavetable;
b.setn(0, c);
c = x.distort.asWavetable;
b.setn(0, c);

// try also COsc (Chorusing wavetable oscillator)
{COsc.ar(b, MouseX.kr(60,300))!2}.play

// OscN
{OscN.ar(b, MouseX.kr(60,300))!2}.play // works better with the non-asWavetable example above

// Variable OSC - which can morph between wavetables
b = {Buffer.alloc(s, 512)} ! 9;
x = Signal.sineFill(512, [0,0,0,1]);
[x, x.neg, x.abs, x.sign, x.squared, x.cubed, x.asin.normalize, x.exp.normalize, x.distort].do({arg signal, i; b[i].setn(0, signal.asWavetable)});

{ VOsc.ar(b[0].bufnum + MouseX.kr(0,7), [120,121], 0, 0.3) }.play

// change the content of the wavetables to something random
9.do({arg i; b[i].sine1(Array.fill(512, {1.0.rand2}), true, true, true); })

// VOsc3 - three oscillators reading the same set of wavetables
{ VOsc3.ar(b[0].bufnum + MouseX.kr(0,7), [120,121], [240,242], [360,363], 0.3) }.play

People often want to draw their own sound in a wavetable. We can end this excursion into wavetable synthesis by creating a graphical user interface that allows for the drawing of wavetables.

(
var size = 512;
var canvas, wave, lastPos, lastVal;

w = Window("Wavetable", Rect(100, 100, 1024, 500)).front;
wave = Signal.sineFill(size, [1]);
b = Buffer.alloc(s, size * 2); // double the size for the wavetable

Slider(w, Rect(0, 5, 1024, 20)).action_({|sl| x.set(\freq, sl.value*1000)});  
  UserView(w, Rect(0, 30, 1024, 470))
    .background_(Color.black)
    .animate_(true)
    .mouseMoveAction_({ |me, x, y, mod, btn|
       var pos = (size * (x / me.bounds.width)).floor;
       var val = (2 * (y / me.bounds.height)) - 1;
       val = min(max(val, -1), 1);
       wave.clipPut(pos, val);
       if(lastPos != nil, {
           for(lastPos + 1, pos - 1, { |i|
               wave.clipPut(i, lastVal + (((i - lastPos) / (pos - lastPos)) * (val - lastVal)));
           });
           for(pos + 1, lastPos - 1, { |i|
               wave.clipPut(i, lastVal + (((i - lastPos) / (pos - lastPos)) * (val - lastVal)));
           });
       });
       lastPos = pos;
       lastVal = val;
       b.loadCollection(wave.asWavetable);
       })
       .mouseUpAction_({
           lastPos = nil;
          lastVal = nil;
       })
       .drawFunc_({ |me|
	         Pen.color = Color.white;
           Pen.moveTo(0@(me.bounds.height * (wave[0] + 1) / 2));
           for(1, size - 1, { |i, a|
               Pen.lineTo((me.bounds.width * i /size)@(me.bounds.height * (wave[i] + 1)/2))
           });
           Pen.stroke;
       });
b.loadCollection(wave.asWavetable);
x = {arg freq=440; Osc.ar(b, freq) *0.4 ! 2 }.play;
)

Pitch and time changes

b = Buffer.read(s, "sounds/a11wlk01-44_1.aiff");

// The most common way
// here double rate (and pitch) results in half the length (time) of the file

(
SynthDef(\playBuf,{ arg out = 0, bufnum;
	var signal;
	signal = PlayBuf.ar(1, bufnum, MouseX.kr(0.2, 4), loop:1);
	Out.ar(out, signal ! 2)
}).add
)

x = Synth(\playBuf, [\bufnum, b.bufnum])
x.free

// we could use PitchShift to change the pitch without changing the time
// PitchShift is a granular synthesis pitch shifter (other techniques include phase vocoders)

(
SynthDef(\playBufWPitchShift,{ arg out = 0, bufnum;
	var signal;
	signal = PlayBuf.ar(1, bufnum, 1, loop:1);
	signal = PitchShift.ar(
		signal,			// the audio input (mono here)
		0.1, 			// grain window size in seconds
		MouseX.kr(0,2),	// mouse x controls the pitch shift ratio
		0, 				// pitch dispersion
		0.004			// time dispersion
	);
	Out.ar(out, signal ! 2)
}).add
)

x = Synth(\playBufWPitchShift, [\bufnum, b.bufnum])
x.free

// for time stretching check out the Warp0 and Warp1 UGens.

Chapter 10 - Granular and Concatenative Synthesis

Granular synthesis is a synthesis technique that became available for most practical purposes with digital computer music software. Early pioneers were Barry Truax and Iannis Xenakis, but the technique has been well explored in the work of Curtis Roads, both in his musical output and in a fine book called Microsound. The idea in granular synthesis is to synthesize a sound using small grains, typically of 10-50 millisecond duration, that are wrapped in envelopes. These grains can then result in a continuous sound or more discontinuous ‘grain clouds’. Here the individual grains become the building blocks, almost atoms, of a more complex structure.

Granular Synthesis

Granular synthesis is used in many pitch-shifting and time-stretching features of commercial software, so most people will be well aware of its functionality and power. Let us explore pitch shifting through the use of a native SuperCollider UGen, PitchShift. In the examples below, the grains are 100 ms windows that overlap. What is really happening is that the sample is played at a variable rate (where a rate of 2 is an octave higher), but the grains are layered on top of each other in order to maintain the original duration of the sound.

An example of a grain
An example of a grain
b = Buffer.read(s, Platform.userAppSupportDir+/+"sounds/a11wlk01.wav");

// MouseX controls the pitch
{ PitchShift.ar(PlayBuf.ar(1, b, 1, loop:1), 0.1, MouseX.kr(0,2), 0, 0.01) ! 2}.play;
// Same as above, but here MouseY gives random pitch
{ PitchShift.ar(PlayBuf.ar(1, b, 1, loop:1), 0.1, MouseX.kr(0,2), MouseY.kr(0, 2), 0.01) ! 2}.play;

The grains are windows with a specific envelope (typically a Hanning window) and they overlap in order to create a continuous sound. Play around with the parameters of window size and overlap to explore how they result in different sounds. The above examples used PitchShift for the purpose of changing the pitch while keeping the playback rate. Below we use Warp1 to time stretch a sound while the pitch remains the same.

// speed up the sound (with same pitch)
{Warp1.ar(1, b, Line.kr(0,1, 1), 1, 0.1, -1, 8, 0.1, 2)!2}.play

// slow down the sound (with the same pitch)
{Warp1.ar(1, b, Line.kr(0,1, 10), 1, 0.09, -1, 8, 0.1, 2)!2}.play

// use the mouse to read the sound (at the same pitch)
{Warp1.ar(1, b, MouseX.kr(0,1), 1, 0.1, -1, 8, 0.1, 2)!2}.play

// A SinOsc reading the sound (at the same pitch)
{Warp1.ar(1, b, SinOsc.kr(0.07).range(0,1), 1, 0.1, -1, 8, 0.1, 2)!2}.play

// use the mouse to read the sound (and control the pitch)
{Warp1.ar(1, b, MouseX.kr(0,1), MouseY.kr(0.5,2), 0.1, -1, 8, 0.1, 2)!2}.play

TGrains

The TGrains UGen - or Trigger Grains - is a handy UGen for quick and basic granular synthesis. Here we can pass arguments such as the number of grains per second, grain duration, rate (which is pitch), and so on.

// mouse Y controlling number of grains per second
{TGrains.ar(2, Impulse.ar(MouseY.kr(1, 30)), b, 1, MouseX.kr(0,BufDur.kr(b)), 2/MouseY.kr(1, 10), 0, 0.8, 2)}.play

// mouse Y controlling pitch
{TGrains.ar(2, Impulse.ar(20), b, MouseY.kr(0.5, 2), MouseX.kr(0,BufDur.kr(b)), 2/MouseY.kr(1, 10), 0, 0.8, 2)}.play

// random pitch location, with mouse X controlling the number 
// of grains per second and mouse Y controlling grain duration
{
TGrains.ar(2, 
	Impulse.ar(MouseX.kr(1, 50)), 
	b, 
	LFNoise0.ar(40, add:1), 
	LFNoise0.ar(40).abs * BufDur.kr(b), 
	MouseY.kr(0.01, 0.05), 
	0, 
	0.8, 
	2)
}.play

GrainIn

GrainIn enables you to granularise incoming audio. This UGen is part of a collection of granular UGens, such as GrainSin, GrainFM, and GrainBuf. Take a look at the documentation of these UGens and explore their functionality.

SynthDef(\sagrain, {arg amp=1, grainDur=0.1, grainSpeed=10, panWidth=0.5;
	var pan, granulizer;
	pan = LFNoise0.kr(grainSpeed, panWidth);
	granulizer = GrainIn.ar(2, Impulse.kr(grainSpeed), grainDur, SoundIn.ar(0), pan);
	Out.ar(0, granulizer * amp);
}).add;

x = Synth(\sagrain)

x.set(\grainDur, 0.02)
x.set(\amp, 0.02)
x.set(\amp, 1)

x.set(\grainDur, 0.1)
x.set(\grainSpeed, 5)
x.set(\panWidth, 1)

Custom built granular synthesis

Having explored some features of granular synthesis above, the best way to really understand the technique is to make our own granular synth engine that spawns grains according to our own rate, pitch, waveform, and envelope.

In the examples above we have continued the chapter on Buffers and used sampled sound as the source of our granular synthesis. Here below we will explore the technique with simpler waveforms, such as the sine wave.

SynthDef(\sineGrain, { arg freq=800, amp=0.4, dur=0.1, pan=0;
	var signal, env;
	// a sine-shaped (Hanning) grain envelope; the synth is freed from the server after playback
	env = EnvGen.kr(Env.sine(dur, amp), doneAction: 2);
	signal = FSinOsc.ar(freq, 0, env);
	OffsetOut.ar(0, Pan2.ar(signal, pan)); 
}).add;

Synth(\sineGrain, [\freq, 500, \dur, 0.05]) // 50 ms grain duration

// we can then trigger 1000 grains, one every 10 ms
(
Task({
   1000.do({ 
   		Synth(\sineGrain, 
			[\freq, rrand(440, 1600), // 
			\amp, rrand(0.1,0.3),
			\dur, rrand(0.02, 0.1)
			]);
		0.01.wait;
	});
}).start;
)

If our grains all have the same pitch, we should be able to generate a continuous sine wave out of the grains, as they will be overlapping as shown in this image:

[image]

Task({
   1000.do({ 
   		Synth(\sineGrain, 
			[\freq, 440,
			\amp, 0.4,
			\dur, 0.1
			]);
		0.05.wait; // density
	});
}).start;

But the sound is not perfectly continuous. This is because when we create a Synth, the message is sent to the server as quickly as possible. As the communication between language and server is asynchronous, there can be slight differences in the time it takes each OSC message to arrive, and this causes the fluctuation. We therefore need to timestamp our messages, which can be done either through messaging-style communication with the server or by using s.bind (which makes an OSC bundle under the hood and sends a time-stamped message to the server).

Task({
   1000.do({ 
		s.sendBundle(0.2, 
			["/s_new", \sineGrain, x = s.nextNodeID, 0, 1], 
			["/n_set", x, \freq, 440, \amp, 0.4, \dur, 0.1]
		);
		0.05.wait; // density
	});
}).start;

// Or simply (and probably more readably)
Task({
   1000.do({
		s.bind{
			Synth(\sineGrain, 
				[\freq, 440,
				\amp, 0.4,
				\dur, 0.1
			]);
		};
		0.05.wait; // density
	});
}).start;

The grains can have different envelopes. Here we use a percussive (Env.perc) envelope:

SynthDef(\sineGrainWPercEnv, { arg freq = 800, amp = 0.1, envdur = 0.1, pan=0;
	var signal;
	signal = FSinOsc.ar(freq, 0, EnvGen.kr(Env.perc(0.001, envdur), doneAction: 2)*amp);
	OffsetOut.ar(0, Pan2.ar(signal, pan)); 
}).add;

Task({
   1000.do({
		s.bind{
			Synth(\sineGrainWPercEnv, 
				[\freq, rrand(1300, 4000),
				\amp, rrand(0.1, 0.2),
				\envdur, rrand(0.1, 0.2),
				\pan, 1.0.rand2
			]);
		};
		0.01.wait; // density
	});
}).start;

// Or doing the same using the Pbind Pattern
Pbind(
	\instrument, \sineGrainWPercEnv,
	\freq, Pfunc({rrand(1300, 4000)}),
	\amp, Pfunc({rrand(0.1, 0.2)}),
	\envdur, Pfunc({rrand(0.1, 0.2)}),
	\dur, 0.01, // density
	\pan, Pfunc({1.0.rand2})
).play;

The two examples above serve as a good explanation of how Patterns and Tasks work. We’ve got the same SynthDef and the same arguments, but Patterns operate with default keywords (like \instrument, \freq, \amp, and \dur). We therefore had to make sure that our envelope argument was not called \dur, since Pbind uses that key to control the density, i.e., the time until the next event is fired; “\dur, 0.01” in the pattern is the same as “0.01.wait” in the Task.

Pbind(
	\instrument, \sineGrainWPercEnv,
	\freq, Pseq([1000, 2000, 4000], inf), // try to add 3000 in here
	\amp, Pfunc({rrand(0.1, 0.2)}),
	\envdur, Pseq([0.01, 0.02, 0.04], inf),
	\dur, Pseq([0.01, 0.02, 0.04], inf), // density
	\pan, Pseq([0.9, -0.9],inf)
).play;

Finally, let’s try this out with a buffer.

b = Buffer.read(s, Platform.userAppSupportDir+/+"sounds/a11wlk01-44_1.aiff");

SynthDef(\bufGrain,{ arg out = 0, buffer, rate=1.0, amp = 0.1, dur = 0.1, startPos=0;
	var signal;
	signal = PlayBuf.ar(1, buffer, rate, 1, startPos) * EnvGen.kr(Env.sine(dur, amp), doneAction: 2);
	OffsetOut.ar(out, signal ! 2)
}).add;

Synth(\bufGrain, [\buffer, b]); // try it

Task({
   1000.do({ arg i;
   		Synth(\bufGrain, 
			[\buffer, b,
   			\rate, rrand(0.8, 1.2),
			\amp, rrand(0.05,0.2),
			\dur, rrand(0.06, 0.1),
			\startPos, i*100 // jumping 100 samples per grain
		]);
		0.01.wait;
	});
}).start;

Concatenative Synthesis

Concatenative synthesis is a rather recent technique of data-driven synthesis. Source sounds are analysed into a database and segmented into units; a target sound (for example live audio input) is then analysed and matched with the closest unit in the database, which is played back. This happens at a very granular level, prompting Zils and Pachet to call the technique musaicing (from musical mosaicing), as it enables the synthesis of a coherent sound at the macro level that is built up of smaller units of sound, just like in traditional mosaics. The technique is therefore quite related to granular synthesis in the sense that a macro-sound is built out of micro-sounds. It can be quite complex to work with, as users might have to analyse and build up a database of source sounds. However, people have built plugins and classes in SuperCollider that help with this, and in this section we will explore some of the work done in this area by Nick Collins, a long-time SuperCollider user and developer.

b = Buffer.read(s,Platform.userAppSupportDir+/+"sounds/a11wlk01.wav");


{Concat2.ar(SoundIn.ar(0),PlayBuf.ar(1, b, 1, loop:1),1.0,1.0,1.0,0.1,0,0.0,1.0,0.0,0.0)}.play

// mouse X used to control the match length
{Concat2.ar(SoundIn.ar(0),PlayBuf.ar(1, b, 1, loop:1),1.0,1.0,1.0,MouseX.kr(0.0,0.1),0,1.0,0.0,1.0,1.0)}.play

Chapter 11 - Physical Modelling

Physical modelling is a common synthesis technique where a mathematical model is built of some physical object. The maths here can be quite complex and is outside the scope of this book. However, it is worth exploring the technique, as there are physical modelling UGens in SuperCollider, and many musical instruments can easily be built using simple physical models, using filters and the like. Waveguide synthesis can model the physics of an acoustic instrument or sound-generating object: it simulates the travelling of waves through a string or a tube, so the physical structures of an instrument can be thought of as waveguides or transmission lines. In physical modelling, as opposed to traditional synthesis types (AM, FM, granular, etc.), we are not imitating the sound of an instrument, but rather simulating the instrument itself and the physical laws involved in the creation of its sound.

In physical modelling of sound we typically operate with an excitation and a resonant body. The excitation is the material and weight of the thing that hits, whilst the resonant body is what is being hit and resonates. In many cases it does not make sense to separate the two this way mathematically, but from a user perspective we can think of bodies of wood, glass, or metal, or a string, being hit by a finger, a plectrum, a metal hammer, or anything imaginable, for example falling sand. Further resolution can be designed into the model of the instrument, for example defining the bridge of a guitar, the type of strings, the type of body, the room the instrument is in, etc.

For a good text on physical modelling, check Julius O. Smith’s “Physical Audio Signal Processing”: http://ccrma.stanford.edu/~jos/pasp/pasp.html

Karplus-Strong synthesis (named after its authors) is a precursor of physical modelling and is good for synthesising strings and percussion sounds.

// we generate a short burst (the excitation)
{ Decay.ar(Impulse.ar(1), 0.1, WhiteNoise.ar) }.play

// we then wrap that noise in a fast repeating delay
{ CombL.ar(Decay.ar(Impulse.ar(1), 0.1, WhiteNoise.ar), 0.02, 0.001, 3, 1) }.play

The repeat rate of the delay becomes the pitch of the string: delay time and frequency are in a reciprocal relationship, so a delay time of 0.001 seconds gives 1000 Hz. We could therefore write 440.reciprocal in the delayTime argument of the CombL, and it would give us a string sound of 440 Hz. The duration of the string is controlled by the decayTime argument. This is the basic ingredient of a string synthesizer, but for further development you might want to consider applying filters to the noise, or perhaps using another type of noise. The duration of the burst (100 ms above) also affects the sound heavily.
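
To hear that reciprocal relationship directly (a one-line variation of the example above):

// a 440 Hz string: the delay time is 1/440 of a second
{ CombL.ar(Decay.ar(Impulse.ar(1), 0.1, WhiteNoise.ar), 0.02, 440.reciprocal, 3) }.play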

SynthDef(\ks_string, { arg note, pan, rand, delayTime;
	var x, y, env;
	env = Env.new(#[1, 1, 0],#[2, 0.001]);
	// A simple exciter x, with some randomness.
	x = Decay.ar(Impulse.ar(0, 0, rand), 0.1+rand, WhiteNoise.ar); 
 	x = CombL.ar(x, 0.05, note.reciprocal, delayTime, EnvGen.ar(env, doneAction:2)); 
	x = Pan2.ar(x, pan);
	Out.ar(0, LeakDC.ar(x));
}).add;

{ // and play the synthdef
	20.do({
		Synth(\ks_string, 
			[\note, [48, 50, 53, 58].midicps.choose, 
			\pan, 1.0.rand2, 
			\rand, 0.1+0.1.rand, 
			\delayTime, 2+1.0.rand]);
		[0.125, 0.25, 0.5].choose.wait;
	});
}.fork;

// here using patterns
Pdef(\kspattern, 
	Pbind(\instrument, \ks_string, // using our Karplus-Strong synthdef
			\note, Pseq.new([60, 61, 63, 66], inf).midicps, // note arg
			\dur, Pseq.new([0.25, 0.5, 0.25, 1], inf),  // time between events
			\rand, Prand.new([0.2, 0.15, 0.15, 0.11], inf),  // rand arg
			\pan, 1.0.rand2,
			\delayTime, 2+1.0.rand;  // decay time of the comb filter
		)
).play;

Compare using white noise and pink noise as an exciter, as well as using a resonant filter to filter the burst:

// white noise
{  
	var burstEnv, burst; 
	burstEnv = EnvGen.kr(Env.perc(0, 0.01), gate: Impulse.kr(1.5));
	burst = WhiteNoise.ar(burstEnv);
	CombL.ar(burst, 0.2, 0.003, 1.9, add: burst);  
}.play;

// pink noise
{  
	var burstEnv, burst; 
	burstEnv = EnvGen.kr(Env.perc(0, 0.01), gate: Impulse.kr(1.5));
	burst = PinkNoise.ar(burstEnv);
	CombL.ar(burst, 0.2, 0.003, 1.9, add: burst);  
}.play;

// here we use RLPF (resonant low pass filter) to filter the white noise burst
{  
	var burstEnv, burst; 
	burstEnv = EnvGen.kr(Env.perc(0, 0.01), gate: Impulse.kr(1.5));
	burst = RLPF.ar(WhiteNoise.ar(burstEnv), MouseX.kr(100, 12000), MouseY.kr(0.001, 0.999));
	CombL.ar(burst, 0.2, 0.003, 1.9, add: burst);  
}.play;

SuperCollider comes with a UGen called Pluck, which is an implementation of Karplus-Strong synthesis. It should be more efficient than the examples above, though similar in sound.

{Pluck.ar(WhiteNoise.ar(0.1), Impulse.kr(2), MouseY.kr(220, 880).reciprocal, MouseY.kr(220, 880).reciprocal, 10, coef:MouseX.kr(-0.1, 0.5)) !2 }.play(s)

We could create a SynthDef with Pluck.

SynthDef(\pluck, {arg freq=440, trig=1, time=2, coef=0.1, cutoff=2, pan=0;
	var pluck, burst;
	burst = LPF.ar(WhiteNoise.ar(0.5), freq*cutoff);
	pluck = Pluck.ar(burst, trig, freq.reciprocal, freq.reciprocal, time, coef:coef);
	Out.ar(0, Pan2.ar(pluck, pan));
}).add;

Synth(\pluck);
Synth(\pluck, [\coef, 0.01]);
Synth(\pluck, [\coef, 0.3]);
Synth(\pluck, [\coef, 0.7]);

Synth(\pluck, [\coef, 0.3, \time, 0.1]);
Synth(\pluck, [\coef, 0.3, \time, 5]);

Synth(\pluck, [\coef, 0.2, \time, 5, \cutoff, 1]);
Synth(\pluck, [\coef, 0.2, \time, 5, \cutoff, 2]);
Synth(\pluck, [\coef, 0.2, \time, 5, \cutoff, 5]);
Synth(\pluck, [\coef, 0.2, \time, 5, \cutoff, 15]);

// A guitar that might need a little distortion
Pbind(\instrument, \pluck,
	\freq, Pseq([72, 70, 67, 65, 63, 60, 48], inf).midicps,
	\dur, Pseq([0.5, 0.5, 0.375, 0.125, 0.5, 2], 1),
	\cutoff, Pseq([15, 10, 5, 2, 10, 10, 15], 1)	
).play
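One way to add that distortion (a sketch, not the only approach) is to overdrive the pluck through a tanh soft-clipping stage in a variant of the SynthDef; the \pluckdist name and the drive argument are our own additions:

SynthDef(\pluckdist, {arg freq=440, trig=1, time=2, coef=0.1, cutoff=2, pan=0, drive=10;
	var pluck, burst;
	burst = LPF.ar(WhiteNoise.ar(0.5), freq*cutoff);
	pluck = Pluck.ar(burst, trig, freq.reciprocal, freq.reciprocal, time, coef:coef);
	// soft clipping: scale the signal up, squash it through tanh, then scale back down
	pluck = (pluck * drive).tanh * 0.3;
	Out.ar(0, Pan2.ar(pluck, pan));
}).add;

Synth(\pluckdist, [\coef, 0.2, \cutoff, 10, \drive, 20]);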

Biquad filter

In SuperCollider, the SOS UGen is a second-order biquad filter that can be used to create various interesting sounds. We could start with a simple glass-like sound:

{SOS.ar(Impulse.ar(2), 0.0, 0.05, 0.0, MouseY.kr(1.45, 1.998, 1), MouseX.kr(-0.999, -0.9998, 1))!2}.play

And with slight changes we have a more woody type of sound:

SynthDef(\marimba, {arg out=0, amp=0.1, t_trig=1, freq=100, rq=0.006;
	var env, signal;
	var rho, theta, b1, b2;
	b1 = 1.987 * 0.9889999999 * cos(0.09);
	b2 = 0.998057.neg;
	signal = SOS.ar(K2A.ar(t_trig), 0.3, 0.0, 0.0, b1, b2);
	signal = RHPF.ar(signal*0.8, freq, rq) + DelayC.ar(RHPF.ar(signal*0.9, freq*0.99999, rq*0.999), 0.02, 0.01223);
	signal = Decay2.ar(signal, 0.4, 0.3, signal);
	DetectSilence.ar(signal, 0.01, doneAction:2);
	Out.ar(out, signal*(amp*0.4)!2);
}).add;

Pbind(
	\instrument, \marimba, 
	\midinote, Prand([[1,5], 2, [3, 5], 7, 9, 3], inf) + 48, 
	\dur, 0.2 
).play;

// Or perhaps
SynthDef(\wood, {arg out=0, amp=0.3, pan=0, sustain=0.5, t_trig=1, freq=100, rq=0.06;
	var env, signal;
	var rho, theta, b1, b2;
	b1 = 2.0 * 0.97576 * cos(0.161447);
	b2 = 0.97576.squared.neg;
	signal = SOS.ar(K2A.ar(t_trig), 1.0, 0.0, 0.0, b1, b2);
	signal = Decay2.ar(signal, 0.4, 0.8, signal);
	signal = Limiter.ar(Resonz.ar(signal, freq, rq*0.5), 0.9);
	env = EnvGen.kr(Env.perc(0.00001, sustain, amp), doneAction:2);
	Out.ar(out, Pan2.ar(signal, pan)*env);
}).add;

Pbind(
	\instrument, \wood, 
	\midinote, Prand([[1,5], 2, [3, 5], 7, 9, 3], inf) + 48, 
	\dur, 0.2 
).play;
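The "magic numbers" given to b1 and b2 in the two SynthDefs above come from a standard two-pole filter parametrisation: with a pole radius rho (between 0 and 1, controlling how long the filter rings) and a pole angle theta (in radians, related to the resonant frequency), b1 = 2 * rho * cos(theta) and b2 = rho.squared.neg. Here is a small sketch for experimenting with these directly (the values of rho and theta are arbitrary):

(
{
	var rho = 0.999;   // pole radius: closer to 1 means a longer ring
	var theta = 0.12;  // pole angle in radians: larger means a higher resonant frequency
	var b1 = 2.0 * rho * cos(theta);
	var b2 = rho.squared.neg;
	SOS.ar(Impulse.ar(2), 0.1, 0.0, 0.0, b1, b2)!2
}.play
)

The \metro SynthDef later in this chapter uses the same parametrisation, with the mouse mapped to rho and theta.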

Waveguide synthesis

Waveguide synthesis is the most common form of physical modelling, often using delay lines, filters, feedback, and other non-linear elements. The waveguide flute below is based upon Hans Mikelson's Csound slide flute (ultimately derived from Perry Cook's STK slide flute physical model). The SuperCollider port is by John E. Bower, who kindly allowed its inclusion in this tutorial.

SynthDef("waveguideFlute", { arg scl = 0.2, pch = 72, ipress = 0.9, ibreath = 0.09, ifeedbk\
1 = 0.4, ifeedbk2 = 0.4, dur = 1, gate = 1, amp = 2, vibrato=0.2;	
	var kenv1, kenv2, kenvibr, kvibr, sr, cr, block, poly, signalOut, ifqc,  fdbckArray;
	var aflow1, asum1, asum2, afqc, atemp1, ax, apoly, asum3, avalue, atemp2, aflute1;
	
	sr = SampleRate.ir;
	cr = ControlRate.ir;
	block = cr.reciprocal;
	ifqc = pch.midicps;	
	// noise envelope
	kenv1 = EnvGen.kr(Env.new( 
		[ 0.0, 1.1 * ipress, ipress, ipress, 0.0 ], [ 0.06, 0.2, dur - 0.46, 0.2 ], 'linear' )
	);
	// overall envelope
	kenv2 = EnvGen.kr(Env.new(
		[ 0.0, amp, amp, 0.0 ], [ 0.1, dur - 0.02, 0.1 ], 'linear' ), doneAction: 2 
	);
	// vibrato envelope
	kenvibr = EnvGen.kr(Env.new( [ 0.0, 0.0, 1, 1, 0.0 ], [ 0.5, 0.5, dur - 1.5, 0.5 ], 'linear') )*vibrato;
	// create air flow and vibrato
	aflow1 = LFClipNoise.ar( sr, kenv1 );
	kvibr = SinOsc.ar( 5, 0, 0.1 * kenvibr );
	asum1 = ( ibreath * aflow1 ) + kenv1 + kvibr;
	afqc = ifqc.reciprocal - ( asum1/20000 ) - ( 9/sr ) + ( ifqc/12000000 ) - block;
	fdbckArray = LocalIn.ar( 1 );
	aflute1 = fdbckArray;
	asum2 = asum1 + ( aflute1 * ifeedbk1 );
	//ax = DelayL.ar( asum2, ifqc.reciprocal * 0.5, afqc * 0.5 );
	ax = DelayC.ar( asum2, ifqc.reciprocal - block * 0.5, afqc * 0.5 - ( asum1/ifqc/cr ) + 0.001 );
	apoly = ax - ( ax.cubed );
	asum3 = apoly + ( aflute1 * ifeedbk2 );
	avalue = LPF.ar( asum3, 2000 );
	aflute1 = DelayC.ar( avalue, ifqc.reciprocal - block, afqc );
	fdbckArray = [ aflute1 ];
	LocalOut.ar( fdbckArray );
	signalOut = avalue;
	OffsetOut.ar( 0, [ signalOut * kenv2, signalOut * kenv2 ] );	
}).add;

// Test the flute
Synth(\waveguideFlute, [\amp, 0.5, \dur, 5, \ipress, 0.90, \ibreath, 0.00536, \ifeedbk1, 0.4, \ifeedbk2, 0.4, \pch, 60, \vibrato, 0.2] );

// test the flute player's skills:
Routine({
	var pitches, durations, rhythm;
	pitches = Pseq( [ 47, 49, 53, 58, 55, 53, 52, 60, 54, 43, 52, 59, 65, 58, 59, 61, 67, 64, 58, 53, 66, 73 ], inf ).asStream;
	durations = Pseq([ Pseq( [ 0.15 ], 17 ), Pseq( [ 2.25, 1.5, 2.25, 3.0, 4.5 ], 1 ) ], inf).asStream;
	17.do({
		rhythm = durations.next;		
		Synth(\waveguideFlute, [\amp, 0.6, \dur, rhythm, \ipress, 0.93, \ibreath, 0.00536, \ifeedbk1, 0.4, \ifeedbk2, 0.4, \pch, pitches.next] );
		rhythm.wait;	
	});
	5.do({
		rhythm = durations.next;		
		Synth(\waveguideFlute, [\amp, 0.6, \dur, rhythm + 0.25, \ipress, 0.93, \ibreath, 0.00536, \ifeedbk1, 0.4, \ifeedbk2, 0.4, \pch, pitches.next] );		
		rhythm.wait;
	});	
}).play;

Filters

Filters are a vital element in physical modelling. The main concepts here are an exciter of some kind (in SuperCollider we might use triggers such as Impulse or Dust, or filtered noise) and a resonator (such as the Resonz and Klank resonators, delays, reverbs, etc.).

Ringz

Ringz is a powerful ringing filter with a decay time argument, so an impulse can ring for a given number of seconds. Let's explore some examples:

// triggering a ringing filter by one impulse
{ Ringz.ar(Impulse.ar(0), 2000, 2) }.play

// one impulse per second
{ Ringz.ar(Impulse.ar(1), 2000, 2) }.play

// here using an envelope to soften the attack
{ Ringz.ar(EnvGen.ar(Env.perc(0.01, 1, 1), Impulse.ar(1)), 2000, 2) }.play

// playing with the frequency
{ Ringz.ar(Impulse.ar(4)*0.2, LFNoise0.ar(4)*2000, 0.1) }.play

// using XLine to increase rate and frequency
{ Ringz.ar(Impulse.ar(XLine.ar(1, 10, 4))*0.2, LFNoise0.ar(XLine.ar(1, 10, 4))*2000, 0.1) }.play

// using Dust instead of Impulse
{ Ringz.ar(Dust.ar(3, 0.3), 2000, 2) }.play

// here we use an Impulse to trigger the incoming sound
{ Ringz.ar(Impulse.ar(MouseX.kr(1, 100, 1)), 1800, MouseY.kr(0.05, 1), 0.4) }.play;

// control frequency as well
{ Ringz.ar(Impulse.ar(10)*0.5, MouseY.kr(100,1000), MouseX.kr(0.001,1)) }.play

// you could also use an envelope to soften the attack
{ Ringz.ar(EnvGen.ar(Env.perc(0.001, 1), Impulse.kr(MouseX.kr(1, 100, 1))), 1800, MouseY.kr(0.05, 1), 0.4) }.play;

// here resonating white noise instead of a trigger
{ Ringz.ar(WhiteNoise.ar(0.005), 600, 4) }.play

// would this be useful in synthesizing a flute?
{ Ringz.ar(LPF.ar(WhiteNoise.ar(0.005), MouseX.kr(100, 10000)), 600, 1) !2}.play

// a modified example from the documentation
{({Ringz.ar(WhiteNoise.ar(0.001),XLine.kr(exprand(100.0,5000.0), exprand(100.0,5000.0), 20), 0.5)}!10).sum}.play

// The Formlet UGen is a type of Ringz filter, useful for formant control:
{ Formlet.ar(Blip.ar(MouseX.kr(10, 400), 1000, 0.1), MouseY.kr(10, 1000), 0.005, 0.04) }.play;

Resonz, Klank and DynKlank

Resonz is a resonant bandpass filter, whose bandwidth is given as a ratio of the centre frequency. Let's explore some examples:

// MouseY controls the frequency, MouseX the bandwidth ratio
{ Resonz.ar(Impulse.ar(10)*1.5, MouseY.kr(40,10000), MouseX.kr(0.001,1)) }.play

// here with white noise - MouseY controls the frequency, MouseX the bandwidth ratio
{ Resonz.ar(WhiteNoise.ar(0.1), MouseY.kr(40,10000), MouseX.kr(0.001,1)) }.play

// playing with Ringz and Resonz
{ Ringz.ar(Resonz.ar(Dust.ar(20)*1.5, MouseY.kr(40,10000), MouseX.kr(0.001,1)), MouseY.kr(40,10000), 0.04) }.play;

// let's explore the resonance using the freqscope
{ Resonz.ar(WhiteNoise.ar(0.1), MouseY.kr(40,10000), MouseX.kr(0.001,1)) }.freqscope

// Klank is a bank of fixed-frequency resonators
{ Klank.ar(`[[800, 1071, 1153, 1723], nil, [1, 0.9, 0.1, 2]], Impulse.ar(1, 0, 0.2)) }.play;
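Note the backtick (`) before the array of specifications: it creates a Ref, which protects the array from multichannel expansion before it reaches the Klank UGen.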

// Klank filtering WhiteNoise
{ Klank.ar(`[[800, 1200, 1600, 200], [1, 0.8, 0.4, 0.8], [1, 1, 1, 1]], WhiteNoise.ar(0.001)) }.play;

// DynKlank is a dynamic variant of Klank - using the mouse to change frequency and ringtimes
{   var freqs, ringtimes;
    freqs = [800, 1071, 1153, 1723] * MouseX.kr(0.5, 2);
    ringtimes = [1, 1, 1, 1] * MouseY.kr(0.001, 5);
	DynKlank.ar(`[freqs, nil, ringtimes ], PinkNoise.ar(0.001))
}.play;

Decay

Decay and Decay2 turn a trigger into an exponentially decaying envelope, useful both as a percussive amplitude envelope and as an exciter:

{ Decay.ar(Impulse.ar(XLine.kr(1,50,20), 0.25), 0.2, FSinOsc.ar(600), 0)  }.play;

{ Decay2.ar(Impulse.ar(XLine.kr(1,50,20), 0.25), 0.1, 0.3, FSinOsc.ar(600)) }.play;

SynthDef(\clap, {arg out=0, pan=0, amp=0.3, filterfreq=50, rq=0.01;
	var env, signal, attack, noise, hpf1, hpf2;
	noise = WhiteNoise.ar(1)+SinOsc.ar([filterfreq/2,filterfreq/2+4 ], pi*0.5, XLine.kr(1,0.01,4));
	hpf1 = RLPF.ar(noise, filterfreq, rq);
	hpf2 = RHPF.ar(noise, filterfreq/2, rq/4);
	env = EnvGen.kr(Env.perc(0.003, 0.00035));
	signal = (hpf1+hpf2) * env;
	signal = CombC.ar(signal, 0.5, 0.03, 0.031)+CombC.ar(signal, 0.5, 0.03016, 0.06);
	signal = Decay.ar(signal, 1.5);
	signal = FreeVerb.ar(signal, 0.23, 0.1, 0.12);
	Out.ar(out, Pan2.ar(signal * amp, pan));
	DetectSilence.ar(signal, doneAction:2);
}).add;

Synth(\clap, [\filterfreq, 1700, \rq, 0.14, \amp, 0.1]);

TBall, Spring and Friction

Physical modelling can involve the mathematical modelling of all kinds of phenomena, from wind to water to the simulation of moving or falling objects, where gravity, speed, surface type, and so on are all parameters. The popular Box2D physics engine (of Angry Birds fame) is one such simulation of physical systems. In SuperCollider there are UGens that do this too, for example TBall (Trigger Ball) and Spring.

// arguments are trigger, gravity, damp and friction
{TBall.ar(Impulse.ar(0), 0.1, 0.2, 0.01)*20}.play

// a light ball falling on a bouncy surface on the moon?
{TBall.ar(Impulse.ar(0), 0.1, 0.1, 0.001)*20}.play

// a heavy ball falling on a soft surface?
{TBall.ar(Impulse.ar(0), 0.1, 0.2, 0.1)*20}.play

Having explored the TBall as a system that outputs impulses according to a physical model, we can now feed these impulses into some of the resonant filters explored above.

// here using Ringz to create a metal ball falling on a marble table
{Ringz.ar(TBall.ar(Impulse.ar(0), 0.09, 0.1, 0.01)*20, 3000, 0.08)}.play

// many balls falling randomly (using Dust)
{({Ringz.ar(TBall.ar(Dust.ar(2), 0.09, 0.1, 0.01)*20, rrand(2000,3000), 0.07)}!5).sum}.play

// here using Decay to create a metal ball falling on a marble table
{Decay.ar(TBall.ar(Impulse.ar(0), 0.09, 0.1, 0.01)*20, 1)}.play

// a drummer on the snare?
{LPF.ar(WhiteNoise.ar(0.5), 4000)*Decay.ar(TBall.ar(Impulse.ar(0), 0.2, 0.16, 0.003)*20, 1)!2}.play

{SOS.ar(TBall.ar(Impulse.ar(0), 0.09, 0.1, 0.01)*20, 0.6, 0.0, 0.0, rrand(1.991, 1.994), -0.9982)}.play

// Txalaparta? 
{({|x| SOS.ar(TBall.ar(Impulse.ar(1, x*0.1*x), 0.8, 0.2, 0.02)*20, 0.6, 0.0, 0.0, rrand(1.992, 1.99), -0.9982)}!6).sum}.play

The Spring UGen is a physical model of a resonating spring. Considering the wave properties of a spring, this can be very useful for synthesis.

{
	var trigger = LFNoise0.ar(1)>0;
	var signal = SinOsc.ar(Spring.ar(trigger,1,4e-06)*1220);
	var env = EnvGen.kr(Env.perc(0.001,5),trigger);
	Out.ar(0, Pan2.ar(signal * env, 0));
}.play

// Two springs:
{
	var trigger = LFNoise0.ar(1)>0;
	var springs = Spring.ar(trigger,1,4e-06) * Spring.ar(trigger,2,4e-07);
	var signal = SinOsc.ar(springs*1220);
	var env = EnvGen.kr(Env.perc(0.001,5),trigger);
	Out.ar(0, Pan2.ar(signal * env, 0));
}.play

// And here are two tweets (less than 140 characters) simulating timpani drums. 

play{{x=LFNoise0.ar(1)>0;SinOsc.ar(Spring.ar(x,4,3e-05)*(70.rand+190)+(30.rand+90))*EnvGen.kr(Env.perc(0.001,5),x)}!2}

// here heavy on the tuning pedal
play{{x=LFNoise0.ar(1)>0;SinOsc.ar(Spring.ar(x,4,3e-05)*(70.rand+190)+LFNoise2.ar(1).range(90,120))*EnvGen.kr(Env.perc(0.001,5),x)}!2}

In the SC3plugins package you'll find the Friction UGen, a physical model of a mass resting on a belt. The documentation of the UGen is good, but two examples are provided here for fun:

{Friction.ar(Ringz.ar(Impulse.ar(1), [400, 412]), 0.0002, 0.2, 2, 2.697)}.play

{Friction.ar(Klank.ar(`[[400, 412, 340]], Impulse.ar(1)), 0.0002, 0.2, 2, 2.697)!2}.play

The MetroGnome

How about trying to synthesise a wooden old-fashioned metronome?

(
SynthDef(\metro, {arg tempo=1, filterfreq=1000, rq=1.0;
	var env, signal;
	var rho, theta, b1, b2;
	theta = MouseX.kr(0.02, pi).poll;
	rho = MouseY.kr(0.7, 0.9999999).poll;
	b1 = 2.0 * rho * cos(theta);
	b2 = rho.squared.neg;
	signal = SOS.ar(Impulse.ar(tempo), 1.0, 0.0, 0.0, b1, b2);
	signal = RHPF.ar(signal, filterfreq, rq);
	Out.ar(0, Pan2.ar(signal, 0));
}).add
)

// Move the mouse to find your preferred metronome (low left works best for me). We are here polling the MouseX and MouseY UGens, so you will be able to follow their output in the post window.

a = Synth(\metro) // we create our metronome
a.set(\tempo, 0.5.reciprocal) // 120 bpm (0.5.reciprocal = 2 bps)
a.set(\filterfreq, 4000) // try 1000 (for example)
a.set(\rq, 0.1) // try 0.5 (for example)

// Let's reinterpret Poème symphonique, composed by György Ligeti in 1962
// http://www.youtube.com/watch?v=QCp7bL-AWvw

SynthDef(\ligetignome, {arg tempo=1, filterfreq=1000, rq=1.0;
	var env, signal;
	var rho, theta, b1, b2;
	b1 = 2.0 * 0.97576 * cos(0.161447);
	b2 = 0.97576.squared.neg;
	signal = SOS.ar(Impulse.ar(tempo), 1.0, 0.0, 0.0, b1, b2);
	signal = RHPF.ar(signal, filterfreq, rq);
	Out.ar(0, Pan2.ar(signal, 0));
}).add;

// and we create 10 different metronomes running in different tempi
// (try with 3 metros or 30 metros)
(
10.do({
	Synth(\ligetignome).set(
		\tempo, (rrand(0.5,1.5)).reciprocal, 
		\filterfreq, rrand(500,4000), 
		\rq, rrand(0.3,0.9) )
});
)

The STK synthesis kit

Many years ago, Paul Lansky ported the STK physical modelling kit by Perry Cook and Gary Scavone to SuperCollider. This collection of UGens can be found in the SC3plugins, but they have not been maintained and the code is in bad shape, although some of the UGens still work. Updating these UGens for SuperCollider 3.7+ could be a good project for someone wanting to explore the source code of a classic physical modelling library.

Below is a model of a xylophone:

SynthDef(\xylo, { |out=0, freq=440, gate=1, amp=0.3, sustain=0.5, pan=0|
	var sig = StkBandedWG.ar(freq, instr:1, mul:3);
	var env = EnvGen.kr(Env.adsr(0.0001, sustain, sustain, 0.3), gate, doneAction:2);
	Out.ar(out, Pan2.ar(sig, pan, env * amp));
}).add;

Synth(\xylo)

Pbind(\instrument, \xylo, \freq, Pseq(({|x|x+60}!13).mirror).midicps, \dur, 0.2).play