Scoring Sound


Preface

SuperCollider is one of the most expressive, elegant and popular programming languages used in contemporary computer music. It has become well known for its fantastic sound, strong object oriented design, useful primitives for musical composition, and openness in terms of aims and objectives. Unlike much commercial software, one of the key design ideas of SuperCollider is to be a tabula rasa, a blank page, that allows musicians to design their own musical compositions or instruments with as few technological constraints as possible. There are other environments that operate in a similar space, such as Pure Data, CSound, Max/MSP, ChucK, LuaAV, and now JavaScript’s Web Audio API, but I believe that SuperCollider excels at the important aspects required for such work, and for this reason it has been my key platform for musical composition and instrument design since 2001, although I have enjoyed working in other environments as well.

This book is an outcome of teaching SuperCollider in various British higher education institutions since 2005, in particular at the Digital Music and Sound Arts programme at the University of Brighton, Music Informatics at the University of Sussex, Sonic Arts at Middlesex University and Music Informatics at the University of Westminster. Lacking the ideal course book, I created a tutorial that I’ve used for teaching synthesis and algorithmic composition in SuperCollider. The tutorial’s focus was not on teaching SuperCollider as a programming language, but on exploring key concepts, from different synthesis techniques and algorithmic composition to user interfacing with graphical user interfaces or hardware. I have subsequently used this tutorial in diverse workshops given around the world, from Istanbul to Reykjavik; from Madrid to Rovaniemi.

An earlier version of this book was published on the DVD of the MIT Press book The SuperCollider Book. The SuperCollider Book is an excellent source for the study of SuperCollider and is highly recommended, but it has different aims than the current book: it goes deeper into more specific areas, whilst the current book aims to present a smoother introduction, a general overview, and a specific focus on practical uses of SuperCollider. The original tutorial was initially released as .sc files, then moved over to the newer .scd document format, and finally ported to the .html format that became the standard help file format of SuperCollider. SuperCollider has since gained a new and fantastic documentation format, which can be studied by exploring its help system. With this updated tutorial, however, I have decided to port the text to a more modern ebook format that is accessible to diverse readers on different operating systems. I have chosen Lean Publishing as the publication platform for this rewriting, as I can write the book in the attractive Markdown format and use GitHub for revision control. Furthermore, I can publish the book ad hoc, get real-time feedback from readers, and disseminate the book in the typical modern ebook formats appropriate to most ebook readers.

The aim of this book is the same as that of my initial tutorials written in 2005, i.e., to serve as a good undergraduate introduction to SuperCollider programming, audio synthesis, algorithmic composition, and interface building for innovative creative audio systems. I do hope that my past and future students will find this work useful, and I sincerely hope that it is also beneficial to anyone who decides to embark upon the exciting expedition into the fantastic and enticing world of SuperCollider: the ideal workshop for people whose creative material is sound.

I encourage any reader who finds bugs or errors, or who simply would like a better explanation of a topic, to give me feedback through this book’s [feedback channel](http://whatevah.com).

Brighton, June, 2013.

Introduction

Embarking upon learning SuperCollider can be daunting at first. The environment the user is faced with can seem confusing, but rest assured that this feeling will quickly be overcome. Learning SuperCollider is in many ways similar to learning an acoustic instrument, and it takes hours of practice to reach excellence. However, it should be noted here at the very beginning that such excellence need not necessarily be the goal. Indeed, one can write some very good music knowing only a few chords on a guitar or the piano!

The SuperCollider IDE (Integrated Development Environment) is the same on the Linux, Mac, and Windows operating systems. There might be minor differences, but it looks roughly like this (and this picture contains some labels for further explanation):

A screenshot of the SuperCollider IDE

You will see a coding window on the left, a documentation window, and a post window where the SuperCollider language informs you what it is up to. So let’s dive straight into the first exercise, the famous “Hello World” print into a console (the post window). Simply type "Hello World".postln; (or "Hola Mundo".postln; if you like) into the coding window, highlight that text and hit Shift + Return (or go to the XXX menu and select “Interpret code XXX”). If you look at the post window, a “Hello World” has been posted there. Now try to write the same with a spelling mistake, such as "Hello World".possstln; and you will see an error message appearing.

SuperCollider is case sensitive, which means that it understands “SinOsc” but has no clue what “Sinosc” means. You will also notice the semicolon (;) at the end of every line written. This is for the SuperCollider interpreter (the language parser) to understand that the current line has ended. The SuperCollider environment consists of three different elements (or processes): the IDE (the text editor that you see in front of you), the language (sclang, the programming language), and the synth (the audio server that will generate the sound). Later chapters will explain how different languages (such as Java, C++, or Pd) can communicate with the audio server.

Now, let’s dive straight into making some sound, as that’s really the reason you are reading this book. First boot the audio server from the XXX menu and then type:

{SinOsc.ar(440)}.play;

into the code window and evaluate that line (if your code is only one line, you can simply place the cursor somewhere in that line and hit Shift+Return). You will hear a sound in the left speaker of your system (yes, all oscillators are mono by nature). It might be loud, and you will need to stop it. Hit Cmd+. or Ctrl+. to stop the sound. There is also a menu item to stop the sound, but it is recommended that you simply write these key commands into the motor memory of your fingers.

Let us play a little with this code (hit Cmd/Ctrl+period (Cmd+.) to stop the sound after every line):

// Octave higher
{SinOsc.ar(880, 0, 1)}.play;
// Half the amplitude
{SinOsc.ar(880, 0, 0.5)}.play;
// Add another oscillator to multiply the frequency
{SinOsc.ar(880 * SinOsc.ar(2), 0, 0.5)}.play;
// Or multiply the amplitude
{SinOsc.ar(880, 0, 0.5 * SinOsc.ar(2) )}.play;

What happened here? We are listening to a sine wave oscillator of 880 Hz, or cycles per second. The sine wave oscillator is what most sound programming languages call a “unit generator”, and it outputs samples according to a specific algorithm. So a SinOsc will output samples in a different way than a Saw. Furthermore, in the code above we are using the output of one oscillator to multiply parameters of another. But the question arises: which parameters? What is that comma after 880, and what is the stuff appearing after it?

Finally, what we have listened to is a sine wave of 880 Hz, with respective amplitudes of 1 and 0.5. And this is important: signals sent to the sound card of your computer typically consist of samples with values between −1 and 1 in amplitude. If the signal goes above 1 or below −1, you typically get what is called “clipping”, and the sound most likely becomes distorted.
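
You can verify this yourself with a small sketch (keep your volume low; the second line will sound harsh):

// a sine at a safe amplitude
{SinOsc.ar(440, 0, 0.5)}.play;
// an amplitude above 1 will clip at the sound card
{SinOsc.ar(440, 0, 1.5)}.play;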

You might also have noticed the information given to you at the bottom of the IDE, showing the number of UGens (u), synths (s), groups (g), and SynthDefs (d) currently on the server. This will be explained in the following chapters, but for now: congratulations on having made some sound in SuperCollider!

About the Installation

You have now installed and explored SuperCollider on your system. This book does not cover how to install SuperCollider on the different operating systems, but we should note that on any SuperCollider installation, a user specific area is created where you can install your classes, find the synth definitions you have created, and install SC-plugins. On the Mac this is ~/Library/Application Support/SuperCollider; the equivalent locations on Linux and Windows can be found by asking SuperCollider itself, as shown below.
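
A quick way to locate these directories from within SuperCollider:

Platform.userAppSupportDir // posts the path to your user support directory
Platform.userExtensionDir // the Extensions folder, where classes and plugins go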

Part I

Chapter 1 - The SuperCollider language

This chapter will introduce the fundamentals for creating and running a simple SuperCollider program. It will introduce the basic concepts needed for further exploration, and it will be the only silent chapter in the book. We will learn the basic key orientation practices of SuperCollider, that is, how to run code, post into the post window, and use the documentation system. We will also discuss the fundamental things needed to understand and write SuperCollider code, namely: variables, arrays, functions and basic data flow syntax. Having grasped the topics introduced in this chapter, you should be able to write practically anything that you want, although later we will go into Object Orientated Programming, which will make things considerably more effective and perhaps easier.

The semicolon, brackets and running a program

The semicolon ";" is what divides one instruction from the next. It defines a line of code. After the semicolon, the interpreter looks at the next line. There has to be a semicolon after each line of code. Forgetting it will give you errors printed in the post console.

This code will work fine if you evaluate only this line:

"Hello World".postln

But not this, if you evaluate both lines (by highlighting both and evaluating them with Shift+Return):

"Hello World".postln
"Goodbye World".postln;

Why not? Because the interpreter (the SC language) will not understand

"Hello World".postln"Goodbye World".postln; 

However, this will work:

"Hello World".postln; "Goodbye World".postln; 

It is up to you how you format your code, but you’d typically want to keep it readable for yourself in the future and other readers too. There is however a style of SC coding used for Tweeting, where the 140 character limit introduces interesting constraints for composers. Below is a Twitter composition by Tim Walters, but as you can see, it is not good for human readability although it sounds good (The language doesn’t care about human readability, but we do):

play{HPF.ar(({|k|({|i|SinOsc.ar(i/96,Saw.ar(2**(i+k))/Decay.ar(Impulse.ar(0.5**i/k),[k*i+1,\
k*i+1*2],3**k))}!6).product}!32).sum/2,40)}

It can get tiring having to select many lines of code, and here is where brackets come in handy, as they can create a scope for the interpreter. So the following code:

var freq = 440;
var amp = 0.5;
{SinOsc.ar(freq, 0, amp)}.play;

will not work unless you highlight all three lines. Imagine if these were 100 lines: you would have to do some tedious scrolling up and down the document. So using brackets, you can simply double click after or before a bracket, and it will highlight all the text between the matching brackets.

(
var freq = 440;
var amp = 0.5;
{SinOsc.ar(freq, 0, amp)}.play;
)

Matching brackets

Often when writing SuperCollider code, you will experience errors whose origin you can’t figure out. Double clicking between brackets and observing whether they match properly is one of the key methods of debugging SuperCollider code.

(
"you ran the program and ".post; 
(44+77).post; " is the sum of 44 and 77".postln;
"the next line - the interpreter posts it twice as it's the last line".postln;
)

The following will not work. Why not? Look at the post window.

(
(44+77).postln
55.postln;
)

Note that the • sign is where the interpreter finds the error.

The post window

You have already posted into the post window (many other languages use a “print” and “println” for this purpose). But let’s explore the post window a little further.

(
"hello".post; // post something
"one, two, three".post;
)


(
"hello there".postln; // post something and make a line break
"one, two, three".postln;
)

1+4; // returns 5

Scale.minor.degrees // returns an array with the degrees in the minor scale

You can also use postf:

"the first value is %, and the second one is % \n".postf(1111, 9999);

If you are posting a long list you might not get the whole content using .postln, as SC truncates long data structures, such as lists, when posting.

For this purpose use the following:

Post << "hey"

Example

Array.fill(1000, {100.rand}).postln; // you see you get ...etc...

Whereas,

Post << Array.fill(1000, {100.rand}) // you get the whole list

The Documentation system (The help system)

The documentation system in SuperCollider is a good source for information and learning. It includes introduction tutorials, overviews and documentation for almost every class in SuperCollider. The documentation files typically contain examples of how to use the specific class/UGen, and thus serve as a great source for learning and understanding. Many SC users go straight into the documentation when they start writing code, using it as a template and copy-pasting the examples into their projects.

So if you highlight the word Array in an SC document and hit Cmd+d or Ctrl+d (d for documentation), you will get the documentation for that class. You will see the superclasses/subclasses and learn about all the methods that the Array class has.

Also, if you want to read and browse all the documentation, you can open a help browser: Help.gui or simply Cmd+D or Ctrl+D (uppercase D).

Comments

Comments are information written for humans, but ignored by the language interpreter. It is good practice to write comments where you think you might forget what a certain block of code does. A comment is also a communication to another programmer who might read your code. Feel free to write as many comments as you want, but often it is better practice to name your variables and functions in such a way that you don’t need to add a comment.

// This is a comment

/*
And this is 
also a comment
*/

Comments are red by default, but can be any colour (in the Format menu choose ‘syntax colorize’)

Variables

Here is a mantra to memorise: Variables are containers of some value. They are names or references to values that could change (their value can vary). So we could create a variable that is a property of yourself called age. Every year this variable will increase by one integer (a whole number). So let us try this now:

var age = 33;
age = age + 1; // here the variable ‘age’ gets a new value, or 33 + 1
age.postln; // and it posts 34

SuperCollider is not strongly typed, so there is no need to declare the data types of variables. Data types (in other languages) include: integer, float, double, string, custom objects, etc. But in SuperCollider you can create a variable that contains an integer at one stage and later contains a reference to a string or a float. This can be handy, but one has to be careful, as it can introduce bugs into your code.

Above we created a variable ‘age’, and we declared that variable by writing ‘var’ in front of it. All variables have to be declared before you can use them. There are two exceptions: single lowercase letters from a to z (note that ’s’ is a special variable that by default holds a reference to the SC server) can be used without declaration, and so can ‘environmental’ variables (which can be considered global within a certain context), which start with the ‘~’ symbol. More on that later.

a = 3; // we assign the number 3 to the variable "a"
a = "hello"; // we can also assign a string to it.
a = 0.333312; // or a floating point number;
a = [1, 34, 55, 0.1, "string in a list", \symbol, pi]; // or an array with mixed types

a // hit this line and we see in the post window what "a" contains

SuperCollider has scope, so if you declare a variable within a certain scope, such as a function, it has a local value within that scope. Try running this code (by double clicking behind the first bracket):

(
var v, a;
v = 22;
a = 33;
"The value of a is : ".post; a.postln;
)
"The value of a is now : ".post; a.postln; // then run this line

So ‘a’ is a global variable. This is good for prototyping and testing, but not recommended as good software design. A variable with the name ‘myvar’ could not be global – only single lowercase characters can.

If we want longer variable names, we can use environmental variables (using the ~ symbol); they can be seen as global variables, accessible from anywhere in your code:

~myvar = 333;

~myvar // post it;

But typically we just declare the variable (var) in the beginning of the program and assign its value where needed. Environmental variables are not necessary, although they can be useful, and this book will not use them extensively.

But why use variables at all? Why not simply write the numbers or the value wherever we need it? Let’s take one example that should demonstrate clearly why they are useful:

{
	// declare the variables
	var freq, oscillator, filter, signal;
	freq = 333; // set the frequency variable
	// create a Saw wave oscillator with two channels
	oscillator = Saw.ar([freq, freq+2]);
	// use a resonant low pass filter on the oscillator
	filter = RLPF.ar(oscillator, freq*4, 0.25);
	// multiply the signal by 0.5 to lower the amplitude
	signal = filter * 0.5;
}.play;

As you can see, the ‘freq’ variable is used in various places in the above synthesizer. If you now change the value of the variable to something like 500, the frequency will ‘automatically’ become 500 Hz in the left channel, 502 Hz in the right, and the filter cutoff frequency will be 2000 Hz. So instead of changing these values throughout the code, you change them in one place and the value is magically plugged into every location where that variable is used.
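
To see this in action, here is the same function with that single change applied (a sketch; evaluate the whole block again):

{
	var freq, oscillator, filter, signal;
	freq = 500; // the only change
	oscillator = Saw.ar([freq, freq+2]); // now 500 Hz left, 502 Hz right
	filter = RLPF.ar(oscillator, freq*4, 0.25); // the cutoff is now 2000 Hz
	signal = filter * 0.5;
}.play;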

Functions

Functions are an important feature of SuperCollider and most other programming languages. They are used to encapsulate algorithms or functionality that we only want to write once, but use in different places at various times. They can be seen as a black box or a factory that takes some input, parses it, and returns some output. Just as a sophisticated coffee machine might take coffee beans and water as input, it then grinds the beans, boils the water, brews the coffee, and finally outputs a lovely drink. The key point is that you don’t need (or want) to know precisely how all this happens. It’s enough to know where to fill up the beans and water, and then how to operate the buttons of the machine (strength, number of cups, etc.). The coffee machine is a [black box](http://en.wikipedia.org/wiki/Black_box).

Functions in SuperCollider are notated with curly brackets ‘{}’.

Let’s create a function that posts the value of 44. We store it in a variable ‘f’, so we can call it later.

f = { 44.postln };

When you run this line of code, you see that the SuperCollider post window notifies you that it has been given a function. It does not post 44 into the post window. For that we have to call the function, i.e., to ask it to perform its calculation and return some value to us.

f.value // to call the function we need to get its value

Let us write a more complex function:

f = {
    69 + ( 12 * log( 220/440 ) / log(2) )
};
f.value // returns the MIDI note 57 (the MIDI note for 220 Hz)

This is a typical function that calculates the MIDI note of a given frequency in Hz (or cycles per second). Most electronic musicians know that MIDI note 60 is C, that 69 is A, and that this A is 440 Hz. But how is this calculated? Well, the function above returns the MIDI note of 220 Hz. But this is a function without any input (or argument, as it is called in the lingo). Let’s open up this input channel by drilling a hole into the black box, and let’s name this argument ‘freq’, as that’s what we want to put in.

f = { arg freq;
    69 + ( 12 * log( freq/440 ) / log(2) )
}

We have now an input into our function, an argument named ‘freq’. Note that this argument has been put into the right position inside the calculation. We can now put in any frequency and get the relevant MIDI note.

f.value(440) // returns 69
f.value(880) // returns 81
f.value(261) // returns 59.958555396543 (a fractional MIDI note, close to C (or 60))

The above is a good example of why functions are so great. The algorithm for calculating the MIDI note from a frequency is somewhat complex (or nasty?), and we don’t really want to memorise it or write it more than once. We have simply created a black box that we put into the ‘f’ variable, and now we can call it whenever we want without knowing what is inside the black box.
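
SuperCollider in fact provides these conversions as the built-in methods .cpsmidi and .midicps (which we will meet again shortly), but it is instructive to sketch the reverse conversion (MIDI note to frequency) ourselves:

g = { arg midinote;
	440 * (2 ** ((midinote - 69) / 12))
};
g.value(69) // returns 440
g.value(57) // returns 220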

We will be using functions all the time in the coming chapters. It is vital to understand how they receive arguments, process the data, and return a value.

The final thing to say about functions at this stage is that they can have default values in their arguments. This means that we don’t have to pass in all the arguments of the function.

f = { arg salary, tax=20;
    var aftertax;
    aftertax = salary - (salary * (tax/100) )
}

So here above is a function that calculates the pay after tax, with the default tax rate set at 20%. Of course we can’t be sure that this is the tax rate forever, or in different countries, so this needs to be an argument that can be set in the different contexts.

f.value(2000) // here we use the default 20% tax rate
f.value(2000, 35) // and here the tax percentage has become 35%

Functions can take arguments of any type, for example a string:

f = { arg string; string.postln; } // we will post the string that comes into the function
f.value(hi there") // and here we call the function passing “hi there” as the argument.

This is often written in the following form:

f = {|string| string.postln;} // arguments can be defined within two pipes ‘|’
f.("hi there") // and you can skip the .value and just write a dot (.)

Arrays, Lists and Dictionaries

Arrays are one of the most useful things to understand and use in computer music. This is where we can store a bunch of data (whether pitches, scales, synths, or any other information you might want to reference). A common thing a novice programmer typically does is to create lots of variables for data that could be stored in an array, so let’s dive straight into learning how to use arrays and lists.

An array can be seen as a storage space for things that you need to use later. Like a bag or a box where you keep your things. We typically keep the reference to the array in a variable so we can access it anywhere in our code:

a = [11, 22, 33, 44, 55]; // we create an array with these five numbers

You will see that the post window posts the array there when you run this line. Now let us try to play a little with the array:

a[0]; // we get at the first item in the array (most programming languages index at zero)
a[4] // returns 55, as index 4 into the array contains the value 55
a[1]+a[4] // returns 77 as 22 plus 55 equals 77
a.reverse // we can reverse the array
a.maxItem // the array can tell us what is the highest value

and so on. The array we created above had five defined items in it. But we can create arrays differently, filling them algorithmically with any data we’re interested in:

a = Array.fill(5, { 100.rand }); // create an array with five random numbers from 0 to 100

What happened here is that we told the Array class to fill a new array with five items, and we passed it a function (introduced above) which will be evaluated five times. Compare that with:

a = Array.fill(5, 100.rand ); // create an array with ONE random number from 0 to 100

We can now play a little bit with that function that we pass to the array creation:

a = Array.fill(5, { arg i; i }); // create a function with the iterator (‘i’) argument
a = Array.fill(5, { arg i; (i+1)*11 }); // the same as the first array we created
a = Array.fill(5, { arg i; i*i });
a = Array.series(5, 10, 2); // a new method (series). 
// Fill the array with 5 items, starting at 10, adding 2 in every step.

You might wonder why this is so fantastic or important. The fact is that arrays are used everywhere in computer music. The sound file you will load later in this book will be stored in an array, with each sample in its own slot. You can then jump back and forth in the array, scratching, cutting, break-beating or whatever you would like to do; the point is that this is all done with data (the samples of your sound file) stored in an array. Or perhaps you want to play a certain scale:

m = Scale.minor.degrees; // the Scale class will return the degrees of the minor scale

m is here an array with the following values: [ 0, 2, 3, 5, 7, 8, 10 ]. So in a C scale, 0 would be C, 2 would be D (two semitones above C), 3 would be E flat, and so on. We could represent those values as MIDI notes, where 60 is the C note (~261 Hz). And we could even look at the actual frequencies in Hertz of those MIDI notes. (Those frequencies would be passed to the oscillators, as they expect frequencies and not MIDI notes as arguments.)

m = Scale.minor.degrees; // Scale class returns the degrees of the minor scale
m = m.add(12); // you might want to add the octave (12) into your array
m = m+60 // here we simply add 60 to all the values in the array
m = m.midicps // and here we turn the MIDI notes into their frequency values
m = m.cpsmidi // but let’s turn them back to MIDI values for now

We could now play with the ‘m’ array a little. In an algorithmic composition, for example, you might want to pick a random note from the minor scale:

n = m.choose; // choose a random MIDI note and store it in the variable ’n’
x = m.scramble; // we could create a melody by scrambling the array
x = m.scramble[0..3] // scramble the list and select the first 4 notes
p = m.mirror // mirror the array (like an ascending and descending scale)

You will note that in ‘x = m.scramble’ above, the ‘x’ variable contains an array with a scrambled version of the ‘m’ array. The ‘m’ array is still intact: you haven’t scrambled that one, you’ve simply said “put a scrambled version of ‘m’ into variable ‘x’.” So the original ‘m’ is still there. If you really wanted to scramble ‘m’ you would have to do:

m = m.scramble; // a scrambled version of the ‘m’ array is put back into the ‘m’ variable
// But now it’s all scrambled up. Let’s sort it into ascending numbers again:
m = m.sort

Arrays can contain anything, and in SuperCollider, they can contain values of mixed types, such as integers, strings, floats, and so on.

a = [1, "two", 3.33, Scale.minor] // we mix types in the array.
// This can be dangerous, as the following shows
a[0]*10 // will work
a[1]*10 // but this won't, as you can't multiply the word "two" by 10

Arrays can contain other arrays, nested to any depth.

// a function that will create a 5 item array with random numbers from 0 to 10
f = { Array.fill(5, { 10.rand }) }; // array generating function 
a = Array.fill(10, f.value);  // create another array with 10 items of the above array
// But the above was evaluated only once. Why? 
// Because, you need to pass it a function to get a different array every time. Like this:
a = Array.fill(10, { f.value } );  // create another array with 10 items of the above array
// We can get at the first array and see it’s different from the second array
a[0]
a[1]
// We could put a new array into a[0] (that slot contains an array)
a[0] = f.value
// We could put a new array into a[0][0] (a slot that contained an integer)
a[0][0] = f.value

Above we added 12 to the minor scale.

m = Scale.minor.degrees;
m.add(12) // but try to run this line many times, the array won’t grow forever

Lists

It is here that the List class becomes useful.

l = List.new;
l.add(100.rand) // try to run this a few times and watch the list grow

Lists are like arrays - and implement many of the same methods - but they are slightly more expensive than arrays. In the example above you could simply do ‘a = a.add(100.rand)’ if ‘a’ was an array, but many people like lists for reasons we will discuss later.
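
A quick comparison of the two idioms, as a sketch:

a = Array.new;
a = a.add(100.rand); // with an Array, you reassign the result of .add
l = List.new;
l.add(100.rand); // a List grows in place, no reassignment needed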

Dictionaries

A dictionary is a collection of items where keys are mapped to values. Here, keys are keywords that are identifiers for slots in the collection. You can think of this like names for values. This can be quite useful. Let’s explore two examples:

a = Dictionary.new
a.put(\C, 60)
a.put(\Cs, 61)
a.put(\D, 62)
a[\Ds] = 63 // same as .put
// and now, let's get the values
a.at(\D)
a[\Ds] // same as .at

a.keys
a.values
a.getPairs
a.findKeyForValue(60)

Imagine how you would do this with an Array. One way would be:

a = [\C, 60, \Cs, 61, \D, 62, \Ds, 63]
// we find the slot of a key:
x = a.indexOf(\D) // 4
a[x+1]
// or simply
a[a.indexOf(\D)+1]

but using an array you need to keep track of how things are organised and indexed.

Another Dictionary example:

b = Dictionary.new
b.put(\major, [ 0, 2, 4, 5, 7, 9, 11 ])
b.put(\minor, [ 0, 2, 3, 5, 7, 8, 10 ])
b[\minor]
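
Such a dictionary of scales can feed straight into music making. A small sketch, assuming the ‘b’ dictionary from above:

b[\minor] + 60 // the minor scale as MIDI notes, starting at middle C
(b[\minor] + 60).midicps // and the same notes as frequencies in Hz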

Methods?

We have now seen things such as 100.rand and a.reverse. How do .rand and .reverse work? Well, SuperCollider is an Object Orientated language and these are methods of the respective classes. So an integer (like 100) has methods like .rand, .midicps, or .neg. It does not have a .reverse method. Why not? Because you can’t reverse a number. However, an array (like [11,22,33,44,55]) can be reversed or added to. We will explore this later in the chapter about Object Orientated programming in SC, but for now it is enough to think that the object (an instantiation of the class) has relevant methods. Or to use an analogy: let’s say we have a class called Car. This class is the information needed to build the car. When we build a Car, we instantiate the class and we have an actual car. This car can then have some methods, for instance: start, drive, turn, putWipersOn. And these methods can have arguments, like speed(60) or turn(-60). You could think of the object as the noun, the method as the verb, and the argument as the adverb. (As in: John (object) walks (method) fast (adverb).)

// we create a new car. 4 indicating for example number of seats
c = Car.new(4); 
c.start;
c.drive(40); // the car drives 40 miles per hour
c.turn(-60); // the car turns 60 degrees to the left

So to really understand a class like Array or List you need to read the documentation and explore the methods available. Note also that Array subclasses (or gets methods from) its superclass, the ArrayedCollection class. This means that it has all the methods of its superclass. Like a class “Car” might have a superclass called “Vehicle”, of which a “Motorbike” would also be a subclass (a sibling to “Car”). You can explore this by peeking under the hood of SC a little:

Array.openHelpFile // get the documentation of the Array class
Array.dumpInterface // get the interface or the methods of the Array class
Array.dumpFullInterface // get the methods of Array’s superclasses as well.

The .dumpFullInterface method will tell you all the methods Array inherits from its superclasses.

Now, this might give you a bit of a brainache, but don’t worry, you will gradually learn this terminology and what it means for your musical or sound practice with SuperCollider. Wikipedia is a good place to start reading about [Object Oriented Programming](https://en.wikipedia.org/wiki/Object-oriented_programming).

Conditionals, data flow and control

The final thing we should discuss before we start making sounds with SuperCollider is how we control data and make decisions. This is about logic, about human thinking, and how to encode such decisions in the form of code. Such logic is the basis of all clever systems, for example in artificial intelligence. In short, it is about establishing conditions and then deciding what to do with them. For example: if it is raining and I’m going out, I take my umbrella with me; else I leave it at home. It’s the kind of basic logic humans apply all the time throughout the day. And programming languages have ways to formalise such conditions, most typically with an if-else statement.

In pseudocode it looks like this: if( condition, { then do this }, { else do this }); as in: if( rain, { umbrella }, { no umbrella });

So the condition represents a state that is either true or false. If it is true (there is rain), the first function is evaluated; if false (no rain), the second function is evaluated.

Another form is a simple if statement where you don’t need to specify what to do if it’s false: if( hungry, { eat } );

So let’s play with this:

if( true, { "condition is TRUE".postln;}, {"condition is FALSE".postln;});
if( false, { "condition is TRUE".postln;}, {"condition is FALSE".postln;});

You can see that true and false are keywords in SuperCollider. They are so-called Boolean values. You should not use them as variables (well, you can’t). In digital systems we operate in binary code, in 1s and 0s: true is associated with 1 and false with 0.

true.binaryValue;
false.binaryValue;

Boolean logic is named after George Boole, who wrote an important paper in 1848 (“The Calculus of Logic”) on expressions and reasoning. In short, it involves the operators AND, OR, and NOT.

A simple boolean truth table might look like this

true AND true = true
true AND false = false
false AND false = false
true OR true = true
true OR false = true
false OR false = false

And also

true AND not false = true

etc. Let’s try this in SuperCollider code and observe the post window. But first we need to learn the basic syntax for the Boolean operators:

== stands for equal
!= stands for not equal
&& stands for and
|| stands for or

And we also use comparison operators

">" stands for more than
"<" stands for less than
">=" stands for more than or equal to
"<=" stands for less than or equal to

true == true // returns true
true != true // returns false (as true does indeed equal true)
true == false // returns false
true != false // returns true (as true does not equal false)
3 == 3 // yes, 3 equals 3
3 != 4 // true, 3 does not equal 4
true || false // returns true, as one of the elements is true
false || false // returns false, as both of the elements are false
3 > 4 // false, as 3 is less than 4
3 < 4 // true
3 < 3 // false
3 <= 3 // true, as 3 is indeed less than or equal to 3

You might not realise it yet, but knowing what you now know is very powerful and it is something you will use all the time for synthesis, algorithmic composition, instrument building, sound installations, and so on. So make sure that you understand this properly. Let’s play with this a bit more in if-statements:

if( 3==3, { "condition is TRUE".postln;}, {"condition is FALSE".postln;});
if( 3==4, { "condition is TRUE".postln;}, {"condition is FALSE".postln;});
// and things can be a bit more complex:
if( (3 < 4) && (true != false), {"TRUE".postln;}, {"FALSE".postln;});

What happened in that last statement? It asks: is 3 less than 4? Yes. AND is true not equal to false? Yes. Then both conditions are true, and that’s what it posts. Note that of course the values in the string (inside the quotation marks) could be anything; we’re just posting here. So you could write:

if( (3 < 4) && (true != false), {"VERDAD".postln;}, {"FALSO".postln;}); 

in Spanish if you’d like, but you could not write this:

verdad == verdad

as the SuperCollider language is in English.

But what if you have lots of conditions to compare? Here you could use a switch statement:

(
a = 4.rand; // a will be a number from 0 to 4;
switch(a)
{0} {"a is zero".postln;} // runs this if a is zero
{1} {"a is one".postln;} // runs this if a is one
{2} {"a is two".postln;} // etc.
{3} {"a is three".postln;};
)

Another way is to use the case statement, and it might be faster than the switch.

(
a = 4.rand; // a will be a number from 0 to 4;
case
{a == 0} {"a is zero".postln;} // runs this if a is zero
{a == 1} {"a is one".postln;} // runs this if a is one
{a == 2} {"a is two".postln;} // etc.
{a == 3} {"a is three".postln;};
)

Note that both in switch and case, the semicolon comes only after the last testing condition (so the evaluation runs from “case” all the way to that final semicolon).

Looping and iterating

The final thing we need to learn in this chapter is looping. Looping is one of the key tricks used in programming. Say we want to generate 1000 synths at once. It would be tedious to write and evaluate 1000 lines of code one after another, but it’s easy to loop one line of code 1000 times!

In many programming languages this is done with a [for-loop](http://en.wikipedia.org/wiki/For_loop):

for(int i = 0; i < 10; i++) {
	println("i is now " + i);
}

The above style of loop will work (with minor variations) in C, Java, JavaScript and many other languages. But SuperCollider is a fully object orientated language, where everything is an object - which can have methods - so an integer can have a method like .neg or .midicps, but also .do (the loop).

So in SuperCollider we can simply do:

10.do({ "SCRAMBLE THIS 10 TIMES".scramble.postln; })

What happens is that it loops 10 times and evaluates the function (which scrambles and posts the string we wrote) every time. We could then make a counter:

(
var counter = 0;
10.do({ 
	counter = counter + 1;
	"counter is now: ".post; 
	counter.postln; 
})
)

But instead of such a counter, we can use the argument passed into the function in a loop:

10.do({arg counter; counter.postln;});
// you can call this argument whatever you want:
10.do({arg num; num.postln;});
// and the typical convention is to use the character "i" (for iteration):
10.do({arg i; i.postln;});

Let’s now try to make a small program that gives us all the prime numbers from 0 to 10000. There is a method of the Integer class called isPrime which comes in handy here. We will use many of the things learned in this chapter, i.e., creating a List, making a do loop with a function that has an iterator argument, and then asking whether the iterator is a prime number, using an if-statement. If it is (i.e., true), we add it to the list. Finally we post the result to the post window. Note that we only post after all 10000 iterations are done.

(
p = List.new;
10000.do({ arg i; // i is the iteration from 0 to 10000
	if( i.isPrime, { p.add(i) }); // no else condition - we don't need it
});
Post << p;
)

We can also loop through an Array or a List. The do-loop will then pick up all the items of the array and pass them into the function that you write inside the do loop. Additionally, it will add an iterator. So we have two arguments to the function:

(
[ 11, 22, 33, 44, 55, 66, 77, 88, 99 ].do({arg item, counter; 
	item.post; " is in the array at slot: ".post; counter.postln;
});
)

So it posts the slot (the counter/iterator always starts at zero), and the item in the list. You can call the arguments whatever you want of course. Example:

[ 11, 22, 33, 44, 55, 66, 77, 88, 99 ].do({arg aa, bb; aa.post; " is in the array at slot: ".post; bb.postln });

Another looping technique is to use the for-loop:

for(startValue, endValue, function); // this is the syntax
for(100, 130, { arg i; i = i+10; i.postln; }) // example

We might also want to use the forBy-loop:

forBy(startValue, endValue, stepValue, function); // the syntax
forBy(100, 130, 4, { arg i; i = i+10; i.postln; }) // example

While is another type of loop:

while (testFunc, bodyFunc); // syntax
(
i = 0;
while ({ i < 30 }, {  i = i + 1; i.postln; });
)

This is enough about the language. Now is the time to dive into making sounds and explore the synthesis capabilities of SuperCollider. But first let us learn some tricks for peeking under the hood of the SuperCollider language:

Peeking under the hood

Each UGen or class in SuperCollider has a class definition in a class file. These files are compiled every time SuperCollider is started and become the application environment we are using. SC is an “interpreted” language (as opposed to a “compiled” language like C or C++). If you add a new class to SuperCollider, you need to recompile the class library (there is a menu item for that), or simply restart SuperCollider.

- For checking the source file of a class, highlight it (say SinOsc) and type Cmd+I
- For checking the implementations of a method (which classes support it), highlight it (say poll) and type Cmd+Y
- For checking references to a method (where it is called from), highlight it and type Shift+Cmd+Y

UGen.dumpSubclassList // UGen is a class. Try dumping LFSaw for example

UGen.browse  // examine methods interactively in a GUI (OSX)

SinOsc.dumpFullInterface  // list all methods for the class hierarchically
SinOsc.dumpMethodList  // list instance methods alphabetically
SinOsc.openHelpFile

Chapter 2 - The SuperCollider Server

The SuperCollider Server, or SC Synth as it’s also known, is an elegant and great sounding audio engine. As mentioned earlier, SuperCollider is traditionally separated into a server and a client, that is, an audio server (the SC Synth) and the SuperCollider language client (sc-lang). When the server is booted, it connects to the default audio device (such as internal or external audio cards), but you can set it to use any audio device available to your computer (for example using virtual audio routing software like Jack).
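
For example, the device can be selected through the server’s options before booting. A sketch, where the device name is hypothetical (use a name from your own system):

s.options.device = "MyAudioInterface"; // hypothetical device name
s.reboot; // the server must (re)boot for this to take effect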

The SC Synth renders audio and has an elegant structure of busses, groups, synths and a multitude of UGens, and it works a bit like a modular synth, where the output of a certain chain of oscillators and filters can be routed into another module. The audio is created through graphs called synth definitions. These are definitions of synths in a wide sense, as they can do practically anything audio related (for example performing audio analysis rather than synthesis).

The SC Synth is a program that runs independently of the SuperCollider IDE or language. You can control it from any software, written in C/C++, Java, Python, Lua, Pure Data, Max/MSP or anything else.

This chapter will introduce the SuperCollider server for the most basic purposes of getting started with this amazing engine for audio work. This section will be fundamental for the succeeding chapters.

Booting the Server

When you “boot the server”, you are basically starting a new process on your computer that does not have a GUI (Graphical User Interface). If you observe the list of running processes on your computer, you will see that a new process appears when you boot the server (try typing “top” into a Unix terminal). The server can be booted through a menu command (Menu -> XXX) or through the command line.

// let us explore the 's' variable, which stands for the server:
s.postln; // we see that it contains a localhost server
s.addr // the address of the server (IP address and port)
s.name // the localhost server is the default server (see Main.sc file)
s.serverRunning // is it running?
s.avgCPU // how much CPU is it using right now?

// Let's boot the server. Look at the post window
s.boot

We can explore creating our own servers with specific ports and IP addresses:

n = NetAddr("127.0.0.1", 57200); // the IP address (127.0.0.1 is the local machine) and port
p = Server.new("hoho", n); // create a server with the specific net address
p.makeWindow; // make a GUI window
p.boot; // boot it

// try the server:
{SinOsc.ar(444)}.play(p);
// stop it
p.quit;

From the above you might start to think about the possibilities of having the server running on a remote computer with various clients communicating with it over a network, and yes, that is precisely one of the innovative ideas of SuperCollider 3. You could put any server (with a remote IP address and port) into your server variable and communicate with it over a network. Or have many servers on diverse computers, instructing each of them to render audio. All this is common in SuperCollider practice, but the most common setup is using the SuperCollider IDE to write SC Lang code to control a localhost audio server (localhost meaning “on the same computer”). And that is what we will focus on for a while.

The Unit Generators

Unit Generators have been the key building blocks of digital synthesis systems since Max Mathews’ Music N systems in the 1960s. Written in C/C++ and compiled as plugins for the SC Server, they encapsulate complex calculations into a simple black box that returns to us - the synth builders or musicians - what we are after, namely an output that could be in the form of a wave or a filter. The Unit Generators, or UGens as they are commonly called, are modular, and the output of one can be the input of another. You can think of them like the units in a modular synthesizer, for example the Moog:

A Moog Modular Synth

UGens typically have audio rate (.ar) and control rate (.kr) methods. Some have initialization rate as well. The difference is that an audio rate UGen will output as many samples per second as the sample rate. A computer with a 44.1 kHz sample rate will require each audio rate UGen to calculate 44100 samples per second. Control rate is much lower than the sample rate and gives the synth designer the possibility of saving computational power (or CPU cycles) if used wisely.

// Here is a sine wave unit generator
// it has an audio rate method (the .ar)
// and its argument order is frequency, phase and multiplication
{SinOsc.ar(440, 0, 1)}.play 
// now try to run a SinOsc with control rate:
{SinOsc.kr(440, 0, 1)}.play // and it is inaudible

The control rate SinOsc is inaudible, but it is running fine on the server. We use control rate UGens to control other UGens, for example frequency, amplitude, or filter frequency. Let’s explore that a little:

// A sine wave of 1 Hz modulates the 440 Hz frequency
{SinOsc.ar(440*SinOsc.kr(1), 0, 1)}.play 
// A control rate sine wave of 3 Hz modulates the amplitude
{SinOsc.ar(440, 0, SinOsc.kr(3))}.play 
// An audio rate sine wave of 3 Hz modulates the amplitude
{SinOsc.ar(440, 0, SinOsc.ar(3))}.play
// and as you can hear, there is no difference
 
// 2 Hz modulation of the cutoff frequency of a Low Pass Filter (LPF)
// we add 1002, so the filter does not go into negative range
// which might blow up the filter
{LPF.ar(Saw.ar(440), SinOsc.kr(2, 0, 1000)+1002)}.play 

The beauty of UGens is how one can connect the output of one to the input of another. Oscillator UGens typically output values between -1 and 1, in a certain pattern (e.g., sine wave, saw wave, or square wave) and in a certain frequency. Other UGens such as filters or FFT processing do calculations on an incoming signal and output a new signal. Let’s explore one more example of connected UGens that demonstrates their modular power:

{
	// we create a slow oscillator in control rate
	a = SinOsc.kr(1);
	// the output of 'a' is used to multiply the frequency of a saw wave
	// resulting in a frequency from 220 to 660. Why?
	b = Saw.ar(220*(a+2), 0.5);
	// and here we use 'a' to control amplitude (from -0.5 to 0.5)
	c = Saw.ar(110, a*0.5);
	// we add b and c, and use a to control the filter cutoff frequency
	// we simply added a .range method to a so it now outputs
	// values between 100 and 2000 at 1 Hz
	d = LPF.ar(b+c, a.range(100, 2000));
	Out.ar(0, Pan2.ar(d, 0));
}.play

This is a simple case study of how UGens can be added (b+c) and used in any calculation on a signal (such as a*0.5 - an amplitude modulation, creating a tremolo effect). For a bit of fun, let’s use a microphone and make a little effect with your voice (use headphones to avoid feedback):

{
	// we take sound in from the sound card
	a = SoundIn.ar(0);
	// and we ring modulate using the mouse to control frequency
	b = a * SinOsc.ar(MouseX.kr(100, 3000));
	// we also use the mouse (vertical) to control delay
	c = b + AllpassC.ar(b, 1, MouseY.kr(0.001, 0.2), 2);
	// and here, instead of Pan2, we simply use an array [c, c]
	Out.ar(0, [c, c]);
}.play

A good way to explore UGens is to browse them in the documentation.

UGen.browse; // XXX check if this works

The SynthDef

Above we explored UGens by wrapping them in a function and calling .play on that function ({}.play). What this does is turn the function (indicated by {}, as we learned in chapter 1) into a synth definition that is sent to the server and then played. The {}.play is how many people sketch sound in SuperCollider and it’s good for demonstration purposes (if you want to peek into the source code, highlight “Function:play” and hit Cmd+I to explore how SC compiles the function into a SynthDef under the hood), but for all real synth building, we need to create a synth definition, or a SynthDef.

A SynthDef is a pre-compiled graph of unit generators. This graph is written to a binary file and sent to the server over OSC (Open Sound Control - See chapter XXX). This file is stored in the “synthdefs” folder on your system. In a way you could see it as your own VST plugin for SuperCollider, and you don’t need the source code for it to work (although it does not make sense to throw that away).
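
As a sketch, you can write a synth definition file to disk yourself and check where it ends up (the name \mysine is just an example):

SynthDef(\mysine, { Out.ar(0, SinOsc.ar(440, 0, 0.1)) }).writeDefFile;
SynthDef.synthDefDir; // posts the directory where the .scsyndef files are stored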

It is recommended that the SynthDef help file is read carefully and properly understood. The SynthDef is a key class of SuperCollider and very important. It adds synths to the server or writes synth definition files to the disk, amongst many other things. Let’s start by exploring how we can turn a unit generator graph function into a synth definition:

// this simple synth
{Saw.ar(440)}.play
// becomes this synth definition
SynthDef(\mysaw, {
	Out.ar(0, Saw.ar(440));
}).add;

You notice that we have done two things: given the function a name (\mysaw), and we’ve wrapped our saw wave in an ‘Out’ UGen which defines which ‘Bus’ the audio is sent to. If you have an 8 channel sound card, you could send audio to any bus from 0 to 7. You could also send it to bus number 20, but we would not be able to hear it then. However, we could put another synth there that routes the audio back onto audio card busses, for example 0-7.

// you can use the 'Out' UGen in Function:play
{Out.ar(1, Saw.ar(440))}.play // out on the right speaker

NOTE: There is a difference between the Function-play code and the SynthDef: in a synth definition we need the Out UGen to tell the server which audio bus the sound should go out of (0 is left, 1 is right).

But back to our SynthDef, we can now try to instantiate it, and create a Synth. (A Synth is an instantiation (child) of a SynthDef). This synth can then be controlled if we reference it with a variable.

// create a synth and put it into variable 'a'
a = Synth(\mysaw);
// create another synth and put it into variable 'b'
b = Synth(\mysaw);
a.free; // kill a
b.free; // kill b

This is obviously not a very interesting synth. It is ‘hardcoded’, i.e., the parameters in it (such as frequency and amplitude) are static and we can’t change them. This is only done in very specific situations, as normally we would like to specify the values of our synth both when initialising the synth and after it has been started.

In order to open the SynthDef up for specified parameters and enable it to be changed, we need to put arguments into the UGen function graph. Remember how in chapter 1 we created a function with arguments:

f = {arg a, b; 
	c = a + b; 
	postln("c is now: " + c)
};
f.value(2, 3);

Note that you cannot write ‘f.value’, as you will get an error trying to add ‘nil’ to ‘nil’ (‘a’ and ‘b’ are both nil in the arg slots of the function). To solve that we can give them default values:

f = {arg a=2, b=3; 
	c = a + b; 
	postln("c is now: " + c)
};
f.value(22, 33);
f.value;

So we add the arguments to the SynthDef, and we add a Pan2 UGen that enables us to pan the sound from the left (-1) to the right (1). The centre is 0:

SynthDef(\mysaw, { arg freq=440, amp=0.2, pan=0;
	Out.ar(0, Pan2.ar(Saw.ar(freq, amp), pan));
}).add;
// this now allows us to create a new synth:
a = Synth(\mysaw); // explore the Synth help file
// and control it, using the .set, method of the Synth:
a.set(\freq, 220);
a.set(\amp, 0.8);
a.set(\freq, 555, \amp, 0.4, \pan, -1);

This synth definition could be written in a better and more understandable way. Let’s say we were to add a filter to the synth; it might look like this:

SynthDef(\mysaw, { arg freq=440, amp=0.2, pan=0, cutoff=880, rq=0.3;
	Out.ar(0, Pan2.ar(RLPF.ar(Saw.ar(freq, amp), cutoff, rq), pan));
}).add;

But this is starting to be hard to read. Let us make the SynthDef easier to read (although for the computer it is the same, as it only cares about where the semicolons (;) are).

// the same as above, but more readable
SynthDef(\mysaw, { arg freq=440, amp=0.2, pan=0, cutoff=880, rq=0.3;
	var signal, filter, panned;
	signal = Saw.ar(freq, amp);
	filter = RLPF.ar(signal, cutoff, rq);
	panned = Pan2.ar(filter, pan);
	Out.ar(0, panned);
}).add;

This is roughly how you will write and see other people write synth definitions from now on. The individual parts of a UGen graph are typically put into variables to be more human readable and easier to understand. The exception is SuperCollider tweets (#supercollider), where we have the 140 character limit. We can now explore the synth definition a bit more:

a = Synth(\mysaw); // we create a synth with the default arguments
b = Synth(\mysaw, [\freq, 880, \cutoff, 12000]); // we pass arguments
a.set(\cutoff, 500);
b.set(\freq, 444);
a.set(\freq, 1000, \cutoff, 1200);
b.set(\cutoff, 4000);
b.set(\rq, 0.1);

Observing server activity (Poll, Scope and FreqScope)

SuperCollider has various ways to explore what is happening on the server, in addition to the most obvious one: sound itself. Due to the separation between the SC server and the sc-lang, data has to be sent from the server back to the language, since it’s the language that prints or displays the data. The server is just a lean, mean sound machine and doesn’t care about anything else. Firstly, we can try to poll (get) data from a UGen and post it to the post window:

// we can explore the output of the SinOsc
{SinOsc.ar(1).poll}.play // you won't be able to hear this
// and compare to white noise:
{WhiteNoise.ar(1).poll}.play // the first arg of noise is amplitude
// we can explore the mouse:
{MouseX.kr(10, 1000).poll}.play // nothing to hear

// we can poll the frequency of a sound:
{SinOsc.ar(LFNoise2.ar(1).range(100, 1000).poll)}.play
// or we poll the amplitude of it
{SinOsc.ar(LFNoise2.ar(1).range(100, 1000)).poll}.play
// and we can add a label (first arg is poll rate, second is label)
{SinOsc.ar(LFNoise2.ar(1).range(100, 1000).poll(10, "freq"))}.play

People often use poll to explore what is happening in the synth, to debug, or to try to understand why something is not working; it is a development tool rather than something you would leave in a finished piece. Another way to explore the server state is to use scope:

// we can explore the output of the SinOsc
{SinOsc.ar(1)}.scope // you won't be able to hear this
// and compare to white noise:
{WhiteNoise.ar(1)}.scope // the first arg of noise is amplitude
// we can scope the mouse state (but note the control rate):
{MouseX.kr(-1, 1)}.scope // nothing to hear
// the range method maps the output from -1 to 1 into 100 to 1000
{SinOsc.ar(LFNoise2.ar(1).range(100, 1000))}.scope;
// same here, we explore the saw wave form at different frequencies
{Saw.ar(220*SinOsc.ar(0.5).range(1, 10))}.scope

The scope shows amplitude over time, that is: the horizontal axis is time and the vertical axis is amplitude. This is often called a time-domain view of the signal. But we can also explore the frequency content of the sound, a view we call frequency-domain view. This is achieved by performing an FFT analysis of the signal which is then displayed to the scope (don’t worry, this happens ‘under the hood’ and we’ll learn about this in chapter XXX). Now let’s explore the freqscope:

// we see the wave at 1000 Hz, with amplitude modulated
{SinOsc.ar(1000, 0, SinOsc.ar(0.25))}.freqscope
// some white noise again:
{WhiteNoise.ar(1)}.freqscope // random values throughout the spectrum
// and we can now experience the power of the scope
{RLPF.ar(WhiteNoise.ar(1), MouseX.kr(20, 12000), MouseY.kr(0.01, 0.99))}.freqscope
// we can now explore various wave forms:
{Saw.ar(440*XLine.ar(1, 10, 5))}.freqscope // check the XLine helpfile
// LFTri is a non-bandlimited UGen, so explore the mirroring or 'aliasing'
{LFTri.ar(440*XLine.ar(1, 10, 25))}.freqscope

Furthermore, there is a Spectrogram Quark that shows a spectrogram view of the audio signal, but this is not part of the SuperCollider distribution. However, it’s easy to install and we will cover this in the chapter on the Quarks.
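As a hedged preview (the install workflow is covered properly in the Quarks chapter), installing it looks roughly like this:

// a rough sketch: install the Spectrogram Quark, then recompile the class library
Quarks.install("Spectrogram");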

A quick intro to busses and multichannel expansion

Chapter XXX will go deeper into busses, groups, and how to route audio signals through the SC Server. However, it is important at this stage to understand how the server works in terms of channels (or busses). Firstly, all oscillators are mono. Many newcomers to SuperCollider find it strange that they only hear a signal in their left ear when running a SinOsc through headphones. Well, it would be strange to have it in stereo, quadraphonic, 5.1 or any other format, unless we specifically ask for that! We therefore need to copy the signal onto the next bus if we want stereo. The image below shows a rough sketch of how the SC synth works.

A sketch illustrating busses in the SC Synth

We can see that by default SuperCollider has 8 output channels, 8 input channels, and 112 private audio bus channels (where we can run effects and other things). This means that if you have an 8 channel sound card, you can send a signal out on any of the first 8 busses. If you have a 16 channel sound card, you need to change the ‘numOutputBusChannels’ variable of the ServerOptions to 16. More on that later.
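Here is a minimal sketch of that change, assuming a 16 channel sound card (the setting only takes effect when the server boots):

// tell the server about a 16 channel sound card before (re)booting it
Server.default.options.numOutputBusChannels = 16;
Server.default.options.numInputBusChannels = 16;
s.reboot; // reboot so the new options take effect

Now let’s look at some examples: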

// sound put out on different busses
{ Out.ar(0, LFPulse.ar(220, 0, 0.5, 0.3)) }.play; // left speaker (bus 0)
{ Out.ar(1, LFPulse.ar(220, 0, 0.5, 0.3)) }.play; // right speaker (bus 1)
{ Out.ar(2, LFPulse.ar(220, 0, 0.5, 0.3)) }.play; // third speaker (bus 2)

// Pan2 takes the signal and converts it into an array of two signals
{ Out.ar(0, Pan2.ar(PinkNoise.ar(1), 0)) }.scope(8)
// or we can play it out on bus 6 (and you probably won't hear it)
{ Out.ar(6, Pan2.ar(PinkNoise.ar(1), 0)) }.scope(8)
// but the first one (on bus 0) is the same as:
{ a = PinkNoise.ar(1); Out.ar(0, [a, a]) }.scope(8)
// and the bus 6 one is the same as (the first six channels are silent):
{ a = PinkNoise.ar(1); Out.ar(0, [0, 0, 0, 0, 0, 0, a, a]) }.scope(8)
// however, it's not the same as:
{ Out.ar(0, [PinkNoise.ar(1), PinkNoise.ar(1)]) }.scope(8)
// why not? -> because we now have TWO signals rather than one

It is thus clear how the busses of the server are represented by an array containing signals (as in: [signal, signal, signal, signal, etc.]). We can now take a mono signal and ‘expand’ it into other busses. This is called multichannel expansion:

{ SinOsc.ar(440) }.scope(8)
{ [SinOsc.ar(440), SinOsc.ar(880)] }.scope(8)
// same as:
{ SinOsc.ar([440, 880]) }.scope(8)
// a trick to 'expand into an array'
{ SinOsc.ar(440) ! 2 }.scope(8)
// if that was strange, check this:
123 ! 30

Enough of this for now; we will explore busses and audio signal routing further in chapter XXX. Still, it is important to understand these basics at the current stage.

Getting values back to the language

As we have discussed, the SuperCollider language and server are two separate applications. They communicate through the OSC protocol. This means that the communication between the two is asynchronous; in other words, you can’t know precisely how long it takes for a message to arrive. If we would like to do something with audio data in the language, such as visualising it or posting it, we need to send a message to the server and wait for it to respond. This can happen in various ways, but a typical way of doing it is to use the SendTrig UGen:

// this is happening in the language
OSCdef(\listener, {arg msg, time, addr, recvPort; msg.postln; }, '/tr', n);
// and this happens in the server
{
	var freq;
	freq = LFSaw.ar(0.75, 0, 100, 900);
	SendTrig.kr(Impulse.kr(10), 0, freq);
	SinOsc.ar(freq, 0, 0.5)
}.play 

What we see above is the SendTrig, sending 10 messages every second to the language (the Impulse triggers those messages). It sends a ‘/tr’ OSC message to port 57120 locally. (Don’t worry, we’ll explore this later in a chapter on OSC). The OSCdef then has a function that posts the message from the server.

A slightly more complex example might involve a GUI (Graphical User Interfaces are part of the language) and synthesis on the server:

(
// this is happening in the language
var win, freqslider, mouseslider;
win = Window.new.front;
freqslider = Slider(win, Rect(20, 10, 40, 280));
mouseslider = Slider2D(win, Rect(80, 10, 280, 280));

OSCdef(\sliderdef, {arg msg, time, addr, recvPort; 
	{freqslider.value_(msg[3].linlin(600, 1400, 0, 1))}.defer; 
}, '/slider', n); // the OSC message we listen to
OSCdef(\sliderdef2D, {arg msg, time, addr, recvPort; 
	{ mouseslider.x_(msg[3]); mouseslider.y_(msg[4]); }.defer; 
}, '/slider2D', n); // the OSC message we listen to
	
// and this happens on the server
{
	var mx, my, freq;
	freq = LFSaw.ar(0.75, 0, 400, 1000); // outputs 600 to 1400 Hz. Why?
	mx = LFNoise2.kr(2).range(0,1);
	my = LFNoise2.kr(2).range(0, 1);
	SendReply.kr(Impulse.kr(10), '/slider', freq); // sending the OSC message 
	SendReply.kr(Impulse.kr(10), '/slider2D', [mx, my]); 
	(SinOsc.ar(freq, 0, 0.5)+RLPF.ar(WhiteNoise.ar(0.3), mx.linlin(0, 1, 100, 3000), my))!2 ;
}.play;
 )

We could also write values to a control bus on the server, from which we can read in the language. Here is an example:

b = Bus.control(s,1); // we create a control bus
{Out.kr(b, MouseX.kr(20,22000))}.play // and we write the output of some UGen to the bus
b.get({arg val; val.postln;}); // we poll the bus from the language
// or even:
fork{loop{ b.get({arg val; val.postln;});0.1.wait; }}

Check the source of Bus (by hitting Cmd+I) and locate the .get method. You will see that the Bus .get method registers an OSC responder under the hood. It is therefore “asynchronous”, meaning that it will not happen in the linear order of your code. (The language asks the server for the value, and the server then sends it back to the language. This takes time.)

Here is a program that demonstrates the asynchronous nature of b.get. The {}.play from above has to be running. Note how the numbered lines of code appear in the post window “in the wrong order”! (Instead of a synchronous posting of 1, 2 and 3, we get the order of 1, 3 and 2). It takes between 0.1 and 10 milliseconds to get the value on a 2.8 GHz Intel computer.

(
x = 0; y= 0;
b = Bus.control(s,1); // we create a control bus
{Out.kr(b, MouseX.kr(20,22000))}.play;
t = Task({
	inf.do({
		"1 - before b.get : ".post; x = Main.elapsedTime.postln;
		b.get({|val| 	
			"2 - ".post; val.postln; 
			y = Main.elapsedTime.postln;
			"the asynchronious process took : ".post; (y-x).post; " seconds".postln;
		}); //  this value is returned AFTER the next line
		"3 - after b.get : ".post;  Main.elapsedTime.postln;
		0.5.wait;
	})
}).play;
)

This type of communication from the server to the language is not very common; the other direction (from language to server) is, however. This section is therefore not vital for your work in SuperCollider, but you will at some point stumble into the question of synchronous and asynchronous communication with the server, and this section should prepare you for that.
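A related idiom you will encounter is waiting for the server inside a routine. Here is a minimal sketch, assuming the default server is in the variable s:

// s.sync blocks the routine until the server has processed all pending messages
fork{
	SynthDef(\syncdemo, { Out.ar(0, SinOsc.ar(440, 0, 0.1)!2) }).add;
	s.sync; // wait here until the SynthDef is ready on the server
	Synth(\syncdemo); // now it is safe to instantiate the synth
};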

ProxySpace

SuperCollider is an extremely wide and flexible language. It is profoundly deep and you will find new things to explore for years to come. Typically, SC users find their own way of working in the language and then explore new areas when they need to, or out of curiosity.

ProxySpace is one such area. It makes live coding and other on-the-fly coding extremely flexible: effects can be routed in and out of proxies, and sources can be changed on the fly. Below you will find some quick examples that are useful when testing UGens or making prototypes for synths that you will later write as synth definitions. ProxySpace is also often used in live coding. Evaluate the code below line by line:

p= ProxySpace.push(s.boot)

~signal.play;
~signal.fadeTime_(2) // fading in and out in 2 secs
~signal= {SinOsc.ar(400, 0, 1)!2}
~signal= {SinOsc.ar([400, 404], 0, LFNoise0.kr(4))}
~signal= {Saw.ar([400, 404],  LFNoise0.kr(4))}
~signal= {Saw.ar([400, 404],  Pulse.ar(2))}
~signal= {Saw.ar([400, 404],  Pulse.ar(Line.kr(1, 30, 20)))}
~signal= {LFSaw.ar([400, 404],  LFNoise0.kr(4))}
~signal= {Pulse.ar([400, 404],  LFNoise0.kr(4))}
~signal= {Blip.ar([400, 404],  12, Pulse.ar(2))}
~signal= {Blip.ar([400, 404],  24, LFNoise0.kr(4))}
~signal= {Blip.ar([400, 404],  4, LFNoise0.kr(4))}
~signal= {Blip.ar([400, 404],  MouseX.kr(4, 40), LFNoise0.kr(4))}
~signal= {Blip.ar([200, 204],  5, Pulse.ar(1))}

// now let's try to add some effects 

~signal[1] = \filter -> {arg sig; (sig*0.6)+FreeVerb.ar(sig, 0.85, 0.86, 0.3)}; // reverb
~signal[2] = \filter -> {arg sig; sig + AllpassC.ar(sig, 1, 0.15, 1.3 )}; // delay
~signal[3] = \filter -> {arg sig; (sig * SinOsc.ar(2.1, 0, 5.44, 0))*0.5}; // tremolo
~signal[4] = \filter -> {arg sig; PitchShift.ar(sig, 0.008, SinOsc.ar(2.1, 0, 0.11, 1))}; // pitchshift
~signal[5] = \filter -> {arg sig; (3111.33*sig.distort/(1+(2231.23*sig.abs))).distort*0.2}; // distort
~signal[1] = nil;
~signal[2] = nil;
~signal[3] = nil;
~signal[4] = nil;
~signal[5] = nil;

Another ProxySpace example:

p = ProxySpace.push(s.boot);
~blipper = { |freq=20, nHarm=30, amp=0.1| Blip.ar(freq, nHarm, amp)!2 };
~blipper.play;
~lfo = { MouseX.kr(10, 100, 1) };
~blipper.map(\freq, ~lfo);
~blipper.set(\nHarm, 50)
~lfn = { LFDNoise3.kr(15, 30, 40) };
~blipper.map(\nHarm, ~lfn);
~lfn = 30;
~blipper.set(\nHarm, 50);

Ndef

Ndef provides the same node proxy functionality as ProxySpace, but as named definitions, without pushing a whole environment. There is a help file for Ndef (see also the tutorial jitlib_basic_concepts_02); however, many essential parts are documented in the NodeProxy help file, which is the common basis for Ndef as well as ProxySpace.
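Here is a minimal Ndef sketch, mirroring the blipper example above (run line by line):

Ndef(\blipper, { |freq=20, nHarm=30, amp=0.1| Blip.ar(freq, nHarm, amp)!2 }).play;
Ndef(\blipper).set(\nHarm, 50); // set arguments just like with a Synth
Ndef(\lfo, { MouseX.kr(10, 100, 1) });
Ndef(\blipper).map(\freq, Ndef(\lfo)); // map one proxy onto another
Ndef(\blipper).clear(2); // fade out and free in 2 seconds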

Chapter 3 - Controlling the Server

This chapter explores how we use the SuperCollider language to control the SC Server. From a certain perspective, the server with its synth definitions can be seen as an instrument, and the language as the performer or the score. The SuperCollider language is an interpreted, object-oriented and functional language, written in C/C++ and deriving much of its inspiration from Smalltalk. It is in many ways similar to Python, Ruby, Lua or JavaScript, but these are all different languages, and for good reasons: there is no point in creating a programming language that’s the same as another.

SuperCollider is a powerful language, and as its author James McCartney writes in a 2003 paper:

Different languages are based on different paradigms and lead to different types of approaches to solve a given problem. Those who use a particular computer language learn to think in that language and can see problems in terms of how a solution would look in that language. (McCartney 2003)

SuperCollider is very open and allows us to do things in multiple different ways. We could talk about different coding or compositional styles. And none are better than others. It depends on what people get used to and what practices are in line with how they already think or would like to think.

Music is a time-based art form. It is largely about scheduling events in time (which is a notational practice) or about performing those events yourself (which is an instrumental practice). SuperCollider is good for both practices, and it provides the user with specific functionalities that make sense in a musical programming language but might seem strange in a general one. This chapter and the next will introduce diverse ways to control the server: through automated loops, through patterns, through Graphical User Interfaces, and through interface protocols such as MIDI or OSC.

Tasks, Routines, forks and loops

We have learned to design synth graphs with UGens and wrap them into a SynthDef. We have started and stopped a Synth on the server, but we might ask: then what? How do we make music with SuperCollider? How do we schedule things to happen repeatedly in time?

The most basic way of scheduling things is to create a process that loops, running the same code repeatedly. Such a process can count, and we can use the counter as an index into an array containing anything we like, perhaps a melody. But first let us look at a basic routine:

Routine({
	inf.do({arg i;
		"iteration: ".post; i.postln;
		0.25.wait; 
	})
}).play

This could also be written as:

fork{
	inf.do({arg i;
		"iteration: ".post; i.postln;
		0.25.wait; 
	})
}

but the key thing is that we have a routine that serves as an engine that can be paused and woken up again after a certain wait. Try to run the do-loop without a fork:

// this won't work, as there is no routine involved
100.do({arg i; "iteration: ".post; i.postln; 0.25.wait; });
// but this will work, as we are not asking the loop to wait:
100.do({arg i; "iteration: ".post; i.postln; })

A routine can be played with different clocks (TempoClock, SystemClock, and AppClock) and we will explore them later in this chapter. But here is how we can ask different clocks to play the routines:

(
r = Routine.new({
	10.do({ arg a;
		a.postln;
		1.wait;
	});
	0.5.wait;
	"routine finished!".postln;
});
)

SystemClock.play(r); // and then we run it
r.reset // we have to reset the routine to start it again:
AppClock.play(r); // here we tell AppClock to play routine r
r.play(AppClock) // or we can use this syntax
r.stop; // stop the routine
r.play; // try to start the routine again... It won't work.

In the last line above we see that we can’t restart a routine after it has stopped. This is where Tasks come in handy: they are pauseable processes that behave like routines. (Check the Task helpfile.)

(
t = Task({
	inf.do({arg i;
		"iteration is: ".post; i.postln;
		0.25.wait;
	})
});
)

t.play;
t.pause;
t.resume;
t.stop;
t.play;
t.reset; 

Let’s make some music with a Task. We can put some note values into an array and then ask a Task to loop through that array, repeating the melody we make. Let us create a SynthDef that we would like to use for this piece of music:

SynthDef(\ch3synth1, {arg freq=333, amp=0.4, pan=0.0, dur=0.41; // the arguments
	var signal, env;
	env = EnvGen.ar(Env.perc(0.001, dur), doneAction:2); // doneAction gets rid of the synth
	signal = LFTri.ar(freq, 0, amp) * env; // the envelope multiplies the signal
	signal = Pan2.ar(signal, pan);
	Out.ar(0, signal);
}).add;

And here we create a composition to play it:

(
m = ([ 0, 1, 5, 6, 10, 12 ]+48).midicps;
m = m.scramble; // try to re-evaluate only this line
t = Task({
	inf.do({arg i;
		Synth(\ch3synth1, [\freq, m.wrapAt(i)]);
		0.25.wait;
	})
});
t.play;
)

In fact we could create a loop that re-evaluates the m.scramble line:

f = fork{
	inf.do({arg i;	
		m = m.scramble; 
		"SCRAMBLING".postln;
		4.8.wait; // why did I choose a 4.8 second wait?
	})
}

BTW. Nobody said this was going to be good music, but music it is.

Patterns

Patterns are an efficient way of creating musical structures. They are high-level abstractions in which keys and values are ‘bound’ together to control synths, and they use the TempoClock of the language to send control messages to the server. Patterns are closely related to Events, which are the collections of keys and values that are used to control the synths.

All this might seem very convoluted, but the key point is that we are operating with default values that can be used to control synths. A principal pattern to understand is the Pbind, a Pattern that binds keys to values, such as \freq (a key) to 440 (a value).

().play; // run this Event and we observe the posting of default arguments
Pbind().play; // the event arguments are used in the Pbind.

The Pbind is using the default arguments to play the ‘default’ synth (one that is defined by SuperCollider), a frequency of 261.6256, amplitude of 0.1, and so on.

// here we have a Pattern that binds the frequency key to the value of 1000
Pbind(\freq, 1000, \dur, 0.25).play;

The keys that the patterns play match the arguments of the SynthDef. Let’s create a SynthDef that we can fully control with a pattern:

// the synthdef has the conventional 'freq' and 'amp' arguments, but also our own 'cutoff'
SynthDef(\patsynth1, { arg out=0, freq=440, amp=0.1,  pan=0, cutoff=1000, gate = 1;
    var signal = MoogFF.ar( Saw.ar(freq, amp), cutoff, 3);
    var env = EnvGen.kr(Env.adsr(), gate, doneAction: 2);
    Out.ar(out, Pan2.ar(signal, pan, env) );
}).add;
// we play our 'patsynth1' instrument, and control the cutoff parameter
Pbind(\instrument, \patsynth1, \freq, 100, \cutoff, 300, \amp, 0.6).play;
// try this as well:
Pbind(\instrument, \patsynth1, \freq, 100, \cutoff, 3000, \amp, 0.6).play;

Patterns have a default timing mechanism, so we can control the duration until the next event (\dur), and we can also set the sustain of the note (\sustain):

Pbind(\instrument, \patsynth1, \freq, 100, \amp, 0.6, \dur, 0.5).play;
Pbind(\instrument, \patsynth1, \freq, 100, \amp, 0.6, \dur, 0.5, \sustain, 0.1).play;

All this is quite musically boring, but here is where patterns start to get exciting. There are diverse list patterns that allow us to operate with lists, for example by going sequentially through the list (Pseq), picking random values from the list (Prand), shuffling the list (Pshuf), and so on:

// here we format it differently, into pairs of keys and values
Pbind(
	\instrument, \patsynth1, 
	\freq, Pseq([100, 200, 120, 180], inf), // sequencing frequency
	\amp, 0.6, 
	\dur, 0.5
).play;

// we can use list patterns for values to any keys:
Pbind(
	\instrument, \patsynth1, 
	\freq, Prand([100, 200, 120, 180], inf), 
	\amp, Pseq([0.3, 0.6], inf),
	\dur, Pseq([0.125, 0.25, 0.5, 0.25], inf)
).play;

Pbind(
	\instrument, \patsynth1, 
	\freq, Pseq([100, 200, 120, 180], inf), 
	\cutoff, Pseq([1000, 2000, 3000], inf), // only 3 items in the list - it loops 
	\amp, Pseq([0.3, 0.6], inf),
	\dur, Pseq([0.125, 0.25, 0.5, 0.25], inf)
).play;

There will be more on patterns later, but at this stage it is a good idea to play with the pattern documentation files, for example the ones found under Streams-Patterns-Events. There is also a fantastic Practical Guide to Patterns in the SuperCollider documentation, under ‘Streams-Patterns-Events > A-Practical-Guide’.

To end this section on patterns, let’s simply play a little with Pdefs:

// here we put a pattern into a variable "a"
(
a = Pdef.new(\example1, 
		Pbind(\instrument, \patsynth1, // using our patsynth1 synthdef
			\freq, Pseq([220, 440, 660, 880], inf), // freq arg
			\dur, Pseq([0.25, 0.5, 0.25, 0.5], inf)  // dur arg
		)
);
)

a.play;
a.pause;
a.resume;

// but we don't need to:
(
Pdef(\example2, 
	Pbind(\instrument, \patsynth1, // using our patsynth1 synthdef
		\freq, Pseq.new([720, 770, 990, 880], inf), // freq arg
		\dur, Pseq.new([0.25, 0.5, 0.25, 0.5], inf)  // dur arg
	)
);
)

Pdef(\example2).play;
Pdef(\example2).pause;
Pdef(\example2).resume;

// Now, let's play them both together with a bit of timeshift

(
Pdef(\example1).quant_([2, 0, 0]);
Pdef(\example2).quant_([2, 0.5, 1]); // offset by half a beat
Pdef(\example1).play;
Pdef(\example2).play;
)

// and without stopping we redefine the example1 pattern:
(
Pdef(\example1, 
	Pbind(\instrument, \patsynth1, // using our patsynth1 synthdef
		\freq, Pseq.new([
			Pseq.new([220, 440, 660, 880], 4),
			Pseq.new([220, 440, 660, 880], 4) * 1.5], // transpose the melody
			inf),
		\dur, Pseq.new([0.25, 0.125, 0.125, 0.25, 0.5], inf)  // dur arg
	)
);
)

The TempoClock

TempoClock is one of three clocks available for timing organisation in SuperCollider; the others are SystemClock and AppClock. TempoClock is a scheduler like SystemClock, but it schedules in beats rather than in seconds. AppClock is less accurate, but it can call GUI primitives, and it is therefore the clock to use when GUIs need updating from a clock-controlled process.
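As a quick, hedged illustration of why this matters, here is a sketch in which a SystemClock-scheduled function defers its GUI update onto the AppClock:

// GUI updates must happen on the AppClock; .defer reschedules a function there
(
var win, text;
win = Window("clock test", Rect(100, 100, 220, 60)).front;
text = StaticText(win, Rect(10, 10, 200, 40)).string_("waiting...");
SystemClock.sched(1, {
	{ text.string_("updated via AppClock") }.defer; // without .defer this GUI call would throw an error
	nil // returning nil means the function is not rescheduled
});
)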

Let’s start by creating a clock, giving it a tempo of 1 beat per second (that’s 60 bpm), and scheduling a function to be played in 4 beats’ time. The function is passed the current beat and the seconds elapsed since SuperCollider started, and we post those.

t = TempoClock.new;
t.tempo = 1;
t.sched(4, { arg beat, sec; [beat, sec].postln; }); // wait for 4 beats (4 secs);

You will note that the beat is a fractional number. This is because the beat returns the appropriate beat time of the clock’s thread. If you prefer to have the beats in whole numbers, you can use the schedAbs method:

t = TempoClock.new;
t.tempo = 4; // we make the tempo 240 bpm (240/60 = 4)
t.schedAbs(4, { arg beat, sec; [beat, sec].postln; }); // wait for 4 beats (1 sec);

If we would like to schedule the function repeatedly, we return a number from the function, representing the number of beats until it is called again.

t = TempoClock.new;
t.tempo = 1;
t.schedAbs(0, { arg beat, sec; [beat, sec].postln; 1}); 

And with this knowledge we can start to make some music:

t = TempoClock.new;
t.tempo = 1;
t.schedAbs(0, { arg beat, sec; [beat, sec].postln; 1}); 
t.schedAbs(0, { arg beat, sec; "_Scramble_".scramble.postln; 0.5});

We can try to make some rhythmic pattern with the tempoclock now. Let us just use a simple synth like the one we had above, but now we call it ‘clocksynth’.

// our synth
SynthDef(\clocksynth, { arg out=0, freq=440, amp=0.5,  pan=0, cutoff=1000, gate = 1;
    var signal = MoogFF.ar( Saw.ar(freq, amp), cutoff, 3);
    var env = EnvGen.kr(Env.perc(), gate, doneAction: 2);
    Out.ar(out, Pan2.ar(signal, pan, env) );
}).add;
// the clock
t = TempoClock.new;
t.tempo = 2;
t.schedAbs(0, { arg beat, sec; 
	Synth(\clocksynth, [\freq, 440]);
	if(beat%4==0, { Synth(\clocksynth, [\freq, 440/4, \amp, 1]); });
	if(beat%2==0, { Synth(\clocksynth, [\freq, 440*4, \amp, 1]); });
1}); 

Yet another trick to play sounds in SuperCollider is to use “fork” and schedule a pattern through looping. If you look at the source of .fork (by hitting Cmd+I) you will see that it is essentially a Routine (like above), but it is making our lives easier by wrapping it up in a method of Function.

(
var clock, waitTime;
waitTime = 2;
clock = TempoClock(2, 0);

{ // a fork
	"starting the program".postln;
	{ // and we fork again (play 10 sines)
		10.do({|i|
			Synth(\clocksynth, [\freq, 1000+(rand(1000))]);
			"synth nr : ".post; i.postln;
			(waitTime/10).wait; // wait for 200 milliseconds
		});
		"end of 1st fork".postln;
	}.fork(clock);
	waitTime.wait;
	"finished waiting, now we play the 2nd fork".postln;
	{ // and now we play another fork where the frequency is lower
		20.do({|i|
			Synth(\clocksynth, [\freq, 100+(rand(1000))]);
			"synth nr : ".post; i.postln;
			(waitTime/10).wait;
		});
		"end of 2nd fork".postln;
	}.fork(clock);
	"end of the program".postln;
}.fork(clock);
)

Note that the interpreter reaches the end of the program before the last fork is finished playing.

This is enough about the TempoClock at this stage. We will explore it in more depth later.

GUI Control

Graphical user interfaces are a very common way for musicians to control their compositions. They serve as a control board for things that the language can do, and for controlling the server. In the next chapter we will explore interfaces in SuperCollider more thoroughly, but this example is provided here to give an indication of how the language works.

// we create a synth (here an oscillator with 16 harmonics)
(
SynthDef(\simpleSynth, {|freq, amp|
	var signal, harmonics;
	harmonics = 16;
	signal = Mix.fill(harmonics, {arg i;
		SinOsc.ar(freq*(i+1), 1.0.rand, amp * harmonics.reciprocal/(i+1))
	});
	Out.ar(0, signal ! 2);
}).add;
)

(
var synth, win, freqsl, ampsl;
// create a GUI window
win = Window.new("simpleSynth", Rect(100, 100, 300, 90), false).front;
// and place the frequency and amplitude sliders in the window
StaticText.new(win, Rect(10,10, 160, 20)).font_(Font("Helvetica", 9)).string_("freq");
freqsl = Slider.new(win, Rect(40,10, 160, 24)).value_(1.0.rand)
	.action_({arg sl; synth.set(\freq, sl.value*1000;) });
StaticText.new(win, Rect(10,46, 160, 20)).font_(Font("Helvetica", 9)).string_("amp");
ampsl = Slider.new(win, Rect(40,46, 160, 24)).value_(1.0.rand)
	.action_({arg sl; synth.set(\amp, sl.value) });
// a button to start and stop the synth. If the button value is 1 we start it, else stop it
Button.new(win, Rect(220, 10, 60, 60)).states_([["create"], ["kill"]])
	.action_({arg butt;
		if(butt.value == 1, {
			// the synth is created with freq and amp values from the sliders
			synth = Synth(\simpleSynth, [\freq, freqsl.value*1000, \amp, ampsl.value]);
		},{
			synth.free;
		});
	});
)

Chapter 4 - Interfaces and Communication (GUI/MIDI/OSC)

SuperCollider is a very open environment. It can be used for practically anything sound related, whether that is scientific study of sound, instrument building, DJing, generative composition, or creating interactive installations. For these purposes we often need real-time interaction with the system, which typically takes place through screen-based or hardware interfaces. This section will introduce the most common ways of interacting with the SuperCollider language.

MIDI - Musical Instrument Digital Interface

MIDI: A popular 80s technology (SC2 Documentation)

MIDI is one of the most common protocols for hardware and software communication. It is a simple protocol that has proven valuable, although it is now often seen as past its prime. The key point of using MIDI in SuperCollider is to be able to interact with hardware controllers, synthesizers, and other software. SuperCollider has a strong MIDI implementation and should support everything you might want to do with MIDI.

// we initialise the MIDI client and the post window will output your devices
MIDIClient.init;
// the sources are the input devices you have plugged in
MIDIClient.sources;
// the destinations are the devices that can receive MIDI
MIDIClient.destinations;

Using MIDI Controllers (Input)

Let’s start with exploring MIDI controllers. The MIDI methods that you will use will depend on what type of controller you’ve got. The following are the input methods of MIDIIn:

  • noteOn
  • noteOff
  • control
  • bend
  • touch
  • polyTouch
  • program
  • sysex
  • sysrt
  • smpte

If you were to use a relatively good MIDI keyboard, you would be able to use most of these methods. In the following example we will explore the interaction with a simple MIDI keyboard.

MIDIIn.connectAll; // we connect all the incoming devices
MIDIFunc.noteOn({arg ...x; x.postln; }); // we post all the args

On the device I’m using now (Korg NanoKEY), I get an array formatted thus [127, 60, 0, 1001096579], where the first item is the velocity (how hard I hit the key), the second is the MIDI note, the third is the MIDI channel, and the fourth is the device number (so if you have different devices, you can differentiate between them using this ID).

For the example below, we will use the convenient MIDIdef class to register the definition we want to use for the incoming MIDI messages. Making such definitions is common in SuperCollider, as we make SynthDefs and OSCdefs as well. Let’s hook the incoming note and velocity values up to the freq and amp arguments of a synth that we create. Note that the MIDIdef contains two things: its name, and the function it will trigger on every incoming MIDI note-on. We simply create a Synth inside that function.

//First we create a synth definition for this example:
SynthDef(\midisynth1, {arg freq=440, amp=0.1;
	var signal, env;
	signal = VarSaw.ar([freq, freq+2], 0, XLine.ar(0.7, 0.9, 0.13));
	env = EnvGen.ar(Env.perc(0.001), doneAction:2); // this envelope will die out
	Out.ar(0, signal*env*amp);
}).add;

Synth(\midisynth1) // let's try it

// and now we can play the synth
MIDIdef.noteOn(\mydef, {arg vel, key, channel, device; 
	Synth(\midisynth1, [\freq, key.midicps, \amp, vel/127]);
	[key, vel].postln; 
});

But the above is not common synth-like behaviour. Typically you’d hold a key down and the note would not be released until you lift your finger off it. We therefore need to use an ADSR (attack, decay, sustain, release) envelope.

//First we create a synth definition for this example:
SynthDef(\midisynth2, {arg freq=440, amp=0.1, gate=1;
	var signal, env;
	signal = VarSaw.ar([freq, freq+2], 0, XLine.ar(0.7, 0.9, 0.13));
	env = EnvGen.ar(Env.adsr(0.001), gate, doneAction:2);
	Out.ar(0, signal*env);
}).add;
// since we added default freq and amp arguments we can try it:
a = Synth(\midisynth2) // playing 440 Hz
a.release // and the synth will play until we release it (gate = 0)
// the adsr envelope in the synth keeps the gate open as long as note is down

// now let's connect the MIDI
MIDIIn.connectAll; // we connect all the incoming devices
MIDIdef.noteOn(\mydef, {arg vel, key, channel, device; 
	Synth(\midisynth2, [\freq, key.midicps, \amp, vel/127]);
	[key, vel].postln; 
});

What’s going on here? The synth definition uses a common trick of slightly detuning the frequency in order to make the sound more “analogue” or imperfect. We use a VarSaw, whose waveform width we change over time with an XLine UGen. The SynthDef has an amp argument for the volume and a gate argument that keeps the synth playing until we tell it to stop.

But what happened? We play and we get a cacophony of sound. The notes are piling up on top of each other as they are not released. How would you solve this?

You could put the note into a variable:

MIDIdef.noteOn(\myOndef, {arg vel, key, channel, device; 
	a = Synth(\midisynth2, [\freq, key.midicps, \amp, vel/127]);
	[key, vel].postln; 
});
MIDIdef.noteOff(\myOffdef, {arg vel, key, channel, device; 
	a.release;
	[key, vel].postln; 
});

And it will release the note when you release your finger. However, now the problem is that if you press another key whilst holding down the first one, the second key’s synth will be put into variable ‘a’, so you have lost the reference to the first one. You can’t release it! There is no access to it. Here is where SuperCollider excels as a programming language and makes things simple compared to data-flow programming environments like Pd or Max/MSP: we just create an array and put our synths into it. Every note has a slot in the array, and we turn the synths on and off depending on the MIDI message:

a = Array.fill(128, { nil }); // one slot for each possible MIDI note (0-127)
g = Group.new; // we create a Group to be able to set cutoff of all active notes
c = 6;
// note: this assumes a \moog SynthDef with freq, amp and cutoff arguments and a gated envelope
MIDIdef.noteOn(\myOndef, {arg vel, key, channel, device; 
	// we use the key as index into the array as well
	a[key] = Synth(\moog, [\freq, key.midicps, \amp, vel/127, \cutoff, c], target:g);
	
});
MIDIdef.noteOff(\myOffdef, {arg vel, key, channel, device; 
	a[key].release;
});
MIDIdef.cc(\modulation, { arg val; c=val.linlin(0, 127, 6, 20); g.set(\cutoff, c) });
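The same approach works for the other input methods listed above. As a hedged sketch (the \bend argument here is hypothetical; your SynthDef would need to multiply its frequency by it), pitch bend could be mapped onto all active notes like this:

// pitch bend arrives as a 14-bit value (0-16383, centre 8192)
MIDIdef.bend(\mybend, { arg val, channel, device;
	g.set(\bend, val.linlin(0, 16383, -2, 2).midiratio); // a bend range of +/- 2 semitones
});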

MIDI Communication (Output)

It is equally easy to control external hardware or software with SuperCollider’s MIDI functionality. Just as above we initialise the MIDI client and check which devices are available:

// we initialise the MIDI client and the post window will output your devices
MIDIClient.init;
// the destinations are the devices that can receive MIDI
MIDIClient.destinations;

// the default device is selected
m = MIDIOut(0); 
// or select your own device from the list of destinations
m = MIDIOut(0, MIDIClient.destinations[0].uid); 
// we now have a MIDIOut object stored in variable 'm'.
// now we can use the object to send out MIDI messages:
m.latency = 0; // we put the latency to 0 (default is 0.2)
m.noteOn(0, 60, 100); // note on
m.noteOff(0, 60, 100); // note off

And you could control your device using Patterns:

Pbind(
	\type, \midi, 
	\midiout, m, 
	\midinote, Prand([60, 62, 63, 66, 69], inf), 
	\chan, 1, 
	\amp, 1, 
	\dur, 0.25
).play;

or for example a Task:

a =[72, 76, 79, 71, 72, 74, 72, 81, 79, 84, 79, 77, 76, 77, 76];
t = Task({
 	inf.do({arg i; // i is the counter and wrapAt can wrap the array
	m.noteOff(0, a.wrapAt(i-1), 100); // note off
	m.noteOn(0, a.wrapAt(i), 100); // note on
	0.25.wait;
 	})
}).play; 

You might have recognised the beginning of a Mozart melody there, but perhaps not, as the note lengths were not correct. How would you solve that? Try to fix the timing of the notes as an exercise. Tip: create a duration array (in a variable ‘d’, for example) and use it instead of the “0.25.wait;” above, using wrapAt(i) to get the correct duration slot.
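One possible shape of a solution, as a sketch (these duration values are guesses, not Mozart’s):

// a duration array gives each note its own length
d = [0.5, 0.25, 0.25, 0.5, 0.5, 0.5, 0.5, 0.5, 0.25, 0.25, 0.5, 0.25, 0.25, 0.25, 0.25];
t.stop; // stop the previous task first
t = Task({
	inf.do({arg i;
		m.noteOff(0, a.wrapAt(i-1), 100); // note off for the previous note
		m.noteOn(0, a.wrapAt(i), 100); // note on
		d.wrapAt(i).wait; // wait for this note's duration
	})
}).play;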

OSC - Open Sound Control

Open Sound Control has become the principal protocol replacing MIDI in the 21st century. It is a fast and flexible network protocol that can be used to communicate between applications (like the SC server and sc-lang), between computers (on a local network or over the internet), or with hardware (that supports OSC). It is used by musicians and media artists all over the world, and it has become so popular that commercial software companies now support it in their software. In many ways it could have been called OMC (Open Media Control), as it is used in graphics, video, 3D software, games, and robotics as well.

OSC is a protocol of communication (how to send messages), but it does not define a standard of what to communicate (that’s the open bit). Unlike MIDI, it can send all kinds of information through the network (integers, floats, strings, arrays, etc.), and the user can define the message names (or address spaces as they are also called).

There are two things that the user needs to know: the computer’s IP address and the listening port.

  • IP address: typically something like "194.81.199.106", or locally "127.0.0.1" (localhost)
  • Port: you can use any port, but ideally choose one above 10000

You have already used OSC in the SendTrig example of chapter 2, but there it was ‘under the hood’, so to speak, as the communication took place in the SuperCollider classes.

n = NetAddr("127.0.0.1", 57120);
a = OSCdef(\test, { arg msg, time, addr, recvPort; msg.postln; }, '/hello', n);
n.sendMsg('/hello', 4000.rand); // run this line a few times
n.sendMsg('/hola', 4000.rand); // try this, but it won't work. Why not?
a.free;

OSC messages make use of Unix-like address spaces. You know what that is already, as you are used to how web addresses use ‘/’ to indicate a folder further down. For example, the OSCdef.html document lies in a folder called ‘Classes’ (http://doc.sccode.org/Classes/OSCdef.html), together with lots of other documents. The address above is ‘/hello’ (not ‘/hola’).

The idea here is that we can send messages directly deep into the internals of our synthesizers or systems, for example like this:

'/synth1/oscillator2/lowpass/cutoff', 12000
'/synth1/oscillator2/frequency', 300
'/light3/red/intensity', 10
'/robot/leftarm/upper/xdegrees', 90

and so on. We are here giving direct messages that are human-readable as well as machine-specific. This is very different from how people used to use MIDI, where you have no way of naming things: you have to make do with mapping onto only 16 channels, often constrained to messages with numbers from 0-127.

Try to open Pure Data and create a new patch with the following in it:

[dumpOSC 12000]
|
[print]

Then send the messages to Pd with this code in SC:

n = NetAddr("127.0.0.1", 12000);
n.sendMsg('/hello', 4000.rand); // Pure Data will print this message

Try to do the same with another computer on the same network, but then send it to that computer’s IP address:

n = NetAddr(other_computer_IP_address, 12000);
n.sendMsg('/hello', 4000.rand);

Use the same Pd patch on that computer, but then run the following lines in SuperCollider:

a = OSCdef(\test, { arg msg, time, addr, recvPort; msg.postln; }, '/hello', nil);

You notice that there is now ‘nil’ as the sender address. This allows any computer on the network to send to your computer. If you limited that to a specific net address (for example NetAddr("192.9.12.199", 3000)), it would only be able to receive OSC messages from that specific address/computer.

Hopefully you have now been able to send OSC messages to other software on your computer, to Pd on another computer, and to SuperCollider on another computer. These examples were all on the same network. You might have to change settings in your firewall for this to work across networks, and if you are on an institutional network (such as a university network) you might even have to ask the system administrators to open up a specific port if the incoming messages come from outside the network (internally it works without admin changes).

We could end this section by creating a little program that is typical of how people use OSC over networks, on the same or different computers. Below we have the listener:

// synth definition used in this example
SynthDef(\osc, {arg freq=220, cutoff=1200;
	Out.ar(0, LPF.ar(Saw.ar(freq, 0.5), cutoff));
}).add;
// the four OSC defs, that represent the program functionality
OSCdef(\createX, { arg msg, time, addr, recvPort; 
	x = Synth(\osc);
}, '/create', nil); 
OSCdef(\releaseX, { arg msg, time, addr, recvPort;
	x.free; 
	}, '/free', nil);
OSCdef(\freqX, { arg msg, time, addr, recvPort; 
	x.set(\freq, msg[1]);
	}, '/freq', nil);
OSCdef(\cutoffX, { arg msg, time, addr, recvPort;
	x.set(\cutoff, msg[1]);
	}, '/cutoff', nil);

And the other system (another application or another computer) will send something like this:

n = NetAddr("127.0.0.1", 57120);
n.sendMsg('/create')
n.sendMsg('/freq', rrand(100, 2000))
n.sendMsg('/cutoff', rrand(100, 2000))
n.sendMsg('/free')

The messages could be wrapped into functionality that is plugged to some GUI, a hardware sensor (pressure sensor and motion tracker for example), or perhaps algorithmically generated together with some animated graphics.

GUI - Graphical User Interfaces

Note that we are creating N synths (the number defined in the variable “nrSynths”) and putting them all into one list. That way we can access and control them individually from the GUI. Look at how the sliders and buttons of the GUI control their respective synths directly by accessing synthList[i] (where “i” is the index of the synth in the list).

TIP: change the nrSynths variable to some other number (10, 16, etc) and see what happens.

(
var synthList, nrSynths;
nrSynths = 6;

synthList = Array.fill(nrSynths, {0});

w = SCWindow("SC Window", Rect(400, 64, 650, 360)).front;

nrSynths.do({arg i;

	// we create the buttons
	Button(w, Rect(10+(i*(w.bounds.width/nrSynths)), 20, (w.bounds.width/nrSynths)-10, 20))
		.states_([["on",Color.black,Color.clear],["off",Color.white,Color.black]])
		.action_({arg butt;
			if(butt.value == 1, {
				synthList.put(i, Synth(\GUIsine)); // assumes a \GUIsine SynthDef with freq and amp arguments
				synthList.postln;
			}, {
				synthList[i].free;
			})
		});
		
	// frequency slider
	Slider(w, Rect(10+(i*(w.bounds.width/nrSynths)), 60, (w.bounds.width/nrSynths)-10, 20))
		.action_({arg sl;
				synthList[i].set(\freq, sl.value*1000); // simple mapping (check ControlSpec)
		});
		
	// amplitude slider
	Slider(w, Rect(10+(i*(w.bounds.width/nrSynths)), 100, (w.bounds.width/nrSynths)-10, 20))
		.action_({arg sl;
				synthList[i].set(\amp, sl.value);
		});
});
)

ControlSpec - Scaling/mapping values

In the examples above we have used a very crude mapping of a slider onto a frequency argument in a synth. A slider in the SuperCollider GUI gives a value from 0 to 1.0, in a resolution defined by yourself and by the size of the slider (the longer the slider, the higher the resolution). So above we are using part of the slider to control frequency values from 0 to 20 Hz that we are most likely not interested in. And we might also want an exponential or inverted mapping.

The ControlSpec is the equivalent of [scale] in Pd or Max. Check the helpfile.

The ControlSpec takes the following arguments: minval, maxval, warp, step, default, units

a = ControlSpec.new(20, 22050, \exponential, 1, 440);
a.warp
a.default

// so any value we pass to the ControlSpec is mapped to our specification above
a.map(0.1)
a.map(0.99)

// we could constrain the mapping
a.constrain(16000)
a.map(1.66) // clips at max frequency (22050)

// we can also unmap values
a.unmap(11025) // we get a high value as pitch is exponential

// let's see what this maps to on a linear scale (yes you guessed right)
a = ControlSpec.new(20, 22050, \lin, 1, 440);
a.unmap(11025).round(0.1)


// TIP: An array can be cast into a ControlSpec with the method .asSpec
[300, 3000, \exponential, 100].asSpec

// TIP2: Take a look at the source file for ControlSpec (by hitting Cmd+I)
// You will see lots of different warps, like db, pan, midi, rq, etc.

(
var w, c, d, warparray, stringarray;
w = SCWindow("control", Rect(128, 64, 340, 960)).front;
warparray = [\unipolar, \bipolar, \freq, \lofreq, \phase, \midi, \db, \amp, \pan, \delay, \\
beats];
stringarray = [];

warparray.do({arg warpmode, i;
	a = warpmode.asSpec;
	SCStaticText(w, Rect(10, 30+(i*50), 300, 20)).string_(warparray[i].asString);
	stringarray = stringarray.add(SCStaticText(w, Rect(80, 30+(i*50), 300, 20)));
	SCSlider(w, Rect(10, 10+(i*50), 300, 20))
		.action_({arg sl;
			stringarray[i].string = "unmapped value"
			+ sl.value.round(0.01) 
			+ "......" 
			+ "mapped to:" 
		+ warpmode.asSpec.map(sl.value).round(0.01)
		})
});
)

Now let’s finish by taking the example above and mapping the slider to pitch. Try exploring different warp modes for the pitch, and create an amplitude slider.

(
var spec, synth;
w = SCWindow("SC Window", Rect(128, 64, 340, 360)).front;
spec = [100, 1000, \exponential].asSpec;

Button(w, Rect(20,20, 100, 30))
	.states_([["on",Color.black, Color.clear],["off",Color.black, Color.green(alpha:0.2)]])
	.action_({ arg button; if(button.value == 1, { synth = Synth(\GUIsine)}, {synth.free }) });
	
Slider(w, Rect(20, 100, 200, 20))
	// HERE WE USE THE SPEC !!! - we map the spec to the value of the slider (0 to 1.0)
	.action_({arg sl; synth.set(\freq, spec.map(sl.value)) }); 
)

Other Views (but not all)

(

w = Window("SC Window", Rect(400, 64, 650, 360)).front;
a = Button(w, Rect(20,20, 60, 20))
	.states_([["on",Color.black,Color.clear],["off",Color.black,Color.clear]])
	.action_({arg butt; butt.value.postln;});

b = Slider(w, Rect(20, 50, 60, 20))
	.action_({arg sl;
		sl.value.postln;
	});

e = Slider(w, Rect(90, 20, 20, 60))
	.action_({arg sl;
		sl.value.postln;
	});
	
c = Slider2D(w, Rect(20, 80, 60, 60))
	.action_({arg sl;
		[\x, sl.x.value, \y, sl.y.value].postln;
	});

d = RangeSlider(w, Rect(20, 150, 60, 20))
	.action_({arg sl;
		[\lo, sl.lo.value, \hi, sl.hi.value].postln;
	});

f = NumberBox(w, Rect(130, 20, 100, 20))
	.action_({
		arg numb; numb.value.postln;	
	});

g = StaticText(w, Rect(130, 50, 100, 20))
	.string_("some text");
	
h = ListView(w,Rect(130,80,80,50))
	.items_(["aaa","bbb", "ccc", "ddd", "eee", "fff"])
	.action_({ arg sbs;
		[sbs.value, sbs.item].postln;	// .value returns the integer
	});

i = MultiSliderView(w, Rect(130, 150, 100, 50))
	.action_({arg xb; ("index: " ++ xb.index ++" value: " ++ xb.currentvalue).postln});

j = PopUpMenu(w, Rect(20, 178, 100, 20))
	.items_(["one", "two", "three", "four", "five"])
	.action_({ arg sbs;
		sbs.value.postln;	// .value returns the integer
	});

k = EnvelopeView(w, Rect(20, 220, 200, 80))
	.drawLines_(true)
	.selectionColor_(Color.red)
	.drawRects_(true)
	.resize_(5)
	.action_({arg b; [b.index,b.value].postln})
	.thumbSize_(5)
	.value_([[0.0, 0.1, 0.5, 1.0],[0.1,1.0,0.8,0.0]]);

)

HID - Human Interface Devices

SuperCollider has good support for using joysticks, game pads, drawing tablets and other interfaces that work with the HID protocol (a subset of the USB protocol, using the USB port of the computer).

Hardware - serial Arduino info

Arduino - Bela

Part II

Chapter 5 - Additive Synthesis

In the early 19th century, the mathematician Joseph Fourier developed the theory that any periodic sound can be described as a sum of pure sine waves. This is a very important statement for computer music. It means that we can recreate any sound that we hear by adding a number of sine waves together with different frequencies, phases and amplitudes. Obviously this was a costly technique in the times of modular synthesis, as one would have to apply multiple oscillators to get the desired sound. This has changed with digital sound, where innumerable oscillators can be added together at little cost. Here is a demonstration:

// we add 500 oscillators together and the CPU is less than 20% 
{({SinOsc.ar(4444.4.rand, 0, 0.005)}!500).sum}.play

Adding waves

Adding waves together seems simple, and indeed it is. By using the plus operator we can add two signals together, and their values at each point in time sum to the combined value. In the following images we can see how simple sinusoidal waves add up:

Adding two waves of 440Hz together
{[SinOsc.ar(440), SinOsc.ar(440), SinOsc.ar(440)+SinOsc.ar(440)]}.plot
// try this as well
{a = SinOsc.ar(440, 0, 0.5); [a, a, a+a]}.plot
Adding a 440Hz and a 220Hz wave together
{[SinOsc.ar(440), SinOsc.ar(220), SinOsc.ar(440)+SinOsc.ar(220)]}.plot
Adding two 440 waves together but one with inverted phase
{[SinOsc.ar(440), SinOsc.ar(440, pi), SinOsc.ar(440)+SinOsc.ar(440, pi)]}.plot

You see that two waves at the same frequency added together become a wave of twice the amplitude. When two waves with an amplitude of 1 are added together we get an amplitude of 2, and in the graph the wave is clipped at 1. This causes distortion, but also results in a different wave form, approaching a square wave. You can explore this by giving a sine wave an amplitude of 10 and then clipping the signal at, say, -0.75 and 0.75.

{SinOsc.ar(440, 0, 10).clip(-0.75, 0.75)}.scope

Most instrumental sounds can be roughly described as a combination of sine waves. Those sinusoidal waves are called partials (the horizontal lines you see in a spectrogram when you analyse a sound). In the example below we mix ten sine waves of frequencies between 200 and 2000. You might well be able to detect a pitch in the example if you run it many times, but since these are random frequencies they are not necessarily lining up to give us a solid pitch.

{Mix.fill(10, {SinOsc.ar(rrand(200,2000), 0, 0.1)})}.freqscope
{Mix.fill(10, {SinOsc.ar(rrand(200,2000), 0, 0.1)})}.spectrogram // requires the Spectrogram Quark mentioned above

In harmonic sounds, like those of the piano, guitar, or violin, we get partials that are whole number multiples of the fundamental (the lowest) partial. If they are such multiples of the fundamental, the partials are called harmonics. The harmonics can be of varied amplitude, phase, envelope form, and duration. A saw wave is a wave form with all the harmonics represented, decreasing in amplitude:

{Saw.ar(880)}.freqscope

It is recommended that you play with adding waves together in various ways. Explore what happens when you add harmonics together (integer multiples of a fundamental frequency):

// adding two waves - the second is the octave (second harmonic) of the first
{(SinOsc.ar(440,0, 0.4) + SinOsc.ar(880, 0, 0.4))!2}.play
// here we add four harmonics (of equal amplitude) together
(
{	
var freq = 200;
SinOsc.ar(freq, 0, 0.2)   + 
SinOsc.ar(freq*2, 0, 0.2) +
SinOsc.ar(freq*3, 0, 0.2) + 
SinOsc.ar(freq*4, 0, 0.2) 
!2}.play
)

The harmonic series is something we all know intuitively and have heard many times (swing a flexible tube around your head and you will get a sound in the harmonic series). The Blip UGen in SuperCollider allows you to dynamically control the number of harmonics of equal amplitude:

{Blip.ar(440, MouseX.kr(1, 20))}.scope // using the Mouse
{Blip.ar(440, MouseX.kr(1, 20))}.freqscope
{Blip.ar(440, Line.kr(1, 22, 3) )}.play

Creating wave forms out of sinusoids

In SuperCollider you can create all kinds of wave forms out of combinations of sine waves. By adding SinOscs together, you can arrive at your own unique wave forms to use in your synths. In this section we will look at how we use additive synthesis to arrive at diverse wave forms.

// a) here is an array with 5 items:
Array.fill(5, {arg i; i.postln;});
// b) this is the same as (using a shortcut):
{arg i; i.postln;}.dup(5)
// c) or simply (using another shortcut):
{arg i; i.postln;}!5

// d) we can then sum the items in the array (add them together):
Array.fill(5, {arg i; i.postln;}).sum;
// e) we could do it this way as well:
sum({arg i; i.postln;}.dup(5));
// f) or this way:
({arg i; i.postln;}.dup(5)).sum;
// g) or this way:
({arg i; i.postln;}!5).sum;
// h) or simply this way:
sum({arg i; i.postln;}!5);

Earlier we played a Saw wave, which contains harmonics up to the [Nyquist rate](http://en.wikipedia.org/wiki/Nyquist_rate), which is half of the sample rate SuperCollider is running at. The Saw UGen is “band-limited”, which means that it does not alias and mirror back into the audible range. (Compare with LFSaw, which will alias: you can both hear and see the harmonics mirror back into the audio range.)

{Saw.ar(MouseX.kr(100, 1000))}.freqscope
{LFSaw.ar(MouseX.kr(100, 1000))}.freqscope

We can now try to create a saw wave out of sine waves. There is a simple algorithm for this, where each partial is an integer multiple of the fundamental frequency, decreasing in amplitude by the reciprocal of the harmonic’s number (1/harmnum).

A ‘Saw’ wave with 30 harmonics:

(
f = {
        ({arg i;
                var j = i + 1;
                SinOsc.ar(300 * j, 0,  j.reciprocal * 0.5);
        } ! 30).sum // we sum this function 30 times
!2}; // and we make it a stereo signal
)

f.plot; // let's plot the wave form
f.play; // listen to it
f.freqscope; // view and listen to it

By inverting the phase (using pi), we get an inverted wave form.

(
f = {
        Array.fill(30, {arg i;
                var j = i + 1;
                SinOsc.ar(300 * j, pi,  j.reciprocal * 0.5) // note pi
        }).sum // we sum this function 30 times
!2}; // and we make it a stereo signal
)

f.plot; // let's plot the wave form
f.play; // listen to it
f.freqscope; // view and listen to it

A square wave is a type of pulse wave (if the length of the on time of the pulse is equal to the length of the off time – also known as a duty cycle of 1:1 – then the pulse wave may also be called a square wave). The square wave can be created from sine waves if we ignore all the even harmonics and only add the odd ones.

(
f = {
        ({arg i;
                var j = i * 2 + 1; // the odd harmonics (1,3,5,7,etc)
                SinOsc.ar(300 * j, 0, 1/j)
        } ! 20).sum;
};
)

f.plot;
f.play;
f.freqscope;

Let’s quickly look at the regular Pulse wave in SuperCollider:

{ Pulse.ar(440, MouseX.kr(0, 1), 0.5) }.scope;
// we could also recreate this with an algorithm on a sine wave:
{ if( SinOsc.ar(122)>0 , 1, -1 )  }.scope; // a square wave
{ if( SinOsc.ar(122)>MouseX.kr(0, 1) , 1, -1 )  }.scope; // MouseX controls the duty cycle
{ if( SinOsc.ar(122)>MouseX.kr(0, 1) , 1, -1 ) * 0.1 }.scope; // amplitude down

A triangle wave is similar to the square wave in that it ignores the even harmonics, but it has a different algorithm for the phase and the amplitude:

(
f = {
        ({arg i;
                var j = i * 2 + 1;
                SinOsc.ar(300 * j, pi/2, 0.7/j.squared) // cosine wave (phase shift)
        } ! 20).sum;
};
)
f.plot;
f.play;
f.freqscope;

We have now created various wave forms using sine waves, and here is how to wrap them up in a SynthDef for future use:

SynthDef(\triwave, {arg freq=400, pan=0, amp=1;
	var wave;
	wave = ({arg i;
                	var j = i * 2 + 1;
                	SinOsc.ar(freq * j, pi/2, 0.6 / j.squared);
        	} ! 20).sum;
	Out.ar(0, Pan2.ar(wave * amp, pan));
}).add;

a = Synth(\triwave, [\freq, 300]);
a.set(\amp, 0.3, \pan, -1);
b = Synth(\triwave, [\freq, 900]);
b.set(\amp, 0.4, \pan, 1);
s.freqscope; // if the freqscope is not already running
b.set(\freq, 1400); // not band limited as we can see 

We have created various typical wave forms above in order to show how they are sums of sinusoidal waves. A good idea is to play with this further and create your own waveforms:

(
f = {
        ({arg i;
                var j = i * 2.cubed + 1;
                SinOsc.ar(MouseX.kr(20,800) * j, 0, 1/j)
        } ! 20).sum;
};
)
f.plot;
f.play;
(
f = {
        ({arg i;
                var j = i * 2.squared.distort + 1;
                SinOsc.ar(MouseX.kr(20,800) * j, 0, 0.31/j)
        } ! 20).sum;
};
)
f.plot;
f.play;

Bell Synthesis

Not all sounds are harmonic. Many musical instruments are inharmonic, for example timpani drums, xylophones, and bells. Here the partials of the sound are not in a harmonic relationship (or multiples of some fundamental frequency). This does not mean that we can’t detect pitch, as there will be certain partials that have stronger amplitude and longer duration than others. Since we know bells are inharmonic, the first thing we might try is to generate a sound with, say, 15 partials:

{ ({ SinOsc.ar(rrand(80, 800), 0, 0.1)} ! 15).sum }.play

Try to run this a few times. What we hear is a wave form that might be quite similar to a bell at first, but then the resemblance disappears, because the partials do not fade out. If we add an envelope to each of these sinusoids, we get a different sound:

{
Mix.fill( 10, { 	
	SinOsc.ar(rrand(200, 700), 0, 0.1) 
	* EnvGen.ar(Env.perc(0.0001, rrand(2, 6))) 
});
}.play

Above we are using Mix.fill instead of creating an array with ! and then .sum-ing it. These two examples do the same thing, but it is good for the student of SuperCollider to learn different ways of reading and writing code.

You’ll note that there is a “new” bell every time we run the above code. But what if we wanted the “same” bell? One way to do that is to “hard-code” the frequencies, durations, and amplitudes of the bell.

{
var freq = [333, 412, 477, 567, 676, 890, 900, 994];
var dur = [4, 3.5, 3.6, 3.1, 2, 1.4, 2.4, 4.1];
var amp = [0.4, 0.2, 0.1, 0.4, 0.33, 0.22, 0.13, 0.4];
Mix.fill( 8, { arg i;
	SinOsc.ar(freq[i], 0, amp[i]) 
	* EnvGen.ar(Env.perc(0.0001, dur[i])) 
});
}.play

Generating a SynthDef using a non-deterministic algorithm (such as random) in the SC-lang will also generate a SynthDef that is the “same” bell every time it is played. Why? This is because the random values (430.rand) are fixed when the synth definition is compiled. Try to recompile the SynthDef and you get a new sound:

(
SynthDef(\mybell, {arg freq=333, amp=0.4, dur=2, pan=0.0;
	var signal;
	signal = Mix.fill(10, {
		SinOsc.ar(freq+(430.rand), 1.0.rand, 10.reciprocal) 
		* EnvGen.ar(Env.perc(0.0001, dur), doneAction:2) }) ;
	signal = Pan2.ar(signal * amp, pan);
	Out.ar(0, signal);
}).add
)
// let's try our bell
Synth(\mybell) // same sound all the time
Synth(\mybell, [\freq, 444+(400.rand)]) // new frequency, but same sound
// try to redefine the SynthDef above and you will now get a different bell:
Synth(\mybell) // same sound all the time

Another way of generating this bell sound would be to use a simple sine SynthDef like the one from earlier, here adding a duration to the envelope:

(
SynthDef(\sine, {arg freq=333, amp=0.4, dur, pan=0.0;
	var signal, env;
	env = EnvGen.ar(Env.perc(0.01, dur), doneAction:2);
	signal = SinOsc.ar(freq, 0, amp) * env;
	signal = Pan2.ar(signal, pan);
	Out.ar(0, signal);
}).add
);

(
var numberOfSynths;
numberOfSynths = 15;
Array.fill(numberOfSynths, {
	Synth(\sine, [	
		\freq, 300+(430.rand),
		\amp, numberOfSynths.reciprocal, // reciprocal here means 1/numberOfSynths
		\dur, 2+(1.0.rand)]);
});
});
)

The power of this style lies in being able to define all the parameters of the sound from the language, for example when sonifying complex information from gestural or other data.

The Klang UGen

Another interesting way of achieving this is to use the Klang UGen. Klang is a bank of sine oscillators that takes arrays of frequencies, amplitudes and phases as arguments.

{Klang.ar(`[ [430, 810, 1050, 1220], [0.23, 0.13, 0.23, 0.13], [pi,pi,pi, pi]], 1, 0)}.play

And we create a SynthDef with the Klang UGen:

(
SynthDef(\saklangbell, {arg freq=400, amp=0.4, dur=2, pan=0.0; // we add a new argument
	var signal, env;
	env = EnvGen.ar(Env.perc(0.01, dur), doneAction:2); // doneAction gets rid of the synth
	signal = Klang.ar(`[freq * [1.2,2.1,3.0,4.3], [0.25, 0.25, 0.25, 0.25], nil]) * env;
	signal = Pan2.ar(signal, pan);
	Out.ar(0, signal);
}).add
)
Synth(\saklangbell, [\freq, 100])

Xylophone Synthesis

Additive synthesis is good for various types of sound, but it suits xylophones, bells and other metallic instruments (typically inharmonic sounds) particularly well, as we saw with the bell example above. Harmonic wave forms, such as saw, square or triangle waves, would not be useful here, as their partials are whole-number multiples of the fundamental (as we know from the section above).

In additive synthesis, people often analyse the sound they’re trying to synthesise by generating a spectrogram of its frequencies.

A spectrogram of a xylophone sound

The information the spectrogram gives us is three dimensional. It shows us the frequencies present in the sound on the vertical y-axis, time on the horizontal x-axis, and amplitude as colour (which we could imagine as the z-axis). We see that the partials don’t have the same type of envelopes: some have a strong attack, others come in smoothly; some have much amplitude, others less; some have a long duration whilst others are shorter; and some of them vibrate in frequency. These parameters can mix: a loud partial could die out quickly while a soft one can live for a long time.
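SuperCollider does not draw spectrograms out of the box, but the built-in frequency scope gives a real-time view of the spectrum, which serves a similar purpose when analysing a sound. A quick sketch, using the example sound file that ships with SuperCollider (the same file is used in a later chapter):

b = Buffer.read(s, Platform.resourceDir +/+ "sounds/a11wlk01.wav");
{ PlayBuf.ar(1, b, loop: 1) }.freqscope;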

{ ({ SinOsc.ar(rrand(180, 1200), 0.5*pi, 0.1) // the partial
		*
	// each partial gets its own envelope of 0.5 to 5 seconds
	EnvGen.ar(Env.perc(rrand(0.00001, 0.01), rrand(0.5, 5)))
} ! 12).sum }.play

Analysing the bell above we can detect the following partials:

  • partial 1: xxx Hz, x sec. long, with amplitude of ca. x
  • partial 2: xxx Hz, x sec. long, with amplitude of ca. x
  • partial 3: xxx Hz, x sec. long, with amplitude of ca. x
  • partial 4: xxx Hz, x sec. long, with amplitude of ca. x
  • partial 5: xxx Hz, x sec. long, with amplitude of ca. x
  • partial 6: xxx Hz, x sec. long, with amplitude of ca. x
  • partial 7: xxx Hz, x sec. long, with amplitude of ca. x

We can now try to synthesize those harmonics:

{ SinOsc.ar(xxx, 0, 0.1)+
SinOsc.ar(xxx, 0, 0.1)+
SinOsc.ar(xxx, 0, 0.1)+
SinOsc.ar(xxx, 0, 0.1)+
SinOsc.ar(xxx, 0, 0.1)+
SinOsc.ar(xxx, 0, 0.1)
}.play

And we get a decent inharmonic sound (inharmonic is where the partials are not whole number multiples of a fundamental frequency). We would now need to set the right amplitudes as well; we are still guessing from the spectrogram we made, but more importantly we should be using our ears.

{ SinOsc.ar(xxx, 0, xxx)+
SinOsc.ar(xxx, 0, xxx)+
SinOsc.ar(xxx, 0, xxx)+
SinOsc.ar(xxx, 0, 0.1)+
SinOsc.ar(xxx, 0, 0.1)+
SinOsc.ar(xxx, 0, 0.1)
}.play

Some of the partials have a bit of vibration and we could simply turn the oscillator into a ‘detuned’ oscillator by adding two sines together:

// a regular 880 Hz wave at full amplitude
{SinOsc.ar(880)!2}.play
// a vibrating 880Hz wave (vibration at 3 Hz), where each is amp 0.5
{SinOsc.ar([880, 883], 0, 0.5).sum!2}.play
// the above is the same as (note the .sum):
{(SinOsc.ar(880, 0, 0.5)+SinOsc.ar(883, 0, 0.5))!2}.play
{ SinOsc.ar([xxx, xxx], 0, xxx).sum+
SinOsc.ar([xxx, xxx], 0, xxx).sum+
SinOsc.ar([xxx, xxx], 0, xxx).sum+
SinOsc.ar([xxx, xxx], 0, xxx).sum+
SinOsc.ar([xxx, xxx], 0, xxx).sum+
SinOsc.ar([xxx, xxx], 0, xxx).sum
}.play

And finally, we need to create envelopes for each of the partials:

{ (SinOsc.ar([xxx, xxx], 0, xxx).sum *
EnvGen.ar(Env.perc(0.00001, xxx))) +
 (SinOsc.ar([xxx, xxx], 0, xxx).sum *
EnvGen.ar(Env.perc(0.00001, xxx))) +
 (SinOsc.ar([xxx, xxx], 0, xxx).sum *
EnvGen.ar(Env.perc(0.00001, xxx))) +
 (SinOsc.ar([xxx, xxx], 0, xxx).sum *
EnvGen.ar(Env.perc(0.00001, xxx))) +
 (SinOsc.ar([xxx, xxx], 0, xxx).sum *
EnvGen.ar(Env.perc(0.00001, xxx)))
}.play

And let’s listen to that. You will note that parentheses have been put around each sine wave and its envelope multiplication. This is because SuperCollider calculates from left to right, without giving the * and / operators precedence over + and -, as in common maths and many other programming languages.

TIP: Operator Precedence - explore how these equations result in different outcomes

2+2*8 // you would expect 18 as the result, but SC returns what?
100/2-10 // here you would expect to get 40, and you get the same in SC. Why?
// now, for this reason it's a good practice to use parenthesis, e.g.,
2+(2*8)
100/(2-10) // if that's what you were trying to do

We have now created a reasonable representation of the bell sound that we listened to. The next thing to do is to turn that into a synth definition and make it stereo. Note that we add a general envelope with a doneAction:2, which will remove the synth from the server when it has stopped playing.

SynthDef(\bell, xxxx

// and we can play our new bell
Synth(\bell)

This bell has a specific frequency, but it would be nice to be able to pass a new frequency as a parameter. This could be done in many ways; one would be to pass the frequencies of each of the oscillators as arguments to the Synth. This would make the instrument quite flexible, but on the other hand it would weaken its unique character (since so many more types of bell sounds - with their respective harmonic relationships - could be made with it). So here we decide to keep the same ratios between the partials for this unique bell sound, but allow the sound to change in pitch. We find the ratios by dividing the frequencies by the lowest frequency.

[xxx, xxx2, xxx3, xxx4]/xxx
// which gives us this array:
[xxxxxxxxxxxxxxxxxxxxxxxxx]
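As a concrete example, we can take the hard-coded bell frequencies from earlier in this chapter and divide them by the lowest one:

[333, 412, 477, 567, 676, 890, 900, 994]/333
// -> [ 1.0, 1.237..., 1.432..., 1.702..., 2.03, 2.672..., 2.702..., 2.984... ]

Multiplying this ratio array by any freq argument then yields the same bell transposed to a new pitch.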

We can now use those ratios in our synth definition

SynthDef(\bell, xxxx

// and we can play the bell with different frequencies
Synth(\bell, [\freq, 440])
Synth(\bell, [\freq, 220])
Synth(\bell, [\freq, 590])
Synth(\bell, [\freq, 1000.rand])

Harmonics GUI

Below you find a Graphical User Interface that allows you to control the harmonics of a fundamental frequency (the slider on the right is the fundamental freq). Here we are also introduced to the Osc UGen, which is a wavetable oscillator that reads its samples from a waveform stored in a buffer.

// we create a SynthDef
SynthDef(\oscsynth, { arg bufnum, freq = 440, ts = 1; 
	var signal;
	signal = Osc.ar(bufnum, freq, 0, 0.2) * EnvGen.ar(Env.perc(0.01), timeScale: ts, doneAction: 2);
	Out.ar(0, signal ! 2);
}).add;

// and then we fill the buffer with our waveform and generate the GUI 
(
var bufsize, ms, slid, cspec, freq;
var harmonics;

freq = 220;
bufsize=4096; 
harmonics=20;

b=Buffer.alloc(s, bufsize, 1);

x = Synth(\oscsynth, [\bufnum, b.bufnum, \ts, 0.1]);

// GUI :
w = Window("harmonics", Rect(200, 470, 20*harmonics+140,150)).front;
ms = MultiSliderView(w, Rect(20, 20, 20*harmonics, 100));
ms.value_(Array.fill(harmonics,0.0));
ms.isFilled_(true);
ms.valueThumbSize_(1.0);
ms.canFocus_(false);
ms.indexThumbSize_(10.0);
ms.strokeColor_(Color.blue);
ms.fillColor_(Color.blue(alpha: 0.2));
ms.gap_(10);
ms.action_({ b.sine1(ms.value, false, true, true) }); // set the harmonics
slid = Slider(w, Rect(20*harmonics+30, 20, 20, 100));
cspec= ControlSpec(70,1000, 'exponential', 10, 440);
slid.action_({	
	freq = cspec.map(slid.value); 	
	[\frequency, freq].postln;
	x.set(\freq, cspec.map(slid.value)); 
	});
slid.value_(0.3); 
slid.action.value;
Button(w, Rect(20*harmonics+60, 20, 70, 20))
	.states_([["Plot",Color.black,Color.clear]])
	.action_({	a = b.plot });
Button(w, Rect(20*harmonics+60, 44, 70, 20))
	.states_([["Start",Color.black,Color.clear], ["Stop!",Color.black,Color.clear]])
	.action_({arg sl;
		if(sl.value ==1, {
			x = Synth(\oscsynth, [\bufnum, b.bufnum, \freq, freq, \ts, 1000]);
			},{x.free;});
	});	
Button(w, Rect(20*harmonics+60, 68, 70, 20))
	.states_([["Play",Color.black,Color.clear]])
	.action_({
		Synth(\oscsynth, [\bufnum, b.bufnum, \freq, freq, \ts, 0.1]);
	});	
Button(w, Rect(20*harmonics+60, 94, 70, 20))
	.states_([["Play rand",Color.black,Color.clear]])
	.action_({
		Synth(\oscsynth, [\bufnum, b.bufnum, \freq, rrand(20,100)+50, \ts, 0.1]);
	});	
)

The “Play” and “Play rand” buttons on the interface allow you to trigger the synth repeatedly whilst changing the harmonic energy of the sound. Can you synthesise a clarinet or an oboe this way? Can you find the sound of a trumpet? You can get close, but of course each of the harmonics would ideally have their own envelope and amplitude (as we saw in the xylophone synthesis above).

Some Additive SynthDefs with routines playing them

The examples above might have raised the question whether all the parameters of the synth could be set from the outside as arguments passed to the synth in the form of arrays. This is possible, of course, but it requires that the arrays are created as inputs when the SynthDef is compiled. In the example below, the partials and the amplitudes of 15 oscillators are set on compilation as the default arguments in respective arrays.

Note the # in front of the arrays in the arguments. It means that they are literal (fixed size) arrays.

(
SynthDef(\addSynthArray, { arg freq=300, dur=0.5, partials = #[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], amps = #[0.30, 0.15, 0.10, 0.07, 0.06, 0.05, 0.04, 0.03, 0.03, 0.03, 0.02, 0.02, 0.02, 0.02, 0.02]; 	
	var signal, env;
	env = EnvGen.ar(Env.perc(0.01, dur), doneAction: 2);
	signal = Mix.arFill(partials.size, {arg i;
				SinOsc.ar(
				freq * partials[i], 
					0,
					amps[i]	
				)});
	
	Out.ar(0, signal.dup * env)
	}).add
)

// a saw-wave-like sound with 15 harmonics 
Synth(\addSynthArray, [\freq, 200])
Synth(\addSynthArray, [\freq, 300])
Synth(\addSynthArray, [\freq, 400])

This is because the synth is using the default arguments of the SynthDef. Let’s try to pass a partials array:

Synth(\addSynthArray, [\freq, 400, \partials, {|i| (i+1)+rrand(-0.2, 0.2)}!15])

What happened here? Let’s scrutinize the partials argument.

{|i| (i+1)+rrand(-0.2, 0.2)}!15
// breaks down to
{|i| i}!15
// or 
{arg i; i } ! 15
// but we don't want a frequency of zero, so we add 1
{|i| (i+1) }!15
// and then we add random values from -0.2 to 0.2
{|i| (i+1) + rrand(-0.2, 0.2) }!15
// resulting in frequencies such as 
{|i| (i+1) + rrand(-0.2, 0.2) * 440 }!15

We can now create a piece that sets new partial frequencies and their amplitudes on every note. As mentioned above, this could be carefully decided, or simply done randomly. If it is completely random, it might be worth looking into the Rand UGens, as they allow for a new random value to be generated within every synth - see the sketch below.
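As a sketch of that idea, here is a variant (the name \addSynthRand is ours) where a Rand UGen picks new partial ratios every time a synth is created, without recompiling the SynthDef:

(
SynthDef(\addSynthRand, { arg freq=300, dur=2, amp=0.3;
	var signal, env;
	env = EnvGen.ar(Env.perc(0.01, dur), doneAction: 2);
	signal = Mix.fill(15, {
		// Rand generates a new value for each synth instance, not at SynthDef compile time
		SinOsc.ar(freq * Rand(1.0, 10.0), 0, 15.reciprocal)
	});
	Out.ar(0, signal.dup * env * amp);
}).add
)
Synth(\addSynthRand) // a different spectrum on every note
Synth(\addSynthRand)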

// test the routine here below. uncomment and comment out the f and a variable lines
(
fork {  // fork is basically a Routine
	100.do({
		// partial frequencies:
		// f = Array.fill(15, {arg i; i=i+1; i}).postln; // harmonic spectra (saw wave)
		f = Array.fill(15, {10.0.rand}); // inharmonic spectra (a bell?)
		// partial amplitudes:
		// a = Array.fill(15, {arg i; i=i+1; 1/i;}).normalizeSum.postln; // saw wave amps
		a = Array.fill(15, {1.0.rand}).normalizeSum.postln; // random amp on each harmonic
		Synth(\addSynthArray, [\partials, f, \amps, a]);
		1.wait;
	});
}
)
(
n = rrand(10, 15);
{ Mix.arFill(n , { 
		SinOsc.ar( [67.0.rrand(2000), 67.0.rrand(2000)], 0, n.reciprocal)
		*
		EnvGen.kr(Env.sine(rrand(2.0, 10) ) )
	}) * EnvGen.kr(Env.perc(11, 6), doneAction: 2, levelScale: 0.75)
}.play;
)

fork {  // fork is basically a Routine
        100.do({
		n = rrand(10, 45);
		"Number of UGens: ".post; n.postln;
		{ Mix.fill(n , { 
			SinOsc.ar( [67.0.rrand(2000), 67.0.rrand(2000)], 0, n.reciprocal)
			*
			EnvGen.kr(Env.sine(rrand(4.0, 10) ) )
		}) * EnvGen.kr(Env.perc(11, 6), doneAction: 2, levelScale: 0.75)
		}.play;
		rrand(5, 10).wait;
		})
}

Using Control to set multiple parameters

There is another way to store and control arrays within a SynthDef. This is using the Control class. The controls are good for passing arrays into running Synths. In order to do this we use the Control UGen inside our SynthDef.

SynthDef("manySines", {arg out=0;
	var sines, control, numsines;
	numsines = 20;
	control = Control.names(\array).kr(Array.rand(numsines, 400.0, 1000.0));
	sines = Mix(SinOsc.ar(control, 0, numsines.reciprocal)) ;
	Out.ar(out, sines ! 2);
}).add;

Here we make an array of 20 frequency values inside a Control and pass this array to the SinOsc UGen, which performs a “multichannel expansion,” i.e., it creates a sine wave on each of 20 successive audio busses. (If you had a sound card with 20 channels, you’d get a sine out of each channel.) But here we mix the sines into one signal. Finally, in the Out UGen we use “! 2”, which is a multichannel expansion trick that makes this a 2 channel signal (we could have used signal.dup).

b = Synth("manySines");

And here below we can change the frequencies of the Control

// our control name is "array"
b.setn(\array, Array.rand(20, 200, 1600)); 
b.setn(\array, {rrand(200, 1600)}!20); 
b.setn(\array, {rrand(200, 1600)}.dup(20));
// NOTE: All three lines above do exactly the same, just different syntax

Here below we use DynKlang (dynamic Klang) in order to change the synth at runtime:

(
SynthDef(\dynklang, { arg out=0, freq=110;
	var klank, n, harm, amp;
	n = 9;
	// harmonics
	harm = Control.names(\harm).kr(Array.series(4,1,4));
	// amplitudes
	amp = Control.names(\amp).kr(Array.fill(4,0.05));
	klank = DynKlang.ar(`[harm,amp], freqscale: freq);
	Out.ar(out, klank);
}).add;
)

a = Synth(\dynklang, [\freq, 230]);

a.set(\harm,  Array.rand(4, 1.0, 4.7))
a.set(\freq, rrand(30, 120))
a.set(\amp, Array.rand(4, 0.005, 0.1))

Klang and DynKlang

It can be laborious to build an array of synths and set the frequencies and amplitudes of each. For this we have the Klang UGen, introduced above: a bank of sine oscillators, more efficient than DynKlang but less flexible. (Don’t confuse these with Klank and DynKlank, which we will explore in the next chapter.)

// bank of 12 oscillators of frequencies between 600 and 1000
{ Klang.ar(`[ Array.rand(12, 600.0, 1000.0), nil, nil ], 1, 0) * 0.05 }.play;
// here we create synths every 2 seconds
(
{
loop({
	{ Pan2.ar( 
		Klang.ar(`[ Array.rand(12, 200.0, 2000.0), nil, nil ], 0.5, 0)
		* EnvGen.kr(Env.sine(4), 1, 0.02, doneAction: 2), 1.0.rand2) 	
	}.play;
	2.wait;
})
}.fork;
)

Klang cannot receive updates to its frequencies, nor can it be modulated. For that we use DynKlang (Dynamic Klang).

(
{ 
	DynKlang.ar(`[ 
		[800, 1000, 1200] + SinOsc.kr([2, 3, 0.2], 0, [130, 240, 1200]),
		[0.6, 0.4, 0.3],
		[pi,pi,pi]
	]) * 0.1
}.freqscope;
)

// amplitude modulation
(
{ 
	DynKlang.ar(`[ 
		[800, 1600, 2400, 3200],
		[0.1, 0.1, 0.1, 0.1] + SinOsc.kr([0.1, 0.3, 0.8, 0.05], 0, [1, 0.8, 0.8, 0.6]),
		[pi,pi,pi,pi]
	]
) * 0.1
}.freqscope;
)

The following patch shows how a GUI can be used to control the amplitudes of the DynKlang oscillator array:

(	// create controls directly with literal arrays:
SynthDef(\dynsynth, {| freqs = #[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0], 
	amps = #[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0], 
	rings = #[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]|
	Out.ar(0, DynKlang.ar(`[freqs, amps, rings]))
}).add
)

(
var bufsize, ms, slid, cspec, rate;
var harmonics = 20;
GUI.qt;

x = Synth(\dynsynth).setn(
				\freqs, Array.fill(harmonics, {|i| 110*(i+1)}), 
				\amps, Array.fill(harmonics, {0})
				);

// GUI :
w = Window("harmonics", Rect(200, 470, 20*harmonics+40,140)).front;
ms = MultiSliderView(w, Rect(20, 10, 20*harmonics, 110));
ms.value_(Array.fill(harmonics,0.0));
ms.isFilled_(true);
ms.indexThumbSize_(10.0);
ms.strokeColor_(Color.blue);
ms.fillColor_(Color.blue(alpha: 0.2));
ms.gap_(10);
ms.action_({
	x.setn(\amps, ms.value*harmonics.reciprocal);
}); 
)

Chapter 6 - Subtractive Synthesis

The last chapter discussed additive synthesis, where the idea is to start with silence, add partials together, and arrive at the sound we are after. Subtractive synthesis works the other way around: we start with a rich sound - a broadband sound either rich in partials/harmonics or noise - and then filter the unwanted frequencies out. WhiteNoise and Saw waves are typical sound sources, as noise has equal energy at all frequencies, whilst the saw wave has a natural sounding harmonic structure with energy at every harmonic.
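As a minimal first sketch of the idea: we take a rich saw wave and filter out the frequencies above a cutoff point, here controlled with the mouse:

{ LPF.ar(Saw.ar(220, 0.4), MouseX.kr(100, 8000, 1)) ! 2 }.play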

Noise Sources

The definition of noise is a signal that is aperiodic, i.e., there is no periodic repetition of some form in the signal. If there were such repetition, we would talk about a wave form and then a frequency of those repetitions, and frequency becomes pitch or musical notes. Not so in the world of noise: there are no repetitions that we can detect and thus we perceive it as the opposite of a signal; the antithesis of meaning. Most of us remember the white noise of a dead analogue TV channel. Although noise might have negative connotations for some, it is a very useful musical element, in particular as a rich input signal for synthesis.

// WhiteNoise
{WhiteNoise.ar(0.4)}.plot(1)
{WhiteNoise.ar(0.4)}.play
{WhiteNoise.ar(0.4)}.scope
{WhiteNoise.ar(0.4)}.freqscope

// PinkNoise 
{PinkNoise.ar(1)}.plot(1)
{PinkNoise.ar(1)}.play
{PinkNoise.ar(1)}.freqscope

// BrownNoise
{BrownNoise.ar(1)}.plot(1)
{BrownNoise.ar(1)}.play
{BrownNoise.ar(1)}.freqscope

// Take a look at the source file called Noise.sc (or hit Apple+Y on WhiteNoise)
// You will find lots of interesting noise generators. For example these:

{ Crackle.ar(XLine.kr(0.99, 2, 10), 0.4) }.freqscope.scope;

{ LFDNoise0.ar(XLine.kr(1000, 20000, 10), 0.1) }.freqscope.scope;

{ LFClipNoise.ar(XLine.kr(1000, 20000, 10), 0.1) }.freqscope.scope;

// Impulse
{ Impulse.ar(80, 0.7) }.play
{ Impulse.ar(4, 0.7) }.play

// Dust (random impulses)
{ Dust.ar(80) }.play
{ Dust.ar(4) }.play

We can now start to sculpt sound with the use of filters and envelopes. For example, what would this remind us of:

{WhiteNoise.ar(1) * EnvGen.ar(Env.perc(0.001,0.3), doneAction:2)}.play

We can add a low pass filter (LPF) to the noise, so we cut off the high frequencies:

{LPF.ar(WhiteNoise.ar(1), 3300) * EnvGen.ar(Env.perc(0.001,0.5), doneAction:2)}.play

And here we use mouse movements to control the cutoff frequency (the x-axis) and the envelope duration (y-axis):

(
fork{
	100.do({
		{LPF.ar(WhiteNoise.ar(1), MouseX.kr(200,20000, 1)) 
			* EnvGen.ar(Env.perc(0.00001, MouseY.kr(1, 0.1, 1)), doneAction:2)}.play;
		1.wait;
	});
}
)

But what did that low pass filter do? LPF? HPF? A low pass filter passes through the low frequencies, thus the name; a high pass filter will pass through the high frequencies; and a band pass filter will pass through the frequencies of a band that you specify. We can view the functionality of the low pass filter with the use of a frequency scope. Note also the quality parameter in the resonant low pass filter:

{LPF.ar(WhiteNoise.ar(0.4), MouseX.kr(100, 20000).poll(20, "cutoff"))}.freqscope;
{RLPF.ar(WhiteNoise.ar(0.4), MouseX.kr(100, 20000).poll(20, "cutoff"), MouseY.kr(0.01, 1).p\
oll(20, "quality"))}.freqscope

Filter Types

Filters are algorithms that are typically applied in the time domain of an audio signal. This, for example, means adding a delayed copy of the signal to itself.

Here is a very primitive such filter:

{
var signal;
var delaytime = MouseX.kr(0.000022675, 0.001); // from the duration of one sample (1/44100) up to a millisecond
signal = Saw.ar(220, 0.5);
d =  DelayC.ar(signal, 0.6, delaytime); 
(signal + d).dup
}.play

Let us try some of the filter UGens of SuperCollider:

// low pass filter
{LPF.ar(WhiteNoise.ar(0.4), MouseX.kr(40,20000,1)!2) }.play;

// low pass filter with XLine
{LPF.ar(WhiteNoise.ar(0.4), XLine.kr(40,20000, 3, doneAction:2)!2) }.play;

// high pass filter
{HPF.ar(WhiteNoise.ar(0.4), MouseX.kr(40,20000,1)!2) }.play;

// band pass filter (the Q is controlled by the MouseY)
{BPF.ar(WhiteNoise.ar(0.4), MouseX.kr(40,20000,1), MouseY.kr(0.01,1)!2) }.play;

// Mid EQ filter attenuates or boosts a frequency band
{MidEQ.ar(WhiteNoise.ar(0.024), MouseX.kr(40,20000,1), MouseY.kr(0.01,1), 24)!2 }.play;

// what's happening here?
{
var signal = MidEQ.ar(WhiteNoise.ar(0.4), MouseX.kr(40,20000,1), MouseY.kr(0.01,1), 24);
BPF.ar(signal, MouseX.kr(40,20000,1), MouseY.kr(0.01,1)) !2
}.play;

Resonating filters

A resonant filter does what it says on the tin: it resonates certain frequencies. The bandwidth of this resonance can vary, so with a WhiteNoise input one could go from a very wide resonance (where the “quality” - the Q - of the filter is low) to a very narrow band resonance where the noise almost sounds like a sine wave. Let’s explore this with WhiteNoise and a band pass filter:

{BPF.ar(WhiteNoise.ar(0.4), MouseX.kr(100, 10000).poll(20, "cutoff"), MouseY.kr(0.01, 0.9999).poll(20, "rQ"))}.freqscope

Move your mouse around and explore how the Q factor, when increased, results in a narrower resonating bandwidth.

In resonant low pass and high pass filters, the energy at the cutoff frequency can be increased or decreased by setting the Q factor (or, in SuperCollider, the reciprocal (inverse) of Q).

// resonant low pass filter
{RLPF.ar(
	Saw.ar(222, 0.4), 
	MouseX.kr(100, 12000).poll(20, "cutoff"), 
	MouseY.kr(0.01, 0.9999).poll(20, "rQ")
)}.freqscope;
// resonant high pass filter
{RHPF.ar(
	Saw.ar(222, 0.4), 
	MouseX.kr(100, 12000).poll(20, "cutoff"), 
	MouseY.kr(0.01, 0.9999).poll(20, "rQ")
)}.freqscope;

There are bespoke resonance filters in SuperCollider, such as Resonz, Ringz and Formlet.

// resonant filter
{ Resonz.ar(WhiteNoise.ar(0.5), MouseX.kr(40,20000,1), 0.1)!2 }.play

// a short impulse won't resonate
{ Resonz.ar(Dust.ar(0.5), 2000, 0.1) }.play

// for that we use Ringz
{ Ringz.ar(Dust.ar(2, 0.6), MouseX.kr(200,6000,1), 2) }.play

// X is frequency and Y is ring time
{ Ringz.ar(Impulse.ar(4, 0, 0.3),  MouseX.kr(200,6000,1), MouseY.kr(0.04,6,1)) }.play

{ Ringz.ar(Impulse.ar(LFNoise2.ar(2).range(0.5, 4), 0, 0.3),  LFNoise2.ar(0.1).range(200, 3000), LFNoise2.ar(2).range(0.04, 6)) }.play

{ Mix.fill(10, {Ringz.ar(Impulse.ar(LFNoise2.ar(rrand(0.1, 1)).range(0.5, 1), 0, 0.1),  LFNoise2.ar(0.1).range(200, 12000), LFNoise2.ar(2).range(0.04, 6)) })}.play

{ Formlet.ar(Impulse.ar(4, 0.9), MouseX.kr(300,2000), 0.006, 0.1) }.play;

{ Formlet.ar(LFNoise0.ar(4, 0.2), MouseX.kr(300,2000), 0.006, 0.1) }.play;

Klank and DynKlank

Just as Klang is a bank of fixed frequency oscillators, i.e., additive synthesis, Klank is a bank of fixed frequency resonators, which emphasise (resonate) certain frequencies in an input signal.

{ Ringz.ar(Dust.ar(3, 0.3), 440, 2) + Ringz.ar(Dust.ar(3, 0.3), 880, 2) }.play

//  using only one Dust UGen to trigger all the filters:
(
{ 
var trigger, freq;
trigger = Dust.ar(3, 0.3);
freq = 440;
Ringz.ar(trigger, freq, 2, 0.3) 		+ 
Ringz.ar(trigger, freq*2, 2, 0.3) 	+ 
Ringz.ar(trigger, freq*3, 2, 0.3) !2
}.play
)

// but there is a better way:

// Klank is a bank of resonators like Ringz, but the frequency is fixed. (there is DynKlank)

{ Klank.ar(`[[800, 1071, 1153, 1723], nil, [1, 1, 1, 1]], Impulse.ar(2, 0, 0.1)) }.play;

// whitenoise input
{ Klank.ar(`[[440, 980, 1220, 1560], nil, [2, 2, 2, 2]], WhiteNoise.ar(0.005)) }.play;

// AudioIn input
{ Klank.ar(`[[220, 440, 980, 1220], nil, [1, 1, 1, 1]], AudioIn.ar([1])*0.001) }.play;

Let’s explore the DynKlank UGen. It does the same as Klank, but it allows us to change the values after the synth has been instantiated.

{ DynKlank.ar(`[[800, 1071, 1353, 1723], nil, [1, 1, 1, 1]], Dust.ar(8, 0.1)) }.play;

{ DynKlank.ar(`[[200, 671, 1153, 1723], nil, [1, 1, 1, 1]], PinkNoise.ar([0.007,0.007])) }.play;

{ DynKlank.ar(`[[200, 671, 1153, 1723]*XLine.ar(1, [1.2, 1.1, 1.3, 1.43], 5), nil, [1, 1, 1, 1]], PinkNoise.ar([0.007,0.007])) }.play;

SynthDef(\dynklanks, {arg freqs = #[200, 671, 1153, 1723]; 
	Out.ar(0, 
		DynKlank.ar(`[freqs, nil, [1, 1, 1, 1]], PinkNoise.ar([0.007,0.007]))
	)
}).add

a = Synth(\dynklanks)
a.set(\freqs, [333, 444, 555, 666])
a.set(\freqs, [333, 444, 555, 666].rand)

We know resonant filters when we hear them: the typical cry-baby wah-wah guitar pedal is a band pass filter, for example. In the examples below we use a SinOsc to “move” the band pass frequency up and down the frequency spectrum. The SinOsc is here effectively working as an LFO (Low Frequency Oscillator - usually with a frequency below 20 Hz).

{ BPF.ar(Saw.ar(440), 440+(3000* SinOsc.kr(2, 0, 0.9, 1))) ! 2 }.play;
{ BPF.ar(WhiteNoise.ar(0.5), 1440+(300* SinOsc.kr(2, 0, 0.9, 1)), 0.2) ! 2}.play;

Bell Synthesis using Subtractive Synthesis

The desired sound that you are trying to synthesize can be achieved through different methods. As an example, we could explore how to synthesize a bell sound with subtractive synthesis.

(
{
var chime, freqSpecs, burst, harmonics = 10;
var burstEnv, burstLength = 0.001;
freqSpecs = `[
	{rrand(100, 1200)}.dup(harmonics), //freq array
	{rrand(0.3, 1.0)}.dup(harmonics).normalizeSum, //amp array
	{rrand(2.0, 4.0)}.dup(harmonics)]; //decay rate array
burstEnv = Env.perc(0, burstLength); //envelope times
burst = PinkNoise.ar(EnvGen.kr(burstEnv, gate: Impulse.kr(1))*0.4); //Noise burst
Klank.ar(freqSpecs, burst)!2
}.play
)

This bell will be triggered every second, because the Impulse UGen is triggering the opening of the gate in the EnvGen (envelope generator) that uses the percussive envelope defined in the ‘burstEnv’ variable. If we wanted this to happen only once, we could set the frequency of the Impulse to zero. If we add a general envelope that frees the synth after being triggered, we can run a Task that triggers new bells at regular or random intervals:

(
Task({
	inf.do({
		{
		var chime, freqSpecs, burst, harmonics = 30.rand;
		var burstEnv, burstLength = 0.001;
		freqSpecs = `[
			{rrand(100, 8000)}.dup(harmonics), //freq array
			{rrand(0.3, 1.0)}.dup(harmonics).normalizeSum, //amp array
			{rrand(2.0, 4.0)}.dup(harmonics)]; //decay rate array
		burstEnv = Env.perc(0, burstLength); //envelope times
		burst = PinkNoise.ar(EnvGen.kr(burstEnv, gate: Impulse.kr(0))*0.5); //Noise burst
		Klank.ar(freqSpecs, burst)!2 * EnvGen.ar(Env.linen(0, 4, 0), doneAction: 2) 
		}.play;
		[0.125, 0.25, 0.5, 1].choose.wait;
	})
}).play
)

Simulating the Moog

The much loved MiniMoog is a typical subtractive synthesis synthesizer. A few oscillator types can be mixed together and subsequently passed through a characteristic resonant low pass filter. We could try to simulate a setting on the MiniMoog, using the MoogFF UGen that simulates the Moog VCF (Voltage Controlled Filter) low pass filter, and choosing, say, a saw wave form (the MiniMoog also has triangle, square, and two pulse waves).

We would typically start by sketching our synth by hooking up the UGens in a .play or .freqscope:

{MoogFF.ar(Saw.ar(440), MouseX.kr(400, 16000), MouseY.kr(0.01, 4))}.freqscope

A common trick when simulating analogue equipment is to try to recreate the detuned oscillators of the analog synth (they are typically out of tune due to the difference of temperature within the synth itself). We can do this by adding another oscillator with a few Hz difference in frequency:

// here we add two Saws and split the signal into two channels
{ MoogFF.ar(Saw.ar(440, 0.4) + Saw.ar(442, 0.4), 4000 ) ! 2 }.freqscope
// like this:
{ ( SinOsc.ar(220, 0, 0.4) + SinOsc.ar(330, 0, 0.4) ) ! 2 }.play

// here we "expand" the input of the filter into two channels (the array)
{ MoogFF.ar([Saw.ar(440, 0.4), Saw.ar(442, 0.4)], 4000 )  }.freqscope
// like this - so different frequencies in each speaker:
{ [ SinOsc.ar(220, 0, 0.4), SinOsc.ar(330, 0, 0.4) ] }.play

// here we "expand" the saw into two channels, but sum them and then split into two
{ MoogFF.ar(Saw.ar([440, 442], 0.4).sum, 4000 ) ! 2 }.freqscope
// like this - and this is the one we'll use, although they're all fine:
{ SinOsc.ar( [220, 333], 0, 0.4) ! 2 }.play

We can then start to add arguments and prepare the synth graph for turning it into a SynthDef:

{ arg out=0, freq = 440, amp = 0.3, pan = 0, cutoff = 2, gain = 2, detune=2;
	var signal, filter;
	signal = Saw.ar([freq, freq+detune], amp).sum;
	filter = MoogFF.ar(signal, freq * cutoff, gain );
	Out.ar(out, Pan2.ar(filter, pan));
}.play

The two synth graphs above are pretty much the same, except we have removed the mouse input in the latter one. You can see that the frequency, amp, pan, and filter cutoff values are derived from the default arguments in the top line. There are only two things left for us to do in order to have a good working general synth: add an envelope, and wrap the graph up in a SynthDef with a name:

SynthDef(\moog, { arg out=0, freq = 440, amp = 0.3, pan = 0, cutoff = 2, gain = 2, gate=1;
	var signal, filter, env;
	signal = Saw.ar(freq, amp);
	env = EnvGen.ar(Env.adsr(0.01, 0.3, 0.6, 1), gate: gate, doneAction:2);
	filter = MoogFF.ar(signal * env, freq * cutoff, gain );	
	Out.ar(out, Pan2.ar(filter, pan));
}).add;

a = Synth(\moog);
a.set(\freq, 222); // set the frequency of the synth
a.set(\cutoff, 4); // set the cutoff (this would cut off at the 4th harmonic. Why?)
a.set(\gate, 0); // kill the synth

We can now hook up a keyboard and play the \moog synth that we’ve designed. The MiniMoog is monophonic (only one note at a time), and it could be written like this:

(
MIDIIn.connectAll;
c = 4;
MIDIdef.noteOn(\myOndef, {arg vel, key, channel, device;
	a.release; 
	a = Synth(\moog, [\freq, key.midicps, \amp, vel/127, \cutoff, c]);
	[key, vel].postln; 
});
MIDIdef.noteOff(\myOffdef, {arg vel, key, channel, device; 
	a.release; 
	//a = nil;
	[key, vel].postln; 
});
)
c = 10; // change the cutoff frequency at a later point 
// the 'c' variable could be set from a GUI or a MIDI controller

An “a == nil” (or “a.isNil”) check in the noteOn function would make sure that we don’t press another note and overwrite the variable ‘a’ with another synth whilst a previous one is still sounding. If ‘a’ were overwritten, the noteOff handler would free only the last synth stored in ‘a’ and not the prior ones.
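A sketch of the handlers with such a check added (assuming the \moog SynthDef and the ‘c’ cutoff variable from above); try removing the condition and see what happens:

(
MIDIdef.noteOn(\myOndef, {arg vel, key, channel, device;
	if(a.isNil, { // only create a synth if the previous one has been released
		a = Synth(\moog, [\freq, key.midicps, \amp, vel/127, \cutoff, c]);
	});
	[key, vel].postln; 
});
MIDIdef.noteOff(\myOffdef, {arg vel, key, channel, device; 
	a.release; 
	a = nil; // clear the variable so the next noteOn can create a fresh synth
	[key, vel].postln; 
});
)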

Finally, we might want to improve the MiniMoog and add a polyphonic feature. As we saw in an earlier chapter, we simply create an array for all the possible MIDI notes and turn them on and off:

a = Array.fill(128, { nil }); // one slot for each possible MIDI note
MIDIIn.connectAll;
MIDIdef.noteOn(\myOndef, {arg vel, key, channel, device; 
	// we use the key as index into the array as well
	a[key] = Synth(\moog, [\freq, key.midicps, \amp, vel/127, \cutoff, 4]);
});
MIDIdef.noteOff(\myOffdef, {arg vel, key, channel, device; 
	a[key].release;
});

We will leave it up to you to decide how you want to control the cutoff and gain parameters of the MoogFF filter UGen. This could be done through knobs or sliders on a MIDI interface, on a GUI, or you could even decide to explore mapping key press velocity to the cutoff frequency, such that the note sounds brighter (or dimmer?) the harder you press the key.
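For example, one sketch of the velocity-to-cutoff idea in the polyphonic version, using the standard linlin mapping method (the 1 to 8 multiplier range is an assumption - tune it by ear):

MIDIdef.noteOn(\myOndef, {arg vel, key, channel, device; 
	// map velocity (0-127) linearly onto a cutoff multiplier between 1 and 8
	a[key] = Synth(\moog, [\freq, key.midicps, \amp, vel/127, \cutoff, vel.linlin(0, 127, 1, 8)]);
});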

Chapter 7 - Modulation

Modulating one signal with another is one of the oldest and most common techniques in sound synthesis. Here, any parameter of an oscillator can be modulated by the output of another oscillator. Filters, PlayBufs (sound file players) and other things can also be modulated. In this chapter we will explore modulation, and in particular amplitude modulation (AM), ring modulation (RM) and frequency modulation (FM).

LFOs (Low Frequency Oscillators)

As mentioned, most parameters or controls of an oscillator can be controlled by the output of another. Low frequency oscillators (LFOs) are oscillators that typically operate under 20 Hz, although in SuperCollider there is no point in trying to define oscillators as LFOs, as we might always want to increase that frequency to 40 or 400 Hz!

Here below are examples of oscillators that have different controls modulated by another UGen.

In the first example we have the frequency of one oscillator modulated by the output (amplitude) of another:

{ SinOsc.ar( 440 * SinOsc.ar(1), 0, 0.4) }.play

We hear that the modulation is 2 Hz, not one, and that is because the output of the modulating oscillator goes up to 1 and down to -1 in one second. So for one cycle of modulation per second, you would have to give it 0.5 as a frequency. Furthermore, a frequency argument with a negative sign is automatically turned into a positive one, as negative frequency does not make sense.

Let’s try the same for amplitude:

{ SinOsc.ar( 440, 0, 0.4 * SinOsc.ar(1)) }.play
// or perhaps using LFPulse (which outputs 1 and 0s if the amp is 1)
{ SinOsc.ar( 440, 0, 0.4 * LFPulse.ar(2)) }.play

We thus get the familiar effects of vibrato (modulation of frequency) and tremolo (modulation of amplitude), as they are commonly defined:

// vibrato
{SinOsc.ar(440+SinOsc.ar(4, 0, 10), 0, 0.4) }.play
// tremolo
{SinOsc.ar(440, 0, SinOsc.ar(3, 0, 1)) }.play

In modulation synthesis we talk about a “modulator” (the oscillator that does the modulation) and the “carrier” which is the main signal being modulated.

// mouseX is the power of the vibrato
// mouseY is the frequency of the vibrato
{
	var modulator, carrier;
	modulator = SinOsc.ar(MouseY.kr(20, 5), 0, MouseX.kr(5, 20)); 
	carrier = SinOsc.ar(440 + modulator, 0, 1);
	carrier ! 2 // the output
}.play

There are special Low Frequency Oscillators (LFOs) in SuperCollider. They are typically not band limited, which means that their harmonics alias (or mirror back) into the audible frequency range. Consider the difference between Saw (band-limited) and LFSaw (non-band-limited) here:

{Saw.ar(MouseX.kr(100, 10000), 0.5)}.freqscope
{LFSaw.ar(MouseX.kr(100, 10000), 0.5)}.freqscope

When you move your mouse, you can see how the band-limited Saw only gives you the harmonics above the fundamental frequency set by the mouse. On the other hand, with LFSaw, you get the harmonics mirroring back into the audible range at the Nyquist frequency (half the sampling rate, very often 22,050 Hz).

But the LF UGens are good for modulation and we can typically run them at control rate (using .kr rather than .ar), which typically means 64 times fewer calculations per second - that is, if the block size is set to 64 samples.

// LFSaw
{ SinOsc.ar(LFSaw.kr(4, 0, 200, 400), 0, 0.7) }.play

// LFTri
{ SinOsc.ar(LFTri.kr(4, 0, 200, 400), 0, 0.7) }.play
{ Saw.ar(LFTri.kr(4, 0, 200, 400), 0.7) }.play

// LFPar
{ SinOsc.ar(LFPar.kr(0.2, 0, 400,800),0, 0.7) }.play

// LFCub
{ SinOsc.ar(LFCub.kr(0.2, 0, 400,800),0, 0.7) }.play

// LFPulse
{ SinOsc.ar(LFPulse.kr(3, 1, 0.3, 200, 200),0, 0.7) }.play
{ SinOsc.ar(LFPulse.kr(3, 1, 0.3, 2000, 200),0, 0.7) }.play

// LFOs can also perform at audio rate
{ LFPulse.ar(LFPulse.kr(3, 1, 0.3, 200, 200),0, 0.7) }.play
{ LFSaw.ar(LFSaw.kr(4, 0, 200, 400), 0, 0.7) }.play
{ LFTri.ar(LFTri.kr(4, 0, 200, 400), 0, 0.7) }.play
{ LFTri.ar(LFSaw.kr(4, 0, 200, 800), 0, 0.7) }.play

Finally, we should note here at the end of this section on LFOs that the LFO frequency can of course go as high as you would like, but then it ceases being an LFO and starts to do a different type of synthesis, which we will look at below. In the examples here, you will start to hear strange artefacts appearing when the oscillation goes up over 20 Hz (observe the post window).

{SinOsc.ar(440+SinOsc.ar(XLine.ar(4, 200, 10).poll(20, "mod freq:"), 0, 20), 0, 0.4) }.play
{SinOsc.ar(440, 0, SinOsc.ar(XLine.ar(4, 200, 10).poll(20, "mod freq:"), 0, 1)) }.play

Theremin

We have now obviously found the technique to create a Theremin using vibrato and tremolo:

// Using the MouseX to control amplitude
	{
		var f;
		f = MouseY.kr(4000, 200, 'exponential', 0.8);
		SinOsc.ar(
			freq: f+ (f*SinOsc.ar(7,0,0.02)),
			mul: MouseX.kr(0, 0.9)
		)
	}.play

// Using the MouseX to control vibrato speed
	{
		var f;
		f = MouseY.kr(4000, 200, 'exponential', 0.8);
		SinOsc.ar(
			freq: f+ (f*SinOsc.ar(3+MouseX.kr(1, 6),0,0.02)),
			mul: 0.3
		)
	}.play

Amplitude Modulation (AM synthesis)

In one of the examples above, the XLine UGen drove the LFO frequency up over 20 Hz and we started to get some exciting artefacts in the sound. What was happening was that “sidebands” were appearing, i.e., partials on either side of the sine. Amplitude modulation is modulation where the carrier is modulated with unipolar values (that is, values between 0 and 1 - not bipolar (-1 to 1)).

In amplitude modulation, the sidebands are the sum and the difference of the carrier and the modulator frequency. For example, a 300 Hz carrier and 160 Hz modulator would generate 140 Hz and 460 Hz sidebands. However, the carrier frequency is always present.
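We can verify this on the frequency scope: with a 300 Hz carrier and a 160 Hz unipolar modulator, peaks should appear at 140, 300 and 460 Hz:

{ SinOsc.ar(300, 0, SinOsc.ar(160, 0, 0.5, 0.5)) }.freqscope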

{
	var modulator, carrier;
	modulator = SinOsc.ar(MouseX.kr(2, 20000, 1), 0, mul: 0.5, add: 0.5); // unipolar: output between 0 and 1
	carrier = SinOsc.ar(MouseY.kr(300,2000), 0, modulator);
	carrier ! 2;
}.play

If there are harmonics in the wave being modulated, each of the harmonics will have sidebands as well - check the saw wave:

{
	var modulator, carrier;
	modulator = SinOsc.ar(MouseX.kr(2, 2000, 1), mul: 0.5, add: 0.5);
	carrier = Saw.ar(533, modulator);
	carrier ! 2 // the output
}.play

In digital synthesis we can apply all kinds of mathematical operators to the sound, for example using .abs to calculate absolute values in the modulator. This results in many sidebands - try also using .cubed and other unary operators on the signal.

{
	var modulator, carrier;
	modulator = SinOsc.ar(MouseX.kr(2, 20000, 1)).abs;
	carrier = SinOsc.ar(MouseY.kr(200,2000), 0, modulator);
	carrier!2 // the output
}.play

Ring Modulation

As mentioned above, ring modulation uses bipolar modulation values (-1 to 1) whereas AM uses unipolar modulation values (0 to 1). This is why ordinary amplitude modulation outputs the original carrier frequency as well as the two sidebands for each of the spectral components of the carrier and modulation signals. Ring modulation, however, cancels out the carrier frequencies and simply outputs the sidebands.

{
	var modulator, carrier;
	modulator = SinOsc.ar(MouseX.kr(2, 200, 1));
	carrier = SinOsc.ar(333, 0, modulator);
	carrier!2;
}.play

Ring modulation was used a great deal in the early electronic music studios, for example in Cologne and at the BBC Radiophonic Workshop. The Barrons used the technique in the music for Forbidden Planet, and so did Stockhausen in his Mikrophonie II, where voices are modulated with the sound of a Hammond organ. Let’s try to ring modulate a voice:

b = Buffer.read(s, Platform.resourceDir +/+ "sounds/a11wlk01.wav");
{
	var modulator, carrier;
	modulator = SinOsc.ar(MouseX.kr(20, 200, 1));
	carrier = PlayBuf.ar(1, b, 1, loop:1) * modulator;
	carrier ! 2;
}.play;

Here a sine wave is modulating the voice of a girl saying “Columbia this is Houston, over…”. We could use one sound file to ring modulate the output of another:

b = Buffer.read(s, Platform.resourceDir +/+ "sounds/a11wlk01.wav");
c = Buffer.read(s, "yourSound.wav");
c.play
{
	var modulator, carrier;
	modulator = PlayBuf.ar(1, c, 1, loop:1);
	carrier = PlayBuf.ar(1, b, 1, loop:1) * modulator;
	carrier ! 2;
}.play;

Frequency Modulation (FM Synthesis)

FM Synthesis is a popular synthesis technique that works well for a number of sounds. It became popular with the Yamaha DX7 synthesizer in the 1980s, but it originates from the late 1960s, when John Chowning, musician and researcher at Stanford University, discovered the power of FM synthesis. He was working in the lab one day when he accidentally plugged the output of one oscillator into the frequency input of another, and he heard a sound rich with partials (or sidebands, as we call them in modulation synthesis). It’s important to realise that at the time an oscillator was expensive equipment, and the possibility of getting so many partials out of only two oscillators was very exciting in musical, engineering, and economical terms.

Chowning’s famous FM synthesis piece is called Stria and can be found on the interwebs. The piece was an eye opener for many musicians, as its sounds were so unusual in timbre, rendering the texture of the piece surprising and novel. Imagine being there at the time and hearing these “unnatural” sounds for the first time!

1980s synth pop music is of course full of the sounds of FM synthesis, with musicians typically using the DX7 - but very often using the pre-installed sounds of the synth itself rather than making their own. The reason for this could be that FM synthesis is quite hard to learn, as there are so many parameters at play in any sound. The story goes that the user interface of the DX7 prevented people from designing sounds in an effective and ergonomic way, hence the lack of new and exploratory sound design using that synth.

{SinOsc.ar(1400 + SinOsc.ar(MouseX.kr(2,2000,1), 0, MouseY.kr(1,1000)), 0, 0.5)!2}.freqscope

Using the frequency scope in the example above, you will see that when you move your mouse around, sidebands appear, evenly spaced, and the more amplitude the modulator has, the more sidebands you get. Let’s explore the above example with comments, in order to get the terminology right:

// a similar patch, with a 2000 Hz carrier - with explanations:
{
SinOsc.ar(2000 	// the carrier and the carrier frequency
	+ SinOsc.ar(MouseX.kr(2,2000,1),  // the modulator and the modulator frequency
		0, 					  // the phase of the modulator
		MouseY.kr(1,1000) 		  // the modulation depth (index)
		), 
0,		// the carrier phase 
0.5)	// the carrier amplitude
}.play

What is happening is that we have a carrier oscillator (the first SinOsc) with a frequency of 2000 Hz, and to this frequency we add the output of another oscillator. Note that the amplitude of the modulator is very high: it goes up to 1000, which would become uncomfortable for your ears were you to play it on its own. When you move the mouse across the x-axis, you notice that sidebands appear around the carrier frequency partial (of 2000 Hz), spaced at the modulator frequency. That is, if the modulator frequency is 250 Hz, you get sidebands of 1750 and 2250; 1500 and 2500; 1250 and 2750, etc. The stronger the modulation depth, or index, of the modulator (its amplitude, basically), the louder the sidebands will become.
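To see exactly this case on the scope - a 2000 Hz carrier, a 250 Hz modulator and a fixed modulation depth of 1000:

{ SinOsc.ar(2000 + SinOsc.ar(250, 0, 1000), 0, 0.3) ! 2 }.freqscope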

We could of course create all those sidebands with oscillators in an additive synthesis style, but note the efficiency of FM compared to additive synthesis:

// FM
{PMOsc.ar(1000, 800, 12, mul: EnvGen.kr(Env.perc(0, 0.5), Impulse.kr(1)))}.play;
 // compared with additive synthesis:
{ 
Mix.ar( 
 SinOsc.ar((1000 + (800 * (-20..20))),  // we're generating 41 oscillators (see *)
  mul: 0.1*EnvGen.kr(Env.perc(0, 0.5), Impulse.kr(1))) 
)}.play 

TIP:

// * run this line and see the frequency array that is mixed down with Mix.ar:
(1000 + (800 * (-20..20)))
// (I think this is an example from David Cope)

Below are two patches that serve well to explore the power of simple FM synthesis. In the first one, an LFNoise0 UGen is used to generate a new number between 40 and 80, 4 times per second. This number will be a floating point number (a fractional number), so it is rounded to an integer. The number is then treated as a MIDI note and turned into a frequency value using .midicps (which converts a MIDI note value into cycles per second).

{ var freq, ratio, modulator, carrier;
freq = LFNoise0.kr(4, 20, 60).round(1).midicps; 
ratio = MouseX.kr(1,4); 
modulator = SinOsc.ar(freq * ratio, 0, MouseY.kr(0.1,10));
carrier = SinOsc.ar(freq + (modulator * freq), 0, 0.5);
carrier	
}.play

// let's fork it and create a perc Env!
{	
	40.do({
			{ var freq, ratio, modulator, carrier;
			freq = rrand(60, 72).midicps; 
			ratio = MouseX.kr(0.5,2); 
			modulator = SinOsc.ar(freq * ratio, 0, MouseY.kr(0.1,10));
			carrier = SinOsc.ar(freq + (modulator * freq), 0, 0.5);
			carrier * EnvGen.ar(Env.perc(0, 1), doneAction:2)
		}.play;
		0.5.wait;
	});
}.fork

The PMOsc - Phase modulation

Frequency modulation and phase modulation are pretty much the same. In SuperCollider we have a PMOsc (Phase Modulation Oscillator), and we can try to make the above example using that:

{PMOsc.ar(1400, MouseX.kr(2,2000,1), MouseY.kr(0,1), 0)!2}.freqscope

You will note a feature in phase modulation, in that when the modulating frequency is low (< 20Hz), you don’t get the vibrato-like effect of the frequency modulation synth.

The magic of the PMOsc can be studied if we look under the hood. PMOsc is a pseudo-UGen, i.e., it is not written in C and compiled as a plugin for the SC-server, but rather defined when the class library of SuperCollider is compiled (on startup, or whenever you recompile the class library).

How does the PMOsc work? Let’s check the source file (Cmd+j or Ctrl+j). You will see that the PMOsc.ar method simply returns (with the ^ symbol) a SinOsc with another SinOsc in the phase argument slot.

PMOsc  {
	*ar { arg carfreq,modfreq,pmindex=0.0,modphase=0.0,mul=1.0,add=0.0; 
		^SinOsc.ar(carfreq, SinOsc.ar(modfreq, modphase, pmindex),mul,add)
	}	
	*kr { arg carfreq,modfreq,pmindex=0.0,modphase=0.0,mul=1.0,add=0.0; 
		^SinOsc.kr(carfreq, SinOsc.kr(modfreq, modphase, pmindex),mul,add)
	}
}

Here are a few examples for studying the PM oscillator:

{ PMOsc.ar(MouseX.kr(500,2000), 600, 3, 0, 0.1) }.play; // modulate carfreq
{ PMOsc.ar(2000, MouseX.kr(200,1500), 3, 0, 0.1) }.play; // modulate modfreq
{ PMOsc.ar(2000, 500, MouseX.kr(0,10), 0, 0.1) }.play; // modulate index

The SuperCollider documentation for PMOsc presents a nice demonstration that looks a bit like this:

e = Env.linen(2, 5, 2);
fork{
    inf.do({
        { LinPan2.ar(EnvGen.ar(e) 
			*
			PMOsc.ar(2000.0.rand,800.0.rand, Line.kr(0, 12.0.rand,9),0,0.1), 
			1.0.rand2)
			}.play;
        2.wait;
    })
}

Other examples of PM synthesis:

{ var freq, ratio;
freq = LFNoise0.kr(4, 20, 60).round(1).midicps; 
ratio = MouseX.kr(1,4); 
SinOsc.ar(freq, 				// the carrier and the carrier frequency
		SinOsc.ar(freq * ratio, 	// the modulator and the modulator frequency
		0, 					// the phase of the modulator
		MouseY.kr(0.1,10) 		// the modulation depth (index)
		), 
0.5)		// the carrier amplitude
}.play

Same patch without the comments and modulator and carrier put into variables:

{ var freq, ratio, modulator, carrier;
	freq = LFNoise0.kr(4, 20, 60).round(1).midicps; 
	ratio = MouseX.kr(1,4); 
	modulator = SinOsc.ar(freq * ratio, 0, MouseY.kr(0.1,10));
	carrier = SinOsc.ar(freq, modulator, 0.5);
	carrier	
}.play

The use of Envelopes in FM synthesis

Frequency modulation is a complex technique and Chowning’s initial research paper shows a wide range of applications of this synthesis method. For example, in the patch below, we have a much lower modulation amplitude (between 0 and 1) but we multiply the carrier frequency with the modulator.

(
var carrier, carFreq, carAmp, modulator, modFreq, modAmp; 
carFreq = 2000; 
carAmp = 0.2;		
modFreq = 327; 
modAmp = 0.2; 
{
	modAmp = MouseX.kr(0, 1); 	// choose normalized range for modulation
	modFreq = MouseY.kr(10, 1000, 'exponential');
	modulator = SinOsc.ar( modFreq, 0, modAmp);			
	carrier = SinOsc.ar( carFreq + (modulator * carFreq), 0, carAmp);
	[ carrier, carrier, modulator ]
}.play
)

And we can compare that technique with our initial FM example. In short, the frequency of the carrier is used as a parameter in the index (amplitude) of the modulator. These are design details, and there are multiple ways of using FM synthesis to arrive at the sound that you are after.

// current technique 
{ SinOsc.ar( 1400 + (SinOsc.ar( MouseY.kr(10, 1000, 1), 0, MouseX.kr(0, 1)) * 1400), 0, 0.5\
) ! 2 }.play
// our first example
{ SinOsc.ar(1400 + SinOsc.ar(MouseY.kr(10, 1000,1), 0, MouseX.kr(1,1000)), 0, 0.5) ! 2 }.pl\
ay

One of the key techniques in FM synthesis is to use envelopes to control the parameters of the modulator. By changing the width and amplitude of the sidebands, we can get many interesting sounds, for example trumpets, mallets or bells.

Let us first create a basic FM synthesis synth definition and try to play it with diverse arguments:

SynthDef(\fmsynth, {arg outbus = 0, freq=440, carPartial=1, modPartial=1, index=3, mul=0.2, ts=1;
	var mod, car, env;
	// modulator frequency
	mod = SinOsc.ar(freq * modPartial, 0, freq * index );
	// carrier frequency
	car = SinOsc.ar((freq * carPartial) + mod, 0, mul );
	// envelope
	env = EnvGen.ar( Env.perc(0.01, 1), doneAction: 2, timeScale: ts);
	Out.ar( outbus, car * env)
}).add;

Synth(\fmsynth, [ \outbus, 0, \freq, 600.0, \carPartial, 1.5, \ts, 1]);
Synth(\fmsynth, [ \outbus, 0, \freq, 600.0, \carPartial, 2.5, \ts, 2]);
Synth(\fmsynth, [ \outbus, 0, \freq, 600.0, \carPartial, 3.5, \ts, 2]);
Synth(\fmsynth, [ \outbus, 0, \freq, 600.0, \carPartial, 4.0, \ts, 2]);
Synth(\fmsynth, [ \outbus, 0, \freq, 300.0, \carPartial, 1.5, \ts, 2]);
Synth(\fmsynth, [ \outbus, 0, \freq, 600.0, \carPartial, 0.5, \ts, 2]);

Synth(\fmsynth, [ \outbus, 0, \freq, 600.0, \carPartial, 1.5, \modPartial, 1, \ts, 2]);
Synth(\fmsynth, [ \outbus, 0, \freq, 300.0, \carPartial, 1.5, \modPartial, 1, \ts, 2]);
Synth(\fmsynth, [ \outbus, 0, \freq, 400.0, \carPartial, 1.5, \modPartial, 1, \ts, 2]);
Synth(\fmsynth, [ \outbus, 0, \freq, 800.0, \carPartial, 1.5, \modPartial, 1, \ts, 2]);

Synth(\fmsynth, [ \outbus, 0, \freq, 600.0, \carPartial, 1.5, \modPartial, 1, \ts, 2]);
Synth(\fmsynth, [ \outbus, 0, \freq, 600.0, \carPartial, 1.5, \modPartial, 1.1, \ts, 2]);
Synth(\fmsynth, [ \outbus, 0, \freq, 600.0, \carPartial, 1.5, \modPartial, 1.15, \ts, 2]);
Synth(\fmsynth, [ \outbus, 0, \freq, 600.0, \carPartial, 1.5, \modPartial, 1.2, \ts, 2]);

Now let us add envelopes that change the modulator’s frequency and index over the course of each note:

SynthDef(\fmsynthenv, {arg outbus = 0, freq=440, carPartial=1, modPartial=1, index=3, mul=0.2, ts=1;
	var mod, car, env;
	var modfreqenv, modindexenv;
	modfreqenv = EnvGen.kr(Env.perc(0.1, ts/10, 0.125))+1; // add 1 so we're not starting from zero
	modindexenv = EnvGen.kr(Env.sine(ts, 1))+1;
	mod = SinOsc.ar(freq * modPartial * modfreqenv, 0, freq * index * modindexenv);
	car = SinOsc.ar((freq * carPartial) + mod, 0, mul );
	env = EnvGen.ar( Env.perc(0.01, 1), doneAction: 2, timeScale: ts);
	Out.ar( outbus, Pan2.ar(car * env))
}).add;

Synth(\fmsynthenv, [ \freq, 440.0, \ts, 10]);
Synth(\fmsynthenv, [ \freq, 440.0, \ts, 1]);
Synth(\fmsynthenv, [ \freq, 110.0, \ts, 2]);

Chapter 8 - Envelopes and shaping sound

In both analog and digital synthesis, we typically operate with sound sources that are constantly running - whether those are analog oscillators or digital unit generators. This is great fun of course, and we can delight in altering parameters by turning knobs or setting control values, sculpting the sound we are after. However, this sound is not very musical. Hardly any musical instrument sounds indefinitely; in instrumental sounds we typically get an initial burst of energy, then the sound reaches some sort of equilibrium until it fades out.

The way we shape these sounds in both analog and digital synthesis is to use so-called “envelopes.” They wrap around our sound and give it the desired shape we’re after. Most people have for example heard about the ADSR envelope (where the shape is Attack, Decay, Sustain, and Release) which is one of the available envelopes in SuperCollider:

The shape of an ADSR envelope

Envelopes in SuperCollider come in two types: sustaining (un-timed) and non-sustaining (timed) envelopes. A gate is a trigger (a positive number) that holds the envelope open until it gets a message to close it (such as 0 or less). This is like a finger pressing down a key on a MIDI keyboard. If we were using an ADSR envelope, when the finger presses the key we would run the A (attack) and the D (decay), but then the S (sustain) would last as long as the finger is pressed. On R (release), when the finger releases the key, the R argument defines how long it takes for the sound to fade out. Synths with gated envelopes can therefore be of indefinite duration, i.e., their duration is not set at the point of initialising the synth.

However, using a non-gated envelope, or a timed one, we set the duration of the sound at the time of triggering the synth. Here we don’t need to use a gate to trigger and release a synth.
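A minimal sketch of the two types (using the \gate.kr NamedControl style for the gate argument):

// gated (un-timed): the synth sustains until the gate closes
a = { SinOsc.ar(440, 0, 0.3) * EnvGen.kr(Env.adsr(0.01, 0.3, 0.5, 1), \gate.kr(1), doneAction: 2) }.play;
a.set(\gate, 0); // the release stage runs and the synth frees itself

// timed: the duration is fixed when the synth starts - no gate needed
{ SinOsc.ar(440, 0, 0.3) * EnvGen.kr(Env.perc(0.01, 2), doneAction: 2) }.play;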

Envelope types

Envelopes are powerful as we can define precisely the shape of a sound. This could be the amplitude of a sound, but it could also be a definition of frequency, filter cutoff, and so on. Let’s look at a few common envelope types in SuperCollider:

Env.linen(1, 2, 3, 0.6).test.plot;
Env.triangle(1, 1).test.plot;
Env.sine(1, 1).test.plot;
Env.perc(0.05, 1, 1, -4).test.plot;
Env.adsr(0.2, 0.2, 0.5, 1, 1, 1).test.plot;
Env.asr(0.2, 0.5, 1, 1).test.plot;
Env.cutoff(1, 1).test(2).plot;
// using .new you can define your own envelope with as many points as you like
Env.new([0, 1, 0.3, 0.8, 0], [2, 3, 1, 4],'sine').test.plot;
Env.new([0,1, 0.3, 0.8, 0], [2, 3, 1, 4],'linear').test.plot;
Env.new({1.0.rand}!10, {2.0.rand}!9).test.plot;
Env.new({1.0.rand}!100, {2.0.rand}!99).test.plot;

Different sounds require different envelopes. For example, if we wanted to synthesise a snare sound, we might choose to use the .perc method of Env.

{ LPF.ar(WhiteNoise.ar(0.5), 2000) * EnvGen.ar(Env.perc(0.001, 0.5)) ! 2 }.play

// And more bespoke envelopes can be created with the .new method:
{ Saw.ar(EnvGen.ar(Env.sine(0.3).range(140, 120))) * EnvGen.ar(Env.new([0, 1, 0, 0.5, 0], [0.3, 0, 0.1, 0])) ! 2 }.play

// Note that above we are using a .sine envelope to modulate the frequency argument of the Saw UGen.

Envelopes define points in time that have a target value, duration and shape. So we can define the value, length and shape of each of the nodes. The .new method expects arrays for the value, duration and shape arguments. This can be very useful, as through a very simple syntax you can create complex transitions of value through time:

Env.new([0, 1, 0.5, 1, 0], [1, 2, 3, 2], \welch).plot;
Env.new([0, 1, 0.5, 1, 0], [1, 2, 3, 2], \step).plot;
Env.new([0, 1, 0.5, 1, 0], [1, 2, 3, 2], \sqr).plot;
Env.new([0, 1, 0.5, 1, 0], [1, 2, 3, 2], [2, 0, 5, 3]).plot;
Env.new([0, 1, 0.5, 1, 0], [1, 2, 3, 2], [0, 0, 0, 0]).plot;
Env.new([0, 1, 0.5, 1, 0], [1, 2, 3, 2], [5, 5, 5, 5]).plot;
Env.new([0, 1, 0.5, 1, 0], [1, 2, 3, 2], [20, -20, 20, 20]).plot;

The last array defines the curve where 0 is linear, positive number curves the segment up, and a negative number curves it down. Check the Env documentation for further explanation.

The EnvGen - Envelope Generator

The envelope itself does nothing. It is simply a description of a form: of values in time and the shape of the line between those values. If we want to apply this envelope to a signal, we need to use the EnvGen UGen to play the envelope within a synth graph. You will note that EnvGen has .ar and .kr methods, so it works either at audio rate or control rate. The EnvGen arguments are the following:

EnvGen.ar(envelope, gate, levelScale, levelBias, timeScale, doneAction)

where the first argument is the envelope (for example Env.perc(0.1, 1)); the second is the gate (not used with timed envelopes, but the default of the gate argument is 1, so it triggers the envelope); the third is levelScale, which scales the levels (such as amplitude) of the envelope; the fourth is levelBias, which offsets the envelope’s breakpoints; the fifth is timeScale, which can shorten or stretch the envelope (so a second long Env.sine(1) could become 10 seconds long); and finally we have the doneAction, which defines what will happen to the synth instance after the envelope has done its job.
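For example, a one-second sine envelope stretched to ten seconds at half level (a minimal sketch of the levelScale and timeScale arguments):

{ SinOsc.ar(440, 0, 0.4) * EnvGen.kr(Env.sine(1), levelScale: 0.5, timeScale: 10, doneAction: 2) }.play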

doneActions

The doneActions are an important aspect of how the SC-server works. One of the key strengths of SuperCollider is how a synth can be created and removed very efficiently, making it useful for granular synthesis or the playback of notes. Here a grain or a note can be a synth that exists for 20 milliseconds or 20 minutes. Users of data flow languages, such as Pure Data, will appreciate how useful this is, as synths can be spawned at will and don’t need to be hard wired beforehand.

When the synth has exceeded its lifetime through the function of the envelope it will typically become silent. However, we don’t want to pile synths up after they have played, but rather free the server of them. Unused synths will still run, use up processing power (CPU), and eventually cause some distortion in the sound; for example, if hundreds of synths have not been freed from the server and are still using CPU.

The doneActions are the following:

  • 0 - Do nothing when the envelope has ended.
  • 1 - Pause the synth running, it is still resident.
  • 2 - Remove the synth and deallocate it.
  • 3 - Remove and deallocate both this synth and the preceding node.
  • 4 - Remove and deallocate both this synth and the following node.
  • 5 - Same as 3. If the preceding node is a group then free all members of the group.
  • 6 - Same as 4. If the following node is a group then free all members of the group.
  • 7 - Same as 3. If the synth is part of a group, free all preceding nodes in the group.
  • 8 - Same as 4. If the synth is part of a group, free all following nodes in the group.
  • 9 - Same as 2, but pause the preceding node.
  • 10 - Same as 2, but pause the following node.
  • 11 - Same as 2, but if the preceding node is a group then free its synths.
  • 12 - Same as 2, but if the following node is a group then free its synths.
  • 13 - Frees the synth and all preceding and following nodes.

The doneActions are used with the EnvGen UGen all the time, and it is important not to forget them. There are also other UGens in SuperCollider that can free their enclosing synth when some event has happened, such as a sample buffer finishing playing. These UGens are the following (see the Line example after the list):

  • PlayBuf and RecordBuf - doneAction when the buffer has been played or recorded.
  • Line and XLine - doneAction when a line has ended.
  • Linen - doneAction when the envelope is finished.
  • LFGauss - doneAction after the completion of a cycle.
  • DemandEnvGen - Similar to EnvGen.
  • DetectSilence - doneAction when the UGen detects silence below a threshold.
  • Duty and TDuty - doneAction evaluated when a duty stream ends.
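
For example, a Line can free its enclosing synth when it reaches its end value (a minimal sketch):

{ SinOsc.ar(440, 0, Line.kr(0.2, 0, 2, doneAction: 2)) ! 2 }.play;

The synth definition below has a gated ADSR envelope whose doneAction we pass in as an argument, so we can explore each doneAction in turn:
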
SynthDef(\sine, {arg freq=440, amp=0.1, gate=1, dA = 2;
	var signal, env;
	signal = SinOsc.ar(freq, 0, amp);
	env = EnvGen.ar(Env.adsr(0.2, 0.2, 0.5, 0.3, 1, 1), gate, doneAction: dA);
	Out.ar(0, Pan2.ar(signal * env, 0));
}).add

s.plotTree // watch the nodes appearing on the server tree

In the examples below, when you add a node, it is always added at the head of the node tree; this is what the SC server does by default. Synths can be added anywhere in the tree though, but that will be discussed later, in the chapter on busses, nodes and groups.

// doneAction = 0
a = Synth(\sine, [\dA, 0])
a.release
a.set(\gate, 1)

// doneAction = 1
a = Synth(\sine, [\dA, 1])
a.release
a.set(\gate, 1)
a.run(true)

// doneAction = 2
a = Synth(\sine, [\dA, 2])
a.release
a.set(\gate, 1) // it's gone! (see server synth count)

// doneAction = 3
a = Synth(\sine, [\dA, 3])
b = Synth(\sine, [\freq, 660, \dA, 3])
a.release

// doneAction = 3
a = Synth(\sine, [\dA, 3])
b = Synth(\sine, [\freq, 660, \dA, 3], addAction:\addToTail)
b.release

// doneAction = 3
a = Synth(\sine, [\freq, 440, \dA, 3])
b = Synth(\sine, [\freq, 660, \dA, 3])
c = Synth(\sine, [\freq, 880, \dA, 3])
b.release // will release b and c

// doneAction = 4
a = Synth(\sine, [\freq, 440, \dA, 4])
b = Synth(\sine, [\freq, 660, \dA, 4])
c = Synth(\sine, [\freq, 880, \dA, 4])
b.release // will release a and b

// doneAction = 5
g = Group.new;
a = Synth(\sine, [\freq, 440, \dA, 0], target:g)
b = Synth(\sine, [\freq, 660, \dA, 0], target:g)
c = Synth(\sine, [\freq, 880, \dA, 5])
c.release // will only free c (itself)

// doneAction = 5
g = Group.new;
a = Synth(\sine, [\freq, 440, \dA, 0], target:g)
b = Synth(\sine, [\freq, 660, \dA, 0], target:g)
c = Synth(\sine, [\freq, 880, \dA, 5], addAction:\addToTail)
c.release // will free itself and the preceding group

// doneAction = 6
g = Group.new;
a = Synth(\sine, [\freq, 440, \dA, 0], target:g)
b = Synth(\sine, [\freq, 660, \dA, 0], target:g)
c = Synth(\sine, [\freq, 880, \dA, 6])
c.release // will free itself and the following group

// doneAction = 7
g = Group.new;
a = Synth(\sine, [\freq, 440, \dA, 0], target:g )
b = Synth(\sine, [\freq, 660, \dA, 0], target:g)
c = Synth(\sine, [\freq, 880, \dA, 7], target:g)
d = Synth(\sine, [\freq, 1100, \dA, 0], target:g)
e = Synth(\sine, [\freq, 1300, \dA, 0], target:g)
c.release // will free itself and preceding nodes in a group

// doneAction = 8
g = Group.new;
a = Synth(\sine, [\freq, 440, \dA, 0], target:g)
b = Synth(\sine, [\freq, 660, \dA, 0], target:g)
c = Synth(\sine, [\freq, 880, \dA, 8], target:g)
d = Synth(\sine, [\freq, 1100, \dA, 0], target:g)
e = Synth(\sine, [\freq, 1300, \dA, 0], target:g)
c.release // will free itself and the following nodes in the group

// doneAction = 9
a = Synth(\sine, [\freq, 440, \dA, 9])
b = Synth(\sine, [\freq, 660, \dA, 0])
a.release // will free itself and pause the preceding node
b.run(true) // it was only paused

// doneAction = 10
g = Group.new;
a = Synth(\sine, [\freq, 440, \dA, 0], target:g)
b = Synth(\sine, [\freq, 660, \dA, 0], target:g)
c = Synth(\sine, [\freq, 880, \dA, 10])
d = Synth(\sine, [\freq, 1100, \dA, 0])
c.release // will free itself and pause the following node (the group g)
g.run(true) // it was only paused

// doneAction = 11
a = Synth(\sine, [\freq, 440, \dA, 11])
b = Synth(\sine, [\freq, 660, \dA, 0])
a.release // will free itself and the preceding node

// doneAction = 12
g = Group.new;
a = Synth(\sine, [\freq, 440, \dA, 0], target:g)
b = Synth(\sine, [\freq, 660, \dA, 0], target:g)
c = Synth(\sine, [\freq, 880, \dA, 12])
d = Synth(\sine, [\freq, 1100, \dA, 0])
c.release // will free itself and the synths of the following group (g)

// doneAction = 13
g = Group.new;
a = Synth(\sine, [\freq, 440, \dA, 0], target:g)
b = Synth(\sine, [\freq, 660, \dA, 0], target:g)
c = Synth(\sine, [\freq, 880, \dA, 13])
d = Synth(\sine, [\freq, 1100, \dA, 0])
x = Synth(\sine, [\freq, 2100, \dA, 0])
e = Synth(\sine, [\freq, 1300, \dA, 0])
c.release // will free itself and all preceding and following nodes

Triggers and Gates

The difference between a gated and a timed envelope should have become clear in the examples above, but to put it in very simple terms: think of the piano as having a timed envelope (the note dies away by itself), and the organ as having a gated envelope (the note only stops when the key is released). For user input it is good to be able to keep the envelope open for as long as the user wants and release it at some event, such as a key being released (or a person exiting a room in a sound installation).

Gates

Gates are typically used to start a sound that contains an envelope of some sort. They ‘open up’ and let a flow of values pass through for a period of time (timed or untimed). When a gate closes, the release part of the envelope typically runs.

d = Synth(\sine, [\freq, 1100]) // key down
d.release // key up

// compare with
d = Synth(\sine, [\freq, 840]) // key down
d.free // kill immediately

// gate holds the EnvGen open. Here using Dust (random impulses) to trigger a new envelope
{EnvGen.ar(Env.adsr(0.001, 0.8, 1, 1), Dust.ar(1)) *  Saw.ar(55)!2}.play

// Here using Impulse (periodic impulses)
{EnvGen.ar(Env.adsr(0.001, 0.8, 1, 1), Impulse.ar(2)) *  SinOsc.ar(LFNoise0.ar(2).range(200, 1000))!2}.play

// With a doneAction: 2 we kill the synth after the first envelope
{EnvGen.ar(Env.adsr(0.001, 0.8, 0.1, 0.1), Impulse.ar(2), doneAction:2) *  SinOsc.ar(2222)!2}.play

// but if we increase the release time of the envelope, it will be retriggered before the doneAction can kill it
{EnvGen.ar(Env.adsr(0.001, 0.8, 0.1, 1), Impulse.ar(2), doneAction:2) *  SinOsc.ar(1444)!2}.play

Triggers are similar to gates: they start a process, but they do not have the release function that gates have. They are therefore used to (re)trigger envelopes.

trigger rate - Arguments that begin with “t_” (e.g. t_trig), or that are specified as \tr in the def’s rates argument (see below), will be made as a TrigControl. Setting the argument will create a control-rate impulse at the set value. This is useful for triggers.

Triggers

In the example above we saw how Dust and Impulse could be used to trigger an envelope. A trigger can also be set from anywhere (code, GUI, system, etc.), but then we need to use “t_” in front of the trigger argument’s name.

(
a = { arg t_gate = 1;
	var freq;
	freq = EnvGen.kr(Env.new([200, 200, 800], [0, 1.6]), t_gate);
     SinOsc.ar(freq, 0, 0.2) ! 2 
}.play;
)

a.set(\t_gate, 1)  // try to evaluate this line repeatedly
a.free // if you observe the server window you see this synth disappearing

(
a = { arg t_gate = 1;
	var env;
	env = EnvGen.kr(Env.adsr, t_gate);
     SinOsc.ar(888, 0, 1 * env) ! 2 
}.play;
)

a.set(\t_gate, 1)  // repeat this
a.free // free the synth (since it didn't have a doneAction:2)

// If you are curious about what doneAction:2 would have done, try this:
(
a = { arg t_gate = 1;
	var env;
	env = EnvGen.kr(Env.adsr, t_gate, doneAction:2);
     SinOsc.ar(888, 0, 1 * env) ! 2 
}.play;
)

a.set(\t_gate, 1)  // why does this line not retrigger the synth?
// Now try the same with doneAction:0

If you want to keep the same synth on the server and trigger it from a process other than the one controlling its synthesis parameters, you can use gates and triggers for the envelope. Use doneAction: 0 to keep the synth on the server before and after the envelope is triggered.

Let’s turn the examples above into SynthDefs and explore the concept of gates:

SynthDef(\trigtest, {arg freq, amp, dur=1, gate;
	var signal, env;
	env = EnvGen.ar(Env.adsr(0.01, dur, amp, 0.7), gate, doneAction:0); 
	signal = SinOsc.ar(freq) * env;
	Out.ar(0, signal);
}).add

a = Synth(\trigtest, [\freq, 333, \amp, 1, \gate, 0]) // gate is 0, no sound
a.set(\gate, 1)
a.set(\gate, 0)

// the synth is still running, even if it is silent
a.set(\freq, 788) // change the frequency

a.set(\gate, 1)
a.set(\gate, 0)

The example below does the same, but here with a fixed-time envelope. Since such an envelope runs to completion on its own, it does not work with gates; we need a trigger to bring it back to life.

// here we use a t_trig to retrigger the synth
SynthDef(\trigtest2, {arg freq, amp, dur=1, t_trig;
	var signal, env;
	env = EnvGen.ar(Env.perc(0.01, dur, amp), t_trig, doneAction:0); 
	signal = SinOsc.ar(freq) * env;
	Out.ar(0, signal);
}).add

a = Synth(\trigtest2, [\freq, 333, \amp, 1, \t_trig, 1])

a.set(\freq, 788)
a.set(\t_trig, 1);
a.set(\amp, 0.28)
a.set(\t_trig, 1);

a.set(\freq, 588)
a.set(\t_trig, 1);
a.set(\amp, 0.8)
a.set(\t_trig, 1);

Exercise: Explore the difference between a gate and a trigger.

MIDI Keyboard Example

The techniques we’ve been exploring above are useful when creating user interfaces for your synth. As an example we could create a synth definition to be controlled by a MIDI controller. Other usage could be networked communication, input from other software, or running musical patterns within SuperCollider itself. In the example below we build upon the example we did in chapter 4, but here we add pitch bend and vibrato.

MIDIIn.connectAll; // we connect all the incoming devices
MIDIFunc.noteOn({arg ...x; x.postln; }); // we post all the args

//First we create a synth definition for this example:
SynthDef(\moog, {arg freq=440, amp=1, gate=1, pitchBend=0, cutoff=20, vibrato=0;
	var signal, env;
	signal = LPF.ar(VarSaw.ar([freq, freq+2]+pitchBend+SinOsc.ar(vibrato, 0, 1, 1), 0, XLine.ar(0.7, 0.9, 0.13)), (cutoff * freq).min(16000));
	env = EnvGen.ar(Env.adsr(0), gate, levelScale: amp, doneAction:2);
	Out.ar(0, signal*env);
}).add;

a = Array.fill(127, { nil }); // create an array of nils, where the Synths will live
g = Group.new; // we create a Group to be able to set cutoff of all active notes
c = 6;
MIDIdef.noteOn(\myOndef, {arg vel, key, channel, device; 
	// we use the key as index into the array as well
	a[key] = Synth(\moog, [\freq, key.midicps, \amp, vel/127, \cutoff, 10], target:g);
});
MIDIdef.noteOff(\myOffdef, {arg vel, key, channel, device; 
	a[key].release;
	a[key] = nil; // we put nil back in the array as we use it in the if-statements below
});

MIDIdef.cc(\myPitchBend, { arg val; 
	c=val.linlin(0, 127, -10, 10); 
	"Pitch Bend : ".post; val.postln;
	a.do({arg synth; 
		if( synth != nil , { synth.set(\pitchBend, c ) }); // use the scaled value c
	});	
});

MIDIdef.bend(\myVibrato, { arg val; 
	c=val.linlin(0, 127, 1, 20); 
	"Vibrato : ".post; val.postln;
	a.do({arg synth; 
		if( synth != nil , { synth.set(\vibrato, c ) }); 
	});	
});

Chapter 9 - Samples and Buffers

SuperCollider offers multiple ways of working with recorded sound. Sampling is one of the key techniques of computer music programming today, originating in tape-based instruments such as the Chamberlin or the Mellotron, but popularised in digital systems with samplers like the E-mu Emulator and the Akai S-series. Sampled sound is also the source of more recent techniques, such as granular and concatenative synthesis.

The first thing we need to know is that a sample is a collection of amplitude values in an array. At a 44.1 kHz sample rate, one second of mono sound gives us 44100 samples in the array, and twice that amount if the sound is stereo.

We could therefore generate one second of white noise like this:

Array.fill(44100, {1.0.rand2});

The interesting question then is: how do we play these samples? What mechanism will read this and send it to the sound card? For that we use Buffers and UGens that can read them, such as PlayBuf.

Buffers

In short, a buffer is a collection of values in the memory of the computer. In SuperCollider, buffers live on the server, not in the language, so in our white noise example above we would have to find a way to move our collection of values from the language over to the server (as that is where they would be played). Buffers can be used to contain all kinds of values in addition to sound, for example control data, gestural data from human movement, sonification data, and so on.

Allocating a Buffer

In order to create a buffer, we need to allocate it on the server. This is done through an .alloc method:

b = Buffer.alloc(s, 44100 * 4.0, 1); // 4 seconds of sound on a 44100 Hz system, 1 channel

// in the post window we get this information:
//  - > Buffer(0, 176400, 1, 44100, nil) // bufnum, number of samples, channels, sample-rate, path

// If you run the line again, you will see that the bufnum has increased by 1.

// and we can get to this information by calling the server:
b.bufnum;

c = Buffer.alloc(s, 44100 * 4.0, 2); // same but now 2 channels

// This means that we now have twice the number of samples, but the same number of frames
b.numFrames;
c.numFrames;

// and the number of channels
b.numChannels;
c.numChannels;

// It's clear though that 'c' has twice the number of samples, even if both buffers have an equal number of frames

b.numFrames * b.numChannels;
c.numFrames * c.numChannels;

As mentioned, buffers are collections of values in the RAM (Random Access Memory) of the computer. This means that the playhead can jump back and forth in the sound, play it fast or slow, backwards or forwards, and so on. But it also means that, unlike sound file playback from disk (where sound is buffered in at regular intervals), the whole sound is stored in the memory of the computer. Open your Terminal (or a system monitor such as top or Activity Monitor) and watch the scsynth process while running this line:

a = Array.fill(10, {Buffer.alloc(s,44100 * 8.0, 2)});

// You will see how the memory of the process called scsynth increases
// (scsynth is the name of the SuperCollider server process)

// now run the following line and watch when the memory is de-allocated.
10.do({arg i; a[i].free;})

We have now allocated some buffers on the server, but they only contain zeroes. Try playing one:

b.play
// We can load the samples from the server into an array ('a') in the language to check
// The server will send the values over to the language via OSC.
b.loadToFloatArray(action: {arg array; a = array; a.postln;})

a.postln // and we see lots of 0s.

If we wanted to listen to the noise we created above, we could simply load the array into the buffer.

a = Array.fill(44100, {1.0.rand2}); // 1 second of noise (in an array in the language)
b = Buffer.loadCollection(s, a); // this line loads the array into the buffer (on the server)
b.play // and now we have a beautiful noise!

// We could then observe the samples by getting them back to the language, like we did above:
a = Array.fill(44100, {arg i; i=i/10; sin(i)}); // fill an array with a sine wave
b = Buffer.loadCollection(s, a); // load the array onto the server
b.play // and now we have a beautiful sine!
b.loadToFloatArray(action: {arg array; a = array; Post << a}) // lots of samples

Reading a soundfile into a Buffer

We can read a sound file into a buffer simply by providing the path to it. This path is either absolute or relative to the SuperCollider application (so ‘hello.aif’ could be loaded if it sat next to the SuperCollider application). Note that the IDE allows you to drag a file from your file system into the code document, and the full path appears.

b = Buffer.read(s, "sounds/a11wlk01.wav");
b.bufnum; // let's check its bufnum

{ PlayBuf.ar(1, b) ! 2 }.play // the first argument is the number of channels

// We can wrap this into a SynthDef, of course
(
SynthDef(\playBuf,{ arg out = 0, bufnum;
	var signal;
	signal = PlayBuf.ar(1, bufnum, BufRateScale.kr(bufnum));
	Out.ar(out, signal ! 2)
}).add
)
x = Synth(\playBuf, [\bufnum, b.bufnum]) // we pass in either the buffer or the buffer number

x.free; // free the synth 
b.free; // free the buffer

// for many buffers, the typical thing to do is to load them into an array:
b = Array.fill(10, {Buffer.read(s, "sounds/a11wlk01.wav")});

// and then we can access it from the index in the array
x = Synth(\playBuf, [\bufnum, b[2].bufnum])

Since PlayBuf requires the number of channels of the sound file to be fixed in the SynthDef, users need to make sure that this matches, so people often come up with systems like this in their code:

b = Buffer.read(s, Platform.userAppSupportDir+/+"sounds/a11wlk01.wav");

SynthDef(\playMono, { arg out=0, buffer, rate=1;
	Out.ar(out, PlayBuf.ar(1, buffer, rate, loop:1) ! 2)
}).add;

SynthDef(\playStereo, { arg out=0, buffer, rate=1;
	Out.ar(out, PlayBuf.ar(2, buffer, rate, loop:1)) // no "! 2"
}).add;

// And then
if(b.numChannels == 1, {
	x = Synth(\playMono, [\buffer, b]) // we pass in either the buffer or the buffer number
}, {
	x = Synth(\playStereo, [\buffer, b]) // we pass in either the buffer or the buffer number
});

Note that we don’t need the “!2” in the stereo version as that would simply make the left channel expand into the right (and add to the right channel), whereas the right channel would expand into Bus 3. [Bus 1, Bus 2, Bus 3, Bus 4, Bus 5, etc….] [ Left , Right ] [ Left , Right ]

Let us play a little with Buffer playback in order to get a feel for the possibilities of sound stored in random access memory.

// Change the playback speed
{Pan2.ar(PlayBuf.ar(1, b, MouseX.kr(-1,2), loop:1))}.play

// Scratch around in the file
{ PlayBuf.ar(1, b, MouseX.kr(-1.5, 1.5), loop: 1) ! 2 }.play

// Or perhaps a bit more excitingly 
{
	var speed;
	speed = MouseX.kr(-10, 10);
	speed = speed - DelayN.kr(speed, 0.1, 0.1);
	speed = MouseButton.kr(1, 0, 0.3) + speed ;
	PlayBuf.ar(1, b, speed, loop: 1) ! 2;
}.play

// Another version
{BufRd.ar(1, b, Lag.ar( K2A.ar( MouseX.kr(0,1)) * BufFrames.ir(b), 1))!2}.play

// Jumping to a random location in the buffer using LFNoise0
{PlayBuf.ar(1, b, 1, LFNoise0.ar(12)*BufFrames.ir(b), loop:1)!2}.play

// And so on ….

Recording live sound into a Buffer

Live sound can of course be fed directly into a Buffer for further manipulation. This could be useful if you are recording the sound, transforming it, overdubbing, cutting it up, scratching, and so on. However, in many cases a simple SoundIn UGen might be sufficient (and no Buffers used).

b = Buffer.alloc(s, 44100 * 4.0, 1); // 4 second mono buffer
// Warning, you might get feedback if you're not using headphones
{ RecordBuf.ar(SoundIn.ar(0), b); nil }.play; // run this for at least 4 seconds
{ PlayBuf.ar(1, b) }.play; // play it back

SuperCollider really makes this simple. However, RecordBuf does more than simply record: since it loops, you can also overwrite the data already in the buffer using the preLevel argument. preLevel is the amount by which the data in the buffer is multiplied before the incoming sound is added to it. We can now explore this in a more SuperCollider way of doing things, with SynthDefs and Synths.

SynthDef(\recBuf, { arg buffer=0, recLevel=0.5, preLevel=0.5;
	var in;
	in = SoundIn.ar(0);
	RecordBuf.ar(in, buffer, 0, recLevel, preLevel, loop:1);
}).add;

// we record into the buffer
x = Synth(\recBuf, [\buffer, b, \preLevel, 0]);
x.free;

// and we can play it back using the playBuf synthdef we created above
z = Synth(\playMono, [\buffer, b])
z.free;

// We could also explore the overdubbing of sound (leave this running)
(
x = Synth(\recBuf, [\buffer, b]); // here preLevel is 0.5 by default
z = Synth(\playMono, [\buffer, b, \rate, 1.5]); 
)

// Change the playback rate of the buffer
z.set(\rate, 0.75);

// if we like what we have recorded, we can easily write it to disk as a soundfile:
b.write("myBufRecording.aif", "AIFF", 'int16');

It is clear that playing with the recLevel and preLevel of a buffer recording can create interesting layers of sound, where instrumentalists can record on top of what they have already recorded. People could also engage in an “I Am Sitting in a Room” exercise à la Lucier.

Finally, as mentioned at the beginning of this chapter, buffers can contain any data and are not necessarily bound to audio content. In the example below we use the buffer to record mouse values at control rate (which is sample rate / block size) and write that mouse movement to disk in the form of an audio file.

b = Buffer.alloc(s, (s.sampleRate/s.options.blockSize) * 5, 1); // 5 secs of control rate
{RecordBuf.kr(MouseY.kr, b); SinOsc.ar(1000*MouseY.kr) }.play // recording the mouse
b.write("mouse.aif") // write the buffer to disk, aif is as good format as any

// play it back
b = Buffer.read(s, "mouse.aif")
{SinOsc.ar(1000*PlayBuf.kr(1, b))}.play

BufRd and BufWr

There are other UGens that can be helpful when playing back buffers. BufRd (buffer read) and BufWr (buffer write) are good examples, and so is LoopBuf (from the sc3-plugins extensions distribution).

In the example below we use a Phasor to ‘drive’ the reading of the buffer. BufRd reads the buffer sample by sample from an audio-rate phase input, so we give it the range of frames we want it to run through:

{ BufRd.ar(1, b, Phasor.ar(0, 1, 0, BufFrames.kr(b))) }.play;

// This way we can easily use SinOsc to modulate the play rate
{ BufRd.ar(1, b, Phasor.ar(0, SinOsc.ar(1).range(0.5, 1.5), 0, BufFrames.kr(b))) }.play;

// And we can also use the mouse to drive the reading 
b = Buffer.read(s, "sounds/a11wlk01.wav");

// Move the mouse!
SynthDef(\scratch, {arg bufnum, pitch=1, start=0, end;
	var signal;
	signal = BufRd.ar(1, bufnum, Lag.ar(K2A.ar(MouseX.kr(1, end)), 0.4));
	Out.ar(0, signal!2);
}).play(s, [\bufnum, b.bufnum, \end, b.numFrames]);

Streaming from disk

If your sound file is very long, it is probably a good idea to stream the sound from disk, just like popular digital audio workstations do, since long stereo files would quickly fill up your RAM if you work with many sound files.

// We still need a buffer (but we are cueing it, i.e. not filling)
b = Buffer.cueSoundFile(s, Platform.resourceDir +/+ "sounds/a11wlk01-44_1.aiff", 0, 1);

SynthDef(\playcuedBuf,{ arg out = 0, bufnum;
	var signal;
	signal = DiskIn.ar(1, bufnum, loop:1);
	Out.ar(out, signal ! 2)
}).add;

x = Synth(\playcuedBuf, [\bufnum, b]);
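
When you are done streaming, a cued buffer should be closed (which closes the sound file) as well as freed:

x.free; // stop the synth
b.close; b.free; // close the file and free the buffer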

Wavetables and wavetable look-up oscillators

Wavetables are a classic method of sound synthesis. The technique works similarly to reading a Buffer with BufRd above, but here we create a bespoke wavetable (which can often be visualised for manipulation) and use wavetable look-up oscillators to play the content of the wavetable back. In fact, many of the oscillators in SuperCollider use wavetable look-up under the hood, SinOsc being a good example.

Let’s start with creating a SynthDef with an Osc (which is a wavetable look-up oscillator). It expects to get a signal in the form of a SuperCollider Wavetable, which is a special format for interpolating oscillators.

(
SynthDef(\wavetable,{ arg out = 0, buffer;
	var signal;
	signal = Osc.ar(buffer, MouseX.kr(60,300)); // mouseX controlling pitch
	Out.ar(out, signal ! 2)
}).add
)

// we then allocate a Buffer with 512 samples (the buffer size must be a power of 2)
b = Buffer.alloc(s, 512, 1); 
b.sine1(1.0, true, true, true); // and we fill it with a sine wave

b.plot // notice something strange?
b.getToFloatArray(action: { |array|  { array[0, 2..].plot }.defer }); // check this

// let's listen to it
a = Synth(\wavetable, [\buffer, b])
a.free;

// You can hear that it sounds very different from a PlayBuf trying to play the same buffer (here we get aliasing), since PlayBuf is not band-limited:

{PlayBuf.ar(1, b, MouseX.kr(-1, 10), loop:1)}.play;

// We can then create different waveforms
b.sine1(1.0/[1,2,3,4], true, true, true); //
b.getToFloatArray(action: { |array|  { array[0, 2..].plot }.defer }); // view the wave
a = Synth(\wavetable, [\buffer, b])
a.free;

// A saw wave
b.sine1(0.3/Array.series(90,1,1)*2, false, true, true);
b.getToFloatArray(action: { |array|  { array[0, 2..].plot }.defer });
a = Synth(\wavetable, [\buffer, b])
a.free;

// Random numbers
b.sine1(Array.fill(50, {1.0.rand}), true, true, true);
b.getToFloatArray(action: { |array|  { array[0, 2..].plot }.defer });

a = Synth(\wavetable, [\buffer, b])
a.free;

// We can also use an envelope to fill a buffer
a = Env([0, 1, 0.2, 0.3, -1, 0.3, 0], [0.1, 0.1, 0.1, 0.1, 0.1, 0.1], \sin);
a.plot; // view this envelope 

// But we need to turn the envelope into a signal and then into a wavetable
c = a.asSignal(256).asWavetable;
c.size; // the size of the wavetable is twice the size of the signal... 512

// now we need to put this wavetable into a buffer:
b = Buffer.alloc(s, 512);
b.setn(0, c);

// play it
a = Synth(\wavetable, [\buffer, b])
a.free;

// try to load the above without turning the data into a wavetable, i.e.,
a = Env([0, 1, 0.2, 0.3, -1, 0.3, 0], [0.1, 0.1, 0.1, 0.1, 0.1, 0.1], \sin);
c = a.asSignal(256);
b = Buffer.alloc(s, 512);
b.setn(0, c);
a = Synth(\wavetable, [\buffer, b])

// and you will hear aliasing where the partials of the sound mirror back into the audio range

Above we saw how an envelope was turned into a Signal, which was then converted to a Wavetable. Signals are a type of numerical collection in SuperCollider that allows for various math operations. These can be useful for FFT manipulation of data arrays or simply for writing data to a file, as in this example:

f = SoundFile.new;
f.openWrite( Platform.userAppSupportDir +/+ "sounds/writetest.wav");
d = Signal.fill(44100, { |i| // one second of sound  
	// 1.0.rand2;  // white noise
	// sin(i/10); // a sine wave
	sin(i/10).cubed;
});
f.writeData(d);
f.close;

Below we explore further how Signals can be used with wavetable oscillators.

x = Signal.sineFill(512, [0,0,0,1]);
// We can now operate in many ways on the signal
[x, x.neg, x.abs, x.sign, x.squared, x.cubed, x.asin.normalize, x.exp.normalize, x.distort].flop.flat.plot(numChannels: 9);

c = x.asWavetable;

b = Buffer.alloc(s, 512);
b.setn(0, c); // set the wavetable into the buffer so Osc can read it.

// play it
a = Synth(\wavetable, [\buffer, b])
a.free;

// And the following lines will load a different wavetable into the buffer
c = x.exp.normalize.asWavetable;
b.setn(0, c);
c = x.abs.asWavetable;
b.setn(0, c);
c = x.squared.asWavetable;
b.setn(0, c);
c = x.asin.normalize.asWavetable;
b.setn(0, c);
c = x.distort.asWavetable;
b.setn(0, c);

// try also COsc (Chorusing wavetable oscillator)
{COsc.ar(b, MouseX.kr(60,300))!2}.play

// OscN
{OscN.ar(b, MouseX.kr(60,300))!2}.play // works better with the non-asWavetable example above

// Variable OSC - which can morph between wavetables
b = {Buffer.alloc(s, 512)} ! 9;
x = Signal.sineFill(512, [0,0,0,1]);
[x, x.neg, x.abs, x.sign, x.squared, x.cubed, x.asin.normalize, x.exp.normalize, x.distort].do({arg signal, i; b[i].setn(0, signal.asWavetable)});

{ VOsc.ar(b[0].bufnum + MouseX.kr(0,7), [120,121], 0, 0.3) }.play

// change the content of the wavetables to something random
9.do({arg i; b[i].sine1(Array.fill(512, {1.0.rand2}), true, true, true); })

// VOsc3 
{ VOsc3.ar(b[0].bufnum + MouseX.kr(0,7), [120,121], 0, 0.3) }.play

People often want to draw their own sound in a wavetable. We can end this excursion into wavetable synthesis by creating a graphical user interface that allows for the drawing of wavetables.

(
var size = 512;
var canvas, wave, lastPos, lastVal;

w = Window("Wavetable", Rect(100, 100, 1024, 500)).front;
wave = Signal.sineFill(size, [1]);
b = Buffer.alloc(s, size * 2); // double the size for the wavetable

Slider(w, Rect(0, 5, 1024, 20)).action_({|sl| x.set(\freq, sl.value*1000)});  
  UserView(w, Rect(0, 30, 1024, 470))
    .background_(Color.black)
    .animate_(true)
    .mouseMoveAction_({ |me, x, y, mod, btn|
       var pos = (size * (x / me.bounds.width)).floor;
       var val = (2 * (y / me.bounds.height)) - 1;
       val = min(max(val, -1), 1);
       wave.clipPut(pos, val);
       if(lastPos != nil, {
           for(lastPos + 1, pos - 1, { |i|
               wave.clipPut(i, lastVal + (((i - lastPos) / (pos - lastPos)) * (val - lastVal)));
           });
           for(pos + 1, lastPos - 1, { |i|
               wave.clipPut(i, lastVal + (((i - lastPos) / (pos - lastPos)) * (val - lastVal)));
           });
       });
       lastPos = pos;
       lastVal = val;
       b.loadCollection(wave.asWavetable);
       })
       .mouseUpAction_({
           lastPos = nil;
          lastVal = nil;
       })
       .drawFunc_({ |me|
	         Pen.color = Color.white;
           Pen.moveTo(0@(me.bounds.height * (wave[0] + 1) / 2));
           for(1, size - 1, { |i, a|
               Pen.lineTo((me.bounds.width * i /size)@(me.bounds.height * (wave[i] + 1)/2))
           });
           Pen.stroke;
       });
b.loadCollection(wave.asWavetable);
x = {arg freq=440; Osc.ar(b, freq) *0.4 ! 2 }.play;
)

Pitch and time changes

b = Buffer.read(s, "sounds/a11wlk01-44_1.aiff");

// The most common way
// here double rate (and pitch) results in half the length (time) of the file

(
SynthDef(\playBuf,{ arg out = 0, bufnum;
	var signal;
	signal = PlayBuf.ar(1, bufnum, MouseX.kr(0.2, 4), loop:1);
	Out.ar(out, signal ! 2)
}).add
)

x = Synth(\playBuf, [\bufnum, b.bufnum])
x.free

// we could use PitchShift to change the pitch without changing the time
// PitchShift is a granular synthesis pitch shifter (other techniques include Phase Vocoders)

(
SynthDef(\playBufWPitchShift,{ arg out = 0, bufnum;
	var signal;
	signal = PlayBuf.ar(1, bufnum, 1, loop:1);
	signal = PitchShift.ar(
		signal,	// stereo audio input
		0.1, 			// grain size
		MouseX.kr(0,2),	// mouse x controls pitch shift ratio
		0, 				// pitch dispersion
		0.004			// time dispersion
	);
	Out.ar(out, signal ! 2)
}).add
)

x = Synth(\playBufWPitchShift, [\bufnum, b.bufnum])
x.free

// for time stretching check out the Warp0 and Warp1 UGens.

Chapter 10 - Granular and Concatenative Synthesis

Granular synthesis is a synthesis technique that became available for most practical purposes with digital computer music software. Early pioneers were Barry Truax and Iannis Xenakis, but the technique has been well explored in the work of Curtis Roads, both in his musical output and in a fine book called Microsound. The idea in granular synthesis is to synthesize a sound using small grains, typically of 10-50 millisecond duration, that are wrapped in envelopes. These grains can then result in a continuous sound or more discontinuous ‘grain clouds’. Here the individual grains become the building blocks, almost atoms, of a more complex structure.

Granular Synthesis

Granular synthesis is used in many pitch-shifting and time-stretching features of commercial software, so most people will be well aware of its functionality and power. Let us explore pitch shifting through the use of a native SuperCollider UGen, PitchShift. In the examples below, the grains are 100 ms windows that overlap. What is really happening is that the sample is played at a variable rate (where a rate of 2 is an octave higher), but the grains are layered on top of each other in order to maintain the duration of the sound.

[image: An example of a grain]
b = Buffer.read(s, Platform.userAppSupportDir+/+"sounds/a11wlk01.wav");

// MouseX controls the pitch
{ PitchShift.ar(PlayBuf.ar(1, b, 1, loop:1), 0.1, MouseX.kr(0,2), 0, 0.01) ! 2}.play;
// Same as above, but here MouseY gives random pitch
{ PitchShift.ar(PlayBuf.ar(1, b, 1, loop:1), 0.1, MouseX.kr(0,2), MouseY.kr(0, 2), 0.01) ! 2}.play;

The grains are windows with a specific envelope (typically a Hanning envelope), and they overlap in order to create the continuous sound. Play around with the window size and overlap parameters to explore how they result in different sounds, as in the sketch below.
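
As a sketch, compare different fixed window sizes by re-evaluating; note that the windowSize of PitchShift is set when the synth starts and cannot be modulated:

{ PitchShift.ar(PlayBuf.ar(1, b, 1, loop:1), 0.5, MouseX.kr(0,2), 0, 0.01) ! 2}.play; // large, smearing windows
{ PitchShift.ar(PlayBuf.ar(1, b, 1, loop:1), 0.02, MouseX.kr(0,2), 0, 0.01) ! 2}.play; // tiny, grainy windows

The examples above used PitchShift for the purposes of changing the pitch but keeping the same playback rate. Below we use Warp1 to time-stretch sound where the pitch remains the same.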

// speed up the sound (with same pitch)
{Warp1.ar(1, b, Line.kr(0,1, 1), 1, 0.1, -1, 8, 0.1, 2)!2}.play

// slow down the sound (with the same pitch)
{Warp1.ar(1, b, Line.kr(0,1, 10), 1, 0.09, -1, 8, 0.1, 2)!2}.play

// use the mouse to read the sound (at the same pitch)
{Warp1.ar(1, b, MouseX.kr(0,1), 1, 0.1, -1, 8, 0.1, 2)!2}.play

// A SinOsc reading the sound (at the same pitch)
{Warp1.ar(1, b, SinOsc.kr(0.07).range(0,1), 1, 0.1, -1, 8, 0.1, 2)!2}.play

// use the mouse to read the sound (and control the pitch)
{Warp1.ar(1, b, MouseX.kr(0,1), MouseY.kr(0.5,2), 0.1, -1, 8, 0.1, 2)!2}.play

TGrains

The TGrains UGen - or Trigger Grains - is a handy UGen for quick and basic granular synthesis. Here we can pass arguments such as the number of grains per second, grain duration, rate (which is pitch), and so on.

// mouse Y controlling number of grains per second
{TGrains.ar(2, Impulse.ar(MouseY.kr(1, 30)), b, 1, MouseX.kr(0,BufDur.kr(b)), 2/MouseY.kr(1, 10), 0, 0.8, 2)}.play

// mouse Y controlling pitch
{TGrains.ar(2, Impulse.ar(20), b, MouseY.kr(0.5, 2), MouseX.kr(0,BufDur.kr(b)), 2/MouseY.kr(1, 10), 0, 0.8, 2)}.play

// random pitch and location, with mouse X controlling the number
// of grains per second and mouse Y controlling grain duration
{
TGrains.ar(2, 
	Impulse.ar(MouseX.kr(1, 50)), 
	b, 
	LFNoise0.ar(40, add:1), 
	LFNoise0.ar(40).abs * BufDur.kr(b), 
	MouseY.kr(0.01, 0.05), 
	0, 
	0.8, 
	2)
}.play

GrainIn

GrainIn enables you to granularise incoming audio. This UGen is part of a collection of granular UGens, such as GrainSin, GrainFM, and GrainBuf. Take a look at the documentation of these UGens and explore their functionality.

SynthDef(\sagrain, {arg amp=1, grainDur=0.1, grainSpeed=10, panWidth=0.5;
	var pan, granulizer;
	pan = LFNoise0.kr(grainSpeed, panWidth);
	granulizer = GrainIn.ar(2, Impulse.kr(grainSpeed), grainDur, SoundIn.ar(0), pan);
	Out.ar(0, granulizer * amp);
}).add;

x = Synth(\sagrain)

x.set(\grainDur, 0.02)
x.set(\amp, 0.02)
x.set(\amp, 1)

x.set(\grainDur, 0.1)
x.set(\grainSpeed, 5)
x.set(\panWidth, 1)

Custom built granular synthesis

Having explored some features of granular synthesis above, the best way to really understand what granular synthesis is would be to make our own granular synth engine that spawns grains of some sort according to our own rate, pitch, wave form, and envelope.

In the examples above we have continued the chapter on Buffers and used sampled sound as the source of our granular synthesis. Here below we will explore the technique with simpler waveforms, such as the sine wave.

SynthDef(\sineGrain, { arg freq=800, amp=0.4, dur=0.1, pan=0;
	var signal, env;
	// A sine-shaped (Hanning-like) envelope; doneAction: 2 frees the synth from the server after playback
	env = EnvGen.kr(Env.sine(dur, amp), doneAction: 2);
	signal = FSinOsc.ar(freq, 0, env);
	OffsetOut.ar(0, Pan2.ar(signal, pan)); 
}).add;

Synth(\sineGrain, [\freq, 500, \dur, 0.05]) // 50 ms grain duration

// we can then trigger 1000 grains, one every 10 ms
(
Task({
   1000.do({ 
   		Synth(\sineGrain, 
			[\freq, rrand(440, 1600),
			\amp, rrand(0.1,0.3),
			\dur, rrand(0.02, 0.1)
			]);
		0.01.wait;
	});
}).start;
)

If our grains all have the same pitch, we should be able to generate a continuous sine wave out of the grains, as they will be overlapping as shown in this image:

[image]

Task({
   1000.do({ 
   		Synth(\sineGrain, 
			[\freq, 440,
			\amp, 0.4,
			\dur, 0.1
			]);
		0.05.wait; // density
	});
}).start;

But the sound is not perfectly continuous. This is because when we create a Synth, the message is sent to the server as quickly as possible, but since language-server communication is asynchronous, the time it takes each OSC message to arrive varies slightly, and this causes the fluctuation. We therefore need to timestamp our messages, which can be done either through messaging-style communication with the server, or by using s.bind (which makes an OSC bundle under the hood and sends a time-stamped OSC message to the server).

Task({
   1000.do({ 
		s.sendBundle(0.2, 
			["/s_new", \sineGrain, x = s.nextNodeID, 0, 1], 
			["/n_set", x, \freq, 440, \amp, 0.4, \dur, 0.1]
		);
		0.05.wait; // density
	});
}).start;

// Or simply (and probably more readably)
Task({
   1000.do({
		s.bind{
			Synth(\sineGrain, 
				[\freq, 440,
				\amp, 0.4,
				\dur, 0.1
			]);
		};
		0.05.wait; // density
	});
}).start;

There can be different envelopes in the grains. Here we use a Perc envelope:

SynthDef(\sineGrainWPercEnv, { arg freq = 800, amp = 0.1, envdur = 0.1, pan=0;
	var signal;
	signal = FSinOsc.ar(freq, 0, EnvGen.kr(Env.perc(0.001, envdur), doneAction: 2)*amp);
	OffsetOut.ar(0, Pan2.ar(signal, pan)); 
}).add;

Task({
   1000.do({
		s.bind{
			Synth(\sineGrainWPercEnv, 
				[\freq, rrand(1300, 4000),
				\amp, rrand(0.1, 0.2),
				\envdur, rrand(0.1, 0.2),
				\pan, 1.0.rand2
			]);
		};
		0.01.wait; // density
	});
}).start;

// Or doing the same using the Pbind Pattern
Pbind(
	\instrument, \sineGrainWPercEnv,
	\freq, Pfunc({rrand(1300, 4000)}),
	\amp, Pfunc({rrand(0.1, 0.2)}),
	\envdur, Pfunc({rrand(0.1, 0.2)}),
	\dur, 0.01, // density
	\pan, Pfunc({1.0.rand2})
).play;

The two examples above serve as a good illustration of how Patterns and Tasks work. We’ve got the same SynthDef and the same arguments, but Patterns operate with default keywords (like \instrument, \freq, \amp, and \dur). We therefore had to make sure that our envelope argument was not called \dur, since Pbind uses that key to control the density (the time until the next event is fired), so “\dur, 0.01” in the pattern is the same as “0.01.wait” in the Task.

Pbind(
	\instrument, \sineGrainWPercEnv,
	\freq, Pseq([1000, 2000, 4000], inf), // try to add 3000 in here
	\amp, Pfunc({rrand(0.1, 0.2)}),
	\envdur, Pseq([0.01, 0.02, 0.04], inf),
	\dur, Pseq([0.01, 0.02, 0.04], inf), // density
	\pan, Pseq([0.9, -0.9],inf)
).play;

Finally, let’s try this out with a buffer.

b = Buffer.read(s, Platform.userAppSupportDir+/+"sounds/a11wlk01-44_1.aiff");

SynthDef(\bufGrain,{ arg out = 0, buffer, rate=1.0, amp = 0.1, dur = 0.1, startPos=0;
	var signal;
	signal = PlayBuf.ar(1, buffer, rate, 1, startPos) * EnvGen.kr(Env.sine(dur, amp), doneAction: 2);
	OffsetOut.ar(out, signal ! 2)
}).add;

Synth(\bufGrain, [\buffer, b]); // try it

Task({
   1000.do({ arg i;
   		Synth(\bufGrain, 
			[\buffer, b,
   			\rate, rrand(0.8, 1.2),
			\amp, rrand(0.05,0.2),
			\dur, rrand(0.06, 0.1),
			\startPos, i*100 // jumping 100 samples per grain
		]);
		0.01.wait;
	});
}).start;

Concatenative Synthesis

Concatenative synthesis is a rather recent technique of data-driven synthesis, where source sounds are analysed into a database and segmented into units; a target sound (for example live audio input) is then analysed and matched with the closest unit in the database, which is then played. This is done at a very granular level, prompting Zils and Pachet to call the technique musaicing, from musical mosaicing, as it enables the synthesis of a sound that is coherent at the macro level yet built up of smaller units of sound, just like in traditional mosaics. The technique is therefore quite related to granular synthesis in the sense that a macro-sound is built out of micro-sounds. It can be quite complex to work with, as users might have to analyse and build up a database of source sounds. However, people have built plugins and classes in SuperCollider that help with this purpose, and in this section we will explore some of the work done in this area by Nick Collins, a long-time SuperCollider user and developer.

b = Buffer.read(s,Platform.userAppSupportDir+/+"sounds/a11wlk01.wav");


{Concat2.ar(SoundIn.ar(0),PlayBuf.ar(1, b, 1, loop:1),1.0,1.0,1.0,0.1,0,0.0,1.0,0.0,0.0)}.play

// mouse X used to control the match length
{Concat2.ar(SoundIn.ar(0),PlayBuf.ar(1, b, 1, loop:1),1.0,1.0,1.0,MouseX.kr(0.0,0.1),0,1.0,0.0,1.0,1.0)}.play

Chapter 11 - Physical Modelling

Physical modelling is a common synthesis technique where a mathematical model is built of some physical object. The maths here can be quite complex and is outside the scope of this book. However, the technique is worth exploring, as there are physical modelling UGens in SuperCollider, and many musical instruments can easily be built using simple physical models, filters and the like. Waveguide synthesis can model the physics of an acoustic instrument or sound-generating object: it simulates the travelling of waves through a string or a tube, so the physical structures of an instrument can be thought of as waveguides or transmission lines. In physical modelling, as opposed to traditional synthesis types (AM, FM, granular, etc.), we are not imitating the sound of an instrument, but rather simulating the instrument itself and the physical laws that are involved in the creation of its sound.

In physical modelling of sound we typically operate with an excitation and a resonant body. The excitation is the material and weight of the thing that hits, whilst the resonant body is what is being hit and resonates. In many cases it does not make sense to separate the two this way mathematically, but from a user perspective we can think of material bodies of wood, glass, metal, or a string, as examples, being hit by a finger, a plectrum, a metal hammer, or anything imaginable, for example falling sand. Further resolution can be designed into the model of the instrument, for example defining the bridge of a guitar, the type of strings, the type of body, the room the instrument is in, etc.

For a good text on physical modelling, check Julius O. Smith’s “Physical Audio Signal Processing”: http://ccrma.stanford.edu/~jos/pasp/pasp.html

Karplus-Strong synthesis (named after its authors) is a precursor of physical modelling and is good for synthesising strings and percussion sounds.

// we generate a short burst (the excitation)
{ Decay.ar(Impulse.ar(1), 0.1, WhiteNoise.ar) }.play

// we then wrap that noise in a fast repeating delay
{ CombL.ar(Decay.ar(Impulse.ar(1), 0.1, WhiteNoise.ar), 0.02, 0.001, 3, 1) }.play

The repeat rate of the delay becomes the pitch of the string, in a reciprocal relationship: a delay time of 0.001 gives 1000 Hz. We could therefore write 440.reciprocal in the delayTime argument of the CombL, and it would give us a string sound at 440 Hz, as in the sketch below. The duration of the string is controlled by the decayTime argument. This is the basic ingredient of a string synthesiser, but for further development you might want to consider applying filters to the noise, or perhaps using another type of noise. Also, the duration of the burst (100 ms above) will affect the sound heavily.
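
To make this concrete, a minimal sketch of a 440 Hz string with a three-second decay:

{ CombL.ar(Decay.ar(Impulse.ar(1), 0.1, WhiteNoise.ar), 0.02, 440.reciprocal, 3) }.play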

SynthDef(\ks_string, { arg note, pan, rand, delayTime;
	var x, y, env;
	env = Env.new(#[1, 1, 0],#[2, 0.001]);
	// A simple exciter x, with some randomness.
	x = Decay.ar(Impulse.ar(0, 0, rand), 0.1+rand, WhiteNoise.ar); 
 	x = CombL.ar(x, 0.05, note.reciprocal, delayTime, EnvGen.ar(env, doneAction:2)); 
	x = Pan2.ar(x, pan);
	Out.ar(0, LeakDC.ar(x));
}).add;

{ // and play the synthdef
	20.do({
		Synth(\ks_string, 
			[\note, [48, 50, 53, 58].midicps.choose, 
			\pan, 1.0.rand2, 
			\rand, 0.1+0.1.rand, 
			\delayTime, 2+1.0.rand]);
		[0.125, 0.25, 0.5].choose.wait;
	});
}.fork;

// here using patterns
Pdef(\kspattern, 
	Pbind(\instrument, \ks_string, // using our Karplus-Strong synthdef
			\note, Pseq.new([60, 61, 63, 66], inf).midicps, // note arg (converted to Hz)
			\dur, Pseq.new([0.25, 0.5, 0.25, 1], inf),  // dur arg
			\rand, Prand.new([0.2, 0.15, 0.15, 0.11], inf),  // rand arg
			\pan, 1.0.rand2,
			\delayTime, 2+1.0.rand  // delayTime arg
		)
).play;

Compare using white noise and pink noise as an exciter, as well as using a resonant filter to filter the burst:

// white noise
{  
	var burstEnv, burst; 
	burstEnv = EnvGen.kr(Env.perc(0, 0.01), gate: Impulse.kr(1.5));
	burst = WhiteNoise.ar(burstEnv);
	CombL.ar(burst, 0.2, 0.003, 1.9, add: burst);  
}.play;

// pink noise
{  
	var burstEnv, burst; 
	burstEnv = EnvGen.kr(Env.perc(0, 0.01), gate: Impulse.kr(1.5));
	burst = PinkNoise.ar(burstEnv);
	CombL.ar(burst, 0.2, 0.003, 1.9, add: burst);  
}.play;

// here we use RLPF (resonant low pass filter) to filter the white noise burst
{  
	var burstEnv, burst; 
	burstEnv = EnvGen.kr(Env.perc(0, 0.01), gate: Impulse.kr(1.5));
	burst = RLPF.ar(WhiteNoise.ar(burstEnv), MouseX.kr(100, 12000), MouseY.kr(0.001, 0.999));
	CombL.ar(burst, 0.2, 0.003, 1.9, add: burst);  
}.play;

SuperCollider comes with a UGen called Pluck, which is an implementation of Karplus-Strong synthesis. It should be more efficient than the examples above, but of similar sound.

{Pluck.ar(WhiteNoise.ar(0.1), Impulse.kr(2), MouseY.kr(220, 880).reciprocal, MouseY.kr(220, 880).reciprocal, 10, coef:MouseX.kr(-0.1, 0.5)) !2 }.play(s)

We could create a SynthDef with Pluck.

SynthDef(\pluck, {arg freq=440, trig=1, time=2, coef=0.1, cutoff=2, pan=0;
	var pluck, burst;
	burst = LPF.ar(WhiteNoise.ar(0.5), freq*cutoff);
	pluck = Pluck.ar(burst, trig, freq.reciprocal, freq.reciprocal, time, coef:coef);
	Out.ar(0, Pan2.ar(pluck, pan));
}).add;

Synth(\pluck);
Synth(\pluck, [\coef, 0.01]);
Synth(\pluck, [\coef, 0.3]);
Synth(\pluck, [\coef, 0.7]);

Synth(\pluck, [\coef, 0.3, \time, 0.1]);
Synth(\pluck, [\coef, 0.3, \time, 5]);

Synth(\pluck, [\coef, 0.2, \time, 5, \cutoff, 1]);
Synth(\pluck, [\coef, 0.2, \time, 5, \cutoff, 2]);
Synth(\pluck, [\coef, 0.2, \time, 5, \cutoff, 5]);
Synth(\pluck, [\coef, 0.2, \time, 5, \cutoff, 15]);

// A guitar that might need a little distortion
Pbind(\instrument, \pluck,
	\freq, Pseq([72, 70, 67,65, 63, 60, 48], inf).midicps,
	\dur, Pseq([0.5, 0.5, 0.375, 0.125, 0.5, 2], 1),
	\cutoff, Pseq([15, 10, 5, 2, 10, 10, 15], 1)	
).play

Biquad filter

In SuperCollider, the SOS UGen is a second-order biquad filter that can be used to create various interesting sounds. We could start with a simple glass-like sound:

{SOS.ar(Impulse.ar(2), 0.0, 0.05, 0.0, MouseY.kr(1.45, 1.998, 1), MouseX.kr(-0.999, -0.9998, 1))!2}.play

And with slight changes we have a more woody type of sound:

SynthDef(\marimba, {arg out=0, amp=0.1, t_trig=1, freq=100, rq=0.006;
	var env, signal;
	var rho, theta, b1, b2;
	b1 = 1.987 * 0.9889999999 * cos(0.09);
	b2 = 0.998057.neg;
	signal = SOS.ar(K2A.ar(t_trig), 0.3, 0.0, 0.0, b1, b2);
	signal = RHPF.ar(signal*0.8, freq, rq) + DelayC.ar(RHPF.ar(signal*0.9, freq*0.99999, rq*0.999), 0.02, 0.01223);
	signal = Decay2.ar(signal, 0.4, 0.3, signal);
	DetectSilence.ar(signal, 0.01, doneAction:2);
	Out.ar(out, signal*(amp*0.4)!2);
}).add;

Pbind(
	\instrument, \marimba, 
	\midinote, Prand([[1,5], 2, [3, 5], 7, 9, 3], inf) + 48, 
	\dur, 0.2 
).play;

// Or perhaps
SynthDef(\wood, {arg out=0, amp=0.3, pan=0, sustain=0.5, t_trig=1, freq=100, rq=0.06;
	var env, signal;
	var rho, theta, b1, b2;
	b1 = 2.0 * 0.97576 * cos(0.161447);
	b2 = 0.9757.squared.neg;
	signal = SOS.ar(K2A.ar(t_trig), 1.0, 0.0, 0.0, b1, b2);
	signal = Decay2.ar(signal, 0.4, 0.8, signal);
	signal = Limiter.ar(Resonz.ar(signal, freq, rq*0.5), 0.9);
	env = EnvGen.kr(Env.perc(0.00001, sustain, amp), doneAction:2);
	Out.ar(out, Pan2.ar(signal, pan)*env);
}).add;

Pbind(
	\instrument, \wood, 
	\midinote, Prand([[1,5], 2, [3, 5], 7, 9, 3], inf) + 48, 
	\dur, 0.2 
).play;

Waveguide synthesis

Waveguide synthesis is the most common form of physical modelling, often using delay lines, filtering, feedback and other non-linear elements. The waveguide flute below is based upon Hans Mikelson’s Csound slide flute (ultimately derived from Perry Cook’s STK slide flute physical model). The SuperCollider port is by John E. Bower, who kindly allowed for the flute’s inclusion in this tutorial.

SynthDef("waveguideFlute", { arg scl = 0.2, pch = 72, ipress = 0.9, ibreath = 0.09, ifeedbk\
1 = 0.4, ifeedbk2 = 0.4, dur = 1, gate = 1, amp = 2, vibrato=0.2;	
	var kenv1, kenv2, kenvibr, kvibr, sr, cr, block, poly, signalOut, ifqc,  fdbckArray;
	var aflow1, asum1, asum2, afqc, atemp1, ax, apoly, asum3, avalue, atemp2, aflute1;
	
	sr = SampleRate.ir;
	cr = ControlRate.ir;
	block = cr.reciprocal;
	ifqc = pch.midicps;	
	// noise envelope
	kenv1 = EnvGen.kr(Env.new( 
		[ 0.0, 1.1 * ipress, ipress, ipress, 0.0 ], [ 0.06, 0.2, dur - 0.46, 0.2 ], 'linear' )
	);
	// overall envelope
	kenv2 = EnvGen.kr(Env.new(
		[ 0.0, amp, amp, 0.0 ], [ 0.1, dur - 0.02, 0.1 ], 'linear' ), doneAction: 2 
	);
	// vibrato envelope
	kenvibr = EnvGen.kr(Env.new( [ 0.0, 0.0, 1, 1, 0.0 ], [ 0.5, 0.5, dur - 1.5, 0.5 ], 'linear') )*vibrato;
	// create air flow and vibrato
	aflow1 = LFClipNoise.ar( sr, kenv1 );
	kvibr = SinOsc.ar( 5, 0, 0.1 * kenvibr );
	asum1 = ( ibreath * aflow1 ) + kenv1 + kvibr;
	afqc = ifqc.reciprocal - ( asum1/20000 ) - ( 9/sr ) + ( ifqc/12000000 ) - block;
	fdbckArray = LocalIn.ar( 1 );
	aflute1 = fdbckArray;
	asum2 = asum1 + ( aflute1 * ifeedbk1 );
	//ax = DelayL.ar( asum2, ifqc.reciprocal * 0.5, afqc * 0.5 );
	ax = DelayC.ar( asum2, ifqc.reciprocal - block * 0.5, afqc * 0.5 - ( asum1/ifqc/cr ) + 0.001 );
	apoly = ax - ( ax.cubed );
	asum3 = apoly + ( aflute1 * ifeedbk2 );
	avalue = LPF.ar( asum3, 2000 );
	aflute1 = DelayC.ar( avalue, ifqc.reciprocal - block, afqc );
	fdbckArray = [ aflute1 ];
	LocalOut.ar( fdbckArray );
	signalOut = avalue;
	OffsetOut.ar( 0, [ signalOut * kenv2, signalOut * kenv2 ] );	
}).add;

// Test the flute
Synth(\waveguideFlute, [\amp, 0.5, \dur, 5, \ipress, 0.90, \ibreath, 0.00536, \ifeedbk1, 0.4, \ifeedbk2, 0.4, \pch, 60, \vibrato, 0.2] );

// test the flute player's skills:
Routine({
	var pitches, durations, rhythm;
	pitches = Pseq( [ 47, 49, 53, 58, 55, 53, 52, 60, 54, 43, 52, 59, 65, 58, 59, 61, 67, 64, 58, 53, 66, 73 ], inf ).asStream;
	durations = Pseq([ Pseq( [ 0.15 ], 17 ), Pseq( [ 2.25, 1.5, 2.25, 3.0, 4.5 ], 1 ) ], inf).asStream;
	17.do({
		rhythm = durations.next;		
		Synth(\waveguideFlute, [\amp, 0.6, \dur, rhythm, \ipress, 0.93, \ibreath, 0.00536, \ifeedbk1, 0.4, \ifeedbk2, 0.4, \pch, pitches.next] );
		rhythm.wait;	
	});
	5.do({
		rhythm = durations.next;		
		Synth(\waveguideFlute, [\amp, 0.6, \dur, rhythm + 0.25, \ipress, 0.93, \ibreath, 0.00536, \ifeedbk1, 0.4, \ifeedbk2, 0.4, \pch, pitches.next] );
		rhythm.wait;
	});	
}).play;

Filters

Filters are a vital element in physical modelling. The main concepts here are some kind of exciter (in SuperCollider we might use triggers such as Impulse, Dust, or filtered noise) and a resonator (such as the Resonz and Klank resonators, delays, reverbs, etc.), as in the sketch below.
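
As a minimal sketch of this exciter/resonator pairing, here random impulses from Dust excite a single Resonz resonator:

{ Resonz.ar(Dust.ar(8) * 2, 1200, 0.01) ! 2 }.play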

Ringz

Ringz is a powerful ringing filter with a decay time, so an impulse can ring on for a given number of seconds. Let’s explore some examples:

// triggering a ringing filter by one impulse
{ Ringz.ar(Impulse.ar(0), 2000, 2) }.play

// one impulse per second
{ Ringz.ar(Impulse.ar(1), 2000, 2) }.play

// here using an envelope to soften the attack
{ Ringz.ar(EnvGen.ar(Env.perc(0.01, 1, 1), Impulse.ar(1)), 2000, 2) }.play

// playing with the frequency
{ Ringz.ar(Impulse.ar(4)*0.2, LFNoise0.ar(4)*2000, 0.1) }.play

// using XLine to increase rate and frequency
{ Ringz.ar(Impulse.ar(XLine.ar(1, 10, 4))*0.2, LFNoise0.ar(XLine.ar(1, 10, 4))*2000, 0.1) }.play

// using Dust instead of Impulse
{ Ringz.ar(Dust.ar(3, 0.3), 2000, 2) }.play

// here we use an Impulse to trigger the incoming sound
{ Ringz.ar(Impulse.ar(MouseX.kr(1, 100, 1)), 1800, MouseY.kr(0.05, 1), 0.4) }.play;

// control frequency as well
{ Ringz.ar(Impulse.ar(10)*0.5, MouseY.kr(100,1000), MouseX.kr(0.001,1)) }.play

// you could also use an envelope to soften the attack
{ Ringz.ar(EnvGen.ar(Env.perc(0.001, 1), Impulse.kr(MouseX.kr(1, 100, 1))), 1800, MouseY.kr(0.05, 1), 0.4) }.play;

// here resonating white noise instead of a trigger
{ Ringz.ar(WhiteNoise.ar(0.005), 600, 4) }.play

// would this be useful in synthesizing a flute?
{ Ringz.ar(LPF.ar(WhiteNoise.ar(0.005), MouseX.kr(100, 10000)), 600, 1) !2}.play

// a modified example from the documentation
{({Ringz.ar(WhiteNoise.ar(0.001),XLine.kr(exprand(100.0,5000.0), exprand(100.0,5000.0), 20), 0.5)}!10).sum}.play

// The Formlet UGen is a type of Ringz filter, useful for formant control:
{ Formlet.ar(Blip.ar(MouseX.kr(10, 400), 1000, 0.1), MouseY.kr(10, 1000), 0.005, 0.04) }.play;

Resonz, Klank and DynKlank

The Resonz is a two-pole resonant band-pass filter, with arguments for the centre frequency and the bandwidth ratio (bandwidth divided by centre frequency):

// mouse Y controlling frequency - mouse X controlling bandwidth ratio
{ Resonz.ar(Impulse.ar(10)*1.5, MouseY.kr(40,10000), MouseX.kr(0.001,1)) }.play

// here with white noise - mouse Y controlling frequency - mouse X controlling bandwidth ratio
{ Resonz.ar(WhiteNoise.ar(0.1), MouseY.kr(40,10000), MouseX.kr(0.001,1)) }.play

// playing with Ringz and Resonz
{ Ringz.ar(Resonz.ar(Dust.ar(20)*1.5, MouseY.kr(40,10000), MouseX.kr(0.001,1)), MouseY.kr(40,10000), 0.04) }.play;

// let's explore the resonance using the freqscope
{ Resonz.ar(WhiteNoise.ar(0.1), MouseY.kr(40,10000), MouseX.kr(0.001,1)) }.freqscope

// Klank is a bank of fixed-frequency resonators (much like a bank of Ringz filters)
{ Klank.ar(`[[800, 1071, 1153, 1723], nil, [1, 0.9, 0.1, 2]], Impulse.ar(1, 0, 0.2)) }.play;
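// note the backquote: `[...] creates a Ref, which protects the specification array from multichannel expansion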

// Klank filtering WhiteNoise
{ Klank.ar(`[[800, 1200, 1600, 200], [1, 0.8, 0.4, 0.8], [1, 1, 1, 1]], WhiteNoise.ar(0.001)) }.play;

// DynKlank is the dynamic version of Klank - here using the mouse to change frequencies and ringtimes
{   var freqs, ringtimes;
    freqs = [800, 1071, 1153, 1723] * MouseX.kr(0.5, 2);
    ringtimes = [1, 1, 1, 1] * MouseY.kr(0.001, 5);
	DynKlank.ar(`[freqs, nil, ringtimes ], PinkNoise.ar(0.001))
}.play;

Decay

Decay is a simple exponential decay envelope: feed it a trigger such as an Impulse and it outputs a percussive decay that can scale another signal, as in these examples:

{ Decay.ar(Impulse.ar(XLine.kr(1,50,20), 0.25), 0.2, FSinOsc.ar(600), 0)  }.play;

{ Decay2.ar(Impulse.ar(XLine.kr(1,50,20), 0.25), 0.1, 0.3, FSinOsc.ar(600)) }.play;

SynthDef(\clap, {arg out=0, pan=0, amp=0.3, filterfreq=50, rq=0.01;
	var env, signal, attack, noise, hpf1, hpf2;
	noise = WhiteNoise.ar(1)+SinOsc.ar([filterfreq/2,filterfreq/2+4 ], pi*0.5, XLine.kr(1,0.01,4));
	hpf1 = RLPF.ar(noise, filterfreq, rq);
	hpf2 = RHPF.ar(noise, filterfreq/2, rq/4);
	env = EnvGen.kr(Env.perc(0.003, 0.00035));
	signal = (hpf1+hpf2) * env;
	signal = CombC.ar(signal, 0.5, 0.03, 0.031)+CombC.ar(signal, 0.5, 0.03016, 0.06);
	signal = Decay.ar(signal, 1.5);
	signal = FreeVerb.ar(signal, 0.23, 0.1, 0.12);
	Out.ar(out, Pan2.ar(signal * amp, pan));
	DetectSilence.ar(signal, doneAction:2);
}).add;

Synth(\clap, [\filterfreq, 1700, \rq, 0.14, \amp, 0.1]);

TBall, Spring and Friction

Physical modelling can involve the mathematical modelling of all kinds of phenomena, from wind to water to the simulation of moving or falling objects, where gravity, speed, surface type, etc., are all parameters. The popular Box2D library (of Angry Birds fame) is one such library that simulates physical systems. In SuperCollider there are UGens that do this too, for example TBall (Trigger Ball) and Spring.

// arguments are trigger, gravity, damp and friction
{TBall.ar(Impulse.ar(0), 0.1, 0.2, 0.01)*20}.play

// a light ball falling on a bouncy surface on the moon?
{TBall.ar(Impulse.ar(0), 0.1, 0.1, 0.001)*20}.play

// a heavy ball falling on a soft surface?
{TBall.ar(Impulse.ar(0), 0.1, 0.2, 0.1)*20}.play

Having explored the qualities of the TBall as a system that outputs impulses according to a physical system, we can now apply these impulses in some of the resonant filters that we have explored above.

// here using Ringz to create a metal ball falling on a marble table
{Ringz.ar(TBall.ar(Impulse.ar(0), 0.09, 0.1, 0.01)*20, 3000, 0.08)}.play

// many balls falling randomly (using Dust)
{({Ringz.ar(TBall.ar(Dust.ar(2), 0.09, 0.1, 0.01)*20, rrand(2000,3000), 0.07)}!5).sum}.play

// here using Decay to create a metal ball falling on a marble table
{Decay.ar(TBall.ar(Impulse.ar(0), 0.09, 0.1, 0.01)*20, 1)}.play

// a drummer on the snare?
{LPF.ar(WhiteNoise.ar(0.5), 4000)*Decay.ar(TBall.ar(Impulse.ar(0), 0.2, 0.16, 0.003)*20, 1)!2}.play

{SOS.ar(TBall.ar(Impulse.ar(0), 0.09, 0.1, 0.01)*20, 0.6, 0.0, 0.0, rrand(1.991, 1.994), -0.9982)}.play

// Txalaparta? 
{({|x| SOS.ar(TBall.ar(Impulse.ar(1, x*0.1*x), 0.8, 0.2, 0.02)*20, 0.6, 0.0, 0.0, rrand(1.992, 1.99), -0.9982)}!6).sum}.play

The Spring UGen is a physical model of a resonating spring. Considering the wave properties of a spring, this can be very useful for synthesis.

{
	var trigger =LFNoise0.ar(1)>0;
	var signal = SinOsc.ar(Spring.ar(trigger,1,4e-06)*1220);
	var env = EnvGen.kr(Env.perc(0.001,5),trigger);
	Out.ar(0, Pan2.ar(signal * env, 0));
}.play

// Two springs:
{
	var trigger = LFNoise0.ar(1)>0;
	var springs = Spring.ar(trigger,1,4e-06) * Spring.ar(trigger,2,4e-07);
	var signal = SinOsc.ar(springs*1220);
	var env = EnvGen.kr(Env.perc(0.001,5),trigger);
	Out.ar(0, Pan2.ar(signal * env, 0));
}.play

// And here are two tweets (less than 140 characters) simulating timpani drums. 

play{{x=LFNoise0.ar(1)>0;SinOsc.ar(Spring.ar(x,4,3e-05)*(70.rand+190)+(30.rand+90))*EnvGen.\
kr(Env.perc(0.001,5),x)}!2}

// here heavy on the tuning pedal
play{{x=LFNoise0.ar(1)>0;SinOsc.ar(Spring.ar(x,4,3e-05)*(70.rand+190)+LFNoise2.ar(1).range(\
90,120))*EnvGen.kr(Env.perc(0.001,5),x)}!2}

In the SC3plugins you’ll find the Friction UGen, which is a physical model of a mass resting on a belt. The documentation of the UGen is good, but two examples are provided here for fun:

{Friction.ar(Ringz.ar(Impulse.ar(1), [400, 412]), 0.0002, 0.2, 2, 2.697)}.play

{Friction.ar(Klank.ar(`[[400, 412, 340]], Impulse.ar(1)), 0.0002, 0.2, 2, 2.697)!2}.play

The MetroGnome

How about trying to synthesise an old-fashioned wooden metronome?

(
SynthDef(\metro, {arg tempo=1, filterfreq=1000, rq=1.0;
	var env, signal;
	var rho, theta, b1, b2;
	theta = MouseX.kr(0.02, pi).poll;
	rho = MouseY.kr(0.7, 0.9999999).poll;
	b1 = 2.0 * rho * cos(theta);
	b2 = rho.squared.neg;
	signal = SOS.ar(Impulse.ar(tempo), 1.0, 0.0, 0.0, b1, b2);
	signal = RHPF.ar(signal, filterfreq, rq);
	Out.ar(0, Pan2.ar(signal, 0));
}).add
)
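
A note on those coefficients: SOS is a standard second-order (biquad) filter section, y[n] = a0*x[n] + a1*x[n-1] + a2*x[n-2] + b1*y[n-1] + b2*y[n-2]. Placing a complex pole pair at radius rho and angle theta gives b1 = 2*rho*cos(theta) and b2 = -(rho squared): theta sets the resonant frequency (in radians per sample) and rho, just below 1, sets how long the filter rings. Here is a minimal sketch of mapping a frequency in Hz onto theta (the 600 Hz target and 0.999 radius are arbitrary choices):

(
{
	var freq = 600; // an arbitrary target resonance in Hz
	var theta = 2pi * freq / SampleRate.ir; // radians per sample
	var rho = 0.999; // pole radius: the closer to 1, the longer the ring
	SOS.ar(Impulse.ar(2), 1.0, 0.0, 0.0, 2.0 * rho * cos(theta), rho.squared.neg)!2
}.play
)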

// Move the mouse to find your preferred metronome (low left works best for me). We are
// here polling the MouseX and MouseY UGens, so you will be able to follow their output
// in the post window.

a = Synth(\metro) // we create our metronome
a.set(\tempo, 0.5.reciprocal) // 120 bpm (0.5.reciprocal = 2 bps)
a.set(\filterfreq, 4000) // try 1000 (for example)
a.set(\rq, 0.1) // try 0.5 (for example)

// Let's reinterpret the Poème symphonique, composed by György Ligeti in 1962
// http://www.youtube.com/watch?v=QCp7bL-AWvw

SynthDef(\ligetignome, {arg tempo=1, filterfreq=1000, rq=1.0;
var env, signal;
	var rho, theta, b1, b2;
	b1 = 2.0 * 0.97576 * cos(0.161447);
	b2 = 0.97576.squared.neg;
	signal = SOS.ar(Impulse.ar(tempo), 1.0, 0.0, 0.0, b1, b2);
	signal = RHPF.ar(signal, filterfreq, rq);
	Out.ar(0, Pan2.ar(signal, 0));
}).add;

// and we create 10 different metronomes running in different tempi
// (try with 3 metros or 30 metros)
(
10.do({
	Synth(\ligetignome).set(
		\tempo, (rrand(0.5,1.5)).reciprocal, 
		\filterfreq, rrand(500,4000), 
		\rq, rrand(0.3,0.9) )
});
)

The STK synthesis kit

Many years ago, Paul Lansky ported the STK physical modelling kit by Perry Cook and Gary Scavone to SuperCollider. This collection of UGens can be found in the SC3plugins, but it has not been maintained and the code is in bad shape, although some of the UGens still work. It could be a good project for someone wanting to explore a classic physical modelling codebase to update these UGens for SuperCollider 3.7+.

Here below we have a model of a xylophone:

SynthDef(\xylo, { |out=0, freq=440, gate=1, amp=0.3, sustain=0.5, pan=0|
	var sig = StkBandedWG.ar(freq, instr:1, mul:3);
	var env = EnvGen.kr(Env.adsr(0.0001, sustain, sustain, 0.3), gate, doneAction:2);
	Out.ar(out, Pan2.ar(sig, pan, env * amp));
}).add;

Synth(\xylo)

Pbind(\instrument, \xylo, \freq, Pseq(({|x|x+60}!13).mirror).midicps, \dur, 0.2).play

Part III

Chapter 12 - Time Domain Audio Effects

In this book, we divide the section on audio effects into two separate chapters, on time domain and frequency domain effects respectively. This is for a good reason, as the two are completely different techniques of manipulating audio: the former, the time domain effects, are well known from the world of analogue audio, whereas the latter, manipulation in the frequency domain, is only realistically possible with computers running Fast Fourier Transform (FFT) algorithms. This will be explained later.

Most of the audio effects that we know (think roughly of the range of available guitar pedals, where each box implements some audio effect) are familiar and easy-to-understand effects that were often discovered by accident or invented through some form of serendipitous exploration. There are diverse stories of John Lennon and George Martin discovering flanging on an Abbey Road tape machine, but earlier examples exist, although the technique had not yet been given that name. Time domain effects manipulate samples either in time (typically the signal is split, something is done to one copy, such as delaying it, and the two are then added together again) or in amplitude (where sample values are changed, for example to get a distortion effect). This chapter will explore the diverse audio effects that can easily be created using the UGens available in SuperCollider.
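
To make the distinction concrete, here is a minimal sketch of each family (the frequencies and amounts are arbitrary choices): the first line manipulates amplitude, distorting a sine wave by overdriving it through a tanh waveshaper, while the second manipulates time, adding a delayed copy to a short blip:

{ ((SinOsc.ar(220, 0, 0.4) * 10).tanh * 0.2)!2 }.play // amplitude domain: waveshaping distortion
{ var sig = SinOsc.ar(220, 0, 0.2) * Decay.ar(Impulse.ar(1), 0.2); (sig + DelayC.ar(sig, 0.3, 0.3))!2 }.play // time domain: a simple echo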

Delay

When we delay a signal, we can achieve various effects, from a simple echo to a more complex reverb. Typical parameters are delay time (how long it takes before the sound appears again) and decay time (how long it will keep repeating). In SuperCollider, there are three main types of delays: Delay, Comb and Allpass:

  • DelayN/DelayL/DelayC are simple echoes with no feedback.
  • CombN/CombL/CombC are comb delays with feedback (decay time).
  • AllpassN/AllpassL/AllpassC die out faster than the combs, but have feedback as well.
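
Run these one at a time to hear the three families on the same once-per-second impulse: the plain delay repeats each impulse exactly once, the comb rings on with feedback for the given decay time, and the allpass has feedback too but dies out faster (the times here are arbitrary choices):

{ var imp = Impulse.ar(1, 0, 0.5); (imp + DelayC.ar(imp, 0.25, 0.25))!2 }.play
{ var imp = Impulse.ar(1, 0, 0.5); (imp + CombC.ar(imp, 0.25, 0.25, 3))!2 }.play
{ var imp = Impulse.ar(1, 0, 0.5); (imp + AllpassC.ar(imp, 0.25, 0.25, 3))!2 }.play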

All of these delays come in different interpolation flavours (N, L, C, standing for no interpolation, linear interpolation, and cubic interpolation). Interpolation is about what happens between two discrete values, for example samples: will you get a jump when the next value appears (N), a straight line from one value to the next (L), or a curvy shape between the two (C), which better simulates analogue signal behaviour? These are all good for different purposes: N is computationally cheap, but C is the choice if you are sweeping the delay time and need delay values that fall between two samples.
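
The difference is easiest to see by plotting. A small sketch: delay an impulse train by a fractional delay time of 2.5 samples and compare the three outputs; N rounds to a whole number of samples, L draws straight lines, and C draws a curve:

(
{
	var imp = Impulse.ar(1000);
	[	DelayN.ar(imp, 0.01, 2.5 * SampleDur.ir),
		DelayL.ar(imp, 0.01, 2.5 * SampleDur.ir),
		DelayC.ar(imp, 0.01, 2.5 * SampleDur.ir) ]
}.plot(0.005)
)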

Generally, we can talk about three types of time when using Delays, resulting in different types of effects:

1. Short (< 10 ms)
2. Medium (10 - 50 ms)
3. Long (> 50 ms)

A short delay (1-2 samples) can create an FIR (Finite Impulse Response) lowpass filter. Increase the delay time (1-10 ms) and a comb filter materialises. Medium delays result in a thin signal but can also add ambience and width to the sound. Long delays create discrete echoes, imitating sound bouncing off hard walls.
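
The first claim is easy to verify: adding a signal to a copy of itself delayed by exactly one sample is the simplest FIR lowpass, y[n] = 0.5 * (x[n] + x[n-1]). A sketch using the Delay1 UGen, which delays by a single sample; compare it with the unfiltered noise:

{ var sig = WhiteNoise.ar(0.2); ((sig + Delay1.ar(sig)) * 0.5)!2 }.play // subtly darker than sig!2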

Delays can also have a variable delay time, which can result in the following effects: phase shifting, flanging, and chorus. These effects are explained in dedicated sections below.

Short Delays (< 10 ms)

Let’s explore what a short delay means. This is a delay that is hardly perceivable by the human ear if you, for example, delay a click sound or an impulse.

{
	x = Impulse.ar(1);
	d =  DelayN.ar(x, 0.001,  MouseX.kr(s.sampleRate.reciprocal, 0.001).poll);
	(x+d)!2
}.play 

In the example above we have a delay ranging from one sample (e.g., 44100.reciprocal, or 0.000022675 seconds, or about 0.023 ms) up to one millisecond. The impulse is the shortest sound possible (one sample of amplitude 1), so it serves well in this experiment. When you move the mouse from the left to the right of the screen you will probably perceive the sound as one event, but you will notice that the sound changes slightly in timbre. It is filtered. And indeed, as we saw in the filter chapter, most filters work by delaying samples and multiplying the feedback or feedforward samples by different values. We could try the same with a more continuous signal, for example a Saw wave. You will hear that the timbre of the wave changes when you move the mouse around, as it is effectively being filtered (adding two signals together where one is slightly delayed):

{
	x = Saw.ar(440, 0.4);
	d =  DelayC.ar(x, 0.001,  MouseX.kr(s.sampleRate.reciprocal, 0.001).poll);
	(x+d)!2
}.play

Note that in the example above I’m using DelayC, as opposed to the DelayN in the impulse example. This is because the delay time is so small, down at the sample level, that interpolation becomes important. Try changing the DelayC to DelayN (no interpolation) and listen to what happens, particularly when moving the mouse at the left of the screen, where the delay times are shortest. The best way to explore the filtering effect might be to use WhiteNoise:

{
	x = WhiteNoise.ar(0.1);
	d =  DelayN.ar(x, 0.001,  MouseX.kr(s.sampleRate.reciprocal, 0.001));
	(x+d)!2
}.play

In the examples above we have been adding the two signals together (the original and the delayed signal) and then duplicating the sum (!2) into an array of two, for two-speaker output. Adding the signals creates the filtering effect, but if we instead send each signal to its own speaker, we get a completely different effect, namely spatialisation:

{
	x = WhiteNoise.ar(0.1);
	d =  DelayC.ar(x, 0.006,  MouseX.kr(s.sampleRate.reciprocal, 0.006));
	[x, d]
}.play

We have now entered the realm of psychoacoustics, but this can be explained quickly by the fact that sound travels at around 343 metres per second, or roughly 34 cm per millisecond. With a typical head, the extra distance to the far ear is about 20 cm, giving roughly a 0.6 millisecond difference in arrival time between the ears when the sound comes directly from one side. This is called the Interaural Time Difference (ITD) and it is one of the key factors in sound localisation. We can explore this in the following example, where one channel is “delayed” from 1 ms before to 1 ms after the other. Try this with headphones; you should get some impression of sound moving from the left to the right ear.

{
	x = Impulse.ar(1);
	l =  DelayC.ar(x, 1.001,  1+MouseX.kr(-0.001, 0.001));
	r =  DelayC.ar(x, 1.001,  1+MouseX.kr(0.001, -0.001));
	[l, r] // left and right channels
}.play

// load some sound files into buffers (use your own)
d = Buffer.read(s,"sounds/digireedoo.aif");
e = Buffer.read(s,"sounds/holeMONO.aif");
e = Buffer.read(s, "sounds/a11wlk01.wav"); // this one is in the SC sounds folder

In the example below, explore the different algorithms implemented in Delay, Comb and Allpass. The Delay does not have a decay time and therefore does not produce the Karplus-Strong type of sound that we get with the other two. The details of the differences in the internal implementation of Comb and Allpass are too complex for this book, but they have to do with how the gain coefficients are calculated: a comb with combined feedback and feedforward paths equals an allpass.

{
	var signal, delaytime = MouseX.kr(0.00022675, 0.01, 1);
	signal = PlayBuf.ar(1, e.bufnum, BufRateScale.kr(e.bufnum), loop:1);
	// signal = Saw.ar(440,0.3);
	// signal = WhiteNoise.ar(0.3);
	d =  DelayC.ar(signal, 0.6, delaytime);
	// d =  AllpassC.ar(signal, 0.6, delaytime, MouseY.kr(0.001,1, 1));
	// d =  CombC.ar(signal, 0.6, delaytime, MouseY.kr(0.001,1, 1));
	(signal + d).dup
}.play

Is this familiar?

{CombC.ar(SoundIn.ar(0), 0.6, LFPulse.ar(0.25).range(0.0094,0.013),  0.9)!2}.play

{
	var signal, delay, delaytime = MouseX.kr(0.00022675, 0.02, 1);
	signal = PlayBuf.ar(1, e, 1, loop:1);
	delay =  DelayC.ar(signal, 0.2, delaytime);
	[signal, delay]
}.play

Any number of delays can of course be added together to create the desired sound, something we will explore when we discuss reverbs:

{
	var signal;
	var delaytime = MouseX.kr(0.1,0.4, 1);
	signal = Impulse.ar(1);	
	Mix.fill(14, {arg i; DelayL.ar(signal, 1, delaytime*(1+(i/10))) });
}.play

The old Karplus-Strong in its most basic form:

{
	var delaytime = MouseX.kr(0.001,0.2, 1);
	var decaytime = MouseY.kr(0.1,2, 1);
	var signal = Impulse.ar(1);
	CombL.ar(signal, 0.6, delaytime, decaytime)!2
}.play
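
For reference, SuperCollider also ships with a dedicated Pluck UGen that implements Karplus-Strong synthesis, using its input as the excitation signal. A quick sketch at 440 Hz (the decay time and coefficient are arbitrary choices):

// Pluck arguments: in, trig, maxdelaytime, delaytime, decaytime, coef
{ Pluck.ar(WhiteNoise.ar(0.4), Impulse.kr(1), 0.01, 440.reciprocal, 4, 0.3)!2 }.play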

Medium Delays (10 - 50 ms)

The examples above, with delays under 10 ms, resulted in a change in timbre or spatial location, but we always perceived the result as a single sonic event, even when using a one-sample impulse. Although it depends on the listener and the context, it can be said that we start to perceive a delayed event as two events when there is more than a 20 ms delay between them. This code demonstrates that:

{x=Impulse.ar(1); y=DelayC.ar(x, 0.04, MouseX.kr(0.005, 0.04).poll); (x+y)!2}.play

The post window shows the delay time in seconds. A drummer who was more than 20 ms off the exact beat would be giving a disappointing performance (of course, part of the art of a good percussionist is to be slightly ahead of or behind the beat, so the comment is not about intention), and any hardware interface with a latency of more than 20 ms would be considered a rather poor interface.

Longer delays can also generate a spatialisation effect, although this is not modelling the interaural time difference (ITD), but rather creating the sensation of a wide sonic image.

e = Buffer.read(s,"sounds/holeMONO.aif");

{
	var signal, delay, delaytime = MouseX.kr(0.00022675, 0.05, 1).poll;
	signal = PlayBuf.ar(1, e, 1, loop:1);
	delay =  DelayC.ar(signal, 0.2, delaytime);
	[signal, delay]
}.play
// Using microphone input
{
	var signal, delay, delaytime = MouseX.kr(0.00022675, 0.05, 1).poll;
	signal = SoundIn.ar(0);
	delay =  DelayC.ar(signal, 0.2, delaytime);
	[signal, delay]
}.play

Longer Delays (> 50 ms)

(
{
var signal;
var delaytime = MouseX.kr(0.05, 2, 1); // between 50 ms and 2 seconds - exponential.
signal = PlayBuf.ar(1, f.bufnum, BufRateScale.kr(f.bufnum), loop:1); // first load a sound file into f, e.g. f = Buffer.read(s, "sounds/a11wlk01.wav");

// compare DelayL, CombL and AllpassL

//d =  DelayL.ar(signal, 0.6, delaytime);
//d = CombL.ar(signal, 0.6, delaytime, MouseY.kr(0.1, 10, 1)); // decay using mouseY
d =  AllpassL.ar(signal, 0.6, delaytime, MouseY.kr(0.1,10, 1));

(signal+d).dup
}.play(s)
)
// same as above, here using AudioIn for the signal instead of the NASA irritation
(
{
var signal;
var delaytime = MouseX.kr(0.05, 2, 1); // between 50 ms and 2 seconds - exponential.
signal = AudioIn.ar(1);

// compare DelayL, CombL and AllpassL

//d =  DelayL.ar(signal, 0.6, delaytime);
//d = CombL.ar(signal, 0.6, delaytime, MouseY.kr(0.1, 10, 1)); // decay using mouseY
d =  AllpassL.ar(signal, 0.6, delaytime, MouseY.kr(0.1,10, 1));

(signal+d).dup
}.play(s)
)

Random experiments

s = Server.default; // in recent versions of SuperCollider the frequency scope runs on the default server
FreqScope.new;
{CombL.ar(Impulse.ar(10), 6, 1, 1)}.play(s)


(
{
var signal;
var delaytime = MouseX.kr(0.01,6, 1);
var decaytime = MouseY.kr(1,2, 1);

signal = Impulse.ar(1);

d =  CombL.ar(signal, 6, delaytime, decaytime);

d!2
}.play(s)
)


// we can see the Comb effect by plotting the signal.

(
{
a = Impulse.ar(1);
d =  CombL.ar(a, 1, 0.001, 0.9);
d
}.plot(0.1)
)

// a little play with AudioIn
(
{
var signal;
var delaytime = MouseX.kr(0.001,2, 1);
signal = AudioIn.ar(1);

a = Mix.fill(10, {arg i; var dt;
		dt = delaytime*(i/10+0.1).postln;
		DelayL.ar(signal, 3.2, dt);});

(signal+a).dup
}.play(s)
)

/*
TIP: if you get this line printed ad infinitum:
exception in real time: alloc failed
you can give the server more real-time memory (RAM) through its options:
s.options.memSize = 32768; // the default is 8192
s.reboot; // the new setting takes effect when the server is rebooted
*/

(
{ // watch your ears !!! Use headphones and lower the volume !!!
var signal;
var delaytime = MouseX.kr(0.001,2, 1);
signal = AudioIn.ar(1);

a = Mix.fill(13, {arg i; var dt;
		dt = delaytime*(i/10+0.1).postln;
		CombL.ar(signal, 3.2, dt);});

(signal+a).dup
}.play(s)
)


// The source code for a feedback comb filter might look something like this (in C),
// computing y[n] = x[n] + decay * y[n - delay] one sample at a time:

float comb(float input, float *delay_buffer, int delay_size,
           int *write_pos, int delay, float decay)
{
    int read_pos = *write_pos - delay;          // work out the buffer position
    if (read_pos < 0)
        read_pos += delay_size;                 // wrap around the circular buffer
    // add the delayed (and decayed) sample to the input sample
    float s = input + (delay_buffer[read_pos] * decay);
    delay_buffer[*write_pos] = s;               // store the result for future feedback
    *write_pos = (*write_pos + 1) % delay_size; // advance the write position
    return s;                                   // ... and output it
}

Phaser (phase shifting)

In a phaser, a signal is sent through an allpass filter which does not filter out any frequencies, but simply shifts the phase of the sound by delaying it. This sound is then added to the original signal. If the phase offset is 180 degrees, the sound is cancelled out, but if it is less than that, it creates variations in the spectrum.
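
The cancellation claim is easy to test with a sine wave: delaying a 440 Hz sine by half of its period (a 180 degree phase shift) and adding it to the original yields near silence:

{ var sig = SinOsc.ar(440, 0, 0.2); (sig + DelayC.ar(sig, 0.01, 440.reciprocal * 0.5))!2 }.play // (almost) silent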

// phaser with a soundfile
e = Buffer.read(s, "sounds/a11wlk01.wav");

(
{
var signal;
var phase = MouseX.kr(0.000022675,0.01, 1); // from a sample resolution to 10 ms delay line

var ph;

signal = PlayBuf.ar(1, e.bufnum, BufRateScale.kr(e.bufnum), loop:1);

ph = AllpassL.ar(PlayBuf.ar(1, e.bufnum, BufRateScale.kr(e.bufnum), loop:1), 4, phase+(0.01.rand), 0);
/* // try 4 phasers
ph = Mix.ar(Array.fill(4, 
			{ AllpassL.ar(PlayBuf.ar(1, e.bufnum, BufRateScale.kr(e.bufnum), loop:1), 4, phase+(0.01.rand), 0)}
		));
*/

(signal + ph).dup 
}.play
)


// try it with a sine wave (the mouse is shifting the phase of the input signal)
(
{
var signal;
var phase = MouseX.kr(0.000022675,0.01); // from a sample to 10 ms delay line
var ph;

signal = SinOsc.ar(444,0,0.5);
//signal = PlayBuf.ar(1, e.bufnum, BufRateScale.kr(e.bufnum), loop:1);
ph = AllpassL.ar(SinOsc.ar(444,0,0.5), 4, phase, 0);

 (signal + ph).dup 

}.play
)


// using an oscillator to control the phase instead of MouseX
// here using the .range trick:
{SinOsc.ar(SinOsc.ar(0.3).range(440, 660), 0, 0.5) }.play

(
{
var signal;
var ph;

// base signal
signal = PlayBuf.ar(1, e.bufnum, BufRateScale.kr(e.bufnum), loop:1);
// phased signal
ph = AllpassC.ar(
		PlayBuf.ar(1, e.bufnum, BufRateScale.kr(e.bufnum), loop:1), 
		4, 
		LFPar.kr(0.1, 0, 1).range(0.000022675,0.01), // a cycle every 10 seconds 
		0); // experiment with what happens if you increase the decay length

 (signal + ph).dup // we add them together and route to two speakers
}.play
)

/*
NOTE: Theoretically you could use DelayC or CombC instead of AllpassC.
In the case of DelayC, you would have to delete the last argument (0),
as DelayC doesn't have a decay argument.
*/
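
For comparison, here is the same sweep with DelayC in place of AllpassC (note that the decay argument is gone), a sketch reusing the buffer in e:

(
{
	var signal = PlayBuf.ar(1, e.bufnum, BufRateScale.kr(e.bufnum), loop:1);
	var ph = DelayC.ar(
		PlayBuf.ar(1, e.bufnum, BufRateScale.kr(e.bufnum), loop:1),
		4,
		LFPar.kr(0.1, 0, 1).range(0.000022675, 0.01)); // the same modulator as above
	(signal + ph).dup
}.play
)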

Flanger

In a flanger, a delayed signal is added to the original signal with a continuously variable delay (usually smaller than 10 ms), creating a phasing effect. The term comes from the days of tape machines in studios, when an operator would place a finger on the flange of one of the tape reels to slow it down, thus causing the flanging effect.

A flanger is like a phaser with a dynamic delay filter (allpass), but it usually also has a feedback loop.

(
SynthDef(\flanger, { arg out=0, in=0, delay=0.1, depth=0.08, rate=0.06, fdbk=0.0, decay=0.0; 

	var input, maxdelay, maxrate, dsig, mixed, local;
	maxdelay = 0.013;
	maxrate = 10.0;
	input = In.ar(in, 1);
	local = LocalIn.ar(1);
	dsig = AllpassL.ar( // the delay (you could use AllpassC (put 0 in decay))
		input + (local * fdbk),
		maxdelay * 2,
		LFPar.kr( // very similar to SinOsc (try to replace it) - Even use LFTri
			rate * maxrate,
			0,
			depth * maxdelay,
			delay * maxdelay),
		decay);
	mixed = input + dsig;
	LocalOut.ar(mixed);
	Out.ar([out, out+1], mixed);
}).add;
)

// audioIn on audio bus nr 10
{Out.ar(10, AudioIn.ar(1))}.play(s, addAction:\addToHead)

a = Synth(\flanger, [\in, 10], addAction:\addToTail)
a.set(\delay, 0.04)
a.set(\depth, 0.04)
a.set(\rate, 0.01)
a.set(\fdbk, 0.08)
a.set(\decay, 0.01)

// or if you prefer a buffer:
b = Buffer.read(s, "sounds/a11wlk01.wav"); // replace this sound with a nice sounding one !!!
{Out.ar(10, PlayBuf.ar(1, b.bufnum, BufRateScale.kr(b.bufnum), loop:1))}.play(addAction:\addToHead)

a = Synth(\flanger, [\in, 10], addAction:\addToTail)
a.set(\delay, 0.04)
a.set(\depth, 0.04)
a.set(\rate, 1)
a.set(\fdbk, 0.08)
a.set(\decay, 0.01)

// a parameter explosion results in a chorus-like effect:
a.set(\decay, 0)
a.set(\delay, 0.43)
a.set(\depth, 0.2)
a.set(\rate, 0.1)
a.set(\fdbk, 0.08)

// or just go mad:
a.set(\delay, 0.93)
a.set(\depth, 0.9)
a.set(\rate, 0.8)
a.set(\fdbk, 0.8)

Chorus

The chorus effect happens when we add a delayed signal to the original with a time-varying delay. The delay has to be short in order not to be perceived as an echo, but above 5 ms to be audible. If the delay is too short, it will destructively interfere with the un-delayed signal and create a flanging effect. Often, the delayed signals will also be pitch shifted to create a harmony with the original signal.

There is no definitive chorus algorithm; there are many different ways to achieve the effect. As opposed to the flanger above, the chorus below does not have a feedback loop. But you could create a chorus effect out of a flanger by using a longer delay time (20-30 ms instead of the 1-10 ms of the flanger).

// a simple chorus
SynthDef(\chorus, { arg inbus=10, outbus=0, predelay=0.08, speed=0.05, depth=0.1, ph_diff=0.5;
	var in, sig, modulators, numDelays = 12;
	in = In.ar(inbus, 1) * numDelays.reciprocal;
	modulators = Array.fill(numDelays, {arg i;
      	LFPar.kr(speed * rrand(0.94, 1.06), ph_diff * i, depth, predelay);}); 
	sig = DelayC.ar(in, 0.5, modulators);  
	sig = sig.sum; //Mix(sig); 
	Out.ar(outbus, sig!2); // output in stereo
}).add


// try it with audio in
{Out.ar(10, AudioIn.ar(1))}.play(addAction:\addToHead)
// or a buffer:
b = Buffer.read(s, "sounds/a11wlk01.wav"); // replace this sound with a nice sounding one !!!
{Out.ar(10, PlayBuf.ar(1, b.bufnum, BufRateScale.kr(b.bufnum), loop:1))}.play(addAction:\addToHead)

a = Synth(\chorus, addAction:\addToTail)
a.set(\predelay, 0.02);
a.set(\speed, 0.22);
a.set(\depth, 0.5);