SuperCollider is one of the most expressive, elegant and popular programming languages used in today’s computer music. It has become well known for its fantastic sound, strong object oriented design, useful primitives for musical composition, and openness in terms of aims and objectives. Unlike much commercial software, one of the key design ideas of SuperCollider is to be a tabula rasa, a blank page, that allows musicians to design their own musical compositions or instruments with as few technological constraints as possible. There are other environments that operate in a similar space, such as Pure Data, CSound, Max/MSP, ChucK, Extempore, and now JavaScript’s Web Audio, but I believe that SuperCollider excels at important aspects required for such work, and for this reason it has been my key platform for musical composition and instrument design since 2001, although I have enjoyed working in other environments as well.
This book is an outcome of teaching SuperCollider in various British higher education institutions since 2005, in particular at the Digital Music and Sound Arts programme at the University of Brighton, Music Informatics at the University of Sussex, Sonic Arts at Middlesex University and Music Informatics at the University of Westminster. Lacking the ideal course book, I created a tutorial that I’ve used for teaching synthesis and algorithmic composition in SuperCollider. The tutorial’s focus was not on teaching SuperCollider as a programming language, but to explore key concepts, from different synthesis techniques and algorithmic composition to user interfacing with graphical user interfaces or hardware. I have subsequently used this tutorial in diverse workshops given around the world, from Istanbul to Reykjavik; from Madrid to Rovaniemi.
An earlier version of this book was published on the DVD of The SuperCollider Book published by MIT Press. The SuperCollider Book is an excellent source for the study of SuperCollider and is highly recommended, but it has different aims than this current book: it goes deeper into more specific areas, whilst the current book aims to present a smoother introduction, a general overview, and a specific focus on practical uses of SuperCollider. The original tutorial was initially released as .sc files, then moved over to the new .scd document format, and finally ported to the .html format that became the standard help file format of SuperCollider. SuperCollider has since gained a new and fantastic documentation format, which can be studied by exploring the built-in documentation system. With this updated tutorial, however, I have decided to port the material into a more modern ebook format that is accessible to readers on different operating systems. I have chosen Lean Publishing as the publication platform for this rewriting, as I can write the book in the attractive markdown format and use GitHub for revision control. Furthermore, I can publish the book ad hoc, get real-time feedback from readers, and disseminate the book in the typical modern ebook formats appropriate to most ebook readers.
The aim of this book is the same as my initial tutorials written in 2005, i.e., to serve as a good undergraduate introduction to SuperCollider programming, audio synthesis, algorithmic composition, and interface building of innovative creative audio systems. I do hope that my past and future students will find this work useful, and I sincerely hope that it is also beneficial to anyone who decides to embark upon the exciting expedition into the fantastic and enticing world of SuperCollider: the ideal workshop for people whose creative material is sound.
I encourage any reader who finds bugs or errors, or who would simply like a better explanation of a topic, to give me feedback through this book’s Discord server.
Brighton, June 2013 - Reykjavik, April 2021.
Introduction
Embarking upon learning SuperCollider can seem daunting at first. The software environment the user is faced with might look confusing, but rest assured that this feeling will quickly be overcome. Learning SuperCollider is in many ways similar to learning an acoustic instrument: it takes practice to become good at it. However, it should be noted right at the beginning that becoming a code-ninja in SuperCollider need not necessarily be the goal. Indeed, one can write some very good music knowing only a few chords on a guitar or the piano!
The SuperCollider IDE (Integrated Development Environment) is the same on the Linux, Mac, and Windows operating systems. There might be minor differences, but it looks roughly like this (and this picture contains some labels for further explanation):
You will see a coding window on the left, a documentation window, and a post window where the SuperCollider language informs you what it is up to. So let’s dive straight into the first exercise, the famous “Hello World” print into a console (the post window). Simply type “Hello world”.postln; (or “Hola Mundo”.postln; if you like) into the coding window, highlight that text and hit Shift + Return (or go to the Language menu and select “Evaluate Selection or Line”). If you look at the post window, a “Hello World” has been posted there. Now try to write the same with a spelling mistake, such as “Hello World”.possstln; and you will see an error message appearing.
SuperCollider is case sensitive, which means that it understands “SinOsc” but has no clue what “Sinosc” means. You will also notice the semicolon (;) at the end of every line written. This is for the SuperCollider interpreter (the language parser) to understand that the current line has ended. The SuperCollider environment consists of three different elements (or processes): the IDE (the text editor that you see in front of you), the language (sclang that is the programming language), and the synth (the audio server that will generate the sound). Later chapters will explain how different languages (such as Java, C++, or Pd) can communicate to the audio server.
Now, let’s dive straight into making some sound, as that’s really the reason you are reading this book. First boot the audio server from the Server menu and then type:
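{SinOsc.ar(440)}.play;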
into the code window. Now evaluate that line (and if your code is only one line, you can simply place the cursor somewhere in that line and hit SHIFT+RETURN). You will hear a sound in the left speaker of your system (yes all oscillators are mono by nature). It might be loud, and you will need to stop it. Hit CMD+. or CTRL+. to stop the sound. There is also a menu item in the Language menu to stop the sound, but it is recommended that you simply write these key commands into the motor memory of your fingers.
Let us play a little with this code (hit Cmd/Ctrl+period (Cmd+.) to stop the sound after every line):
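{SinOsc.ar(880)}.play; // an 880 Hz sine tone, amplitude 1 by default
{SinOsc.ar(880, 0, 0.5)}.play; // the same tone at half the amplitude
{SinOsc.ar(880, 0, SinOsc.ar(1))}.play; // one oscillator multiplying the output of another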
What happened here? We are listening to a sine wave oscillator of 880 Hz, or cycles per second (cps). The sine wave oscillator is one of many elements of what is called, in most sound programming languages, a “unit generator”. A UGen outputs samples according to a specific algorithm, depending upon the desired waveform or filter functionality, so a SinOsc will output samples in a different way than a Saw. Furthermore, in the code above we are using the output of one oscillator to multiply the parameters of another. But the question arises: which parameters? What is that comma after 880 and the stuff appearing after it?
Finally, what we have listened to is a sine wave of 880 Hz, with respective amplitudes of 1 and 0.5. And this is important: signals sent to the sound card of your computer typically consist of samples with values between −1 and 1 in amplitude. If the signal goes above 1 or below −1, you typically get what is called “clipping” and the sound most likely becomes distorted.
You might also have noticed the information given to you at the bottom of the IDE, that you have used a number of ugens (u), synths (s), groups (g), and SynthDefs (d). This will be explained in the following chapters, but for now: congratulations on having made some sound in SuperCollider!
About the Installation
You have now installed and explored SuperCollider on your system. This book does not cover how to install SuperCollider on the different operating systems, but we should note that on any SuperCollider installation, a user specific area is created where you can install your classes, find the synth definitions you have created, and install SC plugins. This is in your user directory, which can be found by running the line Platform.userExtensionDir (the path will be posted in the post window). For example, on the Mac: ~/Library/Application Support/SuperCollider
Part I
Chapter 1 - The SuperCollider language
This chapter will introduce the fundamentals for creating and running a simple SuperCollider program. It will introduce the basic concepts needed for further exploration. We will learn the basic key orientation practices of SuperCollider, that is, how to run code, post into the post window, and use the documentation system. We will also discuss the key fundamental things needed to understand and write SuperCollider code, namely: variables, arrays, functions and basic data flow syntax. Having grasped the topics introduced in this chapter, you should be able to write practically anything that you want, although later we will go into Object Orientated Programming, which will make things considerably more effective and perhaps easier.
The semicolon, brackets and running a program
The semicolon “;” is what divides one instruction from the next. It defines a line of code. After the semicolon, the interpreter looks at the next line. There has to be a semicolon after each line of code; forgetting it will give you errors printed in the post console.
This code will work fine if you evaluate only this line:
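"hello there".postln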
But not this, if you evaluate both lines (by highlighting both and evaluating them with Shift+Return):
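"hello there".postln
"with another line".postln;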
Why not? Because the interpreter (the SC language) will not understand
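"hello there".postln "with another line".postln;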
However, this will work:
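"hello there".postln;
"with another line".postln;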
It is up to you how you format your code, but you’d typically want to keep it readable for yourself in the future and other readers too. There is however a style of SC coding used for Tweeting, where the 140 character limit introduces interesting constraints for composers. Below is a Twitter composition by Tim Walters, but as you can see, it is not good for human readability although it sounds good (The language doesn’t care about human readability, but we do):
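// Tim Walters' sc140 tweet, reproduced here as commonly circulated:
play{HPF.ar(({|k|({|i|SinOsc.ar(i/96,Saw.ar(2**(i+k))/98)}!16).product}!6).sum/2,40)}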
It can get tiring having to select many lines of code, and here is where brackets come in handy, as they create a scope for the interpreter. So the following code:
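var freq;
freq = 440;
{SinOsc.ar(freq)}.play;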
will not work unless you highlight all three lines. Imagine if these were 100 lines: you would have to do some tedious scrolling up and down the document. So using brackets, you can simply double click after or before a bracket, and it will highlight all the text between the matching brackets.
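(
var freq;
freq = 440;
{SinOsc.ar(freq)}.play;
)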
Matching brackets
Often when writing SuperCollider code, you will experience errors whose origin you can’t figure out. Double clicking between brackets and observing whether they match properly is one of the key methods of debugging SuperCollider code.
The following will not work. Why not? Look at the post window.
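(
{SinOsc.ar(440}.play;
)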
Note that the • sign is where the interpreter finds the error.
The post window
You have already posted into the post window (many other languages use a “print” and “println” for this purpose). But let’s explore the post window a little further.
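"hello".postln; // posts hello (and returns the string itself)
"hello".post; // posts without a line break
[1, 2, 3].postln; // any object can be posted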
You can also use postf:
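"the frequency is % and the amplitude is %\n".postf(440, 0.5);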
If you are posting a long list you might not get the whole content using .postln, as SC is lazy and doesn’t like printing too long data structures, like lists.
For this purpose use the following:
Example
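Post << Array.fill(1001, { 100.rand }); // streams every single item to the post window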
Whereas,
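Array.fill(1001, { 100.rand }).postln; // posts the array, but truncates it at some point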
The Documentation system (The help system)
The documentation system in SuperCollider is a good source for information and learning. It includes introduction tutorials, overviews and documentation for almost every class in SuperCollider. The documentation files typically contain examples of how to use the specific class/UGen, and thus serve as a great source for learning and understanding. Many SC users go straight into the documentation when they start writing code, using it as a template and copy-pasting the examples into their projects.
So if you highlight the word Array in an SC document and hit Cmd+d or Ctrl+d (d for documentation), you will get the documentation for that class. You will see the superclasses/subclasses and learn about all the methods that the Array class has. With no text highlighted, you can search the documentation by hitting Cmd+D (capital d) and you will get a menu asking “Search documentation for” where you can type in your item, such as “LFNoise0”.
Also, if you want to read and browse all the documentation, you can open a help browser: Help.gui.
Comments
Comments are information written for humans, but ignored by the language interpreter. It is a good practice to write comments where you think you might forget what a certain block of code will do. It is also a communication to another programmer who might read your code. Feel free to write as many comments as you want, but often it might be a better practice to name your variables and function names (we’ll learn later in this section what these words mean) such that you don’t need to add a comment.
Comments are red by default, but can be any colour (in the Format menu choose ‘syntax colorize’)
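// this is a single-line comment
/* and this is
a multi-line comment */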
Variables
Here is a mantra to memorise: Variables are containers of some value. They are names or references to values that could change (their value can vary). So we could create a variable that is a property of yourself called age. Every year this variable will increase by one integer (a whole number). So let us try this now:
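(
var age;
age = 33;
age = age + 1; // a year passes
age.postln; // -> 34
)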
SuperCollider is not strongly typed, so there is no need to declare the data type of variables. Data types (in other languages) include: integer, float, double, string, custom objects, etc. In SuperCollider you can create a variable that contains an integer at one stage, and later a reference to a string or a float. This can be handy, but one has to be careful as it can introduce bugs into your code.
Above we created a variable ‘age’, and we declared it by writing ‘var’ in front of it. All variables have to be declared before you can use them, with two exceptions: the single lowercase letters a to z (note that ‘s’ is a special variable that is by default used as a reference to the SC server) can be used without declaration, and so can so-called environmental variables (which can be considered global variables within a certain context), which start with the ‘~’ symbol. More on those later.
SuperCollider has scope, so if you declare a variable within a certain scope, such as a function, they can have a local value within that scope. So try to run this code (by double clicking behind the first bracket).
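a = 44; // the interpreter variable 'a', global in scope
(
var a; // a local 'a', only valid inside these brackets
a = 22;
a.postln; // -> 22
)
a.postln; // -> 44, the global 'a' was not overridden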
The value of ‘a’ will be from the code block above this one. So ‘a’ is a global variable, but because you declared an ‘a’ with var inside a scope (the brackets), the global variable was not overridden. This is good for prototyping and testing, but not recommended as good software design. A variable with the name ‘myvar’ could not be global – only single lowercase characters can.
If we want longer variable names, we can use environmental variables (using the ~ symbol); they can be seen as global variables, accessible from anywhere in your code:
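~myfreq = 440;
~myfreq.postln; // works anywhere in your code, no declaration needed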
But typically we just declare the variable (var) in the beginning of the program and assign its value where needed. Environmental variables are not necessary, although they can be useful, and this book will not use them extensively.
But why use variables at all? Why not simply write the numbers or the value wherever we need it? Let’s take one example that should demonstrate clearly why they are useful:
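(
var freq = 440;
{
	// the variable is used in three places: left channel, right channel and filter cutoff
	LPF.ar(Saw.ar([freq, freq + 2], 0.2), freq * 4)
}.play;
)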
As you can see, the ‘freq’ variable is used in various places in the above synthesizer. You can now change the value of the variable to something like 500, and the frequency will ‘automatically’ become 500 Hz in the left channel, 502 Hz in the right, and the cutoff frequency 2000 Hz. So instead of changing these values throughout the code, you change the variable in one place and its value is magically plugged into every location where it is used.
Functions
Functions are an important feature of SuperCollider and most other programming languages. They are used to encapsulate algorithms or functionality that we only want to write once, but use in different places at various times. They can be seen as a black box or a factory that takes some input, parses it, and returns some output. Just as a sophisticated coffee machine might take coffee beans and water as input, it then grinds the beans, boils the water, brews the coffee, and finally outputs a lovely drink. The key point is that you don’t need (or want) to know precisely how all this happens. It is enough to know where to fill up the beans and water, and then how to operate the buttons of the machine (strength, number of cups, etc.). The coffee machine is a [black box](http://en.wikipedia.org/wiki/Black_box).
Functions in SuperCollider are notated with curly brackets ‘{}’
Let’s create a function that posts the value of 44. We store it in a variable ‘f’, so we can call it later.
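f = { 44.postln };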
When you run this line of code, you see that the SuperCollider post window notifies you that it has been given a function. It does not post 44 into the post window. For that we have to call the function, i.e., to ask it to perform its calculation and return some value to us.
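f.value; // -> 44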
Let us write a more complex function:
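f = { ((220/440).log2 * 12) + 69 }; // the MIDI note of 220 Hz
f.value; // -> 57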
This is a typical function that calculates the MIDI note of a given frequency in Hz (or cycles per second). Most electronic musicians know that MIDI note 60 is C, that 69 is A, and that A is 440 Hz. But how is this calculated? Well, the function above returns the MIDI note of 220 Hz. But this is a function without any input (or argument, as it is called in the lingo). Let’s open up this input channel by drilling a hole into the black box, and let’s name this argument ‘freq’, as that’s what we want to put in.
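f = { arg freq; ((freq/440).log2 * 12) + 69 };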
We now have an input into our function: an argument named ‘freq’. Note that this argument has been put into the right position inside the calculation. We can now put in any frequency and get the relevant MIDI note.
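f.value(440); // -> 69
f.value(220); // -> 57
f.value(261.6256); // -> roughly 60, middle C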
The above is a good example of why functions are so great. The algorithm of calculating the MIDI note from frequency is somewhat complex (or nasty?), and we don’t really want to memorise it or write it more than once. We have simply created a black box that we put in to the ‘f’ variable and now we can call it whenever we want without knowing what is inside the black box.
We will be using functions all the time in the coming chapters. It is vital to understand how they receive arguments, process the data, and return a value.
The final thing to say about functions at this stage is that they can have default values in their arguments. This means that we don’t have to pass in all the arguments of the function.
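f = { arg pay, tax = 20; pay - (pay * tax * 0.01) };
f.value(1000); // -> 800.0, using the default 20% tax
f.value(1000, 35); // -> 650.0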
So here above is a function that calculates the pay after tax, with the default tax rate set at 20%. Of course we can’t be sure that this is the tax rate forever, or in different countries, so this needs to be an argument that can be set in the different contexts.
You will see the following
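f = { arg name; ("hello " ++ name).postln };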
Often written in this form:
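f = { |name| ("hello " ++ name).postln };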
Arrays, Lists and Dictionaries
Arrays are one of the most useful things to understand and use in computer music. This is where we can store a bunch of data (whether pitches, scales, synths, or any other information you might want to reference). A common thing a novice programmer typically does is to create lots of variables for data that could be stored in an array, so let’s dive straight into learning how to use arrays and lists.
An array can be seen as a storage space for things that you need to use later. Like a bag or a box where you keep your things. We typically keep the reference to the array in a variable so we can access it anywhere in our code:
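a = [11, 22, 33, 44, 55]; // an array with five items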
You will see that the post window posts the array there when you run this line. Now let us try to play a little with the array:
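a[0]; // -> 11, the item in the first slot
a[4]; // -> 55
a.reverse; // -> [55, 44, 33, 22, 11]
a.maxItem; // -> 55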
and so on. The array we created above had five defined items in it. But we can create arrays differently, filling them algorithmically with any data we’re interested in:
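a = Array.fill(5, { 100.rand }); // five different random numbers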
What happened here is that we tell the Array class to fill a new array with five items, but then we pass it a function (introduced above) and the function will be evaluated five times. Compare that with:
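a = Array.fill(5, 100.rand); // the same random number in all five slots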
We can now play a little bit with that function that we pass to the array creation:
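a = Array.fill(5, { arg i; i * 11 }); // -> [0, 11, 22, 33, 44]
a = Array.fill(5, { arg i; (i + 60).midicps }); // the frequencies of MIDI notes 60 to 64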
You might wonder why this is so fantastic or important. The fact is that arrays are used everywhere in computer music. The sound file you will load later in this book will be stored in an array, with each sample in its own slot. Then you can jump back and forth in the array, scratching, cutting, break beating or whatever you would like to do, but the fact is that this is all done with data (the samples of your soundfile) stored in an array. Or perhaps you want to play a certain scale.
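m = [0, 2, 3, 5, 7, 8, 10]; // the minor scale, in semitones from the root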
m is here an array with the following values: [ 0, 2, 3, 5, 7, 8, 10 ]. So in a C scale, 0 would be C, 2 would be D (two half notes above C), 3 would be E flat, and so on. We could represent those values as MIDI notes, where 60 is the C note (~ 261Hz). And we could even look at the actual frequencies in Hertz of those MIDI notes. (Those frequencies would be passed to the oscillators as they are expecting frequencies and not MIDI notes as arguments).
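m + 60; // the minor scale as MIDI notes, starting on middle C
(m + 60).midicps; // and the same notes as frequencies in Hz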
We could now play with the ‘m’ array a little. In an algorithmic composition, for example, you might want to pick a random note from the minor scale
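m.choose; // pick a random note from the scale
x = m.scramble; // a scrambled version of the scale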
You will note that in ‘x = m.scramble’ above, the ‘x’ variable contains an array with a scrambled version of the ‘m’ array. The ‘m’ array is still intact: you haven’t scrambled that one, you’ve simply said “put a scrambled version of ‘m’ into variable ‘x’.” So the original ‘m’ is still there. If you really wanted to scramble ‘m’ you would have to do:
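m = m.scramble; // now 'm' itself holds the scrambled scale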
Arrays can contain anything, and in SuperCollider, they can contain values of mixed types, such as integers, strings, floats, and so on.
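a = [1, "two", 3.33, \four, [5, 5]]; // integers, strings, floats, symbols, even arrays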
Arrays can contain other arrays, containing other arrays of any dimensions.
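a = [m, m + 12]; // an array of two arrays: the scale, and the scale an octave above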
Above we added 12 to the minor scale.
Lists
Arrays are of a fixed size, so adding an item to an array returns a new array rather than growing the old one. It is here that the List class becomes useful.
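l = List.new;
l.add(100.rand); // the list grows every time we add to it
l.add(100.rand);
l.postln;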
Lists are like arrays - and implement many of the same methods - but they are slightly more expensive than arrays. In the example above you could simply do ‘a = a.add(100.rand)’ if ‘a’ was an array, but many people like lists for reasons we will discuss later.
Dictionaries
A dictionary is a collection of items where keys are mapped to values. Here, keys are keywords that are identifiers for slots in the collection. You can think of this like names for values. This can be quite useful. Let’s explore two examples:
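a = Dictionary.new;
a.put(\C, 60); // the key \C maps to the value 60
a.put(\D, 62);
a.at(\C); // -> 60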
Imagine how you would do this with an Array. One way would be
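a = [\C, 60, \D, 62];
a[a.indexOf(\C) + 1]; // -> 60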
but using an array you need to keep track of how things are organised and indexed.
Another Dictionary example:
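b = Dictionary.new;
b.put(\major, [0, 2, 4, 5, 7, 9, 11]);
b.put(\minor, [0, 2, 3, 5, 7, 8, 10]);
b.at(\minor); // -> [0, 2, 3, 5, 7, 8, 10]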
Methods?
We have now seen things such as 100.rand and a.reverse. How do .rand and .reverse work? Well, SuperCollider is an Object Orientated language and these are methods of the respective classes. So an integer (like 100) has methods like .rand, .midicps, or .neg. It does not have a .reverse method. Why not? Because you can’t reverse a number. However, an array (like [11,22,33,44,55]) can be reversed or added to. We will explore this later in the chapter about Object Orientated programming in SC, but for now it is enough to think that the object (an instantiation of the class) has relevant methods. Or to use an analogy: let’s say we have a class called Car. This class is the information needed to build the car. When we build a Car, we instantiate the class and we have an actual Car. This car can then have some methods, for instance: start, drive, turn, putWipersOn. And these methods could have arguments, like speed(60), or turn(-60). You could think about the object as the noun, the method as the verb, and the argument as the adverb. (As in: John (object) walks (method) fast (adverb)).
So to really understand a class like Array or List you need to read the documentation and explore the methods available. Note also that Array is subclassing (or getting methods from) its superclass, the ArrayedCollection class. This means that it has all the methods of its superclass. Like a class “Car” might have a superclass called “Vehicle” of which a “Motorbike” would also be a subclass (a sibling to “Car”). You can explore this by peeking under the hood of SC a little:
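Array.superclass; // -> ArrayedCollection
Array.superclasses; // all the superclasses, up to Object
Array.dumpFullInterface; // all methods, including the inherited ones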
You can see that the .dumpFullInterface method will tell you all the methods Array inherits from its superclasses.
Now, this might give you a bit of a brainache, but don’t worry: you will gradually learn this terminology and what it means for your musical or sound practice with SuperCollider. Wikipedia is a good place to start reading about [Object Oriented Programming](https://en.wikipedia.org/wiki/Object-oriented_programming).
Conditionals, data flow and control
The final thing we should discuss before we start to make sounds with SuperCollider is how we control data and take decisions. This is about logic, about human thinking, and how to encode such decisions in the form of code. Such logic is the basis of all clever systems, for example in artificial intelligence. In short, it is about establishing conditions and then deciding what to do with them. For example: if it is raining and I’m going out, I take my umbrella with me; else I leave it at home. It’s the kind of basic logic that humans apply all the time throughout the day. And programming languages have ways to formalise such conditions, most typically with an if-else statement.
In pseudocode it looks like this:
if( condition, { then do this }, { else do this });
as in:
if( rain, { umbrella }, { no umbrella });
So the condition represents a state that is either true or false. If it is true (there is rain), it evaluates the first function; if false (no rain), it evaluates the second function.
Another form is a simple if statement where you don’t need to specify what to do if it’s false:
if( hungry, { eat } );
So let’s play with this:
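true; // -> true
false; // -> false
if( true, { "it was true".postln }, { "it was false".postln } );
if( false, { "it was true".postln }, { "it was false".postln } );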
You can see that true and false are keywords in SuperCollider. They are so called Boolean values. You should not use those as variables (well, you can’t). In digital systems, we operate in binary code, in 1s and 0s. True is associated with 1 and false with 0.
Boolean logic is named after George Boole who wrote an important paper in 1848 (“The Calculus of Logic”) on expressions and reasoning. In short it involves the operators AND, OR, and NOT.
A simple Boolean truth table might look like this
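true AND true = true
true AND false = false
false AND false = false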
And also
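true OR true = true
true OR false = true
false OR false = false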
etc. Let’s try this in SuperCollider code and observe the post window. But first we need to learn the basic syntax for the Boolean operators:
== stands for equal
!= stands for not equal
&& stands for and
|| stands for or
And we also use comparison operators
“>” stands for more than
“<” stands for less than
“>=” stands for more than or equal to
“<=” stands for less than or equal to
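3 == 3; // -> true
3 != 4; // -> true
3 > 4; // -> false
(3 < 4) && (3 > 2); // -> true
(3 > 4) || (3 < 4); // -> true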
You might not realise it yet, but knowing what you now know is very powerful and it is something you will use all the time for synthesis, algorithmic composition, instrument building, sound installations, and so on. So make sure that you understand this properly. Let’s play with this a bit more in if-statements:
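if( 3 == 3, { "equal".postln }, { "not equal".postln } );
if( (3 < 4) && (true != false), { "both conditions are true".postln }, { "at least one condition is false".postln } );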
What happened in that last statement? It asks: is 3 less than 4? Yes. AND is true not equal to false? Yes. Both conditions are true, and that’s what it posts. Note that the values in the string (inside the quotation marks) could of course be anything; we’re just posting here. So you could write:
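if( (3 < 4) && (true != false), { "ambas condiciones son verdaderas".postln } );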
in Spanish if you’d like, but you could not write this:
verdad == verdad
as the SuperCollider language is in English.
But what if you have lots of conditions to compare? Here you could use a switch statement:
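(
a = 4.rand;
switch(a,
	0, { "a is zero".postln },
	1, { "a is one".postln },
	2, { "a is two".postln },
	3, { "a is three".postln }
);
)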
Another way is to use the case statement, and it might be faster than the switch.
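(
a = 4.rand;
case
	{ a == 0 } { "a is zero".postln }
	{ a == 1 } { "a is one".postln }
	{ a < 3 } { "a must be two".postln }
	{ true } { "a is three".postln };
)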
Note that both in switch and case, the semicolon comes only after the last testing condition (so the evaluation runs from “case” all the way down to that semicolon).
Looping and iterating
The final thing we need to learn in this chapter is looping. Looping is one of the key tricks used in programming. Say we want to generate 1000 synths at once. It would be tedious to write and evaluate 1000 lines of code one after another, but it’s easy to loop one line of code 1000 times!
In many programming languages this is done with a [for-loop](http://en.wikipedia.org/wiki/For_loop):
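// C-style pseudocode, not SuperCollider:
for(i = 0; i < 10; i = i + 1) {
	print(i);
}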
The above code will work in Java, C, JavaScript and many other languages. But SuperCollider is a fully object orientated language where everything is an object, and objects can have methods - so an integer has methods like .neg or .midicps, but also .do (which is our loop).
So in SuperCollider we can simply do:
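10.do({ "SuperCollider is fun".scramble.postln });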
What happened is that it loops through the command 10 times and evaluates the function (which scrambles and posts the string we wrote) every time. We could then make a counter:
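(
c = 0; // our counter
10.do({
	c = c + 1;
	("the counter is now: " ++ c).postln;
});
)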
But instead of such counter we can use the argument passed into the function in a loop:
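10.do({ arg counter; ("the counter is now: " ++ counter).postln });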
Let’s now try to make a small program that gives us all the prime numbers from 0 to 10000. There is a method of the Integer class that is called isPrime which comes in handy here. We will use many of the things learned in this chapter, i.e., creating a List, making a do loop with a function that has a iterator argument, and then we’ll ask if the iterator is a prime number, using an if-statement. If it is (i.e. true), we add it to the list. Finally we post the result to the post window. But note that we’re only posting after we’ve done the 10000 iterations.
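(
p = List.new;
10000.do({ arg i;
	if( i.isPrime, { p.add(i) });
});
p.postln;
)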
We can also loop through an Array or a List. The do-loop will then pick up all the items of the array and pass them into the function that you write inside the do loop. Additionally, it will add an iterator. So we have two arguments to the function:
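[11, 22, 33, 44, 55].do({ arg item, i;
	("slot " ++ i ++ " contains: " ++ item).postln;
});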
So it posts the slot (the counter/iterator always starts at zero), and the item in the list. You can call the arguments whatever you want of course. Example:
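[11, 22, 33, 44, 55].do({ arg thing, slot;
	("slot " ++ slot ++ " contains: " ++ thing).postln;
});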
Another looping technique is to use the for-loop:
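for(3, 7, { arg i; i.postln }); // posts 3, 4, 5, 6 and 7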
We might also want to use the forBy-loop:
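forBy(0, 10, 2, { arg i; i.postln }); // posts 0, 2, 4, 6, 8 and 10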
While is another type of loop:
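(
i = 0;
while({ i < 5 }, {
	i = i + 1;
	("i is now: " ++ i).postln;
});
)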
This is enough about the language. Now is the time to dive into making sounds and explore the synthesis capabilities of SuperCollider. But first let us learn some tricks of peeking under the hood of the SuperCollider language:
Peeking under the hood
Each UGen or Class in SuperCollider has a class definition in a class file. These files are compiled every time SuperCollider is started and become the application environment we are using. SC is an “interpreted” language (as opposed to a “compiled” language like C or C++). If you add a new class to SuperCollider, you need to recompile the language (there is a menu item for that), or simply restart SuperCollider.
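// for example, we can ask where a class is defined:
SinOsc.filenameSymbol.postln;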
Chapter 2 - The SuperCollider Server
The SuperCollider Server, or SC Synth as it is also known, is an elegant and great sounding audio engine. As mentioned earlier, SuperCollider is traditionally separated between a server and a client, that is, an audio server (the SC Synth) and the SuperCollider language client (sc-lang). When the server is booted, it connects to the default audio device (such as internal or external audio cards), but you can set it to any audio device available to your computer (for example using virtual audio routing software like Jack).
The SC Synth renders audio and has an elegant structure of Busses, Groups, Synths and UGens, and it works a bit like a modular synth, where the output of a certain chain of oscillators and filters can be routed into another module. The audio is created by building graphs called synth definitions, or SynthDefs. These are definitions of synths, but in a wide sense, as they can do practically anything audio related (for example performing audio analysis rather than synthesis).
The SC Synth is a program that runs independently from the SuperCollider IDE or language. You can use any software to control it, like C/C++, Java, Python, Lua, Pure Data, Max/MSP or any other.
This chapter will introduce the SuperCollider server for the most basic purposes of getting started with this amazing engine for audio work. This section will be fundamental for the succeeding chapters.
Booting the Server
When you “boot the server”, you are basically starting a new process on your computer that does not have a GUI (Graphical User Interface). If you observe the list of running processes of your computer, you will see that when you boot the server, a new process will appear (try typing “top” into a Unix Terminal). The server can be booted through a menu command (Menu-> Server -> Boot Server), or through code (s.boot). You can also boot it from the command line if you know where the server is on your system, as it is independent of the SuperCollider application.
We can explore creating our own servers with specific ports and IP addresses:
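n = NetAddr("127.0.0.1", 57200); // an IP address and a port
t = Server(\myServer, n); // a new server at that address
t.boot;
t.makeWindow; // a window to monitor the server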
From the above you might start to think about possibilities of having the server running on a remote computer with various clients communicating with it over a network, and yes, that is precisely one of the innovative ideas of SuperCollider 3. You could put any server (with a remote IP address and port) into your server variable and communicate with it over a network. Or have many servers on diverse computers, instructing each of them to render audio. All this is common in SuperCollider practice, but the most common setup is using the SuperCollider IDE to write SC Lang code to control a localhost audio server (localhost meaning “on the same computer”). And that is what we will focus on for a while.
The Unit Generators
Unit Generators have been the key building blocks of digital synthesis systems since Max Mathews’ Music N systems in the 1960s. Written in C++ and compiled as plugins for the SC Server, they encapsulate complex calculations into a simple black box that returns to us - the synth builders or musicians - what we are after, namely an output that could be in the form of a wave or a filter. The Unit Generators, or UGens as they are commonly called, are modular, and the output of one can be the input of another. You can think of them like units in a modular synthesizer, for example the Moog:
UGens typically have audio rate (.ar) and control rate (.kr) methods. Some have initialization rate as well. The difference is that an audio rate UGen outputs as many samples per second as the sample rate: a computer with a 44.1 kHz sample rate requires each audio rate UGen to calculate 44100 samples per second. The control rate is much lower than the sample rate, and gives the synth designer the possibility of saving computational power (or CPU cycles) if used wisely.
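{ SinOsc.ar(440, 0, 0.5) }.play; // audio rate: we can hear this
{ SinOsc.kr(440, 0, 0.5) }.play; // control rate: running fine, but inaudible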
The control rate SinOsc is inaudible, but it is running fine on the server. We use control rate UGens to control other UGens, for example frequency, amplitude, or filter frequency. Let’s explore that a little:
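// a control rate oscillator slowly modulating the frequency of an audible one:
{ SinOsc.ar(440 + SinOsc.kr(2, 0, 40), 0, 0.5) }.play;
// and here controlling the amplitude:
{ SinOsc.ar(440, 0, SinOsc.kr(3).range(0, 0.5)) }.play;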
The beauty of UGens is how one can connect the output of one to the input of another. Oscillator UGens typically output values between -1 and 1, in a certain pattern (e.g., sine wave, saw wave, or square wave) and in a certain frequency. Other UGens such as filters or FFT processing do calculations on an incoming signal and output a new signal. Let’s explore one more example of connected UGens that demonstrates their modular power:
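(
{
	var a, b, c;
	b = SinOsc.ar(220, 0, 0.5); // one oscillator
	c = Saw.ar(223, 0.5); // another one, slightly out of tune
	a = SinOsc.kr(2).range(0, 1); // a slow control rate oscillator
	(b + c) * a * 0.5; // the sum of b and c, amplitude modulated by a
}.play;
)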
This is a simple case study of how UGens can be added (b+c) and used in any calculation of the signal (such as a*0.5 - an amplitude modulation, creating a tremolo effect). For a bit of fun, let’s use a microphone and make a little effect on your voice:
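// NOTE: use headphones to prevent feedback
{ SoundIn.ar(0) + AllpassC.ar(SoundIn.ar(0), 1, 0.2, 3) }.play;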
A good way to explore UGens is to browse them in the documentation.
The SynthDef
Above we explored UGens by wrapping them in a function and calling .play on that function ({}.play). What this does is turn the function (indicated by {}, as we learned in chapter 1) into a synth definition that is sent to the server and then played. The {}.play (or Function:play, if you want to peek into the source code – by highlighting “Function:play” and hitting Cmd+I – and explore how SC compiles the function into a SynthDef under the hood) is how many people sketch sound in SuperCollider and it’s good for demonstration purposes, but for all real synth building, we need to create a synth definition, or a SynthDef.
A SynthDef is a pre-compiled graph of unit generators. This graph is written to a binary file and sent to the server over OSC (Open Sound Control - See chapter 4). This file is stored in the “synthdefs” folder on your system. In a way you could see it as your own VST plugin for SuperCollider, and you don’t need the source code for it to work (although it does not make sense to throw that away).
It is recommended that the SynthDef help file is read carefully and properly understood. The SynthDef is a key class of SuperCollider and very important. It adds synths to the server or writes synth definition files to the disk, amongst many other things. Let’s start by exploring how we can turn a unit generator graph function into a synth definition:
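SynthDef(\mysaw, {
	Out.ar(0, Saw.ar(440, 0.2));
}).add;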
You notice that we have done two things: given the function a name (\mysaw), and we’ve wrapped our saw wave in an ‘Out’ UGen which defines which ‘Bus’ the audio is sent to. If you have an 8 channel sound card, you could send audio to any bus from 0 to 7. You could also send it to bus number 20, but we would not be able to hear it then. However, we could put another synth there that routes the audio back onto audio card busses, for example 0-7.
NOTE: There is a difference in the Function-play code and the SynthDef, in that
we need the Out Ugen in a synth definition to tell the server
which audiobus the sound should go out of. (0 is left, 1 is right)
But back to our SynthDef, we can now try to instantiate it, and create a Synth. (A Synth is an instantiation (child) of a SynthDef). This synth can then be controlled if we reference it with a variable.
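a = Synth(\mysaw); // instantiate the synth definition
a.free; // and remove it from the server again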
This is obviously not a very interesting synth. It is ‘hardcoded’, i.e., the parameters in it (such as frequency and amplitude) are static and we can’t change them. This is only done in very specific situations, as normally we would like to specify the values of our synth both when initialising the synth and after it has been started.
In order to open the SynthDef up for specified parameters and enabling it to be changed, we need to put arguments into the UGen function graph. Remember in chapter 1 how we created a function with arguments:
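f = { arg a, b; a + b };
f.value(3, 4); // -> 7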
Note that you cannot write ‘f.value’, as you will get an error trying to add ‘nil’ to ‘nil’ (‘a’ and ‘b’ are both nil in the arg slots of the function). To solve that we can give them default values:
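f = { arg a = 3, b = 4; a + b };
f.value; // -> 7, using the default values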
So we add the arguments for the synthdef, and we add a Pan2 UGen that enables us to pan the sound from the left (-1) to the right (1). The centre is 0:
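SynthDef(\mysaw, { arg freq = 440, amp = 0.2, pan = 0;
	Out.ar(0, Pan2.ar(Saw.ar(freq, amp), pan));
}).add;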
This synth definition could be written better and more understandable. Let’s say we were to add a filter to the synth, it might look like this:
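SynthDef(\mysaw, { arg freq = 440, amp = 0.2, pan = 0, cutoff = 880, rq = 0.3;
	Out.ar(0, Pan2.ar(RLPF.ar(Saw.ar(freq, amp), cutoff, rq), pan));
}).add;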
But this is starting to be hard to read. Let us make the SynthDef easier to read (although for the computer it is the same, as it only cares about where the semicolons (;) are).
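SynthDef(\mysaw, { arg freq = 440, amp = 0.2, pan = 0, cutoff = 880, rq = 0.3;
	var signal, filter, panned;
	signal = Saw.ar(freq, amp); // the source: a saw wave
	filter = RLPF.ar(signal, cutoff, rq); // a resonant low pass filter
	panned = Pan2.ar(filter, pan); // panned to a position in the stereo field
	Out.ar(0, panned);
}).add;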
This is roughly how you will write and see other people write synth definitions from now on. The individual parts of a UGen graph are typically put into variables to be more human readable and easier to understand. The exception are SuperCollider tweets (#supercollider) where we have the 280 character limit. We can now explore the synth definition a bit more:
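a = Synth(\mysaw, [\freq, 220, \amp, 0.4]); // setting arguments on creation
a.set(\cutoff, 1200); // changing them while the synth is running
a.set(\freq, 330, \pan, -1);
a.free;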
Observing server activity (Poll, Scope and FreqScope)
SuperCollider has various ways to explore what is happening on the server, in addition to the most obvious one: sound itself. Due to the separation between the SC server and the sc-lang, this means that data has to be sent from the server and back to the language, since it’s the language that prints or displays the data. The server is just a lean mean sound machine and doesn’t care about anything else. Firstly we can try to poll (get) the data from a UGen and post it to the post window:
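{ SinOsc.ar(LFNoise2.kr(1).range(220, 880).poll, 0, 0.5) }.play;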
People often use poll to explore what is happening in the synth, to debug, or try to understand why something is not working. But it is typically not used in software that is to be shipped or used in performance as it actually takes some computing power to be sending the messages from the server to the language. Another way to explore the server state is to use scope:
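{ SinOsc.ar(440, 0, 0.5) }.scope;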
The scope shows amplitude over time, that is: the horizontal axis is time and the vertical axis is amplitude. This is often called a time-domain view of the signal. But we can also explore the frequency content of the sound, a view we call frequency-domain view. This is achieved by performing an FFT analysis of the signal which is then displayed to the scope (don’t worry, this happens ‘under the hood’ and we’ll learn about this in chapter 13). Now let’s explore the freqscope:
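FreqScope.new; // open a frequency analyser window
{ SinOsc.ar(MouseX.kr(100, 4000, 1), 0, 0.5) }.play;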
Furthermore, there is a Spectrogram Quark that shows a spectrogram view of the audio signal, but this is not part of the SuperCollider distribution. However, it’s easy to install and we will cover this in the chapter on the Quarks.
A quick intro to busses and multichannel expansion
Chapter 14 will go deeper into busses, groups, and how to route the audio signals through the SC Server. However, it is important at this stage to understand how the server works in terms of channels (or busses). Firstly, all oscillators are mono. Many newcomers to SuperCollider find it strange that they only hear a signal in their left ear when using headphones running a SinOsc. Well, it would be strange to have it in stereo, quadrophonic, 5.1 or any other format, unless we specifically ask for that! We therefore need to copy the signal into the next bus if we want stereo. The image below shows a rough sketch of how the sc synth works.
By default SuperCollider has 8 output channels, 8 input channels, and 112 private audio bus channels (where we can run effects and other things). This means that if you have an 8 channel sound card, you can send a signal out on any of the first 8 busses. If you have a 16 channel sound card, you need to enter the ServerOptions class and change the ‘numOutputBusChannels’ variable to 16. More on that later, but let’s now look at some examples:
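s.options.numOutputBusChannels = 16; // for example, for a 16 channel sound card
s.reboot; // options take effect when the server is rebooted
{ Out.ar(0, SinOsc.ar(440, 0, 0.5)) }.play; // out of the first bus (left speaker)
{ Out.ar(1, SinOsc.ar(440, 0, 0.5)) }.play; // out of the second bus (right speaker)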
It is thus clear how the busses of the server are represented by an array containing signals (as in: [signal, signal, signal, signal, etc.]). We can now take a mono signal and ‘expand’ it into other busses. This is called multichannel expansion:
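{ SinOsc.ar(440, 0, 0.5) }.play; // mono: the left speaker only
{ SinOsc.ar(440, 0, 0.5) ! 2 }.play; // the signal copied onto two busses: stereo
{ SinOsc.ar([440, 442], 0, 0.5) }.play; // an array of frequencies expands into two channels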
Enough of this. We will explore busses and audio signal routing in chapter 14 later. However, it is important to understand this at the current stage.
Getting values back to the language
As we have discussed, the SuperCollider language and server are two separate applications. They communicate through the OSC protocol. This means that the communication between the two is asynchronous, or in other words, that you can’t know precisely how long it takes for a message to arrive. Also, you would not know in which order things will happen if you were to depend on a value from the server in a code block in the language. However, if we would like to do something with audio data in the language, such as visualising it or posting it, we need to send a message to the server and wait for it to respond back. This can happen in various ways, but a typical way of doing this is to use the SendTrig UGen:
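(
{ SendTrig.kr(Impulse.kr(10), 0, LFNoise0.kr(10)) }.play;
OSCdef(\trigListener, { arg msg; msg.postln }, '/tr');
)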
What we see above is the SendTrig, sending 10 messages every second to the language (the Impulse triggers those messages). It sends a ‘/tr’ OSC message to port 57120 locally. (Don’t worry, we’ll explore this later in a chapter on OSC). The OSCdef then has a function that posts the message from the server.
A slightly more complex example might involve a GUI (Graphical User Interfaces are part of the language) and synthesis on the server:
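Here is a minimal sketch of that idea, where the server continuously reports the amplitude of the microphone input and the language displays it on a slider (the layout and mapping are assumptions):

(
var win = Window("server amplitude", Rect(100, 100, 300, 60)).front;
var slider = Slider(win, Rect(10, 10, 280, 40));
{ SendTrig.kr(Impulse.kr(20), 0, Amplitude.kr(SoundIn.ar(0))) }.play;
OSCdef(\ampListener, { arg msg;
	{ slider.value = msg[3] }.defer; // GUI updates must be deferred to the AppClock
}, '/tr');
)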
We could also write values to a control bus on the server, from which we can read in the language. Here is an example:
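b = Bus.control(s, 1); // a control bus
{ Out.kr(b.index, LFNoise0.kr(1).range(0, 1)) }.play; // write a new random value every second
b.get({ arg val; ("the bus value is: " ++ val).postln }); // read it in the language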
Check the source of Bus (by hitting Cmd+I) and locate the .get method. You will see that the Bus .get method is using an OSCresponder underneath. It is therefore “asynchronous”, meaning that it will not happen in the linear order of your code. (The language is asking server for the value, and the server then sends back to language. This takes time).
Here is a program that demonstrates the asynchronous nature of b.get. The {}.play from above has to be running. Note how the numbered lines of code appear in the post window “in the wrong order”! (Instead of a synchronous posting of 1, 2 and 3, we get the order of 1, 3 and 2). It takes between 0.1 and 10 milliseconds to get the value on a 2.8 GHz Intel computer.
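(
"1 - asking the server for the bus value".postln;
b.get({ arg val;
	("2 - the value is: " ++ val).postln; // this arrives last
});
"3 - this line runs before the server has replied".postln;
)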
This type of communication from the server to the language is not very common. The other way (from language to server) is however. This section is therefore not vital for your work in SuperCollider, but you will at some point stumble into the question of synchronous and asynchronous communication with the server and this section should prepare you for that.
ProxySpace
SuperCollider is an extremely wide and flexible language. It is profoundly deep and you will find new things to explore for years to come. Typically SC users find their own way of working in the language and then explore new areas when they need to, or are curious.
ProxySpace is one such area. It makes live coding and on-the-fly recoding extremely flexible: effects can be routed in and out of proxies, and sources changed while everything is running. Below you will find quick examples that are useful when testing UGens or making prototypes for synths that you will write as synthdefs later. ProxySpace is also often used in live coding. Evaluate the code below line by line:
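p = ProxySpace.push(s.boot); // step into a ProxySpace
~sound.play; // play a proxy placeholder - silent for now
~sound = { SinOsc.ar([440, 442], 0, 0.2) }; // give it a source
~sound = { Saw.ar([220, 221], 0.2) }; // and replace the source on the fly
~sound.free;
p.pop; // step out of the ProxySpace again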
Another ProxySpace example:
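p = ProxySpace.push(s);
~lfo = { LFNoise2.kr(4).range(400, 800) };
~saw = { Saw.ar(~lfo.kr, 0.2) }; // one proxy used inside another
~saw.play;
~lfo = { SinOsc.kr(2).range(300, 1000) }; // swap the modulator while it plays
~saw.free;
p.pop;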
Ndef
Ndef is an alternative and more dynamic way of working than using SynthDefs. Ndefs can be rewritten on the fly whilst running, and they use the same proxy system as the ProxySpace code above. Below is an example in the spirit of the one in the documentation:
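Ndef(\sound, { SinOsc.ar([440, 442], 0, 0.2) }).play;
Ndef(\sound, { Saw.ar([220, 221], 0.2) }); // redefine it while it is playing
Ndef(\sound).fadeTime = 2; // crossfade between the definitions
Ndef(\sound, { Pulse.ar([80, 81], 0.3, 0.2) });
Ndef(\sound).free;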
Chapter 3 - Controlling the Server
This chapter explores how we use the SuperCollider language to control the SC Server. From a certain perspective the server with its synth definitions can be seen as an instrument and the language as the performer or the score. The SuperCollider language is an interpreted, object orientated and functional language, written in C/C++ and inspired by Smalltalk. In many ways it is similar to Python, Ruby, Lua or JavaScript, but these are all different languages, and for good reasons: there is no point in creating a programming language that’s the same as another.
SuperCollider is a powerful language, and as its author James McCartney writes in a 2003 paper:
Different languages are based on different paradigms and lead to different types of approaches to solve a given problem. Those who use a particular computer language learn to think in that language and can see problems in terms of how a solution would look in that language. (McCartney 2003)
SuperCollider is very open and allows us to do things in multiple different ways. We could talk about different coding or compositional styles. And none are better than others. It depends on what people get used to and what practices are in line with how they already think or would like to think.
Music is a time-based art form. It is largely about scheduling events in time (which is a notational practice) or about performing those events yourself (which is an instrumental practice). SuperCollider is good for both practices and it provides the user with specific functionalities that make sense for a musical programming language, although they might seem strange in a general language. This chapter and the next will introduce diverse ways to control the server: through automated loops, through patterns, Graphical User Interfaces, and other interface protocols such as MIDI or OSC.
Tasks, Routines, forks and loops
We have learned to design synth graphs with UGens and wrap them in a SynthDef. We have started and stopped a Synth on the server, but we might ask: then what? How do we make music with SuperCollider? How do we schedule things to happen repeatedly in time?
The most basic way of scheduling things is to create a process that loops and runs the same code repeatedly. In chapter 1 we looked at the .do function (which loops N times, e.g., 10.do({arg i; i.postln;})). Such a process can count, and we can use the counter as an index into an array into which we can write anything, perhaps a melody. The problem with .do is that it can’t pause or wait, so all the events would be played at the same time (or very quickly after each other). We therefore need to wrap the .do function in a Routine. Let us look at a basic routine:
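r = Routine({
	"one".postln;
	1.wait;
	"two".postln;
	1.wait;
	"three".postln;
});
r.play;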
This could also be written as:
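{
	"one".postln;
	1.wait;
	"two".postln;
	1.wait;
	"three".postln;
}.fork;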
but the key thing is that we have a routine that serves like an engine that can be paused and woken up again after a certain wait. Try to run the do-loop without a fork:
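5.do({ arg i; i.postln; 1.wait; }); // an error: .wait only works inside a routine
{ 5.do({ arg i; i.postln; 1.wait; }) }.fork; // wrapped in a fork it works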
A routine can be played with different clocks (TempoClock, SystemClock, and AppClock) and we will explore them later in this chapter. But here is how we can ask different clocks to play the routines:
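r = Routine({ 10.do({ arg i; i.postln; 1.wait; }) });
r.play(TempoClock.new(2)); // in beats: here 2 beats per second
r.stop;
q = Routine({ 10.do({ arg i; i.postln; 1.wait; }) });
q.play(SystemClock); // in seconds
q.stop;
q.play; // nothing happens: a stopped routine cannot simply be restarted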
In the last line above we experience that we can’t restart a routine after it has stopped. Here is where Tasks come in handy, as they are pauseable processes that behave like routines (check the Task helpfile).
Let’s make some music with a Task. We can put some note values into an array and then ask a Task to loop through that array, repeating the melody we make. First we create a SynthDef that we would like to use for this piece of music:
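// a hypothetical simple synth def for this piece:
SynthDef(\ping, { arg freq = 440, amp = 0.3;
	var env = EnvGen.ar(Env.perc(0.01, 0.5), doneAction: 2);
	Out.ar(0, SinOsc.ar(freq, 0, amp) * env ! 2);
}).add;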
And here we create a composition to play it:
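(
m = [60, 63, 65, 67, 70]; // a melody as MIDI notes
t = Task({
	inf.do({
		m.do({ arg note;
			Synth(\ping, [\freq, note.midicps]);
			0.25.wait;
		});
	});
});
t.play;
)
// t.pause; t.resume; t.stop;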
In fact we could create a loop that re-evaluates the m.scramble line:
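(
t = Task({
	inf.do({
		m.scramble.do({ arg note; // a freshly scrambled melody every round
			Synth(\ping, [\freq, note.midicps]);
			0.25.wait;
		});
	});
});
t.play;
)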
BTW. Nobody said this was going to be good music, but music it is.
Patterns
Patterns are a convenient way of creating musical structures in an efficient manner. They are high-level abstractions of keys and values that can be ‘bound’ together to control synths. Patterns use the TempoClock of the language to send control messages to the server. Patterns are closely related to Events: collections of keys and values that describe how a single musical event is played.
All this might seem very convoluted, but the key point is that we are operating with default values that can be used to control synths. A principal pattern to understand is the Pbind (a Pattern that binds keys to values, such as \frequency (a key) to 440 (a value)).
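Pbind().play; // the default pattern: the default synth, playing middle C every second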
The Pbind is using the default arguments to play the ‘default’ synth (one that is defined by SuperCollider), a frequency of 261.6256, amplitude of 0.1, and so on.
The keys that the patterns play match the arguments of the SynthDef. Let’s create a SynthDef that we can fully control with a pattern:
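// a hypothetical synth def whose argument names match the pattern keys:
SynthDef(\patsynth, { arg freq = 440, amp = 0.1, pan = 0, gate = 1;
	var env = EnvGen.ar(Env.adsr(0.01, 0.2, 0.5, 0.5), gate, doneAction: 2);
	Out.ar(0, Pan2.ar(Saw.ar(freq, amp) * env, pan));
}).add;

Pbind(\instrument, \patsynth, \freq, 330, \amp, 0.2).play;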
Patterns have some default timing mechanism, so we can control the duration until the next event, and we can also set the sustain of the note:
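Pbind(
	\instrument, \patsynth,
	\freq, 330,
	\dur, 0.5, // half a second until the next event
	\sustain, 0.25 // each note held for a quarter of a second
).play;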
All this is quite musically boring, but here is where patterns start to get exciting. There are diverse list patterns that allow us to operate with lists, for example by going sequentially through the list (Pseq), picking random values from the list (Prand), shuffling the list (Pshuf), and so on:
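Pbind(
	\instrument, \patsynth,
	\midinote, Pseq([60, 63, 65, 67, 70], inf), // stepping through the list
	\dur, Prand([0.125, 0.25, 0.5], inf) // picking random durations from the list
).play;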
There will be more on patterns later, but at this stage it is a good idea to play with the pattern documentation files, for example the ones found under Streams-Patterns-Events. There is also a fantastic Practical Guide to Patterns in the SuperCollider Documentation. Under ‘Streams-Patterns-Events>A-Practical-Guide’
To end this section on patterns, let’s simply play a little with Pdefs:
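Pdef(\player, Pbind(\instrument, \patsynth, \midinote, Pseq([60, 63, 67], inf), \dur, 0.25));
Pdef(\player).play;
// and redefine it while it is playing:
Pdef(\player, Pbind(\instrument, \patsynth, \midinote, Pseq([60, 65, 70], inf), \dur, 0.125));
Pdef(\player).stop;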
The TempoClock
TempoClock is one of three clocks available for timing organisation in SuperCollider. The others are SystemClock and AppClock. TempoClock is a scheduler like SystemClock, but it schedules in beats rather than in seconds. AppClock is less accurate, but it can call GUI primitives, and is therefore to be used when GUIs need updating from a clock-controlled process.
Let’s start by creating a clock, give it the tempo of 1 beat per second (that’s 60 bpm), and schedule a function to be played in 4 beats time. The arguments of beats and seconds since SuperCollider was started are passed into the function, and we post those.
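t = TempoClock.new(1); // 1 beat per second, i.e. 60 bpm
t.sched(4, { arg beat, sec;
	("beat: " ++ beat ++ " - seconds: " ++ sec).postln;
});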
You will note that the beat is a fractional number. This is because the beat returns the appropriate beat time of the clock’s thread. If you prefer to have the beats in whole numbers, you can use the schedAbs method:
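t.schedAbs(t.beats.ceil, { arg beat, sec;
	("beat: " ++ beat ++ " - seconds: " ++ sec).postln;
});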
If we would like to schedule the function repeatedly, we add a number representing the next beat at the end of the function.
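t.schedAbs(t.beats.ceil, { arg beat, sec;
	("beat: " ++ beat).postln;
	1 // the return value: schedule again one beat from now
});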
And with this knowledge we can start to make some music:
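// using the \ping synth from above:
t.schedAbs(t.beats.ceil, { arg beat;
	Synth(\ping, [\freq, [60, 63, 67, 70].choose.midicps]);
	0.5 // a random note from the chord every half beat
});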
We can try to make some rhythmic pattern with the tempoclock now. Let us just use a simple synth like the one we had above, but now we call it ‘clocksynth’.
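SynthDef(\clocksynth, { arg freq = 440, amp = 0.3;
	var env = EnvGen.ar(Env.perc(0.01, 0.3), doneAction: 2);
	Out.ar(0, SinOsc.ar(freq, 0, amp) * env ! 2);
}).add;

t = TempoClock.new(2); // 120 bpm
t.schedAbs(0, { arg beat;
	if( beat % 4 == 0, { Synth(\clocksynth, [\freq, 440]) }); // an accent every fourth beat
	Synth(\clocksynth, [\freq, 880, \amp, 0.1]);
	1
});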
Yet another trick to play sounds in SuperCollider is to use “fork” and schedule a pattern through looping. If you look at the source of .fork (by hitting Cmd+I) you will see that it is essentially a Routine (like above), but it is making our lives easier by wrapping it up in a method of Function.
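(
{
	4.do({
		"a new round begins".postln;
		[60, 62, 63, 67].do({ arg note;
			Synth(\clocksynth, [\freq, note.midicps]);
			0.25.wait;
		});
	});
}.fork;
"the interpreter has already reached this line".postln;
)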
Note that the interpreter reaches the end of the program before the last fork is finished playing.
This is enough about the TempoClock at this stage. We will explore it in more depth later.
GUI Control
Graphical user interfaces are a very common way for musicians to control their compositions. They serve like a control board for things that the language can do, and to control the server. In the next chapter we will explore interfaces in SuperCollider, but this example is provided in this chapter to give an indication of how the language works.
// we create a synth (here an oscillator with 16 harmonics)
(
SynthDef(\simpleSynth, {|freq = 440, amp = 0.5|
	var signal, harmonics;
	harmonics = 16;
	signal = Mix.fill(harmonics, {arg i;
		SinOsc.ar(freq*(i+1), 1.0.rand, amp * harmonics.reciprocal/(i+1))
	});
	Out.ar(0, signal ! 2);
}).add;
)
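To control it we can build a window with two sliders, one for frequency and one for amplitude. The following is a minimal sketch (the window layout and the frequency mapping are assumptions):

(
var win, freqslider, ampslider, synth;
win = Window("simpleSynth", Rect(100, 100, 300, 90)).front;
synth = Synth(\simpleSynth, [\freq, 440, \amp, 0.5]);
freqslider = Slider(win, Rect(10, 10, 280, 30))
	.action_({ arg sl; synth.set(\freq, sl.value.linexp(0, 1, 100, 2000)) });
ampslider = Slider(win, Rect(10, 50, 280, 30))
	.action_({ arg sl; synth.set(\amp, sl.value) });
win.onClose_({ synth.free });
)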
Chapter 4 - Interfaces and Communication (GUI/MIDI/OSC)
SuperCollider is a very open environment. It can be used for practically anything sound related, whether it is scientific study of sound, instrument building, DJing, generative composition, or creating interactive installations. For these purposes we often need real-time interaction with the system and this can be achieved in many ways, but typically through screen-based or hardware interaction. This section will introduce the most common ways of interacting with the SuperCollider language.
MIDI - Musical Instrument Digital Interface
MIDI: A popular 80s technology (SC2 Documentation)
MIDI is one of the most common protocols for hardware and software communication. It is a simple protocol that has proven valuable, although it is currently seen to have gone past its prime. The key point of using MIDI in SuperCollider is to be able to interact with hardware controllers, synthesizers, and other software. SuperCollider has a strong MIDI implementation and should support everything you might want to do with MIDI.
Using MIDI Controllers (Input)
Let’s start with exploring MIDI controllers. The MIDI methods that you will use will depend on what type of controller you’ve got. The following are the available messages of MIDIIn:
noteOn
noteOff
control
bend
touch
polyTouch
program
sysex
sysrt
smpte
If you were to use a relatively good MIDI keyboard, you would be able to use most of these methods. In the following example we will explore the interaction with a simple MIDI keyboard.
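MIDIClient.init; // initialise MIDI and post the available devices
MIDIIn.connectAll; // connect all available MIDI input devices
MIDIFunc.noteOn({ arg ...args; args.postln }); // post the incoming note-on data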
On the device I’m using now (Korg NanoKEY), I get an array formatted thus [127, 60, 0, 1001096579], where the first item is the velocity (how hard I hit the key), the second is the MIDI note, the third is the MIDI channel, and the fourth is the device number (so if you have different devices, you can differentiate between them using this ID).
For the example below, we will use the convenient MIDIdef class to register the definition we want to use for the incoming MIDI messages. Making such definitions is common in SuperCollider, as we make SynthDefs, OSCdefs and HIDdefs (Human Interface Device definitions). Let’s hook the incoming note and velocity values up to the freq and amp values of a synth that we create. Note that the MIDIdef contains two things, its name and the function it will trigger on every incoming MIDI note on! We simply create a Synth inside that function.
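// a hypothetical synth def for the keyboard:
SynthDef(\midisynth, { arg freq = 440, amp = 0.1;
	var env = EnvGen.ar(Env.perc(0.01, 0.5), doneAction: 2);
	Out.ar(0, SinOsc.ar(freq, 0, amp) * env ! 2);
}).add;

MIDIdef.noteOn(\mynoteon, { arg vel, note;
	Synth(\midisynth, [\freq, note.midicps, \amp, vel / 127]);
});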
But the above is not common synth-like behaviour. Typically you’d hold down the key and the note would not be released until you lift your finger off the keyboard key. We therefore need to use an ADSR envelope (see the Wikipedia article on envelopes).
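// a saw wave synth with an ADSR envelope (one possible version):
SynthDef(\midisaw, { arg freq = 440, amp = 0.1, gate = 1;
	var env, signal;
	env = EnvGen.ar(Env.adsr(0.01, 0.3, 0.6, 0.5), gate, doneAction: 2);
	// two slightly detuned saw waves, their waveform slowly swept by an XLine:
	signal = VarSaw.ar([freq, freq * 1.005], 0, XLine.kr(0.7, 0.9, 3), amp);
	Out.ar(0, signal * env);
}).add;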
What's going on here? The synth definition uses a common trick of slightly detuning the frequency in order to make the sound more "analogue" or imperfect. We use a VarSaw, whose saw waveform shape can be varied, and we modulate that shape with an XLine UGen. The synth def has an amp argument for the volume and a gate argument that keeps the synth playing until we tell it to stop.
But what happened? We play and we get a cacophony of sound. The notes are piling up on top of each other as they are not released. How would you solve this?
You could put the note into a variable:
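(A sketch, assuming the gated VarSaw SynthDef above was given the name \saw:)
MIDIdef.noteOn(\on, { |vel, note|
	a = Synth(\saw, [\freq, note.midicps, \amp, vel / 127]);
});
MIDIdef.noteOff(\off, { |vel, note| a.release; });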
And it will release the note when you release your finger. However, now the problem is that if you press another key whilst holding down the first one, the second key's synth will be the one stored in variable 'a', so you have lost the reference to the first one. You can't release it! There is no access to the first synth, as a new one has replaced it. Here is where SuperCollider excels as a programming language and makes things simple compared to data-flow programming environments like Pd or Max/MSP. We just create an array and put our synths into it. Every note gets a slot in the array, and we turn the synths on and off depending on the MIDI message:
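(Again a sketch with the assumed \saw SynthDef:)
~synths = Array.newClear(128); // one slot for each possible MIDI note
MIDIdef.noteOn(\on, { |vel, note|
	~synths[note] = Synth(\saw, [\freq, note.midicps, \amp, vel / 127]);
});
MIDIdef.noteOff(\off, { |vel, note|
	~synths[note].release;
	~synths[note] = nil;
});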
MIDI Communication (Output)
It is equally easy to control external hardware or software with SuperCollider’s MIDI functionality. Just as above we initialise the MIDI client and check which devices are available:
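(A minimal sketch:)
MIDIClient.init;      // posts the available sources and destinations
m = MIDIOut(0);       // connect to the first destination
m.noteOn(0, 60, 100); // channel, note, velocity
m.noteOff(0, 60);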
And you could control your device using Patterns:
or for example a Task:
You might have recognised the beginning of a Mozart melody there, but perhaps not, as the note lengths were not correct. How would you solve that? Try to fix the timing of the notes as an exercise. Tip: create a duration array (in var 'd', for example) and use it instead of the "0.25.wait;" above. Use wrapAt(i) to read the correct duration slot.
OSC - Open Sound Control
Open Sound Control has become the principal protocol replacing MIDI in the 21st century. It is a fast and flexible network protocol that can be used to communicate between applications (like the SC server and sc-lang), between computers (on a local network or the internet), or with hardware (that supports OSC). It is used by musicians and media artists all over the world, and it has become so popular that commercial software companies now support it in their products. In many ways it could have been called OMC (Open Media Control), as it is used in graphics, video, 3D software, games, and robotics as well.
OSC is a protocol of communication (how to send messages), but it does not define a standard of what to communicate (that’s the open bit). Unlike MIDI, it can send all kinds of information through the network (integers, floats, strings, arrays, etc.), and the user can define the message names (or address spaces as they are also called).
There are two things that the user needs to know: the computer’s IP address, and the listening Port.
* IP address: Typically something like “194.81.199.106” or locally “127.0.0.1” (localhost)
* Port: You can use any port, but ideally choose a port above 10000.
You have already used OSC in the SendTrig example of chapter 2, but there it was ‘under the hood’, so to speak, as the communication took place in the SuperCollider classes.
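Receiving OSC messages directly is done with OSCdef. A minimal sketch (note that the def's key and the address it listens to need not match):
OSCdef(\hola, { |msg, time, addr, recvPort| msg.postln; }, '/hello');
// send a message to ourselves (sclang listens on port 57120 by default)
n = NetAddr("127.0.0.1", 57120);
n.sendMsg('/hello', 440, 0.5);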
OSC messages make use of Unix-like address spaces. You know this pattern already from how web addresses use '/' to indicate a step down into a folder. For example, the OSCdef.html document lies in a folder called 'Classes': http://doc.sccode.org/Classes/OSCdef.html, together with lots of other documents. The address above is '/hello' (not '/hola').
The idea here is that we can send messages directly deep into the internals of our synthesizers or systems, for example like this:
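(The address names here are invented for illustration:)
n = NetAddr("127.0.0.1", 57120);
n.sendMsg("/synth1/filter/cutoff", 500);
n.sendMsg("/mixer/track/2/volume", 0.8);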
and so on. We are here giving direct messages that are human-readable as well as specific for the machine. This is very different from how people used to use MIDI, where you have no way of naming things: you have to resort to mapping your parameters onto only 16 channels, often constrained to messages with numbers from 0-127.
Try to open Pure Data and create a new patch with the following in it:
[dumpOSC 12000]
|
|
[print]
Then send the messages to Pd with this code in SC:
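(A sketch, assuming Pd runs on the same machine:)
p = NetAddr("127.0.0.1", 12000); // Pd's dumpOSC listens on port 12000
p.sendMsg("/hello", "there", 42);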
Try to do the same with another computer on the same network, but then send it to that computer’s IP address:
Use the same Pd patch on that computer, but then run the following lines in SuperCollider:
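(A sketch of such a listener:)
OSCdef(\listener, { |msg, time, addr| [msg, addr].postln; }, '/hello', nil); // nil: accept messages from any sender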
You notice that there is now 'nil' in the sender address slot. This allows any computer on the network to send to your computer. If you limited that to a specific net address (for example NetAddr("192.9.12.199", 3000)), it would only receive OSC messages from that specific address/computer.
Hopefully you have now been able to send OSC messages to other software on your computer, to Pd on another computer, and to SuperCollider on another computer. These examples were on the same network. You might have to change settings in your firewall for this to work across networks, and if you are on an institutional network (such as a university network) you might even have to ask the system administrators to open a specific port if the incoming message comes from outside the network (internally it works without admin changes).
We could end this section by creating a little program that is typical of how people use OSC over networks, on the same or different computers. Below we have the listener:
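(A sketch; the address name is invented for illustration:)
OSCdef(\receiver, { |msg|
	("received: " ++ msg).postln;
	// here we could set parameters of a running synth, e.g. x.set(\freq, msg[1])
}, '/mysynth/freq');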
And the other system (another software or another computer) will send something like this:
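(The IP address here is hypothetical:)
NetAddr("192.168.1.12", 57120).sendMsg('/mysynth/freq', 880);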
The messages could be wrapped into functionality that is plugged to some GUI, a hardware sensor (pressure sensor and motion tracker for example), or perhaps algorithmically generated together with some animated graphics.
GUI - Graphical User Interfaces
Note that we are creating N synths (defined in the variable "nrSynths") and putting them all into one List. That way we can access and control them individually from the GUI. Look at how the sliders and buttons of the GUI are directly controlling their respective synths by accessing synthList[i] (where "i" is the index of the synth in the list).
TIP: change the nrSynths variable to some other number (10, 16, etc) and see what happens.
ControlSpec - Scaling/mapping values
In the examples above we have used a very crude mapping of a slider onto a frequency argument in a synth. A slider in a SuperCollider GUI gives a value from 0 to 1.0, at a resolution defined by yourself and by the size of the slider (the longer the slider, the higher the resolution). So above we are using part of the slider for frequency values from 0 to 20 Hz that we are most likely not interested in. And we might want an exponential mapping, or a negative (reversed) one.
The ControlSpec is the equivalent of [scale] in Pd or Max. Check its helpfile.
The ControlSpec takes the following arguments: minval, maxval, warp, step, default, units
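For example:
c = ControlSpec(20, 20000, \exp, 0, 440); // an exponential frequency spec
c.map(0.5);   // map a 0-1 slider value into the spec's range -> ca. 632 Hz
c.unmap(440); // and back from the spec's range to 0-1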
Now we finish this by taking the example above and map the slider to pitch. Try to explore different warp modes for the pitch. And create an amplitude slider.
Other Views (but not all)
HID - Human Interface Devices
SuperCollider has good support for joysticks, game pads, drawing tablets and other interfaces that use the HID protocol (a device class of the USB protocol, communicating through the USB port of the computer).
Hardware - Serial port (for example using Arduino)
Before USB, Bluetooth and other such "modern" protocols, there was the serial port. It sends data - in series - between the computer and external devices, such as sensors or actuators (e.g., a motor or a printer).
Arduino is a popular microcontroller board, much used in embedded computing, that interfaces with sensors and actuators and communicates with the computer over the serial port. The SuperCollider documentation explains this well in the SerialPort help file.
Chapter 5 - Additive Synthesis
In 1822, the mathematician Joseph Fourier published a work on heat containing a theory which implied that any sound can be described as a sum of pure sine waves. This is a very important statement for computer music: it means that we can recreate any sound we hear by adding a number of sine waves together with different frequencies, phases and amplitudes. Obviously this was a costly technique in the days of modular synthesis, as one would need multiple oscillators to get the desired sound. This has changed with digital sound, where innumerable oscillators can be added together at little cost. Here is a proof:
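(A sketch of such a proof: three sine waves summing into a new, more complex wave form.)
{ SinOsc.ar(300, 0, 0.5) + SinOsc.ar(600, 0, 0.25) + SinOsc.ar(900, 0, 0.125) }.plot(0.01);
{ SinOsc.ar(300, 0, 0.5) + SinOsc.ar(600, 0, 0.25) + SinOsc.ar(900, 0, 0.125) }.play;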
Adding waves
Adding waves together seems simple, and indeed it is. By using the plus operator we can add two signals together: their values at each point in time sum to a combined value. In the following images we can see how simple sinusoidal waves add up:
You can see how two sine waves that go from -1 to 1, when added up will have the amplitude of -2 to 2.
You see that two waves at the same frequency added together become twice the amplitude. When two waves with an amplitude of 1 are added together we get an amplitude of 2, and in the graph the wave is clipped at 1. This causes distortion, but also results in a different wave form: heavily clipped, a sine approaches a square wave. You can explore this by giving a sine wave an amplitude of 10 and then clipping the signal at, say, -0.75 and 0.75.
The phase of the wave is important as it can either cancel the sound out or double its amplitude. Recording engineers are familiar with the problems of phasing in microphone placements where certain frequencies of a sound can be phased out if two mics are badly placed.
Most instrumental sounds can be roughly described as a combination of sine waves. Those sinusoidal waves are called partials (the horizontal lines you see in a spectrogram when you analyse a sound). In the example below we mix ten sine waves with frequencies between 200 and 2000. You might well be able to detect a pitch in the example if you run it many times, but since these are random frequencies they do not necessarily line up to give us a solid pitch.
In harmonic sounds, like the piano, guitar, or violin, we get partials that are whole-number multiples of the fundamental (the lowest) partial. If they are fundamental multiples, the partials are called harmonics. The harmonics can vary in amplitude, phase, envelope form, and duration. A saw wave is a waveform with all the harmonics represented, but decreasing in amplitude:
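(You can plot a couple of cycles of the band-limited Saw UGen to see this shape:)
{ Saw.ar(220, 0.3) }.plot(0.01);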
Try to play with adding waves together in various ways. Explore what happens when you add harmonics together (integer multiples of a fundamental frequency).
The harmonic series is something we all know intuitively and have heard many times (swing a flexible tube around your head and you will get a sound in the harmonic series). The Blip UGen in SuperCollider allows you to dynamically control the number of harmonics of equal amplitude:
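For example:
{ Blip.ar(110, MouseX.kr(1, 50), 0.3) ! 2 }.play; // the mouse controls the number of harmonics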
Creating wave forms out of sinusoids
In SuperCollider you can create all kinds of wave forms out of a combination of sine waves. By adding SinOscs together, you can arrive at your own unique wave forms to use in your synths. In this section we will look at how additive synthesis can produce diverse wave forms.
Above we created a Saw wave which contains harmonics up to the [Nyquist rate](http://en.wikipedia.org/wiki/Nyquist_rate), which is half of the sample rate SuperCollider is running at. The Saw UGen is "band-limited", which means that it does not alias and mirror back into the audible range. (Compare with LFSaw, which will alias - you can both hear and see the harmonics mirror back into the audio range.)
We can now try to create a saw wave out of sine waves. There is a simple algorithm for this, where each partial is an integer multiple of the fundamental frequency, decreasing in amplitude by the reciprocal of the partial's/harmonic's number (1/harmnum).
A ‘Saw’ wave with 30 harmonics:
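(A sketch of the algorithm just described:)
{ Mix.fill(30, { |i| SinOsc.ar(220 * (i + 1), 0, (i + 1).reciprocal) }) * 0.2 ! 2 }.play;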
By inverting the phase (using pi), we get an inverted wave form.
A square wave is a type of a pulse wave (If the length of the on time of the pulse is equal to the length of the off time - also known as a duty cycle of 1:1 - then the pulse wave may also be called a square wave). The square wave can be created by sine waves if we ignore all the even harmonics and only add the odd ones.
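(A sketch: odd harmonics only (1, 3, 5, ...), each at 1/n amplitude.)
{ Mix.fill(20, { |i| var n = (2 * i) + 1; SinOsc.ar(220 * n, 0, 1 / n) }) * 0.2 ! 2 }.play;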
Let’s quickly look at the regular Pulse wave in SuperCollider:
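{ Pulse.ar(220, MouseX.kr(0.01, 0.99), 0.3) ! 2 }.play; // the mouse controls the duty cycle (width)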
A triangle wave is a wave form, similar to the pulse wave in that it ignores the even harmonics, but it has a different algorithm for the phase and the amplitude:
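(A sketch: odd harmonics at 1/n-squared amplitude, every other one phase-inverted.)
{ Mix.fill(20, { |i| var n = (2 * i) + 1; SinOsc.ar(220 * n, if(i.odd) { pi } { 0 }, 1 / n.squared) }) * 0.3 ! 2 }.play;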
We have now created various wave forms using sine waves, and here is how to wrap them up in a SynthDef for future use:
We have created various typical wave forms above in order to show how they are sums of sinusoidal waves. A good idea is to play with this further and create your own waveforms:
Bell Synthesis
Not all sounds are harmonic. Many musical instruments are inharmonic, for example timpani drums, xylophones, and bells. Here the partials of the sound are not in a harmonic relationship (or multiples of some fundamental frequency). This does not mean that we can’t detect pitch, as there will be certain partials that have stronger amplitude and longer duration than others. Since we know bells are inharmonic, the first thing we might try is to generate a sound with, say, 15 partials:
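(A sketch with 15 randomly chosen partials:)
{ Mix.fill(15, { SinOsc.ar(200 + 4000.0.rand, 0, 0.05) }) ! 2 }.play;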
Try to run this a few times. What we hear is a wave form that might be quite similar to a bell at first, but then the resemblance disappears, because the partials do not fade out. If we add an envelope to each of these sinusoids, we get a different sound:
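(The same sketch, now with a percussive envelope of random duration on each partial; stop it with Cmd+period.)
(
{
	Mix.fill(15, {
		SinOsc.ar(200 + 4000.0.rand, 0, 0.05)
		* EnvGen.kr(Env.perc(0.0001, 2 + 4.0.rand)) // each partial fades out at its own rate
	}) ! 2
}.play;
)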
Above we are using Mix.fill instead of creating an array with ! and then .summing it. These two examples do the same thing, but it is good for the student of SuperCollider to learn different ways of reading and writing code.
You note that there is a “new” bell every time we run the above code. But what if we wanted the “same” bell? One way to do that is to “hard-code” the frequencies, durations, and the amplitudes of the bell.
Generating a SynthDef using a non-deterministic algorithm (such as random) in the SC-lang will also result in a SynthDef that plays the "same" bell every time. Why? Because the random values (such as 430.rand) are fixed when the synth definition is compiled. Recompile the SynthDef and you get a new sound.
Another way of generating this bell sound would be to use the SynthDef from last tutorial, but here adding a duration to the envelope:
The power of using this style would be if you really wanted to be able to define all the parameters of the sound from the language, for example sonifying some complex information from gestural or other data.
The Klang UGen
Another interesting way of achieving this is to use the Klang UGen. Klang is a bank of sine oscillators that takes arrays of frequencies, amplitudes and phases as arguments:
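(Note the backtick: Klang expects a Ref to the array of specifications. The frequencies here are illustrative.)
{ Klang.ar(`[[430, 810, 1050, 1220], [0.3, 0.2, 0.1, 0.05], [0, 0, 0, 0]]) * 0.4 ! 2 }.play;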
And we create a SynthDef with the Klang UGen:
Xylophone Synthesis
Additive synthesis is good for various types of sound, but it suits xylophones, bells and other metallic instruments (typically inharmonic sounds) particularly well, as we saw with the bell example above. Harmonic wave forms, such as a saw, square or triangle wave, would not be useful here, as we know from the section above.
In additive synthesis, people often analyse the sound they're trying to synthesise by generating a spectrogram of its frequencies.
The information the spectrogram gives us is three dimensional. It shows us the frequencies present in the sound on the vertical y-axis, time on the horizontal x-axis, and amplitude as colour (which we could imagine as the z-axis). We see that the partials don't have the same type of envelopes: some have a strong attack, others come in smoothly; some have much amplitude, others less; some have a long duration whilst others are shorter; and some of them vibrate in frequency. These parameters can mix: a loud partial could die out quickly, while a soft one can live for a long time.
Analysing the bell above, we can detect the following partials:
* partial 1: xxx Hz, x sec. long, with amplitude of ca. x
* partial 2: xxx Hz, x sec. long, with amplitude of ca. x
* partial 3: xxx Hz, x sec. long, with amplitude of ca. x
* partial 4: xxx Hz, x sec. long, with amplitude of ca. x
* partial 5: xxx Hz, x sec. long, with amplitude of ca. x
* partial 6: xxx Hz, x sec. long, with amplitude of ca. x
* partial 7: xxx Hz, x sec. long, with amplitude of ca. x
We can now try to synthesize those partials:
And we get a decent inharmonic sound (inharmonic meaning that the partials are not whole-number multiples of a fundamental frequency). We would now need to set the right amplitudes as well; we're still guessing from the spectrogram we made, but more importantly we should be using our ears.
Some of the partials have a bit of vibration and we could simply turn the oscillator into a ‘detuned’ oscillator by adding two sines together:
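(A sketch of such a 'detuned' partial: two sines a couple of Hz apart.)
{ SinOsc.ar(333, 0, 0.2) + SinOsc.ar(335.5, 0, 0.2) ! 2 }.play;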
And finally, we need to create envelopes for each of the partials:
And let's listen to that. You will note that parentheses have been put around each sine wave and its envelope multiplication. This is because SuperCollider calculates from left to right and does not give * and / precedence over + and -, as in common maths and many other programming languages.
TIP: Operator Precedence - explore how these equations result in different outcomes
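For example:
2 + 3 * 4;   // returns 20: evaluated strictly from left to right
2 + (3 * 4); // returns 14: the parentheses force the multiplication first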
We have now created a reasonable representation of the bell sound that we listened to. The next thing to do is to turn that into a synth definition and make it stereo. Note that we add a general envelope with a doneAction:2, which will remove the synth from the server when it has stopped playing.
This bell has a specific frequency, but it would be nice to be able to pass a new frequency as a parameter. This could be done in many ways, one would be to pass the frequencies of each of the oscillators as arguments to the Synth. This would make the instrument quite flexible, but on the other hand it would weaken its unique character (now that so many more types of bell sounds - with their respective harmonic relationships - can be made with it). So here we decide to keep the same ratios between the partials for this unique bell sound, but a sound that can change in pitch. We find the ratios by dividing the frequencies by the lowest frequency.
We can now use those ratios in our synth definition:
Harmonics GUI
Below you find a Graphical User Interface that allows you to control the harmonics of a fundamental frequency (the slider on the right is the fundamental freq). Here we are also introduced to the Osc UGen, which is a wavetable oscillator that reads its samples from a waveform stored in a buffer.
The “Play” and “Play rand” buttons on the interface allow you to hit Enter repeatedly whilst changing the harmonic energy of the sound. Can you synthesise a clarinet or an oboe this way? Can you find the sound of a trumpet? You can get close, but of course each of the harmonics would ideally have their own envelope and amplitude (as we saw in the xylophone synthesis above).
Some Additive SynthDefs with routines playing them
The examples above might have raised the question whether all the parameters of the synth could be set from the outside as arguments passed to the synth in the form of arrays. This is possible, of course, but it requires that the arrays are created as inputs when the SynthDef is compiled. In the example below, the partials and the amplitudes of 15 oscillators are set on compilation as the default arguments in respective arrays.
Note the # in front of the arrays in the arguments. It means that they are literal (fixed size) arrays.
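(A sketch with five partials for brevity - the text's example uses 15; the SynthDef name is invented:)
(
SynthDef(\addsynth, { |out = 0, amp = 0.5, partials = #[1, 2, 3, 4, 5], amps = #[0.5, 0.25, 0.125, 0.06, 0.03]|
	var signal = Mix(SinOsc.ar(220 * partials, 0, amps)); // one sine per slot in the array
	Out.ar(out, signal ! 2 * amp);
}).add;
)
x = Synth(\addsynth); // plays with the default arrays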
This is because the synth is using the default arguments of the SynthDef. Let's try to pass a partials array:
What happened here? Let’s scrutinize the partials argument.
We can now create a piece that sets new partial frequencies and their amplitude on every note. As mentioned above this could be carefully decided, or simply done randomly. If it is completely random, it might be worth looking into the Rand UGens though, as they allow for a random value to be generated within every synth.
Using Control to set multiple parameters
There is another way to store and control arrays within a SynthDef. This is using the Control class. The controls are good for passing arrays into running Synths. In order to do this we use the Control UGen inside our SynthDef.
Here we make an array of 20 frequency values inside a Control and pass this array to the SinOsc UGen, which then makes a "multichannel expansion", i.e., it creates a sine wave on 20 successive audio busses. (If you had a sound card with 20 channels, you'd get a sine out of each channel.) But here we mix the sines into one signal. Finally, in the Out UGen we use "! 2", which is a multichannel expansion trick that makes this a 2 channel signal (we could have used signal.dup).
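(A minimal sketch of this; the SynthDef name is invented:)
(
SynthDef(\manysines, {
	var freqs, signal;
	freqs = Control.names([\freqs]).kr(Array.rand(20, 200, 2000)); // an array of 20 frequencies
	signal = Mix(SinOsc.ar(freqs, 0, 0.02)); // 20 channels mixed down to one
	Out.ar(0, signal ! 2);
}).add;
)
x = Synth(\manysines);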
And here below we can change the frequencies of the Control
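x.setn(\freqs, Array.rand(20, 200, 2000)); // setn sets all 20 values at once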
Here below we use DynKlang (dynamic Klang) in order to change the synth in runtime:
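(A sketch: the frequencies inside DynKlang can be modulated by other UGens.)
(
{
	DynKlang.ar(`[
		[800, 1000, 1200] * SinOsc.kr(0.1).range(0.9, 1.1), // slowly gliding frequencies
		[0.3, 0.2, 0.1],
		[0, 0, 0]
	]) * 0.3 ! 2
}.play;
)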
Klang and DynKlang
It can be laborious to build an array of synths and set the frequencies and amplitudes of each. For this we have a UGen called Klang. Klang is a bank of sine oscillators. It is more efficient than the DynKlang, but less flexible. (Don’t confuse with Klank and DynKlank which we will explore in the next chapter).
Klang cannot receive updates to its frequencies, nor can it be modulated. For that we use DynKlang (Dynamic Klang).
The following patch shows how a GUI can be used to control the amplitudes of the DynKlang oscillator array.
Chapter 6 - Subtractive Synthesis
The previous chapter introduced additive synthesis, where the idea is to start with silence, add partials together, and arrive at the sound we are after. Subtractive synthesis works from the opposite direction: we start with a rich sound - a broadband signal rich in partials/harmonics or noise - and filter the unwanted frequencies out. White noise and saw waves are typical sound sources, as noise has equal energy at all frequencies, whilst the saw wave has a natural-sounding harmonic structure with energy at every harmonic.
Noise Sources
The definition of noise is a signal that is aperiodic, i.e., with no periodic repetition of any form in the signal. If there were such repetitions, we would talk about a wave form and a frequency of those repetitions, and that frequency becomes pitch or musical notes. Not so in the world of noise: there are no repetitions that we can detect, and thus we perceive it as the opposite of a signal, the antithesis of meaning. Some of us might remember the white noise of a dead analogue TV channel. Although noise might have negative connotations for some, it is a very useful musical element, in particular as a rich input signal for synthesis.
We can now start to sculpt sound with the use of filters and envelopes. For example, what would this remind us of:
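(A guess at the kind of patch meant here - swelling bursts of noise, perhaps like waves on a shore:)
{ WhiteNoise.ar(LFNoise2.kr(0.4).max(0)) ! 2 }.play;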
We can add a low pass filter (LPF) to the noise, so we cut off the high frequencies:
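{ LPF.ar(WhiteNoise.ar(0.4), 800) ! 2 }.play; // frequencies above 800 Hz are attenuated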
And here we use mouse movements to control the cutoff frequency (the x-axis) and the envelope duration (y-axis):
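(A sketch of such a patch; the envelope is retriggered twice per second.)
(
{
	LPF.ar(WhiteNoise.ar(0.4), MouseX.kr(200, 10000, 1))
	* EnvGen.kr(Env.perc(0.01, 1), Impulse.kr(0.5), timeScale: MouseY.kr(0.1, 2)) ! 2
}.play;
)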
But what did that low pass filter do? It passes the low frequencies through, hence the name. A high pass filter passes the high frequencies through, and a band pass filter (BPF) passes through a frequency band that you specify. We can view the behaviour of the low pass filter with a frequency scope. Note also the quality parameter in the resonant low pass filter:
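(A sketch using the server's frequency scope:)
s.freqscope; // open a spectrum analyser window
{ RLPF.ar(WhiteNoise.ar(0.4), MouseX.kr(100, 10000, 1), MouseY.kr(1.0, 0.01)) ! 2 }.play; // x: cutoff, y: rq (reciprocal of Q)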
Note how the Y location of the mouse affects the quality of the resonance in the resonant low pass filter (RLPF).
Filter Types
Filters are algorithms that are typically applied in the time domain of an audio signal. This might, for example, involve adding a delayed copy of the signal to the original signal.
Here is a very primitive such filter:
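(A sketch: averaging each sample with the previous one - a simple two-point low pass filter.)
{ var sig = WhiteNoise.ar(0.4); (sig + Delay1.ar(sig)) * 0.5 ! 2 }.play;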
Let us try some of the filter UGens of SuperCollider:
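{ HPF.ar(WhiteNoise.ar(0.3), 5000) ! 2 }.play;      // high pass
{ BPF.ar(WhiteNoise.ar(0.3), 1000, 0.3) ! 2 }.play; // band pass
{ BRF.ar(Saw.ar(220, 0.3), 880, 0.3) ! 2 }.play;    // band reject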
Resonating filters
A resonant filter does what it says on the tin: it resonates certain frequencies. The bandwidth of this resonance can vary, so with a WhiteNoise input one can go from a very wide resonance (where the "quality" - the Q - of the filter is low) to a very narrow band resonance, where the noise almost sounds like a sine wave. Let's explore this with WhiteNoise and a band pass filter:
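(A sketch:)
{ BPF.ar(WhiteNoise.ar(0.5), MouseX.kr(100, 10000, 1), MouseY.kr(1.0, 0.001, 1)) ! 2 }.play; // y-axis: rq, small values give a narrow, resonant band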
Move your mouse around and explore how the Q factor, when increased, results in a narrower resonating bandwidth.
In low pass and high pass resonant filters, the energy at the cutoff frequency can be increased or decreased by setting the Q factor (or, in SuperCollider, the reciprocal (inverse) of Q).
There are bespoke resonant filters in SuperCollider, such as Resonz, Ringz and Formlet.
Klank and DynKlank
Just as Klang is a bank of fixed frequency oscillators (additive synthesis), Klank is a bank of fixed frequency resonators, which filters an input signal so that only the resonant frequencies ring through.
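(A sketch: a resonator bank fed with short impulses - note the Ref backtick again, and the third array, the ring times.)
{ Klank.ar(`[[220, 660, 1100, 1700], [0.3, 0.2, 0.1, 0.05], [2, 2, 2, 2]], Impulse.ar(1, 0, 0.1)) ! 2 }.play;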
Let’s explore the DynKlank UGen. It does the same as Klank, but it allows us to change the values after the synth has been instantiated.
We know resonant filters when we hear them. The typical cry-baby wah wah guitar pedal is a band pass filter, for example. In the examples below we use a SinOsc to "move" the band pass frequency up and down the frequency spectrum. The SinOsc here is effectively working as an LFO (Low Frequency Oscillator - usually with a frequency below 20 Hz).
Bell Synthesis using Subtractive Synthesis
The desired sound that you are trying to synthesize can be achieved through different methods. As an example, we could explore how to synthesize a bell sound with subtractive synthesis.
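(A sketch in the spirit of the patch described below; the partial frequencies are illustrative.)
(
{
	var burstEnv, burst;
	burstEnv = EnvGen.kr(Env.perc(0.001, 0.1), gate: Impulse.kr(1)); // retriggered every second
	burst = PinkNoise.ar(burstEnv * 0.5);
	Klank.ar(`[[400, 1030, 1720, 2900], [1, 0.5, 0.3, 0.2], [2, 2, 2, 2]], burst) ! 2;
}.play;
)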
This bell will be triggered every second. This is because the Impulse UGen is triggering the opening of the gate in the EnvGen (envelope generator) that uses the percussion envelope defined in the ‘burstEnv’ variable. If we wanted this to happen only once, we could set the frequency of the Impulse to zero. If we add a general envelope that frees the synth after being triggered, we could run a task that triggers bells every second.
Simulating the Moog
The much loved MiniMoog is a typical subtractive synthesizer. A few oscillator types can be mixed together and then passed through its characteristic resonant low pass filter. We could try to simulate a MiniMoog setting using the MoogFF UGen, which simulates the Moog VCF (Voltage Controlled Filter) low pass filter, choosing, say, a saw wave form (the MiniMoog also has triangle, square, and two pulse waves).
We would typically start by sketching our synth by hooking up the UGens in a .play or .freqscope:
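(A sketch:)
{ MoogFF.ar(Saw.ar(220, 0.4), MouseX.kr(100, 10000, 1), MouseY.kr(0, 3.5)) ! 2 }.play; // x: cutoff, y: filter gain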
A common trick when simulating analogue equipment is to try to recreate the detuned oscillators of the analog synth (they are typically out of tune due to the difference of temperature within the synth itself). We can do this by adding another oscillator with a few Hz difference in frequency:
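{ MoogFF.ar(Saw.ar(220, 0.3) + Saw.ar(222.5, 0.3), MouseX.kr(100, 10000, 1), 2) ! 2 }.play; // two slightly detuned saws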
We can then start to add arguments and prepare the synth graph for turning it into a SynthDef:
The two synth graphs above are pretty much the same, except we have removed the mouse input in the latter one. You can see that the frequency, amp, pan, and filter cutoff values are derived from the default arguments in the top line. There are only a couple of things left for us to do in order to have a good working general synth: add an envelope, and wrap the graph up in a SynthDef with a name:
We can now hook up a keyboard and play the \moog synth that we’ve designed. The MiniMoog is monophonic (only one note at a time), and it could be written like this:
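(A sketch using the \moog SynthDef:)
MIDIdef.noteOn(\moogOn, { |vel, note|
	if(a.isNil, { a = Synth(\moog, [\freq, note.midicps, \amp, vel / 127]) });
});
MIDIdef.noteOff(\moogOff, { |vel, note|
	a.release;
	a = nil;
});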
The "a == nil" (or "a.isNil") check is there to make sure that we don't press another note and overwrite the variable 'a' with another synth. If that happened, the noteOff method would free the last synth put into variable 'a' and not the prior ones. Try removing the condition and see what happens.
Finally, we might want to improve the MiniMoog and add a polyphonic feature. As we saw in an earlier chapter, we simply create an array for all the possible MIDI notes and turn them on and off:
We will leave it up to you to decide how you want to control the cutoff and gain parameters of the MoogFF filter UGen. This could be done through knobs or sliders on a MIDI interface, on a GUI, or you could even decide to explore mapping key press velocity to the cutoff frequency, such that the note sounds brighter (or dimmer?) the harder you press the key.
Chapter 7 - Modulation
Modulating one signal with another is one of the oldest and most common techniques in sound synthesis. Here, any parameter of an oscillator can be modulated by the output of another oscillator. Filters, PlayBufs (sound file players) and other things can also be modulated. In this chapter we will explore modulation, and in particular amplitude modulation (AM), ring modulation (RM) and frequency modulation (FM).
LFOs (Low Frequency Oscillators)
As mentioned, most parameters or controls of an oscillator can be controlled by the output of another. Low frequency oscillators (LFOs) are oscillators that typically operate under 20 Hz, although in SuperCollider there is little point in strictly defining oscillators as LFOs, as we might always want to increase that frequency to 40 or 400 Hz!
Here below are examples of a triangle wave that has different controls modulated by another UGen.
In the first example we have the frequency of one oscillator modulated by the output (amplitude) of another:
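(A sketch: the frequency of a triangle wave wobbles 10 Hz around 440, driven by a 1 Hz sine.)
{ LFTri.ar(440 + SinOsc.ar(1, 0, 10), 0, 0.3) ! 2 }.play;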
We hear the modulation as 2 Hz, not one, because the output of the modulating oscillator goes up to 1 and down to -1 within each second. For one clear cycle of modulation per second, you would have to give it 0.5 as a frequency. Furthermore, a frequency argument with a negative sign is automatically turned into a positive one, as negative frequency does not make sense.
Let’s try the same for amplitude:
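{ LFTri.ar(440, 0, SinOsc.kr(1, 0, 0.25, 0.25)) ! 2 }.play; // the amplitude sweeps between 0 and 0.5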
We thus get the familiar effects of vibrato (modulation of frequency) and tremolo (modulation of amplitude), as they are commonly defined.
In modulation synthesis we talk about a “modulator” (the oscillator that does the modulation) and the “carrier” which is the main signal being modulated.
There are special Low Frequency Oscillators (LFOs) in SuperCollider. They are typically not band-limited, which means that they alias (or mirror back) into the audible frequency range. Consider the difference between Saw (band-limited) and LFSaw (non-band-limited) here:
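(Run one at a time:)
{ Saw.ar(MouseX.kr(100, 12000, 1), 0.3) ! 2 }.play;      // band-limited
{ LFSaw.ar(MouseX.kr(100, 12000, 1), 0, 0.3) ! 2 }.play; // aliases at high frequencies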
When you move your mouse, you can see how the band-limited Saw only gives you the harmonics above the fundamental frequency set by the mouse. On the other hand, with LFSaw you get the harmonics mirroring back into the audible range at the Nyquist frequency (half the sampling rate, very often 22,050 Hz).
But the LF UGens are good for modulation, and we can typically run them at control rate (using .kr rather than .ar), which involves roughly 64 times less calculation per second - that is, if the block size is set to 64 samples.
Finally, we should note at the end of this section on LFOs that the LFO frequency can of course go as high as you like, but then it ceases to be an LFO and starts to do a different type of synthesis, which we will look at below. In the examples here, you will start to hear strange artefacts appearing when the oscillation goes up over 20 Hz (observe the post window).
Theremin
We have now obviously found the technique to create a Theremin using vibrato and tremolo:
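(A sketch: pitch on the x-axis with a 6 Hz vibrato, volume on the y-axis with a slight tremolo.)
(
{
	SinOsc.ar(
		MouseX.kr(150, 1500, 1) * SinOsc.kr(6, 0, 0.02, 1), // vibrato
		0,
		MouseY.kr(0.01, 0.5) * SinOsc.kr(6, 0, 0.1, 1)      // tremolo
	) ! 2
}.play;
)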
Amplitude Modulation (AM synthesis)
In one of the examples above, the XLine UGen took the LFO frequency up over 20 Hz and we started to get some exciting artefacts in the sound. What was happening was that "sidebands" were appearing, i.e., partials on either side of the carrier sine. Amplitude modulation modulates the carrier with unipolar values (that is, values between 0 and 1), not bipolar ones (-1 to 1).
In amplitude modulation, the sidebands are the sum and the difference of the carrier and the modulator frequency. For example, a 300 Hz carrier and 160 Hz modulator would generate 140 Hz and 460 Hz sidebands. However, the carrier frequency is always present.
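(A sketch: a 300 Hz carrier and a unipolar 160 Hz modulator gives sidebands at 140 and 460 Hz, with the carrier still present.)
{ SinOsc.ar(300, 0, 0.3) * SinOsc.ar(160).range(0, 1) ! 2 }.play;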
If there are harmonics in the wave being modulated, each of the harmonics will get sidebands as well - check this with a saw wave.
In digital synthesis we can apply all kinds of mathematical operators to the sound, for example using .abs to calculate absolute values of the modulator. (This results in many sidebands - try also using .cubed and other unary operators on the signal.)
Ring Modulation
As mentioned above, ring modulation uses bipolar modulation values (-1 to 1) whereas AM uses unipolar modulation values (0 to 1). As a result, ordinary amplitude modulation outputs the original carrier frequency as well as the two sidebands for each of the spectral components of the carrier and modulation signals. Ring modulation, however, cancels out the carrier frequencies and outputs only the sidebands.
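(Compare with the AM example above - with a bipolar modulator, only the sidebands at 140 and 460 Hz remain; the 300 Hz carrier is gone.)
{ SinOsc.ar(300, 0, 0.3) * SinOsc.ar(160) ! 2 }.play;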
Ring modulation was used a great deal in the early electronic music studios, for example in Cologne and at the BBC Radiophonic Workshop. The Barrons used the technique in the music for Forbidden Planet, and so did Stockhausen in his Mikrophonie II, where voices are modulated with the sound of a Hammond organ. Let's try to ring modulate a voice:
Here a sine wave is modulating the voice of a girl saying “Columbia this is Houston, over…”. We could use one sound file to ring modulate the output of another:
Frequency Modulation (FM Synthesis)
FM synthesis is a popular synthesis technique that works well for a number of sounds. It became popular with the Yamaha DX7 synthesizer in the 1980s, but it was invented in the 1970s when John Chowning, musician and researcher at Stanford University, discovered the power of FM synthesis. He was working in the lab one day when he accidentally plugged the output of one oscillator into the frequency input of another, and heard a sound rich with partials (or sidebands, as we call them in modulation synthesis). It's important to realise that at the time an oscillator was expensive equipment, and the possibility of getting so many partials out of only two oscillators was very exciting in musical, engineering, and economic terms.
Chowning’s famous FM synthesis piece is called Stria and can be found on the interwebs. The piece was an eye opener for many musicians, as its sounds were so unusual in timbre, rendering the texture of the piece surprising and novel. Imagine being there at the time and hearing these “unnatural” sounds for the first time!
1980s synth pop music is of course full of the sounds of FM synthesis, as musicians began using the DX7 synthesizer - very often using the pre-installed sounds of the synth rather than making their own. One reason could be that FM synthesis is quite hard to learn, as there are so many parameters at play in any sound. Another explanation is that the user interface of the DX7 prevented people from designing sounds in an effective and ergonomic way, hence the lack of new and exploratory sound design on that synth.
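(A sketch reconstructing the kind of example described below: a 2000 Hz carrier, with the modulator frequency on the x-axis and its amplitude on the y-axis.)
s.freqscope; // watch the sidebands appear
(
{
	SinOsc.ar(
		2000 + SinOsc.ar(MouseX.kr(10, 1000, 1), 0, MouseY.kr(0, 1000)),
		0,
		0.2
	) ! 2
}.play;
)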
Monitoring the frequency scope in the example above, you will see that when you move your mouse around, sidebands are appearing, spreading with even distance to each other, and the more amplitude the modulator has, the more sidebands you get. Let’s explore the above example with comments, in order to get the terminology right:
What is happening is that we have a carrier oscillator (the first SinOsc) with a frequency of 2000 Hz, and we add to this frequency the output of another oscillator. Note that the amplitude of the modulator is very high: it goes up to 1000, which would become uncomfortable for your ears were you to play it on its own. When you move the mouse across the x-axis, you notice sidebands appearing around the carrier frequency partial (of 2000 Hz), spaced at the modulator frequency. That is, if the modulator frequency is 250 Hz, you get sidebands at 1750 and 2250; 1500 and 2500; 1250 and 2750, etc. The stronger the modulation depth, or index, of the modulator (its amplitude, basically), the louder the sidebands become.
We could of course create all those sidebands with oscillators in an additive synthesis style, but note the efficiency of FM compared to additive synthesis.
Below are two patches that serve well to explore the power of simple FM synthesis. In the first one, an LFNoise0 UGen is used to generate a new number between 20 and 60, four times per second. This number will be a floating point number (a fractional number), so it is rounded to an integer and then turned into a frequency value using .midicps (which converts a MIDI note value into cycles per second).
The PMOsc - Phase modulation
Frequency modulation and phase modulation are pretty much the same. In SuperCollider we have a PMOsc (Phase Modulation Oscillator), and we can try to make the above example using that:
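(A sketch along the lines of the earlier example:)
{ PMOsc.ar(2000, MouseX.kr(10, 1000, 1), MouseY.kr(0, 10), 0, 0.2) ! 2 }.play; // carfreq, modfreq, index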
You will note a feature of phase modulation: when the modulating frequency is low (< 20 Hz), you don't get the vibrato-like effect of the frequency modulation synth.
The magic of the PMOsc can be studied if we look under the hood. PMOsc is a pseudo-UGen, i.e., it is not written in C and compiled as a plugin for the SC-server, but rather defined when the class library of SuperCollider is compiled (on startup or if you hit Shift+Cmd+l)
How does the PMOsc work? Let’s check the source file (Cmd+i or Ctrl+i). You will see that the PMOsc.ar method simply returns (with the ^ symbol) a SinOsc with another SinOsc in the phase argument slot.
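It looks roughly like this (abbreviated here):
PMOsc {
	*ar { arg carfreq, modfreq, pmindex = 0.0, modphase = 0.0, mul = 1.0, add = 0.0;
		^SinOsc.ar(carfreq, SinOsc.ar(modfreq, modphase, pmindex), mul, add)
	}
}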
Here are a few examples for studying the PM oscillator:
The SuperCollider documentation of the UGen presents a nice demonstration of the UGen that looks a bit like this:
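(Paraphrasing the helpfile examples:)
{ PMOsc.ar(Line.kr(600, 900, 5), 600, 3, 0, 0.1) }.play; // modulate the carrier frequency
{ PMOsc.ar(300, Line.kr(600, 900, 5), 3, 0, 0.1) }.play; // modulate the modulator frequency
{ PMOsc.ar(300, 550, Line.ar(0, 20, 8), 0, 0.1) }.play;  // modulate the index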
Other examples of PM synthesis:
Same patch without the comments and modulator and carrier put into variables:
The use of Envelopes in FM synthesis
Frequency modulation is a complex technique and Chowning’s initial research paper shows a wide range of applications of this synthesis method. For example, in the patch below, we have a much lower modulation amplitude (between 0 and 1) but we multiply the carrier frequency with the modulator.
And we can compare that technique with our initial FM example. In short, the frequency of the carrier is used as a parameter in the index (amplitude) of the modulator. These are design details, and there are multiple ways of using FM synthesis to arrive at the sound you are after.
One of the key techniques in FM synthesis is to use envelopes to control the parameters of the modulator. By changing the width and amplitude of the sidebands, we can get many interesting sounds, for example trumpets, mallets or bells.
Let us first create a basic FM synthesis synth definition and try to play it with diverse arguments:
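(A sketch, where the envelope also scales the modulation index; the SynthDef name and arguments are invented:)
(
SynthDef(\fm, { |freq = 440, modfreq = 100, index = 4, amp = 0.3, dur = 1|
	var env, sig;
	env = EnvGen.kr(Env.perc(0.01, dur), doneAction: 2);
	sig = SinOsc.ar(freq + SinOsc.ar(modfreq, 0, modfreq * index * env), 0, amp * env);
	Out.ar(0, sig ! 2);
}).add;
)
Synth(\fm, [\freq, 300, \modfreq, 150, \index, 8]);
Synth(\fm, [\freq, 900, \modfreq, 77, \index, 2, \dur, 3]);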
Chapter 8 - Envelopes and shaping sound
In both analog and digital synthesis, we typically operate with sound sources that are constantly running - whether those are analog oscillators or digital unit generators. This is great fun of course, and we can delight in altering parameters by turning knobs or setting control values, sculpting the sound we are after. However, such a sound is not very musical. Hardly any musical instrument sustains indefinitely: in instrumental sounds we typically get an initial burst of energy, the sound then reaches some sort of equilibrium, and finally it fades out.
The way we shape these sounds in both analog and digital synthesis is to use so-called “envelopes.” They wrap around our sound and give it the desired shape we’re after. Most people have for example heard about the ADSR envelope (where the shape is Attack, Decay, Sustain, and Release) which is one of the available envelopes in SuperCollider:
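Env.adsr(0.01, 0.3, 0.5, 1).plot; // attack time, decay time, sustain level, release time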
Envelopes in SuperCollider come in two types: sustaining (un-timed) and non-sustaining (timed) envelopes. A gate is a trigger (a positive number) that holds the envelope open until it gets a message to close it (such as 0 or less). This is like a finger pressing down a key on a MIDI keyboard. If we were using an ADSR envelope, when the finger presses the key we would run the A (attack) and the D (decay), but the S (sustain) would last as long as the finger is pressed. On release, when the finger leaves the key, the R argument defines how long it takes for the sound to fade out. Synths with gated envelopes can therefore be of indefinite duration, i.e., their length is not set at the point of initialising the synth.
However, using a non-gated envelope, or a timed one, we set the duration of the sound at the time of triggering the synth. Here we don’t need to use a gate to trigger and release a synth.
Envelope types
Envelopes are powerful as we can define precisely the shape of a sound. This could be the amplitude of a sound, but it could also be a definition of frequency, filter cutoff, and so on. Let’s look at a few common envelope types in SuperCollider:
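(A few of the envelope constructors, plotted:)
Env.perc(0.01, 1).plot;          // percussive: attack, release
Env.sine(1).plot;                // a sinusoidal bump
Env.linen(0.1, 1, 0.5).plot;     // attack, sustain, release
Env.triangle(1).plot;            // a triangular shape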
Different sounds require different envelopes. For example, if we wanted to synthesise a snare sound, we might choose to use the .perc method of Env.
Envelopes define points in time that have a target value, duration and shape. So we can define the value, length and shape of each of the nodes. The .new method expects arrays for the value, duration and shape arguments. This can be very useful, as through a very simple syntax you can create complex transitions of value through time:
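Env([0, 1, 0.3, 0.7, 0], [0.1, 0.5, 0.2, 1], [0, -4, 0, 4]).plot; // levels, durations, curves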
The last array defines the curve, where 0 is linear, a positive number curves the segment up, and a negative number curves it down. Check the Env documentation for further explanation.
The EnvGen - Envelope Generator
The envelope itself does nothing. It is simply a description of a form: of values in time and the shape of the line between those values. If we want to apply this envelope to a signal, we need to use the EnvGen UGen to play the envelope within a synth graph. You will note that EnvGen has both .ar and .kr methods, so it works either at audio rate or control rate. The envelope arguments are the following:
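EnvGen.ar(envelope, gate, levelScale, levelBias, timeScale, doneAction)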
where the arguments are:
* envelope - the envelope type (for example Env.perc(0.1, 1))
* gate - not used with timed envelopes; the default is 1, so it triggers the synth immediately
* levelScale - scales the levels (such as amplitude) of the envelope
* levelBias - offsets the envelope's breakpoints
* timeScale - shortens or stretches the envelope (a one second Env.sine(1) could become 10 seconds long)
* doneAction - defines what happens to the synth instance after the envelope has done its job
doneActions
The doneActions are an important aspect of how the SC-server works. One of the key strengths of SuperCollider is how a synth can be created and removed very effectively, making it useful for granular synthesis, or playback of notes. Here a grain or a note can be a synth that exists for 20 milliseconds or 20 minutes. Users of data flow languages, such as Pure Data, will appreciate how useful this is, as synths can be spawned at wish, and don’t need to be hard wired beforehand.
When the synth has exceeded its lifetime through the function of the envelope, it will typically become silent. However, we don't want synths to pile up after they have played, but rather to free the server of them. Unused synths still run, use up processing power (CPU), and can eventually cause distortion in the sound, for example if hundreds of synths have not been freed from the server and are still using CPU.
The doneActions are the following:
0 - Do nothing when the envelope has ended.
1 - Pause the synth; it remains resident on the server.
2 - Remove the synth and deallocate it.
3 - Remove and deallocate both this synth and the preceding node.
4 - Remove and deallocate both this synth and the following node.
5 - Same as 3. If the preceding node is a group then free all members of the group.
6 - Same as 4. If the following node is a group then free all members of the group.
7 - Same as 3. If the synth is part of a group, free all preceding nodes in the group.
8 - Same as 4. If the synth is part of a group, free all following nodes in the group.
9 - Same as 2, but pause the preceding node.
10 - Same as 2, but pause the following node.
11 - Same as 2, but if the preceding node is a group then free its synths.
12 - Same as 2, but if the following node is a group then free its synths.
13 - Frees the synth and all preceding and following nodes.
The doneActions are used in the EnvGen UGen all the time, and it is important not to forget them. However, there are other UGens in SuperCollider that can also free their enclosing synth when some event has happened - such as a sample buffer finishing playing. These UGens include the following:
PlayBuf and RecordBuf - doneAction when the buffer has been played or recorded.
Line and XLine - doneAction when a line has ended.
Linen - doneAction when the envelope is finished.
LFGauss - doneAction after the completion of a cycle.
DemandEnvGen - Similar to EnvGen.
DetectSilence - doneAction when the UGen detects silence below a threshold.
Duty and TDuty - doneAction evaluated when a duty stream ends.
In the examples below, when you add a node, it is always added at the top of the node tree. This is what the SC server does by default. Synths can be added anywhere in the tree though, but that will be discussed later, in the chapter on busses, nodes and groups. [xxx, 15. ]
Triggers and Gates
The difference between a gated and timed envelope has become clear in the above examples, but to put it in very simple terms, think of the piano as having a timed envelope (as the note dies after a while), but the organ as having a gated envelope (as the note only stops when the key is released). For user input it is good to be able to keep the envelope open as long as the user wants and free it at some event, such as releasing a key (or a person exiting a room in a sound installation).
Gates
Gates are typically used to start a sound that contains an envelope of some sort. They 'open up' for a flow of values to pass through for a period of time (timed or untimed). When a gate closes, it typically runs the release part of the envelope used.
Triggers are similar to gates: they start a process, but they do not have the release function gates have. They are therefore used to trigger envelopes.
trigger rate - Arguments that begin with “t_” (e.g. t_trig), or that are specified as \tr in the def’s rates argument (see below), will be made as a TrigControl. Setting the argument will create a control-rate impulse at the set value. This is useful for triggers.
Triggers
In the example above we saw how Dust and Impulse could be used to trigger an envelope. The trigger can be set from everywhere (code, GUI, system, etc) but we need to use “t_” in front of trigger arguments.
If you want to keep the same synth on the server and trigger it from a process other than the one controlling the synthesis parameters, you can use gates and triggers for the envelope. Use doneAction: 0 to keep the synth on the server before and after the envelope is triggered.
Let’s turn the examples above into SynthDefs and explore the concept of gates:
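(A sketch of a gated SynthDef; the name is invented:)
(
SynthDef(\gated, { |freq = 440, amp = 0.4, gate = 1|
	var env = EnvGen.kr(Env.adsr(0.01, 0.1, 0.7, 0.5), gate, doneAction: 2);
	Out.ar(0, SinOsc.ar(freq, 0, amp * env) ! 2);
}).add;
)
x = Synth(\gated); // gate defaults to 1, so the envelope opens
x.set(\gate, 0);   // closing the gate runs the release stage and frees the synth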
The example below does the same, but here with a fixed time envelope. Since that envelope finishes when it is done, it does not work with gates. We need a trigger to trigger it back to life.
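(A sketch - note the t_ prefix and doneAction: 0, which keeps the synth resident:)
(
SynthDef(\trigged, { |freq = 440, amp = 0.4, t_trig = 1|
	var env = EnvGen.kr(Env.perc(0.01, 0.5), t_trig, doneAction: 0);
	Out.ar(0, SinOsc.ar(freq, 0, amp * env) ! 2);
}).add;
)
y = Synth(\trigged);
y.set(\freq, 880);
y.set(\t_trig, 1); // retrigger the envelope; the synth stays on the server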
Exercise: Explore the difference between a gate and a trigger.
MIDI Keyboard Example
The techniques we’ve been exploring above are useful when creating user interfaces for your synth. As an example we could create a synth definition to be controlled by a MIDI controller. Other usage could be networked communication, input from other software, or running musical patterns within SuperCollider itself. In the example below we build upon the example we did in chapter 4, but here we add pitch bend and vibrato.
Chapter 9 - Samples and Buffers
SuperCollider offers multiple ways of working with recorded sound. Sampling is one of the key techniques of computer music programming today, originating in tape-based instruments such as the Chamberlin or the Mellotron, but popularised in digital systems with samplers like the E-mu Emulator and the Akai S-Series. Sampled sound is also the source of more recent techniques, such as granular and concatenative synthesis.
The first thing we need to know is that a sample is a collection of amplitude values in an array. If we use a 44.1 kHz sample rate, we have 44100 samples in the array for one second of mono sound, and twice that amount if the sound is stereo.
We could therefore generate 1 second of whitenoise like this:
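~noise = Array.fill(44100, { 1.0.rand2 }); // 44100 random values between -1 and 1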
The interesting question then is: how do we play these samples? What mechanism will read this and send it to the sound card? For that we use Buffers and UGens that can read them, such as PlayBuf.
Buffers
In short, a buffer is a collection of values in the memory of the computer. In SuperCollider, buffers are loaded onto the server, not into the language; so in our white noise example above, we have to find a way to move our collection of values from the language to the server (as that's where they will be played). Buffers can contain all kinds of values in addition to sound: control data, gestural data from human movement, sonification data, and so on.
Allocating a Buffer
In order to create a buffer, we need to allocate it on the server. This is done through an .alloc method:
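b = Buffer.alloc(s, 44100, 1); // one second of mono at 44.1 kHz, filled with zeros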
As mentioned, buffers are collections of values in the RAM (Random Access Memory) of the computer. This means that the playhead can jump back and forth in the sound, play it fast or slow, backwards or forwards, and so on. But it also means that, unlike sound file playback from disk (where sound is buffered in at regular intervals), the whole sound is stored in the memory of the computer. Try opening your Terminal and running this line:
top
We have now allocated a buffer on the server, but it only contains zeros. Try playing it:
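b.play; // silence: the buffer only contains zeros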
If we wanted to listen to the noise we created above, we could simply load the array into the buffer.
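(Continuing the sketch from above:)
b.loadCollection(~noise); // move our language-side array over to the server buffer
b.play;                   // one second of white noise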
Reading a soundfile into a Buffer
We can read a sound file into a buffer simply by providing the path to it. This path is either relative to the SuperCollider application (so 'hello.aif' could be loaded if it sat next to the SuperCollider application) or an absolute path. Note that the IDE allows you to drag a file from your file system into the code document, and the full path appears:
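(A sketch using the a11wlk01 sound file that ships with SuperCollider:)
b = Buffer.read(s, Platform.resourceDir +/+ "sounds/a11wlk01.wav");
{ PlayBuf.ar(1, b, BufRateScale.kr(b), doneAction: 2) }.play; // 1 = a mono file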
Since the PlayBuf requires information on the number of channels in the sound file, users need to make sure that this is clear, so people often come up with systems like this in their code:
Note that we don’t need the “!2” in the stereo version as that would simply make the left channel expand into the right (and add to the right channel), whereas the right channel would expand into Bus 3.
[Bus 1, Bus 2, Bus 3, Bus 4, Bus 5, etc….]
[ Left , Right ]
[ Left , Right ]
Let us play a little with Buffer playback in order to get a feel for the possibilities of sound stored in random access memory.
Recording live sound into a Buffer
Live sound can of course be fed directly into a Buffer for further manipulation. This could be useful if you are recording the sound, transforming it, overdubbing, cutting it up, scratching, and so on. However, in many cases a simple SoundIn UGen might be sufficient (and no Buffers used).
SuperCollider really makes this simple. However, RecordBuf does more than simply record. Since it loops, you can also overdub onto the data that is already in the buffer using the preLevel argument: preLevel is the amount by which the existing buffer data is multiplied before the incoming sound is added to it. We can now explore this in a more SuperCollider way of doing things, with SynthDefs and Synths.
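(A sketch; the SynthDef name is invented:)
(
b = Buffer.alloc(s, 44100 * 4, 1); // four seconds of mono
SynthDef(\recorder, { |bufnum, recLevel = 1, preLevel = 0|
	RecordBuf.ar(SoundIn.ar(0), bufnum, recLevel: recLevel, preLevel: preLevel, loop: 1);
}).add;
)
x = Synth(\recorder, [\bufnum, b, \preLevel, 0.5]); // overdub onto what is already there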
It is clear that playing with the recLevel and preLevel of a buffer recording can create interesting layers of sound, where instrumentalists can record on top of what they have already recorded. People could also engage in an "I'm Sitting in a Room" exercise a la Lucier.
Finally, as mentioned at the beginning of this chapter, buffers can contain any data and are not necessarily bound to audio content. In the example below we use the buffer to record mouse values at control rate (which is sample rate / block size) and write that mouse movement to disk in the form of an audio file.
BufRd and BufWr
There are other UGens that can be helpful when playing back buffers. BufRd (buffer read) and BufWr (buffer write) are good examples of this, and so is the LoopBuf (from the sc3-plugins that are in the SuperCollider Extensions distribution).
In the example below we use a Phasor to 'drive' the reading of the buffer. The Phasor ramps through the buffer sample by sample, given the start and end sample you want to read:
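(A sketch, reusing the buffer b from above:)
(
{
	BufRd.ar(1, b,
		Phasor.ar(0, BufRateScale.kr(b) * MouseX.kr(0.25, 4, 1), 0, BufFrames.kr(b)) // start to end, rate on the mouse
	) ! 2
}.play;
)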
Streaming from disk
If your soundfile is very long, it is probably a good idea to stream the sound from disk, like most digital audio workstations do. This is because long stereo files would quickly fill up your RAM if working with many sound files.
Wavetables and wavetable look-up oscillators
Wavetables are a classic method of sound synthesis. It works similarly to the BufRd of a Buffer above, but here we are creating a bespoke wavetable (which can often be visualised for manipulation) and using wavetable look-up oscillators to play the content of the wavetable back. In fact many of the oscillators of SuperCollider use wavetable look-up under the hood, SinOsc being a good example.
Let’s start with creating a SynthDef with an Osc (which is a wavetable look-up oscillator). It expects to get a signal in the form of a SuperCollider Wavetable, which is a special format for interpolating oscillators.
Above we saw how an envelope was turned into a Signal which was then converted to a Wavetable. Signals are a type of a numerical collection in SuperCollider that allows for various math operations. These can be useful for FFT manipulation of data arrays or simply writing data to a file, as in this example:
Below we explore further how Signals can be used with wavetable oscillators.
People often want to draw their own sound in a wavetable. We can end this excursion into wavetable synthesis by creating a graphical user interface that allows for the drawing of wavetables.
Pitch and duration changes
If you would like to change the pitch but keep the duration of the sampled sound you are playing, you cannot simply change the rate of the PlayBuf, as the duration gets shorter as the rate increases (an octave up speeds the sound up by a factor of 2).
We could use PitchShift to change the pitch without changing the time. PitchShift is a granular synthesis pitch shifter (other techniques include phase vocoders):
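(A sketch:)
(
{
	PitchShift.ar(
		PlayBuf.ar(1, b, loop: 1),
		0.1,              // 100 ms grain window
		MouseX.kr(0.5, 2) // pitch ratio: 2 is an octave up, with the duration unchanged
	) ! 2
}.play;
)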
For time stretching, check out the Warp0 and Warp1 UGens.
Chapter 10 - Granular and Concatenative Synthesis
Granular synthesis is a synthesis technique that became available for most practical purposes with digital computer music software. Early pioneers were Barry Truax and Iannis Xenakis, but the technique has been well explored in the work of Curtis Roads, both in his musical output and in a fine book called Microsound.
The idea in granular synthesis is to synthesize a sound using small grains, typically of 10-50 millisecond duration, that are wrapped in envelopes. These grains can then result in a continuous sound or more discontinuous ‘grain clouds’. Here the individual grains become the building blocks, almost atoms, of a more complex structure.
Granular Synthesis
Granular synthesis is used in many pitch shifting and time stretching features of commercial software, so most people will be aware of its functionality and power. Let us explore pitch shifting through the use of a native SuperCollider UGen, PitchShift. In the examples below, the grains are 100 ms windows that overlap. What is really happening is that the sample is played at a variable rate (where a rate of 2 is an octave higher), but the grains are layered on top of each other in order to maintain the original duration of the sound.
The grains are windows with a specific envelope (typically a Hanning envelope) and they overlap in order to create the continuous sound. Play around with the window size and overlap parameters to explore how they result in different sounds. The above examples used PitchShift to change the pitch while keeping the same playback rate. Below we use Warp1 to time stretch sound while the pitch remains the same.
TGrains
The TGrains UGen - or Trigger Grains - is a handy UGen for quick and basic granular synthesis. Here we can pass arguments such as the number of grains per second, grain duration, rate (which is pitch), and so on:
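(A sketch, again reusing the buffer b:)
(
{
	TGrains.ar(2,
		Impulse.kr(10),             // 10 grains per second
		b,
		MouseY.kr(0.5, 2),          // rate (pitch)
		MouseX.kr(0, BufDur.kr(b)), // centre position in the file
		0.1                         // grain duration in seconds
	)
}.play;
)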
GrainIn
GrainIn enables you to granularize incoming audio. This UGen is part of a collection of granular UGens, including GrainSin, GrainFM, and GrainBuf. Take a look at the documentation of these UGens and explore their functionality.
Custom built granular synthesis
Having explored some features of granular synthesis above, the best way to really understand what granular synthesis is, is to make our own granular synth engine that spawns grains according to our own rate, pitch, waveform, and envelope.
In the examples above we have continued the chapter on Buffers and used sampled sound as the source of our granular synthesis. Here below we will explore the technique with simpler waveforms, such as the sine wave.
If our grains all have the same pitch, we should be able to generate a continuous sine wave out of the grains as they will be overlapping as shown in this image
[image]
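A sketch of such an engine: a one-grain SynthDef (the names are hypothetical) and a Routine that spawns overlapping grains:

(
SynthDef(\grain, { |freq = 440, amp = 0.2, grainDur = 0.05|
    var env = EnvGen.ar(Env.sine(grainDur, amp), doneAction: 2);
    Out.ar(0, SinOsc.ar(freq) * env ! 2);
}).add;
)
(
fork {
    200.do {
        Synth(\grain, [\freq, 440, \grainDur, 0.05]);
        0.025.wait; // a new grain every 25 ms, i.e., 50% overlap
    };
};
)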
But the sound is not perfectly continuous. This is because when we create a Synth, the message is sent as quickly as possible to the server. As the language-server communication is asynchronous, there can be slight differences in the time it takes to send the OSC message over to the server, and this causes the fluctuation. We therefore need to timestamp our messages, which can be done either through messaging-style communication with the server, or by using s.bind (which makes an OSC bundle under the hood and sends a time-stamped OSC message to the server).
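The same grain stream, but with each message wrapped in s.bind so it is sent as a time-stamped bundle, removing the fluctuation:

(
fork {
    200.do {
        s.bind { Synth(\grain, [\freq, 440, \grainDur, 0.05]) }; // time-stamped OSC bundle
        0.025.wait;
    };
};
)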
There can be different envelopes in the grains. Here we use a Perc envelope:
The two examples above serve as a good illustration of how Patterns and Tasks work. We've got the same SynthDef and the same arguments, but Patterns operate with default keywords (like \instrument, \freq, \amp, and \dur). We therefore had to make sure that our envelope argument was not called \dur, since Pbind uses that to control the density (the time until the next event is fired), so "\dur, 0.01" in the pattern is the same as "0.01.wait" in the Task.
Finally, let’s try this out with a buffer.
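A sketch of a granular buffer player (names are hypothetical), scattering short grains from random positions in buffer b:

(
SynthDef(\bufgrain, { |buf, rate = 1, startPos = 0, amp = 0.3, grainDur = 0.1|
    var sig = PlayBuf.ar(1, buf, rate, 1, startPos);
    Out.ar(0, sig * EnvGen.ar(Env.sine(grainDur, amp), doneAction: 2) ! 2);
}).add;
)
(
fork {
    100.do {
        s.bind {
            Synth(\bufgrain, [
                \buf, b,
                \rate, rrand(0.9, 1.1),
                \startPos, rrand(0, b.numFrames - 10000)
            ]);
        };
        0.05.wait;
    };
};
)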
Concatenative Synthesis
Concatenative synthesis is a relatively recent technique of data-driven synthesis, where source sounds are analysed into a database and segmented into units; a target sound (for example live audio input) is then analysed and matched with the closest unit in the database, which is then played. This is done at a very granular level, prompting Zils and Pachet to call the technique musaicing, from musical mosaicing, as it enables the synthesis of a coherent sound at a macro level that is built up of smaller units of sound, just like in traditional mosaics. The technique is therefore quite related to granular synthesis in the sense that a macro-sound is built out of micro-sounds.
The technique can be quite complex to work with, as users might have to analyse and build up a database of source sounds. However, people have built plugins and classes in SuperCollider that help with this, and in this section we will explore some of the work done in this area by Nick Collins, a long-time SuperCollider user and developer.
Chapter 11 - Physical Modelling
Physical modelling is a common synthesis technique where a mathematical model is built of some physical object. The maths here can be quite complex and is outside the scope of this book. However, it is worth exploring the technique, as there are physical modelling UGens in SuperCollider, and many musical instruments can easily be built using simple physical models, using filters and the like. Waveguide synthesis can model the physics of an acoustic instrument or sound-generating object. It simulates the travelling of waves through a string or a tube. The physical structures of an instrument can be thought of as waveguides or transmission lines.
In physical modelling, as opposed to traditional synthesis types (AM, FM, granular, etc.), we are not imitating the sound of an instrument, but rather simulating the instrument itself and the physical laws that are involved in the creation of its sound. In physical modelling of sound we typically operate with an excitation and a resonant body. The excitation is the material and weight of the thing that hits, whilst the resonant body is what is being hit and resonates. In many cases it does not make sense to separate the two this way mathematically, but from a user perspective we can think of material bodies of wood, glass, metal, or a string, as examples, being hit by a finger, a plectrum, a metal hammer, or anything imaginable, for example falling sand. Further resolution can be designed into the model of the instrument, for example defining the bridge of a guitar, the type of strings, the type of body, the room the instrument is in, etc.
Karplus-Strong synthesis (named after its authors) is a precursor of physical modelling and is good for synthesising strings and percussion sounds.
The repeat rate of the delay becomes the pitch of the string, so 0.001 is 1000 Hz: a reciprocal relationship. We could therefore write 440.reciprocal in the delayTime argument of the CombL, and it would give us a string sound of 440 Hz. The duration of the string is controlled by the decayTime argument. This is the basic ingredient of a string synthesizer, but for further development you might want to consider applying filters to the noise, or perhaps use another type of noise. Also, the duration of the burst (100 ms above) will affect the sound heavily.
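A minimal Karplus-Strong sketch along these lines: a 100 ms noise burst fed into a comb delay tuned to 440 Hz:

(
{
    var burst = WhiteNoise.ar(0.5) * EnvGen.kr(Env.perc(0.001, 0.1)); // the exciter
    CombL.ar(burst, 0.01, 440.reciprocal, 3) ! 2; // delay line tuned to 440 Hz, 3 second decay
}.play;
)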
Compare using white noise and pink noise as an exciter, as well as using a resonant filter to filter the burst:
SuperCollider comes with a UGen called Pluck, which is an implementation of Karplus-Strong synthesis. It should be more efficient than the examples above, but similar in sound.
We could create a SynthDef with Pluck.
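A sketch of such a SynthDef (the name and arguments are hypothetical):

(
SynthDef(\pluck, { |out = 0, freq = 440, amp = 0.5, decay = 3|
    var sig = Pluck.ar(WhiteNoise.ar(amp), Impulse.kr(0), 0.01, freq.reciprocal, decay, 0.5);
    DetectSilence.ar(sig, doneAction: 2); // free the synth when the string has died out
    Out.ar(out, sig ! 2);
}).add;
)
Synth(\pluck, [\freq, 330]);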
Biquad filter
In SuperCollider, the SOS UGen is a second order biquad filter that can be used to create various interesting sounds. We could start with a simple glass-like sound:
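One way to sketch this is with the standard two-pole resonator coefficients (the frequency and ring values here are just a starting point):

(
{
    var freq = 1500, rho = 0.999, theta;
    theta = 2pi * freq / SampleRate.ir;
    SOS.ar(Impulse.ar(2, 0, 0.3), 1.0, 0.0, 0.0, 2.0 * rho * theta.cos, rho.squared.neg) ! 2;
}.play;
)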
And with slight changes we have a more woody type of sound:
Waveguide synthesis
Waveguide synthesis is the most common form of physical modelling, often using delay lines, filtering, feedback and other non-linear elements. The waveguide flute below is based upon Hans Mikelson's Csound slide flute (ultimately derived from Perry Cook's STK slide flute physical model). The SuperCollider port is by John E. Bower, who kindly allowed for the flute's inclusion in this tutorial.
Filters
Filters are a vital element in physical modelling. The main concepts here are some kind of an exciter (where in SuperCollider we might use triggers such as Impulse, Dust, or filtered noise) and a resonator (such as the Resonz and Klank resonators, Delays, Reverbs, etc.)
Ringz
Ringz is a powerful ringing filter with a decay time, so the impulse can ring for N seconds. Let's explore some examples:
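For instance, an impulse ringing for 2 seconds at a mouse-controlled frequency:

{ Ringz.ar(Impulse.ar(1, 0, 0.3), MouseX.kr(200, 3000, 1), 2) ! 2 }.play;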
Resonz, Klank and DynKlank
The Resonz, Klank and DynKlank filters can be used in physical modeling. Some examples here below:
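A small sketch: dust impulses exciting a bank of four resonators (the frequencies are arbitrary):

(
{
    var exciter = Dust.ar(2, 0.1);
    Klank.ar(`[[400, 1071, 1353, 1723], nil, [1, 1, 1, 1]], exciter) ! 2;
}.play;
)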
Decay
TBall, Spring and Friction
Physical modelling can involve the mathematical modelling of all kinds of phenomena, from wind to water to the simulation of moving or falling objects where gravity, speed, surface type, etc., are all parameters. The popular Box2D library (of Angry Birds fame) is one such library that simulates physical systems. In SuperCollider there are UGens that do this, for example TBall (Trigger Ball) and Spring.
Having explored the qualities of the TBall as a system that outputs impulses according to a physical system, we can now apply these impulses in some of the resonant filters that we have explored above.
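A sketch, assuming (as in the TBall helpfile) that the first input is the signal driving the ball's position; its impulses excite a Ringz resonator:

(
{
    var bounce = TBall.ar(MouseX.kr(0, 1), 0.1, 0.05, 0.01) * 20; // impulses from the bouncing-ball model
    Ringz.ar(bounce, 1500, 0.1) ! 2;
}.play;
)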
The Spring UGen is a physical model of a resonating spring. Considering the wave properties of spring this can be very useful for synthesis.
In the SC3plugins you’ll find a Friction UGen which is a physical model of a mass resting on a belt. The documentation of the UGen is good, but two examples are provided here for fun:
The MetroGnome
How about trying to synthesise a wooden old-fashioned metronome?
The STK synthesis kit
Many years ago, Paul Lansky ported the STK physical modelling kit by Perry Cook and Gary Scavone to SuperCollider. This collection of UGens can be found in the SC3-plugins, but they have not been maintained and the code might be in bad shape, although some UGens still work. It could be a good project for someone wanting to explore the source code of a classic physical modelling library and update the UGens for SuperCollider 3.7+.
Here below we have a model of a xylophone:
Part III
Chapter 12 - Time Domain Audio Effects
In this book, we divide the section on audio effects into two separate chapters, on time domain and frequency domain effects respectively. This is for a good reason, as the two are completely different techniques of manipulating audio: the former, the time domain effects, are well known from the world of analogue audio, whereas the latter, manipulation in the frequency domain, is only realistically possible through the use of computers running Fast Fourier Transform (FFT) algorithms. This will be explained later.
Most of the audio effects that we know (and you can roughly think about the availability of guitar pedal boxes, where each box contains the implementation of some audio effect) are familiar and easy to understand effects that were often discovered by accident or invented through some form of serendipitous exploration. There are diverse stories of John Lennon and George Martin discovering flanging on an Abbey Road tape machine, but earlier examples exist, although the technique had not been given this name then.
Time domain effects either manipulate samples in time (typically the signal is split and something is done to one copy, such as delaying it, and the two are then added together again) or in amplitude (where samples can be changed in value, for example to get a distortion effect). This chapter will explore the diverse audio effects that can easily be created using the UGens available in SuperCollider.
Delay
When we delay a signal, we can achieve various effects, from a simple echo to a more complex reverb. Typical variables are delay time (how long it takes before the sound appears again) and decay time (how long it will repeat). In SuperCollider, there are three main types of delays: Delay, Comb and Allpass:
DelayN/DelayL/DelayC are simple echoes with no feedback.
CombN/CombL/CombC are comb delays with feedback (decay time).
AllpassN/AllpassL/AllpassC die out faster than the combs, but have feedback as well.
All of these delays come with different interpolation algorithms (N, L and C, standing for no interpolation, linear interpolation and cubic interpolation). Interpolation is about what happens between two discrete values, for example samples: will you get a jump when the next value appears (N), a line from one value to the next (L), or a curved shape between the two (C), better simulating analogue signal behaviour? These are all good for different purposes: N is computationally cheap, but C is good if you are sweeping the delay time and want more nuanced interpolation that can deal with values between two samples.
Generally, we can talk about three types of time when using Delays, resulting in different types of effects:
A very short delay (1-2 samples) can create an FIR (Finite Impulse Response) lowpass filter.
Increase the delay time (1-10 ms) and a comb filter materialises.
Medium delays result in a thin signal, but can also add ambience and width to the sound.
Long delays create discrete echoes, imitating sound bouncing off hard walls.
Delays can also have variable delay time which can result in the following effects:
Phase Shifting
Flanging
Chorus
These effects are explained in dedicated sections below.
Short Delays (< 10 ms)
Let's explore what a short delay means. This is a delay that is hardly perceivable by the human ear if you, for example, delay a click sound or an impulse:
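For example, an impulse mixed with a copy of itself, delayed by anything from one sample to 10 ms with the mouse:

(
{
    var sig = Impulse.ar(2, 0, 0.5);
    (sig + DelayN.ar(sig, 0.01, MouseX.kr(44100.reciprocal, 0.01))) ! 2;
}.play;
)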
In the example above we have a delay from one sample (i.e., 44100.reciprocal, or 0.000022675 seconds, or 0.023 ms) up to 10 milliseconds. The impulse is the shortest sound possible (one sample of amplitude 1), so it serves well in this experiment. When you move the mouse from the left to the right of the screen, you will probably perceive the sound as one event, but you will notice that the sound changes slightly in timbre. It is filtered. And indeed, as we will see in the filter chapter, most filters work by delaying samples and multiplying the feedback or feedforward samples by different values.
We could try the same with a more continuous signal, for example a Saw wave. You will hear that the timbre of the wave changes when you move the mouse around, as it is effectively being filtered (adding two signals together where one is slightly delayed):
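Something like this (note the DelayC, which is discussed below):

(
{
    var sig = Saw.ar(220, 0.2);
    (sig + DelayC.ar(sig, 0.01, MouseX.kr(44100.reciprocal, 0.01))) ! 2;
}.play;
)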
Note that in the example above I'm using DelayC, as opposed to the DelayN in the Impulse code. This is because the delay time is so small, at sample level, that interpolation becomes important. Try changing the DelayC to DelayN (no interpolation) and listen to what happens, particularly at the shorter delay times on the left of the screen. The best way to explore the filtering effect might be to use WhiteNoise:
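For instance:

(
{
    var sig = WhiteNoise.ar(0.2);
    (sig + DelayC.ar(sig, 0.01, MouseX.kr(44100.reciprocal, 0.01))) ! 2;
}.play;
)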
In the examples above we have been adding the two signals together (the original and the delayed signal) and then duplicating the sum (!2) into an array for two-speaker output. Adding the signals creates the filtering effect, but if we instead put each signal in its own speaker, we get a completely different effect, namely spatialisation:
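The same patch, but with the dry signal in one speaker and the delayed signal in the other:

(
{
    var sig = Impulse.ar(2, 0, 0.5);
    [sig, DelayC.ar(sig, 0.01, MouseX.kr(44100.reciprocal, 0.01))];
}.play;
)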
We have now entered the realm of psychoacoustics, but this can be explained quickly by the fact that sound travels at around 343 metres per second, or roughly 34 cm per millisecond, giving roughly a 0.6 millisecond difference in arrival time between the ears of a normal head if the sound comes from one side. This is called the Interaural Time Difference (ITD) and is one of the key factors in sound localisation. We can explore this in the following example, where we have a signal that is "delayed" from 1 ms before to 1 ms after the original signal. Try this with headphones; you should get some impression of the sound moving from the left to the right ear.
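A sketch of this ITD experiment: the mouse offsets the left and right arrival times in opposite directions within a +/- 1 ms range:

(
{
    var sig = Impulse.ar(3, 0, 0.5);
    var offset = MouseX.kr(-0.001, 0.001);
    [DelayC.ar(sig, 0.002, 0.001 + offset), DelayC.ar(sig, 0.002, 0.001 - offset)];
}.play;
)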
In the example below, explore the different algorithms implemented in Delay, Comb and Allpass. The Delay does not have a decay time, so it does not produce the Karplus-Strong type of sound that we get with the other two. The details of the difference in the internal implementation of Comb and Allpass are too complex for this book, but they have to do with how the gain coefficients are calculated: a comb with combined feedback and feedforward equals an allpass.
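Evaluate these one at a time to compare (arbitrary settings of 10 ms delay and 2 second decay):

{ DelayN.ar(Impulse.ar(1, 0, 0.5), 0.01, 0.01) ! 2 }.play; // a single echo, no feedback
{ CombN.ar(Impulse.ar(1, 0, 0.5), 0.01, 0.01, 2) ! 2 }.play; // feedback gives a 100 Hz string-like tone
{ AllpassN.ar(Impulse.ar(1, 0, 0.5), 0.01, 0.01, 2) ! 2 }.play; // similar, but dies out faster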
Is this familiar?
Any number of delays can of course be added together to create the desired sound, something we will explore when we discuss reverbs:
The old Karplus-Strong in its most basic form:
Medium Delay Time (10-50 ms)
The examples above, with delays under 10 ms, resulted in changes in timbre or spatial location, but we always perceived the result as a single sonic event, even when using a one-sample impulse. It depends on subject and context, but it can be said that we start to perceive a delayed event as two events when there is more than a 20 ms delay between them. This code demonstrates that:
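A sketch of such a demonstration (the \click SynthDef is hypothetical): two clicks with a growing gap, posted in milliseconds; at some point you start hearing two events:

(
SynthDef(\click, { Out.ar(0, Impulse.ar(0) ! 2 * 0.5) }).add;
)
(
Routine({
    var gap = 0.005;
    10.do({
        Synth(\click);
        gap.wait;
        Synth(\click);
        ((gap * 1000).asString ++ " ms").postln;
        1.wait;
        gap = gap + 0.005;
    });
}).play;
)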
The post window shows the milliseconds. A drummer who was more than 20 ms off when trying to hit the exact beat would be giving a disappointing performance (of course, part of the art of a good percussionist is to be slightly ahead of or behind the beat, so the comment is not about intention), and any hardware interface with a latency of more than 20 ms would be considered a rather poor interface.
Longer delays can also generate a spatialisation effect, although this is not modelling the interaural time difference (ITD), but rather creating the sensation of a wide sonic image.
Longer Delays (> 50 ms)
Random experiments
Phaser (phase shifting)
In a phaser, a signal is sent through an allpass filter which does not filter out any frequencies, but simply shifts the phase of the sound by delaying it. This sound is then added to the original signal. If the phase is 180 degrees, the sound is cancelled out, but if it is less than that, it will create variations in the spectrum.
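A minimal phaser sketch: an allpass delay with a slowly modulated delay time, mixed with the dry signal (no feedback):

(
{
    var sig = Saw.ar(220, 0.2);
    var phased = AllpassC.ar(sig, 0.01, SinOsc.kr(0.2).range(0.0001, 0.01), 0); // decay 0: pure phase shift
    (sig + phased) ! 2;
}.play;
)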
Flanger
In a flanger, a delayed signal is added to the original signal with a continuously variable delay (usually shorter than 10 ms), creating a phasing effect. The term comes from the times when tapes were used in studios, and an operator would place a finger on the flange of one of the tape reels to slow it down, thus causing the flanging effect.
A flanger is like a phaser with a dynamic delay filter (allpass), but it usually also has a feedback loop.
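A flanger sketch with a feedback loop via LocalIn/LocalOut (the feedback amount of 0.5 is arbitrary):

(
{
    var sig, delayed;
    sig = Saw.ar(220, 0.2);
    delayed = DelayC.ar(sig + (LocalIn.ar(1) * 0.5), 0.02, SinOsc.kr(0.1).range(0.0005, 0.01));
    LocalOut.ar(delayed); // feed the delayed signal back
    (sig + delayed) ! 2;
}.play;
)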
Chorus
The chorus effect happens when we add a delayed signal to the original with a time-varying delay. The delay has to be short in order not to be perceived as an echo, but above 5 ms to be audible. If the delay is too short, it will destructively interfere with the un-delayed signal and create a flanging effect. Often the delayed signals are also slightly pitch shifted to create a harmony with the original signal.
There is no definitive algorithm for creating a chorus; there are many ways to achieve it. As opposed to the flanger above, this chorus does not have a feedback loop. But you could create a chorus effect out of a flanger by using a longer delay time (20-30 ms instead of the 1-10 ms of the flanger):
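One possible chorus sketch: four copies of the signal, each with its own slowly drifting delay time in the 20-30 ms range:

(
{
    var sig = Saw.ar(220, 0.2);
    var chorus = Mix.fill(4, {
        DelayC.ar(sig, 0.05, LFNoise1.kr(0.3).range(0.02, 0.03));
    }) * 0.3;
    (sig + chorus) ! 2;
}.play;
)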
Reverb
Achieving realistic reverb is a science of its own, too deep to delve into here. The most common reverb technique in digital acoustics is to use parallel comb delays that are fed into a few allpass delays, as sketched after the list below.
Reverb can be analysed into 3 stages:
* Direct sound (from the sound source)
* Early reflections (discrete 1st generation reflections from walls)
* Reverberation (Nth generation reflections that take time to build up, and fade out slowly)
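A minimal sketch of that classic design (a Schroeder-style reverb with four parallel combs fed into two serial allpasses; all times are arbitrary):

(
{
    var sig, combs, reverb;
    sig = Decay.ar(Impulse.ar(0.5), 0.2, WhiteNoise.ar(0.2)); // a dry percussive source
    combs = Mix.fill(4, { CombC.ar(sig, 0.1, rrand(0.02, 0.05), 2) }); // parallel comb delays
    reverb = combs;
    2.do { reverb = AllpassC.ar(reverb, 0.05, rrand(0.005, 0.02), 1) }; // serial allpasses
    (sig + (reverb * 0.3)) ! 2;
}.play;
)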
Tremolo
Tremolo is a fluctuation in the amplitude of a signal, well known from analogue guitar amplifiers, and heard in surf music or garage punk such as The Cramps.
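For example, a sine tone with its amplitude modulated by a slow oscillator (mouse controls the tremolo rate):

{ SinOsc.ar(440, 0, 0.3) * SinOsc.kr(MouseX.kr(1, 20)).range(0.2, 1) ! 2 }.play;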
Distortion
Distortion can be achieved through diverse algorithms, but the most basic one is to raise the amplitude of the signal so much that it starts to clip (below -1 and above 1), turning a sine wave into a square-like wave and adding harmonics.
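A simple sketch: drive a sine wave's amplitude up with the mouse and clip it at +/- 1:

(
{
    var sig = SinOsc.ar(220, 0, MouseX.kr(1, 20));
    sig.clip2(1) * 0.2 ! 2;
}.play;
)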
Compressor
The compressor reduces the dynamic range of a signal if it exceeds a certain threshold. The compression ratio determines how much the signal exceeding the threshold is lowered: a 4:1 compression ratio means that for every 4 dB over the threshold that goes into the unit, only 1 dB comes out.
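A Compander sketch set up as a compressor (threshold 0.4 and slopeAbove 0.5, i.e., a 2:1 ratio; all values are starting points):

(
{
    var sig = Pulse.ar(90, 0.3, LFNoise1.kr(1).range(0.1, 0.8)); // a source with varying amplitude
    Compander.ar(sig, sig, 0.4, 1, 0.5, 0.01, 0.1) ! 2;
}.play;
)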
Limiter
The limiter does essentially the same as the compressor, but it looks at the signal’s peaks whereas the compressor looks at the average energy level. A limiter will not let the signal past the threshold, while the compressor does, according to the ratio settings.
The difference is in the slopeAbove argument of the Compander (0.5 in the compressor, but 0.1 in the limiter)
Sustainer
The sustainer works like an inverted compressor: it exaggerates the low amplitudes and raises them up towards the defined threshold.
Noise gate
The noise gate allows a signal to pass through the filter only when it is above a certain threshold. If the energy of the signal is below the threshold, no sound is allowed to pass. It is often used in settings where there is background noise and one only wants to record the signal and not the (in this case) uninteresting noise.
The noise gate needs a bit of parameter tweaking to get what you want, so here is the same version as above, just with MouseY controlling the slopeAbove parameter.
Normalizer
The Normalizer uses a buffer to store the sound in a small delay in order to look ahead in the audio. It will not overshoot like a Compander, but the downside is the delay. The Normalizer normalizes the input amplitude to a given level.
Limiter (Ugen)
Like the Normalizer, the Limiter uses a small delay buffer to look ahead in the audio. It will not overshoot like the Compander, but you have to put up with a slight delay. The Limiter limits the input amplitude to a given level.
Amplitude
Amplitude tracks the peak amplitude of a signal. It is not really an audio effect, but it can be a key element in the design of effects, (for example adaptive audio effects) and is therefore included here in this section.
In the example below, we map the input amplitude to the frequency of a sine wave:
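A sketch (use headphones, as it listens to the microphone):

(
{
    var amp = Amplitude.kr(SoundIn.ar(0), 0.01, 0.01);
    SinOsc.ar(amp * 1200 + 200, 0, 0.3) ! 2; // louder input gives a higher pitch
}.play;
)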
Pitch
Pitch tracks the pitch of a signal. If the pitch tracker has found a pitch, the hasFreq output will be 1 (true); if it hasn't, it will be 0 (false). (Read the helpfile about how it works.)
NOTE: it can be useful to pass the input signal through a Low Pass Filter as it is easier to detect the pitch of a signal with less harmonics.
Tip: people often ask about the hash (#) in front of the freq and hasFreq variables. This is a way to assign multiple variables from the values of an array.
The simplest of patches - mapping pitch to the frequency of the sine
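A minimal version of that patch:

(
{
    var in, freq, hasFreq;
    in = SoundIn.ar(0);
    # freq, hasFreq = Pitch.kr(in); // two outputs assigned to two variables
    SinOsc.ar(freq, 0, 0.3 * hasFreq) ! 2; // silent when no pitch is found
}.play;
)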
In the example below we use the Tartini UGen by Nick Collins. In my experience it performs better than Pitch and is part of the SC3-plugins external plugins.
Filters
The filter UGens in SuperCollider use time-domain algorithms to achieve the desired effect.
Low Pass Filter
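For example, white noise through a low pass filter with a mouse-controlled cutoff:

{ LPF.ar(WhiteNoise.ar(0.3), MouseX.kr(100, 10000, 1)) ! 2 }.play;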
Resonant Low Pass Filter
High Pass Filter
Resonant High Pass Filter
Band Pass Filter
Band Reject Filter
SOS - A biquad filter
A second order filter, also known as a biquad filter. The helpfile shows the algorithm itself:
out(i) = (a0 * in(i)) + (a1 * in(i-1)) + (a2 * in(i-2)) + (b1 * out(i-1)) + (b2 * out(i-2))
Here you can see that the filter reaches back to the second sample before the current one, and uses parameters (a0, a1, a2, b1 and b2) to determine the function of the filter.
Resonant filter
This filter resonates at the set frequency. The bwr parameter is the bandwidth ratio, that is, how much energy is passed on each side of the centre frequency.
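For example:

{ Resonz.ar(WhiteNoise.ar(0.5), MouseX.kr(200, 4000, 1), MouseY.kr(0.01, 1)) ! 2 }.play; // X: frequency, Y: bwr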
Chapter 13 - Fast Fourier Transform (FFT)
Most well-known audio effects process audio in the time domain, typically varying samples in amplitude (ring modulation, waveshaping, distortion) or time (filters and delays). The Fast Fourier Transform (FFT) is a computational algorithm that allows us to manipulate sound in the frequency domain, performing various calculations on the individual frequency bins of the signal.
In FFT, windows are taken from the sound signal and analysed one by one. (The window size is typically 512 or 1024 samples, yielding 256 or 512 bins: values of magnitude and phase.) The processing (using the PV plugins of SC) is done in the frequency domain, and the result is then converted back to the time domain before playback. The windows are normally overlapped and mixed using a Hanning window to prevent smearing between frequencies.
To use FFT in SuperCollider, you first do the FFT analysis using the FFT UGen; then diverse PV_ UGens (phase vocoder UGens) can be applied to operate mathematically on the signal; finally, the resulting signal needs to be converted back into the time domain using the Inverse Fast Fourier Transform (IFFT).
Or, in short: FFT -> PV_Ugens -> IFFT
where FFT translates the signal from the time domain into the frequency domain, the PV_ UGens perform some functions on the sound, and IFFT translates the signal back to the time domain.
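A minimal sketch of the whole chain, using PV_BrickWall as the processing step:

(
{
    var in, chain;
    in = WhiteNoise.ar(0.3);
    chain = FFT(LocalBuf(1024), in); // into the frequency domain
    chain = PV_BrickWall(chain, MouseX.kr(-1, 1)); // process the bins
    IFFT(chain) ! 2; // back to the time domain
}.play;
)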
Each frequency bin is a pair of magnitude and phase values. The larger the window, the better the pitch resolution, but the worse the precision in time. The smaller the window, the worse the pitch resolution, but the better the precision in time.
sample rate/window size
44100/512 = 86.1328125 // so the first (lowest) frequency of a 512 window is 86.13 Hz
44100/1024 = 43.06640625 // so the first (lowest) frequency of a 1024 window is 43.06 Hz
For a window size of 1024 samples we get 512 bins. These are the frequencies of which we will get the mag and phase:
Post << 512.collect({|i| (22050/512)*(i+1)})
(And we would need a 1024 frame Buffer to store that (mag and phase for each freq))
The full list of frequencies, including DC, that a 1024-point FFT theoretically generates:
a = 1024.collect({|i| (44100/1024)*i});
Except we ignore the bins above Nyquist since they’re redundant:
a = a[..512];
Resulting in:
a.postcs;""
NOTE: some of the examples below use the FFT plugins from the library of Bhob Rainey.
So, in general, it is important to understand that FFT analysis of a sound gives you two arrays: bins (frequencies, depending upon the size of the window) and mags (the magnitudes/amplitudes of those frequencies). The FFT UGens manipulate either the bins or the mags.
Fast Fourier Transform examples
MagAbove
Passes only bins whose magnitude is above a given threshold.
BrickWall
Clears bins above or below a cutoff point (works as a lowpass or highpass filter).
RectComb
Generates a series of gaps in a spectrum
RectComb - controllable with the mouse
MagFreeze
Freezes magnitudes at current levels when freeze > 0
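For example (assuming one of SuperCollider's bundled sounds):

(
b = Buffer.read(s, Platform.resourceDir +/+ "sounds/a11wlk01.wav");
)
(
{
    var chain = FFT(LocalBuf(2048), PlayBuf.ar(1, b, loop: 1));
    chain = PV_MagFreeze(chain, MouseX.kr > 0.5); // freeze when the mouse is on the right half
    IFFT(chain) ! 2;
}.play;
)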
CopyPhase
Combines magnitudes of first input and phases of the second input.
Magnitude smear
Average a bin’s magnitude with its neighbours.
Morph
Morphs between two buffers.
XFade
Interpolates bins between two buffers.
Softwipe
Copies low bins from one input and the high bins of the other.
MagMinus
Subtracting spectral energy - Subtracts buffer B’s magnitudes from buffer A.
Language manipulation of bins
The PV_ UGens are black boxes: we can read their helpfiles, but we don't see clearly what they do unless we look at their C++ source code. But what if we want to manipulate the bins on the language side?
The pvcollect method (phase vocoder collect) in SuperCollider allows this, so instead of:
as we looked at above, we can now do:
We do this through pvcollect (see the collect method in the Collection helpfile). pvcollect processes each bin of an FFT chain separately (see the pvcollect helpfile): it takes a function, and inside this function we can have fun with the magnitude and the phase of the signal (as taken into the frequency domain).
We have the magnitude, phase and index to play with, and the function returns an array of [mag, phase]. We can then use all kinds of algorithms to play with the mag and the phase, for example using the index as a parameter in the calculations.
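A sketch of the idea, scaling bin magnitudes by an arbitrary function of their index (reusing buffer b from above):

(
{
    var in, chain;
    in = PlayBuf.ar(1, b, loop: 1);
    chain = FFT(LocalBuf(1024), in);
    chain = chain.pvcollect(1024, { |mag, phase, index|
        [mag * (index % 7 / 7), phase]; // scale each magnitude by a pattern over the bin index
    }, frombin: 0, tobin: 250, zeroothers: 1);
    IFFT(chain) ! 2;
}.play;
)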
Spectral delay - here we use a DelayN UGen to delay the bins according to MouseX location
Another type of spectral delay, where the high frequencies get longer delay times; this is the trick:
Yet another spectral delay, where each bin gets a random delay time
Spectral delay where the delaytimes are modulated by an oscillator
Amplitude controlled with MouseX and phase manipulation with MouseY
Here we add noise to the phase
Square the magnitude and put a random phase (from 0 to pi (3.14))
Here we use the index, subtracting from it an LFPar on a slow sweep
Chapter 14 - Busses, Nodes, Groups and Signalflow
The SuperCollider Server is an extremely well designed application which allows us to structure nodes on busses and add effects before or after, just like we would do on a well designed hardware mixer. This chapter will explore the ins and outs of the Server.
Busses in SC (Audio and Control Busses)
What are Busses? They are virtual placeholders of signals. A good description is to be found in the Server-Architecture helpfile:
Audio Buses
Synths send audio signals to each other via a single global array of audio buses. Audio buses are indexed by integers beginning with zero. Using buses rather than connecting synths to each other directly allows synths to connect themselves to the community of other synths without having to know anything about them specifically. The lowest numbered buses get written to the audio hardware outputs. Immediately following the output buses are the input buses, read from the audio hardware inputs. The number of bus channels defined as inputs and outputs do not have to match that of the hardware.
Control Buses
Synths can send control signals to each other via a single global array of control buses.
Buses are indexed by integers beginning with zero.
If you look at the source file of ServerOptions, you will see that there are default numbers of audio and control busses assigned to the server on booting. You can of course change these values:
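For example (the default values may differ between SuperCollider versions):

s.options.numAudioBusChannels; // number of audio busses
s.options.numControlBusChannels; // number of control busses
s.options.numOutputBusChannels; // hardware outputs
s.options.numInputBusChannels; // hardware inputs
s.options.numAudioBusChannels = 256; // change before booting the server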
We see that we’ve got 128 audio busses and 4096 control busses. This should be more than enough in most cases, but if you need more you can:
a) question why you need more! Are you designing your program correctly?
b) change the number in the ServerOptions file and recompile.
We also see that by default we have 8 output and 8 input busses. This means that in this setting the audio bus with index 8 is actually the first input channel. Change this to fit your soundcard if needed.
Busses are not exactly the same as audio channels. Channels as we normally think of them are physical channels as in a mixer or a sound card, but a Bus is rather like an abstract representation of a channel. Thus a bus can be mono or stereo or even 5 channel, depending on your needs.
Audio Busses
Audio busses run at audio rate (e.g., 44.1 kHz).
Here below is some code that shows how the busses work. The figure in chapter 2 can be helpful here, although it is simple.
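A sketch (the SynthDef names are hypothetical): a source synth writes a saw wave to a private audio bus, and an effect synth reads from it:

(
b = Bus.audio(s, 1); // allocate a private mono audio bus
SynthDef(\src, { |out = 0| Out.ar(out, Saw.ar(220, 0.2)) }).add;
SynthDef(\fx, { |in = 0| Out.ar(0, CombC.ar(In.ar(in, 1), 0.3, 0.3, 2) ! 2) }).add;
)
(
x = Synth(\src, [\out, b]);
y = Synth(\fx, [\in, b], addAction: \addToTail); // the reader must come after the writer
)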
Control Busses
Here signals run at control rate (e.g., 689 times per second, i.e., the sample rate divided by the block size of 64 samples).
A control bus can be mapped to control values in many synths. Let’s make a control synth that maps the freq value of the synth above.
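A self-contained sketch (the \sine SynthDef here is a stand-in for the synth above):

(
SynthDef(\sine, { |freq = 440, amp = 0.2| Out.ar(0, SinOsc.ar(freq, 0, amp) ! 2) }).add;
c = Bus.control(s, 1);
)
(
x = Synth(\sine);
x.map(\freq, c); // the synth's freq now reads from the control bus
y = { Out.kr(c, SinOsc.kr(0.5).range(300, 600)) }.play; // a control synth writing to the bus
)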
This way you can really plug synths into each other just like on an old fashioned modular synth. For a different take on modular coding, check the JIT lib, the use of ProxySpace and Ndefs.
Nodes
We have already been using nodes in this tutorial. Creating a synth like this:
a = Synth(\bustest);
is creating a node. We can then set the frequency of the node
a.set(\freq, 880);
or just free it:
a.free;
Nodes live on busses. The bus can be seen as a mythic monster, with a head facing up and a tail facing downwards, that eats audio. This monster (the bus) can take audio in from one bus and output it into another bus (the SynthDef handles that). The audio runs from the head to the tail. You can put your synths at the top of the monster (where the sound will run down through the nodes below) or at the tail (where they receive the signal that has run through the nodes above).
When you start SC there is a default group that receives all nodes
s.queryAllNodes; // Note the RootNode (ID 0) and the default Group (ID 1)
By default synths are added to the HEAD of a group (in this instance the default group)
So in the following program you don’t hear anything (but see the 2 synths on the server window)
But in the example below you will hear sound, because the sound synth is put onto the head AFTER the listener (In) and therefore sits above it:
Or better: be specific and simply add the In listener to the tail of the default group, and we hear:
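A compact version of the idea using function-play (bus 10 follows this chapter's examples; depending on your I/O settings it may fall among the input busses):

x = { Out.ar(10, Saw.ar(220, 0.2)) }.play; // writes to bus 10 at the head
y = { Out.ar(0, In.ar(10, 1) ! 2) }.play(addAction: \addToTail); // listens at the tail, so we hear it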
This is the meaning of \addToHead, \addToTail, \addAbove, and \addBelow.
And if we keep these synths running we can see that they have been added to the Group (default)
Here is a practical example using a reverb and a delay for a snare
And we could add a synth AFTER the delay:
Or we add it BEFORE the delay:
Groups
Groups can be useful when you are making complex things and want to group certain things together. You can think of it like grouping in Photoshop (i.e., making a group that you can move around without having to move every line). For a good explanation of groups, check Mark Polishook's tutorial, which can be found in the SuperCollider distribution.
Group example (check the Group and Node helpfiles for more)
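A minimal sketch, assuming the \sine SynthDef from above:

(
g = Group.new;
4.do { |i| Synth(\sine, [\freq, 220 * (i + 1), \amp, 0.1], g) };
)
g.set(\amp, 0.02); // set all nodes in the group at once
g.free; // and free them all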
And a few synths that respond to the freq argument, but multiply it differently:
Here we could try to listen to bus 10, but it’s added to the head
We see that we now have 5 synths in a Group (called g)
Part IV
Chapter 15 - Musical Patterns on SC Server
The SC Server is a highly streamlined, small and functional piece of software. It does not have the whole of the SuperCollider language to do timing, data flow, etc., but it does have unit generators that can do much of the same.
Stepper and Select
The stepper is a pulse counter that outputs a signal.
A scale of frequencies from 500 to 1600 in steps of 100 (as the counter is multiplied by 100):
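Something like:

{ SinOsc.ar(Stepper.ar(Impulse.ar(6), 0, 5, 16, 1) * 100, 0, 0.2) ! 2 }.play;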
And here the steps are -3 so there are more interesting step sequences
We poll the Stepper to see the output.
And here we use Lag (generating a line from the current value to the next in specified time) for the frequency.
NOTE: the lag time is the reciprocal of the Impulse frequency: the impulse happens 6 times per second, i.e., every 0.16666666666667 seconds. If you check the reciprocal of 6, you get that number. In this case it doesn't matter whether we use 0.16666666666667 or 6.reciprocal, but if the Impulse frequency is in a variable, the reciprocal can be useful, as in:
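(
{
    var rate = 6;
    var freq = Lag.kr(Stepper.kr(Impulse.kr(rate), 0, 4, 16, 3) * 100, rate.reciprocal);
    SinOsc.ar(freq, 0, 0.2) ! 2;
}.play;
)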
Select
Select and Stepper together
Here we use the Stepper to do what the LFSaw did above; it is just stepping through the pitchArray rather than generating the pitches as in the Stepper examples above.
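A sketch with a hypothetical pitchArray:

(
{
    var pitchArray = [60, 62, 64, 67, 69, 72].midicps;
    var index = Stepper.kr(Impulse.kr(4), 0, 0, pitchArray.size - 1, 1);
    SinOsc.ar(Select.kr(index, pitchArray), 0, 0.2) ! 2;
}.play;
)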
PulseCount and PulseDivider
We could also use PulseCount to get at the items of the array
PulseDivider is also an interesting UGen: it outputs an impulse when it has received a certain number of impulses.
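For example, an accent on every fourth impulse:

(
{
    var imp = Impulse.ar(8);
    var div = PulseDivider.ar(imp, 4); // one output impulse per 4 input impulses
    (Ringz.ar(imp, 2000, 0.05, 0.1) + Ringz.ar(div, 400, 0.2, 0.3)) ! 2;
}.play;
)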
Here we use it to create a drummer in one synth definition (quite primitive, and just for fun, but look at the CPU):
Demand UGens
In chapter 2 we saw how we could use Patterns to control the server. Patterns are language side streams used to control the server. The Demand UGens are server side and don’t need the SC language. So you could use this from languages like Python, Java, etc.
The Demand UGens follow the logic of the Pattern classes of the SCLang. We will look further at Patterns in the next chapter.
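For example, a server-side sequence with Demand and Dseq:

(
{
    var freq = Demand.kr(Impulse.kr(6), 0, Dseq([400, 500, 600, 800], inf));
    SinOsc.ar(freq, 0, 0.2) ! 2;
}.play;
)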
Using LFSaw instead of a SinOsc
Using LFTri and now we use the mouse to control the mul and add of the Freq osc.
There are useful UGens like Dseq and Drand (compare with Pseq and Prand)
Dseries
Dgeom
The Dbrown and Dibrown UGens
These UGens are good for random walk (drunken walk)
Dwhite is white noise - not drunk anymore but jumping around madly
Using TDuty to demand results from demand rate UGens
Chapter 16 - Musical Patterns in the SCLang
Throughout this tutorial we have been creating synthesizers, effects, routing them through busses, putting them into groups and more, but for many the question is how to make musical patterns or arrange events in time. For this we need some kind of a representation of the events, for example stored in an array, or generated algorithmically on the fly. Chapter 3 introduced some basic ways of controlling synths, but in this section we will explore in a bit more detail how to arrange musical events in time.
The SynthDefs
For now we’ll use two synth definitions.
Routines and Tasks
We have already explored how to play a melody using a Task and a Routine (check the documentation for each, but in short a Task is a Routine that can be paused).
Function has a method called "fork" which will turn the function into a Routine (a co-routine; some might think of it as a "thread", although technically it is not). This allows a process to run independently of what is happening elsewhere in the program.
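For example:

(
{
    5.do { |i|
        ("iteration:" + i).postln;
        0.5.wait;
    };
}.fork;
)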
This could also be written as:
Or unpacked:
So with a little melody stored in an array we could play it repeatedly:
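Something like this, using the default instrument (swap in your own SynthDef):

(
{
    loop {
        [60, 62, 64, 67, 69, 72].do { |note|
            var syn = Synth(\default, [\freq, note.midicps]);
            0.4.wait;
            syn.release; // release the gated envelope of the default synth
            0.1.wait;
        };
    };
}.fork;
)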
The "fork" runs a routine, and the routine is played by SuperCollider's default TempoClock.
If you keep that code running and then evaluate:
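presumably something like:

TempoClock.default.tempo = 2; // double the tempo of the default clock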
You will see how the tempo changes, as the 0.5.wait in the Routine is half a beat of the tempo clock that has now changed its tempo.
Clocks in SuperCollider
All temporal tasks in SuperCollider are run by one of the language's clocks. There are three clocks in SuperCollider: SystemClock, AppClock and TempoClock.
Routines, Tasks and Patterns can all be run by these three different clocks. You pass the clock as an argument to them.
SystemClock
Let’s have a quick look at the SystemClock:
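For example:

SystemClock.sched(2, { "two seconds later".postln; nil }); // nil means: don't reschedule
SystemClock.sched(1, { "again".postln; 1 }); // returning a number reschedules after that many seconds
SystemClock.clear; // remove all scheduled events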
AppClock
The AppClock works pretty much the same but uses different source clocks (MacOS’s NSTimers).
You could try to create a GUI which is updated by a clock.
You will get an error message that could become familiar:
“Operation cannot be called from this Process. Try using AppClock instead of SystemClock.”
You can also get this done by “deferring” the command to the AppClock using .defer.
So here we are using the SystemClock to play the \sine synth, but deferring the updating of the GUI to the AppClock.
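A sketch of that idea; here an event on the default instrument stands in for the \sine synth, and the GUI update is deferred:

(
var win = Window("clock", Rect(100, 100, 200, 60)).front;
var text = StaticText(win, Rect(10, 10, 180, 40)).string_("0");
Routine({
    10.do { |i|
        (freq: 300 + (i * 50)).play; // play a note from the SystemClock thread
        { text.string = i.asString }.defer; // GUI updates must happen on the AppClock
        1.wait;
    };
}).play(SystemClock);
)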
TempoClock
TempoClocks are typically used for musical tasks. You can run many tempo clocks at the same time, at different tempi or in different meters. TempoClocks are ideal for high-priority scheduling of musical events; if there is a need for external communication, such as MIDI, GUI or serial communication, the trick is to defer that message with a "{}.defer".
Let’s explore the tempo clock:
Many people think in BPM (beats per minute) and typically set the argument of the tempo clock as "120/60" (where 120 bpm equals 2 beats per second), or "60/60" (which is 1 beat per second, SuperCollider's default tempo).
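For example:

t = TempoClock.new(120/60); // a clock running at 120 bpm (2 beats per second)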
The clock above is now in a variable “t” and we can use it to schedule events (at a particular beat in the future):
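For instance:

t.schedAbs(t.beats.ceil, { |beat| ("beat:" + beat).postln; 1 }); // post every beat (return 1 to reschedule)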
And we can change the tempo:
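t.tempo = 3; // now three beats per second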
Polyrhythm of 5/4 against 4/4
Or perhaps a polyrhythm of 5/4 against 4/4 where the bass line is in 4/4 and the high synth in 5/4.
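A sketch on a single clock, where the bass plays four beats per bar and the high synth five notes in the same time span:

(
t = TempoClock(2);
Routine({ loop { (freq: 110, sustain: 0.2).play; 1.wait } }).play(t); // 4/4: one note per beat
Routine({ loop { (freq: 880, sustain: 0.1).play; 0.8.wait } }).play(t); // 5/4: five notes per 4 beats
)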
Another version
We can try to make this a bit more interesting by creating another synth:
And play the same polyrhythm.
A survey of Patterns
We can try to play the above synth definitions with Patterns and it will play using the default arguments of patterns (see the Event source file). Let’s start by exploring the Pbind pattern. As we saw in chapter 3, if you run the code below:
You can hear that there are default arguments: a note is played every second, an instrument is used (SuperCollider's \default) and a frequency (440 Hz).
In the example below, we use Pbind (Pattern that binds keys (synth def arguments) and their arguments). Here we pass the \sine synth def as the argument for the \instrument (again as defined in the Event class).
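A sketch of such a pattern, with a hypothetical self-freeing \sine SynthDef (note the envdur argument, discussed below):

(
SynthDef(\sine, { |freq = 440, amp = 0.2, envdur = 0.5|
    var env = EnvGen.ar(Env.perc(0.01, envdur, amp), doneAction: 2);
    Out.ar(0, SinOsc.ar(freq) * env ! 2);
}).add;
)
(
Pbind(
    \instrument, \sine,
    \freq, Pseq([440, 550, 660, 880], inf),
    \dur, 0.25
).play;
)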
Our \sine synth has a frequency argument, and we are setting the frequency directly. However, we could also send 'note' or 'midinote' arguments, and the values would be converted internally to the \freq argument of \sine.
Pattern definitions (Pdef) are a handy way to define and play patterns. They are a bit like synth definitions in that they have a unique name and can be redefined on the fly.
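For example:

(
Pdef(\melody, Pbind(\instrument, \sine, \freq, Prand([300, 400, 500, 600], inf), \dur, 0.25));
Pdef(\melody).play;
)
// redefine while it plays:
Pdef(\melody, Pbind(\instrument, \sine, \freq, Pseq([300, 600], inf), \dur, 0.125));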
Then we can set variables in our instrument using .set
Patterns use default keywords defined in the Event class, so take care not to use those keywords in your synth definitions. If we had used dur instead of envdur for the envelope in our instrument, this would happen:
because dur is a keyword of Patterns (the main ones are \dur, \freq, \amp, \out, \midinote)
Resetting the freq info is not possible, however:
One solution would be to resubmit the Pattern Definition:
Patterns and environmental variables
We could also use Pdefn (read the helpfiles to compare Pdef and Pdefn)
(here we are using environment variables to refer to patterns)
// quit SuperCollider, open it again and now try this
p = Pbind(\degree, Pseq([0, 4, 4, 2, 8, 3, 2, 0]), \dur, 0.5);
q = Pfx(p, \testenv, \dur, 4); // not working (sine env is 2 secs, the synthdef default)
q.play
// but here is the trick, read the SynthDescLib and try again!
SynthDescLib.global.read;
q = Pfx(p, \testenv, \dur, 4); // now working
q.play
// rendering the pattern as soundfile to disk (it will be written to your SuperCollider folder)
But to have each pattern play on a different TempoClock, you need to create two clocks and use them to drive the patterns (this way one can do some nice phasing/polyrhythmic stuff).
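For example:

(
var c1 = TempoClock(2), c2 = TempoClock(2.05); // two clocks with slightly different tempi
Pbind(\degree, Pseq([0, 2, 4, 7], inf), \dur, 0.5).play(c1);
Pbind(\degree, Pseq([0, 2, 4, 7], inf), \dur, 0.5).play(c2); // drifts slowly out of phase
)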
It is hard to hear this clearly as they are running the same pitch patterns, so let's redefine one of the patterns:
Popcorn
An example of making a tune using patterns. For an excellent example take a look at spacelab, in examples/pieces/spacelab.scd
Mozart
A little transcription of Mozart’s Piano Sonata No 16 in C major. Here the instrument has been put into a variable called “instr” so it’s easier to quickly change the instrument.
Syncing Patterns and TempoClocks
Chapter 17 - JIT lib and ProxySpace
JIT lib, or the Just In Time library, is a system that allows people to write UGen graphs (signal processing on the SC server) and rewrite them in real time. This is ideal for live coding, teaching, experimenting and all kinds of compositional work.
ProxySpace
In order to use the JIT lib, you create a ProxySpace, which becomes the environment or reference space for the synths that live in it.
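A minimal sketch:

p = ProxySpace.push(s); // ~ variables now refer to node proxies
~sound.play; // monitor the (still silent) proxy
~sound = { SinOsc.ar([330, 331], 0, 0.2) };
~sound = { Saw.ar([220, 221], 0.2) }; // rewrite it in real time; it crossfades
p.pop; // leave the proxy space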
Ndef
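Ndefs work similarly, without having to push an environment:

Ndef(\a, { SinOsc.ar([440, 441], 0, 0.2) }).play;
Ndef(\a).fadeTime = 2; // crossfade time for redefinitions
Ndef(\a, { LFSaw.ar([220, 223], 0, 0.1) }); // redefine while playing
Ndef(\a).free;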
Tdef
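And Tdefs are the task equivalent, allowing a running task to be redefined on the fly:

Tdef(\t, { loop { "tick".postln; 0.5.wait } }).play;
Tdef(\t, { loop { "tock".postln; 0.25.wait } }); // replaces the body while it runs
Tdef(\t).stop;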
Chapter 18 - Tuning Systems and Scales
In this chapter we will look at how we can explore tuning systems, scales and microtonal composition using algorithmic means to generate tunings and scales.
The SynthDefs
For this chapter we want as pure a waveform as possible, so we can hear the ratios between the notes.
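A sketch of such a SynthDef, used in the examples below under the name \pure:

(
SynthDef(\pure, { |freq = 440, amp = 0.3|
    Out.ar(0, SinOsc.ar(freq, 0, amp) * EnvGen.kr(Env.perc(0.01, 1), doneAction: 2) ! 2);
}).add;
)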
Tuning systems are generally called "temperaments". There are many different temperaments, but since the invention of the piano, equal temperament has become the most common temperament and is typically used in computer music software.
For a bibliography and further information on scales and tunings, visit:
http://www.huygens-fokker.org/scala/
The scales can be found here:
http://www.huygens-fokker.org/docs/scales.zip
A good source for microtonal theory is the Tonalsoft Encyclopedia of Microtonal Music Theory:
http://tonalsoft.com/enc/
// NOTE: Tuning systems are not scales. We can have scales in different tuning systems.
Equal Temperament
Equal temperament is the most common tuning system in Western music. The octave is divided logarithmically into a series of equal steps, most commonly the twelve-tone octave. Other systems are also used, such as nineteen-tone equal temperament (19-TET) or thirty-one-tone equal temperament (31-TET).
Indian and Arabic music often use twenty-four-tone equal temperament (24-TET), although the instruments are frequently tuned using just intonation. Javanese gamelan music is mainly tuned in 5-TET.
About the cent:
The cent is a logarithmic unit (of equal steps) where 1200 cents represent an octave.
In a 12-TET system, the semitone (between two adjacent keys on a keyboard) is 100 cents.
The formula for pitch in twelve-tone equal temperament is exponential; step n above the fundamental is:
fundFreq * 2.pow(n/12);
so each semitone step multiplies the frequency by 2.pow(1/12), or roughly 1.05946309.
For Equal temperament of 12 notes in an octave these are the values we multiply the fundamental key with:
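We could generate them like this, here applied to a fundamental of A = 440 Hz (the variable name matches the ~freqlist used below):

~freqlist = (0..12).collect({ |n| 440 * 2.pow(n/12) });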
NOTE (SC LANG): If you are wondering about the ~freqlist.do compare this:
To this:
Just Intonation
Just intonation is a very natural system, frequently used by vocalists or instrumentalists who can easily adjust the pitch. Instruments tuned in just intonation have to be retuned in order to play in a different key. This is the case with the harpsichord, for example.
Just intonation is a method of tuning intervals based exclusively on ratios of whole numbers. It is based on the intervals of the harmonic series. Depending on context, the ratio might differ for the same note (e.g., 9/8 and 10/9 for the major second). Any interval tuned as a ratio of whole numbers is a just interval, but usually only ratios with small numbers are used.
Examples of intervals:
2/1 = octave
3/2 = fifth
4/3 = fourth
5/4 = major third
6/5 = minor third
Many composers (e.g., La Monte Young and Terry Riley) prefer to compose for instruments tuned in just intonation.
A major scale
~justIntFreqlist8 = [1, 9/8, 5/4, 4/3, 3/2, 5/3, 15/8];
A whole 12 note octave
And we put in a fundamental note (A)
Let’s play the scale:
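A sketch using the ratio list from above and the \pure SynthDef:

(
Task({
    ~justIntFreqlist8.do({ |ratio|
        Synth(\pure, [\freq, 440 * ratio]);
        0.3.wait;
    });
}).start;
)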
Pythagorean tuning
Pythagorean tuning is attributed to the Greek philosopher Pythagoras, in the 6th century BC. He was interested in harmony, geometry and beans. Pythagorean tuning is based on perfect fifths, fourths and octaves.
Now let’s compare Equal Temperament to the Pythagorean tuning.
First we make the equal temperament scale array
Scales
Scales are usually, but not necessarily, designed to span an octave, repeating in every octave. There are countless scales with different note counts; the most common in Western music is the diatonic scale. Other common scales (defined by note count) are the chromatic (12 notes), whole tone (6 notes), pentatonic (5 notes) and octatonic (8 notes).
A dictionary of Scales
James McCartney wrote this dictionary of scales. (they are MIDI notes - no microtones and all are equal tempered)
The Scala Library
For a proper exploration of scales we will use the Scala project and the SCL class written in SuperCollider to use the Scala files.
The Scale Archive can be found here (with over 3000 scales):
http://www.huygens-fokker.org/docs/scales.zip
And a SuperCollider class that interfaces with the archive can be found here (XiiScala.sc)
https://github.com/thormagnusson/TuningTheory
Note that you have to provide the path to where you installed your Scala library, for example "~/scwork/scl/".
a = XiiScala("bohlen-p_9");
a.tuning.octaveRatio
a.degrees
a.semitones
a.pitchesPerOctave
z = a.degrees.mirror;
(
Task({
z.do({ arg ratio, i; // first arg is the item in the list, next arg is the index (i)
Synth(\pure, [\freq, 440*ratio]);
0.3.wait;
});
}).start;
)
(
x = SCL.new("cairo.scl".standardizePath, 440);
z = x.getRatios.mirror;
Task({
z.do({ arg ratio, i; // first arg is the item in the list, next arg is the index (i)
Synth(\pure, [\freq, 440*ratio]);
0.3.wait;
});
}).start;
)
(
x = SCL.new("kayolonian_s.scl".standardizePath, 440);
z = x.getRatios.mirror;
Task({
z.do({ arg ratio, i; // first arg is the item in the list, next arg is the index (i)
Synth(\pure, [\freq, 440*ratio]);
0.3.wait;
});
}).start;
)
Using Samples
We can of course control the pitch of sampled sounds too; here the playback rate will control the pitch of the sample.
First we load a sound file with a simple tone (replace this sound with your own):
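For example, one of the sounds that ship with SuperCollider:

b = Buffer.read(s, Platform.resourceDir +/+ "sounds/a11wlk01.wav");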
The pythagorean scale:
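A sketch, using the standard Pythagorean diatonic ratios to set the playback rate:

(
~pythagoreanRatios = [1, 9/8, 81/64, 4/3, 3/2, 27/16, 243/128, 2];
Task({
    ~pythagoreanRatios.do({ |ratio|
        { PlayBuf.ar(1, b, ratio, doneAction: 2) ! 2 * 0.5 }.play;
        0.4.wait;
    });
}).start;
)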
The Scale and Tuning Classes
SuperCollider comes with Scale and Tuning classes. They encapsulate and simplify the things we have done above into easy-to-use methods.
An example - we choose a minor scale:
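For example:

a = Scale.minor;
a.degrees; // the scale degrees in semitone steps
a.semitones; // the same as semitone values
a.ratios; // the frequency ratios of the degrees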
Check the scale directory
And the available tunings
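These are simply:

Scale.directory; // post all available scales
Tuning.directory; // post all available tunings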
Part V
Chapter 19 - Creating Classes
In object-oriented programming, classes serve as the blueprint for objects: the genotype information that results in instances, the phenotypes. Like a recipe for cookies. This can be extremely useful when creating data structures that have properties (parameters or variables) and behaviours (methods or functions).
For further information on what a class is, a good start is Wikipedia:
// run this line in SuperCollider
"open 'http://en.wikipedia.org/wiki/Class_(computer_science)'".unixCmd;
Creating Classes
A good introduction to writing classes is in the help documentation:
"Writing-Classes".openHelpFile
So below we have two TestClasses, one that subclasses the first one (like the guitar is a subclass of a string instrument).
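A minimal sketch of what the two classes could look like (the contents here are hypothetical):

TestClass {
    var <>nr; // <> creates both a getter and a setter for this variable
    *new { arg nr = 0;
        ^super.newCopyArgs(nr)
    }
    addnrSet { arg n; // a method that sets the variable
        nr = n;
    }
}

// the subclass inherits nr and its methods
TestClass2 : TestClass {
    postNr {
        ("the number is:" + nr).postln;
    }
}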
Save both classes in a document that could be called "TestClass.sc" and make sure it is saved in the class path of SuperCollider (where it is compiled on recompilation or startup). On the Mac, the location for third-party classes is ~/Library/Application Support/SuperCollider/Extensions.
When the classes have been saved and compiled we can now test the class:
You can see that the < and > symbols in the class are so-called getters (<) and setters (>), which, when specified in front of an instance variable, generate accessor methods so that they can serve instead of methods that set the properties. Therefore you can equally write
v.nr = 44; // using the setter
and
v.addnrSet(44); // using a method
Another test class
And here is some code to try the class
Chapter 20 - Functional Programming
SuperCollider is an object-oriented programming (OOP) language inspired by Smalltalk. In Smalltalk (and unlike languages such as C++ or Java) everything is an object, so it is possible to create methods for and subclass practically all data structures. Thus we find in SuperCollider methods like .toLower for a string or a char ("HEY".toLower and $A.toLower) or .neg and .cos for a number (10.neg and 2.cos). Here the actual number is an object that has methods (e.g., .neg).
We could for example create a .double method for SimpleNumber. We’d simply open the SimpleNumber class and create a method like this:
double {
^this * 2
}
But don’t do that. If you are creating your own methods and classes, keep them in your own Extensions folder, so the next time you update SuperCollider, you still have your classes and methods at hand.
However, there is another way of thinking, different from OOP: a bit like the difference between Plato (reality is static ideal types) and Heraclitus or Buddha (reality is a flow). Let's explore functional programming.
SuperCollider is also a functional programming language. You can program solely using the functional paradigm and we will look at the FP classes below.
Functional Programming
In functional programming (FP) the idea is not to create classes and instantiate objects that exist in the computer’s memory, responding to messages and calls. It’s not a world of things, but a world of movement, behaviour, events.
So it's not "John.drives(work)" but rather "drives - work - John". The function is to drive, and it's John who is going to work.
So the doubling of the number above would be a function:
~double = { arg num; num * 2 }
In short, the idea is to avoid state and mutable objects, as those are typically the source of bugs and errors (often called side effects) in imperative and object-oriented programming languages.
Functions as first class citizens
In a functional programming language it is important to be able to pass functions into other functions as arguments; if that is possible, the language supports "functions as first-class citizens":
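For example:

~square = { arg num; num * num };
~double = { arg num; num * 2 };
~double.value(~square.value(3)); // 18: the result of ~square is passed into ~double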
Above, the result of the function ~square was passed to the function ~double
Recursion
Iteration or looping is typically done with recursion in functional programming. So instead of the C and Java style:
Or normal SuperCollider:
The Scheme function for the above would be
And a Python version could be written with some explanations like this:
The SuperCollider recursion would be something like this:
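a factorial sketch, for instance:

f = { arg n; if(n <= 1) { 1 } { n * f.value(n - 1) } }; // the function calls itself
f.value(5); // 120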
Chapter 21 - Live Coding
Live coding needs no introduction, but in summary it comes with an imperative that performers project their screens, such that the audience is able to participate in the musical creation. Some people argue that this should be done from a clean slate, where the code is designed in real time. Others use prewritten code and change parameters on the fly (these are often called "CJs", or code jockeys). A dedicated forum for practitioners exists, called Toplap, and various papers have been written on live coding, with MIT Press publishing a handbook on the topic in 2021.
A typical problem for the live coder is the high level of expertise required for such a performance. Very few performers are able to exhibit those skills without consistent and dedicated practice.
The level of abstraction therefore becomes important in live coding. Are we coding in C/C++ or at higher levels? Here, languages such as Tidal, Sonic Pi and ixi lang have proposed solutions for more real-time environments.
Chapter 22 - Other clients
Other sc-synth clients than sclang include Tidal (written in Haskell), Sonic Pi (written in Ruby), ixi lang (written in sclang itself), and many others.
Creating a client is very easy: it involves sending OSC messages from your language of choice to the sc-synth. Below is the totality of the commands you need to create a fully functional sc-synth client:
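For instance, here is raw OSC sent from sclang itself, just as any other client would send it (assuming a server running on the default port 57110):

n = NetAddr("127.0.0.1", 57110);
n.sendMsg("/s_new", "default", 2000, 0, 1, "freq", 440); // create a synth node with ID 2000
n.sendMsg("/n_set", 2000, "freq", 880); // change its frequency
n.sendMsg("/n_free", 2000); // free the node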
/notify - Register to receive notifications from server
/status - Query the status. Replies to sender with the following message:
/status.reply
/dumpOSC - Display incoming OSC messages.
/sync - Notify when async commands have completed.
/clearSched - Clear all scheduled bundles. Removes all bundles from the scheduling queue.
/error - Enable/disable error message posting.
/version - Query the SuperCollider version.
/d_recv - Receive a synth definition file.
/d_load - Load synth definition.
/d_loadDir - Load a directory of synth definitions.
/d_free - Delete synth definition.
/n_free - Delete a node.
/n_run - Turn node on or off.
/n_set - Set a node’s control value(s).
/n_setn - Set ranges of a node’s control value(s).
/n_fill - Fill ranges of a node’s control value(s).
/n_map - Map a node’s controls to read from a bus.
/n_mapn - Map a node’s controls to read from buses.
/n_mapa - Map a node’s controls to read from an audio bus.
/n_mapan - Map a node’s controls to read from audio buses.
/n_before - Place a node before another.
/n_after - Place a node after another.
/n_query - Get info about a node.
/n_trace - Trace a node.
/n_order - Move and order a list of nodes.
/s_new - Create a new synth.
/s_get - Get control value(s).
/s_getn - Get ranges of control value(s).
/s_noid - Auto-reassign synth’s ID to a reserved value.
/g_new - Create a new group.
/p_new - Create a new parallel group.
/g_head - Add node to head of group.
/g_tail - Add node to tail of group.
/g_freeAll - Delete all nodes in a group.
/g_deepFree - Free all synths in this group and all its sub-groups.
/g_dumpTree - Post a representation of this group’s node subtree.
/g_queryTree - Get a representation of this group’s node subtree.
/u_cmd - Send a command to a unit generator.
/b_alloc - Allocate buffer space.
/b_allocRead - Allocate buffer space and read a sound file.
/b_allocReadChannel - Allocate buffer space and read channels from a sound file.
/b_read - Read sound file data into an existing buffer.
/b_readChannel - Read sound file channel data into an existing buffer.
/b_write - Write sound file data.
/b_free - Free buffer data.
/b_zero - Zero sample data.
/b_set - Set sample value(s).
/b_setn - Set ranges of sample value(s).
/b_fill - Fill ranges of sample value(s).
/b_gen - Call a command to fill a buffer.
/b_close - Close soundfile.
/b_query - Get buffer info.
/b_get - Get sample value(s).
/b_getn - Get ranges of sample value(s).
/c_set - Set bus value(s).
/c_setn - Set ranges of bus value(s).
/c_fill - Fill ranges of bus value(s).
/c_get - Get bus value(s).
/c_getn - Get ranges of bus value(s).
/done - An asynchronous message has completed.
/fail - An error occurred.
/late - A command was received too late. not yet implemented
/n_go - A node was started. This command is sent to all registered clients when a node is created.
/n_end - A node ended. This command is sent to all registered clients when a node ends and is deallocated.
/n_off - A node was turned off. This command is sent to all registered clients when a node is turned off.
/n_on - A node was turned on. This command is sent to all registered clients when a node is turned on.
/n_move - A node was moved. This command is sent to all registered clients when a node is moved.
/n_info - Reply to /n_query. This command is sent to all registered clients in response to an /n_query command.
/tr - A trigger message.
There is much more to learn about this, so see the Server Command Reference file for that. This list is just provided here to show how the commands map to the constructs in the SuperCollider language that we have been learning in this book.
Chapter 23 - Twitter code
A musical miniature form: writing pieces for SuperCollider in under 280 characters, the current character limit of Twitter. Earlier Twitter pieces (#sctweets) were 140 characters, which was the limit until recently.
Of course, no sane SuperCollider user would write code this way. The Twitter constraint (of 280 chars) forces people to consider how code can be compressed as much as possible, for example writing 999 instead of 1000 (thus saving a char), or 9e10, which becomes 90000000000. This is not the way to learn how to write music in SuperCollider.