Week 6
6.1 What we will cover in Week 6
In this final week, the theory track will teach you about bodily metaphors in our experience of music and how these affect our experience of the emotional content of music. In the methods track we will take a closer look at biosensors. There is no terminology track this week, but remember to use the dictionary.
6.3 Music, Verticality and Bodily Metaphor
In this article we will look in more detail at how bodily metaphors shape our experience of music.
As we discussed in Studying music perception in the motion capture laboratory, most people have a strong sensation of physical direction when listening to short sounds that “move up” or “down”. This happens even though sound waves do not actually move up or down in physical space, so we are dealing with a cognitive phenomenon. The American musicologist Arnie Cox writes about this phenomenon in music:
Verticality is not inherent in music (let alone in its notational representation); it is not there to be observed (heard) in the music, but it is instead a product of logical, metaphoric conceptualization (1999:50, emphasis in original).
Or as Björn Vickhoff adds:
Although there are no obvious directions of melody movement, most listeners feel directions in music. When the melody is moving ‘upwards’ or ‘downwards’ you get a feeling of spatial direction (2008:52).
Such processes of conceptualization have been addressed by cognitive semantics. In Philosophy in the Flesh from 1999, linguist George Lakoff and philosopher Mark Johnson employ the concept of “primary metaphors” (as opposed to “complex metaphors”) to illustrate the basic connection that exists between abstract and literal expressions. Primary metaphors are metaphors that have been incorporated into our world-view so thoroughly that we no longer see them as metaphors. They are based on correlations between expressions and embodied experiences and are, according to Lakoff and Johnson, fundamental to all thinking regarding subjective experience and judgement:
We do not have a choice as to whether to acquire and use primary metaphor. Just by functioning normally in the world, we automatically and unconsciously acquire and use a vast number of such metaphors. Those metaphors are realized in our brains physically and are mostly beyond our control. They are a consequence of the nature of our brains, our bodies, and the world we inhabit (Lakoff & Johnson 1999:59, emphasis in original).
With reference to Christopher Johnson’s “theory of conflation” (Johnson 1999), Lakoff and Johnson then describe how primary metaphors are formed:
For young children, subjective (nonsensorimotor) experiences and judgments, on the one hand, and sensorimotor experiences, on the other, are so regularly conflated—undifferentiated in experience—that for a time children do not distinguish between the two when they occur together (Lakoff & Johnson 1999:46).
Lakoff and Johnson use the example of the subjective experience of affection and the sensory experience of warmth through being held (loc.cit.). Even when children eventually develop the ability to differentiate between them, they will preserve associations from one domain (the “source domain”) to the other (the “target domain”). Thus “affection” and “warmth” will be connected, and in relation to affective meaning, “warmth” may be used where no actual (literal) high temperature is present. Similarly, metaphors are linked to movements: when we use “falling” metaphorically in the phrase “falling asleep,” the downward movement is projected upon the transition from consciousness to unconsciousness. Yet we have not “fallen” anywhere.
Verticality underpins our understanding of music as well, even though the adverbs “up” and “down” and the adjectives “high” and “low” imply spatial orientations that do not physically exist in music. According to Lakoff and Johnson, such parallels
arise from the fact that we have bodies of the sort we have and that they function as they do in our physical environment (Lakoff & Johnson 1980:14).
Motor schemas and image schemas are parts of the conceptual structure we form through sensorimotor experience and visual perception. Bob Snyder describes schemas as
memory structures created by generalizations made across seemingly similar situations in the environment (2000:102).
These affect perception and shape actions. In the same way that we use image schemas as points of departure for producing images when we are told stories, we use motor schemas to form motor commands when we experience music. A motor schema related to tempo in music will support a correspondence between fast rhythms and rapid body movements; a motor schema related to verticality in music will encourage vertical movements in response to pitch. This latter motor schema has been shaped through our encounter with sources of verticality in music. Arnie Cox (1999:18f) refers to ten such sources that possess both literal and metaphoric features:
- verticality in staff notation,
- verticality in vocal experience,
- the propagation of sound waves,
- “higher” and “lower” frequencies,
- the “higher” and “lower” perceived loudness levels of high and low notes,
- the “higher” and “lower” amounts of air used for high and low notes,
- the “higher” and “lower” magnitudes of effort needed for high and low notes,
- the “higher” and “lower” degrees of tension in producing high and low notes,
- the association of “high” levels of emotional intensity and pitch at climaxes, and
- the metaphoric state-locations of tones in pitch space.
Of these ten sources of verticality, the first three are based on literal vertical relations, while the last seven are based on metaphoric verticality.
Some of these sources are based on the experience of singing or playing certain instruments and are blended with other metaphoric associations of “high” and “low,” especially greater or lesser quantities/magnitudes (“more = up, less = down”) (Lakoff & Johnson 1980:15). Others are mainly experienced bodily and can help us form motor schemas without triggering any explicit knowledge. In a culture where music is written (as notation) and actively learned, verticality in music likely arises from a mixture of rational and bodily knowledge. Still, many music lovers who cannot read notation nevertheless experience verticality in music. Much classical music from the Romantic period (Chopin, Grieg) reaches the emotional climax of a piece with ascending melodies, accelerandos (increasing tempo) and crescendos. Similarly, producers of dance music or groove-oriented popular music constantly confront the notion of “high” and “low”: a build-up in a typical house track very often moves gradually from “low” sounds to “high” sounds to reach a climax in the track (see Solberg 2014).
The sound systems in clubs are usually organized with separate subwoofers and tweeters that are situated vertically, so that the “low” sounds come from the speaker beneath the one that produces the “high” sounds. This vertical placement has little specific impact upon low frequencies, but high frequencies are generally more directional, so tweeters are often placed at ear height (see Rossing et al. 2002:chap. 24). The loud volume level in clubs also intensifies how sounds resonate in our body. Low-frequency sound waves are felt more strongly than high-frequency waves, most noticeably in boneless body regions like the abdomen, which sits below our ears (and eyes), thereby contributing to the physical realization of a “low” frequency.
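To make the crossover idea concrete, here is a minimal Python sketch of splitting a signal between a subwoofer and a tweeter. The 200 Hz crossover frequency, filter order and test tones are illustrative choices, not taken from any particular sound system.

```python
import numpy as np
from scipy.signal import butter, sosfilt

# A minimal sketch of a two-way crossover, as used (in analogue or DSP
# form) to split a club sound system's signal between subwoofer and
# tweeter. All numbers here are illustrative.

fs = 44100                      # sample rate in Hz
t = np.arange(fs) / fs          # one second of time stamps

# Test signal: a 60 Hz "bass drum" tone plus an 8 kHz "hi-hat" tone
signal = np.sin(2 * np.pi * 60 * t) + 0.3 * np.sin(2 * np.pi * 8000 * t)

low_sos = butter(4, 200, btype="lowpass", fs=fs, output="sos")
high_sos = butter(4, 200, btype="highpass", fs=fs, output="sos")

to_subwoofer = sosfilt(low_sos, signal)   # the "low" sounds, felt in the body
to_tweeter = sosfilt(high_sos, signal)    # the "high" sounds, at ear height

print(f"Subwoofer band RMS: {np.sqrt(np.mean(to_subwoofer**2)):.2f}")
print(f"Tweeter band RMS: {np.sqrt(np.mean(to_tweeter**2)):.2f}")
```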
Club-oriented dance music very often uses a basic pattern where a bass drum and a hi-hat alternate. This alternation of “low” and “high” is not as obviously “vertical” as a continuous pitch movement up or down, but in relation to a vertical movement pattern, the structural parallel is pivotal. The bass drum sound evokes the “low” position of verticality, while the hi-hat sound evokes the “high” position. The musical sounds may then act as a transducer of verticality information from music to spatial orientation: alternating “low” and “high” sounds are transduced into analogous up-and-down movements.
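As a concrete illustration of this transduction, here is a minimal Python sketch that maps alternating “low” and “high” sounds to vertical positions. The note numbers and pitch range are our own illustrative choices, not drawn from any particular study.

```python
# A minimal sketch of "verticality transduction": mapping pitch to a
# normalised vertical position, as a listener's body might.

def pitch_to_height(midi_note, low=36, high=84):
    """Map a MIDI note number to a vertical position between 0.0 and 1.0."""
    clamped = max(low, min(high, midi_note))
    return (clamped - low) / (high - low)

# An alternating kick/hi-hat pattern: C1 (36) for the bass drum,
# F#5 (78) standing in for the hi-hat.
pattern = [36, 78, 36, 78, 36, 78, 36, 78]

for beat, note in enumerate(pattern):
    print(f"beat {beat}: note {note} -> height {pitch_to_height(note):.2f}")
```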
References
- Cox, Arnie W. 1999. The Metaphoric Logic of Musical Motion and Space. Ph.D. Thesis, University of Oregon.
- Johnson, Christopher. 1999. Metaphor vs. Conflation in the Acquisition of Polysemy: The Case of See. In Cultural, Psychological and Typological Issues in Cognitive Linguistics: Selected Papers of the Bi-Annual ICLA Meeting in Albuquerque, July 1995, edited by M. K. Hiraga, C. Sinha and S. Wilcox. Amsterdam: John Benjamins: 155–169.
- Lakoff, George, and Mark Johnson. 1980. Metaphors We Live By. Chicago: University of Chicago Press.
- Lakoff, George, and Mark Johnson. 1999. Philosophy in the Flesh: The Embodied Mind and Its Challenge to Western Thought. New York: Basic Books.
- Rossing, Thomas D., F. Richard Moore, and Paul A. Wheeler. 2002. The Science of Sound. San Francisco: Addison Wesley.
- Snyder, Bob. 2000. Music and Memory: An Introduction. Cambridge, MA: The MIT Press.
- Solberg, Ragnhild T. 2014. Waiting for the Bass to Drop: Correlations Between Intense Emotional Experiences and Production Techniques in Build-up and Drop Sections of Electronic Dance Music. Dancecult: Journal of Electronic Dance Music Culture 6 (1): 61–82.
- Vickhoff, Björn. 2008. A Perspective Theory of Music Perception and Emotion. Ph.D. dissertation, University of Gothenburg.
6.8 Introduction to Biosensors
So far in the methods track of Music Moves we have only looked at motion sensors. This week we will explore how it is possible to measure biosignals, that is, activity within the body itself.
Sensors measuring biosignals are often also called physiological sensors. Most of these sensors share the same sensing principle, that of measuring electrical current in various parts of the body. But since the biosignals vary considerably in strength throughout the body, the sensors are optimised differently:
- Galvanic skin response (GSR) refers to changes in skin conductance, and is often measured on the fingers or in the palm of the hand. The GSR signal is highly correlated with emotional changes, and such sensors have been used to some extent in music research (see next video) as well as in music performance. A challenge with such signals is that they may not be entirely straightforward to interpret, and sweat may become an issue when the sensor is worn for longer periods of time.
- Electromyograms (EMG) are used to measure muscle activity, and are particularly effective on the arms of musicians to pick up information about hand and finger motion. A challenge with EMG is to place the sensor(s) in a way such that they pick up the muscle activity properly. Later in this activity you will see an example of how EMG sensors can be used for musical interaction.
- Electrocardiograms (EKG) measure the electrical pulses from the heart, and can be used to extract information about heart rate and heart rate variability. The latter has been shown to correlate with emotional state and has also been used in music research; a sketch of how heart rate variability can be computed from detected heartbeats follows this list.
- Electroencephalograms (EEG) are used to measure electrical pulses from the brain, using either a few sensors placed on the forehead or caps with numerous sensors. Because the brain signals are weak, such sensors need strong amplifiers and are therefore also susceptible to a lot of interference and noise. Nevertheless, such sensors have been applied in both music analysis and performance.
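As promised above, here is a minimal sketch of how heart rate and heart rate variability can be derived from an EKG once the individual heartbeats (R-peaks) have been detected. The beat times below are made up for illustration; a real pipeline would also need peak detection and artifact handling.

```python
import numpy as np

# A minimal sketch, not a clinical implementation: given the times (in
# seconds) of detected R-peaks in an EKG signal, compute the mean heart
# rate and RMSSD, a common time-domain measure of heart rate variability.

r_peaks = np.array([0.00, 0.82, 1.61, 2.45, 3.24, 4.10, 4.91, 5.78])

rr_intervals = np.diff(r_peaks)            # inter-beat intervals in seconds
heart_rate = 60.0 / rr_intervals.mean()    # beats per minute

# RMSSD: root mean square of successive differences between intervals
rmssd = np.sqrt(np.mean(np.diff(rr_intervals) ** 2))

print(f"Mean heart rate: {heart_rate:.1f} bpm")
print(f"RMSSD: {rmssd * 1000:.1f} ms")
```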
EEG is in many ways the first “step” towards brain imaging, which has also become more popular in music research in recent years. We will not cover this in Music Moves, but interested learners may find some useful links in the references below.
References
- Campbell, I. G. (2009). EEG Recording and Analysis for Sleep Research. In J. N. Crawley, C. R. Gerfen, M. A. Rogawski, D. R. Sibley, P. Skolnick, & S. Wray (Eds.), Current Protocols in Neuroscience. Hoboken, NJ, USA: John Wiley & Sons, Inc.
- Craig, D. G. (2005). An Exploratory Study of Physiological Changes during “Chills” Induced by Music. Musicae Scientiae, 9(2), 273–287.
- Eaton, J., Jin, W., & Miranda, E. (2014). The Space Between Us. A Live Performance with Musical Score Generated via Emotional Levels Measured in EEG of One Performer and an Audience Member. In Proceedings of the International Conference on New Interfaces for Musical Expression (pp. 593–596). London.
- Fan, Y.-Y., & Sciotto, M. (2013). BioSync: An Informed Participatory Interface for Audience Dynamics and Audiovisual Content Co-creation using Mobile PPG and EEG. In Proceedings of the International Conference on New Interfaces for Musical Expression (pp. 248–251). Daejeon, Republic of Korea: Graduate School of Culture Technology, KAIST.
- Nakra, T. M. (2000, February). Inside the Conductor’s Jacket: Analysis, Interpretation and Musical Synthesis of Expressive Gesture. Massachusetts Institute of Technology, Cambridge, Mass.
- Patel, S., Park, H., Bonato, P., Chan, L., & Rodgers, M. (2012). A review of wearable sensors and systems with application in rehabilitation. Journal of NeuroEngineering and Rehabilitation, 9(1), 21.
- Pérez, M. A. O., & Knapp, R. B. (2008). BioTools: A Biosignal Toolbox for Composers and Performers. In R. Kronland-Martinet, S. Ystad, & K. Jensen (Eds.), Computer Music Modeling and Retrieval. Sense of Sounds (pp. 441–452). Springer Berlin Heidelberg.
- Zimny, G. H., & Weidenfeller, E. W. (1963). Effects of music upon GSR and heart-rate. The American Journal of Psychology, 311–314.
6.11 New Interfaces for Musical Expression
As we learned in the previous video, the term NIME refers to “new interfaces for musical expression”. The term derives from the conference and community of the same name.
What practitioners in the NIME community have in common is the activity of designing, building, performing with, and evaluating new musical instruments and other types of musical interfaces.
Musical Instruments
It is difficult to come up with a clear definition of a musical instrument, particularly if one considers all sorts of musical instruments. One common denominator, however, is that an instrument functions as a mediator between action and sound.
As such, any pair of objects and actions that are used in music could be considered a musical instrument.
Digital Musical Instruments
Many new interfaces for musical expression are digital musical instruments (DMIs), involving the use of digital technology in the mediation between actions and sound. These can be seen as a subset of electronic musical instruments, a category that also includes analogue electronic instruments such as the famous Moog synthesizers.
A model capturing three essential parts of the term digital musical instrument is shown in the figure below. The controller is the physical device that the user interacts with. A controller in itself will not produce any sound; sound generation takes place in the sound engine, which may be a physical hardware unit or a piece of software. The mapping is a set of rules for how the sound engine should respond when the user interacts with the controller.
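The three-part model can also be expressed as a small code sketch. The function names below are our own stand-ins rather than part of any actual instrument or library; the point is simply that the mapping layer sits between controller data and sound engine parameters.

```python
# A toy illustration of the controller -> mapping -> sound engine model.

def controller_read():
    """Stand-in for a physical controller: returns a value in 0.0-1.0."""
    return 0.75  # e.g., a slider position

def mapping(control_value):
    """Mapping layer: turn a control value into sound engine parameters.
    Here, an exponential mapping from slider position to frequency."""
    frequency = 110.0 * 2 ** (control_value * 4)  # 110 Hz up four octaves
    amplitude = 0.5
    return {"frequency": frequency, "amplitude": amplitude}

def sound_engine(params):
    """Stand-in for a synthesizer: just report what it would play."""
    print(f"Playing {params['frequency']:.1f} Hz at amplitude {params['amplitude']}")

sound_engine(mapping(controller_read()))
```

Because the mapping is a separate layer, it can be swapped out (for instance, replacing the exponential curve with a linear one) without changing either the controller or the sound engine.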
There exist numerous digital musical instruments. Many commercial digital musical instruments are based on a piano-like interface, with a controller, mapping, and sound engine built into the same physical device. Although piano-based instruments are hardly considered new interfaces for musical expression any longer, they are nevertheless digital musical instruments.
*Nord Stage (Photo by Ben Garney, CC BY 2.0 license)*
The Hands by Michel Waisvisz is an example of a “classic” digital musical instrument not inspired by traditional musical instruments. The controller uses various sensors to capture hand, arm and finger movements. The sound engine is separated from the controller, which makes it possible to create many different types of mappings between movement and sound. See The Hands in action in this YouTube video.
*The Hands (Photo by Luka Ivanovic, CC BY-SA 2.0 license)*
NIME as a research field
One of the main areas of interest to NIME researchers is the development of new modes of interaction:
- How can musical sound be controlled in intuitive and interesting manners?
- How does the way we control musical sound shape our perception of the sound?
- How can new technologies be used to enhance our musical experiences?
At the University of Oslo we have been interested in exploring different ways of creating NIMEs based on our research on embodied music cognition. Here we have used results from cognitive experiments when designing mappings between movement and sound. Similarly, we have explored how NIMEs may be used in experiments. One such example is the “instrument” MuMYO.
MuMYO
The MuMYO instrument used in the previous video uses the commercially available MYO armband as the sensor. The MYO consists of eight electromyography (EMG) sensors which measure muscle tension, in addition to an inertial measurement unit measuring motion and rotation.
Using a machine learning algorithm to extract useful features from the muscle sensing data, the device is able to detect hand postures such as waving in or out, making a fist, or spreading the fingers. The possibility of capturing muscle tension data is interesting when it comes to developing new musical instruments, as the effort put into tightening the muscles may be accessed more directly than when only measuring motion.
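The MYO’s own recognition algorithm is proprietary, but the general pipeline can be sketched: compute a simple feature, such as root-mean-square (RMS) amplitude, for each of the eight EMG channels over short windows, then feed the feature vectors to a supervised classifier. The sketch below uses synthetic data and a nearest-neighbour classifier as stand-ins.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# A generic illustration of posture classification from multichannel EMG:
# windowed RMS features -> supervised classifier. All data is synthetic.

rng = np.random.default_rng(0)

def rms_features(window):
    """RMS amplitude per channel for one window of EMG.
    window: array of shape (n_samples, 8) for the eight MYO channels."""
    return np.sqrt(np.mean(window ** 2, axis=0))

def synth_window(strong_channels):
    """Synthetic EMG window where the given channels are more active."""
    base = rng.normal(0, 0.1, size=(50, 8))
    base[:, strong_channels] += rng.normal(0, 0.5, size=(50, len(strong_channels)))
    return base

# Training data: a "fist" posture activates channels 0-3 more strongly,
# a "spread" posture activates channels 4-7.
X = np.array([rms_features(synth_window([0, 1, 2, 3])) for _ in range(20)] +
             [rms_features(synth_window([4, 5, 6, 7])) for _ in range(20)])
y = ["fist"] * 20 + ["spread"] * 20

clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(clf.predict([rms_features(synth_window([0, 1, 2, 3]))]))  # expected: ['fist']
```

In a musical mapping, the predicted posture (or the raw RMS values themselves) could then be routed to sound engine parameters, just as in the controller-mapping-engine model above.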
References
- E.R. Miranda, and M.M. Wanderley. New digital musical instruments: control and interaction beyond the keyboard. Middleton, WI: AR Editions, Inc., 2006.
- K. Nymoen, M. R. Haugen, and A. R. Jensenius. “MuMYO: Evaluating and Exploring the MYO Armband for Musical Interaction.” In Proceedings of the International Conference on New Interfaces for Musical Expression, Louisiana State University, Baton Rouge, LA, 2015.