Learning Diary, Week 6

This week the reading focused on the auditory and vestibular systems. Outside of the Brain structure and function course, I was working through ideas about what constitutes live music performance, and placed social experience as a central tenet. This was informed by a new meta-study from Aalto titled “Social Pleasures of Music.”

Here is an excerpt from my writing on the topic:

Musicologist Thor Magnusson proposes that traditional instruments serve as bodily extensions while computational systems represent mind extensions (Magnusson, 2019). This dichotomy reflects the current state of instrumental versus digital music, but the mind does not have to be disembodied. It has been theorized that brains evolved for movement (Wolpert, 2011), and as such the body is the primary interface for the mind. Moreover, even in passive listening the brain’s motor cortex is continuously activated, and this activity can be decoded to reveal the emotional states triggered by music (Nummenmaa et al., 2020). Barring certain motor disabilities, the body is always involved in the mind’s activities, even if that activity is as slight as pressing play on a piece of generative computer music.

Acoustic instruments express the energy put into them; this is what powers them. Most digital musical instruments (DMIs), by contrast, require only minimal gesture, because they do not convert the player’s energy into sound; they are powered by electricity (Magnusson, 2018). By extension, performing with a computer does not demand much physical exertion. However, it is possible to integrate more embodied expression into computational tools, because digital music is created by mapping controls onto a sound engine. Mapping is a central concern in the design of New Interfaces for Musical Expression (NIME) (Hunt et al., 2002).
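To make the idea of mapping concrete, here is a small sketch of my own (the sound engine, its parameters, and the numbers are invented for illustration, not taken from Hunt et al. or any particular instrument). It contrasts a one-to-one mapping, where a control moves a single parameter, with a one-to-many mapping, where the same gesture shapes several coupled parameters at once, the kind of design decision the NIME literature treats as decisive for how expressive a digital instrument feels.

```python
# Illustrative sketch only: a hypothetical sound engine with three parameters.
# The mappings and parameter names are my own assumptions, not from the cited papers.

def one_to_one(fader: float) -> dict:
    """One control moves one parameter: the fader changes loudness only."""
    return {"amplitude": fader, "cutoff_hz": 1000.0, "reverb_mix": 0.2}

def one_to_many(fader: float) -> dict:
    """One control shapes several coupled parameters, like bowing a string harder:
    louder, brighter, and drier all at once."""
    return {
        "amplitude": fader,
        "cutoff_hz": 200.0 + 4800.0 * fader ** 2,  # brightness rises with effort
        "reverb_mix": 0.5 * (1.0 - fader),         # less wash at full intensity
    }

if __name__ == "__main__":
    for gesture in (0.1, 0.5, 0.9):
        print(f"gesture={gesture}  1:1 -> {one_to_one(gesture)}  1:n -> {one_to_many(gesture)}")
```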

The schematic proposed in an early NIME paper by Wessel and Wright outlines a circular interaction between a performer’s intentions, motor actions with an interface, and the sounds produced. This feedback loop is also noted in the definition of live electronic music by techno progenitor Jeff Mills: “What makes something live is the usage of the musician’s intuition to feel what to do next,” Mills explains. “It’s a reactionary gesture based on how the musician is analyzing the current situation” (Coleman, 2016). Regardless of the DMI used, DJs and dance music producers can be outstanding performers. In the settings where dance music lives, however, what a performer is doing often matters less than what is thumping out of the speakers, or than the people on the floor. This is especially true when flashing lights and fog are added, which obscure individuals and heighten the collective sensory experience.
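Wessel and Wright’s loop can also be stated in a few lines of code. The toy sketch below is purely my own illustration of that circular structure (the update rule and the numbers are invented, not their model): the performer forms an intention, acts on the interface, hears the resulting sound, and lets that perception shape the next intention.

```python
# Toy illustration of the intention -> gesture -> sound -> perception loop.
# Entirely my own sketch; the update rule and constants are arbitrary.
import random

def perform(steps: int = 8, target: float = 0.8) -> None:
    intention = 0.5                                        # desired intensity, 0..1
    for step in range(steps):
        gesture = intention + random.uniform(-0.05, 0.05)  # motor action is imprecise
        sound = min(1.0, max(0.0, gesture))                 # interface maps gesture to sound
        intention += 0.3 * (target - sound)                 # listening adjusts the next intention
        intention = min(1.0, max(0.0, intention))
        print(f"step {step}: gesture={gesture:.2f}  sound={sound:.2f}  next intention={intention:.2f}")

if __name__ == "__main__":
    perform()
```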

Collective experience is intertwined with how music functions in our brains, and this can be seen on both physical and operational levels. In a meta-analysis of neuroimaging studies that examined the brain regions engaged during social processing and music perception, significant overlap was detected across multiple areas. These included the auditory cortices, as well as posterior temporal polysensory areas, the motor cortex, the thalamus, the amygdala and midcingulate cortex, and the anterior insula and ventral striatum (Nummenmaa et al., 2020). It is intuitive that the social brain and the sonic brain share the same structures; this is underscored by their co-evolution (Schulkin & Raglan, 2014). Based on this research (and personal experience), the social meaning of music performance is central to what makes it “live.”

Posted by M
