updated from proofreading
12 abstract.tex
@@ -1,2 +1,10 @@
-This paper critically reviews the history of programming languages for music, distinct from computer music as a genre, by drawing on discussions from sound studies. The paper focuses on the universalist assumptions around pulse-code modulation and the Unit Generator concept established by the MUSIC-N family, which established a lineage of role between composers and scientists which tends to turn
-composers into consumers. The paper concludes that programming languages for music developed after the 2000s present alternatives to the often-invisible technological infrastructures surrounding music, such as formats and protocols, rather than solely aiming to create novel musical styles.
+This paper critically reviews the history of programming languages for
+music, distinct from computer music as a genre, by drawing on
+discussions from sound studies. The paper focuses on the universalist
+assumptions around pulse-code modulation and the Unit Generator concept
+established by the MUSIC-N family, which established a division of roles
+between composers and scientists that tends to turn composers into
+consumers. The paper concludes that programming languages for music
+developed after the 2000s present alternatives to the often-invisible
+technological infrastructures surrounding music, such as formats and
+protocols, rather than solely aiming to create novel musical styles.
522 content.tex
@@ -1,12 +1,12 @@
 \section{Introduction}\label{introduction}
 
-Programming languages and environments for music, for instance, Max, Pure Data,
-Csound, and SuperCollider, has been referred to as ``Computer Music
-Language''\citep{McCartney2002, Nishino2016, McPherson2020}, ``Language
-for Computer Music''\citep{Dannenberg2018}, and ``Computer Music
-Programming Systems''\citep{Lazzarini2013}, though there is no clear
+Programming languages and environments for music, such as Max, Pure Data,
+Csound, and SuperCollider, have been referred to as ``computer music
+language''\citep{McCartney2002, Nishino2016, McPherson2020}, ``language
+for computer music''\citep{Dannenberg2018}, and ``computer music
+programming systems''\citep{Lazzarini2013}, though there is no clear
 consensus on the use of these terms. However, as the shared term
-``Computer Music'' implies, these programming languages are deeply
+``computer music'' implies, these programming languages are deeply
 intertwined with the history of technology-driven music, which developed
 under the premise that ``almost any sound can be
 produced''\citep[p557]{mathews1963} through the use of computers.
@@ -15,89 +15,86 @@ In the early days, when computers existed only in research laboratories
 and neither displays nor mice existed, creating sound or music with
 computers was inevitably equivalent to programming. Today, however,
 programming as a means to produce sound on a computer---rather than
-employing Digital Audio Workstation (DAW) software like Pro Tools is not
-popular. In other words, programming languages for music developed after
-the proliferation of personal computers are the softwares that
-intentionally chose programming (whether textual or graphical) as their
-frontend for making sound.
+employing digital audio workstation (DAW) software, such as Pro Tools, is
+not popular. In other words, programming languages for music developed
+after the proliferation of personal computers are the software tools
+that intentionally chose programming (whether textual or graphical) as
+their frontend for making sound.
 
 Since the 1990s, the theoretical development of programming languages
 and the various constraints required for real-time audio processing have
 significantly increased the specialized knowledge necessary for
 developing programming languages for music today. Furthermore, some
 languages developed after the 2000s are not necessarily aimed at
-pursuing new forms of musical expression. There is still no unified perspective on how the value of those languages should be evaluated.
+pursuing new forms of musical expression, and there is still no unified
+perspective on how their value should be evaluated.
 
-In this paper, a critical historical review is conducted by drawing on
-discussions from sound studies alongside existing surveys, aiming to
-consider programming languages for music independently from computer
-music as the specific genre.
+This paper is a critical historical review that draws on discussions
+from sound studies and existing surveys to examine programming languages
+for music as distinct from computer music as a specific genre.
 
 \subsection{Use of the Term ``Computer
 Music''}\label{use-of-the-term-computer-music}
 
-The term ``Computer Music,'' despite its literal and potentially broad
-meaning, has been noted for being used within a narrowly defined
-framework tied to specific styles or communities, as represented in
-Ostertag's \emph{Why Computer Music Sucks}\citep{ostertag1998} since the
-1990s.
+Since the 1990s, the term ``computer music,'' despite its literal and
+potentially broad meaning, has been noted for being used within a
+narrowly defined framework tied to specific styles or communities, as
+explored in Ostertag's \emph{Why Computer Music
+Sucks}\citep{ostertag1998}.
 
 As Eric Lyon observed nearly two decades ago, it is now nearly impossible to
 imagine a situation in which computers are not involved at any stage
 from the production to the experience of music\citep[p1]{lyon_we_2006}. The
-necessity of using the term ``Computer Music'' to describe academic
-contexts has consequently diminished.
+necessity of using the term ``computer music'' in academic contexts has
+consequently diminished.
 
 Holbrook and Rudi extended Lyon's discussion by proposing the use of
-frameworks like Post-Acousmatic\citep{adkins2016} to redefine ``Computer
-Music.'' Their approach incorporates the tradition of pre-computer
-experimental/electronic music, situating it as part of the broader
-continuum of technology-based or technology-driven
-music\citep{holbrook2022}.
+frameworks such as post-acousmatic\citep{adkins2016} to redefine
+computer music. Their approach situates the tradition of pre-computer
+experimental/electronic music as part of the broader continuum of
+technology-based or technology-driven music\citep{holbrook2022}.
 
-While the strict definition of Post-Acousmatic music is deliberately
+Although the strict definition of post-acousmatic music is deliberately
 left open, one of its key aspects is the expansion of music production
 from institutional settings to individuals, as well as the
 diversification of technological usage\citep[p113]{adkins2016}. However,
-while the Post-Acousmatic discourse integrates the historical fact that
-declining computer costs and increasing accessibility beyond
-laboratories have enabled diverse musical expressions, it still
-marginalizes much of the music that is ``just using computers'' and
-fails to provide insights into this divided landscape.
+despite integrating the historical fact that declining computer costs
+and increasing accessibility beyond laboratories have enabled diverse
+musical expressions, the post-acousmatic discourse still marginalizes
+much of the music that is ``just using computers'' and fails to provide
+insights into this divided landscape.
 
 Lyon argues that the term ``computer music'' is a style-agnostic
-definition almost like ``piano music,'' implying that it ignores the
-style and form inside music produced by the instrument.
-
-However, one of the defining characteristics of computers as a medium
-lies in their ability to treat musical styles themselves as subjects of
-meta-manipulation through simulation and modeling. When creating
-instruments with computers or when using such instruments, sound
-production involves programming---manipulating symbols embedded in a
-particular musical culture. This recursive embedding of language and
-recognition, which construct that musical culture, into the resulting
-music is a process that goes beyond what is possible with acoustic
-instruments or analog instruments. Magnusson refers to this
-characteristic of digital instruments as ``Epistemic Tools'' and points
-out that the computer serves to ``create a snapshot of musical theory,
-freezing musical culture in time'' \citep[p.173]{Magnusson2009} through
-formalization.
+definition, almost like ``piano music,'' implying that it ignores the
+style and form of music produced by the instrument. However, one of the
+defining characteristics of computers as a medium lies in their ability
+to treat musical styles themselves as subjects of meta-manipulation
+through simulation and modeling. When creating instruments with
+computers or using such instruments, sound production involves
+programming---manipulating symbols embedded in a particular musical
+culture. This recursive embedding of language and recognition, which
+construct that musical culture, into the resulting music is a process
+that exceeds what is possible with acoustic instruments or analog
+instruments. Magnusson refers to this characteristic of digital
+instruments as ``epistemic tools'' and points out that the computer
+serves to ``create a snapshot of musical theory, freezing musical
+culture in time'' \citep[p.173]{Magnusson2009} through formalization.
 
 Today, many people use computers for music production not because they
 consciously leverage the uniqueness of the meta-medium, but simply
 because there are no quicker or more convenient alternatives available.
-Even so, within a musical culture where computers are used as a
-reluctant choice, musicians are inevitably influenced by the underlying
-infrastructures like software, protocols, and formats. As long as the
-history of programming languages for music remains intertwined with the
-history of computer music as it relates to specific genres or
-communities, it becomes difficult to analyze music created with
-computers as merely a passive means.
+Even so, within a musical culture where computers are used out of
+necessity rather than preference, musicians are inevitably influenced by
+the underlying infrastructures such as software, protocols, and formats.
+As long as the history of programming languages for music remains
+intertwined with the history of computer music as it relates to specific
+genres or communities, it will be difficult to analyze music created
+with computers as merely a passive means.
 
 In this paper, the history of programming languages for music is
-reexamined with an approach that, in contrast to Lyon, adopts a
-radically style-agnostic perspective. Rather than focusing on what has
-been created with these tools, the emphasis is placed on how these tools
+reexamined with an approach that, unlike Lyon's, adopts a radically
+style-agnostic perspective. Rather than focusing on what has been
+created with these tools, the emphasis is placed on how these tools
 themselves have been constructed. The paper centers on the following two
 topics: 1. A critique of the universality of sound representation using
 pulse-code modulation (PCM)---the foundational concept underlying most
@@ -112,9 +109,9 @@ internal workings.
 Ultimately, the paper concludes that programming languages for music
 developed since the 2000s are not solely aimed at creating new music but
 also serve as alternatives to the often-invisible technological
-infrastructures surrounding music, such as formats and protocols. By
-doing so, the paper proposes new perspectives for the historical study
-of music created with computers.
+infrastructures surrounding music, such as formats and protocols. Thus,
+the paper proposes new perspectives for the historical study of music
+created with computers.
 
 \section{PCM and Early Computer
 Music}\label{pcm-and-early-computer-music}
@@ -126,12 +123,11 @@ UK and Australia prior to MUSIC have also been
 documented\citep{doornbusch2017}.
 
 The earliest experiments with sound generation on computers in the 1950s
-involved controlling the intervals of one-bit pulses to
-control pitch. This was partly because the operational clock frequencies
-of early computers fell within the audible range, making the
-sonification of electrical signals a practical and cost-effective
-debugging method compared to visualizing them on displays or
-oscilloscopes.
+involved controlling the intervals of one-bit pulses to control pitch.
+This was partly because the operational clock frequencies of early
+computers fell within the audible range, making the sonification of
+electrical signals a practical and cost-effective debugging method
+compared to visualizing them on displays or oscilloscopes.
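The pulse-interval technique described above can be sketched as follows. This is a hypothetical illustration, not a reconstruction of any historical program: the clock rate and function names are assumptions of mine.

```python
# Hypothetical sketch (not from the sources cited here): approximating a
# pitch by spacing one-bit pulses, as early machines did when their
# clock frequencies fell within the audible range.
def pulse_interval_cycles(clock_hz: float, pitch_hz: float) -> int:
    """Machine cycles to wait between pulses to approximate pitch_hz."""
    return round(clock_hz / pitch_hz)

def actual_pitch(clock_hz: float, interval_cycles: int) -> float:
    """Pitch actually produced by pulsing every interval_cycles cycles."""
    return clock_hz / interval_cycles

# e.g. an assumed 1 MHz clock approximating A4 (440 Hz)
interval = pulse_interval_cycles(1_000_000, 440)  # 2273 cycles
```

Because the interval must be a whole number of cycles, the produced pitch is only an approximation, and the error grows as the pitch approaches the clock rate.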
 
 For instance, Louis Wilson, who was an engineer of the BINAC in the US,
 noticed that an AM radio placed near the computer could pick up weak
@@ -142,18 +138,18 @@ debugging. Frances Elizabeth Holberton took this a step further by
 programming the computer to generate pulses at desired intervals,
 creating melodies in 1949\citep{woltman1990}.
 
-Also, some computers at this time, such as the CSIR Mark I (CSIRAC) in
-Australia often had primitive ``hoot'' instructions that emit a single
-pulse to a speaker. Early sound generation using computers, including
-the BINAC and CSIR Mark I, primarily involved playing melodies of
-existing music.
+Further, some computers at this time, such as the CSIR Mark I (CSIRAC)
+in Australia, often had primitive ``hoot'' instructions that emitted a
+single pulse to a speaker. Early sound generation using computers,
+including the BINAC and CSIR Mark I, primarily involved playing melodies
+of existing music.
 
-However, not all sound generation at this time was merely involved the
+However, not all sound generation at this time was merely the
 reproduction of existing music. Doornbusch highlights experiments on the
 Pilot ACE (the prototype of the Automatic Computing Engine) in the UK,
 which utilized acoustic delay line memory to produce unique
 sounds\citep[pp.303-304]{doornbusch2017}. Acoustic delay line memory,
-used as the main memory in early computers such as the BINAC and the
+used as the main memory in early computers, such as the BINAC and the
 CSIR Mark I, employed the feedback of pulses traveling through mercury
 via a speaker and microphone setup to retain data. Donald Davies, an
 engineer on the ACE project, described the sounds it produced as
@@ -171,9 +167,9 @@ into colored noise as the complexity went beyond human understanding.
 
 This music arose unintentionally during program optimization and was
 made possible by the ``misuse'' of switches installed for debugging
 delay line memory. Media scholar Miyazaki described the practice of
 listening to sounds generated by algorithms and their bit patterns,
-integrated into programming, as ``Algo- \emph{rhythmic}
+integrated into programming, as ``Algo-\emph{rhythmic}
 Listening''\citep{miyazaki2012}.
 
 Doornbusch warns against ignoring these early computer music practices
@@ -181,22 +177,21 @@ simply because they did not directly influence subsequent
 research\citep[p.305]{doornbusch2017}. Indeed, the sounds produced by
 the Pilot ACE challenge the post-acousmatic historical narrative, which
 suggests that computer music transitioned from being democratized in
-closed electro-acoustic music laboratories to individual musicians.
+closed electro-acoustic music laboratories to being embraced by
+individual musicians.
 
 This is because the sounds generated by the Pilot ACE were not created
 by musical experts, nor were they solely intended for debugging
 purposes. Instead, they were programmed with the goal of producing
 interesting sounds. Moreover, these sounds were tied to the hardware of
-the acoustic delay line memory---a feature that was likely difficult to
+the acoustic delay line memory---a feature that is likely difficult to
 replicate, even in today's sound programming environments.
 
-Similarly, in the 1960s at MIT, Peter Samson took advantage of the
-debugging speaker on the TX-0, a machine that had become outdated and
-was freely available for students to use. He conducted experiments in
-which he played melodies, such as Bach fugues, using ``hoot''
-instruction\citep{levy_hackers_2010}. Samson's experiments with the TX-0
-later evolved into the creation of a program that allowed melodies to be
-described using text within MIT.
+Similarly, in the 1960s at the Massachusetts Institute of Technology
+(MIT), Peter Samson exploited the debugging speaker on the TX-0, a
+machine that had become outdated and was freely available for students
+to use. He conducted experiments in which he played melodies, such as
+Bach fugues, using the ``hoot'' instruction\citep{levy_hackers_2010}.
 
 Building on this, Samson developed a program called the Harmony Compiler
 for the DEC PDP-1, which was derived from the TX-0. This program gained
@@ -216,26 +211,24 @@ PCM}\label{acousmatic-listening-the-premise-of-the-universality-of-pcm}
 
 One of the reasons why MUSIC led to subsequent advancements in research
 was not simply that it was developed early, but because it was the first
-to implement, but because it was the first to implement sound
-representation on a computer based on \textbf{pulse-code modulation
-(PCM)}, which theoretically can generate ``almost any
-sound''.
+to implement sound representation on a computer based on PCM, which
+theoretically can generate ``almost any sound''.
 
 PCM, the foundational digital sound representation today, involves
 sampling audio waveforms at discrete intervals and quantizing the sound
 pressure at each interval as discrete numerical values.
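The sampling-and-quantization scheme just described can be sketched in a few lines; the function and parameter names here are mine, used only to make the two steps (discrete-time sampling, fixed-depth quantization) concrete.

```python
import math

# Minimal sketch of PCM as described above (names are assumptions):
# sample a waveform at discrete intervals and quantize each sample
# to a fixed-depth integer code.
def pcm_encode(signal, sample_rate=8000, n_samples=8, bits=8):
    """Sample signal(t) and quantize values in [-1.0, 1.0] to `bits` bits."""
    max_code = 2 ** (bits - 1) - 1  # 127 for 8-bit signed codes
    return [round(max(-1.0, min(1.0, signal(i / sample_rate))) * max_code)
            for i in range(n_samples)]

# A 440 Hz sine sampled at 8 kHz with 8-bit quantization
samples = pcm_encode(lambda t: math.sin(2 * math.pi * 440 * t))
```

Both parameters embody the universalist claim discussed below the surface of the format: the sample rate bounds representable frequencies, and the bit depth bounds representable dynamic range.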
 
-The issue with the universalism of PCM in the history of computer music
-is inherent in the concept of Acousmatic Listening, which serves as a
-premise for Post-Acousmatic. Acousmatic, introduced by Piegnot as a
-listening style for tape music such as musique concrète and later
-theorized by Schaeffer\citep[p106]{adkins2016}, refers to a mode of
-listening in which the listener refrains from imagining a specific sound
-source. This concept has been widely applied in theories of listening to
-recorded sound, including Michel Chion's analysis of sound design in
-film.
+The problem with the universalism of PCM in the history of computer
+music is inherent in the concept of acousmatic listening, which serves
+as a premise for post-acousmatic. Acousmatic listening, introduced by
+Peignot as a listening style for tape music, such as musique concrète,
+and later theorized by Schaeffer\citep[p106]{adkins2016}, refers to a
+mode of listening in which the listener refrains from imagining a
+specific sound source. This concept has been widely applied in theories
+of listening to recorded sound, including Michel Chion's analysis of
+sound design in film.
 
-However, as sound studies scholar Jonathan Sterne has pointed out,
+However, as sound studies scholar Jonathan Sterne has observed,
 discourses surrounding acousmatic listening often work to delineate
 pre-recording auditory experiences as ``natural'' by
 contrast\footnote{Sterne later critiques the phenomenological basis of
@@ -264,9 +257,9 @@ noise from it. Sampling theory builds on this premise through Shannon's
 information theory by statistically modeling human auditory
 characteristics: it assumes that humans cannot discern volume
 differences below certain thresholds or perceive vibrations outside
-specific frequency ranges. By limiting representation to the reconizable range,
-sampling theory ensures that all audible sounds can be effectively
-encoded.
+specific frequency ranges. By limiting representation to the recognizable
+range, sampling theory ensures that all audible sounds can be
+effectively encoded.
 
 Incidentally, the actual implementation of PCM in MUSIC I only allowed
 for monophonic triangle waves with controllable volume, pitch, and
@@ -274,24 +267,25 @@ timing\citep{Mathews1980}. Would anyone today describe such a system as
 capable of producing ``almost any sound''?
 
 Even when considering more contemporary applications, processes like
-ring modulation (RM), amplitude modulation (AM), or distortion often
-generate aliasing artifacts unless proper oversampling is applied. These
+ring modulation and amplitude modulation, or distortion often cause
+aliasing artifacts unless proper oversampling is applied. These
 artifacts occur because PCM, while universally suitable for reproducing
 recorded sound, is not inherently versatile as a medium for generating
-new sounds. As Puckette has argued, alternative representations, for instance, representation by a sequence of linear segments or physical modeling synthesis, offer
-other possibilities\citep{puckette2015}. Therefore, PCM is not a
-completely universal tool for creating sound.
+new sounds. As Puckette argues, alternative representations, for
+instance, representation by a sequence of linear segments or physical
+modeling synthesis, offer other possibilities\citep{puckette2015}.
+Therefore, PCM is not a completely universal tool for creating sound.
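The aliasing problem mentioned above can be illustrated numerically. This is a hypothetical sketch of my own (the function name and figures are not from the paper): ring modulation of two sinusoids yields sum and difference frequencies, and a sum that exceeds the Nyquist frequency folds back into the audible band.

```python
# Hypothetical sketch: output frequencies of ring-modulating two sines
# at a given sample rate, with partials above Nyquist folded back.
def ring_mod_partials(f_a, f_b, sample_rate):
    """Sum/difference partials of f_a * f_b, reflected at Nyquist."""
    nyquist = sample_rate / 2
    partials = [abs(f_a - f_b), f_a + f_b]
    # Any partial above Nyquist is reflected (aliased) back down
    return [f if f <= nyquist else sample_rate - f for f in partials]

# 15 kHz ring-modulated with 9 kHz at 44.1 kHz: the 24 kHz sum partial
# does not vanish but aliases down to 20.1 kHz
partials = ring_mod_partials(15_000, 9_000, 44_100)  # [6000, 20100]
```

Oversampling works by raising the effective Nyquist frequency during processing so the sum partial stays representable, then filtering it out before returning to the original rate.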
 
 \section{What Does the Unit Generator
 Hide?}\label{what-does-the-unit-generator-hide}
 
-Beginning with version III, MUSIC took the form of a block diagram compiler that processes two input sources: a score
-language, which represents a list of time-varying parameters, and an
-orchestra language, which describes the connections between \textbf{Unit
-Generators} such as oscillators and filters. In this paper, the term
-``Unit Generator''refers to a signal processing module whose
-implementation is either not open or written in a language different
-from the one used by the user.
+Beginning with Version III, MUSIC took the form of a block diagram
+compiler that processes two input sources: a score language, which
+represents a list of time-varying parameters, and an orchestra language,
+which describes the connections between \textbf{unit generators (UGens)},
+such as oscillators and filters. In this paper, the term ``UGen'' refers
+to a signal processing module whose implementation is either not open or
+written in a language different from the one used by the user.
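The UGen-graph idea can be sketched minimally as follows; the class and method names are mine, not MUSIC's, and the point is only that each unit exposes a uniform per-sample interface while keeping its internal state hidden from the user.

```python
import math

# Minimal sketch of the UGen idea (all names are assumptions): an
# "orchestra" is a graph of signal-processing units whose internal
# state is hidden behind a uniform per-sample interface.
class Osc:
    """Sine-oscillator UGen; its phase variable is hidden internal state."""
    def __init__(self, freq_hz, sample_rate=44100):
        self.freq_hz, self.sample_rate = freq_hz, sample_rate
        self._phase = 0.0  # state the user never manipulates directly
    def next_sample(self):
        out = math.sin(self._phase)
        self._phase += 2 * math.pi * self.freq_hz / self.sample_rate
        return out

class Gain:
    """Amplifier UGen that scales its upstream unit's output."""
    def __init__(self, source, amount):
        self.source, self.amount = source, amount
    def next_sample(self):
        return self.source.next_sample() * self.amount

patch = Gain(Osc(440), 0.5)  # orchestra: Osc -> Gain
block = [patch.next_sample() for _ in range(64)]  # render one block
```

The user only wires units together and pulls samples; whether `Osc` keeps a phase accumulator, a wavetable index, or something else is invisible, which is exactly the kind of hiding this section examines.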
 
 The MUSIC family, in the context of computer music research, achieved
 success for performing sound synthesis based on PCM, but this success
@@ -316,10 +310,10 @@ willing to experiment.\citep[p17]{Mathews1980}
 
 This clear delineation of roles between musicians and scientists became
 one of the defining characteristics of post-MUSIC computer music
-research. Paradoxically, while computer music research aimed to create
-sounds never heard before, it also paved the way for further research by
-allowing musicians to focus on composition without having to understand
-the cumbersome work of programming.
+research. Paradoxically, although computer music research aimed to
+create sounds never heard before, it also paved the way for further
+research by allowing musicians to focus on composition without having to
+understand the cumbersome work of programming.
 
 \subsection{Example: Hiding Internal State Variables in Signal
 Processing}\label{example-hiding-internal-state-variables-in-signal-processing}
@@ -329,16 +323,15 @@ language and an orchestra language, the actual implementation of each
 One notable but often overlooked example is MUSIGOL, a derivative of
-MUSIC IV \citep{innis_sound_1968}. In MUSIGOL, not only was the system
-itself but even the score and orchestra defined by user were written
-entirely as ALGOL 60 language. Similar to today's Processing or Arduino,
-MUSIGOL is one of the earliest internal DSL (Domain-specific languages) for
-music, which means implemented as an library \footnote{While
-MUS10, used at Stanford University, was not an internal DSL, it was
-created by modifying an existing ALGOL parser \citep[p.248]{loy1985}.}.
-(Therefore, according to the definition of Unit Generator provided in
-this paper, MUSIGOL does not qualify as a language that uses Unit
-Generators.)
+MUSIC IV \citep{innis_sound_1968}. In MUSIGOL, not only the system
+itself but also the user-defined score and orchestra were written
+entirely in ALGOL 60. Similar to today's Processing or Arduino, MUSIGOL
+is one of the earliest internal domain-specific languages (DSLs) for
+music; that is, it is implemented as a library\footnote{While MUS10,
+used at Stanford University, was not an internal DSL, it was created by
+modifying an existing ALGOL parser \citep[p.248]{loy1985}.}. (According
+to the definition in this paper, MUSIGOL does not qualify as a language
+that uses UGens.)
|
|
||||||
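The flavor of an internal DSL, in which the ``score'' is simply code in
the host language, can be suggested with a deliberately anachronistic
sketch. The following Python fragment is purely illustrative and all
names in it are invented here; MUSIGOL itself expressed scores as ALGOL
60 procedures, not Python:

```python
# A toy internal DSL for scores: the "score" below is ordinary Python,
# just as MUSIGOL scores were ordinary ALGOL 60. All names are invented
# for illustration and do not come from MUSIGOL.
notes = []

def note(start_sec, dur_sec, freq_hz):
    """Record one note event; a real system would schedule synthesis."""
    notes.append((start_sec, dur_sec, freq_hz))

# The score itself is plain host-language code, so the host's loops and
# arithmetic come for free -- the defining benefit of an internal DSL.
for i in range(4):
    note(0.25 * i, 0.25, 220.0 * (i + 1))
```

Because the score is host-language code, no separate parser is needed:
the host compiler does the work, which is exactly what distinguishes an
internal DSL implemented as a library from an external one.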
The level of abstraction deemed intuitive for musicians varied across
different iterations of the MUSIC N series. This can be illustrated by
@@ -358,28 +351,27 @@ to the normalized values of the coefficients \(I_2\) and \(I_3\),
divided by \(I_1\) (as a result, the overall gain of the filter can be
greater or less than 1). The parameters \passthrough{\lstinline!Pi!} and
\passthrough{\lstinline!Pj!} are normally used to receive parameters
from the score, specifically among the available
\passthrough{\lstinline!P0!} to \passthrough{\lstinline!P30!}. In this
case, however, these parameters are repurposed as general-purpose memory
to temporarily store feedback signals. Similarly, other UGens, such as
oscillators, reuse note parameters to handle operations like phase
accumulation. As a result, users needed to manually calculate feedback
gains based on the desired frequency characteristics\footnote{It is said
that a preprocessing feature called \passthrough{\lstinline!CONVT!}
could be used to transform frequency characteristics into coefficients
\citep[p77]{mathews_technology_1969}.}, and they also had to account
for at least two sample memory spaces.

On the other hand, in the later MUSIC 11 and its successor, Csound, by
Barry Vercoe, the band-pass filter is defined as a UGen named
\passthrough{\lstinline!reson!}. This UGen takes four parameters: the
input signal, center cutoff frequency, bandwidth, and Q
factor\citep[p248]{vercoe_computer_1983}. Unlike previous
implementations, users no longer need to calculate coefficients
manually, nor do they need to be aware of the two-sample memory space.
However, in MUSIC 11 and Csound, it is possible to implement this
band-pass filter from scratch as a user-defined opcode (UDO) as shown in
Listing \ref{lst:reson}. Vercoe emphasized that while signal processing
primitives should allow for low-level operations, such as single-sample
feedback, and eliminate black boxes, it is equally important to provide
@@ -391,7 +383,7 @@ clutter'') when users do not need to understand the internal details
FLT I1 O I2 I3 Pi Pj;
\end{lstlisting}

\begin{lstlisting}[caption={Example of a scratch implementation and the built-in operation of the RESON UGen, respectively, in MUSIC 11. Retrieved from the original paper. (Comments are omitted owing to space restrictions.)}, label=lst:reson]
instr 1
la1 init 0
la2 init 0
@@ -411,38 +403,38 @@ a1 reson a1,p5,p6,1
endin
\end{lstlisting}

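The two-sample memory at stake in this example can be made concrete
with a short sketch in a present-day language. The following Python
fragment is illustrative only: its coefficient formulas follow a
common reson-style two-pole design rather than any particular MUSIC V
or Csound source. The variables \passthrough{\lstinline!y1!} and
\passthrough{\lstinline!y2!} are exactly the internal state that early
users had to park in note parameters such as
\passthrough{\lstinline!Pi!} and \passthrough{\lstinline!Pj!}, and that
the \passthrough{\lstinline!reson!} UGen later hid:

```python
import math

def make_reson(freq_hz, bw_hz, sr=44100.0):
    """Two-pole resonator; illustrative coefficients, not MUSIC V's."""
    b2 = math.exp(-2.0 * math.pi * bw_hz / sr)
    b1 = (4.0 * b2 / (1.0 + b2)) * math.cos(2.0 * math.pi * freq_hz / sr)
    a0 = 1.0 - b2  # rough gain normalization
    y1, y2 = 0.0, 0.0  # the two-sample feedback memory once managed by hand

    def tick(x):
        nonlocal y1, y2
        y = a0 * x + b1 * y1 - b2 * y2  # single-sample feedback recurrence
        y2, y1 = y1, y
        return y

    return tick

# Feeding an impulse makes the hidden state visible: the filter keeps
# "ringing" from its two stored samples long after the input is silent.
reson = make_reson(440.0, 100.0)
ring = [reson(1.0 if n == 0 else 0.0) for n in range(64)]
```

Hiding \passthrough{\lstinline!y1!} and \passthrough{\lstinline!y2!}
inside the closure is precisely the abstraction step that the built-in
\passthrough{\lstinline!reson!} UGen performs for its users.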
On the other hand, in succeeding environments that inherit the UGen
paradigm, such as Pure Data \citep{puckette_pure_1997}, Max (whose
signal processing functionalities were ported from Pure Data as MSP),
SuperCollider \citep{mccartney_supercollider_1996}, and ChucK
\citep{wang_chuck_2015}, primitive UGens are implemented in
general-purpose languages such as C or C++\footnote{ChucK later
introduced ChuGen, a similar extension to Csound's UDO, allowing users
to define UGens within the ChucK language itself \citep{Salazar2012}.
However, in both Csound and ChucK, existing UGens are not replaced by
UDOs by default; these remain supplemental features, possibly because
the runtime performance of UDOs is inferior to that of natively
implemented UGens.}. If users wish to define low-level UGens (called
external objects in Max and Pd), they need to set up a development
environment for C or C++.

When UGens are implemented in low-level languages like C, even if the
implementation is open-source, the division of knowledge effectively
forces users (composers) to treat UGens as black boxes. This reliance on
UGens as black boxes reflects and deepens the division of labor between
musicians and scientists that was established in MUSIC, though it can be
interpreted as both a cause and a result.

For example, Puckette, the developer of Max and Pure Data, notes that
the division of labor at IRCAM between researchers, musical assistants
(realizers), and composers has parallels in the current Max ecosystem,
where roles are divided among Max developers themselves, developers of
external objects, and Max users \citep{puckette_47_2020}. As described
in the ethnography of 1980s IRCAM by anthropologist Georgina Born, the
division of labor between fundamental research scientists and composers
at IRCAM was extremely clear. This structure was also tied to the
exclusion of popular music and its associated technologies from IRCAM's
research focus \citep{Born1995}.

However, such divisions are not necessarily the result of differences in
values along the axes analyzed by Born, such as
@@ -467,37 +459,49 @@ particular type of consumer. \citep[p.89]{theberge_any_1997}

This argument can be extended beyond electronic music to encompass
computer-based music in general. For example, media researcher Lori
Emerson noted that although the proliferation of personal computers
began with the vision of a ``metamedium''---tools that users could
modify themselves, as exemplified by Xerox PARC's Dynabook---the vision
was ultimately realized in an incomplete form through devices such as
the Macintosh and iPad, which distanced users from programming by
black-boxing functionality \citep{emerson2014}. In fact, Alan Kay, the
architect behind the Dynabook concept, remarked that while the iPad's
appearance may resemble the ideal he originally envisioned, its lack of
extensibility through programming rendered it merely a device for media
consumption \citep{kay2019}.

Musicians have attempted to resist the consumeristic use of those tools
through appropriation and exploitation \citep{kelly_cracked_2009}.
However, just as circuit bending has had its potential narrowed by a
literal black box (one big closed IC of aggregated functions)
\citep[p225]{inglizian_beyond_2020}, and glitching has shifted from a
methodology to a superficial auditory style
\citep{casconeErrormancyGlitchDivination2011}, capitalism-based
technology expands in a direction that prevents users from misusing it.
Under these circumstances, designing a new programming language does not
merely provide musicians with the means to create new music, but is
itself contextualized as a musicking practice following hacking, an
active reconstruction of the technological infrastructure that is
allowed to be hacked.

\section{Context of Programming Languages for Music After
2000}\label{context-of-programming-languages-for-music-after-2000}

Under this premise, music programming languages developed after the
2000s can be categorized into two distinct directions: those that narrow
the scope of the language's role by introducing alternative abstractions
at a higher level, distinct from the UGen paradigm, and those that
expand the general-purpose capabilities of the language, reducing
black-boxing.

Languages that pursued alternative higher-level abstractions have
evolved alongside the culture of live coding, where performances are
conducted by rewriting code in real time. Since the 2000s, the
activities of the live coding community, including groups such as
TOPLAP, were not only about turning coding itself into a performance but
also served as a resistance against laptop performances that relied on
black-boxed music software. This is evident in the community's
manifesto, which states, ``Obscurantism is dangerous''
\citep{toplap_manifestodraft_2004}.

Languages implemented as clients for SuperCollider, such as \textbf{IXI}
@@ -512,49 +516,49 @@ the abstraction of patterns in TidalCycles is not limited to music but
can also be applied to visual patterns and other outputs, meaning it is
not inherently tied to PCM-based waveform output as the final result.

On the other hand, owing to their high-level design, these languages
often rely on ad-hoc implementations for tasks like sound manipulation
and low-level signal processing, such as effects. McCartney, the
developer of SuperCollider, stated that if general-purpose programming
languages were sufficiently expressive, there would be no need to create
specialized languages \citep{McCartney2002}, which appears reasonable
when considering examples like MUSIGOL. However, in practice, scripting
languages that excel in dynamic program modification face challenges in
modern preemptive operating system (OS) environments. For instance,
dynamic memory management techniques such as garbage collection can
hinder the deterministic execution timing required for real-time
processing \citep{Dannenberg2005}.

Historically, programming languages such as FORTRAN or C served as a
portable way of implementing programs across different architectures.
However, with the proliferation of higher-level languages, programming
in C or C++ has become relatively more difficult, akin to assembly
language in earlier times. Furthermore, considering the challenges of
portability not only across different CPUs but also across diverse host
environments such as OSs and the Web, these languages are no longer as
portable as they once were. Consequently, internal DSLs for music,
including signal processing, have become exceedingly rare, with only a
few examples such as LuaAV\citep{wakefield2010}.

Instead, an approach has emerged to create general-purpose languages
specifically designed for use in music from the ground up. One prominent
example is \textbf{Extempore}, a live programming environment developed
by Sorensen \citep{sorensen_extempore_2018}. Extempore consists of
Scheme, a Lisp-based language, and xtlang, a meta-implementation on top
of Scheme. While xtlang requires users to write hardware-oriented type
signatures similar to those in C, it leverages the compiler
infrastructure LLVM\citep{Lattner} to just-in-time (JIT) compile
signal processing code, including sound manipulation, into machine code
for high-speed execution.

The expressive power of general-purpose languages and compiler
infrastructures such as LLVM has given rise to an approach focused on
designing languages with mathematical formalization that reduces
black-boxing. \textbf{Faust} \citep{Orlarey2009}, for instance, is a
language that retains a graph-based structure akin to UGens but is built
on a formal system called Block Diagram Algebra. Thanks to its
formalization, Faust can be transpiled into various low-level languages,
such as C, C++, or Rust, and can also be used as external objects in Max
or Pure Data.

Languages like \textbf{Kronos} \citep{norilo2015} and \textbf{mimium}
@@ -565,48 +569,48 @@ processing while exploring interactive meta-operations on programs
interoperability with other general-purpose languages
\citep{matsuura_lambda-mmm_2024}.

DSLs are constructed within a double bind; they aim to specialize in a
particular purpose while still providing a certain degree of expressive
freedom through coding. In this context, efforts like Extempore, Kronos,
and mimium are not merely programming languages for music but are also
situated within the broader research context of functional reactive
programming (FRP), which focuses on representing time-varying values in
computation. Most computing models lack an inherent concept of
real-time, operating instead based on discrete computational steps.
Similarly, low-level general-purpose programming languages do not
natively include primitives for real-time concepts. Consequently, the
exploration of computational models tied to time---a domain inseparable
from music---remains vital and has the potential to contribute to the
theoretical foundations of general-purpose programming languages.

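The core FRP idea of treating time-varying values as first-class
entities can be caricatured in a few lines. The following Python sketch
is a toy illustration only; the combinator names
(\passthrough{\lstinline!sine!}, \passthrough{\lstinline!gain!},
\passthrough{\lstinline!mix!}) are invented here and are not drawn from
Extempore, Kronos, or mimium:

```python
import math

# Toy FRP behaviors: a time-varying value is a function from
# continuous time (in seconds) to a sample value.
def sine(freq_hz):
    return lambda t: math.sin(2.0 * math.pi * freq_hz * t)

def gain(amount, behavior):
    return lambda t: amount * behavior(t)

def mix(a, b):
    return lambda t: a(t) + b(t)

# Composition yields another function of time; discretization into
# samples is deferred to the moment of rendering, as in FRP.
tone = mix(gain(0.5, sine(440.0)), gain(0.25, sine(880.0)))
samples = [tone(n / 44100.0) for n in range(8)]  # render at 44.1 kHz
```

The point of the sketch is that the sample rate appears only at the
final rendering step, whereas in the UGen paradigm the discrete step is
baked into every node of the signal graph.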
However, strongly formalized languages come with another trade-off.
Although they allow UGens to be defined without black-boxing,
understanding the design and implementation of these languages often
requires expert knowledge. This can create a deeper division between
language developers and users, in contrast to the many but small and
shallow divisions seen in the multi-language paradigm, such as those
among SuperCollider developers, external UGen developers, client
language developers (e.g., TidalCycles), SuperCollider users, and
client language users.

Although there is no clear solution to this trade-off, one intriguing
idea is the development of self-hosting languages for music---that is,
languages whose compilers are written in the language itself. At first
glance, this may seem impractical. However, by enabling users to learn
and modify the language's mechanisms spontaneously, this approach could
create an environment that fosters deeper engagement and understanding
among users.

\section{Conclusion}\label{conclusion}

This paper has reexamined the history of computer music and music
programming languages with a focus on the universalism of PCM and the
black-boxing tendencies of the UGen paradigm. Historically, it was
expected that the clear division of roles between engineers and
composers would enable the creation of new forms of expression using
computers. Indeed, from the perspective of post-acousmatic discourse,
some scholars, such as Holbrook and Rudi, still consider this division
to be a positive development:

\begin{quote}
Most newer tools abstract the signal processing routines and variables,
@@ -617,13 +621,13 @@ technologies.\citep[p2]{holbrook2022}
\end{quote}

However, this division of labor also creates a shared vocabulary (as
exemplified in the UGen by Mathews) and serves to perpetuate it. By
portraying new technologies as something externally introduced, and by
focusing on the agency of those who create music with computers, the
individuals responsible for building programming environments, software,
protocols, and formats are rendered invisible \citep{sterne_there_2014}.
This leads to an oversight of the indirect power relationships produced
by these infrastructures.

For this reason, future research on programming languages for music must
address how the tools, including the languages themselves, contribute
@@ -632,27 +636,27 @@ practice they enable), as well as the social (im)balances of power they
produce.

The academic value of research on programming languages for music is
often vaguely asserted, using terms such as ``general,'' ``expressive,''
and ``efficient.'' However, it is difficult to substantiate these claims
when processing speed is no longer the primary concern. Thus, as with
the notion of idiomaticity proposed by McPherson et al.
\citep{McPherson2020}, we need to develop and share a vocabulary for
understanding the value judgments we make about music languages.

In a broader sense, the development of programming languages for music
has also expanded to the individual level. Examples include
\textbf{Gwion} by Astor, which is inspired by ChucK and enhances its
abstraction through features such as lambda functions
\citep{astor_gwion_2017}; \textbf{Vult}, a DSP transpiler language
created by Ruiz for his modular synthesizer hardware \citep{Ruiz2020};
and \textbf{Glicol}, a UGen-based live coding environment designed for
the web \citep{lan_glicol_2020}. However, these efforts have not yet
been incorporated into academic discourse.

Conversely, practical knowledge of past languages from the 1960s, as
well as real-time hardware-oriented systems from the 1980s, is gradually
being lost. Although research efforts such as \emph{Inside Computer
Music}, which analyzes historical works of computer music, have begun
\citep{clarke_inside_2020}, an archaeological practice focused on the
construction of computer music systems themselves will also be
|
construction of computer music systems themselves will also be
|
||||||
necessary.
|
necessary.
|
||||||
|
|||||||
10 main.bib
@@ -380,7 +380,7 @@
 year = {2006},
 journal = {Talk given at EMS 2006, Beijing},
 urldate = {2025-01-17},
-howpublished = {\url{https://disis.music.vt.edu/eric/LyonPapers/Do\_We\_Still\_Need\_Computer\_Music.pdf}},
+howpublished = {https://disis.music.vt.edu/eric/LyonPapers/Do\_We\_Still\_Need\_Computer\_Music.pdf},
 file = {/Users/tomoya/Zotero/storage/SK2DXEE8/Do_We_Still_Need_Computer_Music.pdf}
 }

@@ -482,7 +482,7 @@
 author = {Mathews, M.V.},
 year = {1963},
 month = nov,
-journal = {Science,New Series},
+journal = {Science, New Series},
 volume = {142},
 number = {3592},
 eprint = {1712380},
@@ -698,7 +698,7 @@
 booktitle = {New {{Computational Paradigms}} for {{Computer Music}}},
 author = {Orlarey, Yann and Fober, Dominique and Letz, St{\'e}phane and Letz, Stephane},
 year = {2009},
-publisher = {DELATOUR FRANCE},
+publisher = {Delatour France},
 urldate = {2020-03-28},
 file = {/Users/tomoya/Zotero/storage/LB4PIMPY/full-text.pdf}
 }
@@ -708,7 +708,7 @@
 author = {Ostertag, Bob},
 year = {1998},
 urldate = {2025-01-17},
-howpublished = {\url{https://web.archive.org/web/20160312125123/http://bobostertag.com/writings-articles-computer-music-sucks.htm}},
+howpublished = {https://web.archive.org/web/20160312125123/http://bobostertag.com/writings-articles-computer-music-sucks.htm},
 file = {/Users/tomoya/Zotero/storage/9QAGQSVS/writings-articles-computer-music-sucks.html}
 }

@@ -783,7 +783,7 @@
 }

 @inproceedings{Salazar2012,
-title = {{{CHUGENS}}, {{CHUBGRAPHS}}, {{CHUGINS}}: 3 {{TIERS FOR EXTENDING CHUCK}}},
+title = {{{ChuGens}}, {{ChubGraphs}}, {{ChuGins}}: 3 {{Tiers}} for {{Extending ChucK}}},
 booktitle = {International {{Computer Music Conference Proceedings}}},
 author = {Salazar, Spencer and Wang, Ge},
 year = {2012},
94 main.md
@@ -1,32 +1,32 @@
 ## Introduction

-Programming languages and environments for music, for instance, Max, Pure Data, Csound, and SuperCollider, has been referred to as "Computer Music Language"[@McCartney2002;@Nishino2016;@McPherson2020], "Language for Computer Music"[@Dannenberg2018], and "Computer Music Programming Systems"[@Lazzarini2013], though there is no clear consensus on the use of these terms. However, as the shared term "Computer Music" implies, these programming languages are deeply intertwined with the history of technology-driven music, which developed under the premise that "almost any sound can be produced"[@mathews1963,p557] through the use of computers.
+Programming languages and environments for music such as Max, Pure Data, Csound, and SuperCollider, have been referred to as "computer music language"[@McCartney2002;@Nishino2016;@McPherson2020], "language for computer music"[@Dannenberg2018], and "computer music programming systems"[@Lazzarini2013], though there is no clear consensus on the use of these terms. However, as the shared term "computer music" implies, these programming languages are deeply intertwined with the history of technology-driven music, which developed under the premise that "almost any sound can be produced"[@mathews1963,p557] through the use of computers.

-In the early days, when computers existed only in research laboratories and neither displays nor mice existed, creating sound or music with computers was inevitably equivalent to programming. Today, however, programming as a means to produce sound on a computer—rather than employing Digital Audio Workstation (DAW) software like Pro Tools is not popular. In other words, programming languages for music developed after the proliferation of personal computers are the softwares that intentionally chose programming (whether textual or graphical) as their frontend for making sound.
+In the early days, when computers existed only in research laboratories and neither displays nor mice existed, creating sound or music with computers was inevitably equivalent to programming. Today, however, programming as a means to produce sound on a computer—rather than employing digital audio workstation (DAW) software such as Pro Tools—is not popular. In other words, programming languages for music developed after the proliferation of personal computers are the software tools that intentionally chose programming (whether textual or graphical) as their frontend for making sound.

-Since the 1990s, the theoretical development of programming languages and the various constraints required for real-time audio processing have significantly increased the specialized knowledge necessary for developing programming languages for music today. Furthermore, some languages developed after the 2000s are not necessarily aimed at pursuing new forms of musical expression. There is still no unified perspective on how the value of those languages should be evaluated.
+Since the 1990s, the theoretical development of programming languages and the various constraints required for real-time audio processing have significantly increased the specialized knowledge necessary for developing programming languages for music today. Furthermore, some languages developed after the 2000s are not necessarily aimed at pursuing new forms of musical expression, and there is still no unified perspective on how their values should be evaluated.

-In this paper, a critical historical review is conducted by drawing on discussions from sound studies alongside existing surveys, aiming to consider programming languages for music independently from computer music as the specific genre.
+This paper is a critical historical review that draws on discussions from sound studies and existing surveys to examine programming languages for music as distinct from computer music as the specific genre.

 ### Use of the Term "Computer Music"

-The term "Computer Music," despite its literal and potentially broad meaning, has been noted for being used within a narrowly defined framework tied to specific styles or communities, as represented in Ostertag's *Why Computer Music Sucks*[@ostertag1998] since the 1990s.
+Since the 1990s, the term "computer music," despite its literal and potentially broad meaning, has been noted for being used within a narrowly defined framework tied to specific styles or communities, as explored in Ostertag's *Why Computer Music Sucks*[@ostertag1998].

-As Eric Lyon observed nearly two decades ago, it is now nearly impossible to imagine a situation in which computers are not involved at any stage from the production to experience of music[@lyon_we_2006, p1]. The necessity of using the term "Computer Music" to describe academic contexts has consequently diminished.
+As Lyon observed nearly two decades ago, it is now nearly impossible to imagine a situation in which computers are not involved at any stage from the production to experience of music[@lyon_we_2006, p1]. The necessity of using the term "computer music" in academic contexts has consequently diminished.

-Holbrook and Rudi extended Lyon's discussion by proposing the use of frameworks like Post-Acousmatic[@adkins2016] to redefine "Computer Music." Their approach incorporates the tradition of pre-computer experimental/electronic music, situating it as part of the broader continuum of technology-based or technology-driven music[@holbrook2022].
+Holbrook and Rudi extended Lyon's discussion by proposing the use of frameworks such as post-acousmatic[@adkins2016] to redefine computer music. Their approach situates the tradition of pre-computer experimental/electronic music as part of the broader continuum of technology-based or technology-driven music[@holbrook2022].

-While the strict definition of Post-Acousmatic music is deliberately left open, one of its key aspects is the expansion of music production from institutional settings to individuals and as well as the diversification of technological usage[@adkins2016, p113]. However, while the Post-Acousmatic discourse integrates the historical fact that declining computer costs and increasing accessibility beyond laboratories have enabled diverse musical expressions, it still marginalizes much of the music that is "just using computers" and fails to provide insights into this divided landscape.
+Although the strict definition of post-acousmatic music is deliberately left open, one of its key aspects is the expansion of music production from institutional settings to individuals, as well as the diversification of technological usage[@adkins2016, p113]. However, despite integrating the historical fact that declining computer costs and increasing accessibility beyond laboratories have enabled diverse musical expressions, the post-acousmatic discourse still marginalizes much of the music that is "just using computers" and fails to provide insights into this divided landscape.

-<!-- Lyon argues that defining computer music simply as music created with computers is too permissive, while defining it as music that could not exist without computers is too strict. He highlights the difficulty of considering instruments that use digital simulations, such as virtual analog synthesizers, within these definitions. Furthermore, --> Lyon argues that the term "computer music" is a style-agnostic definition almost like "piano music," implying that it ignores the style and form inside music produced by the instrument.
+<!-- Lyon argues that defining computer music simply as music created with computers is too permissive, while defining it as music that could not exist without computers is too strict. He highlights the difficulty of considering instruments that use digital simulations, such as virtual analog synthesizers, within these definitions. Furthermore, -->

-However, one of the defining characteristics of computers as a medium lies in their ability to treat musical styles themselves as subjects of meta-manipulation through simulation and modeling. When creating instruments with computers or when using such instruments, sound production involves programming—manipulating symbols embedded in a particular musical culture. This recursive embedding of language and recognition, which construct that musical culture, into the resulting music is a process that goes beyond what is possible with acoustic instruments or analog instruments. Magnusson refers to this characteristic of digital instruments as "Epistemic Tools" and points out that the computer serves to "create a snapshot of musical theory, freezing musical culture in time" [@Magnusson2009, p.173] through formalization.
+Lyon argues that the term "computer music" is a style-agnostic definition, almost like "piano music," implying that it ignores the style and form of music produced by the instrument. However, one of the defining characteristics of computers as a medium lies in their ability to treat musical styles themselves as subjects of meta-manipulation through simulation and modeling. When creating instruments with computers or using such instruments, sound production involves programming—manipulating symbols embedded in a particular musical culture. This recursive embedding of language and recognition, which construct that musical culture, into the resulting music is a process that exceeds what is possible with acoustic instruments or analog instruments. Magnusson refers to this characteristic of digital instruments as "epistemic tools" and points out that the computer serves to "create a snapshot of musical theory, freezing musical culture in time" [@Magnusson2009, p.173] through formalization.

-Today, many people use computers for music production not because they consciously leverage the uniqueness of the meta-medium, but simply because there are no quicker or more convenient alternatives available. Even so, within a musical culture where computers are used as a reluctant choice, musicians are inevitably influenced by the underlying infrastructures like software, protocols, and formats. As long as the history of programming languages for music remains intertwined with the history of computer music as it relates to specific genres or communities, it becomes difficult to analyze music created with computers as merely a passive means.
+Today, many people use computers for music production not because they consciously leverage the uniqueness of the meta-medium, but simply because there are no quicker or more convenient alternatives available. Even so, within a musical culture where computers are used out of necessity rather than preference, musicians are inevitably influenced by the underlying infrastructures such as software, protocols, and formats. As long as the history of programming languages for music remains intertwined with the history of computer music as it relates to specific genres or communities, it will be difficult to analyze music created with computers as merely a passive means.

-In this paper, the history of programming languages for music is reexamined with an approach that, in contrast to Lyon, adopts a radically style-agnostic perspective. Rather than focusing on what has been created with these tools, the emphasis is placed on how these tools themselves have been constructed. The paper centers on the following two topics: 1. A critique of the universality of sound representation using pulse-code modulation (PCM)—the foundational concept underlying most of today's sound programming, by referencing early attempts at sound generation using electronic computers. 2. An examination of the MUSIC-N family, the origin of PCM-based sound programming, to highlight that its design varies significantly across systems from the perspective of today's programming language design and that it has evolved over time into a black box, eliminating the need for users to understand its internal workings.
+In this paper, the history of programming languages for music is reexamined with an approach that, unlike Lyon's, adopts a radically style-agnostic perspective. Rather than focusing on what has been created with these tools, the emphasis is placed on how these tools themselves have been constructed. The paper centers on the following two topics: 1. A critique of the universality of sound representation using pulse-code modulation (PCM)—the foundational concept underlying most of today's sound programming, by referencing early attempts at sound generation using electronic computers. 2. An examination of the MUSIC-N family, the origin of PCM-based sound programming, to highlight that its design varies significantly across systems from the perspective of today's programming language design and that it has evolved over time into a black box, eliminating the need for users to understand its internal workings.

-Ultimately, the paper concludes that programming languages for music developed since the 2000s are not solely aimed at creating new music but also serve as alternatives to the often-invisible technological infrastructures surrounding music, such as formats and protocols. By doing so, the paper proposes new perspectives for the historical study of music created with computers.
+Ultimately, the paper concludes that programming languages for music developed since the 2000s are not solely aimed at creating new music but also serve as alternatives to the often-invisible technological infrastructures surrounding music such as formats and protocols. Thus, the paper proposes new perspectives for the historical study of music created with computers.

 ## PCM and Early Computer Music

@@ -36,33 +36,33 @@ The earliest experiments with sound generation on computers in the 1950s involve

 For instance, Louis Wilson, who was an engineer of the BINAC in the UK, noticed that an AM radio placed near the computer could pick up weak electromagnetic waves generated during the switching of vacuum tubes, producing sounds. He leveraged this phenomenon by connecting a speaker and a power amplifier to the computer's circuit to assist with debugging. Frances Elizabeth Holberton took this a step further by programming the computer to generate pulses at desired intervals, creating melodies in 1949[@woltman1990].

-Also, some computers at this time, such as the CSIR Mark I (CSIRAC) in Australia often had primitive "hoot" instructions that emit a single pulse to a speaker. Early sound generation using computers, including the BINAC and CSIR Mark I, primarily involved playing melodies of existing music.
+Further, some computers at this time, such as the CSIR Mark I (CSIRAC) in Australia often had primitive "hoot" instructions that emitted a single pulse to a speaker. Early sound generation using computers, including the BINAC and CSIR Mark I, primarily involved playing melodies of existing music.
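The mechanism behind these pulse melodies is simple enough to sketch: a "hoot"-style instruction emits a single click, and repeating it every 1/f seconds is heard as a tone at frequency f. The following illustrative sketch (not drawn from any historical machine; the tick clock rate and function name are hypothetical) converts a melody into such a pulse schedule:

```python
def pulse_schedule(melody, clock_hz=1_000_000):
    """For each (freq_hz, duration_s) note, return the tick interval
    between successive pulses and the number of pulses to emit.
    clock_hz is an arbitrary, hypothetical tick clock."""
    schedule = []
    for freq, dur in melody:
        ticks_between = round(clock_hz / freq)  # one waveform period, in ticks
        n_pulses = round(dur * freq)            # periods that fit in the note
        schedule.append((ticks_between, n_pulses))
    return schedule

# A4 (440 Hz) for 0.25 s, then E5 (659 Hz) for 0.5 s.
sched = pulse_schedule([(440, 0.25), (659, 0.5)])
```

Pitch here is determined purely by timing, which is why such machines could play recognizable melodies despite having no amplitude or timbre control at all.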

-However, not all sound generation at this time was merely involved the reproduction of existing music. Doornbusch highlights experiments on the Pilot ACE (the Prototype for Automatic Computing Engine) in the UK, which utilized acoustic delay line memory to produce unique sounds[@doornbusch2017, pp.303-304]. Acoustic delay line memory, used as the main memory in early computers such as the BINAC and the CSIR Mark I, employed the feedback of pulses traveling through mercury via a speaker and microphone setup to retain data. Donald Davis, an engineer on the ACE project, described the sounds it produced as follows[@davis_very_1994, pp.19-20]:
+However, not all sound generation at this time was merely the reproduction of existing music. Doornbusch highlights experiments on the Pilot ACE (the Prototype for Automatic Computing Engine) in the UK, which utilized acoustic delay line memory to produce unique sounds[@doornbusch2017, pp.303-304]. Acoustic delay line memory, used as the main memory in early computers, such as the BINAC and the CSIR Mark I, employed the feedback of pulses traveling through mercury via a speaker and microphone setup to retain data. Donald Davis, an engineer on the ACE project, described the sounds it produced as follows[@davis_very_1994, pp.19-20]:

 > The Ace Pilot Model and its successor, the Ace proper, were both capable of composing their own music and playing it on a little speaker built into the control desk. I say composing because no human had any intentional part in choosing the notes. The music was very interesting, though atonal, and began by playing rising arpeggios: these gradually became more complex and faster, like a developing fugue. They dissolved into colored noise as the complexity went beyond human understanding.

 <!-- > Loops were always multiples of 32 microseconds long, so notes had frequencies which were submultiples of 31.25 KHz. The music was based on a very strange scale, which was nothing like equal tempered or harmonic, but was quite pleasant. -->

-This music arose unintentionally during program optimization and was made possible by the "misuse" of switches installed for debugging delay line memory. Media scholar Miyazaki described the practice of listening to sounds generated by algorithms and their bit patterns, integrated into programming, as "Algo- *rhythmic* Listening"[@miyazaki2012].
+This music arose unintentionally during program optimization and was made possible by the "misuse" of switches installed for debugging delay line memory. Media scholar Miyazaki described the practice of listening to sounds generated by algorithms and their bit patterns, integrated into programming, as "Algo-*rhythmic* Listening"[@miyazaki2012].

-Doornbusch warns against ignoring these early computer music practices simply because they did not directly influence subsequent research[@doornbusch2017, p.305]. Indeed, the sounds produced by the Pilot ACE challenge the post-acousmatic historical narrative, which suggests that computer music transitioned from being democratized in closed electro-acoustic music laboratories to individual musicians.
+Doornbusch warns against ignoring these early computer music practices simply because they did not directly influence subsequent research[@doornbusch2017, p.305]. Indeed, the sounds produced by the Pilot ACE challenge the post-acousmatic historical narrative, which suggests that computer music transitioned from being democratized in closed electro-acoustic music laboratories to being embraced by individual musicians.

-This is because the sounds generated by the Pilot ACE were not created by musical experts, nor were they solely intended for debugging purposes. Instead, they were programmed with the goal of producing interesting sounds. Moreover, these sounds were tied to the hardware of the acoustic delay line memory—a feature that was likely difficult to replicate, even in today's sound programming environments.
+This is because the sounds generated by the Pilot ACE were not created by musical experts, nor were they solely intended for debugging purposes. Instead, they were programmed with the goal of producing interesting sounds. Moreover, these sounds were tied to the hardware of the acoustic delay line memory—a feature that is likely difficult to replicate, even in today's sound programming environments.

-Similarly, in the 1960s at MIT, Peter Samson took advantage of the debugging speaker on the TX-0, a machine that had become outdated and was freely available for students to use. He conducted experiments in which he played melodies, such as Bach fugues, using "hoot" instruction[@levy_hackers_2010]. Samson’s experiments with the TX-0 later evolved into the creation of a program that allowed melodies to be described using text within MIT.
+Similarly, in the 1960s at the Massachusetts Institute of Technology (MIT), Peter Samson exploited the debugging speaker on the TX-0, a machine that had become outdated and was freely available for students to use. He conducted experiments in which he played melodies, such as Bach fugues, using the "hoot" instruction[@levy_hackers_2010].

 Building on this, Samson developed a program called the Harmony Compiler for the DEC PDP-1, which was derived from the TX-0. This program gained significant popularity among MIT students. Around 1972, Samson began surveying various digital synthesizers that were under development at the time and went on to create a system specialized for computer music. The resulting Samson Box was used at Stanford University's CCRMA (Center for Computer Research in Music and Acoustics) for over a decade until the early 1990s and became a tool for many composers to create their works [@loy_life_2013]. Considering his example, it is not appropriate to separate the early experiments in sound generation by computers from the history of computer music solely because their initial purpose was debugging.

 ### Acousmatic Listening, the premise of the Universality of PCM

-One of the reasons why MUSIC led to subsequent advancements in research was not simply that it was developed early, but because it was the first to implement, but because it was the first to implement sound representation on a computer based on **pulse-code modulation (PCM)**, which theoretically can generate "almost any sound".
+One of the reasons why MUSIC led to subsequent advancements in research was not simply that it was developed early, but because it was the first to implement sound representation on a computer based on PCM, which theoretically can generate "almost any sound".

 PCM, the foundational digital sound representation today, involves sampling audio waveforms at discrete intervals and quantizing the sound pressure at each interval as discrete numerical values.
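The two operations named in that definition, sampling and quantization, fit in a few lines. The following illustrative sketch (not drawn from MUSIC or the paper; the sample rate and function name are arbitrary choices) encodes a signal as 16-bit PCM:

```python
import math

def pcm_encode(signal, sample_rate=8000, duration=0.01, bits=16):
    """Encode a continuous signal as PCM: sample it on a discrete time
    grid, then quantize each sample to a signed integer level."""
    max_level = 2 ** (bits - 1) - 1  # 32767 for 16-bit audio
    samples = []
    for n in range(int(sample_rate * duration)):
        t = n / sample_rate                   # sampling at discrete intervals
        x = signal(t)                         # signal assumed to lie in [-1, 1]
        samples.append(round(x * max_level))  # quantization to integer levels
    return samples

# 10 ms of a 440 Hz sine tone at an (arbitrary) 8 kHz sample rate.
pcm = pcm_encode(lambda t: math.sin(2 * math.pi * 440 * t))
```

Everything downstream of this representation — storage, mixing, filtering — operates on the resulting integer stream, which is precisely why PCM could serve as a common substrate for "almost any sound".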
|
|
||||||
The issue with the universalism of PCM in the history of computer music is inherent in the concept of Acousmatic Listening, which serves as a premise for Post-Acousmatic. Acousmatic, introduced by Piegnot as a listening style for tape music such as musique concrète and later theorized by Schaeffer[@adkins2016,p106], refers to a mode of listening in which the listener refrains from imagining a specific sound source. This concept has been widely applied in theories of listening to recorded sound, including Michel Chion’s analysis of sound design in film.
|
The problem with the universalism of PCM in the history of computer music is inherent in the concept of acousmatic listening, which serves as a premise for post-acousmatic. Acousmatic listening, introduced by Piegnot as a listening style for tape music, such as musique concrète, and later theorized by Schaeffer[@adkins2016,p106], refers to a mode of listening in which the listener refrains from imagining a specific sound source. This concept has been widely applied in theories of listening to recorded sound, including Michel Chion’s analysis of sound design in film.
|
||||||
|
|
||||||
However, as sound studies scholar Jonathan Sterne has pointed out, discourses surrounding acousmatic listening often work to delineate pre-recording auditory experiences as "natural" by contrast[^husserl]. This implies that prior to the advent of sound reproduction technologies, listening was unmediated and holistic—a narrative that obscures the constructed nature of these assumptions.
|
However, as sound studies scholar, Jonathan Sterne, has observed, discourses surrounding acousmatic listening often work to delineate pre-recording auditory experiences as "natural" by contrast[^husserl]. This implies that prior to the advent of sound reproduction technologies, listening was unmediated and holistic—a narrative that obscures the constructed nature of these assumptions.
|
||||||
|
|
||||||
[^husserl]: Sterne later critiques the phenomenological basis of acousmatic listening, which presupposes an idealized, intact body as the listening subject. He proposes a methodology of political phenomenology centered on impairment, challenging these normative assumptions [@sterne_diminished_2022]. Discussions of universality in computer music should also address ableism, particularly in relation to recording technologies and auditory disabilities.
Incidentally, the actual implementation of PCM in MUSIC I only allowed for monophonic triangle waves with controllable volume, pitch, and timing[@Mathews1980]. Would anyone today describe such a system as capable of producing "almost any sound"?
Even when considering more contemporary applications, processes like ring modulation (RM), amplitude modulation (AM), or distortion often generate aliasing artifacts unless proper oversampling is applied. These artifacts occur because PCM, while universally suitable for reproducing recorded sound, is not inherently versatile as a medium for generating new sounds. As Puckette has argued, alternative representations, such as representation by a sequence of linear segments or physical modeling synthesis, offer other possibilities[@puckette2015]. PCM is therefore not a completely universal tool for creating sound.

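
The folding behavior behind such artifacts can be shown in a few lines of Python. This is a minimal sketch (the sample rate and frequencies are arbitrary illustrative values): ring modulation of two sinusoids produces their sum frequency, and when that sum exceeds the Nyquist limit, the sampled result is indistinguishable from a mirrored, phase-inverted frequency.

```python
import math

fs = 48_000                  # sample rate
f1, f2 = 15_000, 13_000      # two input frequencies
f_sum = f1 + f2              # ring modulation produces f1+f2 (and f1-f2)

# 28 kHz exceeds the 24 kHz Nyquist limit and folds back to 20 kHz.
alias = fs - f_sum

# The sampled 28 kHz sinusoid is identical, sample by sample, to a
# phase-inverted 20 kHz sinusoid: sin(2*pi*f_sum*n/fs) == -sin(2*pi*alias*n/fs).
for n in range(64):
    a = math.sin(2 * math.pi * f_sum * n / fs)
    b = -math.sin(2 * math.pi * alias * n / fs)
    assert abs(a - b) < 1e-9
```

Oversampling works by raising `fs` before the nonlinear operation so that `f_sum` stays below the (temporarily higher) Nyquist frequency, then filtering and downsampling.
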
## What Does the Unit Generator Hide?
Beginning with version III, MUSIC took the form of a block diagram compiler that processes two input sources: a score language, which represents a list of time-varying parameters, and an orchestra language, which describes the connections between **unit generators (UGens)**, such as oscillators and filters. In this paper, the term "UGen" refers to a signal processing module whose implementation is either not open or written in a language different from the one used by the user.

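
The essence of a UGen, a stateful box exposing only a per-sample interface, can be illustrated with a minimal Python sketch (a conceptual analogy for this paper's definition, not the API of any actual system):

```python
class Phasor:
    """Minimal UGen-style oscillator: a ramp from 0 to 1 at a given frequency.

    The phase state is hidden inside the object; callers only see the
    per-sample process() interface, as with UGens in MUSIC-N descendants.
    """

    def __init__(self, freq, samplerate):
        self.increment = freq / samplerate
        self.phase = 0.0

    def process(self):
        out = self.phase
        self.phase = (self.phase + self.increment) % 1.0
        return out

ramp = Phasor(freq=1.0, samplerate=4)
samples = [ramp.process() for _ in range(5)]  # 0.0, 0.25, 0.5, 0.75, 0.0
```

Whether such internal state (here, `phase`) is visible to and modifiable by the user is exactly what distinguishes open signal-processing languages from black-boxed UGen libraries.
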
In the context of computer music research, the MUSIC family achieved success in performing PCM-based sound synthesis, but this success came with the establishment of a division of labor between professional musicians and computer engineers through the development of domain-specific languages. Mathews explained that he developed a compiler for MUSIC III in response to requests from many composers for additional features in MUSIC II, such as envelopes and vibrato, while also ensuring that the program would not be restricted to a specialized form of musical expression[@mathews_max_2007,13:10-17:50]. He repeatedly stated that his role was that of a scientist rather than a musician:

> When we first made these music programs the original users were not composers; they were the psychologist Guttman, John Pierce, and myself, who are fundamentally scientists. We wanted to have musicians try the system to see if they could learn the language and express themselves with it. So we looked for adventurous musicians and composers who were willing to experiment.[@Mathews1980, p17]
This clear delineation of roles between musicians and scientists became one of the defining characteristics of post-MUSIC computer music research. Paradoxically, although computer music research aimed to create sounds never heard before, it also paved the way for further research by allowing musicians to focus on composition without having to understand the cumbersome work of programming.

### Example: Hiding Internal State Variables in Signal Processing
Although the MUSIC N series shares a common workflow of using a score language and an orchestra language, the actual implementation of each programming language varies significantly, even within the series.
One notable but often overlooked example is MUSIGOL, a derivative of MUSIC IV [@innis_sound_1968]. In MUSIGOL, not only the system itself but also the user-defined score and orchestra were written entirely in ALGOL 60. Similar to today's Processing or Arduino, MUSIGOL is one of the earliest internal domain-specific languages (DSLs) for music, meaning that it was implemented as a library[^mus10]. (According to the definition given in this paper, MUSIGOL therefore does not qualify as a language that uses UGens.)

[^mus10]: While MUS10, used at Stanford University, was not an internal DSL, it was created by modifying an existing ALGOL parser [@loy1985, p.248].
The level of abstraction deemed intuitive for musicians varied across different languages. Consider, for example, a second-order band-pass filter defined by the difference equation

$$O_n = I_1 \cdot S_n + I_2 \cdot O_{n-1} - I_3 \cdot O_{n-2}$$
In MUSIC V, this band-pass filter can be used as shown in Listing \ref{lst:musicv} [@mathews_technology_1969, p.78]. Here, `I1` represents the input bus, and `O` is the output bus. The parameters `I2` and `I3` correspond to the normalized values of the coefficients $I_2$ and $I_3$, divided by $I_1$ (as a result, the overall gain of the filter can be greater or less than 1). The parameters `Pi` and `Pj` are normally used to receive parameters from the score, specifically among the available `P0` to `P30`. In this case, however, these parameters are repurposed as general-purpose memory to temporarily store feedback signals. Similarly, other UGens, such as oscillators, reuse note parameters to handle operations like phase accumulation. As a result, users needed to manually calculate feedback gains based on the desired frequency characteristics[^musicv], and they also had to account for at least two samples of state memory.

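
The bookkeeping this imposed on users can be made explicit in a short Python sketch of the same difference equation (an illustration, not MUSIC V code): the two previous output samples that `Pi` and `Pj` had to hold appear here as explicit state variables.

```python
def flt(signal, i1, i2, i3):
    """Two-pole filter: O_n = I1*S_n + I2*O_{n-1} - I3*O_{n-2}."""
    o_prev1 = o_prev2 = 0.0  # the two feedback samples (Pi and Pj in MUSIC V)
    out = []
    for s in signal:
        o = i1 * s + i2 * o_prev1 - i3 * o_prev2
        out.append(o)
        o_prev1, o_prev2 = o, o_prev1
    return out

# Impulse response with illustrative coefficients:
impulse_response = flt([1.0, 0.0, 0.0, 0.0], i1=1.0, i2=0.5, i3=0.25)
# [1.0, 0.5, 0.0, -0.125]
```
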
[^musicv]: It is said that a preprocessing feature called `CONVT` could be used to transform frequency characteristics into coefficients [@mathews_technology_1969, p77].
On the other hand, in the later MUSIC 11 and its successor Csound, both by Barry Vercoe, the band-pass filter is defined as a UGen named `reson`. This UGen takes four parameters: the input signal, center cutoff frequency, bandwidth, and Q factor[@vercoe_computer_1983, p248]. Unlike previous implementations, users no longer need to calculate coefficients manually, nor do they need to be aware of the two-sample memory space. However, in MUSIC 11 and Csound, it is possible to implement this band-pass filter from scratch as a user-defined opcode (UDO), as shown in Listing \ref{lst:reson}. Vercoe emphasized that while signal processing primitives should allow for low-level operations, such as single-sample feedback, and eliminate black boxes, it is equally important to provide high-level modules that avoid unnecessary complexity ("avoid the clutter") when users do not need to understand the internal details [@vercoe_computer_1983, p.247].

~~~{#lst:musicv caption="Example of the use of FLT UGen in MUSIC V."}
FLT I1 O I2 I3 Pi Pj;
~~~

~~~{#lst:reson caption="Example of scratch implementation and built-in operation of RESON UGen respectively, in MUSIC 11. Retrieved from the original paper. (Comments are omitted owing to space restrictions.)"}
instr 1
la1 init 0
la2 init 0
...
a1 reson a1,p5,p6,1
endin
~~~

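
The convenience that `reson` provides can be appreciated by sketching the kind of coefficient calculation it hides. The following Python fragment uses a standard textbook two-pole resonator design (an assumption for illustration; Csound's actual `reson` may differ in its scaling), deriving the feedback coefficients of a second-order recurrence from a musically meaningful center frequency and bandwidth:

```python
import cmath
import math

def resonator_coefficients(freq, bandwidth, samplerate):
    # Classic two-pole resonator design: pole radius from the bandwidth,
    # pole angle from the center frequency.
    r = math.exp(-math.pi * bandwidth / samplerate)
    b1 = 2.0 * r * math.cos(2.0 * math.pi * freq / samplerate)
    b2 = -r * r
    gain = 1.0 - r  # rough normalization of the peak gain
    return gain, b1, b2

def magnitude_response(f, gain, b1, b2, samplerate):
    # |H(z)| at z = e^{jw} for y[n] = gain*x[n] + b1*y[n-1] + b2*y[n-2]
    z = cmath.exp(1j * 2.0 * math.pi * f / samplerate)
    return abs(gain / (1.0 - b1 / z - b2 / (z * z)))

g, b1, b2 = resonator_coefficients(freq=1000.0, bandwidth=100.0, samplerate=48_000.0)

# The response peaks near the requested center frequency:
assert magnitude_response(1000.0, g, b1, b2, 48_000.0) > \
       magnitude_response(5000.0, g, b1, b2, 48_000.0)
```

A high-level UGen performs this translation invisibly on every note; a scratch implementation forces the user to confront it, which is precisely the trade-off Vercoe describes.
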
In succeeding environments that inherit the UGen paradigm, such as Pure Data [@puckette_pure_1997], Max (whose signal processing functionalities were ported from Pure Data as MSP), SuperCollider [@mccartney_supercollider_1996], and ChucK [@wang_chuck_2015], primitive UGens are implemented in general-purpose languages such as C or C++[^chugen]. If users wish to define low-level UGens (called external objects in Max and Pd), they need to set up a development environment for C or C++.

[^chugen]: ChucK later introduced ChuGen, a similar extension to Csound’s UDO, allowing users to define UGens within the ChucK language itself [@Salazar2012]. However, in both Csound and ChucK, UDOs do not replace all existing UGens by default; they remain supplemental features, possibly because the runtime performance of UDOs is inferior to that of natively implemented UGens.

When UGens are implemented in low-level languages like C, even if the implementation is open-source, the division of knowledge effectively forces users (composers) to treat UGens as black boxes. This reliance on UGens as black boxes reflects and deepens the division of labor between musicians and scientists that was established in MUSIC, though it can be interpreted as both a cause and a result.

For example, Puckette, the developer of Max and Pure Data, notes that the division of labor at IRCAM between researchers, musical assistants (realizers), and composers has parallels in the current Max ecosystem, where roles are divided among Max developers themselves, developers of external objects, and Max users [@puckette_47_2020]. As described in the ethnography of 1980s IRCAM by anthropologist Georgina Born, the division of labor between fundamental research scientists and composers at IRCAM was extremely clear. This structure was also tied to the exclusion of popular music and its associated technologies from IRCAM’s research focus [@Born1995].

However, such divisions are not necessarily the result of differences in values along the axes analyzed by Born, such as modernist/postmodernist/populist or low-tech/high-tech distinctions[^wessel]. This is because the black-boxing of technology through the division of knowledge occurs in popular music as well. Paul Théberge pointed out that the "democratization" of synthesizers in the 1980s was achieved through the concealment of technology, which transformed musicians as creators into consumers.
> Lacking adequate knowledge of the technical system, musicians increasingly found themselves drawn to prefabricated programs as a source of new sound material. (...)it also suggests a reconceptualization on the part of the industry of the musician as a particular type of consumer. [@theberge_any_1997, p.89]
This argument can be extended beyond electronic music to encompass computer-based music in general. For example, media researcher Lori Emerson noted that although the proliferation of personal computers began with the vision of a "metamedium"—tools that users could modify themselves, as exemplified by Xerox PARC's Dynabook—the vision was ultimately realized in an incomplete form through devices such as the Macintosh and iPad, which distanced users from programming by black-boxing functionality [@emerson2014]. In fact, Alan Kay, the architect behind the Dynabook concept, remarked that while the iPad's appearance may resemble the ideal he originally envisioned, its lack of extensibility through programming renders it merely a device for media consumption [@kay2019].

Musicians have attempted to resist the consumeristic use of those tools through appropriation and exploitation [@kelly_cracked_2009]. However, just as circuit bending has seen its potential narrowed by a literal black box, one big closed IC of aggregated functions [@inglizian_beyond_2020,p225], and glitching has shifted from a methodology to a superficial auditory style [@casconeErrormancyGlitchDivination2011], capitalism-based technology expands in a direction that prevents users from misusing it. Under these circumstances, designing a new programming language does not merely provide musicians with the means to create new music; it is itself contextualized as a musicking practice following hacking, an active reconstruction of the technological infrastructure that is allowed to be hacked.

## Context of Programming Languages for Music After 2000
Under this premise, music programming languages developed after the 2000s can be categorized into two distinct directions: those that narrow the scope of the language's role by introducing alternative abstractions at a higher level, distinct from the UGen paradigm, and those that expand the general-purpose capabilities of the language, reducing black-boxing.

Languages that pursued alternative higher-level abstractions have evolved alongside the culture of live coding, where performances are conducted by rewriting code in real time. The activities of the live coding community, including groups such as TOPLAP since the 2000s, were not only about turning coding itself into a performance but also served as a resistance against laptop performances that relied on black-boxed music software. This is evident in the community's manifesto, which states, "Obscurantism is dangerous" [@toplap_manifestodraft_2004].

Languages implemented as clients for SuperCollider, such as **IXI** (on Ruby) [@Magnusson2011], **Sonic Pi** (on Ruby), **Overtone** (on Clojure) [@Aaron2013], **TidalCycles** (on Haskell) [@McLean2014], and **FoxDot** (on Python) [@kirkbride2016foxdot], leverage the expressive power of more general-purpose programming languages. While embracing the UGen paradigm, they enable high-level abstractions for previously difficult-to-express elements like note values and rhythm. For example, the abstraction of patterns in TidalCycles is not limited to music but can also be applied to visual patterns and other outputs, meaning it is not inherently tied to PCM-based waveform output as the final result.
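
The idea of a pattern as something queried over spans of time, which underlies TidalCycles, can be caricatured in a few lines of Python (a toy sketch of the concept, not TidalCycles' actual representation, which uses rational time arcs):

```python
def cat(values):
    """A pattern cycling through values, one value per cycle.

    A pattern is modeled as a function from a span of whole cycles to a
    list of (start, end, value) events, so querying, rather than sample
    generation, is the basic operation.
    """
    def query(start, end):
        return [(c, c + 1, values[c % len(values)]) for c in range(start, end)]
    return query

drums = cat(["bd", "sn"])
events = drums(0, 4)
# [(0, 1, 'bd'), (1, 2, 'sn'), (2, 3, 'bd'), (3, 4, 'sn')]
```

Because the query result is plain data, the same pattern could just as well drive visuals or any other output, which is why such abstractions are not inherently tied to PCM waveform output.
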
On the other hand, owing to their high-level design, these languages often rely on ad-hoc implementations for tasks like sound manipulation and low-level signal processing, such as effects. McCartney, the developer of SuperCollider, stated that if general-purpose programming languages were sufficiently expressive, there would be no need to create specialized languages [@McCartney2002], which appears reasonable when considering examples like MUSIGOL. However, in practice, scripting languages that excel in dynamic program modification face challenges in modern preemptive operating system (OS) environments. For instance, dynamic memory management techniques such as garbage collection can hinder the deterministic execution timing required for real-time processing [@Dannenberg2005].

Historically, programming languages such as FORTRAN or C served as a portable way of implementing programs across different architectures. However, with the proliferation of higher-level languages, programming in C or C++ has become relatively more difficult, akin to assembly language in earlier times. Furthermore, considering the challenges of portability not only across different CPUs but also across diverse host environments such as OSs and the Web, these languages are no longer as portable as they once were. Consequently, internal DSLs for music, including signal processing, have become exceedingly rare, with only a few examples such as LuaAV[@wakefield2010].

Instead, an approach has emerged to create general-purpose languages specifically designed for use in music from the ground up. One prominent example is **Extempore**, a live programming environment developed by Sorensen [@sorensen_extempore_2018]. Extempore consists of Scheme, a Lisp-based language, and xtlang, a meta-implementation on top of Scheme. While xtlang requires users to write hardware-oriented type signatures similar to those in C, it leverages the LLVM compiler infrastructure [@Lattner] to just-in-time (JIT) compile signal processing code, including sound manipulation, into machine code for high-speed execution.

The expressive power of general-purpose languages and compiler infrastructures such as LLVM has given rise to an approach focused on designing languages with mathematical formalization that reduces black-boxing. **Faust** [@Orlarey2009], for instance, is a language that retains a graph-based structure akin to UGens but is built on a formal system called Block Diagram Algebra. Thanks to its formalization, Faust can be transpiled into various low-level languages, such as C, C++, or Rust, and can also be used as external objects in Max or Pure Data.

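
The flavor of such an algebra can be gestured at in Python by treating per-sample functions as blocks and composition operators as higher-order functions (a loose conceptual analogy to Faust's sequential and parallel composition, not its actual semantics or syntax):

```python
def seq(f, g):
    # sequential composition: route the output of f into g
    return lambda x: g(f(x))

def par(f, g):
    # parallel composition: run f and g side by side on separate inputs
    return lambda x, y: (f(x), g(y))

def gain(amount):
    return lambda x: x * amount

def offset(amount):
    return lambda x: x + amount

block = seq(gain(0.5), offset(1.0))      # halve the sample, then add 1
stereo = par(gain(0.5), gain(0.25))      # independent left/right gains
```

Because every block is a pure function, a composed diagram is itself a block, which is the closure property that makes this style of formalization amenable to transpilation.
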
Languages like **Kronos** [@norilo2015] and **mimium** [@matsuura_mimium_2021], which are based on the more general computational model of lambda calculus, focus on PCM-based signal processing while exploring interactive meta-operations on programs [@Norilo2016] and balancing self-contained semantics with interoperability with other general-purpose languages [@matsuura_lambda-mmm_2024].
|
Languages like **Kronos** [@norilo2015] and **mimium** [@matsuura_mimium_2021], which are based on the more general computational model of lambda calculus, focus on PCM-based signal processing while exploring interactive meta-operations on programs [@Norilo2016] and balancing self-contained semantics with interoperability with other general-purpose languages [@matsuura_lambda-mmm_2024].
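
As a rough illustration of the functional style these languages build on, the following hypothetical JavaScript sketch (not Kronos or mimium code) expresses a stateful unit generator as a closure; `onePole` and its coefficient are invented for the example.

```javascript
// Sketch only: a stateful UGen as a closure, in the spirit of
// lambda-calculus-based signal languages.
const onePole = a => {                    // a: feedback coefficient, 0 <= a < 1
  let y = 0;                              // filter state captured by the closure
  return x => (y = (1 - a) * x + a * y);  // y[n] = (1 - a) * x[n] + a * y[n-1]
};

const lp = onePole(0.9);
const impulse = [1, 0, 0, 0].map(lp);     // response decays geometrically,
                                          // approximately [0.1, 0.09, 0.081, 0.0729]
```

Representing state this way keeps the UGen a first-class value that can be passed to and returned from other functions, which is one reason lambda calculus is an attractive foundation for such languages.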

DSLs are constructed within a double bind: they aim to specialize in a particular purpose while still providing a certain degree of expressive freedom through coding. In this context, efforts like Extempore, Kronos, and mimium are not merely programming languages for music but are also situated within the broader research context of functional reactive programming (FRP), which focuses on representing time-varying values in computation. Most computing models lack an inherent concept of real-time, operating instead in discrete computational steps. Similarly, low-level general-purpose programming languages do not natively include primitives for real-time concepts. Consequently, the exploration of computational models tied to time, a domain inseparable from music, remains vital and has the potential to contribute to the theoretical foundations of general-purpose programming languages.
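
The core idea of functional reactive programming can be caricatured in a few lines of JavaScript, where a signal is simply a function from (continuous) time to a value; the names `constant`, `map2`, and `sine` are invented for this sketch and are not taken from any of the languages discussed.

```javascript
// Caricature of FRP: time-varying values as functions of time (seconds).
const constant = v => t => v;                   // a time-invariant signal
const map2 = (f, a, b) => t => f(a(t), b(t));   // pointwise combination of two signals
const sine = freq => t => Math.sin(2 * Math.PI * freq * t);

// A 440 Hz sine scaled by a constant gain, itself a time-varying value:
const am = map2((x, g) => x * g, sine(440), constant(0.5));
console.log(am(1 / 1760)); // 0.5 (quarter-period peak of the sine, scaled)
```

A real system must then reconcile this continuous-time semantics with discrete, sample-by-sample execution, which is exactly the gap between computational models and real time described above.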

However, strongly formalized languages come with another trade-off. Although they allow UGens to be defined without black-boxing, understanding the design and implementation of these languages often requires expert knowledge. This can create a deeper division between language developers and users, in contrast to the many small and shallow divisions seen in the multi-language paradigm, such as SuperCollider developers, external UGen developers, client language developers (e.g., TidalCycles), SuperCollider users, and client language users.

Although there is no clear solution to this trade-off, one intriguing idea is the development of self-hosting languages for music, that is, languages whose compilers are written in the language itself. At first glance, this may seem impractical. However, by enabling users to learn and modify the language's mechanisms spontaneously, this approach could create an environment that fosters deeper engagement and understanding among users.

## Conclusion

This paper has reexamined the history of computer music and music programming languages with a focus on the universalism of PCM and the black-boxing tendencies of the UGen paradigm. Historically, it was expected that the clear division of roles between engineers and composers would enable the creation of new forms of expression using computers. Indeed, from the perspective of post-acousmatic discourse, some scholars, such as Holbrook and Rudi, still consider this division to be a positive development:

> Most newer tools abstract the signal processing routines and variables, making them easier to use while removing the need for understanding the underlying processes in order to create meaningful results. Composers no longer necessarily need mathematical and programming skills to use the technologies. [@holbrook2022, p2]

However, this division of labor also creates a shared vocabulary (as exemplified in the UGen by Mathews) and serves to perpetuate it. By portraying new technologies as something externally introduced, and by focusing on the agency of those who create music with computers, the individuals responsible for building programming environments, software, protocols, and formats are rendered invisible [@sterne_there_2014]. This leads to an oversight of the indirect power relationships produced by these infrastructures.

For this reason, future research on programming languages for music must address how the tools, including the languages themselves, contribute aesthetic value within musical culture (and what forms of musical practice they enable), as well as the social (im)balances of power they produce.

The academic value of research on programming languages for music is often vaguely asserted, using terms such as "general," "expressive," and "efficient." However, it is difficult to substantiate these claims now that processing speed is no longer the primary concern. Thus, as with the notion of idiomaticity proposed by McPherson et al. [@McPherson2020], we need to develop and share a vocabulary for understanding the value judgments we make about music languages.

In a broader sense, the development of programming languages for music has also expanded to the individual level. Examples include **Gwion** by Astor, which is inspired by ChucK and enhances its abstraction with features such as lambda functions [@astor_gwion_2017]; **Vult**, a DSP transpiler language created by Ruiz for his modular synthesizer hardware [@Ruiz2020]; and **Glicol**, a UGen-based live coding environment designed for the web [@lan_glicol_2020]. However, these efforts have not yet been incorporated into academic discourse.

Conversely, practical knowledge of languages from the 1960s, as well as of real-time hardware-oriented systems from the 1980s, is gradually being lost. Although research efforts such as *Inside Computer Music*, which analyzes historical works of computer music, have begun [@clarke_inside_2020], an archaeological practice focused on the construction of computer music systems themselves will also be necessary.

main.tex
@@ -32,7 +32,7 @@
% ================ Define title and author names here ===============
% ====================================================
%user defined variables
\def\papertitle{Hiding What from Whom? A Critical Review of the History of Programming Languages for Music}
\def\firstauthor{Tomoya Matsuura}
% \def\firstauthor{Anonymized for review}

sed.js
Normal file
@@ -0,0 +1,23 @@

const fs = require("fs");

const src = fs.readFileSync("content_pre.tex").toString();

// Note: String.prototype.replace returns a new string rather than
// mutating `src`; the original script discarded the result, so it had
// no effect. Capture the result and write it back out (the output
// filename below is an assumption). The quoted Csound block, including
// its apparent typos ("1-1", "recursicve"), must stay verbatim so that
// it matches the text in content_pre.tex exactly.
const out = src.replace(`\\begin{verbatim}
instr 1 ; instrument with fabricated reson:
la1 init 0 ; clear feedbacks
la2 init 0 ; at start only
i3 = exp(-6.28 * p6 / 10000) ; set coef 3
i2 = 4*i3*cos(6.283185 * p5/10000) / (1+i3); set coef 2
i1 = (1-i3) * sqrt(1-1 - i2*i2/(4*i3)) ; set coef 1
a1 rand p4 ; source signal
la3 = la2 ; feedback 2
la2 = la1 ; feedback 1
la1 = i1*a1 + i2 * la2 - i3 * la3 ; 2nd order difference eqn
out la1
endin

instr 2 ; this instr does same as above
a1 rand p4 ; source signal
a1 reson a1,p5,p6,1 ; 2nd order recursicve filter
endin
\\end{verbatim}`, ``);

fs.writeFileSync("content.tex", out);