finished writing for now

This commit is contained in:
2025-01-28 08:06:13 +00:00
parent e751e91a44
commit 176122d4c1
8 changed files with 1117 additions and 223 deletions

View File

@@ -1 +1 @@
In this paper, the author introduces the perspective of “Somewhat Weak Computer Music” in order to describe the history of programming languages for music without being bound by the style of computer music, and conduct a critical review of the history programming languages for music. This paper focuses on a critical review of the post-acousmatic discourse, which is an inclusive notion for recent tendencies in computer music. The universalism associated with pulse-code modulation, which is the basis of sound programming today, has functioned as a discourse that invites expectations of musicians and scientists, even though in reality the range of expression is limited to that era. In addition, the MUSIC-N family, which is the origin of sound generation with a computer based on PCM, is contextualized more as a series of workflows for generating sound on a computer rather than as a semantics and specification of programming languages, and it has gradually developed as a black box that users do not need to understand its internal structure. The author concludes that programming languages for music developed since the 1990s are not necessarily aimed at creating new musical styles, but also have the aspect of presenting an alternative to the technological infrastructure around music, such as formats and protocols which is becoming more invisible, and a new point of discussion is presented for future historical research on music using computers.
This paper critically reviews the history of programming languages for music by referring to discussions from sound studies, aiming to describe this history decoupled from computer music as a form/community. The paper focuses on critiquing the discourse of Post-Acousmatic, which inclusively addresses recent trends in computer music. The universalism associated with pulse-code modulation (PCM), a basic assumption of today's sound programming, has historically functioned as a discourse that shapes musicians' expectations, despite the fact that its expressive range has clear limits. This paper also points out that the MUSIC-N family, which formed the foundation of PCM-based sound synthesis, is contextualized not in terms of its syntactic or semantic properties as a programming language, but as a lineage of workflows for generating sound on computers, and that these systems have, over time, evolved into black boxes that minimize the need for users to understand their internal structures. The paper concludes that programming languages for music developed since the 2000s function as a means of presenting alternatives to the often-invisible technological infrastructures surrounding music, such as formats and protocols, rather than solely aiming to create novel musical styles. This conclusion paves the way for future discussions in this research area.

View File

@@ -1 +1,20 @@
In this paper, the author introduces the perspective of “Somewhat Weak Computer Music” in order to describe the history of programming languages for music without being bound by the style of computer music, and conduct a critical review of the history programming languages for music. This paper focuses on a critical review of the post-acousmatic discourse, which is an inclusive notion for recent tendencies in computer music. The universalism associated with pulse-code modulation, which is the basis of sound programming today, has functioned as a discourse that invites expectations of musicians and scientists, even though in reality the range of expression is limited to that era. In addition, the MUSIC-N family, which is the origin of sound generation with a computer based on PCM, is contextualized more as a series of workflows for generating sound on a computer rather than as a semantics and specification of programming languages, and it has gradually developed as a black box that users do not need to understand its internal structure. The author concludes that programming languages for music developed since the 1990s are not necessarily aimed at creating new musical styles, but also have the aspect of presenting an alternative to the technological infrastructure around music, such as formats and protocols which is becoming more invisible, and a new point of discussion is presented for future historical research on music using computers.
This paper critically reviews the history of programming languages for
music by referring to discussions from sound studies, aiming to describe
this history decoupled from computer music as a form/community. The
paper focuses on critiquing the discourse of Post-Acousmatic, which
inclusively addresses recent trends in computer music. The universalism
associated with pulse-code modulation (PCM), a basic assumption of
today's sound programming, has historically functioned as a discourse
that shapes musicians' expectations, despite the fact that its
expressive range has clear limits. This paper also points out that the
MUSIC-N family, which formed the foundation of PCM-based sound
synthesis, is contextualized not in terms of its syntactic or semantic
properties as a programming language, but as a lineage of workflows for
generating sound on computers, and that these systems have, over time,
evolved into black boxes that minimize the need for users to understand
their internal structures. The paper concludes that programming
languages for music developed since the 2000s function as a means of
presenting alternatives to the often-invisible technological
infrastructures surrounding music, such as formats and protocols, rather
than solely aiming to create novel musical styles. This conclusion paves
the way for future discussions in this research area.

View File

@@ -1,146 +1,150 @@
\section{Introduction}\label{introduction}
Programming languages and environments for music have developed hand in
hand with the history of creating music using computers. Software like
Max, Pure Data, CSound, and SuperCollider has been referred to as
``Computer Music
hand with the history of creating music using computers. Software and
systems like Max, Pure Data, CSound, and SuperCollider have been
referred to as ``Computer Music
Language''\citep{McCartney2002, Nishino2016, McPherson2020}, ``Language
for Computer Music''\citep{Dannenberg2018}, and ``Computer Music
Programming Systems''\citep{Lazzarini2013}, though there is no clear
consensus on the use of these terms. However, as the term ``Computer
Music'' suggests, these programming languages are deeply intertwined
with the history of technology-driven music, which developed under the
premise that ``almost any sound can be
consensus on the use of these terms. However, as the shared term
``Computer Music'' implies, these programming languages are deeply
intertwined with the history of technology-driven music, which developed
under the premise that ``almost any sound can be
produced''\citep{mathews_acoustic_1961} through the use of computers.
In the early days, when computers were confined to university research
laboratories and neither displays nor mice existed, creating sound or
music with computers was inevitably linked to programming. Today,
however, using programming as a means to produce sound on a
computer---rather than employing DAW (Digital Audio Workstation)
software---is somewhat specialized. In other words, programming
languages for music developed after the proliferation of personal
computers are software that deliberately choose programming (whether
textual or graphical) as their frontend for sound generation.
In the early days, when computers were confined to research laboratories
and neither displays nor mice existed, creating sound or music with
computers inevitably meant programming. Today, however, programming as a
means to produce sound on a computer---rather than employing Digital
Audio Workstation (DAW) software such as Pro Tools---is unusual. In other
words, programming languages for music developed after the proliferation
of personal computers are software that deliberately chooses programming
(whether textual or graphical) as its frontend for making sound.
Since the 1990s, theoretical advancements in programming languages and
the various constraints required for real-time audio processing have
significantly increased the specialized knowledge needed to develop
programming languages for music. Furthermore, some music-related
Since the 1990s, the theoretical development of programming languages
and the various constraints required for real-time audio processing have
significantly increased the specialized knowledge necessary for
developing programming languages for music today. Furthermore, some
languages developed after the 2000s are not necessarily aimed at
pursuing new forms of musical expression. There appears to be no unified
perspective on how to evaluate such languages.
pursuing new forms of musical expression. It seems that there is still
no unified perspective on how the value of such languages should be
evaluated.
The ultimate goal of this paper is to introduce the framework of ``weak
computer music,'' referring to music mediated by computers in a
non-style-specific manner. This framework aims to decouple the
evaluation of programming language design and development for music from
specific styles and the ideologies associated with computer music.
In this paper, a critical historical review is conducted by drawing on
discussions from sound studies alongside existing surveys, aiming to
consider programming languages for music independently of computer
music as a specific genre.
\subsection{Use of the Term ``Computer
Music''}\label{use-of-the-term-computer-music}
The term ``Computer Music,'' despite its literal and potential broad
meaning, has been noted as being used within a narrowly defined
framework tied to specific styles or communities, as represented in
Ostartag's \emph{Why Computer Music Sucks}\citep{ostertag1998} since the
1990s.
Despite its potentially broad application, the term ``computer music'' has
been repeatedly noted since the 1990s as being used within a narrowly
defined framework, tied to specific styles or
communities\citep{ostertag1998}.
As Lyon observed nearly two decades ago, it is now nearly impossible to
imagine a situation in which computers are not involved at any stage
from production to experience of music\citep[p1]{lyon_we_2006}. The
necessity of using the term ``Computer Music'' to describe academic
contexts, particularly those centered around the ICMC, has consequently
diminished.
The necessity of using the term ``computer music'' for such academic
contexts (particularly those centered around the International Computer
Music Conference, or ICMC) has diminished over time. Lyon argues that
defining computer music as simply ``music made using computers'' is too
permissive, while defining it as ``music that could not exist without
computers'' is overly strict, complicating the evaluation of analog
modeling synthesizers implemented on computers. Lyon questions the
utility of the term itself, comparing it to ``piano music,'' a label
that ignores the styles contained within it\citep{lyon2006}.
Holbrook and Rudi continued Lyon's discussion by proposing the use of
frameworks like Post-Acousmatic\citep{adkins2016} to redefine ``Computer
Music.'' Their approach incorporates the tradition of pre-computer
experimental/electronic music, situating it as part of the broader
continuum of technology-based or technology-driven
music\citep{holbrook2022}.
As Ostertag and Lyon observed, it has become increasingly difficult to
envision a situation where computers are absent from the production and
experience of music today, particularly in commercial
contexts\footnote{Of course, the realm of music extends beyond the
numbers processed by computers or the oscillations of speaker
diaphragms. This paper does not seek to intervene in aesthetic
judgments regarding music made without computers or non-commercial
musical activities. However, the existence of such music does not
counter the awareness that there is little analysis of the inevitable
involvement of computing as a medium in the field of popular music,
which attracts significant academic and societal interest.}.
Nevertheless, the majority of music in the world could be described as
``simply using computers.''
While a strict definition of Post-Acousmatic music is deliberately not
given, one of its elements is the expansion of music production from
institutional settings to individuals and the resulting diversification
of how the technology is used\citep[p113]{adkins2016}. However, while
the Post-Acousmatic discourse integrates the historical fact that
declining computer costs and access beyond laboratories have enabled
diverse musical expressions, it simultaneously marginalizes much of the
music that is ``just using computers'' and fails to provide insights
into this divided landscape.
Holbrook and Rudi propose analyzing what has been called computer music
within the framework of post-acousmatic music\citep{adkins2016},
including traditions of pre-computer electronic music as one of many
forms of technology-based/driven music\citep{holbrook2022}.
Lyon argues that defining computer music simply as music created with
computers is too permissive, while defining it as music that could not
exist without computers is too strict. He highlights the difficulty of
considering instruments that use digital simulations, such as virtual
analog synthesizers, within these definitions. Furthermore, he suggests
that the term ``computer music'' is a style-agnostic label, much like
``piano music,'' implying that it ignores the style and form of the
music produced by the instruments.
A critical issue with these discussions is that post-acousmatic music
lacks a precise definition. One proposed characteristic is the shift in
the locus of production from institutions to individuals, which has
altered how technology is used\citep[p113]{adkins2016}. However, this
narrative contains a tautological issue: while it acknowledges the
historical fact that the decreasing cost of computers allowed diverse
musical expressions outside laboratories, it dismisses much of that music
as ``simply using computers'' and fails to provide insights into this
division.
The spread of personal computers has incompletely achieved the vision of
metamedium as a device users could modify themselves, instead becoming a
black box for content consumption\citep{emerson2014}. Histories
highlighting the agency of those who created programming environments,
software, protocols, and formats for music obscure indirect power
relationships generated by the infrastructure\citep{sterne_there_2014}.
Today, while music production fundamentally depends on computers, most
of it falls under Lyon's overlapping permissive and strict definitions
of computer music. In this paper, I propose calling this situation the
following:
However, one of the defining characteristics of computers as a medium
lies in their ability to treat musical styles themselves as subjects of
meta-manipulation through simulation and modeling. When creating
instruments with computers, or when using such instruments, sound
production involves programming---manipulating symbols embedded in a
particular musical culture. This recursive embedding of the language and
perception constituting that musical culture into the resulting music is
a process that goes beyond what is possible with acoustic instruments or
analog electronic instruments. Magnusson refers to this characteristic
of digital instruments as ``Epistemic Tools'' and points out that they
tend to work in the direction of reinforcing and solidifying musical
culture:
\begin{quote}
``Weak computer music'' --- music for which computers are essential to
its realization, but where the uniqueness of the work as intended by the
creator is not particularly tied to the use of computers.
The act of formalising is therefore always an act of fossilisation. As
opposed to the acoustic instrument maker, the designer of the composed
digital instrument frames affordances through symbolic design, thereby
creating a snapshot of musical theory, freezing musical culture in time.
\citep[p173]{Magnusson2009}
\end{quote}
Most people use computers simply because no quicker alternative exists,
not because they are deliberately leveraging the unique medium of
computers for music production. However, the possibility that such music
culture, shaped by the incidental use of computers, has aesthetic and
social characteristics worth analyzing cannot be dismissed.
Today, many people use computers for music production not because they
consciously leverage the uniqueness of the meta-medium, but simply
because there are no quicker or more convenient alternatives available.
Even so, within a musical culture where computers are used as a default
or reluctant choice, musicians are inevitably influenced by the
underlying infrastructures like software, protocols, and formats. As
long as the history of programming languages for music remains
intertwined with the history of computer music as it relates to specific
genres or communities, it becomes difficult to analyze music created
with computers as a passive means.
This paper will historically organize the specifications and
construction of programming languages for early computer music systems
with a focus on their style-agnostic nature.
In this paper, the history of programming languages for music is
reexamined with an approach that, in contrast to Lyon, takes an extremely
style-agnostic perspective. Rather than focusing on what has been
created with these tools, the emphasis is placed on how these tools
themselves have been constructed. The paper centers on the following two
topics:
\begin{itemize}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
Examining the discourse framing MUSIC as the progenitor of computer
music.
A critique of the universality of sound representation using
pulse-code modulation (PCM), the foundational concept underlying most
of today's sound programming, by referencing early attempts at sound
generation using electronic computers.
\item
Investigating what aspects were excluded from user access in MUSIC-N
derivatives such as MUSIGOL.
\item
Analyzing the standardization of UGens (unit generators) and the
division of labor in Max and Pure Data.
\item
Reviewing music programming languages of the 2000s.
\end{itemize}
An examination of the MUSIC-N family, the origin of PCM-based sound
synthesis, to highlight that its design varies significantly across
systems from the perspective of modern programming language design and
that it has evolved over time into a black box, eliminating the need
for users to understand its internal workings.
\end{enumerate}
The conclusion will propose a framework necessary for future discussions
on music programming languages.
\section{Born of ``Computer Music'' - MUSIC-N and PCM
Universality}\label{born-of-computer-music---music-n-and-pcm-universality}
Ultimately, the paper concludes that programming languages for music
developed since the 2000s are not solely aimed at creating new music but
also serve as alternatives to the often-invisible technological
infrastructures surrounding music, such as formats and protocols. By
doing so, the paper proposes new perspectives for the historical study
of music created with computers.
\section{PCM and Early Computer Music}\label{pcm-and-early-computer-music}
Among the earliest examples of computer music research, the MUSIC I
system (1957) from Bell Labs and its derivatives, known as MUSIC-N, are
frequently highlighted. However, attempts to create music with computers
in the UK and Australia prior to MUSIC I have also been
documented\citep{doornbusch2017}.
Organizing what was achieved by MUSIC-N and earlier efforts can help
clarify definitions of computer music.
documented\citep{doornbusch2017}. Organizing what was achieved by
MUSIC-N and earlier efforts can help clarify definitions of computer
music.
The earliest experiments with sound generation on computers in the 1950s
involved controlling the intervals between one-bit pulses (on or off) to
@@ -148,25 +152,31 @@ control pitch. This was partly because the operational clock frequencies
of early computers fell within the audible range, making the
sonification of electrical signals a practical and cost-effective
debugging method compared to visualizing them on displays or
oscilloscopes. Computers like Australia's CSIR Mark I even featured
primitive instructions like a ``hoot'' command to emit a single pulse to
a speaker.
oscilloscopes. Some computers of this era, like Australia's CSIR Mark I
(CSIRAC), had primitive ``hoot'' instructions that emitted a single
pulse to a speaker.
In the UK, Louis Wilson discovered that an AM radio near the BINAC
computer picked up electromagnetic waves generated by vacuum tube
switching, producing regular tones. This serendipitous discovery led to
the intentional programming of pulse intervals to generate
melodies\citep{woltman1990}.
In 1949, music played on the BINAC came about when engineer Louis
Wilson noticed that an AM radio placed nearby could pick up weak
electromagnetic waves generated during the switching of vacuum tubes,
producing regular sounds. He leveraged this phenomenon by connecting a
speaker and a power amplifier to the computer's output and used the
setup to assist in debugging. Frances Elizabeth Holberton took this a
step further by programming the computer to generate pulses at
arbitrary intervals, creating melodies \citep{woltman1990}. The sound
generation on the BINAC and the CSIR Mark I represents early instances
of using computers to play melodies from existing music.
However, not all sound generation prior to PCM (Pulse Code Modulation)
was merely the reproduction of existing music. Doornbusch highlights
experiments on the British Pilot ACE (Prototype for Automatic Computing
Engine: ACE), which utilized acoustic delay line memory to produce
unique sounds\citep[p303-304]{doornbusch2017}. Acoustic delay line
memory, used as main memory in early computers like BINAC and CSIR Mark
I, employed the feedback of pulses traveling through mercury via a
speaker and microphone setup to retain data. Donald Davis, an engineer
on the ACE project, described the sounds it produced as
However, not all sound generation at this time was merely the
reproduction of existing music. Doornbusch highlights experiments on the
British Pilot ACE (Prototype for Automatic Computing Engine: ACE), which
utilized acoustic delay line memory to produce unique
sounds\citep[p303-304]{doornbusch2017}. Acoustic delay line memory, used
as main memory in early computers like BINAC and CSIR Mark I, employed
the feedback of pulses traveling through mercury via a speaker and
microphone setup to retain data. Donald Davis, an engineer on the ACE
project, described the sounds it produced as
follows\citep[p19-20]{davis_very_1994}:
\begin{quote}
@@ -181,63 +191,76 @@ into colored noise as the complexity went beyond human understanding.
Loops were always multiples of 32 microseconds long, so notes had
frequencies which were submultiples of 31.25 KHz. The music was based on
a very strange scale, which was nothing like equal tempered or harmonic,
but was quite pleasant. This music arose unintentionally during program
optimization and was made possible by ``misusing'' switches installed
for debugging acoustic delay line memory (p20).
but was quite pleasant.
\end{quote}
Media scholar Miyazaki described the practice of listening to sounds
generated by algorithms and their bit patterns, integrated into
programming and debugging, as ``Algo\emph{rhythmic}
This music arose unintentionally during program optimization and was
made possible by ``misusing'' switches installed for debugging acoustic
delay line memory (p20). Media scholar Miyazaki described the practice
of listening to sounds generated by algorithms and their bit patterns,
integrated into programming and debugging, as ``Algo\emph{rhythmic}
Listening''\citep{miyazaki2012}.
Doornbusch warns against ignoring early computer music practices in
Australia and the UK simply because they did not directly influence
subsequent research\citep[p305]{doornbusch2017}. Indeed, the tendency to
treat pre-MUSIC attempts as hobbyist efforts by engineers and post-MUSIC
endeavors as serious research remains common even
endeavors as ``serious'' research remains common even
today\citep{tanaka_all_2017}.
The sounds generated by Pilot ACE challenge the post-acousmatic
narrative that computer music transitioned from laboratory-based
professional practices to personal use by amateurs. This is because: 1.
The sounds were produced not by music specialists but by engineers, and
2. The sounds were tied to hardware-specific characteristics of acoustic
delay line memory, making them difficult to replicate even with modern
audio programming environments. Similarly, at MIT in the 1960s, Peter
Samson utilized a debug speaker attached to the aging TX-0 computer to
experiment with generating melodies using square
waves\citep{levy_hackers_2010}.
The sounds produced by the Pilot ACE challenge the post-acousmatic
historical narrative, which suggests that computer music transitioned
from being confined to specialized laboratories to becoming accessible
to individuals, including amateurs.
This effort evolved into a program that allowed users to describe
melodies with text strings. For instance, writing \texttt{4fs\ t8} would
produce an F4 note as an eighth note. Samson later adapted this work to
the PDP-1 computer, creating the ``Harmony Compiler,'' widely used by
MIT students. He also developed the Samson Box in the early 1970s, a
computer music system used at Stanford University's CCRMA for over a
decade\citep{loy_life_2013}. These examples suggest that the initial
purpose of debugging does not warrant segregating early computational
sound generation from the broader history of computer music.
This is because the sounds generated by the Pilot ACE were not created
by musical experts, nor were they solely intended for debugging
purposes. Instead, they were programmed with the goal of producing
interesting sounds. Moreover, the sounds were tied to the hardware of
the acoustic delay line memory---a feature that was likely difficult to
replicate, even in modern audio programming environments.
\subsection{Universality of PCM}\label{universality-of-pcm}
Similarly, in the 1960s at MIT, Peter Samson took advantage of the
debugging speaker on the TX-0, a machine that had become outdated and
freely available for students to use. He conducted experiments where he
played melodies, such as Bach fugues, using square waves
\citep{levy_hackers_2010}. Samson's experiments with the TX-0 later
evolved into a program, used within MIT, that allowed melodies to be
described using text strings.
Let us examine \textbf{Pulse Code Modulation (PCM)}---a foundational
aspect of MUSIC's legacy and one of the key reasons it is considered a
milestone in the history of computer music. PCM enables the theoretical
representation of ``almost any sound'' on a computer by dividing audio
waveforms into discrete intervals (sampling) and expressing the
amplitude of each interval as quantized numerical values. It remains the
fundamental representation of sound on modern computers. The underlying
sampling theorem was introduced by Nyquist in 1928\citep{Nyquist1928},
and PCM itself was developed by Reeves in 1938.
Building on this, Samson developed a program called the Harmony Compiler
on the DEC PDP-1, which was derived from the TX-0. This program gained
significant popularity among MIT students. Around 1972, Samson began
surveying various digital synthesizers that were being developed at the
time and went on to create a system specialized for computer music. The
resulting Samson Box was used at Stanford University's CCRMA (Center for
Computer Research in Music and Acoustics) for over a decade until the
early 1990s and became a tool for many composers to create their works
\citep{loy_life_2013}. Considering Samson's example, it is not
appropriate to separate the early experiments in sound generation by
computers from the history of computer music solely because their
initial purpose was debugging.
\subsection{Acousmatic Listening, the premise of the Universality of PCM}\label{acousmatic-listening-the-premise-of-the-universality-of-pcm}
A critical issue with the ``post-acousmatic'' framework in computer
music history lies within the term ``acousmatic'' itself. Initially
proposed by Piegnot and later theorized by Schaeffer, the term describes
a mode of listening to tape music, such as musique concrète, in which
the listener does not imagine a specific sound source. It has been
widely applied in theories of recorded sound, including Chion's analyses
of sound design in visual media.
One of the reasons why MUSIC led to subsequent advancements in research
was not simply because it was developed early, but because it was the
first to implement sound representation on a computer based on
\textbf{pulse-code modulation (PCM)}, which theoretically enables the
representation of ``almost any sound.''
PCM, the foundational method of sound representation on today's
computers, involves dividing audio waveforms into discrete intervals
(sampling) and representing the sound pressure at each interval as
discrete numerical values (quantization).
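As an illustration of these two steps, the following C fragment (a
minimal sketch written for this explanation, not code from any MUSIC-N
system; the sampling rate and bit depth are arbitrary choices) renders
one second of a 440 Hz sine tone as raw PCM: the waveform is evaluated
only at discrete sample times, and each amplitude is quantized to a
16-bit integer.
\begin{lstlisting}[language=C, caption={A minimal, hypothetical sketch of PCM: sampling a sine wave at discrete intervals and quantizing each amplitude to 16 bits.}]
#include <math.h>
#include <stdint.h>
#include <stdio.h>

#define SAMPLE_RATE 44100            /* samples per second */

int main(void) {
    const double freq   = 440.0;     /* frequency of the test tone in Hz */
    const double two_pi = 6.283185307179586;
    for (int n = 0; n < SAMPLE_RATE; n++) {
        /* sampling: evaluate the waveform only at discrete times n / SAMPLE_RATE */
        double x = sin(two_pi * freq * (double)n / SAMPLE_RATE);
        /* quantization: map the amplitude onto 16-bit signed integers */
        int16_t sample = (int16_t)lrint(x * 32767.0);
        fwrite(&sample, sizeof sample, 1, stdout);   /* raw PCM stream */
    }
    return 0;
}
\end{lstlisting}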
The issue with the universalism of PCM in the history of computer music
is inherent in the concept of Acousmatic, which serves as a premise for
Post-Acousmatic. Acousmatic, introduced by Peignot as a listening style
for tape music such as musique concrète and later theorized by
Schaeffer, refers to a mode of listening where the listener refrains
from imagining a specific sound source. This concept has been widely
applied in theories of listening to recorded sound, including Chion's
analysis of sound design in film.
However, as sound studies scholar Jonathan Sterne has pointed out,
discourses surrounding acousmatic listening often work to delineate
@@ -289,3 +312,419 @@ new sounds. As Puckette has argued, alternative representations, such as
collections of linear segments or physical modeling synthesis, present
other possibilities\citep{puckette2015}. Therefore, PCM is not a
completely universal tool for creating sound.
\section{What Does the Unit Generator
Hide?}\label{what-does-the-unit-generator-hide}
Starting with version III, MUSIC adopted the form of an acoustic
compiler (or block diagram compiler) that takes two types of input: a
score language, which represents a list of time-varying parameters, and
an orchestra language, which describes the connections between
\textbf{Unit Generators} such as oscillators and filters. In this paper,
the term ``Unit Generator'' means a signal processing module used by the
user whose internal implementation is either not exposed to the user or
is implemented in a language different from the one the user writes in.
Beyond performing sound synthesis based on PCM, one of the defining
features of the MUSIC family in the context of computer music research
was the establishment of a division of labor between professional
musicians and computer engineers through the development of
domain-specific languages. Mathews explained that he developed a
compiler for MUSIC III in response to requests for additional features
such as envelopes and vibrato, while also ensuring that the program
would not be fixed in a static form
\citep[13:10-17:50]{mathews_max_2007}. He repeatedly stated that his
role was that of a scientist rather than a musician:
\begin{quote}
The only answer I could see was not to make the instruments myself---not
to impose my taste and ideas about instruments on the musicians---but
rather to make a set of fairly universal building blocks and give the
musician both the task and the freedom to put these together into his or
her instruments. \citep[p16]{Mathews1980}\\
(\ldots) When we first made these music programs the original users were
not composers; they were the psychologist Guttman, John Pierce, and
myself, who are fundamentally scientists. We wanted to have musicians
try the system to see if they could learn the language and express
themselves with it. So we looked for adventurous musicians and composers
who were willing to experiment. (p17)
\end{quote}
This clear delineation of roles between musicians and scientists became
one of the defining characteristics of post-MUSIC computer music
research. Paradoxically, the act of creating sounds never heard before
using computers paved the way for research by allowing musicians to
focus on their craft without needing to grapple with the complexities of
programming.
\subsection{Example: Hiding First-Order Variables in Signal
Processing}\label{example-hiding-first-order-variables-in-signal-processing}
Although the MUSIC N series shares a common workflow of using a Score
language and an Orchestra language, the actual implementation of each
programming language varies significantly, even within the series.
One notable but often overlooked example is MUSIGOL, a derivative of
MUSIC IV \citep{innis_sound_1968}. In MUSIGOL, not only was the system
itself implemented differently, but even the user-written Score and
Orchestra programs were written entirely as ALGOL 60 source code.
Similar to modern frameworks like Processing or Arduino, MUSIGOL
represents one of the earliest examples of a domain-specific language
implemented as an internal DSL within a library\footnote{While MUS10,
used at Stanford University, was not an internal DSL, it was created
by modifying an existing ALGOL parser \citep[p248]{loy1985}.}.
(Therefore, according to the definition of Unit Generator provided in
this paper, MUSIGOL does not qualify as a language that uses Unit
Generators.)
The level of abstraction deemed intuitive for musicians varied across
different iterations of the MUSIC N series. This can be illustrated by
examining the description of a second-order band-pass filter. The filter
mixes the current input signal \(S_n\) with the output signals from one
and two samples earlier, \(O_{n-1}\) and \(O_{n-2}\), weighted by the
coefficients \(I_1\), \(I_2\), and \(I_3\), as shown in the following
equation:
\[O_n = I_1 \cdot S_n + I_2 \cdot O_{n-1} - I_3 \cdot O_{n-2}\]
In MUSIC V, this band-pass filter can be used as in \ref{lst:musicv}
\citep[p78]{mathews_technology_1969}.
\begin{lstlisting}[label={lst:musicv}, caption={Example of the use of RESON UGen in MUSIC V.}]
FLT I1 O I2 I3 Pi Pj;
\end{lstlisting}
Here, \passthrough{\lstinline!I1!} represents the input bus, and
\passthrough{\lstinline!O!} is the output bus. The parameters
\passthrough{\lstinline!I2!} and \passthrough{\lstinline!I3!} correspond
to the normalized values of the coefficients \(I_2\) and \(I_3\),
divided by \(I_1\) (as a result, the overall gain of the filter can be
greater or less than 1). The parameters \passthrough{\lstinline!Pi!} and
\passthrough{\lstinline!Pj!} are normally used to receive parameters
from the Score, specifically among the available
\passthrough{\lstinline!P0!} to \passthrough{\lstinline!P30!}. In this
case, however, these parameters are repurposed as general-purpose memory
to temporarily store feedback signals. Similarly, other Unit Generators,
such as oscillators, reuse note parameters to handle operations like
phase accumulation.
As a result, users needed to manually calculate feedback gains based on
the desired frequency characteristics\footnote{It is said that a
preprocessing feature called \passthrough{\lstinline!CONVT!} could be
used to transform frequency characteristics into coefficients
\citep[p77]{mathews_technology_1969}.}, and they also had to set aside
at least two samples' worth of memory for the feedback signal.
On the other hand, in MUSIC 11, developed by Barry Vercoe, and its later
iteration, CSound, the band-pass filter is defined as a Unit Generator
(UGen) named \passthrough{\lstinline!reson!}. This UGen accepts four
parameters: the input signal, the center frequency, the bandwidth, and a
scaling flag. Unlike previous implementations, users no longer need to be
aware of the two-sample feedback memory space for the output
\citep[p248]{vercoe_computer_1983}. However, in MUSIC 11 and CSound, it
is still possible to implement this band-pass filter from scratch as a
User Defined Opcode (UDO) as in \ref{lst:reson}. Vercoe emphasized that
while signal processing primitives should allow for low-level
operations, such as single-sample feedback, and eliminate black boxes,
it is equally important to provide high-level modules that avoid
unnecessary complexity (``avoid the clutter'') when users do not need to
understand the internal details \citep[p247]{vercoe_computer_1983}.
\begin{lstlisting}[label={lst:reson}, caption={Example of scratch implementation and built-in operation of RESON UGen respectively, in MUSIC11. Retrieved from the original paper. (Comments are omitted for the space restriction.)}]
instr 1
la1 init 0
la2 init 0
i3 = exp(-6.28 * p6 / 10000)
i2 = 4*i3*cos(6.283185 * p5/10000) / (1+i3)
i1 = (1-i3) * sqrt(1 - i2*i2/(4*i3))
a1 rand p4
la3 = la2
la2 = la1
la1 = i1*a1 + i2 * la2 - i3 * la3
out la1
endin
instr 2
a1 rand p4
a1 reson a1,p5,p6,1
endin
\end{lstlisting}
On the other hand, in programming environments that inherit the Unit
Generator paradigm, such as Pure Data \citep{puckette_pure_1997}, Max
(whose signal processing functionalities were ported from Pure Data as
MSP), SuperCollider \citep{mccartney_supercollider_1996}, and ChucK
\citep{wangChucKStronglyTimed2015}, primitive UGens are implemented in
general-purpose languages like C or C++. If users wish to define
low-level UGens (External Objects), they need to set up a development
environment for C or C++.
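To make concrete what such an External Object hides, the following C
sketch (hypothetical names, not the API of any of the environments
above) implements the band-pass recurrence
\(O_n = I_1 \cdot S_n + I_2 \cdot O_{n-1} - I_3 \cdot O_{n-2}\) discussed
earlier as a UGen-style object: the two-sample feedback memory that
MUSIC V users had to manage through note parameters becomes private
state that the user of the object never sees.
\begin{lstlisting}[language=C, caption={A hypothetical UGen-style band-pass filter in C; the feedback memory is hidden as internal state.}]
#include <stdlib.h>

/* Internal state of the filter: invisible to the user of the UGen. */
typedef struct {
    double i1, i2, i3;   /* gain and feedback coefficients */
    double o1, o2;       /* O_{n-1} and O_{n-2}: the two-sample memory */
} Reson;

Reson *reson_new(double i1, double i2, double i3) {
    Reson *r = calloc(1, sizeof *r);
    if (r) { r->i1 = i1; r->i2 = i2; r->i3 = i3; }
    return r;
}

/* One sample of O_n = I1*S_n + I2*O_{n-1} - I3*O_{n-2}. */
double reson_tick(Reson *r, double s) {
    double o = r->i1 * s + r->i2 * r->o1 - r->i3 * r->o2;
    r->o2 = r->o1;
    r->o1 = o;
    return o;
}
\end{lstlisting}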
As an extension, ChucK later introduced ChuGen, which is equivalent to
CSound's UDO, allowing users to define low-level UGens within the ChucK
language itself \citep{Salazar2012}. However, both CSound and ChucK face
performance limitations with UDOs during runtime compared to natively
implemented UGens. Consequently, not all existing UGens are replaced by
UDOs, which remain supplemental features rather than primary tools.
When UGens are implemented in low-level languages like C, even if the
implementation is open-source, the division of knowledge effectively
forces users (composers) to treat UGens as black boxes. This reliance on
UGens as black boxes reflects and deepens the division of labor between
musicians and scientists that Mathews helped establish---a structure
that can be seen as both a cause and a result of this paradigm.
For example, Puckette, the developer of Max and Pure Data, noted that
the division of labor at IRCAM between researchers, Musical
Assistants/realizers, and composers has parallels in the current Max
ecosystem, where the roles are divided into software developers,
External Objects developers, and Max users \citep{puckette_47_2020}. As
described in the ethnography of 1980s IRCAM by anthropologist Georgina
Born, the division of labor between fundamental research scientists and
composers at IRCAM was extremely clear. This structure was also tied to
the exclusion of popular music and its associated technologies in
IRCAM's research focus \citep{Born1995}.
However, such divisions are not necessarily the result of differences in
values along the axes analyzed by Born, such as
modernist/postmodernist/populist or low-tech/high-tech
distinctions\footnote{David Wessel revealed that the individual referred
to as RIG in Born's ethnography was himself and commented that Born
oversimplified her portrayal of Pierre Boulez, then director of IRCAM,
as a modernist. \citep{taylor_article_1999}}. This is because the
black-boxing of technology through the division of knowledge occurs in
popular music as well. Paul Théberge pointed out that the
``democratization'' of synthesizers in the 1980s was achieved through
the concealment of technology, which transformed musicians from
creators into consumers.
\begin{quote}
Lacking adequate knowledge of the technical system, musicians
increasingly found themselves drawn to prefabricated programs as a
source of new sound material. As I have argued, however, this assertion
is not simply a statement of fact; it also suggests a
reconceptualization on the part of the industry of the musician as a
particular type of consumer. \citep[p89]{theberge_any_1997}
\end{quote}
This argument can be extended beyond electronic music to encompass
computer-based music in general. For example, media researcher Lori
Emerson noted that while the proliferation of personal computers began
with the vision of ``metamedia''---tools that users could modify
themselves, as exemplified by Xerox PARC's Dynabook---the vision was
ultimately realized in an incomplete form through devices like the
Macintosh and iPad, which distanced users from programming by
black-boxing functionality \citep{emerson2014}. In fact, Alan Kay, the
architect behind the Dynabook concept, remarked that while the iPad's
appearance may resemble the ideal he originally envisioned, its lack of
extensibility through programming renders it merely a device for media
consumption \citep{kay2019}.
Although programming environments as tools for music production are not
widely used, the Unit Generator concept, alongside MIDI, serves as a
foundational paradigm for today's consumer music production software and
infrastructure, including Web Audio. It is known that the concept of
Unit Generators emerged either simultaneously with or even slightly
before modular synthesizers \citep[p20]{park_interview_2009}. However,
UGen-based languages have actively incorporated the user interface
metaphors of modular synthesizers. For instance, the distinction between
``ar'' (audio-rate) and ``kr'' (control-rate) processing introduced in
MUSIC 11 is said to have been inspired by Buchla's differentiation
between control and audio signals in its plug types
\citep[1:01:38--1:04:04]{vercoe_barry_2012}.
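The practical meaning of that distinction can be sketched as follows
(hypothetical C code, not drawn from MUSIC 11 or CSound; the block size
of 64 samples is an arbitrary choice): control-rate (``kr'') values are
recomputed only once per block, while audio-rate (``ar'') values are
recomputed for every sample, trading temporal resolution for
computational cost.
\begin{lstlisting}[language=C, caption={A hypothetical sketch of the audio-rate/control-rate distinction as block processing in C.}]
#include <math.h>

#define SR    44100
#define KSMPS 64                /* samples per control period */

static double phase = 0.0;      /* oscillator state (audio rate) */
static double env   = 0.0;      /* envelope state (control rate) */

/* Fill one block of output samples. */
void render_block(double *out) {
    /* "kr": the envelope is updated once per block of KSMPS samples */
    if (env < 1.0) env += 0.01;
    for (int i = 0; i < KSMPS; i++) {
        /* "ar": the oscillator is updated on every sample */
        out[i] = env * sin(phase);
        phase += 2.0 * 3.14159265358979 * 440.0 / SR;
    }
}
\end{lstlisting}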
However, adopting visual metaphors comes with the limitation that it
constrains the complexity of representation to what is visually
conceivable. In languages with visual patching interfaces like Max and
Pure Data, meta-operations on UGens are often restricted to simple
tasks, such as parallel duplication. Consequently, even users of Max or
Pure Data may not necessarily be engaging in expressions that are only
possible with computers. Instead, many might simply be using these tools
as the most convenient software equivalents of modular synthesizers.
\section{Context of Programming Languages for Music After
2000}\label{context-of-programming-languages-for-music-after-2000}
Based on the discussions thus far, music programming languages developed
after the 2000s can be categorized into two distinct directions: those
that narrow the scope of the language's role by attempting alternative
abstractions at a higher level, distinct from the Unit Generator
paradigm, and those that expand the general-purpose capabilities of the
language, reducing black-boxing.
Languages that pursued alternative abstractions at higher levels have
evolved alongside the culture of live coding, where performances are
conducted by rewriting code in real time. The activities of the live
coding community, including groups such as TOPLAP since the 2000s, were
not only about turning coding itself into a performance but also served
as a resistance against laptop performances that relied on black-boxed
music software. This is evident in the community's manifesto, which
states, ``Obscurantism is dangerous''
\citep{toplap_manifestodraft_2004}.
Languages implemented as clients for SuperCollider, such as \textbf{IXI}
(on Ruby) \citep{Magnusson2011}, \textbf{Sonic Pi} (on Ruby),
\textbf{Overtone} (on Clojure) \citep{Aaron2013}, \textbf{TidalCycles}
(on Haskell) \citep{McLean2014}, and \textbf{FoxDot} (on Python)
\citep{kirkbride2016foxdot}, leverage the expressive power of more
general-purpose programming languages. While embracing the UGen
paradigm, they enable high-level abstractions for previously
difficult-to-express elements like note values and rhythm. For example,
the abstraction of patterns in TidalCycles is not limited to music but
can also be applied to visual patterns and other outputs, meaning it is
not inherently tied to PCM-based waveform output as the final result.
On the other hand, due to their high-level design, these languages often
rely on ad hoc implementations for tasks like sound manipulation and
low-level signal processing, such as effects.
McCartney, the developer of SuperCollider, once stated that if
general-purpose programming languages were sufficiently expressive,
there would be no need to create specialized languages
\citep{McCartney2002}. This prediction appears reasonable when
considering examples like MUSIGOL. However, in practice, scripting
languages that excel in dynamic program modification face challenges in
modern preemptive OS environments. For instance, dynamic memory
management techniques such as garbage collection can hinder the ability
to guarantee deterministic execution timing required for real-time
processing \citep{Dannenberg2005}.
Historically, programming in languages like FORTRAN or C served as a
universal method for implementing audio processing on computers,
independent of architecture. However, with the proliferation of
general-purpose programming languages, programming in C or C++ has
become relatively more difficult, akin to programming in assembly
language in earlier times. Furthermore, considering the challenges of
portability across not only different CPUs but also diverse host
environments such as operating systems and the Web, these languages are
no longer as portable as they once were. Consequently, systems targeting
signal processing implemented as internal DSLs have become exceedingly
rare, with only a few examples like LuaAV \citep{wakefield2010}.
Instead, an approach has emerged to create general-purpose languages
specifically designed for use in music from the ground up. One prominent
example is \textbf{Extempore}, a live programming environment developed
by Sorensen \citep{sorensenExtemporeDesignImplementation2018}. Extempore
consists of Scheme, a LISP-based language, and xtlang, a
meta-implementation on top of Scheme. While xtlang requires users to
write hardware-oriented type signatures similar to those in C, it
leverages the LLVM compiler infrastructure \citep{Lattner} to
just-in-time (JIT) compile signal processing code, including sound
manipulation, into machine code for high-speed execution.
The expressive power of general-purpose languages and compiler
infrastructures like LLVM have given rise to an approach focused on
designing languages with formalized abstractions that reduce
black-boxing. \textbf{Faust} \citep{Orlarey2009}, for example, is a
language that retains a graph-based structure akin to UGens but is built
on a formal system called Block Diagram Algebra. This system integrates
primitives for reading and writing internal states, which are essential
for operations like delays and filters. Thanks to its formalization,
Faust can be transpiled into general-purpose languages such as C, C++,
or Rust and can also be used as an External Object in environments like
Max or Pure Data.
Languages like \textbf{Kronos}
\citep{noriloKronosReimaginingMusical2016} and \textbf{mimium}
\citep{matsuura2021}, which are based on the more general computational
model of lambda calculus, focus on PCM-based signal processing while
exploring interactive meta-operations on programs \citep{Norilo2016} and
balancing self-contained semantics with interoperability with other
general-purpose languages \citep{matsuura2024}.
Domain-specific languages (DSLs) are constructed within a double bind:
they aim to specialize in a particular purpose while still providing a
certain degree of expressive freedom through programming. In this
context, efforts like Extempore, Kronos, and mimium are not merely
programming languages for music but are also situated within the broader
research context of Functional Reactive Programming (FRP), which focuses
on representing time-varying values in computation. Most computer
hardware lacks an inherent concept of real time and instead operates
based on discrete computational steps. Similarly, low-level
general-purpose programming languages do not natively include primitives
for real-time concepts. Consequently, the exploration of computational
models tied to time---a domain inseparable from music---remains vital
and has the potential to contribute to the theoretical foundations of
general-purpose programming languages.
However, strongly formalized languages come with their own trade-offs.
While they allow UGens to be defined without black-boxing, understanding
the design and implementation of these languages often requires advanced
knowledge. This can create a significant divide between language
developers and users, in contrast to the more segmented roles seen in
the Multi-Language paradigm---such as SuperCollider developers, external
UGen developers, client language developers (e.g., TidalCycles),
SuperCollider users, and client language users.
Although there is no clear solution to this trade-off, one intriguing
idea is the development of self-hosting languages for music---that is,
languages where their own compilers are written in the language itself.
At first glance, this may seem impractical. However, by enabling users
to learn and modify the language's mechanisms spontaneously, this
approach could create an environment that fosters deeper engagement and
understanding among users.
\section{Conclusion}\label{conclusion}
This paper has reexamined the history of computer music and music
programming languages with a focus on the universalism of PCM and the
black-boxing tendencies of the Unit Generator paradigm. Historically, it
was expected that the clear division of roles between engineers and
composers would enable the creation of new forms of expression using
computers. Indeed, from the perspective of Post-Acousmatic discourse,
some, like Holbrook and Rudi, still consider this division to be a
positive development:
\begin{quote}
Most newer tools abstract the signal processing routines and variables,
making them easier to use while removing the need for understanding the
underlying processes in order to create meaningful results. Composers no
longer necessarily need mathematical and programming skills to use the
technologies. These abstractions are important, as they hide many of the
technical details and make the software and processes available to more
people, and form the basis for what can arguably be seen as a new folk
music. \citep[p2]{holbrook2022}
\end{quote}
However, this division of labor also creates a shared
vocabulary---exemplified by the Unit Generator itself, pioneered by
Mathews---and works to perpetuate it. By portraying new technologies as
something externally introduced, and by focusing on the agency of those
who create music with computers, the individuals responsible for
building the programming environments, software, protocols, and formats
are rendered invisible \citep{sterne_there_2014}. This leads to an
oversight of the indirect power relationships produced by these
infrastructures.
For this reason, future research on programming languages for music must
address how the tools, including the languages themselves, contribute
aesthetic value within musical culture (and what forms of musical
practice they enable), as well as the social (im)balances of power they
produce.
It has been noted in programming language research that evaluation
criteria such as efficiency, expressiveness, and generality are often
ambiguous \citep{Markstrum2010}. This issue is even more acute in fields
like music, where no clear evaluation criteria exist. Thus, as McPherson
et al.~have proposed with the concept of Idiomaticity
\citep{McPherson2020}, we need to develop and share a vocabulary for
understanding the value judgments we make about programming languages in
general.
In a broader sense, the creation of programming languages for music has
also expanded to the individual level. Examples include \textbf{Gwion}
by Astor, which builds on ChucK and enhances its abstraction
capabilities with features like lambda functions \citep{astor_gwion_2017};
\textbf{Vult}, a DSP transpiler language created by Ruiz for his modular
synthesizer hardware \citep{ruizVultLanguage2020}; and a UGen-based live
coding environment designed for web execution, \textbf{Glicol}
\citep{lan_glicol_2020}. However, these efforts have not yet been
adequately integrated into academic discourse.
Conversely, practical knowledge of university-researched languages from
the past, as well as real-time hardware-oriented systems from the 1980s,
is gradually being lost. While research efforts such as \emph{Inside
Computer Music}, which analyzes historical works of computer music, have
begun \citep{clarke_inside_2020}, an archaeological practice focused on
the construction of computer music systems will also be necessary in the
future. This includes not only collecting primary resources, such as
oral archives from those involved, but also reconstructing the knowledge
and practices behind these systems.

5
convert_from_md.sh Normal file → Executable file
View File

@@ -1,2 +1,3 @@
#!/bin/zsh
pandoc main.md --natbib --bibliography=main.bib --shift-heading-level-by=-1 -o content.tex
pandoc abstract.md -o abstract.tex
pandoc main.md -f markdown+fenced_code_blocks+fenced_code_attributes --listings --natbib --bibliography=main.bib --shift-heading-level-by=-1 -o content.tex

328
main.bib
View File

@@ -62,6 +62,17 @@
file = {/Users/tomoya/Zotero/storage/627PI276/p56-anderson.pdf;/Users/tomoya/Zotero/storage/PA4GN5XG/p56-anderson.pdf}
}
@misc{astor_gwion_2017,
title = {Gwion},
author = {Astor, J{\'e}r{\'e}mie},
year = {2017},
urldate = {2022-01-27},
abstract = {:musical\_note: strongly-timed musical programming language},
copyright = {GPL-3.0},
howpublished = {https://github.com/Gwion/Gwion},
keywords = {audio,chuck,compiler,composition,hacktoberfest,interpreter,lang,language,music,programming-language,real-time,realtime-audio,sound,synth,synthesis}
}
@article{berg1979,
title = {{{PILE}}: {{A Language}} for {{Sound Synthesis}}},
shorttitle = {{{PILE}}},
@@ -80,6 +91,16 @@
file = {/Users/tomoya/Zotero/storage/H94X4M7S/Berg - 1979 - PILE A Language for Sound Synthesis.pdf}
}
@book{Born1995,
title = {Rationalizing {{Culture}}},
author = {Born, Georgina},
year = {1995},
number = {1},
publisher = {University of California Press},
urldate = {2021-10-10},
isbn = {0-520-20216-3}
}
@inproceedings{brandt2000,
title = {Temporal Type Constructors for Computer Music Programming},
booktitle = {Proceedings of {{International Computer Music Conference}}},
@@ -185,6 +206,24 @@
file = {/Users/tomoya/Zotero/storage/NBRFF5ND/Holbrook et al. - Computer music and post-acousmatic practices.pdf}
}
@article{innis_sound_1968,
title = {Sound {{Synthesis}} by {{Computer}}: {{Musigol}}, a {{Program Written Entirely}} in {{Extended Algol}}},
shorttitle = {Sound {{Synthesis}} by {{Computer}}},
author = {Innis, Donald Mac},
year = {1968},
journal = {Perspectives of New Music},
volume = {7},
number = {1},
eprint = {832426},
eprinttype = {jstor},
pages = {66--79},
publisher = {Perspectives of New Music},
issn = {0031-6016},
doi = {10.2307/832426},
urldate = {2022-01-04},
file = {/Users/tomoya/Zotero/storage/DYXDF5EH/Innis - 1968 - Sound Synthesis by Computer Musigol, a Program Wr.pdf}
}
@misc{kay2019,
title = {American Computer Pioneer {{Alan Kay}}'s Concept, the {{Dynabook}}, Was Published in 1972. {{How}} Come {{Steve Jobs}} and {{Apple iPad}} Get the Credit for Tablet Invention?},
author = {Kay, Alan C.},
@@ -193,11 +232,40 @@
journal = {Quora},
urldate = {2022-01-25},
abstract = {Answer (1 of 4): The Dynabook idea happened in 1968. But the simple part of the idea --- a personal computer on the back of a flat screen display with a stylus and touch sensitivity --- is hard to consider a real invention given: * Flat-screen displays. I saw the first University of Illinois one i...},
howpublished = {\url{https://www.quora.com/American-computer-pioneer-Alan-Kay-s-concept-the-Dynabook-was-published-in-1972-How-come-Steve-Jobs-and-Apple-iPad-get-the-credit-for-tablet-invention}},
howpublished = {https://www.quora.com/American-computer-pioneer-Alan-Kay-s-concept-the-Dynabook-was-published-in-1972-How-come-Steve-Jobs-and-Apple-iPad-get-the-credit-for-tablet-invention},
language = {en},
file = {/Users/tomoya/Zotero/storage/52TPMQQG/American-computer-pioneer-Alan-Kay-s-concept-the-Dynabook-was-published-in-1972-How-come-Steve-.html}
}
@inproceedings{kirkbride2016foxdot,
title = {{{FoxDot}}: {{Live}} Coding with Python and Supercollider},
booktitle = {Proceedings of the {{International Conference}} on {{Live Interfaces}}},
author = {Kirkbride, Ryan},
year = {2016},
pages = {194--198}
}
@misc{lan_glicol_2020,
title = {Glicol},
author = {Lan, Qichao},
year = {2020},
urldate = {2025-01-28},
howpublished = {https://glicol.org/},
file = {/Users/tomoya/Zotero/storage/9DZAAT5M/glicol.org.html}
}
@inproceedings{Lattner,
title = {{{LLVM}}: {{A Compilation Framework}} for {{Lifelong Program Analysis}} \& {{Transformation}}},
booktitle = {Proceedings of the {{International Symposium}} on {{Code Generation}} and {{Optimization}}: {{Feedback-Directed}} and {{Runtime Optimization}}},
author = {Lattner, Chris and Adve, Vikram},
year = {2004},
pages = {75},
publisher = {IEEE Computer Society},
urldate = {2019-05-29},
abstract = {This paper describes LLVM (Low Level Virtual Machine), a compiler framework designed to support transparent, lifelong program analysis and transformation for arbitrary programs, by providing high-level information to compiler transformations at compile-time, link-time, run-time, and in idle time between runs. LLVM defines a common, low-level code representation in Static Single Assignment (SSA) form, with several novel features: a simple, language-independent type-system that exposes the primitives commonly used to implement high-level language features; an instruction for typed address arithmetic; and a simple mechanism that can be used to implement the exception handling features of high-level languages (and setjmp/longjmp in C) uniformly and efficiently. The LLVM compiler framework and code representation together provide a combination of key capabilities that are important for practical, lifelong analysis and transformation of programs. To our knowledge, no existing compilation approach provides all these capabilities. We describe the design of the LLVM representation and compiler framework, and evaluate the design in three ways: (a) the size and effectiveness of the representation, including the type information it provides; (b) compiler performance for several interprocedural problems; and (c) illustrative examples of the benefits LLVM provides for several challenging compiler problems.},
file = {/Users/tomoya/Zotero/storage/6F75AM3H/full-text.pdf}
}
@article{Lazzarini2013,
title = {The {{Development}} of {{Computer Music Programming Systems}}},
author = {Lazzarini, Victor},
@@ -259,6 +327,15 @@
file = {/Users/tomoya/Zotero/storage/N4NELPL9/Loy and Abbott - 1985 - Programming languages for computer music synthesis.pdf}
}
@inproceedings{lyon_we_2006,
title = {Do {{We Still Need Computer Music}}?},
booktitle = {{{EMS}}},
author = {Lyon, Eric},
year = {2006},
urldate = {2025-01-17},
file = {/Users/tomoya/Zotero/storage/SK2DXEE8/Do_We_Still_Need_Computer_Music.pdf}
}
@article{lyon2002,
title = {Dartmouth {{Symposium}} on the {{Future}} of {{Computer Music Software}}: {{A Panel Discussion}}},
shorttitle = {Dartmouth {{Symposium}} on the {{Future}} of {{Computer Music Software}}},
@@ -275,13 +352,31 @@
urldate = {2025-01-01}
}
@misc{lyon2006,
title = {Do we still need computer Music?},
author = {Lyon, Eric},
year = {2006},
urldate = {2025-01-17},
howpublished = {\url{https://disis.music.vt.edu/eric/LyonPapers/Do\_We\_Still\_Need\_Computer\_Music.pdf}},
file = {/Users/tomoya/Zotero/storage/SK2DXEE8/Do_We_Still_Need_Computer_Music.pdf}
@article{Magnusson2009,
title = {Of Epistemic Tools: {{Musical}} Instruments as Cognitive Extensions},
author = {Magnusson, Thor},
year = {2009},
month = aug,
journal = {Organised Sound},
volume = {14},
number = {2},
pages = {168--176},
issn = {13557718},
doi = {10.1017/S1355771809000272},
urldate = {2021-03-17},
abstract = {This paper explores the differences in the design and performance of acoustic and new digital musical instruments, arguing that with the latter there is an increased encapsulation of musical theory. The point of departure is the phenomenology of musical instruments, which leads to the exploration of designed artefacts as extensions of human cognition - as scaffolding onto which we delegate parts of our cognitive processes. The paper succinctly emphasises the pronounced epistemic dimension of digital instruments when compared to acoustic instruments. Through the analysis of material epistemologies it is possible to describe the digital instrument as an epistemic tool: a designed tool with such a high degree of symbolic pertinence that it becomes a system of knowledge and thinking in its own terms. In conclusion, the paper rounds up the phenomenological and epistemological arguments, and points at issues in the design of digital musical instruments that are germane due to their strong aesthetic implications for musical culture. {\copyright} 2009 Cambridge University Press.},
file = {/Users/tomoya/Zotero/storage/9SUU6WCD/magnusson.pdf;/Users/tomoya/Zotero/storage/HJFNX6AG/magnusson.pdf}
}
@article{Magnusson2011,
title = {The {{IXI Lang}}: {{A SuperCollider Parasite}} for {{Live Coding}}},
author = {Magnusson, Thor},
year = {2011},
journal = {International Computer Music Conference Proceedings},
volume = {2011},
publisher = {Michigan Publishing, University of Michigan Library},
issn = {2223-3881},
urldate = {2020-03-20}
}
@article{Markstrum2010,
@@ -311,6 +406,28 @@
file = {/Users/tomoya/Zotero/storage/IHLKBB9C/Mathews - 1961 - An acoustic compiler for music and psychological s.pdf;/Users/tomoya/Zotero/storage/CRSTYZYX/6773634.html}
}
@misc{mathews_max_2007,
title = {Max {{Mathews Full Interview}} {\textbar} {{NAMM}}.Org},
author = {Mathews, Max V.},
year = {2007},
month = mar,
urldate = {2025-01-08},
abstract = {Max Mathews was working as an engineer at the famed Bell Laboratory in 1954 when he was asked to determine if the computer Bell was designing could create music. The landmark Music 2 and later Music 4 projects put the two concepts together as early as 1957---the computer and music had a future and Max was there for the birth. Max had moved on to musical programming when Don Buchla and Robert Moog created similar electronic music in the form of the synthesizer.},
howpublished = {https://www.namm.org/video/orh/max-mathews-full-interview},
language = {en},
file = {/Users/tomoya/Zotero/storage/F9CN88YP/max-mathews-full-interview.html}
}
@book{mathews_technology_1969,
title = {The Technology of Computer Music},
author = {Mathews, Max V. and Miller, Joan E.},
year = {1969},
publisher = {M.I.T. Press},
urldate = {2020-03-31},
isbn = {0-262-13050-5},
keywords = {Computer composition sound processing}
}
@article{mathews1963,
title = {The {{Digital Computer}} as a {{Musical Instrument}}},
author = {Mathews, M.V.},
@@ -339,6 +456,49 @@
file = {/Users/tomoya/Zotero/storage/GFPCD4VD/full-text.pdf;/Users/tomoya/Zotero/storage/ZAQ37PDB/Mathews, Roads - 1980 - Interview with Max Mathews.pdf}
}
@inproceedings{matsuura_lambda-mmm_2024,
title = {Lambda-Mmm: The {{Intermediate Representation}} for {{Synchronous Signal Processing Language Based}} on {{Lambda Calculus}}},
booktitle = {Proceedings of the 4th {{International Faust Conference}}},
author = {Matsuura, Tomoya},
year = {2024},
pages = {17--25},
abstract = {This paper proposes {$\lambda$}mmm, a call-by-value, simply typed lambda calculus-based intermediate representation for a music programming language that handles synchronous signal processing and introduces a virtual machine and instruction set to execute {$\lambda$}mmm. Digital signal processing is represented by a syntax that incorporates the internal states of delay and feedback into the lambda calculus. {$\lambda$}mmm extends the lambda calculus, allowing users to construct generative signal processing graphs and execute them with consistent semantics. However, a challenge arises when handling higher-order functions because users must determine whether execution occurs within the global environment or during DSP execution. This issue can potentially be resolved through multi-stage computation.},
copyright = {All rights reserved},
isbn = {978-2-9597911-0-9},
language = {en},
file = {/Users/tomoya/Zotero/storage/X9PF87WL/Matsuura - 2024 - Lambda-mmm the Intermediate Representation for Sy.pdf}
}
@inproceedings{matsuura_mimium_2021,
title = {Mimium: {{A Self-Extensible Programming Language}} for {{Sound}} and {{Music}}},
shorttitle = {Mimium},
booktitle = {Proceedings of the 9th {{ACM SIGPLAN International Workshop}} on {{Functional Art}}, {{Music}}, {{Modelling}}, and {{Design}}},
author = {Matsuura, Tomoya and Jo, Kazuhiro},
year = {2021},
month = aug,
series = {{{FARM}} 2021},
pages = {1--12},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
doi = {10.1145/3471872.3472969},
urldate = {2024-07-09},
abstract = {We propose a programming language for music named mimium, which combines temporal-discrete control and signal processing in a single language. mimium has an intuitive imperative syntax and can use stateful functions as Unit Generator in the same way as ordinary function definitions and applications. Furthermore, the runtime performance is made equivalent to that of lower-level languages by compiling the code through the LLVM compiler infrastructure. By using the strategy of adding a minimum number of features for sound to the design and implementation of a general-purpose functional language, mimium is expected to lower the learning cost for users, simplify the implementation of compilers, and increase the self-extensibility of the language. In this paper, we present the basic language specification, semantics for simple task scheduling, the semantics for stateful functions, and the compilation process. mimium has certain specifications that have not been achieved in existing languages. Future works suggested include extending the compiler functionality to combine task scheduling with the functional paradigm and introducing multi-stage computation for parametric replication of stateful functions.},
copyright = {All rights reserved},
isbn = {978-1-4503-8613-5},
file = {/Users/tomoya/Zotero/storage/ERG4LFIZ/Matsuura and Jo - 2021 - mimium A Self-Extensible Programming Language for.pdf;/Users/tomoya/Zotero/storage/TDBLJQTL/Matsuura and Jo - 2021 - mimium a self-extensible programming language for.pdf}
}
@inproceedings{mccartney_supercollider_1996,
title = {{{SuperCollider}}, a {{New Real Time Synthesis Language}}},
booktitle = {International {{Computer Music Conference Proceedings}}},
author = {McCartney, James},
year = {1996},
publisher = {Michigan Publishing},
issn = {2223-3881},
urldate = {2021-10-12},
file = {/Users/tomoya/Zotero/storage/5WDUN5YL/supercollider-a-new-real-time-synthesis-language.pdf}
}
@article{McCartney2002,
title = {Rethinking the Computer Music Language: {{SuperCollider}}},
author = {McCartney, James},
@@ -439,6 +599,14 @@
file = {/Users/tomoya/Zotero/storage/JVBK3LZK/Nishino, Nakatsu - 2016 - Computer Music Languages and Systems The Synergy Between Technology and Creativity.pdf;/Users/tomoya/Zotero/storage/UKFT5TD2/Nishino, Nakatsu_2016_Handbook of Digital Games and Entertainment Technologies.pdf}
}
@phdthesis{norilo_kronos_2016,
title = {Kronos: {{Reimagining}} Musical Signal Processing},
author = {Norilo, Vesa},
year = {2016},
school = {University of the Arts Helsinki},
file = {/Users/tomoya/Zotero/storage/DIJ6Q8UF/sisus_b51.pdf;/Users/tomoya/Zotero/storage/KLHBHLZZ/sisus_b51.pdf}
}
@article{norilo2015,
title = {Kronos: {{A Declarative Metaprogramming Language}} for {{Digital Signal Processing}}},
author = {Norilo, Vesa},
@@ -467,15 +635,49 @@
file = {/Users/tomoya/Zotero/storage/MDQ8W5KZ/nyquist1928.pdf}
}
@incollection{Orlarey2009,
title = {{{FAUST}}: An {{Efficient Functional Approach}} to {{DSP Programming}}},
booktitle = {New {{Computational Paradigms}} for {{Computer Music}}},
author = {Orlarey, Yann and Fober, Dominique and Letz, St{\'e}phane},
year = {2009},
publisher = {DELATOUR FRANCE},
urldate = {2020-03-28},
file = {/Users/tomoya/Zotero/storage/LB4PIMPY/full-text.pdf}
}
@misc{ostertag1998,
title = {Why {{Computer Music Sucks}}},
author = {Ostertag, Bob},
year = {1998},
urldate = {2025-01-17},
howpublished = {\url{https://web.archive.org/web/20160312125123/http://bobostertag.com/writings-articles-computer-music-sucks.htm}},
howpublished = {https://web.archive.org/web/20160312125123/http://bobostertag.com/writings-articles-computer-music-sucks.htm},
file = {/Users/tomoya/Zotero/storage/9QAGQSVS/writings-articles-computer-music-sucks.html}
}
@misc{puckette_47_2020,
title = {47 {$\bullet$} {{Miller Puckette}} {$\bullet$} {{Max}}/{{MSP}} \& {{Pure Data}}},
author = {Reese, Ivan},
year = {2020},
month = may,
journal = {Future of Coding},
number = {47},
urldate = {2022-01-23},
abstract = {Are you looking for the real computer revolution? Join the club! Future of Coding is a podcast and community of toolmakers, researchers, and creators working together to reimagine computing.},
collaborator = {Puckette, Miller S.},
language = {english},
file = {/Users/tomoya/Zotero/storage/E4PL98DG/047.html}
}
@inproceedings{puckette_pure_1997,
title = {Pure {{Data}}},
booktitle = {International {{Computer Music Conference Proceedings}}},
author = {Puckette, Miller},
year = {1997},
publisher = {Michigan Publishing, University of Michigan Library},
issn = {2223-3881},
file = {/Users/tomoya/Zotero/storage/E5VQAJSD/puredata_icmc97.pdf}
}
@article{puckette2015,
title = {The {{Sampling Theorem}} and {{Its Discontents}}},
author = {Puckette, Miller},
@@ -499,6 +701,30 @@
language = {en}
}
@misc{ruiz_vult_2020,
title = {Vult {{Language}}},
author = {Ruiz, Leonardo Laguna},
year = {2020},
urldate = {2020-09-27}
}
@misc{Ruiz2020,
title = {Vult {{Language}}},
author = {Ruiz, Leonardo Laguna},
year = {2020},
urldate = {2024-11-27},
howpublished = {http://modlfo.github.io/vult/}
}
@inproceedings{Salazar2012,
title = {{{CHUGENS}}, {{CHUBGRAPHS}}, {{CHUGINS}}: 3 {{TIERS FOR EXTENDING CHUCK}}},
booktitle = {International {{Computer Music Conference Proceedings}}},
author = {Salazar, Spencer and Wang, Ge},
year = {2012},
pages = {60--63},
file = {/Users/tomoya/Zotero/storage/6XY3DR2B/chugens-chubgraphs-chugins-3-tiers-for-extending-chuck.pdf}
}
@article{scheirer1999,
title = {{{SAOL}}: {{The MPEG-4 Structured Audio Orchestra Language}}},
shorttitle = {{{SAOL}}},
@@ -516,6 +742,16 @@
file = {/Users/tomoya/Zotero/storage/NIULED49/Scheirer and Vercoe - 1999 - SAOL The MPEG-4 Structured Audio Orchestra Langua.pdf;/Users/tomoya/Zotero/storage/U9MFTBDB/Scheirer and Vercoe - 1999 - SAOL The MPEG-4 Structured Audio Orchestra Langua.pdf}
}
@phdthesis{sorensen_extempore:_2018,
title = {Extempore: {{The}} Design, Implementation and Application of a Cyber-Physical Programming Language},
author = {Sorensen, Andrew Carl},
year = {2018},
doi = {10.25911/5D67B75C3AAF0},
school = {The Australian National University},
keywords = {Computer Music,Cyber,Extempore,High Performance Computing,Human Computer Interaction,Live Coding,Live Programming,Physical Programming},
file = {/Users/tomoya/Zotero/storage/5HUUW8EZ/full-text.pdf;/Users/tomoya/Zotero/storage/B2JYT8R8/Sorensen - 2018 - Extempore The design, implementation and application of a cyber-physical programming language(3).pdf}
}
@article{Spinellis2001,
title = {Notable Design Patterns for Domain-Specific Languages},
author = {Spinellis, Diomidis},
@@ -615,6 +851,67 @@
annotation = {title translation by the author.}
}
@misc{taylor_article_1999,
title = {Article: {{An Interview With David Wessel}} {\textbar} {{Cycling}} '74},
shorttitle = {Article},
author = {Taylor, Gregory},
year = {1999},
urldate = {2022-01-20},
abstract = {David Wessel is Professor of Music at the University of California, Berkeley where he directs the Center for New Music and Audio Technologies (CNMAT). Wessel worked at IRCAM between 1979 and 1988; his activities there included starting the department where Miller Puckette first began working on Max on a Macintosh. Since Wessel's arrival in Berkeley over ten years ago, CNMAT has been actively involved in teaching Max/MSP as well as developing freely available Max-based software projects.},
howpublished = {https://cycling74.com/articles/an-interview-with-david-wessel},
language = {en},
file = {/Users/tomoya/Zotero/storage/ZM7E9L9Q/an-interview-with-david-wessel.html}
}
@book{theberge_any_1997,
title = {Any Sound You Can Imagine: Making Music/Consuming Technology},
shorttitle = {Any Sound You Can Imagine},
author = {Th{\'e}berge, Paul},
year = {1997},
series = {Music/Culture},
publisher = {Wesleyan University Press : University Press of New England},
address = {Hanover, NH},
isbn = {978-0-8195-5307-2 978-0-8195-6309-5},
lccn = {ML1092 .T38 1997},
keywords = {Computer sound processing,Electronic musical instruments,Music and technology}
}
@article{theberge_any_2023,
title = {Any {{Sound You Can Imagine}}: {{Then}} and Now},
shorttitle = {Any {{Sound You Can Imagine}}},
author = {Th{\'e}berge, Paul},
year = {2023},
month = jun,
journal = {Journal of Popular Music Education},
volume = {7},
number = {The 25th Anniversary Release of Th{\'e}berge's Any Sound You Can Imagine: Making Music/Consuming Technology},
pages = {219--229},
publisher = {Intellect},
issn = {2397-6721, 2397-673X},
doi = {10.1386/jpme_00115_1},
urldate = {2025-01-22},
abstract = {During the 25 years since the publication of my book, Any Sound You Can Imagine: Making Music/Consuming Technology, a number of technological developments and theoretical trends have emerged: among them, the integration of music production within Digital Audio Workstation (DAW) platforms, and the rise of social media as a means for information sharing among musicians, on the one hand; and the emergence, in popular music studies, of practice-based and community-oriented forms of music research and pedagogy, on the other. In addition, new technologies and applications of artificial intelligence (AI) have begun to have an impact on music-making and listening at every level. These developments are discussed in relation to theoretical issues of innovation, production, consumption and gender found in my previous work and, more specifically, in relation to concerns raised in a number of articles in the present volume, using them as a springboard for further reflection and theorizing.},
language = {en},
file = {/Users/tomoya/Zotero/storage/4FJLP4DZ/Théberge - 2023 - Any Sound You Can Imagine Then and now.pdf;/Users/tomoya/Zotero/storage/EHEXPCGE/jpme_00115_1.html}
}
@misc{toplap_manifestodraft_2004,
title = {{{ManifestoDraft}} - {{Toplap}}},
author = {{TOPLAP}},
year = {2004},
urldate = {2025-01-26},
howpublished = {https://toplap.org/wiki/ManifestoDraft}
}
@article{vercoe_computer_1983,
title = {Computer {{Systems}} and {{Languages}} for {{Audio Research}}},
author = {Vercoe, Barry L.},
year = {1983},
journal = {The New World of Digital Audio (Audio Engineering Society Special Edition)},
pages = {245--250},
file = {/Users/tomoya/Zotero/storage/5FWAAURE/Vercoe - Computer Systems and Languages for Audio Research.pdf}
}
@inproceedings{wakefield2010,
title = {{{LuaAV}}: {{Extensibility}} and {{Heterogeneity}} for {{Audiovisual Computing}}},
booktitle = {Proceeding of {{Linux Audio Conference}}},
@@ -626,6 +923,19 @@
file = {/Users/tomoya/Zotero/storage/C8WADNNI/full-text.pdf}
}
@article{wang_chuck_2015,
title = {{{ChucK}}: {{A Strongly Timed Computer Music Language}}},
author = {Wang, Ge and Cook, Perry R and Salazar, Spencer},
year = {2015},
journal = {Computer Music Journal},
volume = {39},
number = {4},
pages = {10--29},
doi = {10.1162/COMJ_a_00324},
abstract = {ChucK is a programming language designed for computer music. It aims to be expressive and straightforward to read and write with respect to time and concurrency, and to provide a platform for precise audio synthesis and analysis and for rapid experimentation in computer music. In particular, ChucK defines the notion of a strongly timed audio programming language, comprising a versatile time-based programming model that allows programmers to flexibly and precisely control the flow of time in code and use the keyword now as a time-aware control construct, and gives programmers the ability to use the timing mechanism to realize sample-accurate concurrent programming. Several case studies are presented that illustrate the workings, properties, and personality of the language. We also discuss applications of ChucK in laptop orchestras, computer music pedagogy, and mobile music instruments. Properties and affordances of the language and its future directions are outlined.},
file = {/Users/tomoya/Zotero/storage/4BFQ6VDF/Wang, Cook, Salazar - 2015 - ChucK A Strongly Timed Computer Music Language.pdf}
}
@incollection{wang2017,
title = {A {{History}} of {{Programming}} and {{Music}}},
booktitle = {Cambridge {{Companion}} to {{Electronic Music}}},

191
main.md
View File

@@ -1,73 +1,69 @@
## Introduction
Programming languages and environments for music have developed hand in hand with the history of creating music using computers. Software like Max, Pure Data, CSound, and SuperCollider has been referred to as "Computer Music Language"[@McCartney2002;@Nishino2016;@McPherson2020], "Language for Computer Music"[@Dannenberg2018], and "Computer Music Programming Systems"[@Lazzarini2013], though there is no clear consensus on the use of these terms. However, as the term "Computer Music" suggests, these programming languages are deeply intertwined with the history of technology-driven music, which developed under the premise that "almost any sound can be produced"[@mathews_acoustic_1961] through the use of computers.
Programming languages and environments for music have developed hand in hand with the history of creating music using computers. Software and systems like Max, Pure Data, CSound, and SuperCollider have been referred to as "Computer Music Language"[@McCartney2002;@Nishino2016;@McPherson2020], "Language for Computer Music"[@Dannenberg2018], and "Computer Music Programming Systems"[@Lazzarini2013], though there is no clear consensus on the use of these terms. However, as the shared term "Computer Music" implies, these programming languages are deeply intertwined with the history of technology-driven music, which developed under the premise that "almost any sound can be produced"[@mathews_acoustic_1961] through the use of computers.
In the early days, when computers were confined to university research laboratories and neither displays nor mice existed, creating sound or music with computers was inevitably linked to programming. Today, however, using programming as a means to produce sound on a computer—rather than employing DAW (Digital Audio Workstation) software—is somewhat specialized. In other words, programming languages for music developed after the proliferation of personal computers are software that deliberately choose programming (whether textual or graphical) as their frontend for sound generation.
In the early days, when computers were confined to research laboratories and neither displays nor mice existed, creating sound or music with computers inevitably meant programming. Today, however, programming as a means to produce sound on a computer—rather than employing Digital Audio Workstation (DAW) software like Pro Tools—is unusual. In other words, programming languages for music developed after the proliferation of personal computers are software that intentionally chose programming (whether textual or graphical) as their frontend for making sound.
Since the 1990s, theoretical advancements in programming languages and the various constraints required for real-time audio processing have significantly increased the specialized knowledge needed to develop programming languages for music. Furthermore, some music-related languages developed after the 2000s are not necessarily aimed at pursuing new forms of musical expression. There appears to be no unified perspective on how to evaluate such languages.
The ultimate goal of this paper is to introduce the framework of "weak computer music," referring to music mediated by computers in a non-style-specific manner. This framework aims to decouple the evaluation of programming language design and development for music from specific styles and the ideologies associated with computer music.
Since the 1990s, the theoretical development of programming languages and the various constraints required for real-time audio processing have significantly increased the specialized knowledge necessary for developing programming languages for music. Furthermore, some languages developed after the 2000s are not necessarily aimed at pursuing new forms of musical expression. There is still no unified perspective on how the value of such languages should be assessed.
In this paper, a critical historical review is conducted by drawing on discussions from sound studies alongside existing surveys, aiming to consider programming languages for music independently of computer music as a specific genre.
### Use of the Term "Computer Music"
Despite its potential broad application, the term "computer music" has been repeatedly noted since the 1990s as being used within a narrowly defined framework, tied to specific styles or communities[@ostertag1998].
The term "Computer Music," despite its literal and potential broad meaning, has been noted as being used within a narrowly defined framework tied to specific styles or communities, as represented in Ostartag's *Why Computer Music Sucks*[@ostertag1998] since the 1990s.
The necessity of using the term "computer music" for such academic contexts (particularly those centered around the International Computer Music Conference, or ICMC) has diminished over time. Lyon argues that defining computer music as simply "music made using computers" is too permissive, while defining it as "music that could not exist without computers" is overly strict, complicating the evaluation of analog modeling synthesizers implemented on computers. Lyon questions the utility of the term itself, comparing its consideration to that of "piano music," which ignores the styles within it[@lyon2006].
As Lyon observed nearly two decades ago, it is now nearly impossible to imagine a situation in which computers are not involved at any stage from production to experience of music[@lyon_we_2006, p1]. The necessity of using the term "Computer Music" to describe academic contexts, particularly those centered around the ICMC, has consequently diminished.
As Ostertag and Lyon observed, it has become increasingly difficult to envision a situation where computers are absent from the production and experience of music today, particularly in commercial contexts[^nonelectric]. Nevertheless, the majority of music in the world could be described as "simply using computers."
Holbrook and Rudi continued Lyon's discussion by proposing the use of frameworks like Post-Acousmatic[@adkins2016] to redefine "Computer Music." Their approach incorporates the tradition of pre-computer experimental/electronic music, situating it as part of the broader continuum of technology-based or technology-driven music[@holbrook2022].
[^nonelectric]: Of course, the realm of music extends beyond the numbers processed by computers or the oscillations of speaker diaphragms. This paper does not seek to intervene in aesthetic judgments regarding music made without computers or non-commercial musical activities. However, the existence of such music does not counter the awareness that there is little analysis of the inevitable involvement of computing as a medium in the field of popular music, which attracts significant academic and societal interest.
While a strict definition of Post-Acousmatic music is deliberately withheld, one of its elements is the expansion of music production from institutional settings to individuals and the accompanying diversification of how the technology is used[@adkins2016, p113]. However, while the Post-Acousmatic discourse integrates the historical fact that declining computer costs and access beyond laboratories have enabled diverse musical expressions, it simultaneously marginalizes much of the music that is "just using computers" and fails to provide insights into this divided landscape.
Holbrook and Rudi propose analyzing what has been called computer music within the framework of post-acousmatic music[@adkins2016], including traditions of pre-computer electronic music as one of many forms of technology-based/driven music[@holbrook2022].
Lyon argues that defining computer music simply as music created with computers is too permissive, while defining it as music that could not exist without computers is too strict. He highlights the difficulty of considering instruments that use digital simulations, such as virtual analog synthesizers, within these definitions. Furthermore, he suggests that the term "computer music" is a style-agnostic label, almost like "piano music," implying that it ignores the styles and forms of the music produced with those instruments.
A critical issue with these discussions is that post-acousmatic music lacks a precise definition. One proposed characteristic is the shift in the locus of production from institutions to individuals, which has altered how technology is used[@adkins2016,p113]. However, this narrative incorporates a tautological issue: while it acknowledges the historical fact that the decreasing cost of computers allowed diverse musical expressions outside laboratories, it excludes much music as "simply using computers" and fails to provide insights into such divisions.
However, one of the defining characteristics of computers as a medium lies in their ability to treat musical styles themselves as subjects of meta-manipulation through simulation and modeling. When creating instruments with computers, or when using such instruments, sound production involves programming—manipulating symbols embedded in a particular musical culture. This recursive embedding of the language and perception constituting that musical culture into the resulting music is a process that goes beyond what is possible with acoustic instruments or analog electronic instruments. Magnusson refers to this characteristic of digital instruments as "Epistemic Tools" and points out that they tend to work in the direction of reinforcing and solidifying musical culture:
The spread of personal computers has incompletely achieved the vision of metamedium as a device users could modify themselves, instead becoming a black box for content consumption[@emerson2014]. Histories highlighting the agency of those who created programming environments, software, protocols, and formats for music obscure indirect power relationships generated by the infrastructure[@sterne_there_2014].
> The act of formalising is therefore always an act of fossilisation. As opposed to the acoustic instrument maker, the designer of the composed digital instrument frames affordances through symbolic design, thereby creating a snapshot of musical theory, freezing musical culture in time. [@Magnusson2009,p173]
Today, while music production fundamentally depends on computers, most of it falls under Lyon's overlapping permissive and strict definitions of computer music. In this paper, I propose calling this situation the following:
> "Weak computer music" — music for which computers are essential to its realization, but where the uniqueness of the work as intended by the creator is not particularly tied to the use of computers.
Today, many people use computers for music production not because they consciously leverage the uniqueness of the meta-medium, but simply because there are no quicker or more convenient alternatives available. Even so, within a musical culture where computers are used as a default or reluctant choice, musicians are inevitably influenced by the underlying infrastructures like software, protocols, and formats. As long as the history of programming languages for music remains intertwined with the history of computer music as it relates to specific genres or communities, it becomes difficult to analyze music created with computers as a passive means.
Most people use computers simply because no quicker alternative exists, not because they are deliberately leveraging the unique medium of computers for music production. However, the possibility that such music culture, shaped by the incidental use of computers, has aesthetic and social characteristics worth analyzing cannot be dismissed.
In this paper, the history of programming languages for music is reexamined with an approach that, opposite from Lyon, takes an extremely style-agnostic perspective. Rather than focusing on what has been created with these tools, the emphasis is placed on how these tools themselves have been constructed. The paper centers on the following two topics:
This paper will historically organize the specifications and construction of programming languages for early computer music systems with a focus on their style-agnostic nature.
1. A critique of the universality of sound representation using pulse-code modulation (PCM), the foundational concept underlying most of today's sound programming, by referencing early attempts of sound generation using electronic computers.
2. An examination of the MUSIC-N family, the origin of PCM-based sound synthesis, to highlight that its design varies significantly across systems from the perspective of modern programming language design and that it has evolved over time into a black box, eliminating the need for users to understand its internal workings.
- Examining the discourse framing MUSIC as the progenitor of computer music.
- Investigating what aspects were excluded from user access in MUSIC-N derivatives such as MUSIGOL.
- Analyzing the standardization of UGens (unit generators) and the division of labor in Max and Pure Data.
- Reviewing music programming languages of the 2000s.
Ultimately, the paper concludes that programming languages for music developed since the 2000s are not solely aimed at creating new music but also serve as alternatives to the often-invisible technological infrastructures surrounding music, such as formats and protocols. By doing so, the paper proposes new perspectives for the historical study of music created with computers.
## PCM and Early Computer Music
The conclusion will propose a framework necessary for future discussions on music programming languages.
Among the earliest examples of computer music research, the MUSIC I system (1957) from Bell Labs and its derivatives, known as MUSIC-N, are frequently highlighted. However, attempts to create music with computers in the UK and Australia prior to MUSIC I have also been documented[@doornbusch2017]. Organizing what was achieved by MUSIC-N and earlier efforts can help clarify definitions of computer music.
## Born of "Computer Music" - MUSIC-N and PCM Universality
The earliest experiments with sound generation on computers in the 1950s involved controlling the intervals between one-bit pulses (on or off) to control pitch. This was partly because the operational clock frequencies of early computers fell within the audible range, making the sonification of electrical signals a practical and cost-effective debugging method compared to visualizing them on displays or oscilloscopes. Some computers of this era, such as Australia's CSIR Mark I (CSIRAC), had primitive "hoot" instructions that emitted a single pulse to a speaker.
Among the earliest examples of computer music research, the MUSIC I system (1957) from Bell Labs and its derivatives, known as MUSIC-N, are frequently highlighted. However, attempts to create music with computers in the UK and Australia prior to MUSIC I have also been documented[@doornbusch2017].
In 1949, music was played on the BINAC in the United States: engineer Louis Wilson noticed that an AM radio placed nearby could pick up weak electromagnetic waves generated during the switching of vacuum tubes, producing regular sounds. He leveraged this phenomenon by connecting a speaker and a power amplifier to the computer's output, using the setup to assist in debugging. Frances Elizabeth Holberton took this a step further by programming the computer to generate pulses at arbitrary intervals, creating melodies [@woltman1990]. The sound generation on the BINAC and the CSIR Mark I represents early instances of using computers to play melodies of existing music.
Organizing what was achieved by MUSIC-N and earlier efforts can help clarify definitions of computer music.
The earliest experiments with sound generation on computers in the 1950s involved controlling the intervals between one-bit pulses (on or off) to control pitch. This was partly because the operational clock frequencies of early computers fell within the audible range, making the sonification of electrical signals a practical and cost-effective debugging method compared to visualizing them on displays or oscilloscopes. Computers like Australia's CSIR Mark I even featured primitive instructions like a "hoot" command to emit a single pulse to a speaker.
In the UK, Louis Wilson discovered that an AM radio near the BINAC computer picked up electromagnetic waves generated by vacuum tube switching, producing regular tones. This serendipitous discovery led to the intentional programming of pulse intervals to generate melodies[@woltman1990].
However, not all sound generation prior to PCM (Pulse Code Modulation) was merely the reproduction of existing music. Doornbusch highlights experiments on the British Pilot ACE (Prototype for Automatic Computing Engine: ACE), which utilized acoustic delay line memory to produce unique sounds[@doornbusch2017, p303-304]. Acoustic delay line memory, used as main memory in early computers like BINAC and CSIR Mark I, employed the feedback of pulses traveling through mercury via a speaker and microphone setup to retain data. Donald Davis, an engineer on the ACE project, described the sounds it produced as follows[@davis_very_1994, p19-20]:
However, not all sound generation at this time was merely the reproduction of existing music. Doornbusch highlights experiments on the British Pilot ACE (the prototype of the Automatic Computing Engine: ACE), which utilized acoustic delay line memory to produce unique sounds[@doornbusch2017, p303-304]. Acoustic delay line memory, used as main memory in early computers like BINAC and CSIR Mark I, employed the feedback of pulses traveling through mercury via a speaker and microphone setup to retain data. Donald Davies, an engineer on the ACE project, described the sounds it produced as follows[@davis_very_1994, p19-20]:
> The Ace Pilot Model and its successor, the Ace proper, were both capable of composing their own music and playing it on a little speaker built into the control desk. I say composing because no human had any intentional part in choosing the notes. The music was very interesting, though atonal, and began by playing rising arpeggios: these gradually became more complex and faster, like a developing fugue. They dissolved into colored noise as the complexity went beyond human understanding.
>
> Loops were always multiples of 32 microseconds long, so notes had frequencies which were submultiples of 31.25 KHz. The music was based on a very strange scale, which was nothing like equal tempered or harmonic, but was quite pleasant. This music arose unintentionally during program optimization and was made possible by "misusing" switches installed for debugging acoustic delay line memory (p20).
> Loops were always multiples of 32 microseconds long, so notes had frequencies which were submultiples of 31.25 KHz. The music was based on a very strange scale, which was nothing like equal tempered or harmonic, but was quite pleasant.
Media scholar Miyazaki described the practice of listening to sounds generated by algorithms and their bit patterns, integrated into programming and debugging, as "Algo*rhythmic* Listening"[@miyazaki2012].
This music arose unintentionally during program optimization and was made possible by "misusing" switches installed for debugging acoustic delay line memory (p20). Media scholar Miyazaki described the practice of listening to sounds generated by algorithms and their bit patterns, integrated into programming and debugging, as "Algo*rhythmic* Listening"[@miyazaki2012].
Doornbusch warns against ignoring early computer music practices in Australia and the UK simply because they did not directly influence subsequent research[@doornbusch2017, p305]. Indeed, the tendency to treat pre-MUSIC attempts as hobbyist efforts by engineers and post-MUSIC endeavors as serious research remains common even today[@tanaka_all_2017].
Doornbusch warns against ignoring early computer music practices in Australia and the UK simply because they did not directly influence subsequent research[@doornbusch2017, p305]. Indeed, the tendency to treat pre-MUSIC attempts as hobbyist efforts by engineers and post-MUSIC endeavors as "serious" research remains common even today[@tanaka_all_2017].
The sounds generated by Pilot ACE challenge the post-acousmatic narrative that computer music transitioned from laboratory-based professional practices to personal use by amateurs. This is because: 1. The sounds were produced not by music specialists but by engineers, and 2. The sounds were tied to hardware-specific characteristics of acoustic delay line memory, making them difficult to replicate even with modern audio programming environments. Similarly, at MIT in the 1960s, Peter Samson utilized a debug speaker attached to the aging TX-0 computer to experiment with generating melodies using square waves[@levy_hackers_2010].
The sounds produced by the Pilot ACE challenge the post-acousmatic historical narrative, which suggests that computer music transitioned from being confined to specialized laboratories to becoming accessible to individuals, including amateurs.
This effort evolved into a program that allowed users to describe melodies with text strings. For instance, writing `4fs t8` would produce an F4 note as an eighth note. Samson later adapted this work to the PDP-1 computer, creating the "Harmony Compiler," widely used by MIT students. He also developed the Samson Box in the early 1970s, a computer music system used at Stanford University's CCRMA for over a decade[@loy_life_2013]. These examples suggest that the initial purpose of debugging does not warrant segregating early computational sound generation from the broader history of computer music.
This is because the sounds generated by the Pilot ACE were not created by musical experts, nor were they solely intended for debugging purposes. Instead, they were programmed with the goal of producing interesting sounds. Moreover, the sounds were tied to the hardware of the acoustic delay line memory—a feature that was likely difficult to replicate, even in modern audio programming environments.
### Universality of PCM
Similarly, in the 1960s at MIT, Peter Samson took advantage of the debugging speaker on the TX-0, a machine that had become outdated and freely available for students to use. He conducted experiments where he played melodies, such as Bach fugues, using square waves [@levy_hackers_2010]. Samson's experiments with the TX-0 later evolved into a program, used within MIT, that allowed melodies to be described using text strings.
Let us examine **Pulse Code Modulation (PCM)**—a foundational aspect of MUSIC's legacy and one of the key reasons it is considered a milestone in the history of computer music. PCM enables the theoretical representation of "almost any sound" on a computer by dividing audio waveforms into discrete intervals (sampling) and expressing the amplitude of each interval as quantized numerical values. It remains the fundamental representation of sound on modern computers. The underlying sampling theorem was introduced by Nyquist in 1928[@Nyquist1928], and PCM itself was developed by Reeves in 1938.
Building on this, Samson developed a program called the Harmony Compiler for the DEC PDP-1, a machine derived from the TX-0. This program gained significant popularity among MIT students. Around 1972, Samson began surveying the various digital synthesizers being developed at the time and went on to create a system specialized for computer music. The resulting Samson Box was used at Stanford University's CCRMA (Center for Computer Research in Music and Acoustics) for over a decade until the early 1990s and became a tool for many composers to create their works [@loy_life_2013]. Considering Samson's example, it is not appropriate to separate the early experiments in sound generation by computers from the history of computer music solely because their initial purpose was debugging.
### Acousmatic Listening, the Premise of the Universality of PCM
A critical issue with the "post-acousmatic" framework in computer music history lies within the term "acousmatic" itself. Initially proposed by Piegnot and later theorized by Schaeffer, the term describes a mode of listening to tape music, such as musique concrète, in which the listener does not imagine a specific sound source. It has been widely applied in theories of recorded sound, including Chion's analyses of sound design in visual media.
One reason MUSIC led to subsequent advancements in research is not simply that it was developed early, but that it was the first to implement sound representation on a computer based on **pulse-code modulation (PCM)**, which theoretically enables the representation of "almost any sound."
PCM, the foundational method of sound representation on today's computers, involves dividing audio waveforms into discrete intervals (sampling) and representing the sound pressure at each interval as discrete numerical values (quantization).
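To make the two steps concrete, the following minimal Python sketch (an illustration added here, not code from any MUSIC system; the 44.1 kHz sample rate and 16-bit depth are arbitrary but common assumptions) samples a sine wave at discrete time steps and quantizes each amplitude to a signed integer.

~~~python
import math

SAMPLE_RATE = 44100   # samples per second (assumed, CD-quality)
BIT_DEPTH = 16        # bits per sample (assumed)

def pcm_encode(freq_hz=440.0, duration_sec=0.01):
    """Sample a sine wave (sampling) and round each amplitude to a signed
    integer of BIT_DEPTH bits (quantization)."""
    max_amp = 2 ** (BIT_DEPTH - 1) - 1                 # 32767 for 16 bits
    n_samples = int(SAMPLE_RATE * duration_sec)
    samples = []
    for n in range(n_samples):
        t = n / SAMPLE_RATE                            # discrete time step
        value = math.sin(2 * math.pi * freq_hz * t)    # amplitude in [-1, 1]
        samples.append(round(value * max_amp))         # quantized integer
    return samples

print(pcm_encode()[:8])  # the first few quantized sample values
~~~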
The issue with the universalism of PCM in the history of computer music is inherent in the concept of the Acousmatic, which serves as a premise for the Post-Acousmatic. Acousmatic listening, introduced by Peignot as a listening style for tape music such as musique concrète and later theorized by Schaeffer, refers to a mode of listening where the listener refrains from imagining a specific sound source. This concept has been widely applied in theories of listening to recorded sound, including Chion's analysis of sound design in film.
However, as sound studies scholar Jonathan Sterne has pointed out, discourses surrounding acousmatic listening often work to delineate pre-recording auditory experiences as "natural" by contrast[^husserl]. This implies that prior to the advent of recording technologies, listening was unmediated and holistic—a narrative that obscures the constructed nature of these assumptions.
@@ -83,5 +79,122 @@ By the way, the actual implementation of PCM in MUSIC I only allowed for monopho
Even when considering more contemporary applications, processes like ring modulation (RM), amplitude modulation (AM), or distortion often generate aliasing artifacts unless proper oversampling is applied. These artifacts occur because PCM, while universally suitable for reproducing recorded sound, is not inherently versatile as a medium for generating new sounds. As Puckette has argued, alternative representations, such as collections of linear segments or physical modeling synthesis, present other possibilities[@puckette2015]. Therefore, PCM is not a completely universal tool for creating sound.
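As a rough illustration of the aliasing problem (a schematic sketch, not taken from the cited sources; the sample rate and test frequencies are arbitrary assumptions), ring modulation of two sinusoids produces their sum and difference frequencies, and any component above the Nyquist frequency folds back into the audible band:

~~~python
SAMPLE_RATE = 48000          # assumed
NYQUIST = SAMPLE_RATE / 2

def ring_mod_components(f1_hz, f2_hz):
    """Ring modulation of two sinusoids yields their sum and difference
    frequencies; any component above Nyquist folds back as an alias."""
    results = []
    for f in (f1_hz + f2_hz, abs(f1_hz - f2_hz)):
        alias = f if f <= NYQUIST else SAMPLE_RATE - f   # folded frequency
        results.append((f, alias))
    return results

# A 15 kHz tone ring-modulated by a 12 kHz tone: the 27 kHz sum component
# exceeds the 24 kHz Nyquist limit and is heard as a 21 kHz alias instead.
print(ring_mod_components(15000, 12000))
~~~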
## What Does the Unit Generator Hide?
Starting with version III, MUSIC adopted the form of an acoustic compiler (or block diagram compiler) that takes two types of input: a score language, which represents a list of time-varying parameters, and an orchestra language, which describes the connections between **Unit Generators** such as oscillators and filters. In this paper, the term "Unit Generator" means a signal processing module used by the user, where the internal implementation is either not open or implemented in a language different from the one used by the user.
Beyond performing sound synthesis based on PCM, one of the defining features of the MUSIC family in the context of computer music research was the establishment of a division of labor between professional musicians and computer engineers through the development of domain-specific languages. Mathews explained that he developed a compiler for MUSIC III in response to requests for additional features such as envelopes and vibrato, while also ensuring that the program would not be fixed in a static form [@mathews_max_2007, 13:10-17:50]. He repeatedly stated that his role was that of a scientist rather than a musician:
> The only answer I could see was not to make the instruments myself—not to impose my taste and ideas about instruments on the musicians—but rather to make a set of fairly universal building blocks and give the musician both the task and the freedom to put these together into his or her instruments. [@Mathews1980, p16]
> (...) When we first made these music programs the original users were not composers; they were the psychologist Guttman, John Pierce, and myself, who are fundamentally scientists. We wanted to have musicians try the system to see if they could learn the language and express themselves with it. So we looked for adventurous musicians and composers who were willing to experiment. (p17)
This clear delineation of roles between musicians and scientists became one of the defining characteristics of post-MUSIC computer music research. Paradoxically, the act of creating sounds never heard before using computers paved the way for research by allowing musicians to focus on their craft without needing to grapple with the complexities of programming.
### Example: Hiding First-Order Variables in Signal Processing
Although the MUSIC N series shares a common workflow of using a Score language and an Orchestra language, the actual implementation of each programming language varies significantly, even within the series.
One notable but often overlooked example is MUSIGOL, a derivative of MUSIC IV [@innis_sound_1968]. In MUSIGOL, not only was the system itself implemented differently, but even the user-written Score and Orchestra programs were written entirely as ALGOL 60 source code. Similar to modern frameworks like Processing or Arduino, MUSIGOL represents one of the earliest examples of a domain-specific language implemented as an internal DSL within a library[^mus10]. (Therefore, according to the definition of Unit Generator provided in this paper, MUSIGOL does not qualify as a language that uses Unit Generators.)
[^mus10]: While MUS10, used at Stanford University, was not an internal DSL, it was created by modifying an existing ALGOL parser [@loy1985, p248].
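As a loose analogy for what an "internal DSL within a library" means (hypothetical Python code, not a reconstruction of MUSIGOL; every function name here is invented for illustration), both the "orchestra" (the instrument definitions) and the "score" (the sequence of notes) are ordinary host-language code calling library functions, rather than text written in a separate special-purpose syntax:

~~~python
# Hypothetical internal-DSL sketch (invented names): the "orchestra" and the
# "score" are ordinary functions in the host language, just as MUSIGOL's were
# ordinary ALGOL 60 code, rather than text in a dedicated score/orchestra syntax.
import math

SAMPLE_RATE = 44100  # assumed

def oscillator(freq_hz, duration_sec, amp=0.3):
    """A library-provided building block that returns raw samples."""
    n = int(duration_sec * SAMPLE_RATE)
    return [amp * math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)
            for i in range(n)]

def score():
    """The 'score' is host-language control flow, not a dedicated file format."""
    samples = []
    for midi_note in (60, 64, 67):                     # a C major arpeggio
        freq = 440.0 * 2 ** ((midi_note - 69) / 12)    # MIDI note number to Hz
        samples.extend(oscillator(freq, 0.25))
    return samples
~~~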
The level of abstraction deemed intuitive for musicians varied across different iterations of the MUSIC N series. This can be illustrated by examining the description of a second-order band-pass filter. The filter mixes the current input signal $S_n$ with the output signals from one and two time steps prior, $O_{n-1}$ and $O_{n-2}$, weighted by the amplitude parameters $I_1$, $I_2$, and $I_3$, as shown in the following equation:
$$O_n = I_1 \cdot S_n + I_2 \cdot O_{n-1} - I_3 \cdot O_{n-2}$$
In MUSIC V, this band-pass filter can be used as in \ref{lst:musicv} [@mathews_technology_1969, p78].
~~~{label=lst:musicv caption="Example of the use of RESON UGen in MUSIC V."}
FLT I1 O I2 I3 Pi Pj;
~~~
Here, `I1` represents the input bus, and `O` is the output bus. The parameters `I2` and `I3` correspond to the normalized values of the coefficients $I_2$ and $I_3$, divided by $I_1$ (as a result, the overall gain of the filter can be greater or less than 1). The parameters `Pi` and `Pj` are normally used to receive parameters from the Score, specifically among the available `P0` to `P30`. In this case, however, these parameters are repurposed as general-purpose memory to temporarily store feedback signals. Similarly, other Unit Generators, such as oscillators, reuse note parameters to handle operations like phase accumulation.
As a result, users needed to manually calculate the feedback gains based on the desired frequency characteristics[^musicv], and they also had to account for reserving memory space for at least two samples.
[^musicv]: It is said that a preprocessing feature called `CONVT` could be used to transform frequency characteristics into coefficients [@mathews_technology_1969, p77].
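To illustrate what "manually calculating the feedback gains" involved, the following sketch (a present-day Python illustration, not MUSIC V code; it uses the standard two-pole resonator formulas that the MUSIC 11 listing below also computes, and assumes a 10 kHz sample rate to match that listing's constants) derives the coefficients from a desired center frequency and bandwidth and applies the difference equation while keeping the two samples of output state by hand:

~~~python
import math

SAMPLE_RATE = 10000  # assumed, matching the 10000 constants in the MUSIC 11 listing

def reson_coeffs(center_hz, bandwidth_hz):
    """Standard two-pole resonator coefficients for
    O[n] = I1*S[n] + I2*O[n-1] - I3*O[n-2]."""
    i3 = math.exp(-2 * math.pi * bandwidth_hz / SAMPLE_RATE)
    i2 = 4 * i3 * math.cos(2 * math.pi * center_hz / SAMPLE_RATE) / (1 + i3)
    i1 = (1 - i3) * math.sqrt(1 - i2 * i2 / (4 * i3))   # unity peak-gain scaling
    return i1, i2, i3

def reson(signal, center_hz, bandwidth_hz):
    """Apply the band-pass filter, keeping by hand the two output samples of
    state that MUSIC V users had to store in spare note parameters."""
    i1, i2, i3 = reson_coeffs(center_hz, bandwidth_hz)
    o1 = o2 = 0.0                                        # the hidden filter state
    out = []
    for s in signal:
        o = i1 * s + i2 * o1 - i3 * o2
        out.append(o)
        o2, o1 = o1, o
    return out
~~~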
On the other hand, in MUSIC 11, developed by Barry Vercoe, and its later iteration, CSound, the band-pass filter is defined as a Unit Generator (UGen) named `reson`. This UGen accepts four parameters: the input signal, the center frequency, the bandwidth, and a gain scaling flag. Unlike previous implementations, users no longer need to be aware of the two-sample feedback memory space for the output [@vercoe_computer_1983, p248]. However, in MUSIC 11 and CSound, it is still possible to implement this band-pass filter from scratch as a User Defined Opcode (UDO) as in \ref{lst:reson}. Vercoe emphasized that while signal processing primitives should allow for low-level operations, such as single-sample feedback, and eliminate black boxes, it is equally important to provide high-level modules that avoid unnecessary complexity ("avoid the clutter") when users do not need to understand the internal details [@vercoe_computer_1983, p247].
~~~{label=lst:reson caption="Example of scratch implementation and built-in operation of RESON UGen respectively, in MUSIC11. Retrieved from the original paper. (Comments are omitted for the space restriction.)"}
instr 1
la1 init 0
la2 init 0
i3 = exp(-6.28 * p6 / 10000)
i2 = 4*i3*cos(6.283185 * p5/10000) / (1+i3)
i1 = (1-i3) * sqrt(1 - i2*i2/(4*i3))
a1 rand p4
la3 = la2
la2 = la1
la1 = i1*a1 + i2 * la2 - i3 * la3
out la1
endin
instr 2
a1 rand p4
a1 reson a1,p5,p6,1
endin
~~~
On the other hand, in programming environments that inherit the Unit Generator paradigm, such as Pure Data [@puckette_pure_1997], Max (whose signal processing functionalities were ported from Pure Data as MSP), SuperCollider [@mccartney_supercollider_1996], and ChucK [@wang_chuck_2015], primitive UGens are implemented in general-purpose languages like C or C++. If users wish to define low-level UGens (External Objects), they need to set up a development environment for C or C++.
As an extension, ChucK later introduced ChuGen, which is equivalent to CSound's UDO, allowing users to define low-level UGens within the ChucK language itself [@Salazar2012]. However, both CSound and ChucK face performance limitations with UDOs at runtime compared to natively implemented UGens. Consequently, not all existing UGens are replaced by UDOs, which remain supplemental features rather than primary tools.
When UGens are implemented in low-level languages like C, even if the implementation is open-source, the division of knowledge effectively forces users (composers) to treat UGens as black boxes. This reliance on UGens as black boxes reflects and deepens the division of labor between musicians and scientists that Mathews helped establish—a structure that can be seen as both a cause and a result of this paradigm.
For example, Puckette, the developer of Max and Pure Data, noted that the division of labor at IRCAM between researchers, Musical Assistants/realizers, and composers has parallels in the current Max ecosystem, where the roles are divided into software developers, External Object developers, and Max users [@puckette_47_2020]. As described in the ethnography of 1980s IRCAM by anthropologist Georgina Born, the division of labor between fundamental research scientists and composers at IRCAM was extremely clear. This structure was also tied to the exclusion of popular music and its associated technologies from IRCAM's research focus [@Born1995].
However, such divisions are not necessarily the result of differences in values along the axes analyzed by Born, such as modernist/postmodernist/populist or low-tech/high-tech distinctions[^wessel]. This is because the black-boxing of technology through the division of knowledge occurs in popular music as well. Paul Théberge pointed out that the "democratization" of synthesizers in the 1980s was achieved through the concealment of technology, which transformed musicians from creators into consumers.
[^wessel]: David Wessel revealed that the individual referred to as RIG in Born's ethnography was himself and commented that Born oversimplified her portrayal of Pierre Boulez, then director of IRCAM, as a modernist. [@taylor_article_1999]
> Lacking adequate knowledge of the technical system, musicians increasingly found themselves drawn to prefabricated programs as a source of new sound material. As I have argued, however, this assertion is not simply a statement of fact; it also suggests a reconceptualization on the part of the industry of the musician as a particular type of consumer. [@theberge_any_1997, p89]
This argument can be extended beyond electronic music to encompass computer-based music in general. For example, media researcher Lori Emerson noted that while the proliferation of personal computers began with the vision of "metamedia"—tools that users could modify themselves, as exemplified by Xerox PARC's Dynabook—the vision was ultimately realized in an incomplete form through devices like the Macintosh and iPad, which distanced users from programming by black-boxing functionality [@emerson2014]. In fact, Alan Kay, the architect behind the Dynabook concept, remarked that while the iPad's appearance may resemble the ideal he originally envisioned, its lack of extensibility through programming renders it merely a device for media consumption [@kay2019].
Although programming environments are not widely used as tools for music production, the Unit Generator concept, alongside MIDI, serves as a foundational paradigm for today's consumer music production software and infrastructure, including Web Audio. The Unit Generator concept is known to have emerged either simultaneously with or even slightly before the modular synthesizer [@park_interview_2009, p20]. Nevertheless, UGen-based languages have actively incorporated the user interface metaphors of modular synthesizers: Vercoe recounted that the distinction between "ar" (audio-rate) and "kr" (control-rate) processing introduced in MUSIC 11 was inspired by the way Buchla synthesizers differentiate control and audio signals by plug type [@vercoe_barry_2012, 1:01:38–1:04:04].
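As a rough illustration of this audio-rate versus control-rate split (the block size and names below are assumptions of this sketch, not taken from any of the systems discussed), a control-rate value is recomputed once per block, while audio-rate processing touches every sample:

~~~{.cpp}
// Sketch of the "kr"/"ar" distinction: control-rate values change once per
// control period, audio-rate values once per sample. The block size is assumed.
#include <algorithm>
#include <cstddef>
#include <vector>

constexpr std::size_t kBlockSize = 64;    // one control period = 64 audio samples

struct Lfo {
    double phase = 0.0, incr = 0.001;     // a slow ramp standing in for any kr source
    double next() { phase += incr; if (phase >= 1.0) phase -= 1.0; return phase; }
};

void render(std::vector<float>& buffer, Lfo& lfo) {
    for (std::size_t start = 0; start < buffer.size(); start += kBlockSize) {
        const float gain = static_cast<float>(lfo.next());           // kr: per block
        const std::size_t end = std::min(start + kBlockSize, buffer.size());
        for (std::size_t i = start; i < end; ++i)
            buffer[i] *= gain;                                        // ar: per sample
    }
}
~~~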
However, adopting visual metaphors comes with the limitation that it constrains the complexity of representation to what is visually conceivable. In languages with visual patching interfaces like Max and Pure Data, meta-operations on UGens are often restricted to simple tasks, such as parallel duplication. Consequently, even users of Max or Pure Data may not necessarily be engaging in expressions that are only possible with computers. Instead, many might simply be using these tools as the most convenient software equivalents of modular synthesizers.
## Context of Programming Languages for Music After 2000
Based on the discussion thus far, music programming languages developed since the 2000s can be grouped into two distinct directions: those that narrow the scope of the language's role by attempting alternative abstractions at a higher level, distinct from the Unit Generator paradigm, and those that expand the general-purpose capabilities of the language in order to reduce black-boxing.
Languages that pursued alternative abstractions at higher levels have evolved alongside the culture of live coding, where performances are conducted by rewriting code in real time. The activities of the live coding community, including groups such as TOPLAP since the 2000s, were not only about turning coding itself into a performance but also served as a resistance against laptop performances that relied on black-boxed music software. This is evident in the community's manifesto, which states, "Obscurantism is dangerous" [@toplap_manifestodraft_2004].
Languages implemented as clients for SuperCollider, such as **IXI** (on Ruby) [@Magnusson2011], **Sonic Pi** (on Ruby), **Overtone** (on Clojure) [@Aaron2013], **TidalCycles** (on Haskell) [@McLean2014], and **FoxDot** (on Python) [@kirkbride2016foxdot], leverage the expressive power of more general-purpose programming languages. While embracing the UGen paradigm, they enable high-level abstractions of previously difficult-to-express elements such as note values and rhythm. For example, the pattern abstraction in TidalCycles is not limited to music but can also be applied to visual patterns and other outputs, meaning it is not inherently tied to PCM-based waveform output as the final result.
On the other hand, due to their high-level design, these languages often rely on ad hoc implementations for tasks like sound manipulation and low-level signal processing, such as effects.
McCartney, the developer of SuperCollider, once stated that if general-purpose programming languages were sufficiently expressive, there would be no need to create specialized languages [@McCartney2002]. This prediction appears reasonable when considering examples like MUSIGOL. However, in practice, scripting languages that excel in dynamic program modification face challenges in modern preemptive OS environments. For instance, dynamic memory management techniques such as garbage collection can hinder the ability to guarantee deterministic execution timing required for real-time processing [@Dannenberg2005].
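The constraint can be made concrete with a schematic audio callback (the callback signature, class names, and buffer sizes here are hypothetical, not drawn from any specific API): the callback must finish within a hard deadline of frames divided by the sample rate, so any operation with an unbounded worst case, such as heap allocation, lock contention, or a garbage-collection pause, risks an audible dropout.

~~~{.cpp}
// Schematic real-time audio callback (hypothetical host API). Everything the
// callback touches is pre-allocated on a non-real-time thread, because any
// allocation or GC pause inside the callback can miss the deadline.
#include <cstddef>
#include <vector>

struct Voice { double phase = 0.0, incr = 0.01; };   // state allocated ahead of time

struct Engine {
    std::vector<Voice> voices;

    Engine() : voices(64) {}                         // sized once, never resized here

    // Deadline: frames / sampleRate seconds. Must not allocate, lock, or block.
    void audioCallback(float* out, std::size_t frames) {
        for (std::size_t i = 0; i < frames; ++i) {
            float mix = 0.0f;
            for (Voice& v : voices) {                // iterate pre-allocated voices
                v.phase += v.incr;
                if (v.phase >= 1.0) v.phase -= 1.0;
                mix += static_cast<float>(v.phase) - 0.5f;   // naive sawtooth
            }
            out[i] = mix / static_cast<float>(voices.size());
        }
        // A call like voices.push_back(...) here could trigger a reallocation of
        // unpredictable duration, the same class of hazard that a GC pause poses
        // for scripting-language runtimes.
    }
};
~~~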
Historically, programming in languages like FORTRAN or C served as a universal, architecture-independent way to implement audio processing on computers. With the proliferation of newer general-purpose languages, however, programming in C or C++ has come to be perceived as relatively low-level and difficult, much as assembly programming once was. Furthermore, considering the challenges of portability not only across CPUs but also across diverse host environments such as operating systems and the Web, these languages are no longer as portable as they once were. Consequently, signal processing systems implemented as internal DSLs have become exceedingly rare, with only a few examples such as LuaAV [@wakefield2010].
Instead, an approach has emerged that builds general-purpose languages designed from the ground up for use in music. One prominent example is **Extempore**, a live programming environment developed by Sorensen [@sorensenExtemporeDesignImplementation2018]. Extempore combines Scheme, a LISP-based language, with xtlang, a lower-level language built on top of its Scheme environment. While xtlang requires users to write hardware-oriented type signatures similar to those in C, it leverages the LLVM compiler infrastructure [@Lattner] to just-in-time (JIT) compile signal processing code, including sound manipulation, into machine code for high-speed execution.
The expressive power of general-purpose languages and compiler infrastructures like LLVM have given rise to an approach focused on designing languages with formalized abstractions that reduce black-boxing. **Faust** [@Orlarey2009], for example, is a language that retains a graph-based structure akin to UGens but is built on a formal system called Block Diagram Algebra. This system integrates primitives for reading and writing internal states, which are essential for operations like delays and filters. Thanks to its formalization, Faust can be transpiled into general-purpose languages such as C, C++, or Rust and can also be used as an External Object in environments like Max or Pure Data.
Languages like **Kronos** [@noriloKronosReimaginingMusical2016] and **mimium** [@matsuura2021], which are based on the more general computational model of lambda calculus, focus on PCM-based signal processing while exploring interactive meta-operations on programs [@Norilo2016] and balancing self-contained semantics with interoperability with other general-purpose languages [@matsuura2024].
Domain-specific languages (DSLs) are constructed within a double bind: they aim to specialize in a particular purpose while still providing a certain degree of expressive freedom through programming. In this context, efforts like Extempore, Kronos, and mimium are not merely programming languages for music but are also situated within the broader research context of Functional Reactive Programming (FRP), which focuses on representing time-varying values in computation. Most computer hardware lacks an inherent concept of real time and instead operates based on discrete computational steps. Similarly, low-level general-purpose programming languages do not natively include primitives for real-time concepts. Consequently, the exploration of computational models tied to time—a domain inseparable from music—remains vital and has the potential to contribute to the theoretical foundations of general-purpose programming languages.
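As a toy illustration of this idea (the `Signal` type and combinators below are invented for this sketch and are not taken from Extempore, Kronos, or mimium), a time-varying value can be modeled as a pure function from a discrete step index to a sample, with combinators building new signals from existing ones; the research question is then how to compile such descriptions efficiently rather than interpreting them sample by sample.

~~~{.cpp}
// Toy FRP-style sketch (names invented here): a signal is a function from a
// discrete time step to a value, and combinators compose signals pointwise.
#include <cmath>
#include <cstdint>
#include <functional>

using Signal = std::function<double(std::uint64_t)>;   // step index -> value

Signal sine(double freqHz, double sampleRate) {
    return [=](std::uint64_t n) {
        return std::sin(6.283185307179586 * freqHz * static_cast<double>(n) / sampleRate);
    };
}

// Pointwise product of two signals, e.g. ring modulation or an amplitude envelope.
Signal mul(Signal a, Signal b) {
    return [=](std::uint64_t n) { return a(n) * b(n); };
}

// Usage: Signal s = mul(sine(440.0, 48000.0), sine(1.0, 48000.0));
// Languages such as Kronos or mimium aim to compile such compositions to efficient
// code instead of evaluating closures per sample as this sketch does.
~~~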
However, strongly formalized languages come with their own trade-offs. While they allow UGens to be defined without black-boxing, understanding the design and implementation of these languages often requires advanced knowledge. This can create a significant divide between language developers and users, in contrast to the more segmented roles seen in the Multi-Language paradigm—such as SuperCollider developers, external UGen developers, client language developers (e.g., TidalCycles), SuperCollider users, and client language users.
Although there is no clear solution to this trade-off, one intriguing idea is the development of self-hosting languages for music, that is, languages whose compilers are written in the language itself. At first glance, this may seem impractical. However, by enabling users to learn and modify the language's mechanisms spontaneously, this approach could create an environment that fosters deeper engagement and understanding among users.
## Conclusion
This paper has reexamined the history of computer music and music programming languages with a focus on the universalism of PCM and the black-boxing tendencies of the Unit Generator paradigm. Historically, it was expected that the clear division of roles between engineers and composers would enable the creation of new forms of expression using computers. Indeed, from the perspective of Post-Acousmatic discourse, some, like Holbrook and Rudi, still consider this division to be a positive development:
> Most newer tools abstract the signal processing routines and variables, making them easier to use while removing the need for understanding the underlying processes in order to create meaningful results. Composers no longer necessarily need mathematical and programming skills to use the technologies. These abstractions are important, as they hide many of the technical details and make the software and processes available to more people, and form the basis for what can arguably be seen as a new folk music. [@holbrook2022, p2]
However, this division of labor also creates a shared vocabulary—exemplified by the Unit Generator itself, pioneered by Mathews—and works to perpetuate it. By portraying new technologies as something externally introduced, and by focusing on the agency of those who create music with computers, the individuals responsible for building the programming environments, software, protocols, and formats are rendered invisible [@sterne_there_2014]. This leads to an oversight of the indirect power relationships produced by these infrastructures.
For this reason, future research on programming languages for music must address how the tools, including the languages themselves, contribute aesthetic value within musical culture (and what forms of musical practice they enable), as well as the social (im)balances of power they produce.
It has been noted in programming language research that evaluation criteria such as efficiency, expressiveness, and generality are often ambiguous [@Markstrum2010]. This issue is even more acute in fields like music, where no clear evaluation criteria exist. Thus, as McPherson et al. have proposed with the concept of Idiomaticity [@McPherson2020], we need to develop and share a vocabulary for understanding the value judgments we make about programming languages in general.
In a broader sense, the creation of programming languages for music has also expanded to the individual level. Examples include **Gwion** by Astor, which builds on ChucK and enhances its abstraction capabilities with features like lambda functions [@astorGwion2017]; **Vult**, a DSP transpiler language created by Ruiz for his modular synthesizer hardware [@ruizVultLanguage2020]; and a UGen-based live coding environment designed for web execution, **Glicol** [@lan_glicol_2020]. However, these efforts have not yet been adequately integrated into academic discourse.
Conversely, practical knowledge of university-researched languages from the past, as well as real-time hardware-oriented systems from the 1980s, is gradually being lost. While research efforts such as *Inside Computer Music*, which analyzes historical works of computer music, have begun [@clarke_inside_2020], an archaeological practice focused on the construction of computer music systems will also be necessary in the future. This includes not only collecting primary resources, such as oral archives from those involved, but also reconstructing the knowledge and practices behind these systems.

@@ -191,6 +191,17 @@
% {\tt \href{mailto:author5@ul.ie}{author5@ul.ie}}}
% {\sixthauthor} { Affiliation6 \\ %
% {\tt \href{mailto:author6@ul.ie}{author6@ul.ie}}}
\usepackage{listings}
\newcommand{\passthrough}[1]{#1} % required by pandoc for inline code listings
\lstset{
breaklines=true,
captionpos=b,
basicstyle=\ttfamily\small,
columns=fullflexible,
frame=single,
keepspaces=true,
showstringspaces=false
}
% ====================================================
@@ -210,6 +221,7 @@
\providecommand{\citep}{ \cite}% for pandoc
\def\tightlist{\itemsep1pt\parskip0pt\parsep0pt} %for pandoc
\input{content.tex}
\begin{acknowledgments}