initial commit

This commit is contained in:
2025-01-22 12:16:19 +00:00
parent c770ea7373
commit e751e91a44
9 changed files with 1356 additions and 130 deletions

content.tex Normal file

@@ -0,0 +1,291 @@
\section{Introduction}\label{introduction}
Programming languages and environments for music have developed hand in
hand with the history of creating music using computers. Software like
Max, Pure Data, Csound, and SuperCollider has been referred to as
``Computer Music
Language''\citep{McCartney2002, Nishino2016, McPherson2020}, ``Language
for Computer Music''\citep{Dannenberg2018}, and ``Computer Music
Programming Systems''\citep{Lazzarini2013}, though there is no clear
consensus on the use of these terms. However, as the term ``Computer
Music'' suggests, these programming languages are deeply intertwined
with the history of technology-driven music, which developed under the
premise that ``almost any sound can be
produced''\citep{mathews_acoustic_1961} through the use of computers.

In the early days, when computers were confined to university research
laboratories and neither displays nor mice existed, creating sound or
music with computers was inevitably linked to programming. Today,
however, using programming to produce sound on a computer---rather than
employing DAW (Digital Audio Workstation) software---is a somewhat
specialized practice. In other words, programming languages for music
developed after the proliferation of personal computers are software
that deliberately adopts programming (whether textual or graphical) as
its frontend for sound generation.

Since the 1990s, theoretical advancements in programming languages and
the various constraints required for real-time audio processing have
significantly increased the specialized knowledge needed to develop
programming languages for music. Furthermore, some music-related
languages developed after the 2000s are not necessarily aimed at
pursuing new forms of musical expression. There appears to be no unified
perspective on how to evaluate such languages.

The ultimate goal of this paper is to introduce the framework of ``weak
computer music,'' referring to music mediated by computers in a
non-style-specific manner. This framework aims to decouple the
evaluation of programming language design and development for music from
specific styles and the ideologies associated with computer music.
\subsection{Use of the Term ``Computer
Music''}\label{use-of-the-term-computer-music}
Despite its potential broad application, the term ``computer music'' has
been repeatedly noted since the 1990s as being used within a narrowly
defined framework, tied to specific styles or
communities\citep{ostertag1998}.

The necessity of using the term ``computer music'' for such academic
contexts (particularly those centered around the International Computer
Music Conference, or ICMC) has diminished over time. Lyon argues that
defining computer music as simply ``music made using computers'' is too
permissive, while defining it as ``music that could not exist without
computers'' is overly strict, complicating the evaluation of analog
modeling synthesizers implemented on computers. Lyon questions the
utility of the term itself, likening it to ``piano music,'' a label
that likewise ignores the variety of styles within it\citep{lyon2006}.

As Ostertag and Lyon observed, it has become increasingly difficult to
envision a situation where computers are absent from the production and
experience of music today, particularly in commercial
contexts\footnote{Of course, the realm of music extends beyond the
numbers processed by computers or the oscillations of speaker
diaphragms. This paper does not seek to intervene in aesthetic
judgments regarding music made without computers or non-commercial
musical activities. However, the existence of such music does not
alter the fact that the inevitable involvement of computing as a medium
in popular music, a field attracting significant academic and societal
interest, has received little analysis.}.
Nevertheless, the majority of music in the world could be described as
``simply using computers.''

Holbrook and Rudi propose analyzing what has been called computer music
within the framework of post-acousmatic music\citep{adkins2016},
including traditions of pre-computer electronic music as one of many
forms of technology-based/driven music\citep{holbrook2022}.

A critical issue with these discussions is that post-acousmatic music
lacks a precise definition. One proposed characteristic is the shift in
the locus of production from institutions to individuals, which altered
how technology is used\citep[p113]{adkins2016}. However, this narrative
harbors a circularity: while it acknowledges the historical fact that
falling computer costs enabled diverse musical expression outside the
laboratory, it still dismisses much of that music as ``simply using
computers'' and offers no insight into where that dividing line falls.

The spread of personal computers only incompletely realized the vision
of the metamedium as a device users could modify themselves; instead,
the computer became a black box for content
consumption\citep{emerson2014}. Histories that highlight the agency of
those who created programming environments, software, protocols, and
formats for music obscure the indirect power relationships generated by
this infrastructure\citep{sterne_there_2014}.

Today, while music production fundamentally depends on computers, most
of it falls into the overlap of Lyon's permissive and strict definitions
of computer music. In this paper, I propose calling this situation the
following:
\begin{quote}
``Weak computer music'' --- music for which computers are essential to
its realization, but where the uniqueness of the work as intended by the
creator is not particularly tied to the use of computers.
\end{quote}

Most people use computers simply because no quicker alternative exists,
not because they are deliberately leveraging the unique medium of
computers for music production. However, the possibility that such music
culture, shaped by the incidental use of computers, has aesthetic and
social characteristics worth analyzing cannot be dismissed.

This paper will historically organize the specifications and
construction of programming languages for computer music, with a focus
on their style-agnostic nature, by:
\begin{itemize}
\tightlist
\item
Examining the discourse framing MUSIC as the progenitor of computer
music.
\item
Investigating what aspects were excluded from user access in MUSIC-N
derivatives such as MUSIGOL.
\item
Analyzing the standardization of UGens (unit generators) and the
division of labor in Max and Pure Data.
\item
Reviewing music programming languages of the 2000s.
\end{itemize}

The conclusion will propose a framework necessary for future discussions
on music programming languages.
\section{The Birth of ``Computer Music'': MUSIC-N and PCM
Universality}\label{born-of-computer-music---music-n-and-pcm-universality}
Among the earliest examples of computer music research, the MUSIC I
system (1957) from Bell Labs and its derivatives, known as MUSIC-N, are
frequently highlighted. However, attempts to create music with computers
in the UK and Australia prior to MUSIC I have also been
documented\citep{doornbusch2017}.

Organizing what was achieved by MUSIC-N and earlier efforts can help
clarify definitions of computer music.

The earliest experiments with sound generation on computers in the
1950s involved varying the intervals between one-bit pulses (on or off)
to control pitch. This was partly because the operational clock frequencies
of early computers fell within the audible range, making the
sonification of electrical signals a practical and cost-effective
debugging method compared to visualizing them on displays or
oscilloscopes. Computers like Australia's CSIR Mark I even featured
primitive instructions like a ``hoot'' command to emit a single pulse to
a speaker.
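
In modern terms, a pulse emitted once every $T$ seconds is heard as a
pulse train with fundamental frequency
\[
f = \frac{1}{T},
\]
so that, for instance, a ``hoot'' issued every two milliseconds produces
a 500\,Hz tone.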

In the United States, Louis Wilson discovered that an AM radio placed
near the BINAC computer picked up the electromagnetic interference
generated by its vacuum-tube switching, producing regular tones. This
serendipitous discovery led to the intentional programming of pulse
intervals to generate melodies\citep{woltman1990}.

However, not all sound generation prior to PCM (Pulse Code Modulation)
was merely the reproduction of existing music. Doornbusch highlights
experiments on the British Pilot ACE, the prototype of the Automatic
Computing Engine (ACE), which utilized its acoustic delay line memory
to produce unique sounds\citep[p303-304]{doornbusch2017}. Acoustic
delay line memory, used as main memory in early computers such as BINAC
and CSIR Mark I, retained data by recirculating pulses through mercury,
converting them between electrical and acoustic form at each end of the
line. Donald Davies, an engineer on the ACE project, described the
sounds it produced as follows\citep[p19-20]{davis_very_1994}:
\begin{quote}
The Ace Pilot Model and its successor, the Ace proper, were both capable
of composing their own music and playing it on a little speaker built
into the control desk. I say composing because no human had any
intentional part in choosing the notes. The music was very interesting,
though atonal, and began by playing rising arpeggios: these gradually
became more complex and faster, like a developing fugue. They dissolved
into colored noise as the complexity went beyond human understanding.
Loops were always multiples of 32 microseconds long, so notes had
frequencies which were submultiples of 31.25 KHz. The music was based on
a very strange scale, which was nothing like equal tempered or harmonic,
but was quite pleasant.
\end{quote}

According to Davies, this music arose unintentionally during program
optimization and was made possible by ``misusing'' switches installed
for debugging the acoustic delay line memory (p20).
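
Davies's figures are mutually consistent: a program loop lasting a
multiple $n$ of the 32-microsecond word time repeats at
\[
f_n = \frac{1}{n \cdot 32\,\mu\mathrm{s}} = \frac{31.25\ \mathrm{kHz}}{n}
= 31.25,\ 15.625,\ 10.417,\ \ldots\ \mathrm{kHz},
\]
a subharmonic series descending from the memory clock that indeed
matches no equal-tempered division of the octave.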

Media scholar Shintaro Miyazaki described the practice of listening to sounds
generated by algorithms and their bit patterns, integrated into
programming and debugging, as ``Algo\emph{rhythmic}
Listening''\citep{miyazaki2012}.

Doornbusch warns against ignoring early computer music practices in
Australia and the UK simply because they did not directly influence
subsequent research\citep[p305]{doornbusch2017}. Indeed, the tendency to
treat pre-MUSIC attempts as hobbyist efforts by engineers and post-MUSIC
endeavors as serious research remains common even
today\citep{tanaka_all_2017}.

The sounds generated by the Pilot ACE challenge the post-acousmatic
narrative that computer music transitioned from laboratory-based
professional practice to personal use by amateurs, for two reasons:
first, the sounds were produced not by music specialists but by
engineers; second, they were tied to hardware-specific characteristics
of acoustic delay line memory, making them difficult to replicate even
in modern audio programming environments. Similarly, at MIT in the
1960s, Peter Samson used a debug speaker attached to the aging TX-0
computer to experiment with generating melodies from square
waves\citep{levy_hackers_2010}.
This effort evolved into a program that allowed users to describe
melodies with text strings. For instance, writing \texttt{4fs\ t8} would
produce an F4 note as an eighth note. Samson later adapted this work to
the PDP-1 computer, creating the ``Harmony Compiler,'' widely used by
MIT students. He also developed the Samson Box in the late 1970s, a
computer music system used at Stanford University's CCRMA for over a
decade\citep{loy_life_2013}. These examples suggest that the initial
purpose of debugging does not warrant segregating early computational
sound generation from the broader history of computer music.
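
To give a concrete sense of such textual melody notations, the
following is a minimal Python sketch of a parser for a
Harmony-Compiler-like format. Only the token shapes \texttt{4fs} and
\texttt{t8} are taken from the example above; the pitch-name table
(reading \texttt{fs} as F-sharp), the default duration, and the
handling of duration tokens are assumptions for illustration, not a
reconstruction of Samson's actual syntax.

\begin{verbatim}
# Hypothetical sketch of a Harmony-Compiler-like notation parser.
# Only the token shapes "4fs" and "t8" come from the documented
# example; everything else is assumed for illustration.

NOTE_OFFSETS = {"c": 0, "cs": 1, "d": 2, "ds": 3, "e": 4, "f": 5,
                "fs": 6, "g": 7, "gs": 8, "a": 9, "as": 10, "b": 11}

def parse(melody):
    """Turn a token string into (frequency_hz, beats) pairs."""
    events = []
    for token in melody.split():
        if token.startswith("t") and events:
            # Duration token, e.g. "t8": the preceding note becomes
            # an eighth note (0.5 quarter-note beats).
            freq, _ = events[-1]
            events[-1] = (freq, 4.0 / int(token[1:]))
        else:
            # Pitch token, e.g. "4fs": octave 4, note name "fs".
            octave, name = int(token[0]), token[1:]
            midi = 12 * (octave + 1) + NOTE_OFFSETS[name]
            freq = 440.0 * 2 ** ((midi - 69) / 12)
            events.append((freq, 1.0))  # quarter note by default
    return events

print(parse("4fs t8"))  # [(369.99..., 0.5)]
\end{verbatim}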
\subsection{Universality of PCM}\label{universality-of-pcm}
Let us examine \textbf{Pulse Code Modulation (PCM)}---a foundational
aspect of MUSIC's legacy and one of the key reasons it is considered a
milestone in the history of computer music. PCM enables the theoretical
representation of ``almost any sound'' on a computer by dividing audio
waveforms into discrete intervals (sampling) and expressing the
amplitude of each interval as quantized numerical values. It remains the
fundamental representation of sound on modern computers. The underlying
sampling theorem was introduced by Nyquist in 1928\citep{Nyquist1928},
and PCM itself was developed by Reeves in 1938.
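
Stated compactly, the theorem guarantees that a signal containing no
energy above $f_{\max}$ is completely determined by samples taken at a
rate $f_s > 2 f_{\max}$, from which it can be reconstructed exactly:
\[
x(t) = \sum_{n=-\infty}^{\infty} x[n]\,
\mathrm{sinc}\!\left(\frac{t - nT}{T}\right),
\qquad T = \frac{1}{f_s},
\]
where $\mathrm{sinc}(u) = \sin(\pi u)/(\pi u)$. It is this guarantee,
combined with quantization fine enough to sit below audible thresholds,
that licenses the claim that PCM can represent ``almost any sound.''
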
A critical issue with the ``post-acousmatic'' framework in computer
music history lies within the term ``acousmatic'' itself. Initially
proposed by Jérôme Peignot and later theorized by Schaeffer, the term describes
a mode of listening to tape music, such as musique concrète, in which
the listener does not imagine a specific sound source. It has been
widely applied in theories of recorded sound, including Chion's analyses
of sound design in visual media.

However, as sound studies scholar Jonathan Sterne has pointed out,
discourses surrounding acousmatic listening often work to delineate
pre-recording auditory experiences as ``natural'' by
contrast\footnote{Sterne later critiques the phenomenological basis of
acousmatic listening, which presupposes an idealized, intact body as
the listening subject. He proposes a methodology of political
phenomenology centered on impairment, challenging these normative
assumptions\citep{sterne_diminished_2022}. Discussions of universality
in computer music should also address ableism, as seen in the
relationship between recording technologies and auditory disabilities.}.
This implies that prior to the advent of recording technologies,
listening was unmediated and holistic---a narrative that obscures the
constructed nature of these assumptions.
\begin{quote}
For instance, the claim that sound reproduction has ``alienated'' the
voice from the human body implies that the voice and the body existed in
some prior holistic, unalienated, and self-present relation.
They assume that, at some time prior to the invention of sound
reproduction technologies, the body was whole, undamaged, and
phenomenologically coherent.\citep[p20-21]{sterne_audible_2003}
\end{quote}

The claim that PCM-based sound synthesis can produce ``almost any
sound'' is underpinned by an ideology associated with recording
technologies. This ideology assumes that recorded sound contains an
``original'' source and that listeners can distinguish distortions or
noise from it. Sampling theory builds on this premise by statistically
modeling human auditory characteristics: it assumes that humans cannot
discern volume differences below certain thresholds or perceive
vibrations outside specific frequency ranges. By limiting representation
to this range, sampling theory ensures that all audible sounds can be
effectively encoded.
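
The familiar figures for CD audio illustrate this engineering of
audibility: sampling at $f_s = 44.1$\,kHz places the Nyquist frequency
at $22.05$\,kHz, just above the nominal $20$\,kHz ceiling of human
hearing, while quantizing at $B = 16$ bits yields a
signal-to-quantization-noise ratio of roughly
\[
\mathrm{SNR} \approx 6.02\,B + 1.76\ \mathrm{dB} \approx 98\ \mathrm{dB},
\]
enough to span the usable dynamic range of the ear. The encoding is
thus complete only relative to a statistical model of the listener.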

Incidentally, the actual implementation of PCM in MUSIC I allowed only
monophonic triangle waves with controllable volume, pitch, and timing
(MUSIC II later expanded this to four oscillators)\citep{Mathews1980}.
Would anyone today describe such a system as capable of producing
``infinite variations'' in sound synthesis?

Even in more contemporary applications, processes such as ring
modulation (RM), amplitude modulation (AM), and distortion generate
aliasing artifacts unless proper oversampling is applied. These
artifacts occur because PCM, while universal as a medium for
reproducing recorded sound, is not inherently versatile as a medium
for generating new sounds. As Puckette has argued, alternative
representations, such as collections of linear segments or physical
modeling synthesis, present other
possibilities\citep{puckette2015}. PCM is, therefore, not a completely
universal tool for creating sound, as the worked example below makes
concrete.
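
Ring modulation of two sinusoids at $f_1$ and $f_2$ produces sum and
difference partials:
\[
\cos(2\pi f_1 t)\cos(2\pi f_2 t)
= \frac{1}{2}\cos\left(2\pi(f_1 + f_2)t\right)
+ \frac{1}{2}\cos\left(2\pi(f_1 - f_2)t\right).
\]
Whenever $f_1 + f_2$ exceeds the Nyquist frequency $f_s/2$, the sum
partial folds back to $f_s - (f_1 + f_2)$: at $f_s = 48$\,kHz, for
example, ring-modulating $15$\,kHz and $13$\,kHz tones yields a
$28$\,kHz partial that is heard as an inharmonic alias at $20$\,kHz, a
component with no counterpart in the continuous-time signal.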