proofing finished

content.tex
@@ -1,9 +1,7 @@
\section{Introduction}\label{introduction}

Programming languages and environments for music such as Max, Pure Data,
CSound, and SuperCollider have been referred to as ``Computer Music
Language''\citep{McCartney2002, Nishino2016, McPherson2020}, ``Language
for Computer Music''\citep{Dannenberg2018}, and ``Computer Music
Programming Systems''\citep{Lazzarini2013}, though there is no clear
@@ -13,13 +11,13 @@ intertwined with the history of technology-driven music, which developed
under the premise that ``almost any sound can be
produced''\citep{mathews_acoustic_1961} through the use of computers.

In the early days, when computers existed only in research laboratories
and neither displays nor mice existed, creating sound or music with
computers was inevitably equivalent to programming. Today, however,
programming as a means to produce sound on a computer---rather than
employing Digital Audio Workstation (DAW) software like Pro
Tools---is uncommon. In other words, programming languages for music
developed after the proliferation of personal computers are software
that intentionally chose programming (whether textual or graphical) as
their frontend for making sound.
@@ -32,7 +30,7 @@ pursuing new forms of musical expression. It seems that there is still
no unified perspective on how the value of such languages should be
evaluated.

In this paper, a critical historical review is conducted by drawing on
discussions from sound studies alongside existing surveys, aiming to
consider programming languages for music independently from computer
music as a specific genre.
@@ -40,72 +38,72 @@ music as the specific genre.
\subsection{Use of the Term ``Computer
Music''}\label{use-of-the-term-computer-music}

The term ``Computer Music,'' despite its literal and potentially broad
meaning, has been noted for being used within a narrowly defined
framework tied to specific styles or communities, as represented in
Ostertag's \emph{Why Computer Music Sucks}\citep{ostertag1998} since the
1990s.

As Lyon observed nearly two decades ago, it is now nearly impossible to
imagine a situation in which computers are not involved at any stage
from the production to the experience of music\citep[p.1]{lyon_we_2006}.
The necessity of using the term ``Computer Music'' to describe academic
contexts has consequently diminished.

Holbrook and Rudi extended Lyon's discussion by proposing the use of
frameworks like Post-Acousmatic\citep{adkins2016} to redefine ``Computer
Music.'' Their approach incorporates the tradition of pre-computer
experimental/electronic music, situating it as part of the broader
continuum of technology-based or technology-driven
music\citep{holbrook2022}.

While the strict definition of Post-Acousmatic music is deliberately
left open, one of its key aspects is the expansion of music production
from institutional settings to individuals, along with the
diversification of technological usage\citep[p.113]{adkins2016}.
However, while the Post-Acousmatic discourse integrates the historical
fact that declining computer costs and increasing accessibility beyond
laboratories have enabled diverse musical expressions, it still
marginalizes much of the music that is ``just using computers'' and
fails to provide insights into this divided landscape.

Lyon argues that the term ``computer music'' is a style-agnostic
definition, almost like ``piano music,'' implying that it ignores the
style and form of the music produced by the instrument.

However, one of the defining characteristics of computers as a medium
lies in their ability to treat musical styles themselves as subjects of
meta-manipulation through simulation and modeling. When creating
instruments with computers, or when using such instruments, sound
production involves programming---manipulating symbols embedded in a
particular musical culture. This recursive embedding of the language
and perception that constitute a musical culture into the resulting
music is a process that goes beyond what is possible with acoustic
instruments or analog electronic instruments. Magnusson refers to this
characteristic of digital instruments as ``Epistemic Tools'' and points
out that the computer serves to ``create a snapshot of musical theory,
freezing musical culture in time''\citep[p.173]{Magnusson2009} through
formalization.

Today, many people use computers for music production not because they
consciously leverage the uniqueness of the meta-medium, but simply
because there are no quicker or more convenient alternatives available.
Even so, within a musical culture where computers are used as a
reluctant choice, musicians are inevitably influenced by the underlying
infrastructures like software, protocols, and formats. As long as the
history of programming languages for music remains intertwined with the
history of computer music as it relates to specific genres or
communities, it becomes difficult to analyze music created with
computers as merely a passive means.

In this paper, the history of programming languages for music is
reexamined with an approach that, in contrast to Lyon, adopts a
radically style-agnostic perspective. Rather than focusing on what has
been created with these tools, the emphasis is placed on how these
tools themselves have been constructed. The paper centers on the
following two topics: 1. A critique of the universality of sound
representation using pulse-code modulation (PCM)---the foundational
concept underlying most of today's sound programming---by referencing
early attempts at sound generation using electronic computers. 2. An
examination of the MUSIC-N family, the origin of PCM-based sound
programming, to highlight that its design varies significantly across
systems from the perspective of
@@ -123,10 +121,10 @@ of music created with computers.
\section{PCM and Early Computer
Music}\label{pcm-and-early-computer-music}

MUSIC I (1957), developed at Bell Labs\citep{Mathews1980}, and the
succeeding MUSIC-N family are usually highlighted as the earliest
examples of computer music research. However, attempts to create music
with computers in the UK and Australia prior to MUSIC have also been
documented\citep{doornbusch2017}. Organizing what was achieved by
MUSIC-N and earlier efforts can help clarify definitions of computer
music.
@@ -137,31 +135,33 @@ control pitch. This was partly because the operational clock frequencies
of early computers fell within the audible range, making the
sonification of electrical signals a practical and cost-effective
debugging method compared to visualizing them on displays or
oscilloscopes.

For instance, Louis Wilson, an engineer on the BINAC in the US, noticed
that an AM radio placed near the computer could pick up weak
electromagnetic waves generated during the switching of vacuum tubes,
producing sounds. He leveraged this phenomenon by connecting a speaker
and a power amplifier to the computer's circuit to assist with
debugging. Frances Elizabeth Holberton took this a step further by
programming the computer to generate pulses at desired intervals,
creating melodies in 1949\citep{woltman1990}.

Also, some computers at this time, such as the CSIR Mark I (CSIRAC) in
Australia, often had primitive ``hoot'' instructions that emitted a
single pulse to a speaker. Early sound generation using computers,
including the BINAC and the CSIR Mark I, primarily involved playing
melodies of existing music.

However, not all sound generation at this time merely involved the
reproduction of existing music. Doornbusch highlights experiments on
the Pilot ACE (the prototype for the Automatic Computing Engine) in the
UK, which utilized acoustic delay line memory to produce unique
sounds\citep[pp.303-304]{doornbusch2017}. Acoustic delay line memory,
used as the main memory in early computers such as the BINAC and the
CSIR Mark I, employed the feedback of pulses traveling through mercury
via a speaker and microphone setup to retain data. Donald Davies, an
engineer on the ACE project, described the sounds it produced as
follows\citep[pp.19-20]{davis_very_1994}:

\begin{quote}
The Ace Pilot Model and its successor, the Ace proper, were both capable
@@ -174,68 +174,70 @@ into colored noise as the complexity went beyond human understanding.
\end{quote}

This music arose unintentionally during program optimization and was
made possible by the ``misuse'' of switches installed for debugging
delay line memory. Media scholar Miyazaki described the practice of
listening to sounds generated by algorithms and their bit patterns,
integrated into programming, as ``Algo-\emph{rhythmic}
Listening''\citep{miyazaki2012}.

Doornbusch warns against ignoring these early computer music practices
simply because they did not directly influence subsequent
research\citep[p.305]{doornbusch2017}. Indeed, the sounds produced by
the Pilot ACE challenge the post-acousmatic historical narrative, which
suggests that computer music was democratized as it moved out of closed
electro-acoustic music laboratories to individual musicians.

This is because the sounds generated by the Pilot ACE were not created
by musical experts, nor were they solely intended for debugging
purposes. Instead, they were programmed with the goal of producing
interesting sounds. Moreover, these sounds were tied to the hardware of
the acoustic delay line memory---a feature that was likely difficult to
replicate, even in today's sound programming environments.

Similarly, in the 1960s at MIT, Peter Samson took advantage of the
debugging speaker on the TX-0, a machine that had become outdated and
was freely available for students to use. He conducted experiments in
which he played melodies, such as Bach fugues, using the ``hoot''
instruction\citep{levy_hackers_2010}. Samson's experiments with the
TX-0 later evolved into a program, used within MIT, that allowed
melodies to be described as text.

Building on this, Samson developed a program called the Harmony Compiler
for the DEC PDP-1, which was derived from the TX-0. This program gained
significant popularity among MIT students. Around 1972, Samson began
surveying various digital synthesizers that were under development at
the time and went on to create a system specialized for computer music.
The resulting Samson Box was used at Stanford University's CCRMA (Center
for Computer Research in Music and Acoustics) for over a decade until
the early 1990s and became a tool for many composers to create their
works\citep{loy_life_2013}. Considering his example, it is not
appropriate to separate the early experiments in sound generation by
computers from the history of computer music solely because their
initial purpose was debugging.

\subsection{Acousmatic Listening, the Premise of the Universality of
PCM}\label{acousmatic-listening-the-premise-of-the-universality-of-pcm}

One of the reasons why MUSIC led to subsequent advancements in research
was not simply that it was developed early, but that it was the first
to implement sound representation on a computer based on
\textbf{pulse-code modulation (PCM)}, which theoretically can generate
``almost any sound''\citep[p.557]{mathews1963}.

PCM, the foundational digital sound representation today, involves
sampling audio waveforms at discrete intervals and quantizing the sound
pressure at each interval as discrete numerical values.
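
As a minimal illustration of the scheme described above (not from the paper; the function name, sample rate, and bit depth are illustrative), the following Python sketch samples a sine wave at discrete time steps and quantizes each sample to one of 256 integer levels, i.e. 8-bit linear PCM:

```python
import math

def pcm_encode(freq_hz, sample_rate=8000, bits=8, duration_s=0.001):
    """Sample a sine wave at discrete intervals and quantize each
    sample's amplitude to a discrete integer level (linear PCM)."""
    levels = 2 ** bits                      # 256 levels for 8-bit
    n_samples = int(sample_rate * duration_s)
    samples = []
    for n in range(n_samples):
        t = n / sample_rate                           # sampling: discrete time
        x = math.sin(2 * math.pi * freq_hz * t)       # continuous amplitude in [-1, 1]
        q = round((x + 1.0) / 2.0 * (levels - 1))     # quantization: integer 0..255
        samples.append(q)
    return samples

pcm = pcm_encode(440.0)
print(len(pcm), min(pcm), max(pcm))
```

Both steps discard information: sampling limits the representable bandwidth, and quantization limits the representable dynamic range, which is exactly where the critique of PCM's claimed universality takes hold.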

The issue with the universalism of PCM in the history of computer music
is inherent in the concept of Acousmatic Listening, which serves as a
premise for Post-Acousmatic. Acousmatic, introduced by Peignot as a
listening style for tape music such as musique concrète and later
theorized by Schaeffer\citep[p.106]{adkins2016}, refers to a mode of
listening in which the listener refrains from imagining a specific
sound source. This concept has been widely applied in theories of
listening to recorded sound, including Michel Chion's analysis of sound
design in film.

However, as sound studies scholar Jonathan Sterne has pointed out,
discourses surrounding acousmatic listening often work to delineate
@@ -244,12 +246,12 @@ contrast\footnote{Sterne later critiques the phenomenological basis of
acousmatic listening, which presupposes an idealized, intact body as
the listening subject. He proposes a methodology of political
phenomenology centered on impairment, challenging these normative
assumptions\citep{sterne_diminished_2022}. Discussions of universality
in computer music should also address ableism, particularly in relation
to recording technologies and auditory disabilities.}. This implies
that prior to the advent of sound reproduction technologies, listening
was unmediated and holistic---a narrative that obscures the constructed
nature of these assumptions.

\begin{quote}
For instance, the claim that sound reproduction has ``alienated'' the
@@ -263,17 +265,17 @@ sound'' is underpinned by an ideology associated with sound reproduction
technologies. This ideology assumes that recorded sound contains an
``original'' source and that listeners can distinguish distortions or
noise from it. Sampling theory builds on this premise through Shannon's
information theory by statistically modeling human auditory
characteristics: it assumes that humans cannot discern volume
differences below certain thresholds or perceive vibrations outside
specific frequency ranges. By limiting representation to this range,
sampling theory ensures that all audible sounds can be effectively
encoded.

Incidentally, the actual implementation of PCM in MUSIC I only allowed
for monophonic triangle waves with controllable volume, pitch, and
timing\citep{Mathews1980}. Would anyone today describe such a system as
capable of producing ``almost any sound''?

Even when considering more contemporary applications, processes like
ring modulation (RM), amplitude modulation (AM), or distortion often
@@ -281,32 +283,33 @@ generate aliasing artifacts unless proper oversampling is applied. These
artifacts occur because PCM, while universally suitable for reproducing
recorded sound, is not inherently versatile as a medium for generating
new sounds. As Puckette has argued, alternative representations, such as
collections of linear segments or physical modeling synthesis, offer
other possibilities\citep{puckette2015}. Therefore, PCM is not a
completely universal tool for creating sound.
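
The aliasing mechanism mentioned above can be made concrete with a small arithmetic sketch (illustrative, not from the paper): ring modulation of two sinusoids produces components at the sum and difference of their frequencies, and any component above the Nyquist frequency (half the sample rate) folds back into the audible band.

```python
def fold_to_nyquist(freq_hz, sample_rate=44100.0):
    """Return the frequency at which a component is heard after sampling:
    components above the Nyquist limit (sample_rate / 2) alias back."""
    f = freq_hz % sample_rate        # the sampled spectrum is periodic in frequency
    if f > sample_rate / 2:
        f = sample_rate - f          # mirror around the Nyquist frequency
    return f

# Ring modulation of carriers a and b yields components at |a - b| and a + b.
a, b = 15000.0, 9000.0
components = [abs(a - b), a + b]     # 6000 Hz and 24000 Hz
heard = [fold_to_nyquist(f) for f in components]
print(heard)  # [6000.0, 20100.0] -- the 24 kHz partial aliases to 20.1 kHz
```

Oversampling raises the Nyquist limit during processing so that such sum components (and distortion harmonics) can be filtered out before returning to the original rate.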

\section{What Does the Unit Generator
Hide?}\label{what-does-the-unit-generator-hide}

Beginning with version III, MUSIC took the form of an acoustic compiler
(block diagram compiler) that processes two input sources: a score
language, which represents a list of time-varying parameters, and an
orchestra language, which describes the connections between \textbf{Unit
Generators} such as oscillators and filters. In this paper, the term
``Unit Generator'' refers to a signal processing module whose
implementation is either not open or written in a language different
from the one used by the user.
|
|
||||||
The MUSIC family, in the context of computer music research, achieved
success in performing sound synthesis based on PCM, but this success
came with the establishment of a division of labor between professional
musicians and computer engineers through the development of
domain-specific languages. Mathews explained that he developed a
compiler for MUSIC III in response to requests from many composers for
additional features in MUSIC II, such as envelopes and vibrato, while
also ensuring that the program would not be restricted to a specialized
form of musical expression (Max V. Mathews 2007, 13:10-17:50). He
repeatedly stated that his role was that of a scientist rather than a
musician:

\begin{quote}
When we first made these music programs the original users were not
@@ -319,26 +322,26 @@ willing to experiment.\citep[p17]{Mathews1980}

This clear delineation of roles between musicians and scientists became
one of the defining characteristics of post-MUSIC computer music
research. Paradoxically, while computer music research aimed to create
sounds never heard before, it also paved the way for further research by
allowing musicians to focus on composition without having to understand
the cumbersome work of programming.

\subsection{Example: Hiding Internal State Variables in Signal
Processing}\label{example-hiding-internal-state-variables-in-signal-processing}

Although the MUSIC N series shares a common workflow of using a score
language and an orchestra language, the actual implementation of each
programming language varies significantly, even within the series.

One notable but often overlooked example is MUSIGOL, a derivative of
MUSIC IV \citep{innis_sound_1968}. In MUSIGOL, not only the system
itself but even the score and orchestra defined by the user were written
entirely in ALGOL 60. Similar to today's Processing or Arduino, MUSIGOL
is one of the earliest examples of a programming language for music
implemented as an internal DSL (a DSL as a library)\footnote{While
MUS10, used at Stanford University, was not an internal DSL, it was
created by modifying an existing ALGOL parser \citep[p.248]{loy1985}.}.
(Therefore, according to the definition of Unit Generator provided in
this paper, MUSIGOL does not qualify as a language that uses Unit
Generators.)
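
MUSIGOL's approach can be sketched in a modern host language (a
hypothetical Python illustration; \lstinline!osc!, \lstinline!render!, and
the score format are inventions for this sketch, not MUSIGOL's API): both
the score and the instrument are ordinary host-language constructs, so no
separate parser is needed.

```python
import math

def osc(freq, sample_rate=8000):
    # "Orchestra" part: an instrument is just a host-language function.
    def sample(n):
        return math.sin(2 * math.pi * freq * n / sample_rate)
    return sample

# "Score" part: plain host-language data, as (start sample, frequency).
score = [(0, 440.0), (4000, 660.0)]

def render(score, length):
    out = [0.0] * length
    for start, freq in score:
        inst = osc(freq)
        for n in range(start, length):
            out[n] += inst(n - start)
    return out

buf = render(score, 8000)
```
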
@@ -352,14 +355,9 @@ time steps prior \(O_{n-t}\), and an arbitrary amplitude parameter

\[O_n = I_1 \cdot S_n + I_2 \cdot O_{n-1} - I_3 \cdot O_{n-2}\]

In MUSIC V, this band-pass filter can be used as shown in Listing
\ref{lst:musicv} \citep[p.78]{mathews_technology_1969}. Here,
\passthrough{\lstinline!I1!} represents the input bus, and
\passthrough{\lstinline!O!} is the output bus. The parameters
\passthrough{\lstinline!I2!} and \passthrough{\lstinline!I3!} correspond
to the normalized values of the coefficients \(I_2\) and \(I_3\),
@@ -371,32 +369,35 @@ from the Score, specifically among the available
case, however, these parameters are repurposed as general-purpose memory
to temporarily store feedback signals. Similarly, other Unit Generators,
such as oscillators, reuse note parameters to handle operations like
phase accumulation. As a result, users needed to manually calculate
feedback gains based on the desired frequency
characteristics\footnote{It is said that a preprocessing feature called
\passthrough{\lstinline!CONVT!} could be used to transform frequency
characteristics into coefficients
\citep[p.77]{mathews_technology_1969}.}, and they also had to account
for at least two sample memory spaces.

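The bookkeeping described above can be sketched as follows (an
illustrative Python reconstruction, not actual MUSIC V code): the two
delayed output samples, which MUSIC V kept in repurposed note parameters,
must be threaded through every call by the user.

```python
def flt(s_n, i1, i2, i3, o_prev1, o_prev2):
    """One step of O_n = I1*S_n + I2*O_{n-1} - I3*O_{n-2}.
    The caller owns the two-sample memory, as with Pi and Pj in MUSIC V."""
    o_n = i1 * s_n + i2 * o_prev1 - i3 * o_prev2
    return o_n, o_n, o_prev1  # output, plus updated state threaded by hand

o1 = o2 = 0.0  # the two "note parameter" memory slots, managed by the user
out = []
for s in [1.0, 0.0, 0.0, 0.0]:  # impulse input
    y, o1, o2 = flt(s, 0.5, 0.2, 0.1, o1, o2)
    out.append(y)
```
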
On the other hand, in the later MUSIC 11 and its successor CSound by
Barry Vercoe, the band-pass filter is defined as a Unit Generator (UGen)
named \passthrough{\lstinline!reson!}. This UGen takes four parameters:
the input signal, center frequency, bandwidth, and Q
factor\citep[p.248]{vercoe_computer_1983}. Unlike previous
implementations, users no longer need to calculate coefficients
manually, nor do they need to be aware of the two-sample memory space.
However, in MUSIC 11 and CSound, it is possible to implement this
band-pass filter from scratch as a User-Defined Opcode (UDO) as shown in
Listing \ref{lst:reson}. Vercoe emphasized that while signal processing
primitives should allow for low-level operations, such as single-sample
feedback, and eliminate black boxes, it is equally important to provide
high-level modules that avoid unnecessary complexity (``avoid the
clutter'') when users do not need to understand the internal details
\citep[p.247]{vercoe_computer_1983}.

\begin{lstlisting}[caption={Example of the use of FLT UGen in MUSIC V.}, label=lst:musicv]
FLT I1 O I2 I3 Pi Pj;
\end{lstlisting}

\begin{lstlisting}[caption={Example of scratch implementation and built-in operation of RESON UGen respectively, in MUSIC 11. Retrieved from the original paper. (Comments are omitted due to space restrictions.)}, label=lst:reson]
instr 1
la1 init 0
la2 init 0
@@ -427,27 +428,27 @@ general-purpose languages like C or C++\footnote{ChucK later introduced
However, not all existing UGens are replaced by UDOs by default in
either CSound or ChucK; UDOs remain supplemental features, possibly
because their runtime performance is inferior to that of natively
implemented UGens.}. If users wish to define low-level UGens (called
external objects in Max and Pd), they need to set up a development
environment for C or C++.

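The difference that \passthrough{\lstinline!reson!} makes can be
illustrated with an encapsulation sketch (hypothetical Python; this is an
analogy rather than CSound's actual implementation, and the coefficient
formulas are a common textbook two-pole resonator derivation, not
necessarily CSound's exact ones): both the coefficient calculation and the
two-sample memory are hidden behind the object.

```python
import math

class Reson:
    """Illustrative analogue of a built-in band-pass UGen: coefficient
    derivation and the two-sample feedback memory are hidden from the user."""
    def __init__(self, center_hz, bandwidth_hz, sample_rate=8000):
        # Textbook two-pole resonator coefficients (assumed, not CSound's).
        r = math.exp(-math.pi * bandwidth_hz / sample_rate)
        self.i2 = 2.0 * r * math.cos(2.0 * math.pi * center_hz / sample_rate)
        self.i3 = r * r
        self.i1 = 1.0 - r          # rough amplitude normalization
        self._o1 = self._o2 = 0.0  # hidden two-sample memory

    def tick(self, s_n):
        # O_n = I1*S_n + I2*O_{n-1} - I3*O_{n-2}
        o_n = self.i1 * s_n + self.i2 * self._o1 - self.i3 * self._o2
        self._o1, self._o2 = o_n, self._o1
        return o_n

bp = Reson(center_hz=440.0, bandwidth_hz=100.0)
response = [bp.tick(1.0 if n == 0 else 0.0) for n in range(64)]
```
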
When UGens are implemented in low-level languages like C, even if the
implementation is open-source, the division of knowledge effectively
forces users (composers) to treat UGens as black boxes. This reliance on
UGens as black boxes reflects and deepens the division of labor between
musicians and scientists that was established in MUSIC, though it can be
interpreted as both a cause and a result.

For example, Puckette, the developer of Max and Pure Data, noted that
the division of labor at IRCAM between Researchers, Musical
Assistants (Realizers), and Composers has parallels in the current Max
ecosystem, where roles are divided among Max developers themselves,
developers of external objects, and Max users \citep{puckette_47_2020}.
As described in the ethnography of 1980s IRCAM by anthropologist
Georgina Born, the division of labor between fundamental research
scientists and composers at IRCAM was extremely clear. This structure
was also tied to the exclusion of popular music and its associated
technologies from IRCAM's research focus \citep{Born1995}.

However, such divisions are not necessarily the result of differences in
values along the axes analyzed by Born, such as
@@ -467,13 +468,13 @@ Lacking adequate knowledge of the technical system, musicians
increasingly found themselves drawn to prefabricated programs as a
source of new sound material. (\ldots) it also suggests a
reconceptualization on the part of the industry of the musician as a
particular type of consumer. \citep[p.89]{theberge_any_1997}
\end{quote}

This argument can be extended beyond electronic music to encompass
computer-based music in general. For example, media researcher Lori
Emerson noted that while the proliferation of personal computers began
with the vision of a ``metamedium''---tools that users could modify
themselves, as exemplified by Xerox PARC's Dynabook---the vision was
ultimately realized in an incomplete form through devices like the
Macintosh and iPad, which distanced users from programming by
@@ -484,16 +485,16 @@ extensibility through programming renders it merely a device for media
consumption \citep{kay2019}.

Although programming environments as tools for music production are not
widely used, the UGen concept serves as a premise for today's popular
music production software and infrastructure, such as audio plugin
formats for DAW software and WebAudio. It is known that the concept of
Unit Generators emerged either simultaneously with or even slightly
before modular synthesizers \citep[p.20]{park_interview_2009}. However,
UGen-based languages have actively incorporated metaphors from modular
synthesizers for their user interfaces; Vercoe noted that the
distinction between ``ar'' (audio-rate) and ``kr'' (control-rate)
processing introduced in MUSIC 11 was inspired by Buchla's distinction
in plug types \citep[1:01:38--1:04:04]{vercoe_barry_2012}.

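The ar/kr distinction can be sketched as block-based processing
(illustrative Python; the block size and function names are assumptions
for this sketch, not MUSIC 11's actual scheme): control-rate signals are
evaluated once per block, audio-rate signals once per sample.

```python
BLOCK = 16  # samples per control period: the "kr" interval (illustrative)

def process(num_blocks, k_env, a_osc):
    """k_env runs at control rate (once per block); a_osc at audio rate."""
    out = []
    for b in range(num_blocks):
        k = k_env(b)                  # "kr": evaluated once per block
        for i in range(BLOCK):
            n = b * BLOCK + i
            out.append(k * a_osc(n))  # "ar": evaluated every sample
    return out

# A stepped envelope applied to a constant-amplitude signal:
out = process(4, k_env=lambda b: b / 4.0, a_osc=lambda n: 1.0)
```
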
However, adopting visual metaphors comes with the limitation that it
@@ -501,21 +502,22 @@ constrains the complexity of representation to what is visually
conceivable. In languages with visual patching interfaces like Max and
Pure Data, meta-operations on UGens are often restricted to simple
tasks, such as parallel duplication. Consequently, even users of Max or
Pure Data may not necessarily be engaging in forms of expression that
are only possible with computers. Instead, many might simply be using
these tools as the most convenient software equivalents of modular
synthesizers.

\section{Context of Programming Languages for Music After
2000}\label{context-of-programming-languages-for-music-after-2000}

Based on the discussions thus far, music programming languages developed
after the 2000s can be categorized into two distinct directions: those
that narrow the scope of the language's role by introducing alternative
abstractions at a higher level, distinct from the UGen paradigm, and
those that expand the general-purpose capabilities of the language,
reducing black-boxing.

Languages that pursued alternative higher-level abstractions have
evolved alongside the culture of live coding, where performances are
conducted by rewriting code in real time. The activities of the live
coding community, including groups such as TOPLAP since the 2000s, were
@@ -526,7 +528,7 @@ states, ``Obscurantism is dangerous''
\citep{toplap_manifestodraft_2004}.

Languages implemented as clients for SuperCollider, such as \textbf{IXI}
(on Ruby) \citep{Magnusson2011}, \textbf{Sonic Pi} (on Ruby),
\textbf{Overtone} (on Clojure) \citep{Aaron2013}, \textbf{TidalCycles}
(on Haskell) \citep{McLean2014}, and \textbf{FoxDot} (on Python)
\citep{kirkbride2016foxdot}, leverage the expressive power of more
@@ -538,31 +540,28 @@ can also be applied to visual patterns and other outputs, meaning it is
not inherently tied to PCM-based waveform output as the final result.

On the other hand, due to their high-level design, these languages often
rely on ad hoc implementations for tasks like sound manipulation and
low-level signal processing, such as effects. McCartney, the developer
of SuperCollider, stated that if general-purpose programming languages
were sufficiently expressive, there would be no need to create
specialized languages \citep{McCartney2002}. This prediction appears
reasonable when considering examples like MUSIGOL. However, in practice,
scripting languages that excel in dynamic program modification face
challenges in modern preemptive OS environments. For instance, dynamic
memory management techniques such as garbage collection can hinder the
deterministic execution timing required for real-time processing
\citep{Dannenberg2005}.

Historically, programming languages like FORTRAN or C served as a
portable way of implementing programs across different architectures.
However, with the proliferation of higher-level languages, programming
in C or C++ has become relatively more difficult, akin to programming in
assembly language in earlier times. Furthermore, considering the
challenges of portability not only across different CPUs but also across
diverse host environments such as OSs and the Web, these languages are
no longer as portable as they once were. Consequently, internal DSLs for
music, including signal processing, have become exceedingly rare, with
only a few examples such as LuaAV \citep{wakefield2010}.

Instead, an approach has emerged to create general-purpose languages
specifically designed for use in music from the ground up. One prominent
@@ -576,14 +575,14 @@ processing code, including sound manipulation, into machine code for
high-speed execution.

The expressive power of general-purpose languages and compiler
infrastructures like LLVM has given rise to an approach focused on
designing languages with mathematical formalization that reduces
black-boxing. \textbf{Faust} \citep{Orlarey2009}, for instance, is a
language that retains a graph-based structure akin to UGens but is built
on a formal system called Block Diagram Algebra. Thanks to its
formalization, Faust can be transpiled into various low-level languages
such as C, C++, or Rust and can also be used as external objects in Max
or Pure Data.

Languages like \textbf{Kronos} \citep{norilo2015} and \textbf{mimium}
\citep{matsuura_mimium_2021}, which are based on the more general
@@ -599,27 +598,27 @@ certain degree of expressive freedom through coding. In this context,
efforts like Extempore, Kronos, and mimium are not merely programming
languages for music but are also situated within the broader research
context of functional reactive programming (FRP), which focuses on
representing time-varying values in computation. Most computing models
lack an inherent concept of real time and instead operate based on
discrete computational steps. Similarly, low-level general-purpose
programming languages do not natively include primitives for real-time
concepts. Consequently, the exploration of computational models tied to
time---a domain inseparable from music---remains vital and has the
potential to contribute to the theoretical foundations of
general-purpose programming languages.

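The FRP notion of a time-varying value can be sketched minimally
(illustrative Python; none of these combinators are taken from Extempore,
Kronos, or mimium): a signal is modeled as a function from time to value,
and new signals are built by combining existing ones.

```python
import math

def constant(v):
    # A signal/behavior is modeled as a function from time (seconds) to value.
    return lambda t: v

def lift2(f, sig_a, sig_b):
    """Combine two time-varying values pointwise."""
    return lambda t: f(sig_a(t), sig_b(t))

def integral(sig, dt=0.001):
    """Naive running integral, e.g. turning a frequency signal into phase."""
    def at(t):
        steps = int(t / dt)
        return sum(sig(i * dt) for i in range(steps)) * dt
    return at

freq = constant(2.0)       # Hz; time-varying in general
phase = integral(freq)     # grows linearly for a constant frequency
sine = lambda t: math.sin(2 * math.pi * phase(t))
```
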
However, strongly formalized languages come with another trade-off.
While they allow UGens to be defined without black-boxing, understanding
the design and implementation of these languages often requires expert
knowledge. This can create a deeper division between language developers
and users, in contrast to the many small and shallow divisions seen in
the multi-language paradigm, such as those among SuperCollider
developers, external UGen developers, client language developers (e.g.,
TidalCycles), SuperCollider users, and client language users.

Although there is no clear solution to this trade-off, one intriguing
idea is the development of self-hosting languages for music---that is,
languages whose compilers are written in the language itself. At first
glance, this may seem impractical. However, by enabling users to learn
and modify the language's mechanisms spontaneously, this approach could
create an environment that fosters deeper engagement and
@@ -633,7 +632,7 @@ black-boxing tendencies of the Unit Generator paradigm. Historically, it
was expected that the clear division of roles between engineers and
composers would enable the creation of new forms of expression using
computers. Indeed, from the perspective of Post-Acousmatic discourse,
some, such as Holbrook and Rudi, still consider this division to be a
positive development:

\begin{quote}
@@ -644,11 +643,11 @@ longer necessarily need mathematical and programming skills to use the
technologies.\citep[p.2]{holbrook2022}
\end{quote}

However, this division of labor also creates a shared vocabulary
|
However, this division of labor also creates a shared vocabulary (as
|
||||||
(exactly seen in the Unit Generator by Mathews) and works to perpetuate
|
exemplified in the Unit Generator by Mathews) and serves to perpetuate
|
||||||
it. By portraying new technologies as something externally introduced,
|
it. By portraying new technologies as something externally introduced,
|
||||||
and by focusing on the agency of those who create music with computers,
|
and by focusing on the agency of those who create music with computers,
|
||||||
the individuals responsible for building the programming environments,
|
the individuals responsible for building programming environments,
|
||||||
software, protocols, and formats are rendered invisible
|
software, protocols, and formats are rendered invisible
|
||||||
\citep{sterne_there_2014}. This leads to an oversight of the indirect
|
\citep{sterne_there_2014}. This leads to an oversight of the indirect
|
||||||
power relationships produced by these infrastructures.
|
power relationships produced by these infrastructures.
|
||||||
@@ -659,10 +658,10 @@ aesthetic value within musical culture (and what forms of musical
|
|||||||
practice they enable), as well as the social (im)balances of power they
|
practice they enable), as well as the social (im)balances of power they
|
||||||
produce.
|
produce.
|
||||||
|
|
||||||
The academic value of the research of programming languages for music is
|
The academic value of the research on programming languages for music is
|
||||||
often vaguely claimed, like the word of ``general'', ``expressive'', and
|
often vaguely asserted, using terms such as ``general'', ``expressive'',
|
||||||
``efficient'' but it is difficult to argue these claims when the
|
and ``efficient''. However, it is difficult to substantiate these claims when
|
||||||
processing speed is no more the primary issue. Thus, as like
|
processing speed is no longer the primary concern. Thus, as with
|
||||||
Idiomaticity \citep{McPherson2020} by McPherson et al., we need to
|
Idiomaticity \citep{McPherson2020} by McPherson et al., we need to
|
||||||
develop and share a vocabulary for understanding the value judgments we
|
develop and share a vocabulary for understanding the value judgments we
|
||||||
make about music languages.
|
make about music languages.
|
||||||
@@ -670,15 +669,15 @@ make about music languages.
|
|||||||
In a broader sense, the development of programming languages for music
|
In a broader sense, the development of programming languages for music
|
||||||
has also expanded to the individual level. Examples include
|
has also expanded to the individual level. Examples include
|
||||||
\textbf{Gwion} by Astor, which is inspired by ChucK and enhances its
|
\textbf{Gwion} by Astor, which is inspired by ChucK and enhances its
|
||||||
abstraction capabilities like lambda functions \citep{astor_gwion_2017};
|
abstraction capabilities with features such as lambda functions \citep{astor_gwion_2017};
|
||||||
\textbf{Vult}, a DSP transpiler language created by Ruiz for his modular
|
\textbf{Vult}, a DSP transpiler language created by Ruiz for his modular
|
||||||
synthesizer hardware \citep{Ruiz2020}; and a UGen-based live coding
|
synthesizer hardware \citep{Ruiz2020}; and a UGen-based live coding
|
||||||
environment designed for web environment, \textbf{Glicol}
|
environment designed for the web, \textbf{Glicol} \citep{lan_glicol_2020}.
|
||||||
\citep{lan_glicol_2020}. However, these efforts have not yet been
|
However, these efforts have not yet been incorporated into academic
|
||||||
included into academic discourse.
|
discourse.
|
||||||
|
|
||||||
Conversely, practical knowledge of past languages in 50-60s as well as
|
Conversely, practical knowledge of past languages from the 1950s and 1960s, as well as
|
||||||
real-time hardware-oriented systems from the 80s, is gradually being
|
real-time hardware-oriented systems from the 1980s, is gradually being
|
||||||
lost. While research efforts such as \emph{Inside Computer Music}, which
|
lost. While research efforts such as \emph{Inside Computer Music}, which
|
||||||
analyzes historical works of computer music, have begun
|
analyzes historical works of computer music, have begun
|
||||||
\citep{clarke_inside_2020}, an archaeological practice focused on the
|
\citep{clarke_inside_2020}, an archaeological practice focused on the
|
||||||
|
|||||||
13
main.bib
@@ -218,10 +218,8 @@
|
|||||||
editor = {Torre, Giuseppe},
|
editor = {Torre, Giuseppe},
|
||||||
year = {2022},
|
year = {2022},
|
||||||
month = jul,
|
month = jul,
|
||||||
series = {International {{Computer Music Conference}}, {{ICMC Proceedings}}},
|
|
||||||
pages = {140--144},
|
pages = {140--144},
|
||||||
publisher = {International Computer Music Association},
|
publisher = {International Computer Music Association},
|
||||||
address = {San Francisco},
|
|
||||||
urldate = {2024-12-11},
|
urldate = {2024-12-11},
|
||||||
abstract = {This short paper considers the practices of computer music through a perspective of the post-acousmatic. As the majority of music is now made using computers, the question emerges: How relevant are the topics, methods, and conventions from the ``historical'' genre of computer music? Originally an academic genre confined to large mainframes, computer music's tools and conventions have proliferated and spread to all areas of music-making. As a genre steeped in technological traditions, computer music is often primarily concerned with the technologies of its own making, and in this sense isolated from the social conditions of musical practice. The post-acousmatic is offered as a methodological perspective to understand technology based music, its histories, and entanglements.},
|
abstract = {This short paper considers the practices of computer music through a perspective of the post-acousmatic. As the majority of music is now made using computers, the question emerges: How relevant are the topics, methods, and conventions from the ``historical'' genre of computer music? Originally an academic genre confined to large mainframes, computer music's tools and conventions have proliferated and spread to all areas of music-making. As a genre steeped in technological traditions, computer music is often primarily concerned with the technologies of its own making, and in this sense isolated from the social conditions of musical practice. The post-acousmatic is offered as a methodological perspective to understand technology based music, its histories, and entanglements.},
|
||||||
keywords = {Computer music,Post-Acousmatic Practice},
|
keywords = {Computer music,Post-Acousmatic Practice},
|
||||||
@@ -349,11 +347,12 @@
|
|||||||
file = {/Users/tomoya/Zotero/storage/N4NELPL9/Loy and Abbott - 1985 - Programming languages for computer music synthesis.pdf}
|
file = {/Users/tomoya/Zotero/storage/N4NELPL9/Loy and Abbott - 1985 - Programming languages for computer music synthesis.pdf}
|
||||||
}
|
}
|
||||||
|
|
||||||
@inproceedings{lyon_we_2006,
|
@misc{lyon_we_2006,
|
||||||
title = {Do {{We Still Need Computer Music}}?},
|
title = {Do {{We Still Need Computer Music}}?},
|
||||||
booktitle = {{{EMS}}},
|
|
||||||
author = {Lyon, Eric},
|
author = {Lyon, Eric},
|
||||||
year = {2006},
|
year = {2006},
|
||||||
|
note = {Talk given at EMS 2006, Beijing},
|
||||||
|
url = {https://disis.music.vt.edu/eric/LyonPapers/Do_We_Still_Need_Computer_Music.pdf},
|
||||||
urldate = {2025-01-17},
|
urldate = {2025-01-17},
|
||||||
file = {/Users/tomoya/Zotero/storage/SK2DXEE8/Do_We_Still_Need_Computer_Music.pdf}
|
file = {/Users/tomoya/Zotero/storage/SK2DXEE8/Do_We_Still_Need_Computer_Music.pdf}
|
||||||
}
|
}
|
||||||
@@ -682,7 +681,7 @@
|
|||||||
author = {Ostertag, Bob},
|
author = {Ostertag, Bob},
|
||||||
year = {1998},
|
year = {1998},
|
||||||
urldate = {2025-01-17},
|
urldate = {2025-01-17},
|
||||||
howpublished = {https://web.archive.org/web/20160312125123/http://bobostertag.com/writings-articles-computer-music-sucks.htm},
|
howpublished = {\url{https://web.archive.org/web/20160312125123/http://bobostertag.com/writings-articles-computer-music-sucks.htm}},
|
||||||
file = {/Users/tomoya/Zotero/storage/9QAGQSVS/writings-articles-computer-music-sucks.html}
|
file = {/Users/tomoya/Zotero/storage/9QAGQSVS/writings-articles-computer-music-sucks.html}
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -752,8 +751,8 @@
|
|||||||
title = {Vult {{Language}}},
|
title = {Vult {{Language}}},
|
||||||
author = {Ruiz, Leonardo Laguna},
|
author = {Ruiz, Leonardo Laguna},
|
||||||
year = {2020},
|
year = {2020},
|
||||||
url = {http://modlfo.github.io/vult/},
|
urldate = {2025-01-29},
|
||||||
urldate = {2024-11-27}
|
howpublished = {\url{http://modlfo.github.io/vult/}}
|
||||||
}
|
}
|
||||||
|
|
||||||
@inproceedings{Salazar2012,
|
@inproceedings{Salazar2012,
|
||||||
|
|||||||
70
main.md
@@ -86,35 +86,31 @@ The MUSIC family, in the context of computer music research, achieved success fo
|
|||||||
|
|
||||||
> When we first made these music programs the original users were not composers; they were the psychologist Guttman, John Pierce, and myself, who are fundamentally scientists. We wanted to have musicians try the system to see if they could learn the language and express themselves with it. So we looked for adventurous musicians and composers who were willing to experiment.[@Mathews1980, p17]
|
> When we first made these music programs the original users were not composers; they were the psychologist Guttman, John Pierce, and myself, who are fundamentally scientists. We wanted to have musicians try the system to see if they could learn the language and express themselves with it. So we looked for adventurous musicians and composers who were willing to experiment.[@Mathews1980, p17]
|
||||||
|
|
||||||
This clear delineation of roles between musicians and scientists became one of the defining characteristics of post-MUSIC computer music research. Paradoxically, while computer music research aimed to create sounds never heard before, it also paved the way for further research by allowing musicians to focus on compositionwithout having to understand the cumbersome work of programming.
|
This clear delineation of roles between musicians and scientists became one of the defining characteristics of post-MUSIC computer music research. Paradoxically, while computer music research aimed to create sounds never heard before, it also paved the way for further research by allowing musicians to focus on composition without having to understand the cumbersome work of programming.
|
||||||
|
|
||||||
### Example: Hiding First-Order Variables in Signal Processing
|
### Example: Hiding Internal State Variables in Signal Processing
|
||||||
|
|
||||||
Although the MUSIC N series shares a common workflow of using a score language and an orchestra language, the actual implementation of each programming language varies significantly, even within the series.
|
Although the MUSIC N series shares a common workflow of using a score language and an orchestra language, the actual implementation of each programming language varies significantly, even within the series.
|
||||||
|
|
||||||
One notable but overlooked example is MUSIGOL, a derivative of MUSIC IV [@innis_sound_1968]. In MUSIGOL, not only was the system itself but even the score and orchestra by user programs were written entirely as ALGOL 60 language. Like today's Processing or Arduino, MUSIGOL is one of the earliest examples of a programming language for music implemented as an internal DSL (DSL as a library)[^mus10]. (Therefore, according to the definition of Unit Generator provided in this paper, MUSIGOL does not qualify as a language that uses Unit Generators.)
|
One notable but often overlooked example is MUSIGOL, a derivative of MUSIC IV [@innis_sound_1968]. In MUSIGOL, not only the system itself but even the user-defined score and orchestra were written entirely in ALGOL 60. Similar to today's Processing or Arduino, MUSIGOL is one of the earliest examples of a programming language for music implemented as an internal DSL (a DSL as a library)[^mus10]. (Therefore, according to the definition of Unit Generator provided in this paper, MUSIGOL does not qualify as a language that uses Unit Generators.)
|
||||||
|
|
||||||
[^mus10]: While MUS10, used at Stanford University, was not an internal DSL, it was created by modifying an existing ALGOL parser [@loy1985, p248].
|
[^mus10]: While MUS10, used at Stanford University, was not an internal DSL, it was created by modifying an existing ALGOL parser [@loy1985, p.248].
|
||||||
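To make the "DSL as a library" idea concrete, unit-generator-like building blocks can be expressed as ordinary functions of a host language, so that score and orchestra become plain host-language code. The sketch below is purely illustrative in the spirit of MUSIGOL's use of ALGOL 60; the names `osc`, `gain`, and `SR` are hypothetical and do not reflect MUSIGOL's actual API:

```python
import math

SR = 8000  # sample rate, arbitrary for this sketch

def osc(freq):
    """Generator producing a sine wave at `freq` Hz, one sample per step."""
    phase = 0.0
    while True:
        yield math.sin(phase)
        phase += 2.0 * math.pi * freq / SR

def gain(source, amount):
    """Scale every sample coming from another generator."""
    for s in source:
        yield s * amount

# The "orchestra" and "score" are just ordinary host-language expressions:
voice = gain(osc(440.0), 0.5)
samples = [next(voice) for _ in range(4)]
print(samples)
```

Because the building blocks are library functions rather than a separate language, any abstraction facility of the host (loops, functions, data structures) is immediately available for composing them.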
|
|
||||||
The level of abstraction deemed intuitive for musicians varied across different iterations of the MUSIC N series. This can be illustrated by examining the description of a second-order band-pass filter. The filter mixes the current input signal $S_n$, the output signal from $t$ time steps prior $O_{n-t}$, and an arbitrary amplitude parameter $I_1$, as shown in the following equation:
|
The level of abstraction deemed intuitive for musicians varied across different iterations of the MUSIC N series. This can be illustrated by examining the description of a second-order band-pass filter. The filter mixes the current input signal $S_n$, the output signal from $t$ time steps prior $O_{n-t}$, and an arbitrary amplitude parameter $I_1$, as shown in the following equation:
|
||||||
|
|
||||||
$$O_n = I_1 \cdot S_n + I_2 \cdot O_{n-1} - I_3 \cdot O_{n-2}$$
|
$$O_n = I_1 \cdot S_n + I_2 \cdot O_{n-1} - I_3 \cdot O_{n-2}$$
|
||||||
|
|
||||||
In MUSIC V, this band-pass filter can be used as in Listing \ref{lst:musicv} [@mathews_technology_1969, p78].
|
In MUSIC V, this band-pass filter can be used as shown in Listing \ref{lst:musicv} [@mathews_technology_1969, p.78]. Here, `I1` represents the input bus, and `O` is the output bus. The parameters `I2` and `I3` correspond to the coefficients $I_2$ and $I_3$ normalized by division by $I_1$ (as a result, the overall gain of the filter can be greater or less than 1). The parameters `Pi` and `Pj` are normally used to receive parameters from the Score, chosen from the available `P0` to `P30`. In this case, however, these parameters are repurposed as general-purpose memory to temporarily store feedback signals. Similarly, other Unit Generators, such as oscillators, reuse note parameters to handle operations like phase accumulation. As a result, users needed to manually calculate feedback gains based on the desired frequency characteristics[^musicv], and they also had to reserve at least two memory slots for past samples.
|
||||||
|
|
||||||
~~~{label=lst:musicv caption="Example of the use of FLT UGen in MUSIC V."}
|
|
||||||
FLT I1 O I2 I3 Pi Pj;
|
|
||||||
~~~
|
|
||||||
|
|
||||||
Here, `I1` represents the input bus, and `O` is the output bus. The parameters `I2` and `I3` correspond to the normalized values of the coefficients $I_2$ and $I_3$, divided by $I_1$ (as a result, the overall gain of the filter can be greater or less than 1). The parameters `Pi` and `Pj` are normally used to receive parameters from the Score, specifically among the available `P0` to `P30`. In this case, however, these parameters are repurposed as general-purpose memory to temporarily store feedback signals. Similarly, other Unit Generators, such as oscillators, reuse note parameters to handle operations like phase accumulation.
|
|
||||||
|
|
||||||
As a result, users needed to manually calculate feedback gains based on the desired frequency characteristics[^musicv], and they also had to account for using at least two sample memory spaces.
|
|
||||||
|
|
||||||
[^musicv]: It is said that a preprocessing feature called `CONVT` could be used to transform frequency characteristics into coefficients [@mathews_technology_1969, p.77].
|
[^musicv]: It is said that a preprocessing feature called `CONVT` could be used to transform frequency characteristics into coefficients [@mathews_technology_1969, p.77].
|
||||||
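What MUSIC V users managed by hand can be restated directly from the difference equation above: the two state variables stand in for the repurposed note parameters `Pi` and `Pj`. The following is a minimal Python sketch of that bookkeeping, not MUSIC V's actual implementation:

```python
def flt(signal, i1, i2, i3):
    """Two-pole band-pass filter: O_n = I1*S_n + I2*O_{n-1} - I3*O_{n-2}."""
    o1 = 0.0  # O_{n-1}: the memory slot that Pi held in MUSIC V
    o2 = 0.0  # O_{n-2}: the memory slot that Pj held in MUSIC V
    out = []
    for s in signal:
        o = i1 * s + i2 * o1 - i3 * o2
        o2, o1 = o1, o  # shift the feedback memory by one sample
        out.append(o)
    return out

# Feeding a unit impulse yields the filter's impulse response.
impulse = [1.0] + [0.0] * 7
print(flt(impulse, i1=0.5, i2=1.2, i3=0.81))
```

Note that the coefficients `i2` and `i3` are still raw feedback gains: as in MUSIC V, nothing here relates them to a musically meaningful center frequency or bandwidth.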
|
|
||||||
On the other hand, in later MUSIC 11, and succeeding CSound by Barry Vercoe, the band-pass filter is defined as a Unit Generator (UGen) named `reson`. This UGen takes four parameters: the input signal, center cutoff frequency, bandwidth, and Q factor[@vercoe_computer_1983, p248]. Unlike previous implementations, users no longer need to calculate coefficients manually and no need to aware of the two-sample memory space. However, in MUSIC 11 and CSound, it is possible to implement this band-pass filter from scratch as a User Defined Opcode (UDO) as in Listing \ref{lst:reson}. Vercoe emphasized that while signal processing primitives should allow for low-level operations, such as single-sample feedback, and eliminate black boxes, it is equally important to provide high-level modules that avoid unnecessary complexity ("avoid the clutter") when users do not need to understand the internal details [@vercoe_computer_1983, p247].
|
On the other hand, in the later MUSIC 11 and its successor CSound by Barry Vercoe, the band-pass filter is defined as a Unit Generator (UGen) named `reson`. This UGen takes four parameters: the input signal, center cutoff frequency, bandwidth, and Q factor [@vercoe_computer_1983, p.248]. Unlike previous implementations, users no longer need to calculate coefficients manually, nor do they need to be aware of the two-sample memory space. However, in MUSIC 11 and CSound, it is possible to implement this band-pass filter from scratch as a User-Defined Opcode (UDO) as shown in Listing \ref{lst:reson}. Vercoe emphasized that while signal processing primitives should allow for low-level operations, such as single-sample feedback, and eliminate black boxes, it is equally important to provide high-level modules that avoid unnecessary complexity ("avoid the clutter") when users do not need to understand the internal details [@vercoe_computer_1983, p.247].
|
||||||
|
|
||||||
~~~{label=lst:reson caption="Example of scratch implementation and built-in operation of RESON UGen respectively, in MUSIC11. Retrieved from the original paper. (Comments are omitted for the space restriction.)"}
|
~~~{#lst:musicv caption="Example of the use of FLT UGen in MUSIC V."}
|
||||||
|
FLT I1 O I2 I3 Pi Pj;
|
||||||
|
~~~
|
||||||
|
|
||||||
|
~~~{#lst:reson caption="Example of scratch implementation and built-in operation of RESON UGen respectively, in MUSIC11. Retrieved from the original paper. (Comments are omitted for the space restriction.)"}
|
||||||
instr 1
|
instr 1
|
||||||
la1 init 0
|
la1 init 0
|
||||||
la2 init 0
|
la2 init 0
|
||||||
@@ -134,65 +130,63 @@ a1 reson a1,p5,p6,1
|
|||||||
endin
|
endin
|
||||||
~~~
|
~~~
|
||||||
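Vercoe's point about avoiding the clutter can be made concrete: a high-level UGen like `reson` derives the raw feedback coefficients from musically meaningful parameters. The sketch below uses a textbook two-pole resonator design (pole radius from bandwidth, pole angle from center frequency); it illustrates the kind of calculation being hidden, and is not the exact formula used in CSound:

```python
import math

def reson_coeffs(center_hz, bandwidth_hz, sample_rate=44100):
    """Derive feedback coefficients for O_n = g*S_n + I2*O_{n-1} - I3*O_{n-2}
    from a center frequency and bandwidth (textbook two-pole resonator)."""
    r = math.exp(-math.pi * bandwidth_hz / sample_rate)  # pole radius
    theta = 2.0 * math.pi * center_hz / sample_rate      # pole angle
    i2 = 2.0 * r * math.cos(theta)
    i3 = r * r
    g = 1.0 - r * r  # rough normalization so the resonant peak stays near unity
    return g, i2, i3

g, i2, i3 = reson_coeffs(1000.0, 50.0)
print(f"g={g:.4f}, I2={i2:.4f}, I3={i3:.4f}")
```

This is exactly the translation from "desired frequency characteristics" to coefficients that MUSIC V left to the user (or to `CONVT`), and that `reson` performs internally.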
|
|
||||||
On the other hand, in succeeding environments that inherit the Unit Generator paradigm, such as Pure Data [@puckette_pure_1997], Max (whose signal processing functionalities were ported from Pure Data as MSP), SuperCollider [@mccartney_supercollider_1996], and ChucK [@wang_chuck_2015], primitive UGens are implemented in general-purpose languages like C or C++[^chugen]. If users wish to define low-level UGens (external objects in Max and Pd), they need to set up a development environment for C or C++.
|
On the other hand, in succeeding environments that inherit the Unit Generator paradigm, such as Pure Data [@puckette_pure_1997], Max (whose signal processing functionalities were ported from Pure Data as MSP), SuperCollider [@mccartney_supercollider_1996], and ChucK [@wang_chuck_2015], primitive UGens are implemented in general-purpose languages like C or C++[^chugen]. If users wish to define low-level UGens (called external objects in Max and Pd), they need to set up a development environment for C or C++.
|
||||||
|
|
||||||
[^chugen]: ChucK later introduced ChuGen, a similar extension to CSound’s UDO, allowing users to define UGens within the ChucK language itself [@Salazar2012]. However, in both CSound and ChucK, not all built-in UGens are replaced by user-defined ones by default; these remain supplemental features, possibly because the runtime performance of UDOs is inferior to that of natively implemented UGens.
|
[^chugen]: ChucK later introduced ChuGen, a similar extension to CSound’s UDO, allowing users to define UGens within the ChucK language itself [@Salazar2012]. However, in both CSound and ChucK, not all built-in UGens are replaced by user-defined ones by default; these remain supplemental features, possibly because the runtime performance of UDOs is inferior to that of natively implemented UGens.
|
||||||
|
|
||||||
When UGens are implemented in low-level languages like C, even if the implementation is open-source, the division of knowledge effectively forces users (composers) to treat UGens as black boxes. This reliance on UGens as black boxes reflects and deepens the division of labor between musicians and scientists that was establish in MUSIC though the it can be interpreted as both a cause and a result.
|
When UGens are implemented in low-level languages like C, even if the implementation is open-source, the division of knowledge effectively forces users (composers) to treat UGens as black boxes. This reliance on UGens as black boxes reflects and deepens the division of labor between musicians and scientists that was established in MUSIC, though it can be interpreted as both a cause and a result.
|
||||||
|
|
||||||
For example, Puckette, the developer of Max and Pure Data, noted that the division of labor at IRCAM between Researchers, Musical Assistants(Realizers), and Composers has parallels in the current Max ecosystem, where the roles are divided into Max developers them selves, developers of external objects, and Max users [@puckette_47_2020]. As described in the ethnography of 1980s IRCAM by anthropologist Georgina Born, the division of labor between fundamental research scientists and composers at IRCAM was extremely clear. This structure was also tied to the exclusion of popular music and its associated technologies in IRCAM’s research focus [@Born1995].
|
For example, Puckette, the developer of Max and Pure Data, noted that the division of labor at IRCAM between Researchers, Musical Assistants (Realizers), and Composers has parallels in the current Max ecosystem, where roles are divided among the developers of Max itself, developers of external objects, and Max users [@puckette_47_2020]. As described in the ethnography of 1980s IRCAM by anthropologist Georgina Born, the division of labor between fundamental research scientists and composers at IRCAM was extremely clear. This structure was also tied to the exclusion of popular music and its associated technologies from IRCAM’s research focus [@Born1995].
|
||||||
|
|
||||||
However, such divisions are not necessarily the result of differences in values along the axes analyzed by Born, such as modernist/postmodernist/populist or low-tech/high-tech distinctions[^wessel]. This is because the black-boxing of technology through the division of knowledge occurs in popular music as well. Paul Théberge pointed out that the "democratization" of synthesizers in the 1980s was achieved through the concealment of technology, which transformed musicians as creators into consumers.
|
However, such divisions are not necessarily the result of differences in values along the axes analyzed by Born, such as modernist/postmodernist/populist or low-tech/high-tech distinctions[^wessel]. This is because the black-boxing of technology through the division of knowledge occurs in popular music as well. Paul Théberge pointed out that the "democratization" of synthesizers in the 1980s was achieved through the concealment of technology, which transformed musicians as creators into consumers.
|
||||||
|
|
||||||
[^wessel]: David Wessel revealed that the individual referred to as RIG in Born’s ethnography was himself and commented that Born oversimplified her portrayal of Pierre Boulez, then director of IRCAM, as a modernist. [@taylor_article_1999]
|
[^wessel]: David Wessel revealed that the individual referred to as RIG in Born’s ethnography was himself and commented that Born oversimplified her portrayal of Pierre Boulez, then director of IRCAM, as a modernist. [@taylor_article_1999]
|
||||||
|
|
||||||
> Lacking adequate knowledge of the technical system, musicians increasingly found themselves drawn to prefabricated programs as a source of new sound material. (...)it also suggests a reconceptualization on the part of the industry of the musician as a particular type of consumer. [@theberge_any_1997, p89]
|
> Lacking adequate knowledge of the technical system, musicians increasingly found themselves drawn to prefabricated programs as a source of new sound material. (...) it also suggests a reconceptualization on the part of the industry of the musician as a particular type of consumer. [@theberge_any_1997, p.89]
|
||||||
|
|
||||||
This argument can be extended beyond electronic music to encompass computer-based music in general. For example, media researcher Lori Emerson noted that while the proliferation of personal computers began with the vision of "metamedium"—tools that users could modify themselves, as exemplified by Xerox PARC's Dynabook—the vision was ultimately realized in an incomplete form through devices like the Macintosh and iPad, which distanced users from programming by black-boxing functionality [@emerson2014]. In fact, Alan Kay, the architect behind the Dynabook concept, remarked that while the iPad's appearance may resemble the ideal he originally envisioned, its lack of extensibility through programming renders it merely a device for media consumption [@kay2019].
|
This argument can be extended beyond electronic music to encompass computer-based music in general. For example, media researcher Lori Emerson noted that while the proliferation of personal computers began with the vision of a "metamedium"—tools that users could modify themselves, as exemplified by Xerox PARC's Dynabook—the vision was ultimately realized in an incomplete form through devices like the Macintosh and iPad, which distanced users from programming by black-boxing functionality [@emerson2014]. In fact, Alan Kay, the architect behind the Dynabook concept, remarked that while the iPad's appearance may resemble the ideal he originally envisioned, its lack of extensibility through programming renders it merely a device for media consumption [@kay2019].
|
||||||
|
|
||||||
Although programming environments as tools for music production are not relatively used, the UGen concept serves as a premise for today's popular music production software and infrastructure, like audio plugin formats for DAW softwares and WebAudio. It is known that the concept of Unit Generators emerged either simultaneously with or even slightly before modular synthesizers [@park_interview_2009, p20]. However, UGen-based languages have actively incorporated metaphors of modular synthesizers for their user interface, as Vercoe said that the distinction between "ar" (audio-rate) and "kr" (control-rate) processing introduced in MUSIC 11 is said to have been inspired by Buchla's distinction in its plug type [@vercoe_barry_2012, 1:01:38–1:04:04].
|
Although programming environments as tools for music production are not widely used, the UGen concept serves as a premise for today's popular music production software and infrastructure, such as audio plugin formats for DAWs and WebAudio. It is known that the concept of Unit Generators emerged either simultaneously with or even slightly before modular synthesizers [@park_interview_2009, p.20]. However, UGen-based languages have actively incorporated metaphors from modular synthesizers into their user interfaces; for example, Vercoe recalled that the distinction between "ar" (audio-rate) and "kr" (control-rate) processing introduced in MUSIC 11 was inspired by Buchla's distinction in plug types [@vercoe_barry_2012, 1:01:38–1:04:04].
|
||||||
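The "ar"/"kr" distinction boils down to computing a control signal once per block of samples rather than once per sample. The following is a hypothetical illustration of that idea, not MUSIC 11 code; the names `BLOCK` and `apply_control` are invented for the sketch:

```python
BLOCK = 64  # control period: one "kr" value per BLOCK audio samples

def apply_control(audio, control_values):
    """Multiply an audio-rate signal by a control-rate signal that is
    updated only once per BLOCK samples (held constant within a block)."""
    out = []
    for block_idx, k in enumerate(control_values):
        start = block_idx * BLOCK
        for s in audio[start:start + BLOCK]:
            out.append(s * k)  # k stays fixed for the whole block
    return out

audio = [1.0] * (BLOCK * 2)
ctrl = [0.25, 0.5]  # one control value per block
mixed = apply_control(audio, ctrl)
print(mixed[0], mixed[BLOCK])  # 0.25 0.5
```

Evaluating slowly varying parameters at the block rate is what makes the scheme cheaper than recomputing everything per sample, at the cost of stepwise (rather than smooth) control changes.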
|
|
||||||
However, adopting visual metaphors comes with the limitation that it constrains the complexity of representation to what is visually conceivable. In languages with visual patching interfaces like Max and Pure Data, meta-operations on UGens are often restricted to simple tasks, such as parallel duplication. Consequently, even users of Max or Pure Data may not necessarily be engaging in expressions that are only possible with computers. Instead, many might simply be using these tools as the most convenient software equivalents of modular synthesizers.
|
However, adopting visual metaphors comes with the limitation that it constrains the complexity of representation to what is visually conceivable. In languages with visual patching interfaces like Max and Pure Data, meta-operations on UGens are often restricted to simple tasks, such as parallel duplication. Consequently, even users of Max or Pure Data may not necessarily be engaging in forms of expressions that are only possible with computers. Instead, many might simply be using these tools as the most convenient software equivalents of modular synthesizers.
|
||||||
|
|
||||||
## Context of Programming Languages for Music After 2000
|
## Context of Programming Languages for Music After 2000
|
||||||
|
|
||||||
Based on the discussions thus far, music programming languages developed after the 2000s can be categorized into two distinct directions: those that narrow the scope of the language's role by attempting alternative abstractions at a higher level, distinct from the UGen paradigm, and those that expand the general-purpose capabilities of the language, reducing black-boxing.
|
Based on the discussions thus far, music programming languages developed after the 2000s can be categorized into two distinct directions: those that narrow the scope of the language's role by introducing alternative abstractions at a higher level, distinct from the UGen paradigm, and those that expand the general-purpose capabilities of the language, reducing black-boxing.
|
||||||
|
|
||||||
Languages that pursued alternative abstractions at higher levels have evolved alongside the culture of live coding, where performances are conducted by rewriting code in real time. The activities of the live coding community, including groups such as TOPLAP since the 2000s, were not only about turning coding itself into a performance but also served as a resistance against laptop performances that relied on black-boxed music software. This is evident in the community's manifesto, which states, "Obscurantism is dangerous" [@toplap_manifestodraft_2004].
|
Languages that pursued alternative higher-level abstractions have evolved alongside the culture of live coding, where performances are conducted by rewriting code in real time. The activities of the live coding community, including groups such as TOPLAP since the 2000s, were not only about turning coding itself into a performance but also served as a resistance against laptop performances that relied on black-boxed music software. This is evident in the community's manifesto, which states, "Obscurantism is dangerous" [@toplap_manifestodraft_2004].
|
||||||
|
|
||||||
Languages implemented as clients for SuperCollider, such as **IXI** (on Ruby) [@Magnusson2011], **Sonic Pi** (on Ruby), **Overtone** (on Clojure) [@Aaron2013], **TidalCycles** (on Haskell) [@McLean2014], and **FoxDot** (on Python) [@kirkbride2016foxdot], leverage the expressive power of more general-purpose programming languages. While embracing the UGen paradigm, they enable high-level abstractions for previously difficult-to-express elements like note values and rhythm. For example, the abstraction of patterns in TidalCycles is not limited to music but can also be applied to visual patterns and other outputs, meaning it is not inherently tied to PCM-based waveform output as the final result.

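The output-agnostic character of such pattern abstractions can be sketched in miniature. The following Python sketch is illustrative only: the names `pure` and `fastcat` echo Tidal's combinators, but this is not TidalCycles' actual implementation. A pattern is modeled simply as a function from a cycle number to a list of timed events, so the same structure can drive audio, visuals, or any other consumer of events.

```python
from fractions import Fraction

def pure(value):
    """A pattern that emits `value` once per cycle, at the start of the cycle."""
    return lambda cycle: [(Fraction(0), value)]

def fastcat(*patterns):
    """Concatenate patterns within one cycle, in the spirit of Tidal's
    mini-notation "[a b c]": each sub-pattern is squeezed into an equal
    slice of the cycle."""
    n = len(patterns)
    def query(cycle):
        events = []
        for i, pat in enumerate(patterns):
            offset = Fraction(i, n)
            for onset, value in pat(cycle):
                events.append((offset + onset / n, value))
        return events
    return query

# The event values here are drum names, but nothing ties the pattern to
# sound: the same events could just as well drive a visual output.
drums = fastcat(pure("bd"), pure("sn"), pure("hh"))
# drums(0) yields events at 0, 1/3, and 2/3 of the cycle.
```

Because querying a pattern is just calling a function, transformations such as repetition or reversal can be written as ordinary function combinators over these queries.
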
On the other hand, due to their high-level design, these languages often rely on ad hoc implementations for tasks like sound manipulation and low-level signal processing, such as effects. McCartney, the developer of SuperCollider, stated that if general-purpose programming languages were sufficiently expressive, there would be no need to create specialized languages [@McCartney2002]. This prediction appears reasonable when considering examples like MUSIGOL. However, in practice, scripting languages that excel in dynamic program modification face challenges in modern preemptive OS environments. For instance, dynamic memory management techniques such as garbage collection can hinder the deterministic execution timing required for real-time processing [@Dannenberg2005].

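The real-time constraint at issue here can be illustrated by its most common mitigation: preallocating buffers and avoiding any allocation inside the audio callback, since allocation (and, in managed languages, garbage collection) can pause the thread and make the callback miss its deadline, producing an audible dropout. The sketch below is purely illustrative; the names, block size, and naive sawtooth oscillator are our own choices, not drawn from any system discussed here.

```python
import array

SAMPLE_RATE = 48_000
BLOCK_SIZE = 64

# Preallocate the output block once, outside the callback. Real-time audio
# code fills such buffers in place rather than allocating per block.
out_block = array.array("f", [0.0] * BLOCK_SIZE)

def audio_callback(block, phase):
    """Fill a preallocated block in place with a naive 440 Hz sawtooth.

    Returns the updated phase so the caller can carry state between
    blocks without any allocation inside the callback."""
    incr = 440.0 / SAMPLE_RATE
    for i in range(len(block)):
        block[i] = 2.0 * phase - 1.0   # map [0, 1) phase to [-1, 1)
        phase += incr
        if phase >= 1.0:
            phase -= 1.0
    return phase

phase = audio_callback(out_block, 0.0)
```

In a garbage-collected scripting language, even this discipline is not enough on its own: collections triggered elsewhere in the program can still stall the audio thread, which is why real-time systems typically move signal processing out of the interpreter entirely.
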
Historically, programming in languages like FORTRAN or C served as a universal method for implementing audio processing on computers, independent of architecture. However, with the proliferation of higher-level languages, programming in C or C++ has become relatively more difficult, akin to programming in assembly language in earlier times. Furthermore, considering the challenges of portability across not only different CPUs but also diverse host environments such as operating systems and the Web, these languages are no longer as portable as they once were. Consequently, systems targeting signal processing implemented as internal DSLs have become exceedingly rare, with only a few examples such as LuaAV [@wakefield2010].

Instead, an approach has emerged to create general-purpose languages specifically designed for use in music from the ground up. One prominent example is **Extempore**, a live programming environment developed by Sorensen [@sorensen_extempore_2018]. Extempore consists of Scheme, a LISP-based language, and xtlang, a meta-implementation on top of Scheme. While xtlang requires users to write hardware-oriented type signatures similar to those in C, it leverages the LLVM compiler infrastructure [@Lattner] to just-in-time (JIT) compile signal processing code, including sound manipulation, into machine code for high-speed execution.

The expressive power of general-purpose languages and compiler infrastructures like LLVM has given rise to an approach focused on designing languages with mathematical formalization that reduces black-boxing. **Faust** [@Orlarey2009], for instance, is a language that retains a graph-based structure akin to UGens but is built on a formal system called Block Diagram Algebra. Thanks to its formalization, Faust can be transpiled into various low-level languages such as C, C++, or Rust and can also be used as external objects in Max or Pure Data.

Languages like **Kronos** [@norilo2015] and **mimium** [@matsuura_mimium_2021], which are based on the more general computational model of lambda calculus, focus on PCM-based signal processing while exploring interactive meta-operations on programs [@Norilo2016] and balancing self-contained semantics with interoperability with other general-purpose languages [@matsuura_lambda-mmm_2024].

Domain-specific languages (DSLs) are constructed within a double bind: they aim to specialize in a particular purpose while still providing a certain degree of expressive freedom through coding. In this context, efforts like Extempore, Kronos, and mimium are not merely programming languages for music but are also situated within the broader research context of functional reactive programming (FRP), which focuses on representing time-varying values in computation. Most computing models lack an inherent concept of real time and instead operate in discrete computational steps. Similarly, low-level general-purpose programming languages do not natively include primitives for real-time concepts. Consequently, the exploration of computational models tied to time, a domain inseparable from music, remains vital and has the potential to contribute to the theoretical foundations of general-purpose programming languages.

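The FRP notion of a time-varying value can be sketched minimally. In the hypothetical Python sketch below (not any particular FRP library's API), a "behavior" is a continuous function of time, and a program observes it only by sampling at discrete computational steps:

```python
import math

def lfo(freq_hz):
    """A behavior: a continuous function from time in seconds to a value
    in [-1, 1]. Nothing about it is discrete yet."""
    return lambda t: math.sin(2.0 * math.pi * freq_hz * t)

def sample(behavior, sample_rate, n):
    """Discretize a continuous behavior into n samples: this is the step
    where a timeless function meets the stepwise execution of a computer."""
    return [behavior(i / sample_rate) for i in range(n)]

vibrato = lfo(5.0)                    # a 5 Hz low-frequency oscillator
frames = sample(vibrato, 100.0, 5)    # observe it at 100 samples per second
```

The gap between the continuous `lfo` and the discrete `sample` is precisely where music-language designers must commit to a model of time.
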
However, strongly formalized languages come with another trade-off. While they allow UGens to be defined without black-boxing, understanding the design and implementation of these languages often requires expert knowledge. This can create a deeper division between language developers and users, in contrast to the many small, shallow divisions seen in the multi-language paradigm: SuperCollider developers, external UGen developers, client language developers (e.g., TidalCycles), SuperCollider users, and client language users.

Although there is no clear solution to this trade-off, one intriguing idea is the development of self-hosting languages for music, that is, languages whose compilers are written in the language itself. At first glance, this may seem impractical. However, by enabling users to learn and modify the language's mechanisms spontaneously, this approach could create an environment that fosters deeper engagement and understanding among users.

## Conclusion

This paper has reexamined the history of computer music and music programming languages with a focus on the universalism of PCM and the black-boxing tendencies of the Unit Generator paradigm. Historically, it was expected that the clear division of roles between engineers and composers would enable the creation of new forms of expression using computers. Indeed, from the perspective of Post-Acousmatic discourse, some, such as Holbrook and Rudi, still consider this division to be a positive development:

> Most newer tools abstract the signal processing routines and variables, making them easier to use while removing the need for understanding the underlying processes in order to create meaningful results. Composers no longer necessarily need mathematical and programming skills to use the technologies. [@holbrook2022, p. 2]

However, this division of labor also creates a shared vocabulary (as exemplified in the Unit Generator by Mathews) and serves to perpetuate it. By portraying new technologies as something externally introduced, and by focusing on the agency of those who create music with computers, the individuals responsible for building programming environments, software, protocols, and formats are rendered invisible [@sterne_there_2014]. This leads to an oversight of the indirect power relationships produced by these infrastructures.

For this reason, future research on programming languages for music must address how the tools, including the languages themselves, contribute aesthetic value within musical culture (and what forms of musical practice they enable), as well as the social (im)balances of power they produce.

The academic value of research on programming languages for music is often asserted in vague terms such as "general", "expressive", and "efficient". However, it is difficult to substantiate these claims now that processing speed is no longer the primary concern. Thus, as with the notion of Idiomaticity proposed by McPherson et al. [@McPherson2020], we need to develop and share a vocabulary for understanding the value judgments we make about music languages.

In a broader sense, the development of programming languages for music has also expanded to the individual level. Examples include **Gwion** by Astor, which is inspired by ChucK and extends its abstraction capabilities with features such as lambda functions [@astor_gwion_2017]; **Vult**, a DSP transpiler language created by Ruiz for his modular synthesizer hardware [@Ruiz2020]; and **Glicol**, a UGen-based live coding environment designed for the web [@lan_glicol_2020]. However, these efforts have not yet been incorporated into academic discourse.

Conversely, practical knowledge of past languages from the 1950s and 1960s, as well as of real-time hardware-oriented systems from the 1980s, is gradually being lost. While research efforts such as *Inside Computer Music*, which analyzes historical works of computer music, have begun [@clarke_inside_2020], an archaeological practice focused on the construction of computer music systems themselves will also be necessary.