From 6ed990c433a3084eed6b842456ba29a521c33954 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E6=9D=BE=E6=B5=A6=20=E7=9F=A5=E4=B9=9F=20Matsuura=20Tomoy?=
 =?UTF-8?q?a?=
Date: Mon, 7 Apr 2025 05:24:24 +0000
Subject: [PATCH] updating content

---
 content.tex | 48 +++++++++++++++++++++---------------------------
 main.bib    | 33 ++++++++++++++++++++++++++++++---
 main.md     | 30 +++++++++++++++---------------
 3 files changed, 66 insertions(+), 45 deletions(-)

diff --git a/content.tex b/content.tex
index 3318534..97dac0e 100644
--- a/content.tex
+++ b/content.tex
@@ -1,7 +1,7 @@
\section{Introduction}\label{introduction}

-Programming languages and environments for music such as Max, Pure Data,
-CSound, and SuperCollider has been referred to as ``Computer Music
+Programming languages and environments for music, such as Max, Pure Data,
+Csound, and SuperCollider, have been referred to as ``Computer Music
Language''\citep{McCartney2002, Nishino2016, McPherson2020}, ``Language
for Computer Music''\citep{Dannenberg2018}, and ``Computer Music
Programming Systems''\citep{Lazzarini2013}, though there is no clear
@@ -9,7 +9,7 @@ consensus on the use of these terms. However, as the shared term
``Computer Music'' implies, these programming languages are deeply
intertwined with the history of technology-driven music, which
developed under the premise that ``almost any sound can be
-produced''\citep{mathews_acoustic_1961} through the use of computers.
+produced''\citep[p557]{mathews1963} through the use of computers.

In the early days, when computers existed only in research laboratories
and neither displays nor mice existed, creating sound or music with
@@ -26,9 +26,7 @@ and the various constraints required for real-time audio processing
have significantly increased the specialized knowledge necessary for
developing programming languages for music today. Furthermore, some
languages developed after the 2000s are not necessarily aimed at
-pursuing new forms of musical expression. It seems that there is still
-no unified perspective on how the value of such languages should be
-evaluated.
+pursuing new forms of musical expression. There is still no unified perspective on how the value of such languages should be evaluated.

In this paper, a critical historical review is conducted by drawing on
discussions from sound studies alongside existing surveys, aiming to
@@ -44,7 +42,7 @@ framework tied to specific styles or communities, as represented in
Ostertag's \emph{Why Computer Music Sucks}\citep{ostertag1998} since
the 1990s.

-As Lyon observed nearly two decades ago, it is now nearly impossible to
+As Eric Lyon observed nearly two decades ago, it is now nearly impossible to
imagine a situation in which computers are not involved at any stage
from the production to experience of music\citep[p1]{lyon_we_2006}. The
necessity of using the term ``Computer Music'' to describe academic
@@ -125,12 +123,10 @@ The MUSIC I (1957) in Bell Labs\citep{Mathews1980} and succeeding
MUSIC-N family are highlighted as the earliest examples of computer
music research. However, attempts to create music with computers in
the UK and Australia prior to MUSIC have also been
-documented\citep{doornbusch2017}. Organizing what was achieved by
-MUSIC-N and earlier efforts can help clarify definitions of computer
-music.
+documented\citep{doornbusch2017}.
The earliest experiments with sound generation on computers in the 1950s
-involved controlling the intervals between one-bit pulses (on or off) to
+involved controlling the intervals between one-bit pulses to
control pitch. This was partly because the operational clock
frequencies of early computers fell within the audible range, making
the sonification of electrical signals a practical and cost-effective
@@ -223,7 +219,7 @@ was not simply that it was developed early, but because it was the
-first to implement, but because it was the first to implement sound
+first to implement sound
representation on a computer based on \textbf{pulse-code modulation
(PCM)}, which theoretically can generate ``almost any
-sound''\citep[p557]{mathews1963}
+sound''.

PCM, the foundational digital sound representation today, involves
sampling audio waveforms at discrete intervals and quantizing the sound
@@ -268,7 +264,7 @@ noise from it. Sampling theory builds on this premise through Shannon's
information theory by statistically modeling human auditory
characteristics: it assumes that humans cannot discern volume
differences below certain thresholds or perceive vibrations outside
-specific frequency ranges. By limiting representation to this range,
+specific frequency ranges. By limiting representation to the recognizable range,
sampling theory ensures that all audible sounds can be effectively
encoded.

@@ -282,16 +278,14 @@ ring modulation (RM), amplitude modulation (AM), or distortion often
generate aliasing artifacts unless proper oversampling is applied. These
artifacts occur because PCM, while universally suitable for reproducing
recorded sound, is not inherently versatile as a medium for generating
-new sounds. As Puckette has argued, alternative representations, such as
-collections of linear segments or physical modeling synthesis, offer
+new sounds. As Puckette has argued, alternative representations, such as sequences of linear segments or physical modeling synthesis, offer
other possibilities\citep{puckette2015}. Therefore, PCM is not a
completely universal tool for creating sound.

\section{What Does the Unit Generator
Hide?}\label{what-does-the-unit-generator-hide}

-Beginning with version III, MUSIC took the form of an acoustic compiler
-(block diagram compiler) that processes two input sources: a score
+Beginning with version III, MUSIC took the form of a block diagram compiler that processes two input sources: a score
language, which represents a list of time-varying parameters, and an
orchestra language, which describes the connections between \textbf{Unit
Generators} such as oscillators and filters. In this paper, the term
@@ -307,7 +301,7 @@ domain-specific languages. Mathews explained that he developed a
compiler for MUSIC III in response to requests from many composers for
additional features in MUSIC II, such as envelopes and vibrato, while
also ensuring that the program would not be restricted to a specialized
-form of musical expression (Max V. Mathews 2007, 13:10-17:50). He
+form of musical expression\citep[13:10-17:50]{mathews_max_2007}. He
repeatedly stated that his role was that of a scientist rather than a
musician:

@@ -338,8 +332,8 @@ One notable but often overlooked example is MUSIGOL, a derivative of
-MUSIC IV \citep{innis_sound_1968}. In MUSIGOL, not only was the system
-itself but even the score and orchestra defined by user were written
-entirely as ALGOL 60 language. 
Similar to today's Processing or Arduino,
-MUSIGOL is one of the earliest examples of a programming language for
-music implemented as an internal DSL (DSL as a library)\footnote{While
+MUSIC IV \citep{innis_sound_1968}. In MUSIGOL, not only the system
+itself but even the user-defined score and orchestra were written
+entirely in ALGOL 60. Similar to today's Processing or Arduino,
+MUSIGOL is one of the earliest internal domain-specific languages (DSLs)
+for music, i.e., a DSL implemented as a library\footnote{While
  MUS10, used at Stanford University, was not an internal DSL, it was
  created by modifying an existing ALGOL parser \citep[p.248]{loy1985}.}.
(Therefore, according to the definition of Unit Generator provided in
@@ -377,14 +371,14 @@ characteristics\footnote{It is said that a preprocessing feature called
  \citep[p77]{mathews_technology_1969}.}, and they also had to account
for at least two sample memory spaces.

-On the other hand, in later MUSIC 11, and its successor CSound by Barry
+On the other hand, in the later MUSIC 11 and its successor Csound by Barry
Vercoe, the band-pass filter is defined as a Unit Generator (UGen) named
\passthrough{\lstinline!reson!}. This UGen takes four parameters: the
input signal, center cutoff frequency, bandwidth, and Q
factor\citep[p248]{vercoe_computer_1983}. Unlike previous
implementations, users no longer need to calculate coefficients
manually, nor do they need to be aware of the two-sample memory space.
-However, in MUSIC 11 and CSound, it is possible to implement this
-band-pass filter from scratch as a User-Defined Opcode (UDO) as shownin
+However, in MUSIC 11 and Csound, it is possible to implement this
+band-pass filter from scratch as a User-Defined Opcode (UDO) as shown in
Listing \ref{lst:reson}. Vercoe emphasized that while signal processing
primitives should allow for low-level operations, such as single-sample
@@ -423,10 +417,10 @@ Generator paradigm, such as Pure Data \citep{puckette_pure_1997}, Max
MSP), SuperCollider \citep{mccartney_supercollider_1996}, and ChucK
\citep{wang_chuck_2015}, primitive UGens are implemented in
general-purpose languages like C or C++\footnote{ChucK later introduced
-  ChuGen, which is similar extension to CSound's UDO, allowing users to
+  ChuGen, which is a similar extension to Csound's UDO, allowing users to
  define UGens within the ChucK language itself \citep{Salazar2012}.
  However, not all existing UGens are replaced by UDOs by default both
-  in CSound and ChucK, which remain supplemental features possibly
+  in Csound and ChucK, where UDOs remain supplemental features, possibly
  because the runtime performance of UDO is inferior to natively
  implemented UGens.}. If users wish to define low-level UGens (called
external objects in Max and Pd), they need to set up a development
@@ -569,8 +563,8 @@ example is \textbf{Extempore}, a live programming environment developed
by Sorensen \citep{sorensen_extempore_2018}. Extempore consists of
Scheme, a LISP-based language, and xtlang, a meta-implementation on top
of Scheme. While xtlang requires users to write hardware-oriented type
-signatures similar to those in C, it leverages the LLVM compiler
-infrastructure \citep{Lattner} to just-in-time (JIT) compile signal
+signatures similar to those in C, it leverages the LLVM compiler
+infrastructure\citep{Lattner} to just-in-time (JIT) compile signal
processing code, including sound manipulation, into machine code for
high-speed execution.

@@ -592,7 +586,7 @@ processing while exploring interactive meta-operations on programs
interoperability with other general-purpose languages
\citep{matsuura_lambda-mmm_2024}.
-Domain-specific languages (DSLs) are constructed within a double bind: +DSLs are constructed within a double bind: they aim to specialize in a particular purpose while still providing a certain degree of expressive freedom through coding. In this context, efforts like Extempore, Kronos, and mimium are not merely programming diff --git a/main.bib b/main.bib index 0418287..e1aaa20 100644 --- a/main.bib +++ b/main.bib @@ -226,6 +226,20 @@ file = {/Users/tomoya/Zotero/storage/NBRFF5ND/Holbrook et al. - Computer music and post-acousmatic practices.pdf} } +@inbook{inglizian_beyond_2020, + title = {Beyond {{Bending}} - {{Triggering}}, {{Sequencing}}, and {{Modulating Circuit-Bent Toys}}}, + booktitle = {Handmade {{Electronic Music}}: {{The Art}} of {{Hardware Hacking}}}, + author = {Inglizian, Alex}, + year = {2020}, + month = jun, + edition = {3}, + publisher = {Routledge}, + address = {New York}, + doi = {10.4324/9780429264818}, + collaborator = {Collins, Nicolas}, + isbn = {978-0-429-26481-8} +} + @article{innis_sound_1968, title = {Sound {{Synthesis}} by {{Computer}}: {{Musigol}}, a {{Program Written Entirely}} in {{Extended Algol}}}, shorttitle = {Sound {{Synthesis}} by {{Computer}}}, @@ -257,6 +271,19 @@ file = {/Users/tomoya/Zotero/storage/52TPMQQG/American-computer-pioneer-Alan-Kay-s-concept-the-Dynabook-was-published-in-1972-How-come-Steve-.html} } +@book{kelly_cracked_2009, + title = {Cracked {{Media}}: {{The Sound}} of {{Malfunction}}}, + shorttitle = {Cracked Media}, + author = {Kelly, Caleb}, + year = {2009}, + publisher = {MIT Press}, + address = {Cambridge, Mass}, + isbn = {978-0-262-01314-7}, + lccn = {ML197 .K495 2009}, + keywords = {Avant-garde (Music),Glitch music,History and criticism}, + annotation = {OCLC: ocn263498032} +} + @inproceedings{kirkbride2016foxdot, title = {{{FoxDot}}: {{Live}} Coding with Python and Supercollider}, booktitle = {Proceedings of the {{International Conference}} on {{Live Interfaces}}}, @@ -352,8 +379,8 @@ author = {Lyon, Eric}, year = {2006}, journal = {Talk given at EMS 2006, Beijing}, - url = {https://disis.music.vt.edu/eric/LyonPapers/Do_We_Still_Need_Computer_Music.pdf}, urldate = {2025-01-17}, + howpublished = {https://disis.music.vt.edu/eric/LyonPapers/Do\_We\_Still\_Need\_Computer\_Music.pdf}, file = {/Users/tomoya/Zotero/storage/SK2DXEE8/Do_We_Still_Need_Computer_Music.pdf} } @@ -681,7 +708,7 @@ author = {Ostertag, Bob}, year = {1998}, urldate = {2025-01-17}, - howpublished = {\url{https://web.archive.org/web/20160312125123/http://bobostertag.com/writings-articles-computer-music-sucks.htm}}, + howpublished = {https://web.archive.org/web/20160312125123/http://bobostertag.com/writings-articles-computer-music-sucks.htm}, file = {/Users/tomoya/Zotero/storage/9QAGQSVS/writings-articles-computer-music-sucks.html} } @@ -947,8 +974,8 @@ author = {Vercoe, Barry L.}, year = {2012}, month = apr, - url = {https://libraries.mit.edu/music-oral-history/interviews/barry-vercoe-4242012/}, urldate = {2022-01-14}, + howpublished = {https://libraries.mit.edu/music-oral-history/interviews/barry-vercoe-4242012/}, language = {en-US}, file = {/Users/tomoya/Zotero/storage/H5B6GV4U/barry-vercoe-4242012.html} } diff --git a/main.md b/main.md index 61a8aa7..be469d9 100644 --- a/main.md +++ b/main.md @@ -1,10 +1,10 @@ ## Introduction -Programming languages and environments for music such as Max, Pure Data, CSound, and SuperCollider has been referred to as "Computer Music Language"[@McCartney2002;@Nishino2016;@McPherson2020], "Language for Computer 
Music"[@Dannenberg2018], and "Computer Music Programming Systems"[@Lazzarini2013], though there is no clear consensus on the use of these terms. However, as the shared term "Computer Music" implies, these programming languages are deeply intertwined with the history of technology-driven music, which developed under the premise that "almost any sound can be produced"[@mathews_acoustic_1961] through the use of computers. +Programming languages and environments for music, for instance, Max, Pure Data, Csound, and SuperCollider, has been referred to as "Computer Music Language"[@McCartney2002;@Nishino2016;@McPherson2020], "Language for Computer Music"[@Dannenberg2018], and "Computer Music Programming Systems"[@Lazzarini2013], though there is no clear consensus on the use of these terms. However, as the shared term "Computer Music" implies, these programming languages are deeply intertwined with the history of technology-driven music, which developed under the premise that "almost any sound can be produced"[@mathews1963,p557] through the use of computers. In the early days, when computers existed only in research laboratories and neither displays nor mice existed, creating sound or music with computers was inevitably equivalent to programming. Today, however, programming as a means to produce sound on a computer—rather than employing Digital Audio Workstation (DAW) software like Pro Tools is not popular. In other words, programming languages for music developed after the proliferation of personal computers are the softwares that intentionally chose programming (whether textual or graphical) as their frontend for making sound. -Since the 1990s, the theoretical development of programming languages and the various constraints required for real-time audio processing have significantly increased the specialized knowledge necessary for developing programming languages for music today. Furthermore, some languages developed after the 2000s are not necessarily aimed at pursuing new forms of musical expression. It seems that there is still no unified perspective on how the value of such languages should be evaluated. +Since the 1990s, the theoretical development of programming languages and the various constraints required for real-time audio processing have significantly increased the specialized knowledge necessary for developing programming languages for music today. Furthermore, some languages developed after the 2000s are not necessarily aimed at pursuing new forms of musical expression. There is still no unified perspective on how the value of those languages should be evaluated. In this paper, a critical historical review is conducted by drawing on discussions from sound studies alongside existing surveys, aiming to consider programming languages for music independently from computer music as the specific genre. @@ -12,7 +12,7 @@ In this paper, a critical historical review is conducted by drawing on discussio The term "Computer Music," despite its literal and potentially broad meaning, has been noted for being used within a narrowly defined framework tied to specific styles or communities, as represented in Ostertag's *Why Computer Music Sucks*[@ostertag1998] since the 1990s. -As Lyon observed nearly two decades ago, it is now nearly impossible to imagine a situation in which computers are not involved at any stage from the production to experience of music[@lyon_we_2006, p1]. The necessity of using the term "Computer Music" to describe academic contexts has consequently diminished. 
+As Eric Lyon observed nearly two decades ago, it is now nearly impossible to imagine a situation in which computers are not involved at any stage from the production to experience of music[@lyon_we_2006, p1]. The necessity of using the term "Computer Music" to describe academic contexts has consequently diminished.

Holbrook and Rudi extended Lyon's discussion by proposing the use of frameworks like Post-Acousmatic[@adkins2016] to redefine "Computer Music." Their approach incorporates the tradition of pre-computer experimental/electronic music, situating it as part of the broader continuum of technology-based or technology-driven music[@holbrook2022].

@@ -30,9 +30,9 @@ Ultimately, the paper concludes that programming languages for music developed s

## PCM and Early Computer Music

-The MUSIC I (1957) in Bell Labs[@Mathews1980] and succeeding MUSIC-N family are highlighted as the earliest examples of computer music research. However, attempts to create music with computers in the UK and Australia prior to MUSIC have also been documented[@doornbusch2017]. Organizing what was achieved by MUSIC-N and earlier efforts can help clarify definitions of computer music.
+The MUSIC I (1957) in Bell Labs[@Mathews1980] and succeeding MUSIC-N family are highlighted as the earliest examples of computer music research. However, attempts to create music with computers in the UK and Australia prior to MUSIC have also been documented[@doornbusch2017].

-The earliest experiments with sound generation on computers in the 1950s involved controlling the intervals between one-bit pulses (on or off) to control pitch. This was partly because the operational clock frequencies of early computers fell within the audible range, making the sonification of electrical signals a practical and cost-effective debugging method compared to visualizing them on displays or oscilloscopes.
+The earliest experiments with sound generation on computers in the 1950s involved controlling the intervals between one-bit pulses to control pitch. This was partly because the operational clock frequencies of early computers fell within the audible range, making the sonification of electrical signals a practical and cost-effective debugging method compared to visualizing them on displays or oscilloscopes.

For instance, Louis Wilson, who was an engineer of the BINAC in the UK, noticed that an AM radio placed near the computer could pick up weak electromagnetic waves generated during the switching of vacuum tubes, producing sounds. He leveraged this phenomenon by connecting a speaker and a power amplifier to the computer's circuit to assist with debugging. Frances Elizabeth Holberton took this a step further by programming the computer to generate pulses at desired intervals, creating melodies in 1949[@woltman1990].
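A minimal sketch of the pulse-interval technique described above, assuming a modern 44.1 kHz, 16-bit PCM file stands in for a machine's raw one-bit pulse output (the sample rate, pitches, and file name here are illustrative choices, not drawn from the sources):

~~~python
import struct
import wave

SR = 44100  # stand-in sample rate; the early machines had no DAC at all


def pulse_train(freq_hz: float, dur_s: float) -> list[float]:
    """Emit a single one-bit pulse every 1/freq_hz seconds: pitch is
    carried entirely by the interval between pulses, not their shape."""
    period = max(1, int(SR / freq_hz))  # samples between pulses
    return [1.0 if i % period == 0 else 0.0 for i in range(int(SR * dur_s))]


# A small melody, one pitch per note, in the spirit of Holberton's 1949 trick.
samples = [s for f in (262.0, 294.0, 330.0, 392.0) for s in pulse_train(f, 0.3)]

with wave.open("pulses.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)  # 16-bit PCM
    w.setframerate(SR)
    w.writeframes(b"".join(struct.pack("<h", int(s * 20000)) for s in samples))
~~~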
@@ -56,7 +56,7 @@ Building on this, Samson developed a program called the Harmony Compiler for the

### Acousmatic Listening, the premise of the Universality of PCM

-One of the reasons why MUSIC led to subsequent advancements in research was not simply that it was developed early, but because it was the first to implement, but because it was the first to implement sound representation on a computer based on **pulse-code modulation (PCM)**, which theoretically can generate "almost any sound"[@mathews1963,p557]
+One of the reasons why MUSIC led to subsequent advancements in research was not simply that it was developed early, but because it was the first to implement sound representation on a computer based on **pulse-code modulation (PCM)**, which theoretically can generate "almost any sound".

PCM, the foundational digital sound representation today, involves sampling audio waveforms at discrete intervals and quantizing the sound pressure at each interval as discrete numerical values.

@@ -70,17 +70,17 @@ However, as sound studies scholar Jonathan Sterne has pointed out, discourses su



The claim that PCM-based sound synthesis can produce "almost any sound" is underpinned by an ideology associated with sound reproduction technologies. This ideology assumes that recorded sound contains an "original" source and that listeners can distinguish distortions or noise from it. Sampling theory builds on this premise through Shannon's information theory by statistically modeling human auditory characteristics: it assumes that humans cannot discern volume differences below certain thresholds or perceive vibrations outside specific frequency ranges. By limiting representation to the recognizable range, sampling theory ensures that all audible sounds can be effectively encoded.

Incidentally, the actual implementation of PCM in MUSIC I only allowed for monophonic triangle waves with controllable volume, pitch, and timing[@Mathews1980]. Would anyone today describe such a system as capable of producing "almost any sound"?

-Even when considering more contemporary applications, processes like ring modulation (RM), amplitude modulation (AM), or distortion often generate aliasing artifacts unless proper oversampling is applied. These artifacts occur because PCM, while universally suitable for reproducing recorded sound, is not inherently versatile as a medium for generating new sounds. As Puckette has argued, alternative representations, such as collections of linear segments or physical modeling synthesis, offer other possibilities[@puckette2015]. Therefore, PCM is not a completely universal tool for creating sound.
+Even when considering more contemporary applications, processes like ring modulation (RM), amplitude modulation (AM), or distortion often generate aliasing artifacts unless proper oversampling is applied. These artifacts occur because PCM, while universally suitable for reproducing recorded sound, is not inherently versatile as a medium for generating new sounds. As Puckette has argued, alternative representations, such as sequences of linear segments or physical modeling synthesis, offer other possibilities[@puckette2015]. Therefore, PCM is not a completely universal tool for creating sound.

## What Does the Unit Generator Hide?

-Beginning with version III, MUSIC took the form of an acoustic compiler (block diagram compiler) that processes two input sources: a score language, which represents a list of time-varying parameters, and an orchestra language, which describes the connections between **Unit Generators** such as oscillators and filters. In this paper, the term "Unit Generator"refers to a signal processing module whose implementation is either not open or written in a language different from the one used by the user.
+Beginning with version III, MUSIC took the form of a block diagram compiler that processes two input sources: a score language, which represents a list of time-varying parameters, and an orchestra language, which describes the connections between **Unit Generators** such as oscillators and filters. In this paper, the term "Unit Generator" refers to a signal processing module whose implementation is either not open or written in a language different from the one used by the user.

-The MUSIC family, in the context of computer music research, achieved success for performing sound synthesis based on PCM but this success came with the establishment of a division of labor between professional musicians and computer engineers through the development of domain-specific languages. Mathews explained that he developed a compiler for MUSIC III in response to requests from many composers for additional features in MUSIC II, such as envelopes and vibrato, while also ensuring that the program would not be restricted to a specialized form of musical expression (Max V. Mathews 2007, 13:10-17:50). He repeatedly stated that his role was that of a scientist rather than a musician:
+The MUSIC family, in the context of computer music research, achieved success in performing sound synthesis based on PCM, but this success came with the establishment of a division of labor between professional musicians and computer engineers through the development of domain-specific languages. Mathews explained that he developed a compiler for MUSIC III in response to requests from many composers for additional features in MUSIC II, such as envelopes and vibrato, while also ensuring that the program would not be restricted to a specialized form of musical expression[@mathews_max_2007, 13:10-17:50]. He repeatedly stated that his role was that of a scientist rather than a musician:



@@ -92,7 +92,7 @@ This clear delineation of roles between musicians and scientists became one of t

Although the MUSIC N series shares a common workflow of using a score language and an orchestra language, the actual implementation of each programming language varies significantly, even within the series.

-One notable but often overlooked example is MUSIGOL, a derivative of MUSIC IV [@innis_sound_1968]. 
In MUSIGOL, not only was the system itself but even the score and orchestra defined by user were written entirely as ALGOL 60 language. Similar to today's Processing or Arduino, MUSIGOL is one of the earliest examples of a programming language for music implemented as an internal DSL (DSL as a library)[^mus10]. (Therefore, according to the definition of Unit Generator provided in this paper, MUSIGOL does not qualify as a language that uses Unit Generators.)
+One notable but often overlooked example is MUSIGOL, a derivative of MUSIC IV [@innis_sound_1968]. In MUSIGOL, not only the system itself but even the user-defined score and orchestra were written entirely in ALGOL 60. Similar to today's Processing or Arduino, MUSIGOL is one of the earliest internal domain-specific languages (DSLs) for music, i.e., a DSL implemented as a library[^mus10]. (Therefore, according to the definition of Unit Generator provided in this paper, MUSIGOL does not qualify as a language that uses Unit Generators.)

[^mus10]: While MUS10, used at Stanford University, was not an internal DSL, it was created by modifying an existing ALGOL parser [@loy1985, p.248].

@@ -104,7 +104,7 @@ In MUSIC V, this band-pass filter can be used as shown in Listing \ref{lst:musicv}[^musicv]

[^musicv]: It is said that a preprocessing feature called `CONVT` could be used to transform frequency characteristics into coefficients [@mathews_technology_1969, p77].

-On the other hand, in later MUSIC 11, and its successor CSound by Barry Vercoe, the band-pass filter is defined as a Unit Generator (UGen) named `reson`. This UGen takes four parameters: the input signal, center cutoff frequency, bandwidth, and Q factor[@vercoe_computer_1983, p248]. Unlike previous implementations, users no longer need to calculate coefficients manually, nor do they need to be aware of the two-sample memory space. However, in MUSIC 11 and CSound, it is possible to implement this band-pass filter from scratch as a User-Defined Opcode (UDO) as shownin Listing \ref{lst:reson}. Vercoe emphasized that while signal processing primitives should allow for low-level operations, such as single-sample feedback, and eliminate black boxes, it is equally important to provide high-level modules that avoid unnecessary complexity ("avoid the clutter") when users do not need to understand the internal details [@vercoe_computer_1983, p.247].
+On the other hand, in the later MUSIC 11 and its successor Csound by Barry Vercoe, the band-pass filter is defined as a Unit Generator (UGen) named `reson`. This UGen takes four parameters: the input signal, center cutoff frequency, bandwidth, and Q factor[@vercoe_computer_1983, p248]. Unlike previous implementations, users no longer need to calculate coefficients manually, nor do they need to be aware of the two-sample memory space. However, in MUSIC 11 and Csound, it is possible to implement this band-pass filter from scratch as a User-Defined Opcode (UDO) as shown in Listing \ref{lst:reson}. Vercoe emphasized that while signal processing primitives should allow for low-level operations, such as single-sample feedback, and eliminate black boxes, it is equally important to provide high-level modules that avoid unnecessary complexity ("avoid the clutter") when users do not need to understand the internal details [@vercoe_computer_1983, p.247].
~~~{#lst:musicv caption="Example of the use of FLT UGen in MUSIC V."}
FLT I1 O I2 I3 Pi Pj;
~~~



a1 reson a1,p5,p6,1



On the other hand, in succeeding environments that inherit the Unit Generator paradigm, such as Pure Data [@puckette_pure_1997], Max (whose signal processing functionalities were ported from Pure Data as MSP), SuperCollider [@mccartney_supercollider_1996], and ChucK [@wang_chuck_2015], primitive UGens are implemented in general-purpose languages like C or C++[^chugen]. If users wish to define low-level UGens (called external objects in Max and Pd), they need to set up a development environment for C or C++.

-[^chugen]: ChucK later introduced ChuGen, which is similar extension to CSound's UDO, allowing users to define UGens within the ChucK language itself [@Salazar2012]. However, not all existing UGens are replaced by UDOs by default both in CSound and ChucK, which remain supplemental features possibly because the runtime performance of UDO is inferior to natively implemented UGens.
+[^chugen]: ChucK later introduced ChuGen, which is a similar extension to Csound's UDO, allowing users to define UGens within the ChucK language itself [@Salazar2012]. However, not all existing UGens are replaced by UDOs by default both in Csound and ChucK, where UDOs remain supplemental features, possibly because the runtime performance of UDO is inferior to natively implemented UGens.

When UGens are implemented in low-level languages like C, even if the implementation is open-source, the division of knowledge effectively forces users (composers) to treat UGens as black boxes. This reliance on UGens as black boxes reflects and deepens the division of labor between musicians and scientists that was established in MUSIC though it can be interpreted as both a cause and a result.

@@ -146,7 +146,7 @@ However, such divisions are not necessarily the result of differences in values

This argument can be extended beyond electronic music to encompass computer-based music in general. For example, media researcher Lori Emerson noted that while the proliferation of personal computers began with the vision of a "metamedium"—tools that users could modify themselves, as exemplified by Xerox PARC's Dynabook—the vision was ultimately realized in an incomplete form through devices like the Macintosh and iPad, which distanced users from programming by black-boxing functionality [@emerson2014]. In fact, Alan Kay, the architect behind the Dynabook concept, remarked that while the iPad's appearance may resemble the ideal he originally envisioned, its lack of extensibility through programming renders it merely a device for media consumption [@kay2019].

-Although programming environments as tools for music production are not widely used, the UGen concept serves as a premise for today's popular music production software and infrastructure, such as audio plugin formats for DAW softwares and WebAudio. It is known that the concept of Unit Generators emerged either simultaneously with or even slightly before modular synthesizers [@park_interview_2009, p.20]. However, UGen-based languages have actively incorporated metaphors from modular synthesizers for their user interfaces, as Vercoe noted that the distinction between "ar" (audio-rate) and "kr" (control-rate) processing introduced in MUSIC 11 is said to have been inspired by Buchla's distinction in plug types [@vercoe_barry_2012, 1:01:38–1:04:04].
+Although programming environments as tools for music production are not widely used, the UGen concept serves as a premise for today's popular music production software and infrastructure, such as audio plugin formats for DAW software or WebAudio. It is known that the concept of Unit Generators emerged either simultaneously with or even slightly before modular synthesizers [@park_interview_2009, p.20]. However, UGen-based languages have actively incorporated metaphors from modular synthesizers for their user interfaces, as Vercoe noted that the distinction between "ar" (audio-rate) and "kr" (control-rate) processing introduced in MUSIC 11 is said to have been inspired by Buchla's distinction in plug types [@vercoe_barry_2012, 1:01:38–1:04:04].

However, adopting visual metaphors comes with the limitation that it constrains the complexity of representation to what is visually conceivable. In languages with visual patching interfaces like Max and Pure Data, meta-operations on UGens are often restricted to simple tasks, such as parallel duplication. Consequently, even users of Max or Pure Data may not necessarily be engaging in forms of expressions that are only possible with computers. Instead, many might simply be using these tools as the most convenient software equivalents of modular synthesizers.

@@ -168,7 +168,7 @@ The expressive power of general-purpose languages and compiler infrastructures l

Languages like **Kronos** [@norilo2015] and **mimium** [@matsuura_mimium_2021], which are based on the more general computational model of lambda calculus, focus on PCM-based signal processing while exploring interactive meta-operations on programs [@Norilo2016] and balancing self-contained semantics with interoperability with other general-purpose languages [@matsuura_lambda-mmm_2024].

-Domain-specific languages (DSLs) are constructed within a double bind: they aim to specialize in a particular purpose while still providing a certain degree of expressive freedom through coding. In this context, efforts like Extempore, Kronos, and mimium are not merely programming languages for music but are also situated within the broader research context of functional reactive programming (FRP), which focuses on representing time-varying values in computation. Most computing models lack an inherent concept of real-time and instead operates based on discrete computational steps. Similarly, low-level general-purpose programming languages do not natively include primitives for real-time concepts. Consequently, the exploration of computational models tied to time —a domain inseparable from music— remains vital and has the potential to contribute to the theoretical foundations of general-purpose programming languages.
+DSLs are constructed within a double bind: they aim to specialize in a particular purpose while still providing a certain degree of expressive freedom through coding. In this context, efforts like Extempore, Kronos, and mimium are not merely programming languages for music but are also situated within the broader research context of functional reactive programming (FRP), which focuses on representing time-varying values in computation. Most computing models lack an inherent concept of real-time and instead operate based on discrete computational steps. Similarly, low-level general-purpose programming languages do not natively include primitives for real-time concepts. 
Consequently, the exploration of computational models tied to time—a domain inseparable from music—remains vital and has the potential to contribute to the theoretical foundations of general-purpose programming languages.

However, strongly formalized languages come with another trade-off. While they allow UGens to be defined without black-boxing, understanding the design and implementation of these languages often requires expert knowledge. This can create a deeper division between language developers and users, in contrast to the many but small and shallow divisions seen in the multi-language paradigm, like SuperCollider developers, external UGen developers, client language developers (e.g., TidalCycles), SuperCollider users, and client language users.
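A minimal sketch of the "time-varying value" idea discussed above, assuming plain Python rather than any of the surveyed languages (the `Signal` alias and the helper names are invented for this illustration): time exists only as a discrete step counter, and the only "clock" is the loop that evaluates a signal once per sample.

~~~python
import math
from typing import Callable

SR = 48000                       # steps per second of "time"
Signal = Callable[[int], float]  # a time-varying value, indexed by sample step


def sine(freq_hz: float) -> Signal:
    # Time appears only as the integer step n; there is no real-time primitive.
    return lambda n: math.sin(2.0 * math.pi * freq_hz * n / SR)


def gain(sig: Signal, amp: float) -> Signal:
    # Meta-operations on signals are ordinary higher-order functions,
    # in the spirit of the lambda-calculus-based designs mentioned above.
    return lambda n: amp * sig(n)


def mix(a: Signal, b: Signal) -> Signal:
    return lambda n: a(n) + b(n)


out = gain(mix(sine(440.0), sine(660.0)), 0.25)
block = [out(n) for n in range(64)]  # one 64-step "audio block"
~~~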