> When we first made these music programs the original users were not composers; they were the psychologist Guttman, John Pierce, and myself, who are fundamentally scientists. We wanted to have musicians try the system to see if they could learn the language and express themselves with it. So we looked for adventurous musicians and composers who were willing to experiment. [@Mathews1980, p.17]
This clear delineation of roles between musicians and scientists became one of the defining characteristics of post-MUSIC computer music research. Paradoxically, while computer music research aimed to create sounds never heard before, it also paved the way for further research by allowing musicians to focus on composition without having to understand the cumbersome work of programming.
### Example: Hiding Internal State Variables in Signal Processing
Although the MUSIC N series shares a common workflow of using a score language and an orchestra language, the actual implementation of each programming language varies significantly, even within the series.
One notable but often overlooked example is MUSIGOL, a derivative of MUSIC IV [@innis_sound_1968]. In MUSIGOL, not only the system itself but also the user-defined score and orchestra were written entirely in ALGOL 60. Like today's Processing or Arduino, MUSIGOL is one of the earliest examples of a programming language for music implemented as an internal DSL (a DSL as a library)[^mus10]. (Therefore, according to the definition of Unit Generator provided in this paper, MUSIGOL does not qualify as a language that uses Unit Generators.)
[^mus10]: While MUS10, used at Stanford University, was not an internal DSL, it was created by modifying an existing ALGOL parser [@loy1985, p.248].
The level of abstraction deemed intuitive for musicians varied across different iterations of the MUSIC N series. This can be illustrated by examining the description of a second-order band-pass filter. The filter mixes the current input signal $S_n$, scaled by an amplitude parameter $I_1$, with the output signals from one and two time steps prior, $O_{n-1}$ and $O_{n-2}$, weighted by the feedback coefficients $I_2$ and $I_3$, as shown in the following equation:
$$O_n = I_1 \cdot S_n + I_2 \cdot O_{n-1} - I_3 \cdot O_{n-2}$$
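Written out in a general-purpose language, the same recurrence makes the hidden state explicit. The following is a minimal C sketch with variable names of our own choosing (it is not code from any MUSIC-family system): computing one output sample requires carrying the two most recent output samples from call to call.

~~~{.c}
/* Internal state of the second-order filter: the two most recent outputs. */
typedef struct {
    float o1; /* O_{n-1} */
    float o2; /* O_{n-2} */
} filter_state;

/* Compute one output sample O_n = I1*S_n + I2*O_{n-1} - I3*O_{n-2}. */
static float filter_step(filter_state *st, float s_n,
                         float i1, float i2, float i3)
{
    float o_n = i1 * s_n + i2 * st->o1 - i3 * st->o2;
    st->o2 = st->o1; /* shift the one-sample delay line */
    st->o1 = o_n;
    return o_n;
}
~~~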
In MUSIC V, this band-pass filter can be used as shown in Listing \ref{lst:musicv} [@mathews_technology_1969, p.78]. Here, `I1` represents the input bus, and `O` is the output bus. The parameters `I2` and `I3` correspond to the coefficients $I_2$ and $I_3$ normalized by $I_1$ (as a result, the overall gain of the filter can be greater or less than 1). The parameters `Pi` and `Pj` are normally used to receive note parameters from the Score, chosen from the available `P0` to `P30`; in this case, however, they are repurposed as general-purpose memory to temporarily store the feedback signals. Similarly, other Unit Generators, such as oscillators, reuse note parameters to handle operations like phase accumulation. As a result, users needed to manually calculate feedback gains based on the desired frequency characteristics[^musicv], and they also had to account for using at least two memory slots for past samples.
[^musicv]: It is said that a preprocessing feature called `CONVT` could be used to transform frequency characteristics into coefficients [@mathews_technology_1969, p.77].
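To give a sense of what that calculation involves, the textbook design formulas for a two-pole resonator derive the feedback coefficients from a desired center frequency and bandwidth. The following C sketch illustrates this standard derivation; it is an approximation for explanatory purposes and not necessarily the exact procedure used in MUSIC V or its `CONVT` routine.

~~~{.c}
#include <math.h>

/* Textbook two-pole resonator design: derive the feedback coefficients
   I2 and I3 from a center frequency fc, a bandwidth bw, and the sample
   rate fs (all in Hz). Sketch only; not the MUSIC V implementation. */
static void resonator_coeffs(double fc, double bw, double fs,
                             double *i2, double *i3)
{
    const double pi = 3.14159265358979323846;
    double r = exp(-pi * bw / fs);           /* pole radius from the bandwidth */
    *i2 = 2.0 * r * cos(2.0 * pi * fc / fs); /* I2 = 2R cos(2*pi*fc/fs) */
    *i3 = r * r;                             /* I3 = R^2 */
}
~~~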
On the other hand, in the later MUSIC 11 and its successor CSound, both by Barry Vercoe, the band-pass filter is defined as a Unit Generator (UGen) named `reson`. This UGen takes four parameters: the input signal, center frequency, bandwidth, and Q factor [@vercoe_computer_1983, p.248]. Unlike previous implementations, users no longer need to calculate coefficients manually, nor do they need to be aware of the two-sample memory space. However, in MUSIC 11 and CSound, it is also possible to implement this band-pass filter from scratch as a User-Defined Opcode (UDO), as shown in Listing \ref{lst:reson}. Vercoe emphasized that while signal processing primitives should allow for low-level operations, such as single-sample feedback, and eliminate black boxes, it is equally important to provide high-level modules that avoid unnecessary complexity ("avoid the clutter") when users do not need to understand the internal details [@vercoe_computer_1983, p.247].
~~~{#lst:musicv caption="Example of the use of FLT UGen in MUSIC V."}
FLT I1 O I2 I3 Pi Pj;
~~~
~~~{#lst:reson caption="Example of a from-scratch implementation and of the built-in RESON UGen, respectively, in MUSIC 11. Retrieved from the original paper. (Comments are omitted due to space restrictions.)"}
instr 1
la1 init 0
la2 init 0
...
a1 reson a1,p5,p6,1
endin
~~~
On the other hand, in succeeding environments that inherit the Unit Generator paradigm, such as Pure Data [@puckette_pure_1997], Max (whose signal processing functionalities were ported from Pure Data as MSP), SuperCollider [@mccartney_supercollider_1996], and ChucK [@wang_chuck_2015], primitive UGens are implemented in general-purpose languages like C or C++[^chugen]. If users wish to define low-level UGens (called external objects in Max and Pd), they need to set up a development environment for C or C++.
[^chugen]: ChucK later introduced ChuGen, an extension similar to CSound’s UDO that allows users to define UGens within the ChucK language itself [@Salazar2012]. However, in both CSound and ChucK, the existing UGens are not replaced by such user-defined ones by default; they remain supplemental features, possibly because the runtime performance of a UDO is inferior to that of natively implemented UGens.
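To illustrate the kind of knowledge this requires, the following is a minimal sketch of a hypothetical `mygain~` signal object written against Pure Data's publicly documented `m_pd.h` API; a real external would add argument checking, a build setup, and installation steps, but even this skeleton presupposes familiarity with C, pointers, and the host's DSP scheduling conventions.

~~~{.c}
#include "m_pd.h"

static t_class *mygain_class;

typedef struct _mygain {
    t_object x_obj;
    t_float x_f;    /* scalar fallback for the main signal inlet */
    t_float x_gain; /* gain factor, set by the creation argument */
} t_mygain;

/* Perform routine: called once per signal block by Pd's DSP chain. */
static t_int *mygain_perform(t_int *w)
{
    t_mygain *x = (t_mygain *)(w[1]);
    t_sample *in = (t_sample *)(w[2]);
    t_sample *out = (t_sample *)(w[3]);
    int n = (int)(w[4]);
    while (n--) *out++ = *in++ * x->x_gain;
    return (w + 5);
}

/* Called when the DSP graph is (re)built: register the perform routine. */
static void mygain_dsp(t_mygain *x, t_signal **sp)
{
    dsp_add(mygain_perform, 4, x, sp[0]->s_vec, sp[1]->s_vec, (t_int)sp[0]->s_n);
}

/* Constructor: one float creation argument sets the gain. */
static void *mygain_new(t_floatarg g)
{
    t_mygain *x = (t_mygain *)pd_new(mygain_class);
    x->x_gain = g;
    outlet_new(&x->x_obj, &s_signal); /* one signal outlet */
    return (void *)x;
}

void mygain_tilde_setup(void)
{
    mygain_class = class_new(gensym("mygain~"), (t_newmethod)mygain_new, 0,
                             sizeof(t_mygain), CLASS_DEFAULT, A_DEFFLOAT, 0);
    class_addmethod(mygain_class, (t_method)mygain_dsp, gensym("dsp"), A_CANT, 0);
    CLASS_MAINSIGNALIN(mygain_class, t_mygain, x_f);
}
~~~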
When UGens are implemented in low-level languages like C, even if the implementation is open-source, the division of knowledge effectively forces users (composers) to treat UGens as black boxes. This reliance on UGens as black boxes reflects and deepens the division of labor between musicians and scientists that was established in MUSIC, though it can be interpreted as both a cause and a result of that division.
For example, Puckette, the developer of Max and Pure Data, noted that the division of labor at IRCAM between Researchers, Musical Assistants (Realizers), and Composers has parallels in the current Max ecosystem, where roles are divided among the Max developers themselves, developers of external objects, and Max users [@puckette_47_2020]. As described in the ethnography of 1980s IRCAM by anthropologist Georgina Born, the division of labor between fundamental research scientists and composers at IRCAM was extremely clear. This structure was also tied to the exclusion of popular music and its associated technologies from IRCAM’s research focus [@Born1995].
However, such divisions are not necessarily the result of differences in values along the axes analyzed by Born, such as modernist/postmodernist/populist or low-tech/high-tech distinctions[^wessel]. This is because the black-boxing of technology through the division of knowledge occurs in popular music as well. Paul Théberge pointed out that the "democratization" of synthesizers in the 1980s was achieved through the concealment of technology, which transformed musicians from creators into consumers.
[^wessel]: David Wessel revealed that the individual referred to as RIG in Born’s ethnography was himself, and commented that Born oversimplified her portrayal of Pierre Boulez, then director of IRCAM, as a modernist [@taylor_article_1999].
> Lacking adequate knowledge of the technical system, musicians increasingly found themselves drawn to prefabricated programs as a source of new sound material. (...) it also suggests a reconceptualization on the part of the industry of the musician as a particular type of consumer. [@theberge_any_1997, p.89]
This argument can be extended beyond electronic music to encompass computer-based music in general. For example, media researcher Lori Emerson noted that while the proliferation of personal computers began with the vision of a "metamedium"—tools that users could modify themselves, as exemplified by Xerox PARC's Dynabook—the vision was ultimately realized in an incomplete form through devices like the Macintosh and iPad, which distanced users from programming by black-boxing functionality [@emerson2014]. In fact, Alan Kay, the architect behind the Dynabook concept, remarked that while the iPad's appearance may resemble the ideal he originally envisioned, its lack of extensibility through programming renders it merely a device for media consumption [@kay2019].
Although programming environments are not widely used as tools for music production, the UGen concept serves as a premise for today's popular music production software and infrastructure, such as audio plugin formats for DAWs and WebAudio. It is known that the concept of Unit Generators emerged either simultaneously with or even slightly before modular synthesizers [@park_interview_2009, p.20]. Nevertheless, UGen-based languages have actively incorporated metaphors from modular synthesizers into their user interfaces; for example, Vercoe noted that the distinction between "ar" (audio-rate) and "kr" (control-rate) processing introduced in MUSIC 11 was inspired by the distinction between Buchla's plug types [@vercoe_barry_2012, 1:01:38–1:04:04].
However, adopting visual metaphors comes with the limitation that it constrains the complexity of representation to what is visually conceivable. In languages with visual patching interfaces like Max and Pure Data, meta-operations on UGens are often restricted to simple tasks, such as parallel duplication. Consequently, even users of Max or Pure Data may not necessarily be engaging in forms of expression that are only possible with computers. Instead, many might simply be using these tools as the most convenient software equivalents of modular synthesizers.
## Context of Programming Languages for Music After 2000
Based on the discussion thus far, music programming languages developed after the 2000s can be categorized into two distinct directions: those that narrow the scope of the language's role by introducing alternative abstractions at a higher level, distinct from the UGen paradigm, and those that expand the general-purpose capabilities of the language, reducing black-boxing.
Languages that pursued alternative higher-level abstractions have evolved alongside the culture of live coding, where performances are conducted by rewriting code in real time. The activities of the live coding community, including groups such as TOPLAP since the 2000s, were not only about turning coding itself into a performance; they also served as a form of resistance against laptop performances that relied on black-boxed music software. This is evident in the community's manifesto, which states, "Obscurantism is dangerous" [@toplap_manifestodraft_2004].
Languages implemented as clients for SuperCollider, such as **IXI** (on Ruby) [@Magnusson2011], **Sonic Pi** (on Ruby), **Overtone** (on Clojure) [@Aaron2013], **TidalCycles** (on Haskell) [@McLean2014], and **FoxDot** (on Python) [@kirkbride2016foxdot], leverage the expressive power of more general-purpose programming languages. While embracing the UGen paradigm, they enable high-level abstractions for previously difficult-to-express elements like note values and rhythm. For example, the abstraction of patterns in TidalCycles is not limited to music but can also be applied to visual patterns and other outputs, meaning it is not inherently tied to PCM-based waveform output as the final result.
On the other hand, due to their high-level design, these languages often rely on ad hoc implementations for tasks like sound manipulation and low-level signal processing, such as effects. McCartney, the developer of SuperCollider, stated that if general-purpose programming languages were sufficiently expressive, there would be no need to create specialized languages [@McCartney2002]. This prediction appears reasonable when considering examples like MUSIGOL. However, in practice, scripting languages that excel in dynamic program modification face challenges in modern preemptive OS environments. For instance, dynamic memory management techniques such as garbage collection can hinder the deterministic execution timing required for real-time processing [@Dannenberg2005].
Historically, programming languages like FORTRAN or C served as a portable way of implementing programs across different architectures. However, with the proliferation of higher-level languages, programming in C or C++ has come to feel relatively more difficult, much as assembly language did in earlier times. Furthermore, considering the challenges of portability not only across different CPUs but also across diverse host environments such as operating systems and the Web, these languages are no longer as portable as they once were. Consequently, internal DSLs for music that include signal processing have become exceedingly rare, with only a few examples such as LuaAV [@wakefield2010].
Instead, an approach has emerged of building, from the ground up, general-purpose languages specifically designed for use in music. One prominent example is **Extempore**, a live programming environment developed by Sorensen [@sorensen_extempore_2018]. Extempore consists of Scheme, a LISP-based language, and xtlang, a meta-implementation built on top of Scheme. While xtlang requires users to write hardware-oriented type signatures similar to those in C, it leverages the LLVM compiler infrastructure [@Lattner] to just-in-time (JIT) compile signal processing code, including sound manipulation, into machine code for high-speed execution.
The expressive power of general-purpose languages and compiler infrastructures like LLVM has given rise to an approach focused on designing languages with mathematical formalization that reduces black-boxing. **Faust** [@Orlarey2009], for instance, is a language that retains a graph-based structure akin to UGens but is built on a formal system called Block Diagram Algebra. Thanks to its formalization, Faust can be transpiled into various low-level languages such as C, C++, or Rust and can also be used as external objects in Max or Pure Data.
Languages like **Kronos** [@norilo2015] and **mimium** [@matsuura_mimium_2021], which are based on the more general computational model of lambda calculus, focus on PCM-based signal processing while exploring interactive meta-operations on programs [@Norilo2016] and balancing self-contained semantics with interoperability with other general-purpose languages [@matsuura_lambda-mmm_2024].
Domain-specific languages (DSLs) are constructed within a double bind: they aim to specialize in a particular purpose while still providing a certain degree of expressive freedom through coding. In this context, efforts like Extempore, Kronos, and mimium are not merely programming languages for music but are also situated within the broader research context of functional reactive programming (FRP), which focuses on representing time-varying values in computation. Most computing models lack an inherent concept of real time and instead operate in discrete computational steps. Similarly, low-level general-purpose programming languages do not natively include primitives for real-time concepts. Consequently, the exploration of computational models tied to time—a domain inseparable from music—remains vital and has the potential to contribute to the theoretical foundations of general-purpose programming languages.
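As a rough illustration of this difference, the sketch below (in C, with names of our own choosing and unrelated to any of the languages above) models a time-varying value as a function over continuous time, the core idea that FRP formalizes; an ordinary runtime can only ever sample such a value at discrete instants.

~~~{.c}
#include <math.h>

/* A "behavior" in the FRP sense: a value defined for every instant of
   continuous time, independent of block sizes or callback schedules. */
typedef double (*behavior)(double t_seconds);

/* A 440 Hz sine tone expressed as a behavior over continuous time. */
static double sine440(double t_seconds)
{
    return sin(2.0 * 3.14159265358979323846 * 440.0 * t_seconds);
}

/* A discrete-step runtime can only sample a behavior at chosen instants,
   for example once per audio sample. */
static void render(behavior b, double sample_rate, double *out, int nframes)
{
    for (int n = 0; n < nframes; n++)
        out[n] = b((double)n / sample_rate);
}
~~~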
However, strongly formalized languages come with another trade-off. While they allow UGens to be defined without black-boxing, understanding the design and implementation of these languages often requires expert knowledge. This can create a deeper division between language developers and users, in contrast to the many small and shallow divisions seen in the multi-language paradigm, such as those among SuperCollider developers, external UGen developers, client language developers (e.g., of TidalCycles), SuperCollider users, and client language users.
Although there is no clear solution to this trade-off, one intriguing idea is the development of self-hosting languages for music—that is, languages whose compilers are written in the language itself. At first glance, this may seem impractical. However, by enabling users to learn and modify the language's mechanisms spontaneously, this approach could create an environment that fosters deeper engagement and understanding among users.
## Conclusion
This paper has reexamined the history of computer music and music programming languages with a focus on the universalism of PCM and the black-boxing tendencies of the Unit Generator paradigm. Historically, it was expected that the clear division of roles between engineers and composers would enable the creation of new forms of expression using computers. Indeed, from the perspective of Post-Acousmatic discourse, some, such as Holbrook and Rudi, still consider this division to be a positive development:
> Most newer tools abstract the signal processing routines and variables, making them easier to use while removing the need for understanding the underlying processes in order to create meaningful results. Composers no longer necessarily need mathematical and programming skills to use the technologies. [@holbrook2022, p.2]
However, this division of labor also creates a shared vocabulary (as exemplified in the Unit Generator by Mathews) and serves to perpetuate it. By portraying new technologies as something externally introduced, and by focusing on the agency of those who create music with computers, the individuals responsible for building programming environments, software, protocols, and formats are rendered invisible [@sterne_there_2014]. This leads to an oversight of the indirect power relationships produced by these infrastructures.
For this reason, future research on programming languages for music must address how the tools, including the languages themselves, contribute aesthetic value within musical culture (and what forms of musical practice they enable), as well as the social (im)balances of power they produce.
The academic value of research on programming languages for music is often vaguely asserted, using terms such as "general", "expressive", and "efficient". However, it is difficult to substantiate these claims now that processing speed is no longer the primary concern. Thus, as with the notion of Idiomaticity proposed by McPherson et al. [@McPherson2020], we need to develop and share a vocabulary for understanding the value judgments we make about music languages.
In a broader sense, the development of programming languages for music has also expanded to the individual level. Examples include **Gwion** by Astor, which is inspired by ChucK and enhances its abstraction capabilities with features such as lambda functions [@astor_gwion_2017]; **Vult**, a DSP transpiler language created by Ruiz for his modular synthesizer hardware [@Ruiz2020]; and **Glicol**, a UGen-based live coding environment designed for the Web [@lan_glicol_2020]. However, these efforts have not yet been incorporated into academic discourse.
Conversely, practical knowledge of past languages from the 1960s, as well as of real-time hardware-oriented systems from the 1980s, is gradually being lost. While research efforts such as *Inside Computer Music*, which analyzes historical works of computer music, have begun [@clarke_inside_2020], an archaeological practice focused on the construction of computer music systems themselves will also be necessary.