proofreading

2025-01-29 06:49:55 +00:00
parent 2aa6fde9d6
commit e0238ab50b

main.md

## Introduction
Programming languages and environments for music such as Max, Pure Data, CSound, and SuperCollider have been referred to as "Computer Music Language"[@McCartney2002;@Nishino2016;@McPherson2020], "Language for Computer Music"[@Dannenberg2018], and "Computer Music Programming Systems"[@Lazzarini2013], though there is no clear consensus on the use of these terms. However, as the shared term "Computer Music" implies, these programming languages are deeply intertwined with the history of technology-driven music, which developed under the premise that "almost any sound can be produced"[@mathews_acoustic_1961] through the use of computers.
In the early days, when computers were confined to research laboratories and neither displays nor mice existed, creating sound or music with computers was inevitably equivalent to programming. Today, however, it is unusual to produce sound on a computer by programming rather than by employing Digital Audio Workstation (DAW) software such as Pro Tools. In other words, the programming languages for music developed after the proliferation of personal computers are pieces of software that intentionally chose programming (whether textual or graphical) as their frontend for making sound.
Since the 1990s, the theoretical development of programming languages and the various constraints required for real-time audio processing have significantly increased the specialized knowledge necessary for developing programming languages for music today. Furthermore, some languages developed after the 2000s are not necessarily aimed at pursuing new forms of musical expression. It seems that there is still no unified perspective on how the value of such languages should be evaluated.
In this paper, a critical historical review is conducted by drawing on discussions from sound studies alongside existing surveys, aiming to consider programming languages for music independently of computer music as a specific genre.
### Use of the Term "Computer Music"
The term "Computer Music," despite its literal and potential broad meaning, has been noted as being used within a narrowly defined framework tied to specific styles or communities, as represented in Ostartag's *Why Computer Music Sucks*[@ostertag1998] since the 1990s.
The term "Computer Music," despite its literal and potentially broad meaning, has been noted for being used within a narrowly defined framework tied to specific styles or communities, as represented in Ostertag's *Why Computer Music Sucks*[@ostertag1998] since the 1990s.
As Lyon observed nearly two decades ago, it is now nearly impossible to imagine a situation in which computers are not involved at any stage from the production to the experience of music[@lyon_we_2006, p.1]. The necessity of using the term "Computer Music" in academic contexts has consequently diminished.
Holbrook and Rudi extended Lyon's discussion by proposing the use of frameworks like Post-Acousmatic[@adkins2016] to redefine "Computer Music." Their approach incorporates the tradition of pre-computer experimental/electronic music, situating it as part of the broader continuum of technology-based or technology-driven music[@holbrook2022].
While a strict definition of Post-Acousmatic music is deliberately left open, one of its key aspects is the expansion of music production from institutional settings to individuals, as well as the diversification of technological usage[@adkins2016, p.113]. However, while the Post-Acousmatic discourse integrates the historical fact that declining computer costs and increasing accessibility beyond laboratories have enabled diverse musical expressions, it still marginalizes much of the music that is "just using computers" and fails to provide insights into this divided landscape.
<!-- Lyon argues that defining computer music simply as music created with computers is too permissive, while defining it as music that could not exist without computers is too strict. He highlights the difficulty of considering instruments that use digital simulations, such as virtual analog synthesizers, within these definitions. Furthermore, --> Lyon argues that the term "computer music" is a style-agnostic definition, almost like "piano music," implying that it ignores the style and form of the music produced with the instrument.
However, one of the defining characteristics of computers as a medium lies in their ability to treat musical styles themselves as subjects of meta-manipulation through simulation and modeling. When creating instruments with computers, or when using such instruments, sound production involves programming—manipulating symbols embedded in a particular musical culture. This recursive embedding of the language and perception that constitute that musical culture into the resulting music is a process that goes beyond what is possible with acoustic instruments or analog electronic instruments. Magnusson refers to this characteristic of digital instruments as "Epistemic Tools" and points out that, through formalization, the computer serves to "create a snapshot of musical theory, freezing musical culture in time"[@Magnusson2009, p.173].
Today, many people use computers for music production not because they consciously leverage the uniqueness of the meta-medium, but simply because there are no quicker or more convenient alternatives available. Even so, within a musical culture where computers are used as a default or reluctant choice, musicians are inevitably influenced by underlying infrastructures such as software, protocols, and formats. As long as the history of programming languages for music remains intertwined with the history of computer music as it relates to specific genres or communities, it remains difficult to analyze music that is created with computers merely as a passive means.
In this paper, the history of programming languages for music is reexamined with an approach that, in contrast to Lyon, adopts a radically style-agnostic perspective. Rather than focusing on what has been created with these tools, the emphasis is placed on how the tools themselves have been constructed. The paper centers on the following two topics: 1. A critique of the universality of sound representation using pulse-code modulation (PCM), the foundational concept underlying most of today's sound programming, by referencing early attempts at sound generation using electronic computers. 2. An examination of the MUSIC-N family, the origin of PCM-based sound programming, to highlight that its design varies significantly across systems from the perspective of today's programming language design and that it has evolved over time into a black box, eliminating the need for users to understand its internal workings.
Ultimately, the paper concludes that programming languages for music developed since the 2000s are not solely aimed at creating new music but also serve as alternatives to the often-invisible technological infrastructures surrounding music, such as formats and protocols. By doing so, the paper proposes new perspectives for the historical study of music created with computers.
## PCM and Early Computer Music
MUSIC I (1957), developed at Bell Labs[@Mathews1980], and the succeeding MUSIC-N family are usually highlighted as the earliest examples of computer music research. However, attempts to create music with computers in the UK and Australia prior to MUSIC have also been documented[@doornbusch2017]. Organizing what was achieved by MUSIC-N and by these earlier efforts can help clarify definitions of computer music.
The earliest experiments with sound generation on computers in the 1950s involved controlling the intervals between one-bit pulses (on or off) to produce pitch. This was partly because the operational clock frequencies of early computers fell within the audible range, making the sonification of electrical signals a practical and cost-effective debugging method compared to visualizing them on displays or oscilloscopes.
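As a rough illustration of this pulse-interval mechanism, the following sketch (written in modern Python with an arbitrary sample rate; the function name and parameters are purely illustrative and do not correspond to any historical instruction set) produces pitch solely by choosing the spacing between single full-amplitude clicks:

```python
SR = 44100  # an assumed modern sample rate, used only to emulate the idea

def pulse_train(freq, dur):
    """One-bit, 'hoot'-like output: a single full-amplitude click every
    1/freq seconds and silence in between; pitch is set only by the spacing."""
    period = max(1, int(SR / freq))   # samples between consecutive pulses
    n = int(dur * SR)
    return [1.0 if i % period == 0 else 0.0 for i in range(n)]

# 440 clicks per second are heard as a (buzzy) A4.
samples = pulse_train(440.0, 0.5)
```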
For instance, Louis Wilson, an engineer on the BINAC, noticed that an AM radio placed near the computer could pick up the weak electromagnetic waves generated during the switching of vacuum tubes, producing audible sounds. He leveraged this phenomenon by connecting a speaker and a power amplifier to the computer's circuitry to assist with debugging. Frances Elizabeth Holberton took this a step further by programming the computer to generate pulses at desired intervals, creating melodies in 1949[@woltman1990].
Also, some computers at this time, such as the CSIR Mark I (CSIRAC) in Australia, had a primitive "hoot" instruction that emitted a single pulse to a speaker. Early sound generation using computers, including the BINAC and the CSIR Mark I, primarily involved playing melodies of existing music.
However, not all sound generation at this time merely involved the reproduction of existing music. Doornbusch highlights experiments on the Pilot ACE (the prototype of the Automatic Computing Engine) in the UK, which utilized acoustic delay line memory to produce unique sounds[@doornbusch2017, pp.303-304]. Acoustic delay line memory, used as the main memory in early computers such as the BINAC and the CSIR Mark I, employed the feedback of pulses traveling through mercury, via a speaker and microphone setup, to retain data. Donald Davies, an engineer on the ACE project, described the sounds it produced as follows[@davis_very_1994, pp.19-20]:
> The Ace Pilot Model and its successor, the Ace proper, were both capable of composing their own music and playing it on a little speaker built into the control desk. I say composing because no human had any intentional part in choosing the notes. The music was very interesting, though atonal, and began by playing rising arpeggios: these gradually became more complex and faster, like a developing fugue. They dissolved into colored noise as the complexity went beyond human understanding.
<!-- > Loops were always multiples of 32 microseconds long, so notes had frequencies which were submultiples of 31.25 KHz. The music was based on a very strange scale, which was nothing like equal tempered or harmonic, but was quite pleasant. -->
This music arose unintentionally during program optimization and was made possible by the "misuse" of switches installed for debugging the delay line memory. Media scholar Miyazaki described this practice of listening to the sounds generated by algorithms and their bit patterns, integrated into the act of programming, as "Algo*rhythmic* Listening"[@miyazaki2012].
Doornbusch warns against ignoring these early computer music practices simply because they did not directly influence subsequent research[@doornbusch2017, p.305]. Indeed, the sounds produced by the Pilot ACE challenge the post-acousmatic historical narrative, which suggests that computer music was democratized, moving from closed electro-acoustic music laboratories to individual musicians.
This is because the sounds generated by the Pilot ACE were not created by musical experts, nor were they solely intended for debugging purposes. Instead, they were programmed with the goal of producing interesting sounds. Moreover, these sounds were tied to the hardware of the acoustic delay line memory—a feature that was likely difficult to replicate, even in today's sound programming environments.
Similarly, in the 1960s at MIT, Peter Samson took advantage of the debugging speaker on the TX-0, a machine that had become outdated and was freely available for students to use. He conducted experiments in which he played melodies, such as Bach fugues, using the "hoot" instruction[@levy_hackers_2010]. Samson's experiments with the TX-0 later evolved, within MIT, into a program that allowed melodies to be described as text strings.
Building on this, Samson developed a program called the Harmony Compiler for the DEC PDP-1, a machine derived from the TX-0. The program gained significant popularity among MIT students. Around 1972, Samson began surveying the various digital synthesizers under development at the time and went on to create a system specialized for computer music. The resulting Samson Box was used at Stanford University's CCRMA (Center for Computer Research in Music and Acoustics) for over a decade, until the early 1990s, and became a tool for many composers to create their works[@loy_life_2013]. Considering his example, it is not appropriate to separate the early experiments in sound generation by computers from the history of computer music solely because their initial purpose was debugging.
### Acousmatic Listening, the Premise of the Universality of PCM
One of the reasons why MUSIC led to subsequent advancements in research was not simply that it was developed early, but that it was the first to implement sound representation on a computer based on **pulse-code modulation (PCM)**, which can theoretically generate "almost any sound"[@mathews1963, p.557].
PCM, the foundational digital sound representation today, involves sampling audio waveforms at discrete intervals and quantizing the sound pressure at each interval as discrete numerical values.
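As a minimal sketch of this definition (the sample rate, bit depth, and test signal below are arbitrary choices for illustration, not values prescribed by any of the systems discussed here), sampling and quantization can be written in a few lines of Python:

```python
import math

FS = 8000        # sampling rate in Hz (assumed for the example)
BITS = 8         # quantization depth in bits (assumed)
LEVELS = 2 ** (BITS - 1)

def pcm_encode(signal, dur):
    """Sample a continuous function signal(t) at discrete time intervals and
    quantize each sample to a signed integer code."""
    codes = []
    for n in range(int(dur * FS)):
        t = n / FS                           # sampling: discrete time instants
        q = round(signal(t) * (LEVELS - 1))  # quantization: discrete amplitudes
        codes.append(q)
    return codes

# A 440 Hz sine encoded as 8-bit PCM codes in the range [-127, 127].
codes = pcm_encode(lambda t: math.sin(2 * math.pi * 440 * t), 0.01)
```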
The issue with the universalism of PCM in the history of computer music is inherent in the concept of Acousmatic Listening, which serves as a premise for Post-Acousmatic. Acousmatic, introduced by Peignot as a listening style for tape music such as musique concrète and later theorized by Schaeffer[@adkins2016, p.106], refers to a mode of listening in which the listener refrains from imagining a specific sound source. This concept has been widely applied in theories of listening to recorded sound, including Michel Chion's analysis of sound design in film.
However, as sound studies scholar Jonathan Sterne has pointed out, discourses surrounding acousmatic listening often work to delineate pre-recording auditory experiences as "natural" by contrast[^husserl]. This implies that prior to the advent of sound reproduction technologies, listening was unmediated and holistic—a narrative that obscures the constructed nature of these assumptions.
[^husserl]: Sterne later critiques the phenomenological basis of acousmatic listening, which presupposes an idealized, intact body as the listening subject. He proposes a methodology of political phenomenology centered on impairment, challenging these normative assumptions [@sterne_diminished_2022]. Discussions of universality in computer music should also address ableism, particularly in relation to recording technologies and auditory disabilities.
> For instance, the claim that sound reproduction has “alienated” the voice from the human body implies that the voice and the body existed in some prior holistic, unalienated, and self present relation. [@sterne_audible_2003,p20-21]
<!-- > They assume that, at some time prior to the invention of sound reproduction technologies, the body was whole, undamaged, and phenomenologically coherent. -->
The claim that PCM-based sound synthesis can produce "almost any sound" is underpinned by an ideology associated with sound reproduction technologies. This ideology assumes that recorded sound contains an "original" source and that listeners can distinguish distortions or noise from it. Sampling theory builds on this premise through Shannon's information theory by statistically modeling human auditory characteristics: it assumes that humans cannot discern volume differences below certain thresholds or perceive vibrations outside specific frequency ranges. By limiting representation to this range, sampling theory ensures that all audible sounds can be effectively encoded.
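For reference, the perceptual limits mentioned above correspond to two textbook engineering results (stated here only for context; the figures are standard values and not claims made by the cited sources). A signal containing no energy above $f_{\max}$ is fully determined by its samples whenever

$$f_s > 2 f_{\max},$$

and uniform quantization with $B$ bits yields a signal-to-noise ratio of approximately

$$\mathrm{SNR} \approx 6.02\,B + 1.76\ \mathrm{dB},$$

so that, for example, $f_s = 44.1$ kHz covers the nominal 20 kHz upper limit of human hearing, and $B = 16$ yields roughly 98 dB of dynamic range.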
Incidentally, the actual implementation of PCM in MUSIC I only allowed for monophonic triangle waves with controllable volume, pitch, and timing[@Mathews1980]. Would anyone today describe such a system as capable of producing "almost any sound"?
Even when considering more contemporary applications, processes like ring modulation (RM), amplitude modulation (AM), or distortion often generate aliasing artifacts unless proper oversampling is applied. These artifacts occur because PCM, while universally suitable for reproducing recorded sound, is not inherently versatile as a medium for generating new sounds. As Puckette has argued, alternative representations, such as collections of linear segments or physical modeling synthesis, offer other possibilities[@puckette2015]. Therefore, PCM is not a completely universal tool for creating sound.
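The aliasing problem can be made concrete with a small numerical sketch (the sample rate, input frequency, and choice of a cubic nonlinearity are assumptions for illustration): cubing a sampled 16 kHz sine at 44.1 kHz produces a harmonic at 48 kHz, which cannot be represented below the Nyquist frequency of 22.05 kHz and therefore folds back to 3.9 kHz.

```python
import numpy as np

FS = 44100                      # sample rate in Hz (assumed)
F0 = 16000                      # input sine frequency in Hz (assumed)
t = np.arange(FS) / FS          # one second of samples
x = np.sin(2 * np.pi * F0 * t)

# Naive waveshaping distortion applied directly to the PCM samples.
# x**3 contains a component at 3*F0 = 48 kHz, above Nyquist (FS/2 = 22.05 kHz),
# so it folds back (aliases) to |48 kHz - FS| = 3.9 kHz.
y = x ** 3

spectrum = np.abs(np.fft.rfft(y))
freqs = np.fft.rfftfreq(len(y), d=1 / FS)
print(sorted(freqs[np.argsort(spectrum)[-2:]]))  # -> [3900.0, 16000.0]
```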
## What Does the Unit Generator Hide?
Beginning with version III, MUSIC took the form of an acoustic compiler (block diagram compiler) that processes two input sources: a score language, which represents a list of time-varying parameters, and an orchestra language, which describes the connections between **Unit Generators** such as oscillators and filters. In this paper, the term "Unit Generator" refers to a signal processing module whose implementation is either not open or written in a language different from the one used by the user.
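To make the orchestra/score split concrete, here is a deliberately simplified sketch in Python rather than in any MUSIC-N syntax; all names (`sine_ugen`, `instrument`, `score`) are illustrative assumptions. The "orchestra" is an instrument patched together from a unit generator whose internals the user never needs to inspect, and the "score" is a list of time-varying parameters rendered into a single PCM buffer:

```python
import math

SR = 44100  # sample rate (an assumed value)

def sine_ugen(freq, amp, dur):
    """Stand-in for a unit generator: callers see only its parameters,
    not how the samples are computed internally."""
    return [amp * math.sin(2 * math.pi * freq * i / SR)
            for i in range(int(dur * SR))]

# "Orchestra": one instrument defined as a patch of unit generators.
def instrument(freq, amp, dur):
    return sine_ugen(freq, amp, dur)

# "Score": a list of note parameters (start time, duration, frequency, amplitude).
score = [
    (0.0, 0.5, 440.0, 0.3),
    (0.5, 0.5, 660.0, 0.3),
]

# Compiler-like rendering pass: mix every note into one PCM output buffer.
out = [0.0] * int(SR * max(start + dur for start, dur, _, _ in score))
for start, dur, freq, amp in score:
    note = instrument(freq, amp, dur)
    offset = int(start * SR)
    for i, sample in enumerate(note):
        out[offset + i] += sample
```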
In the context of computer music research, the MUSIC family succeeded by realizing sound synthesis based on PCM, but this success came with the establishment of a division of labor between professional musicians and computer engineers through the development of domain-specific languages. Mathews explained that he developed a compiler for MUSIC III in response to requests from many composers for additional features in MUSIC II, such as envelopes and vibrato, while also ensuring that the program would not be restricted to a specialized form of musical expression (Max V. Mathews 2007, 13:10-17:50). He repeatedly stated that his role was that of a scientist rather than a musician:
<!-- > The only answer I could see was not to make the instruments myself—not to impose my taste and ideas about instruments on the musicians—but rather to make a set of fairly universal building blocks and give the musician both the task and the freedom to put these together into his or her instruments. (p16) -->
> When we first made these music programs the original users were not composers; they were the psychologist Guttman, John Pierce, and myself, who are fundamentally scientists. We wanted to have musicians try the system to see if they could learn the language and express themselves with it. So we looked for adventurous musicians and composers who were willing to experiment.[@Mathews1980, p17]
This clear delineation of roles between musicians and scientists became one of the defining characteristics of post-MUSIC computer music research. Paradoxically, while computer music research aimed to create sounds never heard before, it also paved the way for further research by allowing musicians to focus on composition without having to understand the cumbersome work of programming.
### Example: Hiding First-Order Variables in Signal Processing