- [Major] Changed the `hash` passed to `querySelector` to `decodeURIComponent(hash)`, fixing an issue where non-English anchors did not position the popover content at the corresponding heading.
- [Minor] Updated the type annotation from `HTMLLinkElement` to `HTMLAnchorElement`, since the element passed in is an `<a>` element, not a `<link>` element (reference: https://developer.mozilla.org/en-US/docs/Web/API/HTMLLinkElement).
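A minimal sketch of the decoding fix; the function and element names here are assumptions for illustration, not the exact Quartz code:

```typescript
// Position the popover content at the heading referenced by the link's hash.
// For non-English anchors, link.hash arrives percent-encoded (e.g. "#%E6%A6%82%E8%A6%81"),
// so it must be decoded before being used as a CSS selector.
function scrollPopoverToAnchor(link: HTMLAnchorElement, popoverInner: HTMLElement) {
  const hash = decodeURIComponent(link.hash) // "#概要"
  if (hash === "") return
  const heading = popoverInner.querySelector(hash) as HTMLElement | null
  heading?.scrollIntoView({ block: "start" })
}
```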
* replace .gitlab-ci.yml example with more reliable and faster ci job
* literally removing 1 space, inside a code block, in docs, just to make prettier not cry
Simplifies the slug from FullSlug to SimpleSlug before storing it in the in-memory list of visited pages.
This means the "index" page, "folder/index", and "tags/tag/index" are stored as "/", "folder/", and "tags/tag/" respectively in the list of visited pages.
This ensures that the homepage is correctly coloured as a visited page in the graph's "color" function.
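Roughly, the simplification described above maps slugs like this (a sketch, not the exact Quartz helper):

```typescript
// "index" -> "/", "folder/index" -> "folder/", "tags/tag/index" -> "tags/tag/"
function simplify(slug: string): string {
  if (slug === "index") return "/"
  return slug.endsWith("/index") ? slug.slice(0, -"index".length) : slug
}
```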
* Tags appear as hollow circles on the graph
Added a few lines to make tags appear as hollow circles on the graph, as opposed to pages which are plain circles, for better visual separation.
* Applied Prettier code style
* fix: change callout metadata regex to include non-letter characters
* fix: make metadata regex non-greedy
This allows users to write callouts such as
> [!NOTE|left foo-bar 123] a ]+ title with square brackets [s] a
> Contents
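For illustration, a regex of this general shape handles that example; it is a hedged sketch, not the exact expression shipped in `ofm.ts`:

```typescript
// Non-greedy metadata match: `[!type|metadata]` where metadata may contain
// non-letter characters; `+?` stops at the first closing bracket, so titles
// containing square brackets still parse.
const calloutRegex = /^\[\!([\w-]+)\|?(.+?)?\]([+-]?)/
const match = "[!NOTE|left foo-bar 123] a ]+ title with square brackets [s] a".match(calloutRegex)
// match?.[1] === "NOTE", match?.[2] === "left foo-bar 123"
```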
* Add homepage link with internationalization
* Construct pathname from baseUrl config value
* More robust URL manipulation
* Add Farsi (#1133)
* Fix bad rebase
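A small sketch of the baseUrl-derived pathname idea for the homepage link (the config field access is an assumption):

```typescript
// baseUrl may include a subpath, e.g. "example.com/quartz"; the homepage link
// should point at that subpath rather than hard-coding "/".
const baseUrl = "example.com/quartz" // e.g. cfg.baseUrl
const homePath = new URL(`https://${baseUrl}`).pathname // "/quartz"
```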
* fix(wikilinks): handle wikilinks inside tables separately from other wikilinks
* Prettier
* Cleaned up duplicate code
* Remove test logging
* Refactored and fixed for non-aliased wikilinks inside table
* Updated naming and comments
* Updated comment of wikilink regex
* Updated regex to match previous formatting
* Match table even if EOF is immediately after the table.
* Update quartz/plugins/transformers/ofm.ts
Co-authored-by: Jacky Zhao <j.zhao2k19@gmail.com>
* Change table escape replace to non-regex version
* Prettier
* Prettier
---------
Co-authored-by: Jacky Zhao <j.zhao2k19@gmail.com>
* feat(search): add search by title/content index and tag at the same time
* fix(search): set search type to basic and remove the tag from the term for proper highlighting and scrolling when searching by tag and title/content index
* fix(search): use indexOf to find space so it is easier to read
* fix(search): trim trailing whitespaces before splitting
* fix(search): set limit to 10000 for combined search mode (to make filter by tag more accurate)
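A sketch of the combined-query parsing these commits describe; the function and field names are assumptions:

```typescript
// A query like "#recipes pasta" is split into the tag and the free-text term.
function parseSearch(rawTerm: string) {
  const term = rawTerm.trimEnd() // trim trailing whitespace before splitting
  if (!term.startsWith("#")) return { tag: null, term }
  const spaceIndex = term.indexOf(" ") // first space separates tag from text
  if (spaceIndex === -1) return { tag: term.slice(1), term: "" }
  return { tag: term.slice(1, spaceIndex), term: term.slice(spaceIndex + 1) }
}
```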
* fix: wikiLink in table
- update the regexp so that '\' is grouped into the alias
- handle the alias when using a block_id
* style: format with prettier
* style: add comment for block_ref(without alias) in table
---------
Co-authored-by: hulinjiang <hulinjiang@58.com>
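For context, both wikilink-in-table PRs above deal with the escaped pipe (`\|`) that Obsidian requires inside table cells. An illustrative (not the actual) pattern:

```typescript
// Inside a Markdown table, a wikilink alias must escape its pipe as `\|`
// so it isn't read as a column separator.
const cell = String.raw`[[Some Page\|alias]]`
const wikilinkInTable = /\[\[([^\[\]\|]+)(?:\\\|([^\[\]]+))?\]\]/
const m = cell.match(wikilinkInTable)
// m?.[1] === "Some Page", m?.[2] === "alias"
```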
I really don't know why I translated this like that into "pas trouvé", and it bugged me a lot. I finally fixed it…
Signed-off-by: Mara-Li <lili.simonetti@outlook.fr>
* Stop mutating resources param in ComponentResources emitter
* Add done rebuilding log for fast rebuilds
* Move google font loading to Head component
* Simplify code and fix comment
* Add options to support goatcounter analytics
* goatcounter: support self-hosted
* Add to configuration docs for goatcounter settings
* use https instead of protocol-relative link for goatcounter js
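An illustrative `quartz.config.ts` snippet for the GoatCounter option added here; the field names are assumptions, so check the configuration docs for the exact shape:

```typescript
// Assumed analytics config shape for GoatCounter, including self-hosting.
const analytics = {
  provider: "goatcounter",
  websiteId: "my-site", // hosted: counts at https://my-site.goatcounter.com
  host: "stats.example.com", // optional: self-hosted GoatCounter instance
  scriptSrc: "https://stats.example.com/count.js", // https, not protocol-relative
}
```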
* Fix docker volume lock issue by altering asset cleanup method
Modified build process to prevent the deletion of the output directory.
* Add fsOps utility for filesystem operations
* Use cleanDirectory in build process to fix volume lock issue
* applied prettier
* handle ENOENT error when output dir does not exist
* remove native function in favor of rimraf
* use path.join to concatenate paths
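A hedged sketch of the "clean the contents, keep the directory" idea behind the fsOps utility (a hypothetical helper; these commits later switched to rimraf for the actual removal):

```typescript
import fs from "fs"
import path from "path"

// Remove everything inside `dir` without deleting `dir` itself, so a mounted
// docker volume keeps its lock on the output directory.
async function cleanDirectory(dir: string) {
  let entries: fs.Dirent[]
  try {
    entries = await fs.promises.readdir(dir, { withFileTypes: true })
  } catch (err: any) {
    if (err.code === "ENOENT") return // output dir doesn't exist yet: nothing to clean
    throw err
  }
  await Promise.all(
    entries.map((e) => fs.promises.rm(path.join(dir, e.name), { recursive: true, force: true })),
  )
}
```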
* feat(popover): Add support for images
* fix: run prettier
* feat(popover): use switch logic for content types & adjust styles
* feat(popover): Add content type data tag for popover-inner class
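A rough sketch of the content-type switch described above, with assumed names rather than the exact Quartz popover code:

```typescript
// Fetch the link target and render the popover differently per MIME type,
// tagging popover-inner with a data attribute for styling.
async function fillPopover(targetUrl: URL, popoverInner: HTMLElement) {
  const response = await fetch(targetUrl.toString())
  const [contentTypeCategory] = (response.headers.get("Content-Type") ?? "text/html").split("/")
  popoverInner.dataset.contentType = contentTypeCategory // the content type data tag

  switch (contentTypeCategory) {
    case "image": {
      const img = document.createElement("img")
      img.src = targetUrl.toString()
      popoverInner.appendChild(img)
      break
    }
    default: {
      // existing behaviour: parse the fetched HTML and inject the relevant fragment
      break
    }
  }
}
```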
Rather than recommend a different hosting provider, Cloudflare Pages
users who prioritize the `git` method for their `CreatedModifiedDate`
configuration can preface the build command with a step that fetches the
required repository history.
See:
- https://gohugo.io/methods/page/gitinfo/#hosting-considerations
* fix: alt text mixed up with height/width
More granular detection of alt text and resize dimensions in images
* fix: format
* feat: init i18n
* feat: add translation
* style: prettier for test
* fix: build up the locale to merge with dateLocale
* style: run prettier
* remove cursed file
* refactor: remove i18n library and use locale way instead
* format with prettier
* forgot to remove test
* prevent merging error
* format
* format
* fix: allow string for locale
- Check during translation that the locale is valid / exists
- Allow using "en" as well as "en-US", for example
- Add fallback directly in the function
- Add default key in the function
- Add docstring to cfg.ts
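A sketch of the locale validation and fallback described above; the names and translation-table shape are assumptions:

```typescript
// Accept both "en" and "en-US"; fall back to the default locale for unknown values.
const TRANSLATIONS: Record<string, Record<string, string>> = {
  "en-US": { "propertyDefaults.title": "Untitled" },
  "fr-FR": { "propertyDefaults.title": "Sans titre" },
}
const DEFAULT_LOCALE = "en-US"

function i18n(locale: string | undefined, key: string): string {
  const requested = locale ?? DEFAULT_LOCALE
  const language = requested.split("-")[0]
  const matched =
    Object.keys(TRANSLATIONS).find((l) => l === requested) ??
    Object.keys(TRANSLATIONS).find((l) => l.split("-")[0] === language) ??
    DEFAULT_LOCALE
  return TRANSLATIONS[matched][key] ?? TRANSLATIONS[DEFAULT_LOCALE][key]
}
// i18n("en", "propertyDefaults.title") -> "Untitled" (matched via "en-US")
```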
* forgot item translation
* remove unused locale variable
* forgot to remove fr-FR testing
* format
When providing an absolute path to the content directory (e.g. when using an Obsidian Vault in another directory), the build step would fail with
Failed to process `/absolute/path/to/file.md`: ENOENT: no such file or directory, stat '/current/working/directory/absolute/path/'
This problem originated in the `CreatedModifiedDate` transformer, which tries to construct a native filesystem path to the file to call `fs.stat` on. It did not, however, account for the original file path contained in the received `VFile` being an absolute path, and so it just concatenated the current working directory with the absolute path, producing a nonexistent one.
This patch adds a simple fix for this issue by checking if the original file path is already absolute before concatenating with the current working directory.
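A minimal sketch of that check (variable names assumed):

```typescript
import path from "path"

// Only prefix the working directory when the vfile path is relative.
function toFilesystemPath(cwd: string, vfilePath: string): string {
  return path.isAbsolute(vfilePath) ? vfilePath : path.join(cwd, vfilePath)
}
// fs.promises.stat(toFilesystemPath(process.cwd(), filePath))
```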
* Add icons as masks
To handle custom icons in a simple way, I made it pure CSS. Icons are now a mask for the callout-icon div, so they always follow the --color from the current callout.
Now, to add a custom icon, you simply add
```css
.callout {
  &[data-callout="custom"] {
    --color: #customcolor;
    --border: #custombordercolor;
    --bg: #custombg;
    --callout-icon: url('data:image/svg+xml; utf8, <custom formatted svg>');
  }
}
```
to custom.scss
* remove now unused code
* Make callouts an enum
* docs: update instructions for custom callouts
* Prettier & run format
* dynamic matching
For maintainability, make the matching dynamic. If we or Obsidian want to support more callouts, we simply add them to the enum.
* callout mapping const
Getting rid of the enum entirely, as it's not worth it here?
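For illustration, the mapping-const approach might look like this; the aliases shown are Obsidian's, while the exact shape here is an assumption:

```typescript
// Map callout aliases to their canonical callout type.
const calloutMapping: Record<string, string> = {
  note: "note",
  abstract: "abstract",
  summary: "abstract",
  tldr: "abstract",
  tip: "tip",
  hint: "tip",
  important: "tip",
  warning: "warning",
  caution: "warning",
  attention: "warning",
  // ...remaining Obsidian callout types and aliases
}

function canonicalizeCallout(calloutName: string): string {
  return calloutMapping[calloutName.toLowerCase()] ?? "note"
}
```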
* fix callout icon styling
* Add forgotten icons
* Rebase
* harmonize callout icon and fold icon
* fix docs + prettier
* Update docs/features/callouts.md
Co-authored-by: Jacky Zhao <j.zhao2k19@gmail.com>
* Update quartz/plugins/transformers/ofm.ts
Co-authored-by: Jacky Zhao <j.zhao2k19@gmail.com>
* Suggestions fix
* remove unnecessary rules
* comment is always nice
* Update docs/features/callouts.md
---------
Co-authored-by: Jacky Zhao <j.zhao2k19@gmail.com>
* feat: div that encapsulate PageList component
* change class to follow review
Co-authored-by: Jacky Zhao <j.zhao2k19@gmail.com>
* apply page-listing div to TagContent
---------
Co-authored-by: Jacky Zhao <j.zhao2k19@gmail.com>
* fix: alt text mixed up with height/width
More granular detection of alt text and resize dimensions in images
* fix: format
* feat: allow to translate the date displayed
* style: format
* fix: rename to merge dateLocale with locale (i18n support)
* Update quartz/components/PageList.tsx
Co-authored-by: Jacky Zhao <j.zhao2k19@gmail.com>
* remove default key as it was already set
* add docstring for locale
---------
Co-authored-by: Jacky Zhao <j.zhao2k19@gmail.com>
* docs: improve first-time git setup
* fix: cssClasses was not applied on index page
* refactor: remove vscode files
* fix: format
* fix: cssClasses should be applied on the entire div, not only the article
* feat: support cssClasses for tag-listing
---------
Co-authored-by: Jacky Zhao <j.zhao2k19@gmail.com>
* add an option to show or hide the reading time on notes
* Prettier (?)
* Remove ContentMeta override from quartz.layout.ts
* Make it positive ! 🌞
* Update quartz/components/ContentMeta.tsx
---------
Co-authored-by: Jacky Zhao <j.zhao2k19@gmail.com>
* fix: allow publish property to be a string (ExplicitPublish)
Previously, the ExplicitPublish filter would publish if the `publish`
property was truthy.
The filter expects the `publish` property to be a boolean:
```
---
publish: true
---
```
However, Obsidian only shows the above if you are viewing a page in
“Source” mode.
If you are not in Source view, and you choose Three Dots Menu (...),
“Add file property”, you will get a string, not a boolean. It seems
likely that many users will do this and get:
```
publish: "true"
```
Notice that `"true"` is a string, not the boolean value `true`. If the
user changes this to `"false"`, the page will still be published:
```
publish: "false"
```
That is because the string value `"false"` is truthy.
This PR does the following:
- Allows the `publish` property to be either a boolean or a string.
- If it’s a string, it’s considered `true` if the string is `"true"`
(not case-sensitive; it will also work if it is `"True"`, `"TRUE"`,
etc.)
- Guarantees that the returned value from `shouldPublish` is a `boolean`
-- previously it could be any truthy value even though it was cast to
`boolean`
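A minimal sketch of the coercion this describes (the frontmatter access is assumed):

```typescript
// Accept `publish` as a boolean or a string; only boolean true or the
// case-insensitive string "true" publishes, and the result is always a boolean.
function shouldPublish(frontmatter: Record<string, unknown>): boolean {
  const publish = frontmatter.publish
  if (typeof publish === "string") {
    return publish.toLowerCase() === "true" // "true"/"True"/"TRUE" publish; "false" does not
  }
  return publish === true
}
```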
* style: use double-quotes everywhere
* style: format according to project style guide
* Add option to allow embedding YouTube videos with Obsidian Markdown syntax
* Update Obsidian compatability doc page
* Switch to converting YT links as an html plugin
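A rough sketch of the HTML-plugin approach, using `unist-util-visit` over the hast tree; this is illustrative, not the exact transformer:

```typescript
import { visit } from "unist-util-visit"
import type { Root, Element } from "hast"

// Rewrite ![](https://www.youtube.com/watch?v=...) image embeds into <iframe> players.
const ytLink = /^https?:\/\/(?:www\.)?(?:youtube\.com\/watch\?v=|youtu\.be\/)([\w-]{11})/

export function youtubeEmbed() {
  return (tree: Root) => {
    visit(tree, "element", (node: Element) => {
      const src = node.properties?.src
      if (node.tagName !== "img" || typeof src !== "string") return
      const match = src.match(ytLink)
      if (!match) return
      node.tagName = "iframe"
      node.properties = {
        src: `https://www.youtube.com/embed/${match[1]}`,
        frameBorder: 0,
        allowFullScreen: true,
        className: ["external-embed"],
      }
    })
  }
}
```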
* Continue setup even if a file to delete is not found
For various reasons, `.gitkeep` may be deleted already.
(In my case, even though I followed the [Getting Started](https://quartz.jzhao.xyz) instructions exactly, my first run resulted in an `fatal: 'upstream' does not appear to be a git repository`)
If we try to delete `.gitkeep` again and don't ignore `ENOENT`, then the whole setup fails.
* Use fs.existsSync
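A minimal sketch of the guard (the path is illustrative):

```typescript
import fs from "fs"

const gitkeepPath = "content/.gitkeep"
if (fs.existsSync(gitkeepPath)) {
  fs.unlinkSync(gitkeepPath) // if it was already removed, there is nothing to do
}
```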
* Added doc example to explorer sortFn
* Prettier fixed formatting
* Let Prettier fix the formatting of the entire markdown file
* Updated example
* Added extra commentary and fixed example
* Update docs/features/explorer.md
* doc fixes
* docs: remove leftover TODO
* docs: move example to `advanced`
---------
Co-authored-by: Sidney <85735034+Epicrex@users.noreply.github.com>
Co-authored-by: Jacky Zhao <j.zhao2k19@gmail.com>
Co-authored-by: Ben Schlegel <ben5.schlegel@gmail.com>
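For reference, a sortFn of the kind that doc example covers: folders before files, both alphabetically. This is a sketch under assumed `FileNode` fields and import path, not the exact docs snippet:

```typescript
import type { FileNode } from "./quartz/components/ExplorerNode" // assumed path

// Folders first, then files; each group sorted alphabetically (numeric-aware).
const sortFn = (a: FileNode, b: FileNode): number => {
  if ((!a.file && !b.file) || (a.file && b.file)) {
    return a.displayName.localeCompare(b.displayName, undefined, { numeric: true })
  }
  return a.file && !b.file ? 1 : -1
}
```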
* use slugs instead of title as basis for explorer
* fix folder persist state, better default behaviour
* use relative path instead of full path as full path is affected by -d
* dont use title in breadcrumb if it's just index lol
<summary>The author-date variant of the Chicago style</summary>
<updated>2024-05-09T13:08:37+00:00</updated>
<rights license="http://creativecommons.org/licenses/by-sa/3.0/">This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License</rights>
> Furthermore, since the majority of composers don't really want to become computer programmers, even given a high-level, specialized language, it makes sense to develop models that are of intermediate generality but easy to use.
> Lacking adequate knowledge of the technical system, musicians increasingly found themselves drawn to prefabricated programs as a source of new sound material. As I have argued, however, this assertion is not simply a statement of fact; it also suggests a reconceptualization on the part of the industry of the musician as a particular type of consumer. (p. 89)
The Renesas chip internally exposes pins for a capacitive touch sensor, and they are actually wired to the ❤ in the "Made with ❤" on the back of the Arduino, so there is an obscure trick where you can extend that trace and use the touch sensor. I wish they had simply left a pin or a solderable pad on the front.
[How to access the Capacitive Touch Sensing Unit - UNO R4 WiFi - Arduino Forum](https://forum.arduino.cc/t/how-to-access-the-capacitive-touch-sensing-unit/1145940)
But looking it up now, it seems something new has come out.
[GitHub - delta-G/R4\_Touch: Capacitive Touch Sensing for the Arduino UNO-R4](https://github.com/delta-G/R4_Touch)
[GitHub - angrykoala/meta-brainfuck: A brainfuck-like programming language that generates code of itself](https://github.com/angrykoala/meta-brainfuck)
[Algorithmic symphonies from one line of code -- how and why?(2011)](http://countercomplex.blogspot.com/2011/10/algorithmic-symphonies-from-one-line-of.html)
title: "Computer Music Languages and Systems: The Synergy Between Technology and Creativity"
date: 2016-01-01
citekey: Nishino2016
tags:
- research
- bookSection
---
[[Nishino Hiroki]]
> [!Cite]
> Nishino, Hiroki, and Ryohei Nakatsu. 2016. "Computer Music Languages and Systems: The Synergy Between Technology and Creativity". _Handbook of Digital Games and Entertainment Technologies_. [https://doi.org/10.1007/978-981-4560-52-8](https://doi.org/10.1007/978-981-4560-52-8).
> INTRODUCTION 1 We are at a moment of full bloom as regards the use of new methodologies in the teaching of Spanish as a foreign language, the implementation of information and communication technologies, and the inclusion of playful elements to improve the teaching and learning experience. In this article we want to offer an approach to the concept of gamification (or "ludification"), a term already present in the business world that has recently been adapted to the foreign-language teaching context because of the many advantages it can offer during learning. The use of games or their elements in the context of foreign-language teaching and learning aims to modify learners' behaviour towards the process of learning the target language; for example, to increase their motivation and to make learning meaningful and lasting. However, to achieve this goal it is first necessary to analyse the contextual characteristics, attend to the curricular objectives and, above all, take into account the specific needs of the learners. The main objective of this article is to promote reflection on this term and its implementation in the classroom, as well as to propose a series of ideas for putting it into practice in the classroom context. Finally, we want to awaken in other foreign-language teachers the interest and curiosity to implement gamification in their own teaching practice. 1 The data presented in this workshop are an adaptation of the workshop entitled "Y tú, ¿gamificas?" given by Matías Hidalgo Gallardo and Antonia García Jiménez during the III Jornadas de formación de profesores de ELE in Hong Kong (13-14 March 2015). WHAT IS GAMIFICATION? The conceptualization of this term has its origin in the business world, since that is the context in which it began to be used. Thus, Werbach and Hunter (2012) point out that gamification consists of the use of game elements and game-design techniques in non-game contexts. Taking into account the context in which we find ourselves as teachers, the definition we have just presented must be modified. We will take as a reference the proposal of Foncubierta and Rodríguez (2014), who define gamification as the technique or techniques that the teacher employs in the design of an activity, task or learning process (whether analogue or digital in nature), introducing game elements (badges, time limits, scoring, dice, etc.) and/or game thinking (challenges, competition, etc.) in order to enrich that learning experience and to direct and/or modify students' behaviour in the classroom (Foncubierta and Rodríguez 2).
>.
>
# Notes
![[The Computer Music Tutorial, second edition - Curtis Roads#Notes]]
title: "Computer music and post-acousmatic practices: International Computer Music Conference 2022"
date: 2022-07-03
citekey: holbrook2022
tags:
- research
- conferencePaper
- "#computermusic"
---
> [!Cite]
> Holbrook, Ulf, and Joran Rudi. 2022. "Computer music and post-acousmatic practices: International Computer Music Conference 2022". _Proceedings of the International Computer Music Conference, ICMC 2022_, edited by Giuseppe Torre, 140–44. International Computer Music Conference, ICMC Proceedings. San Francisco: International Computer Music Association. [https://icmc2022.files.wordpress.com/2022/09/icmc2022-proceedings.pdf](https://icmc2022.files.wordpress.com/2022/09/icmc2022-proceedings.pdf).
> **Title**:: Computer music and post-acousmatic practices: International Computer Music Conference 2022
> **Year**:: 2022
> **Citekey**:: holbrook2022
> **itemType**:: conferencePaper
> **Publisher**:: International Computer Music Association
> **Pages**:: 140-144
> [!LINK]
>
> [Holbrook et al. - Computer music and post-acousmatic practices.pdf](file:///Users/tomoya/Zotero/storage/NBRFF5ND/Holbrook%20et%20al.%20-%20Computer%20music%20and%20post-acousmatic%20practices.pdf).
> [!Abstract]
>
> This short paper considers the practices of computer music through a perspective of the post-acousmatic. As the majority of music is now made using computers, the question emerges: How relevant are the topics, methods, and conventions from the “historical” genre of computer music? Originally an academic genre confined to large mainframes, computer music’s tools and conventions have proliferated and spread to all areas of music-making. As a genre steeped in technological traditions, computer music is often primarily concerned with the technologies of its own making, and in this sense isolated from the social conditions of musical practice. The post-acousmatic is offered as a methodological perspective to understand technology-based music, its histories, and entanglements.
>.
>
# Notes
A paper examining the connection between [[ポストアクースマティック]] (the post-acousmatic) and the term "computer music".
> From its inception and up until today, computer music composers have sought and gained new tools, and have shifted their methods towards uses of high-level software on computers and portable tablets. Most newer tools abstract the signal processing routines and variables, making them easier to use while removing the need for understanding the underlying processes in order to create meaningful results.
[Electroacoustic Music Studies Asia Network \[EMSAN\] | IReMus](https://www.iremus.cnrs.fr/en/base-de-donnees/electroacoustic-music-studies-asia-network-emsan)
Database
[EMSAN: The Electroacoustic Music Studies Asia Network](http://emsan.lib.ntnu.edu.tw/about.jsp)
[Paper: Feminism in Programming Language Design – Felienne Hermans](https://www.felienne.com/archives/8470)
[A Case for Feminism in Programming Language Design | Proceedings of the 2024 ACM SIGPLAN International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software](https://dl.acm.org/doi/10.1145/3689492.3689809)
> The music is great, but video... It is worth to know that the Suite has an exact narrative. According to Jean-Claude Risset: "this music I composed for the play Little Boy by Pierre Halet. The theme of the play is the revival of the Hiroshima bombing in the form of a nightmare of Eatherly, the pilot of a reconnaissance plane who later developed guilt jeopardizing his mental health. Fall corresponds to the release of the bomb. The pilot thinks that Little Boy, the bomb with which he identifies himself, is falling - in fact this is a psychological collapse that never reaches any bottom. To illustrate this, I have produced a paradoxical glissando, which appears to glide down for ever amidst more normal tones. This is accomplished by ganging together a number of octave components, as pioneered by organ makers such as Callinet centuries ago and by psychologist Roger Shepard with the computer, and used in instrumental music from Bach to Berg and later my own Phases for orchestra".
> Little Boy was realized at Bell Laboratories. All its sounds have been produced with the MUSIC V program. The Computer Suite is excerpted from music composed for the play Little Boy by Pierre Halet. The theme of the play is the revival of the Hiroshima bombing in the form of a nightmare of the pilot of the reconnaissance plane, who later developed guilts jeopardizing his mental health. The Suite attempts to roughly sum-up the movement of the play; it comprises three parts. The first section, Flight and Countdown, follows the pilot's dream, which takes him through a musically stylized plane flight, with inharmonic textures, episodes of synthetic jazz and japanese-like tunes.The flight is terminated by a count-down preceding the release of the bomb. The following section is the Fall. The pilot thinks that Little Boy, the bomb with which he identifies himself, is falling - in fact this is a psychological collapse that never reaches any bottom, hence the endless descending spiral. The last part is called Contra-Apotheosis like the anti-climactic end of the play. Here various time fragments are recalled or evoked in a deliberately desintegrated way, as the obsessions of the central character and his entire world mentally rotate.Thus the jazz band gets mixed up and ends as a gun-like beat; the Japanese instruments turn into sirens; a siren glides upwards yet becomes lower and lower; a pandemonium of sounds builds up above a rotating glissando, to be quieted down and dissolved into memories.
> It is worth pointing out that the starting point of MUS10 was an existing ALGOL parser, modified for music synthesis. We shall see several examples of this later in which the language designer simply took an existing language compiler and modified it to suit musical requirements. This is a very simple but effective way to start a language design. ([Programming languages for computer music synthesis, performance, and composition | ACM Computing Surveys](https://dl.acm.org/doi/10.1145/4468.4485), p248)
[Music IV Programmer's Manual : Max Mathews, Joan Miller : Free Download, Borrow, and Streaming : Internet Archive](http://archive.org/details/music-iv-programmers-manual)
[Max Mathews Full Interview | NAMM.org](https://www.namm.org/video/orh/max-mathews-full-interview)
Transcription is powered by [Notta](https://notta.ai); I manually corrected proper nouns such as the names of people and institutions, but there may still be several errors...
Thank you for having a few minutes for me, I do appreciate it.
[[Max Mathews|Max V. Mathews]] 00:09
Okay.
Interviewer 00:11
I think it's a good place to start, if you don't mind. It's just a little bit of background on yourself. And tell me the role of music in your life when you were growing up.
Max V. Mathews 00:23
Two things, I learned to play the violin, not well, and I still don't play it well when I was in high school. And I continued to play the violin with school orchestras and chamber groups, and still do that. It's a great joy in my life.Then at the end of the Second World War, I was in the Navy in Seattle, and the good people of Seattle had set up a listening room where you could go and listen mostly to shellac 78 RPM records, but a few vinyl 78 RPM records. And so I realized at that time that music had an emotional and beautiful and pleasurable content, and that also has been a great factor in my life. So those were the two places where I got into music.
Interviewer 01:22
Now where did you grow up?
Max V. Mathews 01:24
I grew up in Nebraska, and when I was 17, I guess I enlisted in the Navy as a radio technician trainee. Now we were called radio technicians, but we were really trained to repair radars, but the word radar was secret at that time. And so I finished school there and then went to Seattle and helped commission a destroyer, and then we shot the guns and shook the boat down and went back to Seattle, and then I was mustered out because the war had ended and VJ Day was over. I met Marjorie in San Francisco at the radar training school on Treasure Island, and we hit it off immediately. So I stayed in the West Coast, went to school at Caltech, studied electrical engineering there because I was in love with radar circuits. I wish I had studied physics there, but nevertheless it's a wonderful school.And then I went on to MIT and got introduced to computers. Those days analog computers were the useful computers, digital computers were still being developed, and I sort of loved these big complicated systems, and so we solved the kinds of problems that analog computers could solve, and that was my schooling.
Interviewer 03:03
Very interesting. Can you give me a little background on your family? Did your parents also grow up in Nebraska?
Max V. Mathews 03:11
Yes, my parents were both born there and grew up there. They were both teachers. My father was the principal of the teachers' training high school in Peru. There was a little teachers' college there. But what he really enjoyed was teaching the sciences. So he taught physics and biology and chemistry. And he let me play in his laboratory as well as in his workshops. And that was another thing that set the course of my life. I still enjoy working in a workshop and I still enjoy the sciences very much.
Interviewer 04:00
Very interesting. Well, what were the computers like when you first started getting interested in that?
Max V. Mathews 04:10
Well, the one computer that we used most, and this was to develop counter missiles to protect mostly against air attacks at that time. And this was a combination of an electromechanical system. So the integrator on the computer was a mechanical integrator, but the other components, the adders and more simple operations were done electronically. Then operational amplifiers were designed and came along at that time. And so then most of the simple integrations were taken over by the operational amplifier feedback circuit that still does that job. And only complex integrations of fairly nonlinear processes had to be done with the mechanical components. So the computer itself filled a large room full of relay racks that held both the analog components and the mechanical components. Now, there was a lot of interconnecting that had to be done at a patch field. The question would be, had you done it correctly, would it give the right solution to the original problem? And so we needed check solutions, and you could integrate the solution on a Marchant mechanical multiplying calculator machine. If you had a group of five or ten, I think in those days it was entirely women, and they worked for about a month to calculate one solution, whereas the analog computer, of course, would turn out a solution in a few seconds. So we would get these digital integrations and compare it with the analog result, and then figure out what mistakes we'd made and corrected, and then go ahead and study a lot of different conditions. When I came to Bell Labs in 1955, I started working and always worked in acoustic research there, and our main job was developing new telephone, well, new speech coders that really would compress the amount of channel that was needed to transmit the speech over expensive things like the transatlantic cable. And in the beginning, people had a number of ideas on how the encoding might work. Pitch period repeating was one of them. Channel vocoder processing was another of them. Formant vocoders were yet a third, and in order to try these things, one had to build a sample model of them, and this was very complicated. The vacuum tubes were the things that we had to design and work with in those days. The transistor had not yet become practical. So it might take several years to design a trial equipment, and usually it didn't work. So then you would go back and do it again. And I thought that, well, I should say that this was just the time that computers were becoming powerful enough to do a digital simulation of many things. And in the case of speech, the essential thing was a way of getting speech into the computer and then getting it back out after you had processed it to see what it sounded like. And the key element that made that possible was not the computer, digital computer itself. You could run the computer for a few days to make a few minutes of speech. But the crucial thing was the digital tape recorder, which could take the output of an analog to digital converter at speech rates.
Max V. Mathews 09:00
In those days, it was 10,000 samples per second. Today it's 44,000 samples a second for CD music and more for other things. Anyhow, take these rapid flow of samples coming out and record them on a digital tape that then could be taken to the computer to be the input, slow input. And the computer would write a digital tape and you could take this back and play it back again at the 10,000 samples per second so you could hear the thing at speech frequencies. And this digital tape-based A to D computer input and output was the equipment that we built at Bell Labs that made this possible and was a completely successful device for speech research. And most of the modern coders came from this. And now, of course, as you know, it's not only digitized speech is not only used for research, it's the way that almost all information is transmitted. The reason being that digital transmissions are very rugged and number is a number and you can hand it on from one medium to another and from one company to another. And as long as you use the proper error correcting codes why if it goes to Mars and back you'll still get the correct numbers. So that's how the world works today.
Interviewer 10:38
Very interesting. Max, when did it first come into your mind that computers and music could be put together?
Max V. Mathews 10:48
I've forgotten the exact date, but it was in 1957, and my boss, or really my boss's boss, John Pierce, the famous engineer who invented satellite communication, and I were going to a concert. We both liked music as an art. And the concert was at local pianist who played some compositions by Schnabel and by Schoenberg. And at the intermission, we thought about these, and we thought that Schoenberg was very nice and that Schnabel was very bad, and John said to me, "Max, I bet the computer could do better than this", and "why don't you either take a little time off from writing programs for speech compression or maybe work in the midnight oil and make a music program". And as I said at the beginning, I love to play the violin, but I'm just not very good at it, and so I was delighted at the prospect of making an instrument that would be easier to play, at least in a mechanical sense, and I thought the computer would be that. So I went off and wrote my Music 1 program, which actually made sound, but horrible sound, so that you couldn't really claim it was music. But that led to Music 2 and eventually Music 5, which did make good music. And gradually, I'm not a musician, well, in any sense. I consider myself a creator and an inventor of new musical instruments, computer-based instruments. But my ideas did make an impact on musicians and composers and I think started, or it was one of the startings of the fields of computer music.
Interviewer 13:05
Absolutely. Tell me about Music 2. I'm sort of curious about that.
Max V. Mathews 13:10
Well, Music 1 had only one voice and only one wave shape, a triangular wave, an equal slope up and equal slope down. And the reason was that the fastest computer at the time, the [[IBM 704]], was still very slow. And the only thing it could do at all fast was addition. And if you think about it, each sample could be computed from the last sample by simply adding a number to it. So the time was one addition per sample. Well, the only thing the composer had at his disposal was the steepness of the slope, how big the number was. So that would determine how loud the waveform was, and the pitch that you were going to make, and the duration of the note. And so that wasn't very much, and you didn't have any polyphony there. So they asked for making a program that could have more voices. And I made one with four voices. And I made one where you could have a controlled wave shape so that you could get different timbres as much as the wave shape contributes to the timbre. Now, in a computer, calculating a sine wave, or a damped sine wave, or a complicated wave is pretty slow, especially in those days. So I invented the wavetable oscillator where you would calculate one pitch period of the wave and store it in the computer memory, and then read this out at various pitches so that this then could be done basically by looking up one location in the computer memory, which is fast. And I also put an amplitude control on the thing by multiplying the wave shape by a number. So this cost a multiplication and a couple of additions. So it was more expensive. By that time, computers had gotten maybe 10 or 100 times as fast as the first computer. So it really was practical. So that was Music 2. And something that most listeners would call music came out of that. And some professional composers used it. But they always wanted more. In particular, they didn't have any things like a controlled attack and decay, or vibrato, or filtering, or noise, for that matter. So it was a perfectly reasonable request. But I was unwilling to contemplate even adding this kind of code, one device at a time, to my music program. So what I consider my really important contribution, that still is important, came in Music 3. And this was what I call a block diagram compiler. And so I would make a block, which was this waveform oscillator. And it would have two inputs. One was the amplitude of the output. And the other was the frequency of the output. And it would have one output. And I would make a mixer block, which could add two things together and mix them. And I made a multiplier block in case you wanted to do simple ring modulation. And I made a noise generator. And essentially, I made a toolkit of these blocks that I gave to the musician, the composer. And he could interconnect them in any way he wanted to make as complex a sound as he wanted. And this was also a note-based system so that you would tell the computer to play a note.
Max V. Mathews 17:50
And you would give the parameters that you wanted the computer to read for that note. You almost always specified the pitch and the loudness of the note. But you could have an attack and decay block generator included in this, and you could say how fast you wanted the attack and how long you wanted the decay to last, or you could even make an arbitrary wave shape for the envelope of the sound. And so this really was an enormous hit, and it put the creativity then, not only for composing the notes, the melodies, or the harmonies that you wanted played on the musician, on the composer, but it gave him an additional task of creating the timbres that he wanted.And that was a mixed blessing. He didn't have the timbres of the violin and the orchestral instruments to call upon that he understood. He had to learn how timbre was related to the physical parameters of the waveform. And that turned out to be an interesting challenge for musicians that some people learn to do beautifully and others will never learn it.The man who really got this started at the beginning was [[Jean Claude Risset]], a French composer and physicist who came to Bell Labs and worked with me. It was one of my great good luck and pleasures that he was around. And so he made a sound catalog that showed how you could create sounds of various instruments and sounds that were interesting but were definitely not traditional instruments. And that work still goes on. Risset is coming here to give some lectures at Stanford on April 3rd. He'll be here for the entire spring quarter.
Interviewer 20:03
Hmm, very interesting.
Max V. Mathews 20:06
But to finish up this series, that got me to Music 3. Along came the best computer that IBM ever produced, the IBM 704, the 7094, excuse me. It was a transistorized computer, it was much faster, and it had quite a long life. They finally stopped supporting it in the mid-1960s, I guess. I had to write Music 4, simply reprogramming all the stuff I had done for the previous computer, for this new computer, which was a big and not very interesting job. So, when the 7094 was retired, and I had to consider another computer, I rewrote Music 5, which is essentially just a rewrite of Music 3 or Music 4, but in a compiler language. FORTRAN was the compiler that was powerful and existed in those days. And so that when the next generation beyond the Music 5 computers, the PDP-10 was a good example of a computer that ran well with music, I didn't have to rewrite anything. I could simply recompile the FORTRAN program, and that's true today. Now the sort of most direct descendant of Music 5 is a program written by [[Barry Vercoe]], who's at the Media Lab at MIT, and it's called Csound, and the C in [[Csound]] stands for the C compiler. Now you're asking about Bell Labs, and many wonderful things came out of Bell Labs, including Unix, and of course Linux, and now the OSX operating system for Macintosh all started at Bell Labs. And the most powerful compiler, and I think the most widely used compiler, was also created at Bell Labs. It was called the C compiler, A and B were its predecessors, and C was so good that people stopped there, and now that's it for the world. Every computer has to have a C compiler now, whether it's a big computer or a little tiny DSP chip. So that's where that came from.
Interviewer 23:03
Very interesting. You had mentioned, um, the envelope before, and I just wonder, were there other applications for that before the music programs?
Max V. Mathews 23:18
Other applications for what?
Interviewer 23:22
Well, for the process of, like, the use of envelope and pitch changes and.
Max V. Mathews 23:29
Ah, well, most of that is specific to music. Now, there are plenty of speech compression programs, and there are also music compression programs. And they make use of many ways of compressing sound. But I think the most interesting and most important today is compression of speech and music that is based on a property of the human ear. And this is called masking. And if you have a loud sound and a very soft sound, the loud sound will make it completely impossible to hear the soft sound. You won't hear it at all. And in fact, if you have a component in a sound, let's say a frequency band, which is loud, and the adjacent frequency band is very soft, why, you can't hear the soft frequency band. So that means, as far as speech coding goes, that you only have to send information to encode the loud things. And you do not have to send any or very little information to encode the soft things that are occurring while the loud things are happening. And this is how MP3, this is one of the important factors in MP3 and in speech codes that enable us to send and record and play back good music and good speech with very little bandwidth. How to send speech over Skype and other devices that send it over the Internet entirely digitally and without an enormous bandwidth. So I've forgotten the question that I was answering there, but anyway, this is one of the useful directions that has come out of the acoustic research in the last decades.
Interviewer 26:01
That's very interesting. Could you give us a little information, the background on Bell Labs and some of the key players?
Max V. Mathews 26:10
I can give you information about the most important players there, which were the members of the research department at Bell Labs. AT&T got started based on a patent of Alexander Graham Bell as a telephone network, and there was a lot of technology needed to implement Bell's patent, and so AT&T set up a scientific and technical group in New York City originally to do this, and that became a separate sub-company owned by AT&T called telephone laboratories. It grew to have a number of different parts, one of which was research, and that was a fairly small part of the company. The major people were in the development areas that took the research ideas and then converted them into products that were then supplied to the telephone companies. Originally and almost to the end, the research department consisted entirely of PhDs, usually in the field of physics and mathematics, then gradually some chemical departments were added to this, but a very select group. At that time, the telephone system was a regulated monopoly so that there was only one telephone company in almost the entire country. That made sense because there was no real reason for having two networks of wires connecting the houses together, and that was a very expensive part of the system. This then became a great source of income, and a very small portion of this income financed the research department. The research department didn't directly try to do things that would make profits, rather it tried to do things that were useful in the world of communication. They had a great deal of freedom in deciding what they thought would be useful.The sort of golden age of research at Bell Labs, at least in my horizon, started with the invention of the transistor to replace vacuum tubes for amplifying signals. This was done by what we call solid state physicists, physicists who understand how crystal materials interact with electrons, and how you can make amplifiers and get controlled voltages out of these. Then acoustic research was set up to apply the technology and to understand how people, how their ear works, what they need to understand speech, what they need to like speech, and what's dangerous about sounds, if they're too loud. The threshold of hearing and basic things about human hearing were part of that group. Now, the golden age of research at Bell Labs was really, well, it started out with the idea that Bell and his associates had that one should support a research group with an adequate amount of money. but it continued with one man, William O. Baker, who was the Vice President of Research. He both maintained the very selective standards of the people in the group, and he guarded the freedom of choice of how they would use the money, what they would do research on, very, very zealously, so that he insisted that AT&T provide him with the money to run the research department without strings attached, and his associates would decide how they would spend this money.Finally, he kept the size of the research group very limited. When I went to Bell Labs in 1955, there were about 1,000 people in the research department, and Bell Labs was about 10,000.
Max V. Mathews 32:10
When I left in 1987, there were still about 1,000 people in the research department. The rest of the Bell Labs had about 30,000 people, so he insisted that everyone use their resources wisely and not try to grow. This lasted until the Consent Decree in about 1980, which broke up the Bell System into seven operating areas, separate companies, and a company called AT&T, which would contain the Bell Labs, the research part, and also the Western Electric, which was the manufacturing arm that would provide telephone equipment to the operating companies, as it always had. But it opened the whole thing to competition, and also by that time digital transmission was coming in. In contrast to analog transmission of sound, which is very fragile, and if you want to send a conversation from San Francisco to New York or to Paris by analog, that means you really have to send it over carefully controlled analog equipment that really means all the equipment needs to be run by one company. But when digital things came along, then you could pass the digits on from between many, many companies in many, many ways. So essentially, the Telephone Research Lab no longer had the support that it did with this controlled monopoly, and so it was no longer possible really to support this group. It's expensive even to run a thousand people. The budget was something like $200 million a year. So that's my view of research in the part of Bell Labs. It was a wonderful time. It was a time when there was, of course, in the Second World War and afterwards, a strong military research group at Bell Labs and development group and things like the Nike anti-aircraft missile were developed there and many other things. Underwater sound was also another branch of the military research. I think the military research actually still goes on. Bell Labs eventually split up and became Lucent, which is the name you probably know it by. And now it's amalgamated with the French company Alcatel, so it's Alcatel-Lucent. And it's no longer limited to working in the field of communications as the original AT&T was. As a monopoly, it could not work in any field. It was allowed to work in the movie field, though, and developed sound techniques for movie film in the 1920s.
Interviewer 36:26
Was it still in New York when you joined them?
Max V. Mathews 36:29
No, it had moved, well, they still had the West Street Laboratories in New York, although they subsequently closed them maybe in 1960. But its central office was in New Jersey, Murray Hill, New Jersey, about 30 miles west of New York City, which could communicate to New York City easily on the train. And AT&T's headquarters at that time was still in New York City. And then it had other facilities in New Jersey, primarily at Holmdel, which was about 50 miles south of Murray Hill and Whippany, which was about 10 miles north. But it had other laboratories connected more with products near Chicago and Indiana and became more diversified, which was a problem.
Interviewer 37:35
How so?
Max V. Mathews 37:36
Oh, just the fact that it's a lot easier to think of something new by going to lunch with your friends and talking with them than it is to call them up over telephone in Chicago from Murray Hill.
Interviewer 37:59
Do you think, based on what you were doing and what others were doing at Bell Labs, that it is correct to say that what [[Bob Moog]] and [[Don Buchla]] were doing were the first in their fields for synthesized music?
Max V. Mathews 38:22
Well, saying what's first is always problematic, and I don't much try to speculate there. The thing that was interesting was that Moog and Buchla and myself, both, all three of us developed what I called a block diagram compiler. A compiler is not the right word. In the case of Buchla and Moog, they were modular synthesizers so that you could have a bunch of modules and plug them together with patch cords that a musician, the user, could plug them together in any way he wanted. They were analog modules, and I made the digital equivalent of most of those, or they made the analog equipment of mine, the oscillator, of course, and the attack and decay generators and the filters and the mixers and things like that. The computer had at least the initial advantage that the computer memory could also contain the score of the music, and in the early Moog things it was harder to put the score into an analog device. They did gradually introduce what they called sequencers, which is a form of score, but it never became as general as what you could do with a digital computer, and it never became as general as what you can do with MIDI files.And do you know what the difference is between a MIDI file and MIDI commands? Well, a MIDI command has no execution time attached to it per se. Just a command that lets you turn on a note in some synthesizer by some other keyboard that sends a standard command, the MIDI command, to the synthesizer.And this was an enormous advance for analog equipment or combination digital analog because the MIDI file itself is digital. But it was an enormous communication standard, very reluctantly entered into by the big companies. Yamaha, I don't think, was at the beginning of this. It was [[Dave Smith]] that, I've forgotten his name of his company.
Interviewer 41:14
Sequential circuit?
Max V. Mathews 41:14
Sequential circuits, and Roland and one other company that were the initiators of the MIDI commands. Then people figured out that if you put a sequence of these commands into a computer that would play them one after the other, and if you put a time code in that said when to play them, or really the delta time, how long it is between playing one command and playing the next command, then you could encode a complete piece of music as a MIDI file, and so this was another really great breakthrough that Smith and Roland and this other company did.
Interviewer 42:06
Yeah, absolutely. What role, if any, did musique concrète play in the evolution of all of this?
Max V. Mathews 42:16
Um... Oh, musique concrète started before all this came along, and the technology used was the tape recorder technology, and changing the speed of tapes and making tape loops, which play something repetitiously, and being able to splice snippets of tape with various sounds on them, so you could make a composition, for example, by splicing the tapes of various pitches, and that was a very successful and a very tedious operation, and one of the things that I tried to do was to make the computer do the tedious part of it, which it does very well, and make the composer think more about the expressive part. Now people argue a lot about musique concrète, and what was Stockhausen's alternate thing where he generated all sounds, not by recording real sources, but by using oscillators, I think. I've forgotten the name for that, but anyway, that now, I think, is an absolutely meaningless argument, because digitized sound is so universal that the sources of the sound can either come from nature, from recordings of instruments, sampled things, or they can be synthesized, and you can use FM techniques, or additive synthesis, or a myriad of other ways of making your sound. So I don't really think it's worth hashing over this very old conflict, and I guess [[Pierre Schaffer]] died a number of years ago.
Interviewer 44:50
Yeah.
Max V. Mathews 44:51
Stockhausen is still around. Chowning's FM synthesis really started out as a purely synthesized sound with no recording of natural sounds being involved.But now most synthesizers use samples. They process these samples in ways, including FM ways, to get the timbre that the person wants.
Interviewer 45:21
And did you know John before...
Max V. Mathews 45:26
John was studying as a grad student at Stanford, and he and Risset too read a paper I wrote in Science Magazine about the Music 3 program, and he came back to Bell Labs and spent a day with me, and he was very bright, and he understood what I was doing instantly, and he went back to Stanford and wrote his own music program, and then he tied up with the artificial intelligence laboratory that [[John McCarthy]] had set up at Stanford, and they had a very good computer, a DEC PDP-10, which in my mind was by far the best computer that existed in those days. So John could, at night when the AI people were home sleeping, he could use a computer for making music on these programs, and so he made wonderful music, and he, well, one of the things that Risset found was that in order to be interesting, the spectrum of a sound has to change over the duration of a note, and if the spectrum is constant over the note, why your ear very rapidly gets tired of the sound and doesn't think it's beautiful or charming, and so Risset used additive synthesis with a lot of oscillators and changing their amplitude, their outputs to make a changeable spectrum, and he could make very good instrumental sounds and other sounds this way, but it was very expensive, and John found that by using frequency modulation in a way that it had never been used for communication purposes, that he could also make the spectrum change over notes and do similar things to what Risset did with additive synthesis, and this was much more efficient. It took less computer power to do that, and he also, John was a very good salesman. He persuaded the Yamaha company to design a chip to do FM synthesis, and this was the Yamaha DX7 computer, and sort of overnight that brought down the price of an entry-level system that could make interesting music from a [[PDP-11]] computer costing about $100,000 to a DX7 costing about $2,000, and of course that increased the number of people who were using this from, I don't know, maybe a ratio of a thousand to one increase from the decrease in the cost. So anyway, as I say, John visited me in the early 60s, and then he went back and did his thing at Stanford, and Risset spent several years at Bell Labs in the 60s, and then he went back to France, and gradually got a digital system going there, and persuaded [[Pierre Boulez]] that, or maybe Boulez persuaded himself that there should be a computer part of the IRCAM laboratory that Boulez had talked Pompidou into supporting in France, and Risset was put in charge of that laboratory. Risset persuaded me and Boulez that I should spend some time there. I continued to work at Bell Labs, helping set up IRCAM.
Interviewer 49:41
Hm.
Max V. Mathews 49:41
I was the first scientific director there. It was a very interesting job.
Interviewer 49:52
What sort of things made it so interesting for you there?
Max V. Mathews 49:56
Oh, the excitement of working in Paris, trying to learn how to speak a little French. Getting a system going with a PDP-10 computer, which the French had enough money to buy, and getting the analog-to-digital and digital-to-analog parts on it. Using them, they had some very good studio rooms so that you could do good psychoacoustic research. You need a nice quiet room to listen to things in, and IRCAM had that. The rooms were connected to the computer so you could make good test sounds to evaluate. Working with Risset and Gerald Bennett, who I still work with very much. [[David Wessel]], of course, came over there. It's about a decade or two. Working with the musicians there and the technical people. It was an exciting time in my life.
Interviewer 51:09
Going back to John for just a second. From your perspective, what was the importance of FM synthesis?
Max V. Mathews 51:21
Well, the importance was that you could make good music with it. That also led to the [[SAMSON BOX|Samson Box]], which could do real-time FM synthesis, as could the DX7, but more powerful synthesis. And so the Samson Box was designed and built here, I guess, in the Bay Area by Peter Samson. And for about a decade, it had a monopoly on the rapid and efficient synthesis of really powerful music, a monopoly at John's CCRMA Laboratory. And so just an enormous string of very excellent music came out of that, and good musicians from all over were attracted to CCRMA because of that machine. Now, you could make this same music, but at a much slower time on a PDP-10 by itself, but the Samson Box made a second of music in a second of time. That was real time. It was intended to be used for live performance of computer music. That was the intention, and it could have done that, but it really was never capitalized on because, A, you had to have a PDP-10 to drive the Samson Box, and B, you had to have the Samson Box, which was about the size of a big refrigerator. And so it really wasn't practical to take this on the stage where you have to do a performance. And so it produced essentially tape music, but rich tape music. The lifetime of the Samson Box was really ended by the advent of the laptop computers, and the laptop computers getting so powerful that they now can do what the Samson Box can do ten times faster than the Samson Box. Either the Macintosh or the PC that I have can do that. They, of course, surpassed the PDP-10, so the power of computers that you can carry around in your briefcase is greater than musicians know how to utilize. The world is no longer limited, the musical world, by the technology and what it can do. Instead, it's very much limited by our understanding of the human ear and the human brain and what people want to hear as music, what excites them, what makes them think it's beautiful. And that's the continuing forefront of research and future development for music entirely.
Interviewer 55:00
What exactly is an oscillator and were the oscillators that theremin used the same oscillators that were used in the early days of Bell Labs? Can you talk a little bit about that?
Max V. Mathews 55:16
Yeah, they were the same oscillators. They were based on the vacuum tube, the triode that de Forest, and maybe others, invented. And that made it possible to make radios and do things. And Theremin's work came along very shortly after the vacuum tube came along, and long-distance telephony essentially had to use vacuum tubes.
Interviewer 55:48
What made theremin's use of the oscillator so unique, do you think?
Max V. Mathews 55:55
Oh, he found that if you had a somewhat unstable oscillator, you could influence the pitch of the oscillator by moving your hand in the electric field produced by the oscillator and an antenna attached to the oscillator. And so this was a way of controlling the pitch. And he also used the same technique for controlling the loudness of the sound. So that was his real contribution.
Interviewer 56:34
Did you ever have a chance to meet him?
Max V. Mathews 56:35
Oh yeah, he came over with one of his daughters, I think, to Stanford and gave a lecture and a concert. I played with the daughter. She played the theremin in Rachmaninoff's Vocalise, and she did the vocal part, which the theremin is good for. I did the orchestral accompaniment on one of my instruments, the radio baton.
Interviewer 57:13
Very interesting. What sort of guy did you find him to be?
Max V. Mathews 57:18
Oh, he, at the age of 90, could out-drink and out-stay me in the evening, and I stayed around until midnight, and then I went home and collapsed. Yeah, I think he was a universal man, a citizen of the world.
Interviewer 57:37
You mentioned a music baton, which is something I wanted to just briefly talk about. You had several instruments that you really helped design. Was that the first?
Max V. Mathews 57:47
Well, Music 1 was the first, and then I got interested in real-time performance. The radio baton and the conductor program were intended as a live performance instrument. The conductor program supplied the performer with a virtual orchestra, and the performer was a conductor, not an instrument player, or at least a simulated thing. So he would beat time using one baton in one hand, as a conductor does, and the conductor program would follow his beat. He could speed up or slow down. Then he would use the other hand to provide expression to the music, the loudness or the timbre, and both of these batons could be moved in three-dimensional space and could send XYZ information to the computer. That's where the radio part came in, to track the batons.
Interviewer 58:55
Interesting. How many of those were made?
Max V. Mathews 58:59
Oh, about, they're still being made, about 50 of them.
Interviewer 59:05
Is there any part of that that you wish you could have added a feature or something to that didn't get worked in right away?
Max V. Mathews 59:19
I'm still adding features to them, and so originally they were a mechanical drum that you had to actually hit to sense, but it would sense where you hit it. Then it became a radio device. The radio technology was designed by a friend from Bell Labs named Bob Bowie. He's retired and lives in Vermont now. Anyway, this meant you didn't have to touch anything. You could wave these things in three-dimensional space, and that was nice, a great freedom. Originally, you had to have wires attached to the batons to power the little transmitters that were in the ends of the batons. The latest model is wireless, and [[Tom Oberheim]] helped me design and build the batons. We still worked together, and I went to breakfast with him before I came here. He and I together made the radio baton version of it, the cordless radio baton. So that is my main live performance instrument, and I love live performance, I think. Performing music and playing with other people is one of the real joys of life. Chamber music is wonderful.
Interviewer 01:00:56
Well said. And just because I don't want to insult the enormous contribution you did at Stanford, I just wanted to acknowledge that and ask you, was that a good run for you?
Max V. Mathews 01:01:11
I still go down there a couple of weeks, days a week, even though I retired last September officially. But yes, and I've enjoyed working with John, for example, and [[Bill Schottstadt]], and many of the other people in CCRMA. It's a great group. A very, again, a very free group where people aren't told what to do. They have to figure out what they want to do.