[obsidian] vault backup: 2025-01-20 18:02:02

松浦 知也 Matsuura Tomoya 2025-01-20 18:02:02 +09:00
parent a5bc82518b
commit 707d58381d


[Max Mathews Full Interview | NAMM.org](https://www.namm.org/video/orh/max-mathews-full-interview)
Transcription is powered by [Notta](https://notta.ai); I manually corrected proper nouns such as personal and institution names, but there may still be errors.
## AI Summary
This transcript is a wide-ranging interview with a pioneer of computer music and digital sound synthesis. Mathews recounts his years at Bell Labs and later at Stanford University, tracing the path from early music programming to the development of influential music software. He touches on his early musical experiences, including playing the violin and discovering the emotional power of music during his Navy service. He details the evolution of the Music programs from Music 1 to Music 5, highlighting key innovations such as the block diagram compiler and the wavetable oscillator, and explains the significance of his work on speech coding and digital sound processing at Bell Labs. He also discusses the development of real-time performance instruments such as the Radio Baton, the impact of FM synthesis, and the evolution of computer music technology from large mainframes to modern laptops, and remarks on the history and research culture of Bell Labs and on key figures such as John Chowning and Pierre Boulez. Throughout the interview, the importance of understanding human perception in developing music technology is emphasized.
### Chapters
00:00:11 Childhood musical background and education
The speaker describes his early musical experiences: learning the violin in high school and continuing in orchestras and chamber music groups. He also recalls his Navy service in Seattle, where he discovered the emotional power of music in a listening room stocked with shellac and vinyl records.
00:01:24 Education and early career
The speaker details his path from Nebraska through the Navy to Caltech for electrical engineering, and finally to MIT, where he encountered computers and analog computing systems.
00:10:48 Development of the Music programs
The speaker traces the evolution of the Music programs: the limitations of Music 1, then Music 2's four-voice system and wavetable oscillator, Music 3's block diagram compiler, and finally Music 5's FORTRAN implementation.
00:26:10 Bell Labs and acoustics research
The speaker gives an overview of his work at Bell Labs, focusing on speech-coding research, digital tape technology, and systems for compressing speech and music transmission.
00:51:21 FM synthesis and Stanford University
The speaker discusses the significance of FM synthesis, John Chowning's contributions, the development of the Samson Box at Stanford, and the evolution of music technology from large mainframes to modern laptops.
00:57:47 Instruments and performance technology
The speaker explains how the Radio Baton and the Conductor program developed as instruments for live performance, detailing their evolution from mechanical to radio technology.
### Next action items
00:59:19 The speaker mentioned continuing to add features to the Radio Baton system.
00:59:05 The speaker mentioned his ongoing collaboration with Tom Oberheim on the Radio Baton project.
Interviewer 00:06
Thank you for having a few minutes for me, I do appreciate it. 
[[Max Mathews|Max V. Mathews]] 00:09
Okay. 
Interviewer
Absolutely. Tell me about Music 2. I'm sort of curious about that.
Max V. Mathews 13:10
Well, Music 1 had only one voice and only one wave shape, a triangular wave, an equal slope up and equal slope down. And the reason was that the fastest computer at the time, the [[IBM 704]], was still very slow. And the only thing it could do at all fast was addition. And if you think about it, each sample could be computed from the last sample by simply adding a number to it. So the time was one addition per sample. Well, the only thing the composer had at his disposal was the steepness of the slope, how big the number was. So that would determine how loud the waveform was, and the pitch that you were going to make, and the duration of the note. And so that wasn't very much, and you didn't have any polyphony there. So they asked for making a program that could have more voices. And I made one with four voices. And I made one where you could have a controlled wave shape so that you could get different timbres, as much as the wave shape contributes to the timbre. Now, in a computer, calculating a sine wave, or a damped sine wave, or a complicated wave is pretty slow, especially in those days. So I invented the wavetable oscillator, where you would calculate one pitch period of the wave and store it in the computer memory, and then read this out at various pitches, so that this then could be done basically by looking up one location in the computer memory, which is fast. And I also put an amplitude control on the thing by multiplying the wave shape by a number. So this cost a multiplication and a couple of additions. So it was more expensive. By that time, computers had gotten maybe 10 or 100 times as fast as the first computer. So it really was practical. So that was Music 2. And something that most listeners would call music came out of that. And some professional composers used it. But they always wanted more. In particular, they didn't have any things like a controlled attack and decay, or vibrato, or filtering, or noise, for that matter. So it was a perfectly reasonable request. But I was unwilling to contemplate adding these kinds of devices, one at a time, to my music program. So what I consider my really important contribution, that still is important, came in Music 3. And this was what I call a block diagram compiler. And so I would make a block, which was this waveform oscillator. And it would have two inputs. One was the amplitude of the output. And the other was the frequency of the output. And it would have one output. And I would make a mixer block, which could add two things together and mix them. And I made a multiplier block in case you wanted to do simple ring modulation. And I made a noise generator. And essentially, I made a toolkit of these blocks that I gave to the musician, the composer. And he could interconnect them in any way he wanted to make as complex a sound as he wanted. And this was also a note-based system, so that you would tell the computer to play a note.
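As a concrete illustration of the wavetable oscillator described above, here is a minimal Python sketch, assuming a 512-entry table and a plain truncating lookup (the table size, the names, and the absence of interpolation are illustrative choices, not details from Music 2): one cycle of the wave is computed once, and each output sample then costs a table lookup for the pitch plus one multiply for the amplitude.

```python
import math

TABLE_SIZE = 512
SAMPLE_RATE = 44100

# Compute one pitch period of the wave once and store it in memory.
# Any single-cycle shape works; a sine is used here for simplicity.
table = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def wavetable_osc(freq_hz, amp, n_samples):
    """Read the stored cycle back at an arbitrary pitch and amplitude."""
    phase = 0.0
    increment = freq_hz * TABLE_SIZE / SAMPLE_RATE  # table steps per output sample
    out = []
    for _ in range(n_samples):
        out.append(amp * table[int(phase) % TABLE_SIZE])  # lookup + one multiply
        phase += increment
    return out

samples = wavetable_osc(440.0, 0.5, SAMPLE_RATE)  # one second of a 440 Hz tone
```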
Max V. Mathews 17:50
And you would give the parameters that you wanted the computer to read for that note. You almost always specified the pitch and the loudness of the note. But you could have an attack and decay block generator included in this, and you could say how fast you wanted the attack and how long you wanted the decay to last, or you could even make an arbitrary wave shape for the envelope of the sound. And so this really was an enormous hit, and it put the creativity then, not only for composing the notes, the melodies, or the harmonies that you wanted played, on the musician, on the composer, but it gave him an additional task of creating the timbres that he wanted. And that was a mixed blessing. He didn't have the timbres of the violin and the orchestral instruments to call upon that he understood. He had to learn how timbre was related to the physical parameters of the waveform. And that turned out to be an interesting challenge for musicians, that some people learn to do beautifully and others will never learn. The man who really got this started at the beginning was [[Jean Claude Risset]], a French composer and physicist who came to Bell Labs and worked with me. It was one of my great good luck and pleasures that he was around. And so he made a sound catalog that showed how you could create sounds of various instruments and sounds that were interesting but were definitely not traditional instruments. And that work still goes on. Risset is coming here to give some lectures at Stanford on April 3rd. He'll be here for the entire spring quarter.
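Since the attack and decay block keeps coming up, here is a minimal sketch of the idea, assuming simple linear ramps (the actual Music 3 generator and its parameters may well have differed): the envelope is just another signal, multiplied sample by sample against the oscillator output to shape a note.

```python
def attack_decay_env(attack_s, decay_s, total_s, sample_rate=44100):
    """Linear attack/decay envelope: ramp up, hold, ramp down."""
    n_total = int(total_s * sample_rate)
    n_attack = int(attack_s * sample_rate)
    n_decay = int(decay_s * sample_rate)
    env = []
    for i in range(n_total):
        if i < n_attack:                      # rising attack segment
            env.append(i / max(n_attack, 1))
        elif i >= n_total - n_decay:          # falling decay segment
            env.append((n_total - i) / max(n_decay, 1))
        else:                                 # hold at full level
            env.append(1.0)
    return env

# Shaping a note: multiply the envelope against an oscillator's output,
# e.g. against the wavetable_osc sketched earlier:
#   note = [e * s for e, s in zip(env, wavetable_osc(440.0, 0.5, len(env)))]
env = attack_decay_env(0.01, 0.25, 1.0)
```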
Interviewer 20:03
Hmm, very interesting.
Max V. Mathews 20:06
But to finish up this series, that got me to Music 3. Along came the best computer that IBM ever produced, the IBM 704... 7094, excuse me. It was a transistorized computer, it was much faster, and it had quite a long life. They finally stopped supporting it in the mid-1960s, I guess. I had to write Music 4, simply reprogramming all the stuff I had done for the previous computer, for this new computer, which was a big and not very interesting job. So, when the 7094 was retired, and I had to consider another computer, I wrote Music 5, which is essentially just a rewrite of Music 3 or Music 4, but in a compiler language. FORTRAN was the compiler that was powerful and existed in those days. And so, when the next generation beyond the Music 5 computers came along, and the PDP-10 was a good example of a computer that ran well with music, I didn't have to rewrite anything. I could simply recompile the FORTRAN program, and that's true today. Now the sort of most direct descendant of Music 5 is a program written by [[Barry Vercoe]], who's at the Media Lab at MIT, and it's called [[CSound|Csound]], and the C in Csound stands for the C compiler. Now you're asking about Bell Labs, and many wonderful things came out of Bell Labs, including Unix, and of course Linux, and now the OS X operating system for Macintoshes, all started at Bell Labs. And the most powerful compiler, and I think the most widely used compiler, was also created at Bell Labs. It was called the C compiler. A and B were its predecessors, and C was so good that people stopped there, and now that's it for the world. Every computer has to have a C compiler now, whether it's a big computer or a little tiny DSP chip. So that's where that came from.
Interviewer 23:03
Very interesting. You had mentioned the envelope before, and I just wonder: were there other applications for that before the music programs?
Max V. Mathews 23:18
Oh, just the fact that it's a lot easier to think of something new by going to l…
Interviewer 37:59
Do you think, based on what you were doing and what others were doing at Bell Labs, that it is correct to say that what [[Bob Moog]] and [[Don Buchla]] were doing were the first in their fields for synthesized music? 
Max V. Mathews 38:22
Well, saying what's first is always problematic, and I don't much try to speculate there. The thing that was interesting was that Moog and Buchla and myself, all three of us, developed what I called a block diagram compiler. A compiler is not the right word. In the case of Buchla and Moog, they were modular synthesizers, so that you could have a bunch of modules and plug them together with patch cords, so that a musician, the user, could plug them together in any way he wanted. They were analog modules, and I made the digital equivalent of most of those, or they made the analog equivalent of mine: the oscillator, of course, and the attack and decay generators and the filters and the mixers and things like that. The computer had at least the initial advantage that the computer memory could also contain the score of the music, and in the early Moog things it was harder to put the score into an analog device. They did gradually introduce what they called sequencers, which is a form of score, but it never became as general as what you could do with a digital computer, and it never became as general as what you can do with MIDI files. And do you know what the difference is between a MIDI file and MIDI commands? Well, a MIDI command has no execution time attached to it per se. It's just a command that lets you turn on a note in some synthesizer, from some other keyboard that sends a standard command, the MIDI command, to the synthesizer. And this was an enormous advance for analog equipment, or combination digital-analog, because the MIDI file itself is digital. But it was an enormous communication standard, very reluctantly entered into by the big companies. Yamaha, I don't think, was at the beginning of this. It was [[Dave Smith]] that... I've forgotten the name of his company.
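To make that distinction concrete: a live MIDI command is a few bytes with no timing of its own, while a MIDI file attaches a delta time to each command so an entire score can be stored and replayed. A small Python sketch (the status bytes are from the MIDI spec; the tick values and notes are arbitrary):

```python
NOTE_ON, NOTE_OFF = 0x90, 0x80  # channel-1 status bytes from the MIDI spec

def note_on(note, velocity):
    """A live MIDI command: three bytes, executed the moment they arrive."""
    return bytes([NOTE_ON, note, velocity])

# A MIDI-file-style track pairs each command with a delta time (in ticks),
# which is what turns a stream of commands into a storable score.
track = [
    (0,   note_on(60, 100)),           # middle C, at once
    (480, bytes([NOTE_OFF, 60, 0])),   # release 480 ticks later
    (0,   note_on(64, 100)),           # E, at the same instant as that release
    (480, bytes([NOTE_OFF, 64, 0])),
]

for delta_ticks, command in track:
    print(delta_ticks, command.hex())
```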
Interviewer 41:14
And did you know, John, before...
Max V. Mathews 45:26
John was studying as a grad student at Stanford, and he and Risset, too, read a paper I wrote in Science magazine about the Music 3 program, and he came back to Bell Labs and spent a day with me, and he was very bright, and he understood what I was doing instantly, and he went back to Stanford and wrote his own music program, and then he tied up with the artificial intelligence laboratory that [[John McCarthy]] had set up at Stanford, and they had a very good computer, a DEC PDP-10, which in my mind was by far the best computer that existed in those days. So John could, at night when the AI people were home sleeping, use the computer for making music on these programs, and so he made wonderful music. Well, one of the things that Risset found was that in order to be interesting, the spectrum of a sound has to change over the duration of a note, and if the spectrum is constant over the note, why, your ear very rapidly gets tired of the sound and doesn't think it's beautiful or charming. And so Risset used additive synthesis with a lot of oscillators, changing their amplitudes, their outputs, to make a changeable spectrum, and he could make very good instrumental sounds and other sounds this way, but it was very expensive. And John found that by using frequency modulation, in a way that it had never been used for communication purposes, he could also make the spectrum change over notes and do similar things to what Risset did with additive synthesis, and this was much more efficient. It took less computer power to do that. And John was also a very good salesman. He persuaded the Yamaha company to design a chip to do FM synthesis, and this was the Yamaha DX7, and sort of overnight that brought down the price of an entry-level system that could make interesting music from a [[PDP-11]] computer costing about $100,000 to a DX7 costing about $2,000, and of course that increased the number of people who were using this by, I don't know, maybe a ratio of a thousand to one, from the decrease in the cost. So anyway, as I say, John visited me in the early 60s, and then he went back and did his thing at Stanford, and Risset spent several years at Bell Labs in the 60s, and then he went back to France, and gradually got a digital system going there, and persuaded [[Pierre Boulez]], or maybe Boulez persuaded himself, that there should be a computer part of the IRCAM laboratory that Boulez had talked Pompidou into supporting in France, and Risset was put in charge of that laboratory. Risset persuaded me and Boulez that I should spend some time there. I continued to work at Bell Labs, helping set up IRCAM.
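A compact way to see why FM was so efficient: a single modulating oscillator inside the phase of a carrier, y(t) = A sin(2πf_c t + I sin(2πf_m t)), produces a whole family of sidebands, and sweeping the modulation index I across the note makes the spectrum evolve in time, the property Risset otherwise obtained with banks of additive oscillators. A hedged Python sketch (the linear index ramp and the parameter values are illustrative, not Chowning's):

```python
import math

SAMPLE_RATE = 44100

def fm_note(carrier_hz, mod_hz, peak_index, dur_s, amp=0.5):
    """Two-oscillator FM with a modulation index that decays over the note."""
    n = int(dur_s * SAMPLE_RATE)
    out = []
    for i in range(n):
        t = i / SAMPLE_RATE
        index = peak_index * (1 - i / n)  # decaying index: the spectrum changes over the note
        out.append(amp * math.sin(2 * math.pi * carrier_hz * t
                                  + index * math.sin(2 * math.pi * mod_hz * t)))
    return out

samples = fm_note(440.0, 440.0, 5.0, 1.0)  # fc == fm gives a harmonic spectrum
```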
Interviewer 49:41
What sort of things made it so interesting for you there?
Max V. Mathews 49:56
Oh, the excitement of working in Paris, trying to learn how to speak a little French. Getting a system going with a PDP-10 computer, which the French had enough money to buy, and getting the analog-to-digital and digital-to-analog parts on it. Using them, they had some very good studio rooms so that you could do good psychoacoustic research. You need a nice quiet room to listen to things in, and IRCAM had that. The rooms were connected to the computer so you could make good test sounds to evaluate. Working with Risset and Gerald Bennett, who I still work with very much. [[David Wessel]], of course, came over there. It's about a decade or two. Working with the musicians there and the technical people. It was an exciting time in my life.
Interviewer 51:09
Going back to John for just a second. From your perspective, what was the importance…
Max V. Mathews 51:21
Well, the importance was that you could make good music with it. That also led to the [[SAMSON BOX|Samson Box]], which could do real-time FM synthesis, as could the DX7, but more powerful synthesis. And so the Samson Box was designed and built here, I guess, in the Bay Area by Peter Samson. And for about a decade, it had a monopoly on the rapid and efficient synthesis of really powerful music, a monopoly at John's CCRMA laboratory. And so just an enormous string of very excellent music came out of that, and good musicians from all over were attracted to CCRMA because of that machine. Now, you could make this same music, but at a much slower time, on a PDP-10 by itself, but the Samson Box made a second of music in a second of time. That was real time. It was intended to be used for live performance of computer music. That was the intention, and it could have done that, but it really was never capitalized on because, A, you had to have a PDP-10 to drive the Samson Box, and B, you had to have the Samson Box, which was about the size of a big refrigerator. And so it really wasn't practical to take this on the stage where you have to do a performance. And so it produced essentially tape music, but rich tape music. The lifetime of the Samson Box was really ended by the advent of laptop computers, and the laptop computers getting so powerful that they now can do what the Samson Box did, ten times faster than the Samson Box. Either the Macintosh or the PC that I have can do that. They, of course, surpassed the PDP-10, so the power of computers that you can carry around in your briefcase is greater than musicians know how to utilize. The world is no longer limited, the musical world, by the technology and what it can do. Instead, it's very much limited by our understanding of the human ear and the human brain and what people want to hear as music, what excites them, what makes them think it's beautiful. And that's the continuing forefront of research and future development for music entirely.
Interviewer 55:00
Interviewer 56:34
Did you ever have a chance to meet him? 
Max V. Mathews 56:35
Oh yeah, he came over with one of his daughters, I think, to Stanford and gave a lecture and a concert. I played with the daughter. She played the theremin, and we played Rachmaninoff's Vocalise: she did the vocal part, which the theremin is good for. I did the orchestral accompaniment on one of my instruments, the radio baton.
Interviewer
Is there any part of that that you wish you could have added a feature or something?
Max V. Mathews 59:19
I'm still adding features to them. Originally they were a mechanical drum that you had to actually hit to sense, but it would sense where you hit it. Then it became a radio device. The radio technology was designed by a friend from Bell Labs named Bob Bowie. He's retired and lives in Vermont now. Anyway, this meant you didn't have to touch anything. You could wave these things in three-dimensional space, and that was nice, a great freedom. Originally, you had to have wires attached to the batons to power the little transmitters that were in the ends of the batons. The latest model is wireless, and [[Tom Oberheim]] helped me design and build the batons. We still work together, and I went to breakfast with him before I came here. He and I together made the radio baton version of it, the cordless radio baton. So that is my main live performance instrument, and I love live performance. I think performing music and playing with other people is one of the real joys of life. Chamber music is wonderful.
Interviewer 01:00:56
Well said. And just because I don't want to insult the enormous contribution you…
Max V. Mathews 01:01:11
I still go down there a couple of days a week, even though I retired last September officially. But yes, I've enjoyed working with John, for example, and [[Bill Schottstadt]], and many of the other people at CCRMA. It's a great group. Again, a very free group, where people aren't told what to do. They have to figure out what they want to do.