diff --git a/src/main.tex b/src/main.tex index f63d3e0..3c22bc3 100644 --- a/src/main.tex +++ b/src/main.tex @@ -184,35 +184,29 @@ \maketitle \begin{abstract} -This is the template file for the proceedings of the third \F{} -International Conference (IFC-24). -This template has been derived from IFC-20 templates and aims at producing -conference proceedings in electronic form. -The format is essentially the one used for ICASSP conferences. Please use -either this \LaTeX{} or the accompanying Word formats when preparing your -submission. +This paper proposes \lambdammm, an intermediate representation, based on the call-by-value simply-typed lambda calculus, for programming languages for music that deal with synchronous signal processing, together with a virtual machine and an instruction set to run it. Digital signal processing can be represented with a syntax that incorporates the internal states of delay and feedback into the lambda calculus. Because \lambdammm is a superset of the lambda calculus, users can construct a generative signal-processing graph and execute it under identical semantics. On the other hand, a limitation of this specification is that, when dealing with higher-order functions, users have to be aware of whether a term is executed during the evaluation of the global environment or during the DSP execution; multi-stage computation is suggested as a way to solve this issue. \end{abstract} \section{Introduction} \label{sec:intro} -A number of programming languages for sound and music have been developed to date, but not many have strongly formalised semantics. One language that is both rigorously formalised and practical is Faust\cite{Orlarey2004}, which combines blocks with inputs and outputs with five primitive operations - parallel, sequential, split, merge and recursive connection. By having basic arithmetics and delay as a primitive block, any type of signal processing can be written in Faust. In a later extension, macros based on a term rewriting system has introduced that allows the higher abstraction of systems with an arbitrary number of inputs and outputs\cite{graf2010}. +Many programming languages for sound and music have been developed, but only a few have strongly formalized semantics. One language that is both rigorously formalized and practical is Faust\cite{Orlarey2004}, whose block diagram algebra (BDA) combines blocks with inputs and outputs through five primitive operations: parallel, sequential, split, merge, and recursive connection. By having basic arithmetic and delay as primitive blocks, any type of signal processing can be written in Faust. In a later extension, macros based on a term rewriting system were introduced, allowing a higher abstraction of systems with an arbitrary number of inputs and outputs\cite{graf2010}. -This strong abstraction capability through formalisation enables Faust to be translated to various backends, such as C++ and Rust. On the other hand, BDA lacks theoretical and practical compatibility with common programming languages: although it is possible to call external C functions in Faust, the assumption is basically pure function calls without memory allocation and deallocation. Therefore, while it is easy to embed Faust in another language, it is not easy to call another language from Faust. +This strong abstraction capability through formalization enables Faust to be translated to various backends, such as C++ and Rust. On the other hand, the BDA lacks theoretical and practical compatibility with common programming languages.
Although it is possible to call external C functions in Faust, those functions are assumed to be pure functions without heap memory allocation or deallocation. Therefore, while it is easy to embed Faust in another language, it is not easy to call another language from Faust. -In addition, macros in Faust are independent as a term rewriting system that generates BDA based on pattern matching. Therefore, even though the distinction between real and integer types does not exist in BDA, the arguments for pattern matching implicitly required to be an integer, which causes compile errors. This implicit typing rules are not intuitice for novice users. +In addition, macros in Faust form an independent term rewriting system that generates BDA based on pattern matching. Therefore, even though the distinction between real and integer types does not exist in BDA, the arguments for pattern matching are implicitly required to be integers, which causes compile errors. These implicit typing rules are not intuitive for novice users. -Proposing a computational model for signal processing that is based on the more generic computational models, such as the lambda calculus has the potential to interoperate between many different general-purpose languages, and also facilitate the appropriation of existing optimisation methods and the implementation of compilers and run-time. +A computational model for signal processing based on a more generic computational model, such as the lambda calculus, has the potential to interoperate with many different general-purpose languages, and also makes it easier to reuse existing optimization methods and to implement compilers and runtimes. -Currently, it has been proved that BDA can be converted to a general-purpose functional language in the form of using Arrows, a higher-level abstraction of monads\cite{gaster2018}. But higher-order functions on general-purpose functional languages are often implemented on the basis of dynamic memory allocation and release, which makes it difficult to use them in host languages for the real-time signal processing. +It has been shown that BDA can be translated into a general-purpose functional language using Arrows, a generalization of monads\cite{gaster2018}. However, higher-order functions in general-purpose functional languages are often implemented on the basis of dynamic memory allocation and deallocation, which makes it difficult to use them in host languages for real-time signal processing. -Also, Kronos\cite{norilo2015} and W-calculus\cite{arias2021} are examples of attempts at lambda-calculus-based abstraction, while being influenced by Faust. Kronos is based on the theoretical foundation of System-$F\omega$, a lambda computation in which the type itself is the object of the lambda abstraction (a function can be defined that computes the type and returns a new type). +Kronos\cite{norilo2015} and the W-calculus\cite{arias2021} are further examples of lambda calculus-based abstractions influenced by Faust. Kronos is built on the theoretical foundation of System-$F\omega$, a lambda calculus in which types themselves can be the object of lambda abstraction (a function can be defined that computes over types and returns a new type).
The W-calculus limits the systems it represents to linear time-invariant ones and defines a more formal semantics, aiming at automatic proofs of the linearity and identity of graph topologies. -Previously, the author designed a programming language for music \textit{mimium} \cite{matsuura2021a}. It adds the basic operations of delay and feedback to lambda-calculus, signal processing can be expressed concisely while having a syntax close to that of general-purpose programming languages(especially, the syntax of mimium is designed to be looks like Rust language). +Previously, the author designed a programming language for music, \textit{mimium} \cite{matsuura2021a}. By adding the basic operations of delay and feedback to the lambda calculus, it expresses signal processing concisely while keeping a syntax close to that of general-purpose programming languages (in particular, the syntax of mimium is designed to look like that of Rust). -One of the previous issues with mimium was the inability to compile the codes which contains a combination of recursive or higher-order functions and stateful functions because the compiler can not be determine the data size of the internal state of the signal processing. +One of the previous issues with mimium was its inability to compile code that combines recursive or higher-order functions with stateful functions, because the compiler could not determine the data size of the internal state of the signal processing. -In this paper, the syntax and semantics of \lambdammm, a superset of the call-by-value simply-typed lambda calculus are explained, as a computation model that is supposed to be an intermediate representation of mimium. Also, a virtual machine and its instruction set are proposed to execute this computation model practically. Lastly, a space for optimization for running mimium using \lambdammm will be discussed. +In this paper, I propose the syntax and semantics of \lambdammm, a superset of the call-by-value simply-typed lambda calculus, as a computation model intended to serve as an intermediate representation of mimium. I also propose a virtual machine and its instruction set to execute this computation model practically. Lastly, I discuss the space for optimization when running mimium using \lambdammm. \section{Syntax} @@ -220,14 +214,14 @@ In this paper, the syntax and semantics of \lambdammm, a superset of the call-by \input{syntax.tex} -Definition of types and terms of the $\lambda_{mmm}$ are shown in Figure \ref{fig:syntax_v}. +Definitions of the types and terms of \lambdammm are shown in Figure \ref{fig:syntax_v}. Two terms are added to the usual simply-typed lambda calculus: $delay\ e_1\ e_2$, which refers to the value of $e_1$ from $e_2$ samples before, and the $feed\ x.e$ abstraction, with which the user can refer to the result of the evaluation of $e$ one unit time before as $x$ during the evaluation of $e$ itself. \subsection{Syntactic sugar of the feedback expression in mimium} \label{sec:mimium} -\begin{lstlisting}[label=lst:onepole,language=Rust,caption=\it Example of the code of one-pole filter in mimium.] +\begin{lstlisting}[float,floatplacement=H,label=lst:onepole,language=Rust,caption=\it Example of the code of one-pole filter in mimium.]
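+// self refers to the value this function evaluated to one sample earlier (0 at time 0)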
fn onepole(x,g){ x*(1.0-g) + self*g } @@ -249,6 +243,8 @@ mimium by the author has a keyword $self$ that can be used in function definitio \subsection{Typing Rule} \label{sec:typing} +\input{typing.tex} + Additional typing rules to the usual simply-typed lambda calculus are shown in Figure \ref{fig:typing}. As primitive types, there are a real number type used in most signal processing and a natural number type that is used as the index of delay. @@ -257,7 +253,6 @@ In the W-calculus, a starting point of designing \lambdammm, function types can In \lambdammm, the problem of memory allocation for closures is left to the implementation of the runtime in Section~\ref{sec:vm}, and higher-order functions are allowed. However, the $feed$ abstraction does not allow function types as its input and output. Allowing the return of function types in the $feed$ abstraction means that it is possible to define functions whose processing contents change from time to time. While this may be interesting theoretically, there are currently no practical cases in real-world signal processing, and it is expected to further complicate implementations. -\input{typing.tex} \section{Semantics} @@ -267,7 +262,7 @@ In \lambdammm, the problem of memory allocation for closures is left to the impl An excerpt of the operational semantics of \lambdammm is shown in Figure \ref{fig:semantics}. This big-step semantics is a conceptual explanation of the evaluation: when the current time is $n$, the evaluation environment $t$ samples before can be referred to as $E^{n-t}$, and when the time is less than 0, any term evaluates to the default value of its type (0 for the numeric types). -Of course, if we tried to execute this semantics in a straightforward manner, we would have to redo the calculation from time 0 to the current time every sample, with saving all the variable environments at each sample. In practice, therefore, a virtual machine is defined that takes into account the internal memory space used by delay and feed, and the \lambdammm terms are converted into instructions for that machine before execution. +Of course, if we tried to execute this semantics in a straightforward manner, we would have to redo the calculation from time 0 to the current time for every sample, while saving all the variable environments at each sample. In practice, therefore, a virtual machine is defined that takes into account the internal memory space used by $delay$ and $feed$, and the \lambdammm terms are converted into instructions for that machine before execution. \section{VM Model and Instruction Set} \label{sec:vm} @@ -299,7 +294,7 @@ VM Instructions for \lambdammm differs from Lua VM in the following respects. Instructions in the \lambdammm VM are 32-bit data with an operation tag and 3 operands. Currently, the bit widths of the tag and each operand are all 8 bits\footnote[1]{The reason for this is that it is easy to implement with an \texttt{enum} data structure in Rust, the host language of the latest mimium compiler. The operand bit widths and alignment may be changed in the future.}. -The VM of \lambdammm is a register machine like the Lua VM after version 5, although the VM has no real register but the register number simply means the offset index of call stack from the base pointer at the point of execution of the VM. The first operand of most instructions is the register number in which to store the result of the operation.
+The VM of \lambdammm is a register machine like the Lua VM after version 5, although it has no real registers: a register number simply means the offset index into the call stack from the base pointer at the point of execution of the VM. The first operand of most instructions is the register number in which to store the result of the operation. The list of instructions is shown in Figure \ref{fig:instruction} (basic arithmetic operations are partly omitted). The notation for the instructions follows the Lua VM paper \cite[p.13]{ierusalimschy2005}: from left to right, the name of the operation, a list of operands, and pseudo-code of the operation. When each of the three operands is used as an unsigned 8-bit value, they are denoted as \texttt{A B C}. When an operand is used as a signed integer, the prefix \texttt{s} is added, and when two operand fields are used as one 16-bit value, the suffix \texttt{x} is added. For example, when B and C are merged and treated as a signed 16-bit value, they are denoted as \texttt{sBx}. @@ -342,33 +337,9 @@ ADDF & A B C & R(A) := R(B) as int + R(C) as in \\ \caption{\label{fig:instruction}{\it Instruction sets for VM to run $\lambda_{mmm}$.}} \end{figure*} -\subsection{Compilation to the VM instructions} - -\begin{lstlisting}[float,floatplacement=H,label=lst:bytecodes_onepole,caption=\it Compiled VM instructions of one-pole filter example in Listing \ref{lst:onepole}] - CONSTANTS:[1.0] - fn onepole(x,g) state_size:1 - MOVECONST 2 0 // load 1.0 - MOVE 3 1 // load g - SUBF 2 2 3 // 1.0 - g - MOVE 3 0 // load x - MULF 2 2 3 // x * (1.0-g) - GETSTATE 3 // load self - MOVE 4 1 // load g - MULF 3 3 4 // self * g - ADDF 2 2 3 // compute result - GETSTATE 3 // prepare return value - SETSTATE 2 // store to self - RETURN 3 1 -\end{lstlisting} - -Listing \ref{lst:bytecodes_onepole} shows an basic example when the mimium code in Listing \ref{lst:onepole} is compiled into VM bytecode. When \texttt{self} is referenced, the value is obtained with the \texttt{GETSTATE} instruction, and the internal state is updated by storing the return value with the \\ \texttt{SETSTATE} instruction before returning the value with \texttt{RETURN} from the function. Here, the actual return value is obtained by the second \texttt{GETSTATE} instruction in order to return the initial value of the internal state when time=0. - -For example, when a time counter is written as \texttt{| | \{self + 1\}}, it is the compiler's design choice whether the return value of time=0 should be 0 or 1 though the latter does not strictly follow the semantics in Figure \ref{fig:semantics}. If the design is to return 1 when time = 0, the second \texttt{GETSTATE} instruction can be removed and the value for the \texttt{RETURN} instruction should be \texttt{R(2)}. - \subsection{Overview of the VM structure} \label{sec:vmstructure} - The overview of a data structure of the VM, program and the instantiated closure for \lambdammm is shown in Figure \ref{fig:vmstructure}. In addition to the normal call stack, the VM has a storage area for managing internal state data for feedback and delay. This storage area is accompanied by data indicating the position from which the internal state is retrieved by the \texttt{GETSTATE} / \texttt{SETSTATE} instructions. This position is modified by \\ \texttt{SHIFTSTATE} operation.
The actual data in the state storage memory are statically laid out at compile time by analyzing function calls that include references to \texttt{self}, calls of \texttt{delay}, and the functions which call such stateful functions recursively. @@ -384,13 +355,35 @@ When the closure escapes from the original function with \\ \texttt{RETURN} inst \caption{\label{fig:vmstructure}{\it Overview of the virtual machine, program and instantiated closures for \lambdammm.}} \end{figure*} -\subsection{Example of the compilation of statefull functions} +\subsection{Compilation to the VM instructions} + +\begin{lstlisting}[float,floatplacement=H,label=lst:bytecodes_onepole,caption=\it Compiled VM instructions of one-pole filter example in Listing \ref{lst:onepole}] + CONSTANTS:[1.0] + fn onepole(x,g) state_size:1 + MOVECONST 2 0 // load 1.0 + MOVE 3 1 // load g + SUBF 2 2 3 // 1.0 - g + MOVE 3 0 // load x + MULF 2 2 3 // x * (1.0-g) + GETSTATE 3 // load self + MOVE 4 1 // load g + MULF 3 3 4 // self * g + ADDF 2 2 3 // compute result + GETSTATE 3 // prepare return value + SETSTATE 2 // store to self + RETURN 3 1 +\end{lstlisting} + +Listing \ref{lst:bytecodes_onepole} shows a basic example in which the mimium code in Listing \ref{lst:onepole} is compiled into VM bytecode. When \texttt{self} is referenced, its value is obtained with the \texttt{GETSTATE} instruction, and the internal state is updated by storing the computed result with the \texttt{SETSTATE} instruction before the value is returned with \texttt{RETURN} from the function. Here, the actual return value is obtained by the second \texttt{GETSTATE} instruction in order to return the initial value of the internal state when time=0. + +For example, when a time counter is written as \texttt{| | \{self + 1\}}, it is the compiler's design choice whether the return value at time=0 should be 0 or 1, though the latter does not strictly follow the E-FEED semantics in Figure \ref{fig:semantics}. If the design is to return 1 when time = 0, the second \texttt{GETSTATE} instruction can be removed and the value for the \texttt{RETURN} instruction should be \texttt{R(2)}. + A more complex example code and its expected bytecode instructions are shown in Listing \ref{lst:fbdelay} and Listing \ref{lst:bytecodes_fbdelay}. The code defines a delay with feedback as \texttt{fbdelay}; another function, \texttt{twodelay}, uses two feedback delays with different parameters, and \texttt{dsp} finally uses two \texttt{twodelay} instances. -Each after the referring to \texttt{self} through \texttt{GETSTATE} instruction, or call to the other statefull function, \\ \texttt{SHIFTSTATE} instruction inserted to move the \texttt{StatePtr} forward to prepare the next non-closure function call. Before exits function, \texttt{StatePtr} is reset to the same position as that the current function context has begun by \texttt{SHIFTSTATE} (A sum of the operand for \texttt{SHIFTSTATE} in the function must be always). +After each reference to \texttt{self} through the \texttt{GETSTATE} instruction, or each call to another stateful function, a \texttt{SHIFTSTATE} instruction is inserted to move the \texttt{StatePtr} forward in preparation for the next non-closure function call. Before the function exits, the \texttt{StatePtr} is reset by \texttt{SHIFTSTATE} to the position at which the current function context began (the sum of the \texttt{SHIFTSTATE} operands in a function must always be 0).
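+The following Rust sketch is a conceptual model of this state storage: a flat buffer of numeric states plus a cursor playing the role of the \texttt{StatePtr}. It is only an illustration of the mechanism described above, not the actual mimium runtime; the names \texttt{StateStorage}, \texttt{get\_state}, \texttt{set\_state}, and \texttt{shift\_state} are placeholders.
+
+\begin{lstlisting}[language=Rust,caption=\it Conceptual sketch of the state storage and the \texttt{GETSTATE}/\texttt{SETSTATE}/\texttt{SHIFTSTATE} instructions (not the actual implementation).]
+// State values are modelled here as f64; the layout (one region per
+// stateful call, sized by its state_size) is computed at compile time.
+struct StateStorage {
+    data: Vec<f64>, // internal states for feed ("self") and delay
+    pos: usize,     // StatePtr: offset used by GETSTATE/SETSTATE
+}
+
+impl StateStorage {
+    fn new(total_state_size: usize) -> Self {
+        // numeric states start from the default value 0
+        Self { data: vec![0.0; total_state_size], pos: 0 }
+    }
+    fn get_state(&self) -> f64 {               // GETSTATE
+        self.data[self.pos]
+    }
+    fn set_state(&mut self, v: f64) {          // SETSTATE
+        self.data[self.pos] = v;
+    }
+    fn shift_state(&mut self, offset: isize) { // SHIFTSTATE (signed operand)
+        self.pos = (self.pos as isize + offset) as usize;
+    }
+}
+\end{lstlisting}
+
+In this model, the \texttt{twodelay} function in Listing \ref{lst:bytecodes_fbdelay} performs the equivalent of \texttt{shift\_state(2)} after the first call to \texttt{fbdelay} and \texttt{shift\_state(-2)} before \texttt{RETURN}, so the shifts sum to zero and the \texttt{StatePtr} is back at the position where the function context began.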
-\begin{lstlisting}[label=lst:fbdelay,language=Rust,caption=\it Example code that combines self and delay without closure call.] +\begin{lstlisting}[float,floatplacement=H,label=lst:fbdelay,language=Rust,caption=\it Example code that combines self and delay without closure call.] fn fbdelay(x,fb,dtime){ x + delay(1000,self,dtime)*fb } @@ -424,7 +417,7 @@ MOVE 3 0 MOVE 4 1 MOVECONST 5 0 //load 0.7 CALL 2 3 1 -SHIFTSTATE 2 //=state_size of fbdelay +SHIFTSTATE 2 //2=state_size of fbdelay MOVECONST 3 5 //load "fbdelay" prototype MOVE 4 0 MOVECONST 5 1 //load 2 @@ -436,13 +429,13 @@ SHIFTSTATE -2 RETURN 3 1 fn dsp (x) -MOVECONST 1 6 //load "twodelay prototype" +MOVECONST 1 6 //load "twodelay" prototype MOVE 2 0 MOVECONST 3 3 //load 400 CALL 1 2 1 -SHIFTSTATE 4 //=state_size of twodelay +SHIFTSTATE 4 //4=state_size of twodelay MOVECONST 2 6 -MOVE 2 3 //load "twodelay prototype" +MOVE 2 3 //load "twodelay" prototype MOVE 3 0 MOVECONST 3 4 //load 400 CALL 2 2 1 @@ -513,25 +506,24 @@ fn dsp(){ This means that the major compiler optimization techniques such as constant folding and function inlining cannot simply be applied. Those optimizations should be done after the evaluation of the global context and before evaluating the \texttt{dsp} function. -To solve this situation, introducing distinction whether the term should be used in global context evaluation(Stage 0) and in the actual signal processing(stage 1) in type system. This can be realized with Multi-Stage Computation\cite{Taha1997}. Listing \ref{lst:filterbank_multi} is the example of \texttt{filterbank} code using MetaOCaml's syntaxes \texttt{..} which will generate evaluated program used in a next stage, and \texttt{\textasciitilde term} which embed terms evaluated at the previous stage. +To solve this situation, the type system should distinguish whether a term is used in the global context evaluation (stage 0) or in the actual signal processing (stage 1). This can be realized with multi-stage computation\cite{Taha1997}. Listing \ref{lst:filterbank_multi} is an example of the \texttt{filterbank} code using BER MetaOCaml's syntax \texttt{..}, which generates a program evaluated in the next stage, and \texttt{\textasciitilde term}, which embeds terms evaluated at the previous stage\cite{kiselyov2014a}. The \texttt{filterbank} function is evaluated in stage 0 while embedding itself by using \texttt{\textasciitilde}. This multi-stage computation code has the same semantics for the generative construction of the signal graph and for the execution of the signal processing, in contrast to Faust, which has two different semantics: the term rewriting macro and BDA. -\section{Conclusions} +\section{Conclusion} +\label{sec:conclusion} + +This paper proposed \lambdammm, an intermediate representation for programming languages for music and signal processing, together with a virtual machine and an instruction set to run it. \lambdammm makes it possible to describe a generative signal graph and its contents in a single syntax and semantics. However, users are responsible for writing code that does not create escaping closures during the DSP execution, and this restriction may be difficult for novice users to understand. + -This template can be found on the conference website. For changing the number -of author affiliations (1 to 4), uncomment the corresponding regions in the -template \texttt{tex} file. Please, submit full-length papers (2 to 14 pages). -Submission is fully electronic and automated through the Conference Web -Submission System.
DO NOT send us papers directly by e-mail. \section{Acknowledgments} -This work was supported by JSPS KAKENHI (Grant No. JP19K21615). Also great thanks for many anonymous reviewers. +This work was supported by JSPS KAKENHI (Grant No. \\JP19K21615). Also great thanks for many anonymous reviewers. %\newpage \nocite{*} diff --git a/src/ref.bib b/src/ref.bib index f91a7ed..dbf9521 100644 --- a/src/ref.bib +++ b/src/ref.bib @@ -1,130 +1,145 @@ @inproceedings{arias2021, - title = {The {{W-calculus}}: {{A Synchronous Framework}} for the {{Verified Modelling}} of {{Digital Signal Processing Algorithms}}}, - author = {Arias, Emilio Jesús Gallego and Jouvelot, Pierre and Ribstein, Sylvain and Desblancs, Dorian}, - year= {2021}, + title = {The {{W-calculus}}: {{A Synchronous Framework}} for the {{Verified Modelling}} of {{Digital Signal Processing Algorithms}}}, + author = {Arias, Emilio Jesús Gallego and Jouvelot, Pierre and Ribstein, Sylvain and Desblancs, Dorian}, + year = {2021}, booktitle = {Proceedings of the 9th ACM SIGPLAN International Workshop on Functional Art, Music, Modelling, and Design}, - volume = {12}, - pages = {35--46}, + volume = {12}, + pages = {35--46}, publisher = {Association for Computing Machinery}, - location = {New York, NY, USA}, - doi = {10.1145/3471872.3472970}, - url = {https://hal-mines-paristech.archives-ouvertes.fr/hal-03322174}, - urldate = {2021-08-30}, - abstract = {We introduce the W-calculus, an extension of the call-byvalue λ-calculus with synchronous semantics, designed to be flexible enough to capture different implementation forms of Digital Signal Processing algorithms, while permitting a direct embedding into the Coq proof assistant for mechanized formal verification. In particular, we are interested in the different implementations of classical DSP algorithms such as audio filters and resonators, and their associated high-level properties such as Linear Time-invariance. We describe the syntax and denotational semantics of the W-calculus, providing a Coq implementation. As a first application of the mechanized semantics, we prove that every program expressed in a restricted syntactic subset of W is linear time-invariant, by means of a characterization of the property using logical relations. This first semantics, while convenient for mechanized reasoning, is still not useful in practice as it requires re-computation of previous steps. To improve on that, we develop an imperative version of the semantics that avoids recomputation of prior stream states. We empirically evaluate the performance of the imperative semantics using a staged interpreter written in OCaml, which, for an input program in W , produces a specialized OCaml program, which is then fed to the optimizing OCaml compiler. The approach provides a convenient path from the high-level semantical description to low-level efficient code. Publication rights licensed to ACM. ACM acknowledges that this contribution was authored or co-authored by an employee, contractor or affiliate of a national government. 
As such, the Government retains a nonexclusive, royalty-free right to publish or reproduce this article, or to allow others to do so, for Government purposes only.}, - keywords = {Digital Signal Processing,Formal Verification,Linear Time-invariance,Programming Language Semantics,Synchronous Programming}, - file = {/Users/tomoya/Downloads/3472970-vor.pdf;/Users/tomoya/Zotero/storage/C48ATPES/full-text.pdf} + location = {New York, NY, USA}, + doi = {10.1145/3471872.3472970}, + url = {https://hal-mines-paristech.archives-ouvertes.fr/hal-03322174}, + urldate = {2021-08-30}, + abstract = {We introduce the W-calculus, an extension of the call-byvalue λ-calculus with synchronous semantics, designed to be flexible enough to capture different implementation forms of Digital Signal Processing algorithms, while permitting a direct embedding into the Coq proof assistant for mechanized formal verification. In particular, we are interested in the different implementations of classical DSP algorithms such as audio filters and resonators, and their associated high-level properties such as Linear Time-invariance. We describe the syntax and denotational semantics of the W-calculus, providing a Coq implementation. As a first application of the mechanized semantics, we prove that every program expressed in a restricted syntactic subset of W is linear time-invariant, by means of a characterization of the property using logical relations. This first semantics, while convenient for mechanized reasoning, is still not useful in practice as it requires re-computation of previous steps. To improve on that, we develop an imperative version of the semantics that avoids recomputation of prior stream states. We empirically evaluate the performance of the imperative semantics using a staged interpreter written in OCaml, which, for an input program in W , produces a specialized OCaml program, which is then fed to the optimizing OCaml compiler. The approach provides a convenient path from the high-level semantical description to low-level efficient code. Publication rights licensed to ACM. ACM acknowledges that this contribution was authored or co-authored by an employee, contractor or affiliate of a national government. As such, the Government retains a nonexclusive, royalty-free right to publish or reproduce this article, or to allow others to do so, for Government purposes only.}, + keywords = {Digital Signal Processing,Formal Verification,Linear Time-invariance,Programming Language Semantics,Synchronous Programming}, + file = {/Users/tomoya/Downloads/3472970-vor.pdf;/Users/tomoya/Zotero/storage/C48ATPES/full-text.pdf} } @inproceedings{gaster2018, - title = {{{OUTSIDE THE BLOCK SYNDICATE}}: {{TRANSLATING FAUST}}'{{S ALGEBRA OF BLOCKS TO THE ARROWS FRAMEWORK}}}, + title = {{{OUTSIDE THE BLOCK SYNDICATE}}: {{TRANSLATING FAUST}}'{{S ALGEBRA OF BLOCKS TO THE ARROWS FRAMEWORK}}}, booktitle = {Proceedings of the 1st {{International Faust Conference}}}, - author = {Gaster, Benedict R and Renney, Nathan and Mitchell, Tom}, - year = {2018}, - location = {Mainz,Germany}, - abstract = {Folklore has it that Faust's algebra of blocks can be represented in Hughes' algebra of Arrows. In this paper we formalise this understanding, showing that blocks can indeed be encoded with Causal Commutative Arrows. Whilst an interesting finding in itself, we believe that this formal translation opens up new avenues of research. 
For instance, recent work in functional reactive programming on well typed clocks, could provide an alternative to the dependent type approach proposed for multi-rate Faust.}, - file = {/Users/tomoya/Zotero/storage/6X7SPZEM/full-text.pdf} + author = {Gaster, Benedict R and Renney, Nathan and Mitchell, Tom}, + year = {2018}, + location = {Mainz,Germany}, + abstract = {Folklore has it that Faust's algebra of blocks can be represented in Hughes' algebra of Arrows. In this paper we formalise this understanding, showing that blocks can indeed be encoded with Causal Commutative Arrows. Whilst an interesting finding in itself, we believe that this formal translation opens up new avenues of research. For instance, recent work in functional reactive programming on well typed clocks, could provide an alternative to the dependent type approach proposed for multi-rate Faust.}, + file = {/Users/tomoya/Zotero/storage/6X7SPZEM/full-text.pdf} } @inproceedings{graf2010, - title = {Term {{Rewriting Extension}} for the {{Faust Programming Language}}}, + title = {Term {{Rewriting Extension}} for the {{Faust Programming Language}}}, booktitle = {International {{Linux Audio Conference}}}, - author = {Gräf, Albert}, - year = {2010}, - url = {https://hal.archives-ouvertes.fr/hal-03162973 https://hal.archives-ouvertes.fr/hal-03162973/document}, - abstract = {This paper discusses a term rewriting extension for the functional signal processing language Faust. The extension equips Faust with a hygienic macro processing facility. Faust macros can be used to define complicated, parameterized block diagrams, and perform arbitrary symbolic manipulations of block diagrams. Thus they make it easier to create elaborate signal processor specifications involving many complicated components.}, - keywords = {Digital signal processing,Faust,functional programming,macro processing,term rewriting}, - file = {/Users/tomoya/Zotero/storage/KXEWFSGX/full-text.pdf} + author = {Gräf, Albert}, + year = {2010}, + url = {https://hal.archives-ouvertes.fr/hal-03162973 https://hal.archives-ouvertes.fr/hal-03162973/document}, + abstract = {This paper discusses a term rewriting extension for the functional signal processing language Faust. The extension equips Faust with a hygienic macro processing facility. Faust macros can be used to define complicated, parameterized block diagrams, and perform arbitrary symbolic manipulations of block diagrams. Thus they make it easier to create elaborate signal processor specifications involving many complicated components.}, + keywords = {Digital signal processing,Faust,functional programming,macro processing,term rewriting}, + file = {/Users/tomoya/Zotero/storage/KXEWFSGX/full-text.pdf} } @inproceedings{matsuura2021a, - title = {Mimium: A Self-Extensible Programming Language for Sound and Music}, + title = {Mimium: A Self-Extensible Programming Language for Sound and Music}, shorttitle = {Mimium}, - booktitle = {Proceedings of the 9th {{ACM SIGPLAN International Workshop}} on {{Functional Art}}, {{Music}}, {{Modelling}}, and {{Design}}}, - author = {Matsuura, Tomoya and Jo, Kazuhiro}, - year = {2021}, - month = aug, - series = {{{FARM}} 2021}, - pages = {1--12}, - publisher = {Association for Computing Machinery}, - address = {New York, NY, USA}, - doi = {10.1145/3471872.3472969}, - urldate = {2024-07-09}, - abstract = {We propose a programming language for music named mimium, which combines temporal-discrete control and signal processing in a single language. 
mimium has an intuitive imperative syntax and can use stateful functions as Unit Generator in the same way as ordinary function definitions and applications. Furthermore, the runtime performance is made equivalent to that of lower-level languages by compiling the code through the LLVM compiler infrastructure. By using the strategy of adding a minimum number of features for sound to the design and implementation of a general-purpose functional language, mimium is expected to lower the learning cost for users, simplify the implementation of compilers, and increase the self-extensibility of the language. In this paper, we present the basic language specification, semantics for simple task scheduling, the semantics for stateful functions, and the compilation process. mimium has certain specifications that have not been achieved in existing languages. Future works suggested include extending the compiler functionality to combine task scheduling with the functional paradigm and introducing multi-stage computation for parametric replication of stateful functions.}, - isbn = {978-1-4503-8613-5}, - file = {/Users/tomoya/Zotero/storage/TDBLJQTL/Matsuura and Jo - 2021 - mimium a self-extensible programming language for.pdf} + booktitle = {Proceedings of the 9th {{ACM SIGPLAN International Workshop}} on {{Functional Art}}, {{Music}}, {{Modelling}}, and {{Design}}}, + author = {Matsuura, Tomoya and Jo, Kazuhiro}, + year = {2021}, + month = aug, + series = {{{FARM}} 2021}, + pages = {1--12}, + publisher = {Association for Computing Machinery}, + address = {New York, NY, USA}, + doi = {10.1145/3471872.3472969}, + urldate = {2024-07-09}, + abstract = {We propose a programming language for music named mimium, which combines temporal-discrete control and signal processing in a single language. mimium has an intuitive imperative syntax and can use stateful functions as Unit Generator in the same way as ordinary function definitions and applications. Furthermore, the runtime performance is made equivalent to that of lower-level languages by compiling the code through the LLVM compiler infrastructure. By using the strategy of adding a minimum number of features for sound to the design and implementation of a general-purpose functional language, mimium is expected to lower the learning cost for users, simplify the implementation of compilers, and increase the self-extensibility of the language. In this paper, we present the basic language specification, semantics for simple task scheduling, the semantics for stateful functions, and the compilation process. mimium has certain specifications that have not been achieved in existing languages. 
Future works suggested include extending the compiler functionality to combine task scheduling with the functional paradigm and introducing multi-stage computation for parametric replication of stateful functions.}, + isbn = {978-1-4503-8613-5}, + file = {/Users/tomoya/Zotero/storage/TDBLJQTL/Matsuura and Jo - 2021 - mimium a self-extensible programming language for.pdf} } @article{norilo2015, - title = {Kronos: {{A Declarative Metaprogramming Language}} for {{Digital Signal Processing}}}, - author = {Norilo, Vesa}, - year = {2015}, - journal = {Computer Music Journal}, - volume = {39}, - number = {4}, - pages = {30--48}, - doi = {10.1162/COMJ_a_00330}, - url = {https://dl.acm.org/doi/abs/10.1162/COMJ_a_00330}, + title = {Kronos: {{A Declarative Metaprogramming Language}} for {{Digital Signal Processing}}}, + author = {Norilo, Vesa}, + year = {2015}, + journal = {Computer Music Journal}, + volume = {39}, + number = {4}, + pages = {30--48}, + doi = {10.1162/COMJ_a_00330}, + url = {https://dl.acm.org/doi/abs/10.1162/COMJ_a_00330}, abstract = {Kronos is a signal-processing programming language based on the principles of semifunctional reactive systems. It is aimed at efficient signal processing at the elementary level, and built to scale towards higher-level tasks by utilizing the powerful programming paradigms of "metaprogramming" and reactive multirate systems. The Kronos language features expressive source code as well as a streamlined, efficient runtime. The programming model presented is adaptable for both sample-stream and event processing, offering a cleanly functional programming paradigm for a wide range of musical signal-processing problems, exemplified herein by a selection and discussion of code examples.}, - file = {/Users/tomoya/Zotero/storage/THAKVEM6/m-api-574ff3be-cfe2-7867-406a-df50770bf1cb.pdf} + file = {/Users/tomoya/Zotero/storage/THAKVEM6/m-api-574ff3be-cfe2-7867-406a-df50770bf1cb.pdf} } @article{Orlarey2004, - title = {Syntactical and Semantical Aspects of {{Faust}}}, - author = {Orlarey, Y. and Fober, D. and Letz, S.}, - year = {2004}, - journal = {Soft Computing}, - volume = {8}, - number = {9}, - pages = {623--632}, - issn = {14327643}, - doi = {10.1007/s00500-004-0388-1}, - isbn = {0050000403}, + title = {Syntactical and Semantical Aspects of {{Faust}}}, + author = {Orlarey, Y. and Fober, D. 
and Letz, S.}, + year = {2004}, + journal = {Soft Computing}, + volume = {8}, + number = {9}, + pages = {623--632}, + issn = {14327643}, + doi = {10.1007/s00500-004-0388-1}, + isbn = {0050000403}, keywords = {Compiler,Dataflow,Functional programming,Real-time,Signal processing}, - file = {/Users/tomoya/Zotero/storage/YZVBLW85/Orlarey, Fober, Letz_2004_Syntactical and semantical aspects of Faust.pdf} + file = {/Users/tomoya/Zotero/storage/YZVBLW85/Orlarey, Fober, Letz_2004_Syntactical and semantical aspects of Faust.pdf} } @article{ierusalimschy2005, - title = {The {{Implementation}} of {{Lua}} 5.0}, - author = {Ierusalimschy, Roberto and de Figueiredo, Luiz Henrique and Celes, Waldemar}, - year = {2005}, - month = jul, - journal = {JUCS - Journal of Universal Computer Science}, - volume = {11}, - number = {7}, - pages = {1159--1176}, + title = {The {{Implementation}} of {{Lua}} 5.0}, + author = {Ierusalimschy, Roberto and de Figueiredo, Luiz Henrique and Celes, Waldemar}, + year = {2005}, + month = jul, + journal = {JUCS - Journal of Universal Computer Science}, + volume = {11}, + number = {7}, + pages = {1159--1176}, publisher = {Journal of Universal Computer Science}, - issn = {0948-6968}, - doi = {10.3217/jucs-011-07-1159}, - urldate = {2024-07-09}, - abstract = {We discuss the main novelties of the implementation of Lua 5.0: its register-based virtual machine, the new algorithm for optimizing tables used as arrays, the implementation of closures, and the addition of coroutines.}, + issn = {0948-6968}, + doi = {10.3217/jucs-011-07-1159}, + urldate = {2024-07-09}, + abstract = {We discuss the main novelties of the implementation of Lua 5.0: its register-based virtual machine, the new algorithm for optimizing tables used as arrays, the implementation of closures, and the addition of coroutines.}, copyright = {2005 Roberto Ierusalimschy, Luiz Henrique de Figueiredo, Waldemar Celes}, - language = {en}, - file = {/Users/tomoya/Zotero/storage/GQRQSVPC/Ierusalimschy et al. - 2005 - The Implementation of Lua 5.0.pdf} + language = {en}, + file = {/Users/tomoya/Zotero/storage/GQRQSVPC/Ierusalimschy et al. - 2005 - The Implementation of Lua 5.0.pdf} } @book{nystrom2021, - title = {{Crafting Interpreters}}, - author = {Nystrom, Robert}, - year = {2021}, - month = jul, + title = {{Crafting Interpreters}}, + author = {Nystrom, Robert}, + year = {2021}, + month = jul, publisher = {Genever Benning}, - address = {Daryaganj Delhi}, - abstract = {Despite using them every day, most software engineers know little about how programming languages are designed and implemented. For many, their only experience with that corner of computer science was a terrifying "compilers" class that they suffered through in undergrad and tried to blot from their memory as soon as they had scribbled their last NFA to DFA conversion on the final exam.That fearsome reputation belies a field that is rich with useful techniques and not so difficult as some of its practitioners might have you believe. A better understanding of how programming languages are built will make you a stronger software engineer and teach you concepts and data structures you'll use the rest of your coding days. You might even have fun.This book teaches you everything you need to know to implement a full-featured, efficient scripting language. You'll learn both high-level concepts around parsing and semantics and gritty details like bytecode representation and garbage collection. 
Your brain will light up with new ideas, and your hands will get dirty and calloused.Starting from main(), you will build a language that features rich syntax, dynamic typing, garbage collection, lexical scope, first-class functions, closures, classes, and inheritance. All packed into a few thousand lines of clean, fast code that you thoroughly understand because you wrote each one yourself.}, - isbn = {978-0-9905829-3-9}, + address = {Daryaganj Delhi}, + abstract = {Despite using them every day, most software engineers know little about how programming languages are designed and implemented. For many, their only experience with that corner of computer science was a terrifying "compilers" class that they suffered through in undergrad and tried to blot from their memory as soon as they had scribbled their last NFA to DFA conversion on the final exam.That fearsome reputation belies a field that is rich with useful techniques and not so difficult as some of its practitioners might have you believe. A better understanding of how programming languages are built will make you a stronger software engineer and teach you concepts and data structures you'll use the rest of your coding days. You might even have fun.This book teaches you everything you need to know to implement a full-featured, efficient scripting language. You'll learn both high-level concepts around parsing and semantics and gritty details like bytecode representation and garbage collection. Your brain will light up with new ideas, and your hands will get dirty and calloused.Starting from main(), you will build a language that features rich syntax, dynamic typing, garbage collection, lexical scope, first-class functions, closures, classes, and inheritance. All packed into a few thousand lines of clean, fast code that you thoroughly understand because you wrote each one yourself.}, + isbn = {978-0-9905829-3-9} } @article{Taha1997, - title = {Multi-{{Stage Programming}} with {{Explicit Annotations}}}, - author = {Taha, Walid and Sheard, Tim}, - year = {1997}, - month = dec, - journal = {SIGPLAN Notices (ACM Special Interest Group on Programming Languages)}, - volume = {32}, - number = {12}, - pages = {203--214}, + title = {Multi-{{Stage Programming}} with {{Explicit Annotations}}}, + author = {Taha, Walid and Sheard, Tim}, + year = {1997}, + month = dec, + journal = {SIGPLAN Notices (ACM Special Interest Group on Programming Languages)}, + volume = {32}, + number = {12}, + pages = {203--214}, publisher = {Association for Computing Machinery (ACM)}, - issn = {03621340}, - doi = {10.1145/258994.259019}, - urldate = {2021-05-12}, - abstract = {We introduce MetaML, a statically-typed multi-stage programming language extending Nielson and Nielson's two stage notation to an arbitrary number of stages. MetaML extends previous work by introducing four distinct staging annotations which generalize those published previously [25, 12, 7, 6] We give a static semantics in which type checking is done once and for all before the first stage, and a dynamic semantics which introduces a new concept of cross-stage persistence, which requires that variables available in any stage are also available in all future stages. We illustrate that staging is a manual form of binding time analysis. We explain why, even in the presence of automatic binding time analysis, explicit annotations are useful, especially for programs with more than two stages. 
A thesis of this paper is that multi-stage languages are useful as programming languages in their own right, and should support features that make it possible for programmers to write staged computations without significantly changing their normal programming style. To illustrate this we provide a simple three stage example, and an extended two-stage example elaborating a number of practical issues.}, - file = {/Users/tomoya/Zotero/storage/KFYY25CM/Taha, Sheard - 1997 - Multi-Stage Programming with Explicit Annotations.pdf;/Users/tomoya/Zotero/storage/X3DDM6HN/full-text.pdf} + issn = {03621340}, + doi = {10.1145/258994.259019}, + urldate = {2021-05-12}, + abstract = {We introduce MetaML, a statically-typed multi-stage programming language extending Nielson and Nielson's two stage notation to an arbitrary number of stages. MetaML extends previous work by introducing four distinct staging annotations which generalize those published previously [25, 12, 7, 6] We give a static semantics in which type checking is done once and for all before the first stage, and a dynamic semantics which introduces a new concept of cross-stage persistence, which requires that variables available in any stage are also available in all future stages. We illustrate that staging is a manual form of binding time analysis. We explain why, even in the presence of automatic binding time analysis, explicit annotations are useful, especially for programs with more than two stages. A thesis of this paper is that multi-stage languages are useful as programming languages in their own right, and should support features that make it possible for programmers to write staged computations without significantly changing their normal programming style. To illustrate this we provide a simple three stage example, and an extended two-stage example elaborating a number of practical issues.}, + file = {/Users/tomoya/Zotero/storage/KFYY25CM/Taha, Sheard - 1997 - Multi-Stage Programming with Explicit Annotations.pdf;/Users/tomoya/Zotero/storage/X3DDM6HN/full-text.pdf} +} + +@inproceedings{kiselyov2014a, + title = {The {{Design}} and {{Implementation}} of {{BER~MetaOCaml}}}, + booktitle = {{Proceedings of the 12th International Symposium on Functional and Logic Programming}}, + author = {Kiselyov, Oleg}, + editor = {Codish, Michael and Sumii, Eijiro}, + year = {2014}, + pages = {86--102}, + publisher = {Springer International Publishing}, + address = {Cham}, + doi = {10.1007/978-3-319-07151-0_6}, + abstract = {MetaOCaml is a superset of OCaml extending it with the data type for program code and operations for constructing and executing such typed code values. It has been used for compiling domain-specific languages and automating tedious and error-prone specializations of high-performance computational kernels. 
By statically ensuring that the generated code compiles and letting us quickly run it, MetaOCaml makes writing generators less daunting and more productive.}, + isbn = {978-3-319-07151-0}, + language = {en} } diff --git a/src/syntax.tex b/src/syntax.tex index e1960d3..a381cd8 100644 --- a/src/syntax.tex +++ b/src/syntax.tex @@ -40,7 +40,7 @@ \begin{equation*} \begin{aligned} e \; ::=& \ x &x \in {v_p} \ & [value]\\ - |& \ \lambda x.e & & [abs]\\ + |& \ \lambda x.e & & [lambda]\\ |& \ let\; x = e_1\; in\; e_2 & & [let]\\ |& \ fix \; x.e & & [fixpoint]\\ |& \ e_1 \; e_2 & & [app]\\ diff --git a/src/typing.tex b/src/typing.tex index 988bb50..0a95e77 100644 --- a/src/typing.tex +++ b/src/typing.tex @@ -1,14 +1,14 @@ \begin{figure}[ht] \begin{tabular}{cc} - \begin{minipage}[t]{0.45\hsize} + \begin{minipage}[t]{0.4\hsize} \centering \begin{equation*} \frac{\Gamma, x:\tau_a \vdash e:\tau_b}{\Gamma \vdash \lambda x.e:\tau_a \to \tau_b } - \end{equation*}\textrm{T-LAM} + \end{equation*}\textrm{T-LAMBDA} \end{minipage} & - \begin{minipage}[t]{0.45\hsize} + \begin{minipage}[t]{0.5\hsize} \centering \begin{equation*} \frac{ \Gamma \vdash e_1:N \quad \Gamma \vdash e_2:\tau }{\Gamma \vdash delay\ e_1\ e_2 : \tau} @@ -22,7 +22,7 @@ \end{equation*}\textrm{T-FEED} \end{minipage}& - \begin{minipage}[t]{0.45\hsize} + \begin{minipage}[t]{0.5\hsize} \centering \begin{equation*} \frac{ \Gamma \vdash e_c : R\ \Gamma \vdash e_t:\tau\ \Gamma \vdash e_e:\tau }{\Gamma \vdash if\ (e_c)\ e_t\ e_e\ : \tau} @@ -30,5 +30,5 @@ \end{minipage}\\ \end{tabular} - \caption{\label{fig:typing}{\it Typing rules for lambda abstraction and feed abstraction in $\lambda_{mmm}$.}} + \caption{\label{fig:typing}{\it Excerpt of the typing rules for \lambdammm.}} \end{figure} \ No newline at end of file