An earlier issue with \textit{mimium} was that it could not compile code combining recursive or higher-order functions with stateful functions involving delay or feedback, because the compiler could not determine the size of the internal state used in signal processing.
In this paper, I propose the syntax and semantics of \lambdammm, an extended call-by-value simply typed lambda calculus, as a computational model intended to serve as an intermediate representation for \textit{mimium}\footnote{The newer version of the mimium compiler and VM, based on the model presented in this paper, is available on GitHub: \url{https://github.com/tomoyanonymous/mimium-rs}}. In addition, I propose a virtual machine and its instruction set, based on Lua's VM, to execute this computational model in practice. Finally, I discuss both the challenges and potential of the current \lambdammm\ model, one of which is that users must differentiate whether a calculation occurs in a global context or during actual signal processing; the other is that runtime interoperability with other programming languages could be easier than in existing DSP languages.
\section{Syntax}
\label{sec:syntax}
The primitive types include a real number type, used in most signal processing, and a natural number type, used for delay indices.
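As a rough illustration, the type grammar described here could be modeled as follows. This is a sketch, not the actual \lambdammm\ implementation; all Rust names (\texttt{Type}, \texttt{Nat}, and so on) are my own.

```rust
// Illustrative sketch of the type grammar: real numbers for signal values,
// natural numbers for delay indices, plus tuples and function types.
// (Names are hypothetical, not taken from the mimium-rs implementation.)
#[derive(Debug, Clone, PartialEq)]
enum Type {
    Real,                           // sample values used in signal processing
    Nat,                            // delay indices
    Tuple(Vec<Type>),               // e.g. a stereo pair (Real, Real)
    Function(Box<Type>, Box<Type>), // higher-order functions are permitted
}

// A structural check over a type, e.g. rejecting function types where only
// first-order data is allowed:
fn is_first_order(t: &Type) -> bool {
    match t {
        Type::Function(_, _) => false,
        Type::Tuple(ts) => ts.iter().all(is_first_order),
        _ => true,
    }
}

fn main() {
    let stereo = Type::Tuple(vec![Type::Real, Type::Real]);
    assert!(is_first_order(&stereo));
    let hof = Type::Function(Box::new(Type::Real), Box::new(Type::Real));
    assert!(!is_first_order(&hof));
    println!("ok");
}
```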
In W-calculus, which directly inspired the design of \lambdammm, the function types can only take tuples of real numbers and return tuples of real numbers. This restriction prevents the definition of higher-order functions. While this limitation is reasonable for a signal processing language—since higher-order functions require data structures such as closures that depend on dynamic memory allocation—it also reduces the generality of lambda calculus.
In \lambdammm, the problem of memory allocation for closures is delegated to the runtime implementation (see Section \ref{sec:vm}), which allows the use of higher-order functions. However, the $feed$ abstraction does not permit function types as either its input or its output. Allowing function types in the $feed$ abstraction would enable the definition of functions whose behavior changes over time; while this is theoretically interesting, there are no practical examples in real-world signal processing, and such a feature would likely further complicate the implementation.
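To make the restriction concrete, a minimal sketch of a one-sample feedback loop (the role played by $feed$ and \texttt{self}) is shown below. Because the fed-back value is a plain real number, the internal state is a single \texttt{f64} whose size is known at compile time; the struct and method names are mine, not the paper's.

```rust
// Minimal sketch (not the paper's implementation) of a one-sample feedback
// unit: `feed x. x + self`. The state is one f64, so its size is static.
struct Feed {
    state: f64, // the previous output, i.e. `self` in the body
}

impl Feed {
    // Evaluate one sample: the body here is `x + self`, an accumulator.
    fn step(&mut self, input: f64) -> f64 {
        let out = input + self.state;
        self.state = out; // the new output becomes `self` for the next sample
        out
    }
}

fn main() {
    let mut acc = Feed { state: 0.0 };
    let outs: Vec<f64> = (0..4).map(|_| acc.step(1.0)).collect();
    // Feeding a constant 1.0 yields a ramp.
    assert_eq!(outs, vec![1.0, 2.0, 3.0, 4.0]);
    println!("{:?}", outs);
}
```

If the fed-back value were itself a function (a closure), the state would have to hold a value of statically unknown size, which is exactly what the restriction rules out.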
The overall structure of the virtual machine (VM), program, and instantiated closures for \lambdammm\ is depicted in Figure \ref{fig:vmstructure}. In addition to the usual call stack, the VM has a dedicated storage area (a flat array) to manage the internal state data for feedback and delay.
This storage area is accompanied by pointers that indicate the positions from which the internal state data are retrieved via the \texttt{GETSTATE} and \texttt{SETSTATE} instructions. These positions are shifted forward or backward using the \texttt{SHIFTSTATE} instruction. The actual data layout in the state storage memory is statically determined during compilation by analyzing function calls involving references to \texttt{self}, \texttt{delay}, and other stateful functions, including those that recursively invoke such functions. The \texttt{DELAY} operation takes two inputs: \texttt{B}, representing the input value, and \texttt{C}, representing the delay time in samples.
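The interplay of the flat state array and its position pointer can be sketched as follows. This is an illustrative model of \texttt{GETSTATE}, \texttt{SETSTATE}, and \texttt{SHIFTSTATE} only, under the assumption of a single scalar slot per stateful unit; the field and method names are not the actual VM implementation.

```rust
// Hedged sketch of the VM's flat state storage: a fixed-layout array plus a
// position pointer that GETSTATE/SETSTATE read and write through, and that
// SHIFTSTATE moves between the regions of successive stateful units.
struct StateStorage {
    data: Vec<f64>, // flat array; layout fixed at compile time
    pos: usize,     // current read/write position
}

impl StateStorage {
    fn get_state(&self) -> f64 { self.data[self.pos] }           // GETSTATE
    fn set_state(&mut self, v: f64) { self.data[self.pos] = v; } // SETSTATE
    fn shift_state(&mut self, offset: isize) {                   // SHIFTSTATE
        self.pos = (self.pos as isize + offset) as usize;
    }
}

fn main() {
    // Two stateful units laid out back to back: [unit0, unit1].
    let mut st = StateStorage { data: vec![0.0; 2], pos: 0 };
    st.set_state(1.5);  // write unit0's state
    st.shift_state(1);  // advance to unit1's region
    st.set_state(2.5);  // write unit1's state
    st.shift_state(-1); // return to unit0
    assert_eq!(st.get_state(), 1.5);
    println!("ok");
}
```

A \texttt{delay} of length $n$ would occupy a contiguous region of $n$ slots in the same array, which is why the delay time must be statically known for the layout computation.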
However, for higher-order functions—functions that take another function as an argument or return one—the internal state layout of the passed function is unknown at compile time. Consequently, a separate internal state storage area is allocated to each instantiated closure, which is distinct from the global storage area maintained by the VM instance. The VM also uses an additional stack to keep track of the pointers in the state storage of instantiated closures. Each time a \texttt{CALLCLS} operation is executed, the VM pushes the pointer from the state storage of the closure onto the state stack. Upon completing the closure call, the VM pops the state pointer off the stack.
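The push/pop discipline around \texttt{CALLCLS} can be sketched as follows. This is an illustrative model, not the actual VM code: each closure owns a private state region distinct from the VM's global storage, and the state stack guarantees that nested closure calls always address their own region (all names here are hypothetical).

```rust
// Sketch of per-closure state storage and the state stack: CALLCLS pushes the
// closure's state pointer before running its body, and the pointer is popped
// when the call completes. Indices stand in for raw pointers.
struct ClosureState {
    data: Vec<f64>, // this closure's private state storage
    pos: usize,
}

#[allow(dead_code)]
struct Vm {
    global_state: Vec<f64>,  // storage for statically laid-out state
    state_stack: Vec<usize>, // pointers (here: indices) of active closures
    closures: Vec<ClosureState>,
}

impl Vm {
    // CALLCLS (sketch): push the closure's state pointer, run the body, pop.
    fn call_closure(&mut self, id: usize, body: impl Fn(&mut ClosureState)) {
        self.state_stack.push(id);
        body(&mut self.closures[id]);
        self.state_stack.pop();
    }
}

fn main() {
    let mut vm = Vm {
        global_state: vec![],
        state_stack: vec![],
        closures: vec![ClosureState { data: vec![0.0], pos: 0 }],
    };
    // Two calls to the same stateful closure accumulate in its own storage.
    vm.call_closure(0, |st| st.data[st.pos] += 1.0);
    vm.call_closure(0, |st| st.data[st.pos] += 1.0);
    assert_eq!(vm.closures[0].data[0], 2.0);
    assert!(vm.state_stack.is_empty()); // pointer popped after each call
    println!("ok");
}
```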