diff --git a/src/main.tex b/src/main.tex index 265d273..120a68f 100644 --- a/src/main.tex +++ b/src/main.tex @@ -210,18 +210,17 @@ Also, Kronos\cite{norilo2015} and W-calculus\cite{arias2021} are examples of att Previously, the author designed a programming language for music \textit{mimium} \cite{matsuura2021a}. It adds the basic operations of delay and feedback to lambda-calculus, signal processing can be expressed concisely while having a syntax close to that of general-purpose programming languages(especially, the syntax of mimium is designed to be looks like Rust language). -In this paper, we present the syntax and semantics of a computational model: \lambdammm, which is an intermediate representation of mimium based on W-calculus. We also propose a virtual machine and its instruction set for efficient execution of this computational model. - One of the previous issues with mimium was the inability to compile the codes which contains a combination of recursive or higher-order functions and stateful functions because the compiler can not be determine the data size of the internal state of the signal processing. -In this paper, the syntax and semantics of \lambdammm, a superset of the W-calculus-based call-by-value simply-typed lambda calculus are explained, as a computation model that is supposed to be an intermediate representation of mimium. Also a virtual machine and its instruction set are proposed to execute this computation model practically. Based on this model, it is possible to describe and execute generative signal processing in mimium. +In this paper, the syntax and semantics of \lambdammm, a superset of the call-by-value simply-typed lambda calculus, are explained as a computation model intended to serve as an intermediate representation of mimium. A virtual machine and its instruction set are also proposed to execute this computation model practically. Lastly, the space for optimization when running mimium using \lambdammm is discussed.
+ \section{Syntax} \label{sec:syntax} \input{syntax.tex} -Definition of types and terms of the $\lambda_{mmm}$ are shown in Figure.\ref{fig:syntax_v}. +Definitions of the types and terms of $\lambda_{mmm}$ are shown in Figure \ref{fig:syntax_v}. Two terms are added in addition to the usual simply-typed lambda calculus, $delay e_1 e_2 $ that refers a previous value of $e_1$ to $e_2$ sample before, and $feed x.e$ abstraction that the user can refer the result of the evaluation of the $e$ one unit time before as $x$ during the evaluation of $e$ itself. @@ -234,8 +233,7 @@ Two terms are added in addition to the usual simply-typed lambda calculus, $dela } \end{lstlisting} -mimium by the author has a keyword $self$ that can be used in function definition, that refers to the previous return value of the function. The example code of the simple one-pole filter function is shown in Listing.\ref{lst:onepole}. This code can be expressed in $\lambda_{mmm}$ as Figure.\ref{fig:onepole}. - +mimium by the author has a keyword $self$ that can be used in a function definition and refers to the previous return value of the function. The example code of a simple one-pole filter function is shown in Listing \ref{lst:onepole}. This code can be expressed in $\lambda_{mmm}$ as in Figure \ref{fig:onepole}. \begin{figure}[ht] \begin{equation*} @@ -245,13 +243,13 @@ mimium by the author has a keyword $self$ that can be used in function definitio & \ \lambda x. \lambda g.\ feed\ y.\ x *(1.0 - g) + y * g \ in\ ... \end{aligned} \end{equation*} -\caption{\label{fig:onepole}{\it Equivalent expression to Listing.\ref{lst:onepole} in $\lambda_{mmm}$.}} +\caption{\label{fig:onepole}{\it Equivalent expression to Listing \ref{lst:onepole} in $\lambda_{mmm}$.}} \end{figure} \subsection{Typing Rule} \label{sec:typing} -Additional typing rules to usual simply-typed lambda calculus are shown in Fig.\ref{fig:typing}.
+Additional typing rules to the usual simply-typed lambda calculus are shown in Figure \ref{fig:typing}. In the W-calculus, a starting point of designing \lambdammm, function types can takes tuples of real numbers and return tuples of real numbers. This means that higher-order functions cannot be written. While this restriction is reasonable as a design choice for a language for signal processing since higher-order functions require data structures that require dynamic memory allocation, such as closures, for their implementation, it also lacks the generality of the lambda calculus. @@ -266,7 +264,7 @@ In \lambdammm, the problem of memory allocation for closures is left to the impl \input{semantics.tex} -The operational semantics of the \lambdammm is shown in Fig.\ref{fig:semantics}. This big-step semantics is a conceptual explanation of the evaluation that, when the current time is $n$, the previous evaluation environment $t$ samples before can be referred to as $E^{n-t}$ , and that when the time < 0, the evaluation of any term is evaluated to the default value of its type (0 for the numeric types). +The operational semantics of \lambdammm is shown in Figure \ref{fig:semantics}. This big-step semantics is a conceptual explanation of the evaluation: when the current time is $n$, the evaluation environment $t$ samples before can be referred to as $E^{n-t}$, and when the time is negative, any term evaluates to the default value of its type (0 for the numeric types).
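As a concrete reading of these rules, the one-pole filter of Figure \ref{fig:onepole} unfolds into the recurrence $y[n] = x[n](1.0-g) + y[n-1] \cdot g$ with $y[n] = 0$ for $n < 0$. A minimal sketch of this evaluation in Rust (a hypothetical helper, not part of the paper's implementation):

```rust
/// Evaluate the one-pole `feed` term over a finite input signal,
/// following the rule that any term evaluates to the default
/// value of its type (0.0) at negative time.
fn onepole(input: &[f64], g: f64) -> Vec<f64> {
    let mut y_prev = 0.0; // value of `feed y. ...` for time < 0
    input
        .iter()
        .map(|&x| {
            let y = x * (1.0 - g) + y_prev * g;
            y_prev = y; // becomes the E^{n-1} binding for the next sample
            y
        })
        .collect()
}
```

With a constant input of 1.0 and $g = 0.5$, the output approaches 1.0 sample by sample, as expected of a one-pole lowpass.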
\section{VM Model and Instruction Set} @@ -278,9 +276,9 @@ When executing a computational model based on lambda calculus, the problem is ho On the contrary, a runtime performance can be improved by performing a process called closure transformation (or lambda lifting), which analyses all the names of outer variables referred by the inner function and transforms the inner function by adding argument so that the variables can be referred explicitly, but the compiler implementation of the transformation is relatively complex. -The Lua VM takes an intermediate approach between these two by adding the VM instructions \textit{GetUpValue/SetUpValue}, which allows the outer variables to be referenced dynamically at runtime. The implementation of compiler and VM using \textit{Upvalue} is simpler than closure conversion, while at the same time preventing execution performance degradation, as outer variables can be referenced via the call stack rather than on the heap memory unless the closure object escapes from the context of the original function\cite{nystrom2021}. +The Lua VM takes an intermediate approach between these two by adding the VM instructions \texttt{GETUPVALUE} / \texttt{SETUPVALUE}, which allow the outer variables to be referenced dynamically at runtime. The implementation of the compiler and VM using \textit{Upvalue} is simpler than closure conversion, while at the same time preventing execution performance degradation, as outer variables can be referenced via the call stack rather than on the heap memory unless the closure object escapes from the context of the original function\cite{nystrom2021}. -Also, Upvalue helps interoperations between other programming languages, as Lua can be easily embedded through C language API and even in an external library function written in C, programmer can read and write Lua's Upvalue.
+Also, Upvalue helps interoperation with other programming languages: Lua can be easily embedded through the C language API, and when implementing external libraries in C, the programmer can access not only the stack values but also the upvalues of the Lua runtime through the C API. \subsection{Instruction Set} \label{sec:instruction} @@ -290,17 +288,18 @@ VM Instructions for \lambdammm differs from Lua VM in the following respects. \begin{enumerate} \item{Since mimium is a statically typed language unlike Lua, instructions for basic arithmetics are provided for each type.} +\item{The call operation is separated into the normal function call and the closure call to handle higher-order stateful functions (see \ref{sec:vmstructure} for details).} \item{If statements are realised by a combination of two instructions, \texttt{JMP} and \texttt{JMPIFNEG}, whereas the Lua VM uses a dedicated \texttt{TEST} instruction.} \item{Instructions related to for loop, the \texttt{SELF} instruction used for object-oriented programming and the \texttt{TABLE}-related instructions for metadata references to variables are omitted in mimium as they are not used.} \item{Instructions related to list-related data structures are also omitted in this paper, as the implementation of data structures such as tuples and arrays was omitted in the description of the \lambdammm in this paper.} \end{enumerate} -Instructions in \lambdammm VM are 32bit tagged-union data that has up to 3 operands. Currently, a bit width for tag and each operands are all 8 bit\footnote[1]{Reason for this is that it is easy to implemented on \textrm{enum} data structure on Rust, a host language of the latest mimium compiler. Operands bitwidth and alignment may be changed in the future.}. +Instructions in the \lambdammm VM are 32-bit data consisting of an operation tag and up to 3 operands.
Currently, the bit widths of the tag and each operand are all 8 bits\footnote[1]{The reason is that this is easy to implement with an \texttt{enum} data structure in Rust, the host language of the latest mimium compiler. The operand bit widths and alignment may be changed in the future.}. -The VM of \lambdammm is a register machine like the Lua VM(after version 5, although the VM has no real register but the register number simply means the offset index of call stack from the base pointer at the point of execution of the VM). The first operand of most instructions is the register number in which to store the result of the calculation. +The VM of \lambdammm is a register machine like the Lua VM after version 5, although the VM has no real registers: a register number simply means the offset index into the call stack from the base pointer at the point of execution. The first operand of most instructions is the register number in which to store the result of the operation. -The list of instructions is shown in Fig.\ref{fig:instruction} (basic arithmetic operations are partly omitted). The notation for the instruction follows the Lua VM paper \cite[p.13]{ierusalimschy2005}. From left to right, the name of operation, a list of operands, and pseudo-code of the operation. When using each of the three operands as unsigned 8 bits, they are denoted as \texttt{A B C}. When used with a signed integer, prefix \texttt{s} is added, and when the two operand fields are used as one 16 bits, an suffix \texttt{x} is added. For example, when B and C are merged and treated as signed 16 bits, they are denoted as \texttt{sBx}. +The list of instructions is shown in Figure \ref{fig:instruction} (basic arithmetic operations are partly omitted). The notation for the instructions follows the Lua VM paper \cite[p.13]{ierusalimschy2005}: from left to right, the name of the operation, a list of operands, and pseudo-code of the operation.
When each of the three operands is used as an unsigned 8-bit value, they are denoted as \texttt{A B C}. When one is used as a signed integer, the prefix \texttt{s} is added, and when two operand fields are used as one 16-bit field, the suffix \texttt{x} is added. For example, when B and C are merged and treated as a signed 16-bit value, they are denoted as \texttt{sBx}. In the pseudo-code describing a functionality, \texttt{R(A)} means that data is moved in and out through the register (call stack) according to the numerical value of the operand \texttt{A}. \texttt{K(A)} means retrieving the \texttt{A}-th number in the static variable field of the compiled program. \texttt{U(A)} means referring to the \texttt{A}-th Upvalue of the current function. @@ -343,11 +342,11 @@ ADDF & A B C & R(A) := R(B) as int + R(C) as in \\ \subsection{Compilation to the VM instructions} -Listing.\ref{lst:bytecodes_onepole} shows an basic example when the mimium code in Listing.\ref{lst:onepole} is compiled into VM bytecode. When \texttt{self} is referenced, the value is obtained with the \texttt{GETSTATE} instruction, and the internal state is updated by storing the return value with the \\ \texttt{SETSTATE} instruction before returning the value with \texttt{RETURN} from the function. Here, the actual return value is obtained by the second GETSTATE instruction in order to return the initial value of the internal state when time=0. +Listing \ref{lst:bytecodes_onepole} shows a basic example in which the mimium code in Listing \ref{lst:onepole} is compiled into VM bytecode. When \texttt{self} is referenced, the value is obtained with the \texttt{GETSTATE} instruction, and the internal state is updated by storing the return value with the \\ \texttt{SETSTATE} instruction before returning the value with \texttt{RETURN} from the function. Here, the actual return value is obtained by the second \texttt{GETSTATE} instruction in order to return the initial value of the internal state when time=0.
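The 32-bit encoding and the \texttt{sBx} notation described above can be sketched as follows (a hypothetical packed layout with the tag in the top byte; as the footnote notes, the actual representation uses a Rust \texttt{enum} and its alignment may differ):

```rust
/// Pack an instruction as [tag | A | B | C], 8 bits each.
fn encode(tag: u8, a: u8, b: u8, c: u8) -> u32 {
    ((tag as u32) << 24) | ((a as u32) << 16) | ((b as u32) << 8) | c as u32
}

/// Recover the four 8-bit fields (tag, A, B, C).
fn decode(ins: u32) -> (u8, u8, u8, u8) {
    (
        (ins >> 24) as u8,
        (ins >> 16) as u8,
        (ins >> 8) as u8,
        ins as u8,
    )
}

/// Merge the B and C fields into one signed 16-bit operand (sBx).
fn sbx(ins: u32) -> i16 {
    (ins & 0xFFFF) as u16 as i16
}
```

For instance, a B field of \texttt{0xFF} and a C field of \texttt{0xFE} merge into the signed 16-bit value $-2$.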
-For example, when a time counter is written as \texttt{| | \{self + 1\}}, it is the compiler's design choice whether the return value of time=0 should be 0 or 1. but at present the author think it is more intuitive to return 0. If the design is to return 1 when time = 0, the second \texttt{GETSTATE} instruction can be removed and the value for the \texttt{RETURN} instruction should be \texttt{R(2)}. +For example, when a time counter is written as \texttt{| | \{self + 1\}}, it is the compiler's design choice whether the return value at time=0 should be 0 or 1, but at present the author thinks it is more intuitive to return 0. If the design is to return 1 when time = 0, the second \texttt{GETSTATE} instruction can be removed and the value for the \texttt{RETURN} instruction should be \texttt{R(2)}. -\begin{lstlisting}[float,floatplacement=H,label=lst:bytecodes_onepole,caption=\it Compiled VM instructions of one-pole filter example in Listing.\ref{lst:onepole}] +\begin{lstlisting}[float,floatplacement=H,label=lst:bytecodes_onepole,caption=\it Compiled VM instructions of the one-pole filter example in Listing \ref{lst:onepole}] CONSTANTS:[1.0] fn onepole(x,g) state_size:1 MOVECONST 2 0 // load 1.0 @@ -368,35 +367,31 @@ RETURN 3 1 \label{sec:vmstructure} -The overview of a data structure of the VM, program and the instantiated closure for \lambdammm is shown in \ref{fig:vmstructure}. In addition to the normal call stack, the VM has a storage area for managing internal state data for feedback and delay. +The overview of the data structures of the VM, the program, and an instantiated closure for \lambdammm is shown in Figure \ref{fig:vmstructure}. In addition to the normal call stack, the VM has a storage area for managing internal state data for feedback and delay. -This storage area is accompanied by data indicating the position from which the internal state is retrieved by the \texttt{GETSTATE} / \texttt{SETSTATE} instructions.
This position is modified by \texttt{SHIFTSTATE} operation. The the actual data in the memory area are statically layed out at compile time by analyzing function calls that include references to self, delay and the functions which will call such statefull functions recursively. +This storage area is accompanied by data indicating the position from which the internal state is retrieved by the \texttt{GETSTATE} / \texttt{SETSTATE} instructions. This position is modified by the \texttt{SHIFTSTATE} operation. The actual data in the state storage memory are statically laid out at compile time by analyzing function calls that include references to \texttt{self}, calls of \texttt{delay}, and the functions which call such stateful functions recursively. -However, in the case of higher-order functions that receive a function as an argument and return another function, the layout of the internal state of the receiving function is unknown at the compilation, so an internal state storage area is created for each instantiated closure separately from the global storage area held by the VM itself. The VM switches the \texttt{State\_Ptr}, which points the internal state storage to be used, at each closure call, to the storage area on the closure, and returns a pointer pointing to the global storage area each time the closure context ends. +However, in the case of higher-order functions that receive a function as an argument and return another function, the layout of the internal state of the given function is unknown at compile time, so an internal state storage area is created for each instantiated closure, separately from the global storage area held by the VM instance itself. At each closure call, the VM switches \texttt{State\_Ptr}, which points to the internal state storage in use, to the storage area on the closure, and restores the pointer to the global storage area each time the closure context ends.
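The state storage and its position pointer can be sketched as below (a simplified model with hypothetical names; the real layout is computed by the compiler's static analysis, and \texttt{State\_Ptr} switching between global and per-closure areas is omitted):

```rust
/// Flat state storage with a cursor, mimicking the behaviour of
/// the GETSTATE / SETSTATE / SHIFTSTATE instructions.
struct StateStorage {
    data: Vec<f64>, // state cells, zero-initialized (default values)
    pos: usize,     // current position, modified by SHIFTSTATE
}

impl StateStorage {
    fn new(size: usize) -> Self {
        Self { data: vec![0.0; size], pos: 0 }
    }
    /// GETSTATE: read the state cell at the current position.
    fn get_state(&self) -> f64 {
        self.data[self.pos]
    }
    /// SETSTATE: write the state cell at the current position.
    fn set_state(&mut self, v: f64) {
        self.data[self.pos] = v;
    }
    /// SHIFTSTATE: move the position by a signed offset so that the
    /// next stateful function in the call sequence sees its own cells.
    fn shift_state(&mut self, offset: isize) {
        self.pos = (self.pos as isize + offset) as usize;
    }
}
```

Because the positions are shifted deterministically in call order, the same code reaches the same cells on every sample.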
-Instantiated closures also hold the storage area of Upvalues. Until the closure exits the context of the parent function (Open Closure), Upvalues holds a negative offset on the stack at the current execution. Because this offset value can be determined at compile time, stored in the function prototype in the program. +Instantiated closures also hold the storage area of Upvalues. Until the closure exits the context of its parent function (Open Closure), an Upvalue holds a negative offset on the stack of the ongoing execution. Because this offset value can be determined at compile time, it is stored in the function prototype in the program. For instance, if the Upvalue indexes in the program were \texttt{[4,3]}, \texttt{GETUPVALUE 6 1} means that, take \texttt{3} from the upvalue indexes and get value from \texttt{R(-2)} (3-5) over the base pointer and store it to \texttt{R(6)}. -When the closure escapes from the original function with \texttt{RETURN} instruction, inserted \texttt{CLOSE} instruction before the return, which causes, Actual upvalues are moved from stack to somewhere on the heap memory. This Upvalue may be referenced from multiple locations when using nested closures, and some form of garbage collection needed to free memory after it is no longer referred. +When the closure escapes from the original function with the \\ \texttt{RETURN} instruction, a \texttt{CLOSE} instruction is inserted before the return, which causes the actual upvalues to be moved from the stack to the heap memory. Such an Upvalue may be referenced from multiple locations when nested closures are used, and some form of garbage collection is needed to free the memory after it is no longer referred to.
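The life cycle of an upvalue can be sketched as follows (a simplified model in which the heap slot after \texttt{CLOSE} is reference-counted with \texttt{Rc}; the actual VM may manage this memory differently):

```rust
use std::cell::RefCell;
use std::rc::Rc;

/// An upvalue is either still on the stack (open, addressed by a
/// negative offset from the base pointer) or moved to the heap by
/// the CLOSE instruction (closed, shared and reference-counted).
enum Upvalue {
    Open(isize),              // offset below the base pointer
    Closed(Rc<RefCell<f64>>), // heap slot after CLOSE
}

/// GETUPVALUE: read through either representation.
fn read_upvalue(uv: &Upvalue, stack: &[f64], base: usize) -> f64 {
    match uv {
        Upvalue::Open(off) => stack[(base as isize + off) as usize],
        Upvalue::Closed(slot) => *slot.borrow(),
    }
}

/// CLOSE: move the stack value into a shared heap slot so the
/// closure can outlive its parent's stack frame.
fn close_upvalue(uv: &mut Upvalue, stack: &[f64], base: usize) {
    if let Upvalue::Open(off) = uv {
        let v = stack[(base as isize + *off) as usize];
        *uv = Upvalue::Closed(Rc::new(RefCell::new(v)));
    }
}
```

After \texttt{close\_upvalue}, the value stays readable even once the parent's stack slots are gone; dropping the last \texttt{Rc} frees the slot, standing in for the garbage collection mentioned above.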
\begin{figure*}[ht] \centerline{\includegraphics[width=\hsize]{lambdammm_vm_structure}} \caption{\label{fig:vmstructure}{\it Overview of the virtual machine, program and instantiated closures for \lambdammm.}} \end{figure*} -\section{Discussion} +\section{Discussion: Different behaviour depending on the position of \texttt{let} binding} \label{sec:discussion} -\subsection{Internal State} - -\subsection{Different behaviour depending on the position of \texttt{let} binding} - Because mimium treats functions that have internal states which change over time, when higher-order functions are used, there is a counterintuitive behavior compared to general functional programming languages. -An example is the higher-order function \texttt{filterbank}, which duplicates \texttt{filter} function \texttt{N} times parametrically, and mix them together. +An example is the higher-order function \texttt{filterbank}, which duplicates a \texttt{filter} function \texttt{N} times parametrically and adds the results together.
Listing \ref{lst:filterbank_bad} is an example of incorrect code, and Listing \ref{lst:filterbank_good} is an example of code that behaves correctly. The difference between Listing \ref{lst:filterbank_bad} and Listing \ref{lst:filterbank_good} is whether the recursive calls in the \texttt{filterbank} function are written directly in the inner function to be returned, or written with a \texttt{let} binding outside the inner function\footnote[2]{In the previous specification of mimium in \cite{matsuura2021a}, the binding of a new variable and destructive assignment shared the same syntax (\texttt{x = a}), but the syntax for variable binding has changed to use the \texttt{let} keyword.}. Similarly, in the \texttt{dsp} function that will be called by the audio driver in mimium, the difference is whether the filterbank function is executed inside \texttt{dsp} or bound with \texttt{let} once in the global context. -In the case of normal functional language, if all the functions used in a composition do not contain destructive assignments, the calculation process will not change even if the variable bound by \texttt{let} is manually replaced with its term(beta reduction), as in the conversion from Listing.\ref{lst:filterbank_good} to Listing.\ref{lst:filterbank_bad}. +In the case of a normal functional language, if all the functions used in a composition do not contain destructive assignments, the calculation process will not change even if the variable bound by \texttt{let} is manually replaced with its term (beta reduction), as in the conversion from Listing \ref{lst:filterbank_good} to Listing \ref{lst:filterbank_bad}. -But in mimium, there are two major stages of evaluation: the code is evaluated in the global environment (finalization of the signal processing graph) at first, then the dsp function is repeatedly executed (actual signal processing), and the function may involve implicit internal state updates.
Therefore, even though the code does not include destructive assignments, the recursive execution of the \texttt{filterbank} function is performed only once in Listing.\ref{lst:filterbank_good} for the evaluation of the global environment, whereas in Listing.\ref{lst:filterbank_bad}, every sample the dsp function is executed, the recursive function is executed and a closure is generated. Since the initialization of the internal state in the closure is performed at the time of closure allocation, in the example of Listing\ref{lst:filterbank_bad}, the internal state of the closure after the evaluation of \texttt{filterbank} is reset at each time step. +But in mimium, there are two major stages of evaluation: stage 0, where the code is first evaluated in the global environment (concretizing the signal processing graph), and stage 1, where the \texttt{dsp} function is repeatedly executed (actual signal processing) and may involve implicit internal state updates. Therefore, even though the code does not include destructive assignments, the recursive execution of the \texttt{filterbank} function is performed only once in Listing \ref{lst:filterbank_good}, during the evaluation of the global environment, whereas in Listing \ref{lst:filterbank_bad} the recursive function is executed and a closure is generated every sample the \texttt{dsp} function runs. Since the initialization of the internal state in a closure is performed at the time of closure allocation, in the example of Listing \ref{lst:filterbank_bad} the internal state of the closure obtained by evaluating \texttt{filterbank} is reset at each time step. \begin{lstlisting}[float,floatplacement=H,label=lst:filterbank_bad,language=Rust,caption=\it Wrong example of the code that duplicate filter parametrically.]
fn bandpass(x,freq){ @@ -405,12 +400,12 @@ fn bandpass(x,freq){ fn filterbank(n,filter){ if (n>0){ |x,freq| filter(x,freq+n*100) - + filterbank(n-1,filter)(x,freq) + + filterbank(n-1,filter)(x,freq) }else{ 0 } } -fn dsp(){ +fn dsp(){ //called by audio driver. filterbank(3,bandpass) } \end{lstlisting} @@ -430,10 +425,33 @@ fn dsp(){ myfilter(x,1000) } \end{lstlisting} +\begin{lstlisting}[float,floatplacement=H,label=lst:filterbank_multi,language=Rust,caption=\it Example of filterbank function using multi-stage computation in a future specification of mimium.] + fn filterbank(n,filter){ + .< if (n>0){ + |x,freq| filter(x,freq+n*100) + + ~filterbank(n-1,filter)(x,freq) + }else{ + 0 + } >. + } + fn dsp(){ + ~filterbank(3,bandpass) (x,1000) + } +\end{lstlisting} \section{Conclusions} +This means that major compiler optimization techniques such as constant folding and function inlining cannot simply be applied. Those optimizations should be performed after the evaluation of the global context and before evaluating the \texttt{dsp} function. + +To solve this situation, a distinction between terms evaluated in the global context (stage 0) and terms used in the actual signal processing (stage 1) can be introduced into the type system. This can be realized with Multi-Stage Computation\cite{Taha1997}. Listing \ref{lst:filterbank_multi} is an example of the \texttt{filterbank} code using MetaOCaml's syntax: \texttt{.< e >.} generates a program to be evaluated in the next stage, and \texttt{\textasciitilde term} embeds a term evaluated at the previous stage. + +The \texttt{filterbank} function is evaluated in stage 0 while embedding its recursive call by using \texttt{\textasciitilde}. + +This multi-stage computation code has a single semantics for both the generative construction of the signal graph and the execution of the signal processing, in contrast to Faust, which has two different semantics: the term-rewriting macro and the block diagram algebra (BDA). + +
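The behavioural difference between the two listings can be sketched in Rust, modelling a stateful unit as a closure over a mutable accumulator (hypothetical names; in mimium the state allocation is implicit rather than explicit as here):

```rust
/// Build a stateful unit: a counter whose internal state is
/// initialized when the closure is allocated.
fn make_counter() -> impl FnMut() -> i64 {
    let mut state = 0; // internal state, reset on every allocation
    move || {
        state += 1;
        state
    }
}

/// Wrong shape: the closure is re-allocated inside the per-sample
/// call, so its state is re-initialized at each time step.
fn dsp_bad() -> i64 {
    let mut f = make_counter();
    f()
}

/// Correct shape: allocate once in the "global context" (stage 0),
/// then call the same closure every sample (stage 1).
fn run(samples: usize) -> (i64, i64) {
    let mut f = make_counter(); // bound once, like the `let` version
    let mut good = 0;
    let mut bad = 0;
    for _ in 0..samples {
        good = f();      // state persists across samples
        bad = dsp_bad(); // state is reset every sample
    }
    (good, bad)
}
```

After three samples, the once-allocated counter has advanced to 3, while the re-allocated one is stuck at 1, mirroring the reset of closure state described above.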
diff --git a/src/ref.bib b/src/ref.bib index b94d883..f91a7ed 100644 --- a/src/ref.bib +++ b/src/ref.bib @@ -112,6 +112,19 @@ abstract = {Despite using them every day, most software engineers know little about how programming languages are designed and implemented. For many, their only experience with that corner of computer science was a terrifying "compilers" class that they suffered through in undergrad and tried to blot from their memory as soon as they had scribbled their last NFA to DFA conversion on the final exam.That fearsome reputation belies a field that is rich with useful techniques and not so difficult as some of its practitioners might have you believe. A better understanding of how programming languages are built will make you a stronger software engineer and teach you concepts and data structures you'll use the rest of your coding days. You might even have fun.This book teaches you everything you need to know to implement a full-featured, efficient scripting language. You'll learn both high-level concepts around parsing and semantics and gritty details like bytecode representation and garbage collection. Your brain will light up with new ideas, and your hands will get dirty and calloused.Starting from main(), you will build a language that features rich syntax, dynamic typing, garbage collection, lexical scope, first-class functions, closures, classes, and inheritance.
All packed into a few thousand lines of clean, fast code that you thoroughly understand because you wrote each one yourself.}, isbn = {978-0-9905829-3-9}, } - - - +@article{Taha1997, + title = {Multi-{{Stage Programming}} with {{Explicit Annotations}}}, + author = {Taha, Walid and Sheard, Tim}, + year = {1997}, + month = dec, + journal = {SIGPLAN Notices (ACM Special Interest Group on Programming Languages)}, + volume = {32}, + number = {12}, + pages = {203--214}, + publisher = {Association for Computing Machinery (ACM)}, + issn = {03621340}, + doi = {10.1145/258994.259019}, + urldate = {2021-05-12}, + abstract = {We introduce MetaML, a statically-typed multi-stage programming language extending Nielson and Nielson's two stage notation to an arbitrary number of stages. MetaML extends previous work by introducing four distinct staging annotations which generalize those published previously [25, 12, 7, 6] We give a static semantics in which type checking is done once and for all before the first stage, and a dynamic semantics which introduces a new concept of cross-stage persistence, which requires that variables available in any stage are also available in all future stages. We illustrate that staging is a manual form of binding time analysis. We explain why, even in the presence of automatic binding time analysis, explicit annotations are useful, especially for programs with more than two stages. A thesis of this paper is that multi-stage languages are useful as programming languages in their own right, and should support features that make it possible for programmers to write staged computations without significantly changing their normal programming style. 
To illustrate this we provide a simple three stage example, and an extended two-stage example elaborating a number of practical issues.}, } diff --git a/src/semantics.tex b/src/semantics.tex index 4f67e89..b83d514 100644 --- a/src/semantics.tex +++ b/src/semantics.tex @@ -2,36 +2,51 @@ \centering \begin{tabular}{ccc} - \begin{minipage}[b]{4.5cm} + \begin{minipage}[b]{5cm} \centering \begin{equation*} \frac{E^n \vdash e_1 \Downarrow v_1\ E^{n-v_1} \vdash e_2 \Downarrow v_2}{E^n \vdash\ delay\ e_1\ e_2 \Downarrow v_2} \end{equation*}\textrm{E-DELAY} \end{minipage} & - \begin{minipage}[b]{4.5cm} + \begin{minipage}[b]{5cm} \centering \begin{equation*} \frac{}{E^n \vdash\ \lambda x.e \Downarrow cls(\lambda x.e , E^n) } \end{equation*}\textrm{E-LAM} - \end{minipage}& - - - \begin{minipage}[b]{6cm} - \centering - \begin{equation*} - \frac{ E^{n-1} \vdash e \Downarrow v_1\ E^n, x \mapsto v_1 \vdash e \Downarrow v_2 }{E^n \vdash\ feed\ x\ e \Downarrow v_2} - \end{equation*}\textrm{E-FEED} - \end{minipage} - \\ - \multicolumn{3}{c}{ - \begin{minipage}[b]{8cm} \centering \begin{equation*} - \frac{E^n \vdash e_1 \Downarrow cls(\lambda x_c.e_c, E^n_c) E^n \vdash e_2 \Downarrow v_2\ E^n_c, x_c \mapsto v_2 \vdash e_c \Downarrow v }{E^n \vdash\ e_1\ e_2 \Downarrow v } - \end{equation*}\textrm{E-APP} + \frac{ E^{n-1} \vdash e \Downarrow v_1\ E^n, x \mapsto v_1 \vdash e \Downarrow v_2 }{E^n \vdash\ feed\ x\ e \Downarrow v_2} + \end{equation*}\textrm{E-FEED} \end{minipage} - } - + \\ + \begin{minipage}[b]{5cm} + \centering + \begin{equation*} + \frac{E^n \vdash e_c \Downarrow v_c \quad v_c > 0 \quad E^n \vdash e_t \Downarrow v }{E^n \vdash\ if\ (e_c)\ e_t\ else\ e_e \Downarrow v } + \end{equation*}\textrm{E-IFTRUE} + \end{minipage} + & + \begin{minipage}[b]{5cm} + \centering
+ \begin{equation*} + \frac{E^n \vdash e_c \Downarrow v_c \quad v_c \leq 0 \quad E^n \vdash e_e \Downarrow v }{E^n \vdash\ if\ (e_c)\ e_t\ else\ e_e \Downarrow v } + \end{equation*}\textrm{E-IFFALSE} + \end{minipage} + & + \ + \\ + \multicolumn{3}{c}{ + \begin{minipage}[b]{15cm} + \centering + \begin{equation*} + \frac{E^n \vdash e_1 \Downarrow cls(\lambda x_c.e_c, E^n_c) \quad E^n \vdash e_2 \Downarrow v_2\ E^n_c,\ x_c \mapsto v_2 \vdash e_c \Downarrow v }{E^n \vdash\ e_1\ e_2 \Downarrow v } + \end{equation*}\textrm{E-APP} + \end{minipage} + } + \\ + \end{tabular} \caption{\label{fig:semantics}{\it Excerpt of the big-step semantics of $\lambda_{mmm}$.}} \end{figure*} \ No newline at end of file diff --git a/src/syntax.tex b/src/syntax.tex index 2aa08b4..e1960d3 100644 --- a/src/syntax.tex +++ b/src/syntax.tex @@ -47,6 +47,7 @@ e \; ::=& \ x &x \in {v_p} \ & [value]\\ |& \ if\; (e_c)\; e_t\; else\; e_e \; & & [if] \\ |& \ delay\; n \; e_1 \; e_2 &n \in \mathbb{N}\ & [delay]\\ |& \ feed \; x.e & & [feed]\\ + |& ... & & \\ %%|& \quad (e_1,e_2) \quad & [product]\\ %%|& \quad \pi_n e \quad n\in \mathbb{N},\; n>0 \quad & [project]\\ %%|& \quad \langle e \rangle \quad & [code] \\ @@ -58,5 +59,5 @@ e \; ::=& \ x &x \in {v_p} \ & [value]\\ } \end{tabular} - \caption{\label{fig:syntax_v}{\it Definition of Types, Values and Terms of the $\lambda_{mmm}$.}} + \caption{\label{fig:syntax_v}{\it Definition of Types, Values and Terms of the $\lambda_{mmm}$ (basic arithmetics are omitted).}} \end{figure} \ No newline at end of file