The tasks of designing a language and creating an interpreter are distinct, but practically, they are tightly coupled. The interpreter serves as a research vehicle for design choices. The language serves as an occasion for interpreter design.
The first interpreter I wrote was for a Forth-like, stack-based, postfix language. It was extremely simple, consisting of a tokenizer and a word evaluator, inspired by a description of the Open Firmware Forth interpreter. I got by with this minimal design for some months while I toyed around with language features and applications, particularly signal processing.
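That minimal design can be sketched in a few lines of Python. This is an illustration of the shape of the thing, not the original code; the word set and names here are my own.

```python
def tokenize(source):
    # Whitespace-delimited tokens, as in Forth.
    return source.split()

def evaluate(tokens):
    stack = []
    # The word evaluator: each word is a function on the stack.
    words = {
        "+": lambda s: s.append(s.pop() + s.pop()),
        "*": lambda s: s.append(s.pop() * s.pop()),
        "dup": lambda s: s.append(s[-1]),
    }
    for tok in tokens:
        if tok in words:
            words[tok](stack)
        else:
            stack.append(float(tok))  # any unknown token is assumed to be a number
    return stack

print(evaluate(tokenize("4 5 * 6 +")))  # [26.0]
```

A tokenizer, a dictionary of words, a stack: that really is the whole interpreter.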
To concatenate two signals in time, there was a binary operator, seq. To construct sequences of length greater than two, multiple seq calls could be chained:

x y seq z seq
It was quite effective. However, it didn’t scale well as the composition depth increased. It ate up more and more CPU as the program went on, eventually causing audio buffer underruns. (I’m skirting some other design details implicated in this problem.)
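A sketch of why the cost grows, with assumptions of mine layered on top: if a signal is modeled as a function from sample index to value, a binary seq builds a tree of closures, and reading one sample of a deeply nested chain dispatches through a closure per level of nesting. The representation is illustrative, not the original design.

```python
def constant(value, length):
    # A trivial signal: `length` samples of `value`.
    return (lambda n: value), length

def seq(a, b):
    (fa, la), (fb, lb) = a, b
    # Each seq adds one more closure layer between the caller
    # and the underlying samples.
    return (lambda n: fa(n) if n < la else fb(n - la)), la + lb

x = constant(1.0, 2)
y = constant(2.0, 2)
z = constant(3.0, 2)

# Postfix: x y seq z seq
s, length = seq(seq(x, y), z)
print([s(n) for n in range(length)])  # [1.0, 1.0, 2.0, 2.0, 3.0, 3.0]
```

With n signals composed this way, each sample read costs O(n) dispatches; a flat sequence would cost O(1) (or O(log n) with an index).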
What was needed? A way to construct flat sequences of arbitrary length. But how to do this in a language that has only binary operators? You need lists. And thus, you have to parse lists.
I could keep the same crude interpreter design. The opening bracket would have to be able to parse tokens. As I understand it, this is pretty close to how Forth interpreters actually work; every command word is capable of doing its own parsing. There’s no centralized piece of code that does all the parsing.
So, when the [ word is executed, it eats up tokens and converts them into some kind of list data structure. And when ] is encountered, the list is pushed onto the stack.
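In sketch form (Python again, details assumed by me), the [ word doing its own parsing looks like this: it pulls tokens from the same stream the evaluator is walking, until it hits ].

```python
def evaluate(tokens):
    stack = []
    it = iter(tokens)
    for tok in it:
        if tok == "[":
            # The [ word eats tokens itself; there is no
            # centralized parser.
            lst = []
            for inner in it:
                if inner == "]":
                    break
                lst.append(float(inner))  # literals only, for now
            stack.append(lst)
        elif tok == "+":
            stack.append(stack.pop() + stack.pop())
        else:
            stack.append(float(tok))
    return stack

print(evaluate("[ 1 2 3 ] [ 4 5 ] 7".split()))
# [[1.0, 2.0, 3.0], [4.0, 5.0], 7.0]
```

Note the `float(inner)` line: it silently assumes every token inside the brackets is a literal, which is exactly the question raised next.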
Obviously, for this to work, you have to provide other operators that work on list types: push, pop, concatenate, sort, reverse, and so on.
But wait. What happens if the list parser gets something other than a literal? What if it gets a +? Does it execute it? Does it “quote” it?
Clearly, one consequence of this interpreter design is poor separation of concerns. For each word in the language, you have to keep answering the same questions about syntax. At no point can you be assured that your program’s syntax is correct. You just have to run it and find out. And type safety, too, is not possible.
So! To validate syntax, we need a better grip on parsing. And to validate type safety, we have to devise a type system. Hence, my research shifted to static analysis.
This required getting a more mature understanding of parsing, because type systems have to work on abstract syntax trees, and my interpreter did not have such a thing. I later realized that the run time stack was doubling as the parser stack! This is what makes the design so compact. The evaluator was doubling as the type checker.
Presumably the easiest parsers to write are recursive descent parsers. But writing such a parser is greatly aided by first specifying the language’s grammar.
Postfix languages have a very simple grammar. The bulk of it is taken up by binary operators:
expr := literal | binopExpr
binopExpr := expr, expr, binop
binop := + | * | ...
Each production gets its own parsing function. But there’s a problem. This grammar has left recursion; specifically, the binopExpr production is left recursive. And such productions cannot be parsed by recursive descent!
You must realize that an RDP works by detecting the production case first. It has a conditional clause covering each possibility. Had we chosen to put binop on the left, this would be fine. But since it’s on the right, the parser won’t work.
The RDP is looking for a leading token to trigger the productions, and in a postfix language the productions can’t be triggered there. When you parse 4 5 * 6 +, you don’t know what kind of syntactic object you’re dealing with until the very end.
There’s a wrinkle, though. The list parsing actually can be done by recursive descent, because left brackets kick off the list productions. When you see [, you know, even before getting to the terminating ], that you’ve got a list.
So I settled on a design that combines the two approaches. Most productions are bottom-up, as in a conventional Forth interpreter. A few, like lists, are top-down. How does this work? It’s not so hard. The parser has a stack, which is used for parsing bottom-up. But it also contains cases that cause recursion over lists and other productions (perhaps definitions, which might start with def). When it sees things like numbers and arithmetic operators, all of that is handled by the stack. When it sees lists, it calls a list parser, which consumes from the token stream and returns a list expression. All expressions eventually end up on the stack.
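The hybrid scheme can be sketched as follows. The tuple-based AST representation is my own choice, made for illustration; the real design may differ.

```python
BINOPS = {"+", "*"}

def parse(tokens):
    return parse_expr(iter(tokens), stop=None)

def parse_expr(it, stop):
    stack = []
    for tok in it:
        if tok == stop:
            break
        if tok in BINOPS:
            # Bottom-up: reduce the top two expressions on the stack.
            rhs, lhs = stack.pop(), stack.pop()
            stack.append((tok, lhs, rhs))
        elif tok == "[":
            # Top-down: recurse until the matching ].
            stack.append(("list", parse_expr(it, stop="]")))
        else:
            stack.append(("lit", float(tok)))
    return stack

# "Evaluating" 1 2 + yields the summation expression, not 3.
print(parse("1 2 + [ 4 5 ] *".split()))
```

The same loop serves both roles: operators reduce on the stack, while bracketed forms are handed to a recursive call.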
Effectively, the parser constructs a symbolic expression (or a stack of them) mirroring the logical structure of the program. This is usually called an abstract syntax tree. I now realize that the parser has basically the same structure as the entire original interpreter; but instead of working with numbers and signals, it works with symbolic expressions. This can be viewed as a kind of lazy evaluation: “evaluating” 1 2 + doesn’t immediately return 3, but simply the summation expression.
Once you have a syntax tree, you’re ready to play with type systems.
Most pedagogy about type systems revolves around the λ-calculus and languages built on it: Lisp, ML, and Haskell. These languages are quite different from concatenative languages, particularly in their abstract syntax. Concatenative languages do not have variables, and thus no environment (at least, Joy has neither). And because type systems work on abstract syntax, they’re simply not modular with respect to language: every bit of syntax in the language has to have a rule (a type judgment) for determining its type.
So where much effort is spent teaching students how to determine the types of variables, expressions, and substitutions, most of this is not pertinent in a concatenative language. Instead of applications and abstractions we have compositions and quotations. Presumably, concatenative languages, for their lack of variables, are more akin to combinatory logic than to the λ-calculus. So one might ask, what about typed combinatory logics? Surely they have been studied, but you won’t find them in introductory treatises.
So, how do you implement a type system? My research led me to believe that type inference, that fancy feature of advanced functional languages, probably isn’t as hard to implement as it first seemed. But I don’t entirely know, yet; I’m still in the middle of it.
A type system can be implemented by means of a constraint network derived from the abstract syntax tree.
What is a constraint network? SICP gives a great example. It’s a system that can solve an equation in any direction.
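A toy version, in the spirit of SICP’s propagation-of-constraints example (the class names and structure here are mine): an adder constraint enforcing a + b = c that fills in whichever connector is still unknown.

```python
class Connector:
    def __init__(self):
        self.value = None
        self.constraints = []

    def set(self, value):
        if self.value is not None:
            return  # already known; a fuller version would check consistency
        self.value = value
        for c in self.constraints:
            c.propagate()

class Adder:
    # Enforces a + b = c, solvable in any direction.
    def __init__(self, a, b, c):
        self.a, self.b, self.c = a, b, c
        for conn in (a, b, c):
            conn.constraints.append(self)

    def propagate(self):
        a, b, c = self.a, self.b, self.c
        if a.value is not None and b.value is not None:
            c.set(a.value + b.value)
        elif a.value is not None and c.value is not None:
            b.set(c.value - a.value)
        elif b.value is not None and c.value is not None:
            a.set(c.value - b.value)

a, b, c = Connector(), Connector(), Connector()
Adder(a, b, c)
b.set(2)
c.set(5)
print(a.value)  # 3
```

Set any two connectors and the third is deduced; nothing about the network privileges one direction of solving over another, which is what makes it attractive for type inference.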
Discuss and submit patches for this page by emailing my public inbox. Please note the etiquette guidelines.
© 2023 Karl Schultheisz — source