I’ve got a few projects rolling at the moment. The common thread is a pursuit of intelligibility.
I’m in the process of learning Pollen, a next-generation document publishing tool written in Racket, a dialect of Scheme. I intend to use it for my résumé, this blog (sorry, Hugo: you’re too complex), and the random typographical stuff I produce for events such as parties. Matthew Butterick’s Practical Typography has turned me into an amateur typographer, to the point that I now offer free, unsolicited design services to friends who ask me to improve their writing. Having been a reader of SICP since about 2014, I have some experience in Scheme, but Racket seems much more featureful; it’s to Scheme roughly what C++ is to C, or Scala is to Java. I’ll have to proceed with caution.
My multi-year exploration of concatenative programming languages has culminated in something like a framework for language experimentation. This is mostly a specification of interfaces; the algorithmic content is trivial, most of it being intentionally factored out.
These are the current goals:
Make the provided parser convenient to work with. It should be able to match patterns easily, somewhat as Go’s parser does (see the first sketch after this list). If that doesn’t go well, I may instead explore the minimal-syntax end of language design, which generally means avoiding parse-heavy “expressions” and treating everything as data, even identifiers. I’ve tried this approach before, though at a time when I was much more confused about how to do these things.
An extensible lexer. This feels somewhat trickier, but seems to have a lot in common with how a parser works: a set of rules determines the transformation from one lexer state to the next, and the rules are specified externally (see the second sketch below).
A set of reference implementations. I’ve made considerable progress implementing machine interfaces, such as environments (as persistent binary search trees), stacks (as persistent linked lists; see the third sketch below), and expressions (as ropes). Classical lambda-compose syntax (see the paper if you aren’t familiar with it) is also implemented. But this isn’t enough for practical work, so I’ve also created extensions. I/O is the elephant in the room, as with all value-based programming languages. And I still need to come up with a packaging model.
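For a sense of what matching patterns “like Go’s parser” can look like, here is how Go’s standard go/parser and go/ast packages let you parse a fragment and match node shapes with an ordinary type assertion (this is standard-library Go, not my framework):

```go
package main

import (
	"fmt"
	"go/ast"
	"go/parser"
)

func main() {
	// Parse a Go expression into an AST.
	expr, err := parser.ParseExpr("a + b*c")
	if err != nil {
		panic(err)
	}
	// Walk the tree; matching a node shape is just a type assertion.
	ast.Inspect(expr, func(n ast.Node) bool {
		if bin, ok := n.(*ast.BinaryExpr); ok {
			fmt.Println("binary op:", bin.Op) // prints + then *
		}
		return true
	})
}
```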
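And here is a minimal sketch of externally specified lexer rules, assuming a state-machine design in Go; every name is illustrative, not the framework’s actual API:

```go
package lex

import "regexp"

// State identifies the lexer's current mode, e.g. "default" or "in-string".
type State string

// Rule: in state From, input matching Pattern emits a token of kind
// Emit (or nothing, if Emit is empty) and moves the lexer to state To.
type Rule struct {
	From    State
	Pattern string // regular expression; should match at least one byte
	Emit    string
	To      State
}

// step applies the first rule whose pattern matches a prefix of the
// input in the current state. A real implementation would compile the
// patterns once, not on every call.
func step(rules []Rule, s State, input string) (to State, text, emit string, ok bool) {
	for _, r := range rules {
		if r.From != s {
			continue
		}
		re := regexp.MustCompile(`^(?:` + r.Pattern + `)`)
		if m := re.FindString(input); m != "" {
			return r.To, m, r.Emit, true
		}
	}
	return s, "", "", false
}

// The language definition supplies the table; nothing is hard-coded.
var stringRules = []Rule{
	{From: "default", Pattern: `"`, Emit: "", To: "in-string"},
	{From: "in-string", Pattern: `[^"]+`, Emit: "string", To: "in-string"},
	{From: "in-string", Pattern: `"`, Emit: "", To: "default"},
}
```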
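Finally, the persistent stack is the simplest of the reference structures to sketch. In Go, assuming generics, pushing never mutates; it just allocates a new head:

```go
package stack

// Stack is a persistent stack as a linked list; the empty stack is
// the nil pointer.
type Stack[T any] struct {
	head T
	tail *Stack[T]
}

// Push returns a new stack with v on top; the receiver is unchanged.
func (s *Stack[T]) Push(v T) *Stack[T] {
	return &Stack[T]{head: v, tail: s}
}

// Pop returns the top value and the rest of the stack; ok is false
// when the stack is empty.
func (s *Stack[T]) Pop() (v T, rest *Stack[T], ok bool) {
	if s == nil {
		return v, nil, false
	}
	return s.head, s.tail, true
}
```

Because old versions remain valid, callers can hold onto any intermediate stack state for free; that’s the appeal of using persistent structures throughout.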
The idea with this framework is to decouple the components as much as possible. A syntax implementation has to construct expressions, but it knows nothing about how expressions are implemented; thus, if ropes aren’t right, we can switch to something else without touching any syntax code.
Likewise, the lexer, parser, and machine know nothing about each other. Interfaces are defined by consumers. The parser and machine know very little about language syntax and semantics. That’s all implemented externally.
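To make “interfaces are defined by consumers” concrete, here is a sketch in Go; the method set is hypothetical, but the shape is the point: the parser declares what it needs, and the rope implementation (or any replacement) satisfies it implicitly:

```go
package syntax

// Expr is opaque to the parser: it constructs expressions but never
// inspects them.
type Expr any

// Builder is declared here, in the consuming package, not by the
// expression implementation. Method names are illustrative.
type Builder interface {
	Word(name string) Expr      // a bare identifier
	Quote(body Expr) Expr       // a quoted program
	Compose(parts ...Expr) Expr // concatenation of programs
}
```

Swapping ropes out then means writing one new type with these methods; no syntax code changes.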
There’s evidence that code readability contributes to software quality. But measuring readability is challenging, because many factors affect it: in practice, no simple metric (line length, say) will suffice; you need a model, and a model needs a human-derived dataset to be trained and validated on.
I’m a pretty enthusiastic Go programmer, but I’ve used Go in embedded systems, not web apps. I think creating a survey-driven readability model would be a good way to round out my skills.
I’ve worked out a rough database schema.
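A survey-driven dataset suggests a shape along these lines: code snippets, survey participants, and ratings joining the two. A sketch in Go, with every name and field illustrative rather than the actual schema:

```go
package model

import "time"

// Snippet is a piece of code shown to survey participants.
type Snippet struct {
	ID       int64
	Language string
	Source   string // the code itself
}

// Participant is a survey respondent.
type Participant struct {
	ID              int64
	YearsExperience int
}

// Rating is one participant's readability judgment of one snippet;
// (SnippetID, ParticipantID) would be the natural key.
type Rating struct {
	SnippetID     int64
	ParticipantID int64
	Score         int // e.g. a 1-5 Likert scale
	RatedAt       time.Time
}
```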
Discuss this page by emailing my public inbox. Please note the etiquette guidelines.
© 2024 Karl Schultheisz