Table of Contents
- A note
- world hello
- Russ Forth: a simple Forth in Ruby
- Russ-Forth Core
- Forth Thinking
- Book of interest: Stack Computers - the new wave
- References and Interesting Information
- Author information and links
Read-Eval-Print-Love is an N-monthly zine of original content and curation about the Lisp family of programming languages and little-languages in general.
One day a hopeful macrolyte approached the master macrologist Pi Mu and asked a question:
“Master, I’ve studied with the immortal Brodie, but have left him in search of answers – and as such I would like to study under your tutelage.” Upon hearing this Pi Mu asked, “Brodie is a true master, so why is it that you’ve decided to leave him?” The student answered, “His teachings are obvious and I’ve transcended them.” Pi Mu glared intently and asked yet another question, “What of his teachings were obvious?” The student responded, “I once asked him what the fundamental principle of Forth was, but I did not like the answer.” Pi Mu, losing patience, asked gruffly, “But what did he say?” To which the student responded, “His answer was ‘Moore seeks for layers.’” Pi Mu then stated, “Well, that was an excellent answer, but do you know its meaning?” “Of course,” responded the student, “it means that Moore, as the creator of Forth, would naturally build his programs in a layered fashion.” He added, “for in the same way I too can achieve Forth enlightenment through the practice of layered development.” Pi Mu then laughed, “Oh, you didn’t understand it at all!” The student, taken aback, then asked, “Well, how would you answer?” Pi Mu responded, “Go on and ask me.” The student then repeated his question, “What is the fundamental principle of Forth?” Almost immediately Pi Mu responded, “Moore seeks for layers.”
The student became enlightened.
The Forth programming language was created by Charles (Chuck) Moore as the basis for a personal programming environment. After working on this system for some time in isolation, Moore decided to extract a Forth implementation for use in the National Radio Astronomy Observatory’s 11-meter radio telescope in 1971. Imagine this for a moment if you please. Chuck Moore, some time in 1968, found that the computing environments of the day were insufficient for his needs and decided to create his own from whole cloth. Over the next 2-3 years Moore developed a computing environment suited to his philosophy of programmatic style. A crucial component of this system was the programming language forming its underpinnings: Forth.
By modern standards, Forth is an odd duck. First, the language is stack-centric, meaning that every operation in the language deals either implicitly or explicitly with a stack-based programming metaphor. This is decidedly different from a programming language like C that may use an underlying stack as an implementation detail to pass function activations. In Forth, a program consisting of a single token can demonstrate how central the stack is to its programming model:
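The one-token program in question is simply the numeral `9` (as described below). A sketch of its effect, modeling the data stack as a plain Ruby array (the modeling is mine, not RForth itself):

```ruby
# The data stack, modeled as a Ruby array; the rightmost element is the top.
stack = []

# Interpreting the one-token Forth program "9": the token is a command
# meaning "push the integer 9 onto the data stack."
stack << 9

p stack  # => [9]
```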
The Forth program above is interesting in that it illustrates the conceptual underpinnings of the language. In most modern programming languages, a program containing only the number `9` would be quite pointless, but in Forth such a program causes an effect in the Forth environment itself. That is, the number `9` in Forth isn’t just a constant integer; instead it’s a command to the Forth environment that states “push the integer 9 onto the data stack.” The data stack in Forth is a global scratchpad used by operations to retrieve and store the results of functions.
A larger Forth program is shown below:
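Given the description that follows (two pushes and a division leaving `4` behind), the program is presumably `36 9 /`. A Ruby sketch of its effect on the data stack:

```ruby
stack = []

stack << 36              # "36" pushes 36
stack << 9               # "9" pushes 9
denom = stack.pop        # "/" pops the denominator first...
numer = stack.pop        # ...then pops the numerator...
stack << numer / denom   # ...and pushes the quotient

p stack  # => [4]
```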
In addition to the commands to push `36` and `9` onto the stack, there’s a new command called `/` that states: “pop something off of the stack and make it the denominator, then pop another thing off and make that the numerator, and push the resulting quotient onto the stack.” At the end of this program you’ll not be surprised to learn that the number `4` is the only element on the data stack.
Before I go any further I’d like to talk a little bit about the word “concatenative.” Very often when discussing Forth and languages of its ilk (the pedigree being Factor, Joy, Open Firmware, and PostScript) you’ll see this term bandied about, but what does it really mean? From a simplistic perspective, the term concatenative means “programs are built by laying code words side by side,” but again this is nearly meaningless. Instead, let me create a very simple little Clojure function that might help clarify the matter:
The `postfix` function is fairly straight-forward. It takes a number of arguments (`words`) that represent either functions or non-functions. A sample call looks as follows:
It then walks the `words` and when it finds a function it calls it with the top two elements of the “stack” (the `words` list) and plops the result back in place; otherwise it just plops the `word` as-is. Eventually, what comes out of `postfix` is something that represents a stack, a la:
Now imagine that the vector `[5 4 *]` represents a valid program:
I could literally concatenate another valid program onto this vector in order to create a new valid program:
Likewise, I could concatenate an operator into that result to create yet another valid program:
As shown, programs built with concatenative languages are composed via the act of “laying” operators, or even whole programs, side by side. Additionally, like in natural language, Forth words can directly replace longer phrases. Observe:
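The program being observed is `5 dup *` (quoted just below); a Ruby sketch of its run against the data stack:

```ruby
stack = []

stack << 5                      # "5"   - push 5
stack << stack.last             # "dup" - duplicate the top of the stack
stack << stack.pop * stack.pop  # "*"   - pop two things, push the product

p stack  # => [25]
```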
When the program above is run, the data stack will look as follows:
That is, the only element on the stack will be the number 25 because the program `5 dup *` means “duplicate the thing on the top of the stack and push the duplicate on top of it, then take two things off of the stack (two 5s) and multiply them together, pushing the result back onto the stack.” The program fragment `dup *` is therefore equivalent to the act of squaring the number on the top of the data stack. Indeed, I could use those two words to define a new word, as shown below:
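In typical Forth syntax that definition reads `: sq dup * ;`. The same composition can be sketched in Ruby, modeling the words as lambdas over a shared stack (the lambda names are mine):

```ruby
stack = []

dup  = -> { stack << stack.last }             # [x -- x x]
mult = -> { stack << stack.pop * stack.pop }  # [x y -- x*y]
sq   = -> { dup.call; mult.call }             # [x -- x*x], i.e. "dup *"

stack << 5
sq.call
p stack  # => [25]
```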
I’ll talk more about the `:` operator later, but for now you only need to understand that it creates a new word named `sq` consisting of the two words `dup *`. Now, I can directly replace the original longer phrase `dup *` with `sq` to keep the original behavior:
This is a common work-flow in Forth-like languages:
- Write some code to perform a task
- Identify some fragment that might be generally useful
- Extract it and give it a name
- Replace the relevant bits with the new word
There is nothing special about this work-flow except that it’s indicative of a philosophical underpinning of Forth-like languages: one that drives code toward a highly layered implementation. I’ll dive into this angle later.
In this installment of Read-Eval-Print-Love I’ll discuss Forth by first dissecting a small Ruby implementation called “Russ-Forth”1, which is a simplified Forth (e.g. no return stack). Once the run-time for Russ-Forth is in place I’ll implement the bare bones of a core library and talk about stack-shufflers, combinators, and the like. I’ll then talk about the ideas in the amazing book “Thinking Forth” by Leo Brodie, which describes an interesting bottom-up approach to system design.
While Forth is an interesting programming language in its own right, the language itself is a philosophical statement by Chuck Moore (Brodie 1984) about how programs should be constructed. Of course, I’ll have fun with a tiny Forth-like implementation, but the point of this installment is to establish a context for describing the Forth philosophy. I’m personally of the opinion that far more important than the programming language itself, the idea of Forth is key.
- The name is an homage to my friend and colleague Russ Olsen, who posted the original code to me as a comment on one of my GoodReads reviews. Thanks Russ!↩
Russ Forth: a simple Forth in Ruby
This wouldn’t be an issue of Read-Eval-Print-Love without an implementation of a programming language, so here we are again. This time I’m going to implement a little version of Forth that looks a little bit like a Forth that you might find out in the wild,1 but that is greatly simplified.
The implementation of a small Forth interpreter like Russ Forth (RForth) is fairly compact, as you’ll see in the next few pages. Aside from the core functions, RForth consists only of a lexicon to store word (function) definitions:
A compiler to convert those word definitions to core constructs – in this case, Ruby constructs:
And a token reader to handle the raw text input:
While these three pieces will provide the core functionality of the Forth scanning and execution, RForth needs more than that. Indeed, while not comprehensive, I will implement a small set of core functions found in many concatenative languages:
I’ll talk more about these core words later. For now let me walk through the implementation of the main RForth driver class.
Any good Ruby program starts with a `class` definition, and this one is no different:2
The `RussForth` class will control the text scanning, compilation, and execution interpreter pipeline. On initialization the class is instantiated with the input and output channels, which are set to standard in and out (console I/O) by default.
As mentioned before, three key components of the system are implemented as a set of three classes: `Reader`, `Lexicon`, and `Compiler`. However, the core metaphor of the system is implemented as a simple Ruby array:
However, rather than being used simply as an array, the `@stack` property will be managed as, well, a stack. You see, Forth is a stack-centric concatenative programming language.3 All arguments into and results from words pass from one to the other via the system stack. Since I talked about this in the introduction I won’t belabor the point here, but I might talk more about it in situ whenever appropriate.
The final stage of the initialization process is the population of the lexicon with core words. I’ll discuss this more later.
The RForth evaluation model is very simple. Indeed, much of the simplicity is owed to the fact that it farms off much of the execution logic to Ruby itself. That is, after the read and compilation phases RForth words are left as Ruby `Proc` instances awaiting execution, which are stored in the `Lexicon`. Programmatically this looks as follows:
As you might have noticed, the entry retrieved is actually a hash containing a `Proc`. The reason for this level of indirection is that there’s a little extra information pertaining to the word that’s stored in the hash returned from `resolve_word`. The implementation of `resolve_word` is as follows:
First of all, if a word is in the `Lexicon` then `resolve_word` simply returns it as is. However, if the word is not found then RForth needs to perform some careful logic:
Since this is a Ruby-based interpreter, I thought it would be fun to handle Ruby symbols. Therefore, the first check is to see if the read word starts with a colon. If it does, then the word is just the symbol itself – there’s a good reason for this, as I’ll talk about in a moment. But first, if the word is not a symbol then it might be a number:
Without going into too much detail about `to_number` (I’ll inflict that on you in a moment), the point is that should the word in fact be a number then it too is simply the number itself. So in the cases where a word resolves to a symbol or a number, the RForth interpreter has to handle them in an interesting way:
That is, if the word resolved to either a number or a symbol then RForth builds a Ruby `Proc` that does nothing but push it onto the top of the stack. You’ll sometimes hear that everything in Forth (and concatenative languages in general) is a function (word), including scalars like numbers and the like. Indeed, the rationale is sound: it’s often important to perform calculations involving numeric constants and symbols, and since that is the case it makes sense to leverage the paradigm of the language itself to add such constructs. While not every concatenative language utilizes this trick, instead preferring more performant options, I chose to because I’m drawn to the purity of the idea.
Finally, if the word is neither in the lexicon, a symbol, nor a number, then of course it’s nothing at all. Real quick, allow me to show the abominable `to_number` implementation below:
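I won't pretend to know just how abominable the original is, but a tame sketch of a `to_number` in this spirit (yielding an `Integer`, a `Float`, or `nil`) might look like:

```ruby
# A guess at the spirit of to_number: try an integer first, then a float,
# and answer nil for anything that isn't numeric at all.
def to_number(word)
  Integer(word)
rescue ArgumentError
  begin
    Float(word)
  rescue ArgumentError
    nil
  end
end
```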
Rather than going into detail about this method, I’d prefer to turn a blind eye and pretend that it never happened. Instead, allow me to say that the above is all that there is to the evaluation model of a simple Forth like RForth. However, while it might make sense that numbers and symbols are self-pushing words, there are more complicated words comprising the RForth run-time. First, there are internal words implemented as Ruby methods and while I’ll get to those eventually, I’m more interested in user-defined words supported by the language itself. Of course, the Devil’s in the details and so in the next few sections I’ll dig into the (sometimes gnarly) details.
Reading and defining words
I glossed over the action of processing user-defined words, so let me take a few paragraphs to go back and discuss it a bit. Rather than dig into the details of tokenization and character-by-character lexing, I’ll keep things at a higher level.
Speaking of char-by-char lexing, the bulk of that task is farmed out to the `Reader` class, which I’ll discuss later. It’s assumed that `Reader#read_word` “knows” what it’s doing and will return a token corresponding to the name of the word. If you recall, `read_and_define_word` was initiated when the `resolve_word` method encountered a colon. The colon starts a user-defined word and a semicolon ends it – in between are just regular RForth words:
Once the constituent words comprising the body of the user-defined word are read, they are then compiled into a Ruby `Proc`:
A method that’s closely related to `read_and_define_word` is shown below:
You’ll notice that the basic structure of `read_quotation` is almost exactly like that of `read_and_define_word` except for what happens to the `Proc` returned from `Compiler#compile_words`. That is, rather than define the word in the `@lexicon`, it’s stored directly onto the `@stack` itself. For the Lisp programmers in the audience, you might realize that this is the Forth analogue to a lambda or anonymous function. The utility of these anonymous words, called “quotations” in Forth, will be shown later when I talk about RForth combinators.
I’ll also talk a little about the compilation step later, but for now let me move on to the final parts of the `RForth` class: driving and seeding.
I’ve already shown the basic parts of the system, but it’s the `run` method that ties it all together:
It couldn’t possibly be easier. Read, Eval, Loop. However, it gets a little more interesting when a language proper is built on top of this simple frame. Commercial-grade Forth systems typically come with hundreds of general-purpose and domain-specific4 words out of the box. However I’m only going to talk about a handful of word classes for the purpose of illustration, including:
I’ll dive into each of these soon, but for now all that I want to say is that the words are defined as Ruby modules and imported to the RForth lexicon at launch time:
Each of the built-in words is implemented as a method contained in a module and aliased for use in the system:
However, there is a special kind of word supported by most Forth implementations called immediate words and I’ll talk about those later as well.
The basic structure of the RForth system is fairly straight-forward as most of the capability is farmed out to supporting classes, each of which I’ll discuss presently in turn.
The `Lexicon` class just implements a fancy hash-map:5
If the `Lexicon` is a map then it should behave like one:6
What’s stored in the map is a k/v pair of name/descriptor. The descriptor is yet another map that just describes the word and provides its implementation:
Immediate words, or words that are executed during the read phase, differ from regular words only by a flag:
Words can have aliases, which are defined in terms of previously stored words in the `Lexicon`. The utility of this will become apparent when I talk about the `import_words_from` method below.
Nothing about the `Lexicon` has been terribly interesting so far (which is why I glossed over the details). However, its implementation is slightly more interesting in the way that it imports base words from Ruby modules. That is, the core words in RForth are implemented within Ruby modules and actively imported at start time to load the core `Lexicon` instance, as shown below:
As shown, the public methods defined in a passed module are closed over individually and stored in the `Lexicon` using their defined names. Perhaps now you see the utility of the `alias_word` method defined above. That is, the limitations of Ruby method naming shouldn’t bleed into the RForth word name definitions, and so while it’s straight-forward to import a module method directly, the presence of `alias_word` allows me to fix up the name to something more appropriate to RForth. You’ll see this in action later, but for now I’ll talk about the class responsible for the read phase of the RForth implementation.
Forth languages, in general, do not have a read phase like you’d find in Lisp languages. Instead, the term `Reader` is meant to encapsulate the logic for an RForth phase pertaining to the reading of words from an I/O stream. This is more like a classical language scanner except that it performs no level of token classification. Instead, since everything in RForth is a word, everything scanned is assumed to be a word. This is perhaps a naive implementation, but it works for my purposes.
The `Reader` class is initialized with an input stream. There’s nothing surprising about that fact, perhaps. As shown in the introduction, RForth words are always separated by white-space (i.e. tab, space, newline, etc.), so the `Reader` needs a way to identify when it encounters some:
Regular expressions to the rescue!7
This is all fine and good, but the real meat of the word identification is shown below:
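The details below are my own sketch rather than the original listing, but a char-by-char `read_word` in this style might look like:

```ruby
require 'stringio'

# A sketch of a Reader: accumulate characters into a word until
# whitespace (or the end of input) happens along.
class Reader
  def initialize(io)
    @io = io
  end

  def read_word
    word = ''
    while (ch = @io.getc)
      if ch =~ /\s/
        break unless word.empty?  # skip leading whitespace
      else
        word << ch
      end
    end
    word.empty? ? nil : word
  end
end

r = Reader.new(StringIO.new("5 dup *"))
p r.read_word  # => "5"
p r.read_word  # => "dup"
p r.read_word  # => "*"
p r.read_word  # => nil
```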
That little nasty bit of code is (unfortunately) obfuscating a simple bit of logic that can be stated simply as “append a bunch of characters to a word until a space or `nil` happens along.” Once one of those separators is encountered, RForth just assumes that it’s found a word and returns it:
And that’s the entirety of the `Reader` class. This could be made more robust at the expense of clarity, but I decided to keep it simple for now. Frankly, only a few odd people (like myself) care about the minute details of lexical scanning, so by assuming that every token was potentially a word I was able to forego a whole swath of complexity.
I’ll follow this same complexity conservation pattern in the next section in the implementation of the RForth compiler.
I agonized long and hard over whether I would refer to this class as a compiler or a more terrible word (perhaps one starting with a ‘T’), but in the end my biases show through. The `Compiler` class exists more to stick to the layered phase separation of RForth than to be a traditional compiler. Indeed, the implementation is such that RForth words comprising a program are aggregated into Ruby `Proc` instances for execution. This is not problematic until you realize that the `Compiler` translates RForth programs into sequences of `Proc` calls which then call other `Proc` instances that then bottom out as Ruby module methods. That is, the compilation phase for RForth is very tightly coupled to the RForth run-time environment. In other words, the `Compiler` class is simply responsible for translating RForth words into sequences of pre-existing Ruby method calls. That said, it still might be interesting to explore a little.
The compiler will take an array of words and transform them, collecting the results into an array of `Proc` objects:
First, each word is expected to exist in the given context (probably a `RussForth` instance); an error is thrown if it can’t be resolved.
If the resolved word happens to be an immediate word then it’s executed… well… immediately. Immediate words are the Forth way to define compile-time effects and I’ll talk more about that later.
However, if the resolved word is a normal word then its block attribute (the part that performs the action) is stored for later use:
That is, the result of the `compile_words` method is a Ruby `Proc` that closes over the `actions` and executes them each in turn at some point in the future. This is the whole mechanism behind how user-defined words are “compiled” into Ruby `Proc` instances.
The conceptual framework for a Forth-like language is surprisingly small. With even a moderately expressive language one can create a Forth-like in very few lines of code, requiring very little conceptual bulk. That said, even though I’ve shown the guts of RForth, I’ve done very little to describe the kinds of programs and patterns that fall out of a Forth-like language. The rest of this installment will be devoted to just that topic with a particular eye towards demonstrating how such a simple language can lead to deep implications on how programs are reasoned about and constructed.
- If you’re interested in actually using a Forth for personal or professional use, then there are far better options than Russ Forth. At the end of this installment I’ll list a few.↩
- Save for perhaps the “good.”
```ruby
class RussForth
  def initialize(s_in = $stdin, s_out = $stdout)
    @s_in  = s_in
    @s_out = s_out
  end
end
```
- Most concatenative programming languages use a stack as the core metaphor for computation, but it’s not necessarily the case that concatenative languages must do so. Indeed, if I ever get around to writing an installment of Read-Eval-Print-Love about concatenative languages in general then I’ll build a little interpreter for a language that does not use a stack at all. That’s a task for another day however.↩
- I’ll dive deeper into Forth and domain specific programming later in this installment.
```ruby
include Verbs::Shufflers
include Verbs::StackOps
include Math::Arithmetic
include Verbs::Comparators
include Verbs::Io
```
- Traditionally, Forth lexicons are implemented as linked lists to avoid overwriting previous word implementations.↩
- I’ve always liked Ruby’s capability to override the `[]` array look-up, as it’s allowed me to use Ruby to think in terms of associative data rather than merely object-centric structures.↩
- I’ll avoid the temptation to add a certain famous quote about regular expressions. Instead, I will say that pound for pound regexes are amongst the most powerful programming abstractions going. That said, the density of information packed into a regex leads me to minimize their footprint in my own programs, so as to reduce the occurrences of cursing for my 2-weeks-later self.↩
All of the infrastructure shown above is fine and dandy, but by itself it doesn’t do anything. To add some capability requires seeding the run-time environment with some core words. As I alluded to earlier, the core way to define words is to write them as Ruby module methods which are then mixed in at start time.
Before I start, I should make it clear that the implementation methods must adhere to a certain standard:
- All core words deal with an implicit `@stack`
That is, RForth works off of a single globally accessible stack defined in the `RussForth` class, which will be accessible to the core words once they’re mixed in at run-time. If I were creating a stack-based language in something like Clojure (something more extensive than I showed earlier) then I would make the implementation quite different. That said, while allowing access to a shared property feels a little nasty, there are some advantages for clarity and ease of testing.
One potential problem with dealing in a global stack, and this is a problem even in many stack-based languages, is that it’s not immediately clear just by reading the code what the stack effects might be. Therefore, while discussing the implementations herein I’ll use a variant of the stack effect annotations available in the Factor programming language. For example, the stack effect `[x -- x x]` means that a word takes a stack, duplicates the top element, and pushes the duplicate back onto the stack. Another example is `[x y -- y x]`, which refers to the action of popping the first two elements off of the stack and pushing them back on in reverse order. The general form of the annotation is `[<before> -- <after>]`. When reading stack effect annotations, remember that the top of the stack is always the rightmost element of both the `<before>` and `<after>` segments.
Enough already – let’s begin with the fun stuff.
A quick example – “pop”
To start I’ll show a very simple stack operator named “.” that does one simple thing – it pops the top element from the stack and prints it out. The word lives in a module devoted to stack operations and for my purpose is the only word there:
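A sketch of what such a module might look like (the method name `dot` and the harness class standing in for `RussForth` are my assumptions):

```ruby
require 'stringio'

module Verbs
  module StackOps
    # "." - [x --] pop the top of the data stack and print it
    def dot
      @s_out.puts(@stack.pop)
    end
  end
end

# A minimal harness standing in for the RussForth class:
class MiniForth
  include Verbs::StackOps
  attr_reader :stack, :s_out
  def initialize
    @stack = []
    @s_out = StringIO.new
  end
end

f = MiniForth.new
f.stack << 2
f.dot
print f.s_out.string  # prints "2"
```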
The `.` word works as follows:
As the element `2` is popped off of the stack it’s printed. The implementation of `.` is trivial, but I wanted to start with it just to show the layout of the implementation modules, the use of `@stack`, and how I’ll show examples.
Every good (and bad) programming language needs mathematical operators and RForth is no exception. All of the RForth math words live in the same module:
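A sketch of a few of those words as module methods (the method names and harness are assumptions), each popping its operands and pushing a result; note that for division the top of the stack is the denominator:

```ruby
module Math
  module Arithmetic
    # [x y -- x+y]
    def add
      @stack << @stack.pop + @stack.pop
    end

    # [x y -- x*y]
    def mult
      @stack << @stack.pop * @stack.pop
    end

    # [x y -- x/y] - the top of the stack is the denominator
    def divide
      denom = @stack.pop
      @stack << @stack.pop / denom
    end
  end
end

class MiniMath
  include Math::Arithmetic
  attr_reader :stack
  def initialize; @stack = []; end
end

m = MiniMath.new
m.stack.push(36, 9)
m.divide
p m.stack  # => [4]
```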
I won’t belabor the implementations too much, as you can probably infer what they do just by reading them. For example, here are the implementations for
The implementations for words like `divide` are only slightly more involved:
For the sake of completeness I’ll just show the code for one more word below:
There’s not much to show here, but I’ll eventually make use of these mathematical words later. Slightly more interesting words are implemented next.
The set of comparison operators live in a common module:
For my purposes I’ll need a relatively small set of comparators, starting with an equality word:
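A sketch of such an equality word, later aliased to `=` (the pushing of a Ruby boolean, rather than a Forth-style flag, is my assumption):

```ruby
module Verbs
  module Comparators
    # [x y -- bool] pop two elements and push whether they were equal
    def eq
      @stack << (@stack.pop == @stack.pop)
    end
  end
end

class MiniCmp
  include Verbs::Comparators
  attr_reader :stack
  def initialize; @stack = []; end
end

c = MiniCmp.new
c.stack.push(2, 2)
c.eq
p c.stack  # => [true]
```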
As previously shown, the `eq` method is aliased to the `=` word and works as follows:
You’ll notice that the values compared, the two `2`s, were consumed from the stack by `=`. This is common in Forth-like languages as it’s expected that most operations will use the stack as a scratchpad for computation, wiping and writing constantly as they go. There are some Forth-like languages that also provide variable declarations for lexically-scoped use and reference, but those will not be used in RForth. The implementation for `<>` is very similar, as you might expect:
Likewise, for logical comparisons:
And that’s all for comparators for now.
Before I dig into the meatier words, I want to take a few moments to show the implementation of a few useful words related to I/O, all implemented in a common module:
The simplest I/O word is one that prints an arbitrary character:
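That word is `emit` (named later in this installment): it pops a character code off the stack and prints the corresponding character. A sketch, with the harness assumed:

```ruby
require 'stringio'

module Verbs
  module Io
    # emit - [c --] pop a character code and print it as a character
    def emit
      @s_out.print(@stack.pop.chr)
    end
  end
end

class MiniIo
  include Verbs::Io
  attr_reader :stack, :s_out
  def initialize
    @stack = []
    @s_out = StringIO.new
  end
end

io = MiniIo.new
io.stack << 42
io.emit
print io.s_out.string  # prints "*"
```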
Both can be used to implement a word named `.S` for inspecting the state of the stack:
The `.S` word prints a little hat on the top end of the stack to show the direction that the elements will be popped off.
Now that I’ve gotten these out of the way, I’ll now dive into the more interesting topic of stack shuffling.
An interesting set of core words implemented in RForth are the “stack shufflers.” In a nutshell, shufflers take elements off the stack and (perhaps) put them back on in various different configurations. All of the shufflers will reside in a common module:
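The shufflers discussed below can be sketched as one module (the exact method bodies are my assumptions; `dup` is spelled `dup_top` here to sidestep Ruby's own `Object#dup`, and would be aliased in the lexicon):

```ruby
module Verbs
  module Shufflers
    def drop              # [x --]
      @stack.pop
    end

    def dup_top           # [x -- x x] (aliased to "dup")
      @stack << @stack.last
    end

    def swap              # [x y -- y x]
      @stack.concat([@stack.pop, @stack.pop])
    end

    def rot               # [x y z -- y z x]
      @stack << @stack.slice!(-3)
    end

    def over              # [x y -- x y x]
      @stack << @stack[-2]
    end
  end
end

class MiniShuffle
  include Verbs::Shufflers
  attr_reader :stack
  def initialize; @stack = []; end
end

s = MiniShuffle.new
s.stack.push(1, 2, 3)
s.rot
p s.stack  # => [2, 3, 1]
```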
The easiest possible shuffler is called `drop`; it takes a stack and just pops the top, throwing it away. The `drop` word works as follows:
The nice thing about Ruby arrays is that they can be treated just like stacks, so the use of `Array#pop` is pretty clear. Likewise, the `<<` (append to end) operator is analogous to a stack push, so a word named `dup` that just duplicates the top element is implemented in a straight-forward way:
The `dup` word works as follows:
One other nicety that Ruby provides is the array literal notation `[...]`, which allows me to define the `swap` word via array concatenation:
Since the last element in the `@stack` refers to the top element, I was able to use the sequencing of the `#pop` operation to build a temporary array with the swapped order in place. The `swap` word works as follows:
I can use a similar technique as `swap` for the implementation of `rot` (rotate) below:
The `rot` word works as follows:
A word analogous to `rot` is called `over`; it takes the element under the top, duplicates it, and then pushes the duplicate onto the top of the stack. The `over` word works as follows:
And that’s all for now. I’ll show more later when I get into user-defined words and combinators.
The final set of core words that I’ll talk about are the combinators. In short, combinators are a set of words that incorporate the use of other words to perform some operation. All of the combinators are implemented in a common module:
The first combinator word that I’ll talk about is named `apply`; it takes a `Proc` and just calls it for its stack effects. The `apply` word works as follows:
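A sketch of `apply` (the module name and harness are assumptions): it pops a `Proc` off the stack and calls it, letting the `Proc` do whatever it likes to the rest of the stack.

```ruby
module Verbs
  module Combinators
    # apply - [quot --] pop a Proc and call it for its stack effects
    def apply
      @stack.pop.call
    end
  end
end

class MiniCombin
  include Verbs::Combinators
  attr_reader :stack
  def initialize; @stack = []; end
end

c = MiniCombin.new
c.stack.push(3, 4)
c.stack << -> { c.stack << c.stack.pop * c.stack.pop }  # the quotation [ * ]
c.apply
p c.stack  # => [12]
```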
A more involved variant of `apply` is called `dip`; it takes a `Proc` and an element on the stack and then applies that `Proc` to the rest of the stack, finally pushing the saved element back onto the stack:
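A sketch of `dip`, using a local `stash` as the text describes (the harness is an assumption):

```ruby
module Verbs
  module Combinators
    # dip - [x quot -- x] pop a quotation and an element, run the
    # quotation against the rest of the stack, then restore the element
    def dip
      quot  = @stack.pop
      stash = @stack.pop
      quot.call
      @stack << stash
    end
  end
end

class MiniDip
  include Verbs::Combinators
  attr_reader :stack
  def initialize; @stack = []; end
end

d = MiniDip.new
d.stack.push(3, 4, 2)
d.stack << -> { d.stack << d.stack.pop * d.stack.pop }  # the quotation [ * ]
d.dip
p d.stack  # => [12, 2]
```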
The `dip` word is an interesting little piece of business because of the way that it uses the `stash` variable to hold a piece of data for later pushing. As an added bonus, together with `swap` it forms a primitive base for implementing control flow structures (Spiewak 2008), but I’ll only touch on that tangentially.
Industrial-strength Forth implementations provide a stashing mechanism in the language itself via something known as the “return stack.” The return stack is a structure used implicitly by Forth run-times as a place to store and retrieve the maze of pointers within and between nested words (Noble 1992). However, while the return stack is an internal run-time structure, Forth programmers happily use it as needed as a place to store values temporarily (as they need to be gone before a word finishes execution). To push and pop values from the return stack, Forth implementations (often) provide a pair of operators, `>r` and `r>`. Therefore, the implementation of `dip` in a “real Forth” would look like the following:
You would read this implementation as:
1. swap the top two main stack elements
2. store the top of the main stack onto the return stack
3. execute the quotation on the top of the main stack
4. push the top of the return stack onto the main stack
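Those four steps can be mimicked in Ruby with an explicit array standing in for the return stack (all of the names here are my own):

```ruby
stack  = []   # the main data stack
rstack = []   # a stand-in for the return stack

swap   = -> { stack.concat([stack.pop, stack.pop]) }  # step 1
to_r   = -> { rstack << stack.pop }                   # step 2: >r
call_q = -> { stack.pop.call }                        # step 3: run the quotation
from_r = -> { stack << rstack.pop }                   # step 4: r>

mult_quot = -> { stack << stack.pop * stack.pop }     # the quotation [ * ]

stack.push(3, 4, 2, mult_quot)
swap.call    # [3, 4, quot, 2]
to_r.call    # [3, 4, quot]     rstack: [2]
call_q.call  # [12]
from_r.call  # [12, 2]
p stack  # => [12, 2]
```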
As you’ve probably already noticed, RussForth does not have a return stack, as one wasn’t strictly needed. Adding something analogous to a return stack wouldn’t be too difficult, but I leave that as an exercise for the reader.
The implementation of `dip` is not too complicated, but it could use some illustration to really help explain what’s happening. First, let me say that you might instinctively think to type the following:
To mean “perform multiplication and then push the `2` back onto the stack.” The problem is that the default action of the read phase is to push words onto the stack, but `dip` requires a `Proc` as one of its arguments. However, if you recall, the way to push a `Proc` is to use the quotation form, which means that the code should be written as:
Now that the `*` word is encapsulated within a `Proc` residing at the top of the stack, `dip` can grab a hold of it and invoke the `#call` method on it.
Graphically, the stack prior to the execution of `dip` looks as follows:
After `dip` calls the `Proc` implementing the quotation, the stack will look like:
Meanwhile, `dip` will hold onto the `2`, which will be pushed back onto the stack after the quotation is executed:
This is pretty cool because it illustrates in a simple way the use of quotations in RForth and how they’re used by combinators to do some snazzy stuff. However, this installment is not about combinators, so while I’m tempted to continue, I’d like to take a step back and talk a little bit about building up a language.
Now that I’ve put in place some core functions I can start to build a core library using them. This section will not present a fully fleshed out core, but I’ll go over some interesting examples to give a feel for the kinds of things that comprise a Forth-like run-time.
Before I start I want to point out that earlier I showed the implementation of an immediate word `\\` that ate characters until it found a newline. This is of course an implementation of a comment word:
The last comment line above is how I will display execution results such as I/O print outs and the like.
With that out of the way, the first thing that I’d like to create is a simple utility to print out a newline. If you recall, I implemented an `emit` word that I could use to implement just such a user-defined utility:
So now, if we wanted to use this in a program to print a message it’s as simple as:
A more useful Forth word is one called `nip` that is used to pop the element immediately under the top of the stack:
The `nip` word works as follows:
Another useful word is known as `tuck`, which stashes a copy of the top of the stack under the next element down:
The `tuck` word works as follows:
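To make the stack effects concrete, here is a hypothetical Array-backed sketch of both words (top of the stack at the end of the Array); the Ruby names simply mirror the Forth words:

```ruby
# Array-backed sketches of nip ( a b -- b ) and tuck ( a b -- b a b ).
def nip(stack)
  top = stack.pop
  stack.pop                    # discard the element under the old top
  stack.push(top)
end

def tuck(stack)
  top   = stack.pop
  under = stack.pop
  stack.push(top, under, top)  # a copy of the top goes beneath the next element
end

s = [1, 2, 3]
nip(s)    # s is now [1, 3]

t = [1, 2, 3]
tuck(t)   # t is now [1, 3, 2, 3]
```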
A couple of arithmetical operators that might be useful are implemented below:
Perhaps it’s obvious what these words do, but just in case it’s not, observe the following:
This is pretty straightforward, but I’m leading up to something more interesting than the implementation of a core library. In the next section I’ll expand on this thread of thought and show how the manner in which Forth programs are constructed can inform the construction of programs in any other language.
In Forth-like languages, it’s often the case that programs are built from constituent parts such as those implemented so far. This may not seem especially noteworthy given that the same can be said of any programming language. However, Forth programmers often strive to write their programs in a bottom-up fashion, building layers of fluency informing all layers above. In other words, each layer in a Forth program is a special-purpose language specifically geared to support the layers above it in the program. However, it’s not the case that Forth programs are one-off deals. Instead, it’s a point of pride amongst Forth programmers that code be aggressively reusable (Pountain 1987). These two concerns, I think, form a powerful approach to programming that I’ll discuss (all too) briefly in this section.
In August 2010 I stumbled on a book that deeply influenced my personal programming style and my views on how programs are constructed. The book in question was Thinking Forth by Leo Brodie (Brodie 1984) and upon reading it I immediately put it into my own “personal pantheon” of influential programming books (along with SICP, AMOP, Object-Oriented Software Construction, Smalltalk Best Practice Patterns, and Programmer’s Guide to the 1802). Up until that point most of the programming books that I had read focused on code, but Thinking Forth spent a significant portion of its pages discussing the thought processes behind program construction.
While I’ve read other books that touched on programming from an angle of thoughtfulness, Thinking Forth was the first that I read that drew an essential marriage between the language in use and the thought processes advocated by it. Bear in mind that this was a deeper relationship than one that you might find in a programming language’s idioms. Indeed, while idioms are often the result of discovery, they are not something that you might consider philosophy. Instead, the very idea of Forth necessarily informs the base tenet of “factoring” as discussed in the book.
Factoring in Forth is akin to “Refactoring” in common parlance. The main difference is that refactoring is an activity applied to an existing code-base while factoring is applied to an evolving code-base. That is, refactoring happens after the fact while factoring happens in situ. Thinking Forth therefore identifies and discusses a number of benchmarks while coding that signal the need for factoring, including, but not limited to:
- Factor when complexity tickles your conscious limits
- Factor when you’re able to elucidate a name for something
- Factor at the point when you feel that you need a comment
- Factor the moment you start repeating yourself
- Factor when you need to hide detail
- Factor when your command set (API) grows too large
- Don’t factor idioms
Using RussForth I’ll touch briefly on a few of these points below.
Factor when complexity tickles your conscious limits
The book puts stock in George Miller’s famous “Magical Number Seven…” paper (Miller, 1956) when discussing the notion of code’s cognitive load. The gist is that people can generally hold only 7 (± 2) pieces of information in their heads, so it behooves the programmer to write code that falls under those bounds. Because Forth is a stack-based language, following its ongoing stack manipulation from one word to another can be quite confusing. Granted, I’m very far from even a novice in Forth-like languages, so such a statement should be taken with a grain of salt. That said, let’s explore a simple operation that one could conceivably find useful. That is, I could imagine a need to apply two separate quotations to a single value (Childers, 2016). Take a stack of the following form:
An expanded description of this sequence could be stated as, “multiply 3 by 12 and push it onto the stack and then multiply 4 by 12 and push it onto the stack too.” To accomplish this requires a series of stack manipulations of the form:
Breaking this down, let’s see what’s happening. To start, the stack looks as follows:
The application of `rot` causes the following:
Next, the application of `dup` causes the following:
Next, the application of `rot` again causes the following:
Next, the application of `apply` causes the following:
Now I want to use `swap rot` to move that result out of the way and prep the next operation, causing the following:
Then the next use of `apply` almost gets me there:
But since I wanted the stack to look a certain way, a final `swap` is needed, leaving:
And that’s it! Unfortunately, between the beginning and the end of putting this sequence of words together I’ve forgotten what’s happened with the stack. I’ve used 8 words to perform this task, but most of those are raw stack shufflers that on their own are somewhat opaque to the task at hand. The heart of the task lies in the `apply` words. That is, the shuffling serves entirely to set up the calls to `apply`, which are informative in their own right. That said, I can replace some of the words with another `dip` to trim a word and still maintain the focal point of information:
I’ll avoid walking through the stack manipulations again, but I’ll talk just a moment about what this achieves. First, it trims a word and gets the whole sequence into that “7 things” range. Second, while trimming it also keeps the focused setup/process information quanta:
With more built-in features, this sequence could be further simplified, but I hope that my point is understood.
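For the curious, the entire eight-word shuffle can be simulated with the same Array-as-stack model used earlier; the initial stack layout and the contents of the two quotations below are my own assumptions for illustration:

```ruby
# Simulating the sequence rot dup rot apply swap rot apply swap with an
# Array-backed stack (top at the end). Words are modeled as Procs.
WORDS = {
  rot:   ->(s) { s.push(s.delete_at(-3)) },   # ( a b c -- b c a )
  dup:   ->(s) { s.push(s.last) },            # ( a -- a a )
  swap:  ->(s) { s.push(*s.pop(2).reverse) }, # ( a b -- b a )
  apply: ->(s) { s.pop.call(s) }              # run the quotation on top
}

times3 = ->(s) { s.push(3 * s.pop) }          # the [ 3 * ] quotation
times4 = ->(s) { s.push(4 * s.pop) }          # the [ 4 * ] quotation

stack = [12, times3, times4]
%i[rot dup rot apply swap rot apply swap].each { |w| WORDS[w].call(stack) }
stack  # => [36, 48]
```

Tracing the Array through each word makes it easy to see how much of the sequence is pure shuffling in service of the two `apply` calls.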
Factor when you’re able to elucidate a name for something
It seems that a process for applying two quotations to a single value might be generally useful. Once it becomes clear that a fragment of code is generally useful, it follows that it should have a name.¹ The name² that I might choose comes straight from the description given earlier, “apply two separate quotations to a single value”:
And now the original fragment becomes:
Which leaves the stack in the condition that we saw earlier.
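In the same Ruby Array-as-stack sketch, naming the fragment might look like the following; `bi_apply` is a placeholder name of my own choosing, not necessarily the word used here:

```ruby
# Packaging the "apply two quotations to one value" fragment as a single
# named word in the Array-backed sketch. Top of the stack is at the end.
def bi_apply(stack)
  q2    = stack.pop            # topmost quotation
  q1    = stack.pop
  value = stack.pop
  [q1, q2].each do |q|         # run each quotation against a copy of the value
    stack.push(value)
    q.call(stack)
  end
end

stack = [12, ->(s) { s.push(3 * s.pop) }, ->(s) { s.push(4 * s.pop) }]
bi_apply(stack)
stack  # => [36, 48]
```

All of the raw shuffling now hides behind one well-named word.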
Factor at the point when you feel that you need a comment
Imagine that I wanted a version of `dip` that used its stored value in the quotation rather than extracting it for later pushing. I would need to use stack shufflers to duplicate the stored value so that it could be used in the quotation:
Graphically this would look like the following:
With the application of `over swap` the following would occur:
After `dip` the final stack would be:
As shown, it’s tempting to put a comment into the original to explain how the use of `over swap` works to weave the stored value back into the quotation application, but there’s a better way. That is, it’s better to factor out a new combinator instead:
The comment is now manifested as a reusable (and testable) word.³
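In the Array-backed sketch, that new combinator might be written as below; the concatenative literature calls this combinator `sip` (Kirby, 2002), and the Ruby rendering is my own:

```ruby
# A sketch of the factored combinator (over swap dip) against an
# Array-backed stack: run the quotation against the stored value, then
# push the original value back on top.
def sip(stack)
  quot  = stack.pop      # the quotation on top
  value = stack.last     # "over": remember the value beneath it
  quot.call(stack)       # the quotation consumes the value...
  stack.push(value)      # ...and the original is pushed back afterward
end

stack = [2, ->(s) { s.push(s.pop * 10) }]
sip(stack)
stack  # => [20, 2]

# With an empty quotation, sip degenerates to dup:
also = [5, ->(s) {}]
sip(also)
also   # => [5, 5]
```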
Don’t factor idioms
I alluded earlier to the fact that idioms do not operate at the philosophical level of a given programming language. Instead, idioms are natural growths occurring on a programming language. That said, very often the forms of idioms are directly influenced by the philosophical underpinnings of a language. Like idioms in natural languages, programming idioms should be viewed as atomic units with meaning quite independent of their constituent parts. Therefore, it’s important to leave idioms intact and resist the urge to factor them out in whole. Hardcore Forth implementations like arrayForth (based on ColorForth), GForth, and Open Firmware have their own rich sets of idioms, but underneath those idioms lies the Forth philosophy – some more than others.
Forth is an astonishing programming language. The very design and (most) implementations revolve around the idea that there’s a conceptual distance between the ideas in your head and the code on the screen, and that distance should be as short as possible. Sadly, I simply cannot adequately cover the whole of the beauty of Forth thinking in this small space. There is so much going on in Brodie’s book that it made me dizzy while reading it. From thoughts on factoring, code organization, testing, DSLs, encapsulation, data-hiding, variable naming, word length, decomposition, and design, Thinking Forth is, ideas-per-page, unmatched in the realm of programming books.
- If I ever get the urge to write a “Programming for Buddhists” book then Forth (or some other concatenative language) is my choice for the language. There is a nice parallel between the ideas of rupa, Maya, and nama-rupa that would be a blast to write about.↩
- This word is typically called↩
- `[ ] sip` is equivalent to `dup`. (Kirby, 2002)↩
Book of interest: Stack Computers - the new wave
available at https://users.ece.cmu.edu/~koopman/stack_computers/
In 1989 or thereabouts, I suspect, the idea of a stack computer was indeed considered “new wave,” but today it’s quite quaint.¹ That said, Stack Computers - the new wave by Philip Koopman is well worth a read for the retro-computing-curious, if for no other reason than the bibliography. The book describes the architecture and run-time characteristics of a breed of computers being developed at the time, led by the Novix NC4016 chip. In addition to performing a survey of a few architectures, the book discusses the traps and pitfalls around programming those machines (there is a lot about Forth, as you can imagine). The book ends with some thoughtful discussion about the “future” of stack architectures, which I find particularly interesting. Something that I think would be a fun activity would be to explore that final chapter in depth and compare it to the actual evolution of stack architectures in the intervening years. Perhaps this sort of thing appeals to you too?
- This is in no way meant to disparage modern stack hardware such as the Green Arrays F18A or software stack machines like the JVM.↩
References and Interesting Information
- Brodie, Leo. 1987. Starting Forth
- Brodie, Leo. 1984. Thinking Forth: A language and philosophy for solving problems
- Noble, Julian. 1992. Scientific Forth: A modern language for scientific computing
- Pountain, Dick. 1987. Object-oriented Forth: Implementation of Data Structures
- Miller, George. 1956. “The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information”
- Childers, Charles. 2016. “Port of Retro’s combinator implementations to Forth” at https://gist.github.com/crcx/8060687
- Kirby, Brent. 2002. “The Theory of Concatenative Combinators” at http://tunes.org/~iepos/joy.html
- JonesForth - an implementation in literate ASM. https://github.com/kristopherjohnson/jonesforth
- “A Conversation with Manfred von Thun” at http://www.nsl.com/papers/interview.htm
- Spiewak, Daniel. 2008. The Joy of Concatenative Languages. http://www.codecommit.com/blog/cat/the-joy-of-concatenative-languages-part-1
Author information and links
Michael Fogus - a core contributor to Clojure and ClojureScript. Creator of programming source codes.
- me -at- fogus -dot- me
Discussion and information
Thanks go out to Russ Olsen for the original implementation of Russ Forth. Also, thanks to Karsten Schmidt, Shaun Gilchrist, Jeremy Sherman, Chas Emerick, and the inimitable Alan Eliasen for their feedback on earlier versions of this installment.