1 Introduction

Dataflow is a method of implementing software that is very different from the prevailing Von Neumann method the software industry has been built on since its inception.

At the lowest level, dataflow is both a programming style and a way to manage parallelism. At the highest level, dataflow is an overarching architecture that can incorporate and coordinate other computational methods seamlessly.

Dataflow is a family of methods that all share one important trait: data is king. The arrival of data causes the system to activate; dataflow reacts to incoming data without having to be specifically told to do so. In traditional programming languages, the developer specifies exactly what the program will do at every moment.
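The idea that data arrival drives execution can be sketched in a few lines. This is a hypothetical illustration, not any particular library's API: a node wraps a piece of code and sits idle until a value is pushed to it, at which point it fires and forwards its result downstream.

```python
# Minimal, hypothetical sketch of data-driven activation: a node does
# nothing until data arrives, then its code fires automatically.
class Node:
    def __init__(self, fn):
        self.fn = fn          # the code this node runs
        self.listeners = []   # downstream nodes to forward results to

    def connect(self, other):
        self.listeners.append(other)
        return other

    def push(self, value):
        # The arrival of data is what triggers execution.
        result = self.fn(value)
        for node in self.listeners:
            node.push(result)

results = []
double = Node(lambda x: x * 2)
sink = Node(results.append)
double.connect(sink)

double.push(21)   # data arrives, the graph activates
```

Note that nothing in the program says "now run `double`, then run `sink`"; the call to `push` stands in for data arriving from the outside world.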

1.1 Overview of the Book

It is important to understand the concepts of dataflow, and not just the specifics of one library, so that you can quickly adapt to any new library you encounter. There are many varieties of dataflow with subtle differences, yet they all can be considered dataflow. Sometimes very slight changes in the dataflow implementation can drastically change how you design programs. This book will explain the whole landscape of dataflow.

You’ll learn dataflow from the software perspective: how it is an architecture and a way to think about building programs.

We’ll start by covering it in its simplest form, Pipeline Dataflow, and then move on to the many features and variations you’ll encounter in existing implementations. Three of the most common styles of dataflow are explained in detail using code from working implementations to bring theory into practice.
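As a first taste of the pipeline form mentioned above, a pipeline can be sketched with Python generators, where each stage is a small independent transform consuming the output of the previous one. This is an illustrative sketch only, not the implementation developed later in the book.

```python
# Pipeline dataflow sketch: data flows through a chain of stages,
# one item at a time, and each stage knows nothing about the others.
def numbers(limit):
    for i in range(limit):
        yield i

def square(stream):
    for x in stream:
        yield x * x

def keep_even(stream):
    for x in stream:
        if x % 2 == 0:
            yield x

pipeline = keep_even(square(numbers(6)))
print(list(pipeline))   # stages run lazily as data is pulled through
```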

You should already have a little programming experience under your belt but you don’t need to be an expert to understand what this book covers.

1.2 Reactive Programming is Dataflow

“Reactive Programming” is a term that has become popular recently, but its origin stretches back to at least 1985. The paper “On the Development of Reactive Systems” by David Harel and Amir Pnueli was the first to define “reactive systems”:

“Reactive systems… are repeatedly prompted by the outside world and their role is to continuously respond to external inputs.”1

The paper specifies that reactive systems are not restricted to software alone; the authors were discussing ways to develop any type of reactive system, software or hardware. A few years later, in 1989, Gérard Berry focused on the software aspects in his paper, “Real Time Programming: Special Purpose or General Purpose Languages”:

“It is convenient to distinguish roughly between three kinds of computer programs. Transformational programs compute results from a given set of inputs; typical examples are compilers or numerical computation programs. Interactive programs interact at their own speed with users or with other programs; from a user point of view a time-sharing system is interactive. Reactive programs also maintain a continuous interaction with their environment, but at a speed which is determined by the environment, not by the program itself. Interactive programs work at their own pace and mostly deal with communications, while reactive programs only work in response to external demands and mostly deal with accurate interrupt handling.

Real-time programs are usually reactive. However, there are reactive programs that are not usually considered as being real-time, such as protocols, system drivers or man-machine interface handlers. All reactive programs require a common programming style.

Complex applications usually require establishing cooperation between the three kinds of programs. For example, a programmer uses a man-machine interface involving menus, scroll bars and other reactive devices. The reactive interface permits him to tell the interactive operating systems to start transformational computations such as program compilations.”2

From the preceding quotes we can say that reactive programs…

  • Activate in response to external demands
  • Mostly deal with accurate interrupt handling
  • Operate at the rate of incoming data
  • Often work in cooperation with transformational and interactive aspects

The definition of dataflow is a little more vague. Any system where data moves between code units and triggers execution of that code could be called dataflow, and that includes reactive systems. Thus, I consider Reactive Programming to be a subset of dataflow, though a rather large one. In casual use, Reactive Programming is often a synonym for dataflow.

1.3 Von Neumann Architecture

The reason parallel programming is so hard is directly related to the design of the microprocessors that sit in all of our computers.

The Von Neumann architecture underlies the common microprocessors of today. It is often described as an architecture where data does not move: a global memory location is reserved and given a name (the variable name) to store the data. Its contents can be set or changed, but the location is always the same. The processor’s instructions, in general, deal with assigning values to memory locations and deciding which instruction executes next. A “program counter” contains the address of the next instruction to execute and is affected by statements like goto and if.

Our programs are simply statements to tell the microprocessor what to do… in excruciating detail. Any part of the program can mutate any memory location at any time.

In contrast, dataflow has the data move from one piece of code to another. There is no program counter to keep track of what should be executed next; data arrival triggers the code to execute. There is no need to worry about locks because the data is local and can only be accessed by the code it was sent to.
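The contrast can be made concrete with a small sketch (illustrative only, not a pattern from any particular library). The first half mutates one shared memory location from two threads and needs a lock; the second half sends the data to the code over a queue, so the running total is local to the worker that receives it and no lock is required.

```python
import queue
import threading

# Von Neumann style: two threads mutate one shared memory location,
# so a lock is required to avoid lost updates.
counter = 0
lock = threading.Lock()

def add_shared(n):
    global counter
    for _ in range(n):
        with lock:            # remove this and increments can be lost
            counter += 1

t1 = threading.Thread(target=add_shared, args=(500,))
t2 = threading.Thread(target=add_shared, args=(500,))
t1.start(); t2.start(); t1.join(); t2.join()

# Dataflow style: the data moves to the code. Each value is local to
# the worker that receives it, so no lock is needed.
inbox = queue.Queue()
results = []

def adder():
    total = 0                 # local state, visible only to this worker
    while True:
        value = inbox.get()   # block until data arrives
        if value is None:     # sentinel: the stream is finished
            break
        total += value
    results.append(total)

worker = threading.Thread(target=adder)
worker.start()
for _ in range(1000):
    inbox.put(1)
inbox.put(None)
worker.join()
```

Both versions count to 1000, but only the first forces the programmer to reason about which lines can interleave.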

The shared memory design of the Von Neumann architecture poses no problems for sequential, single-threaded programs. Parallel programs with multiple components trying to access a shared memory location, on the other hand, have forced us to use locks and other coordination methods with little success. Applications of this style are not scalable and put too much burden on developers to get things right. Unfortunately, we are probably stuck with Von Neumann processors for a long time: too much software has already been written for them, and it would be crazy to reproduce it all for a new architecture.

Even our programming languages are influenced by the Von Neumann architecture. Most current programming languages are based, directly or indirectly, on the C language, which is not much more than a prettier form of assembly language. Since C uses Von Neumann principles, by extension all derivative languages are also Von Neumann languages.

It seems our best hope is to emulate a parallel-friendly architecture on top of the Von Neumann base. That’s where this book comes in. All dataflow implementations that run on Von Neumann machines must translate dataflow techniques into Von Neumann techniques. I will show you how to build those systems and how to understand the ones you encounter.

1.4 Benefits of Dataflow

Some of the benefits of dataflow that we’ll cover in this book are:

  • Dataflow has an inherent ability for parallelization. It doesn’t guarantee parallelism, but makes it much easier.
  • Dataflow is responsive to changing data and can be used to automatically propagate GUI events to all observers.
  • Dataflow is a fix for “callback hell.”
  • Dataflow is a high-level coordination language that assists in combining different programming languages into one architecture. How nodes are programmed is entirely left up to the developer (although implementations may put constraints on it, the definition of dataflow does not). Dataflow can be used to combine code from distant locations and written in different languages into one application.

    “Contrary to what was popularly believed in the early 1980s, dataflow and Von Neumann techniques were not mutually exclusive and irreconcilable concepts, but simply the two extremes of a continuum of possible computer architectures.”3

  • For those visual thinkers out there, dataflow graphs lend themselves to graphical representations and manipulation. Yet there’s no requirement that it must be displayed graphically. Some dataflow languages are text only, some are visual and the rest allow both views.
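The “callback hell” point above can be illustrated with a sketch. The names here are hypothetical, not from a real library: `fetch` stands in for any asynchronous step, and `Flow` is a toy wrapper showing how a dataflow-style chain expresses the same sequence flatly.

```python
# Callback style: each step hides inside the previous one, and the
# nesting deepens with every additional step.
def fetch(value, callback):
    callback(value + 1)       # stand-in for an async operation

def nested(start, done):
    fetch(start, lambda a:
          fetch(a, lambda b:
                fetch(b, done)))

# Dataflow style: the same three steps as a flat chain of stages.
class Flow:
    def __init__(self, value):
        self.value = value

    def then(self, fn):
        return Flow(fn(self.value))

out = []
nested(0, out.append)
result = Flow(0).then(lambda x: x + 1) \
                .then(lambda x: x + 1) \
                .then(lambda x: x + 1)
out.append(result.value)
```

Both forms compute the same result; the difference is that the chain reads top to bottom as a pipeline of stages rather than as code buried three closures deep.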

1.5 History

The first description of dataflow techniques appeared in the 1961 paper “A Block Diagram Compiler”4, which developed a programming language (BLODI) to describe electronic circuits. The paper established the concepts of signal processing blocks communicating over inter-block links, encoded as a textual computer language. “BLODI was written to lighten the programming burden in problems concerning the simulation of signal processing devices”5.

In 1966 William Robert Sutherland wrote “The On-Line Graphical Specification of Computer Procedures”6, which heavily influenced the visual representation of dataflow. He proposed a purely visual programming language where the user interacted with the computer using the then-new technology of video displays and drawing tablets. Objects were drawn and then given a meaning. In a historical video, Sutherland is shown drawing a square-root block and then defining its operation by drawing its constituent blocks.

Jack B. Dennis continued the evolution of dataflow by explaining the exact steps that must be taken to execute a dataflow program in his 1974 paper, “First Version of a Data Flow Procedure Language”7. Many consider this paper to be the first definition of how a dataflow implementation should operate.

Dataflow has always been closely related to hardware. It is essentially the same way electronic engineers think about circuits, just in the form of a programming language. There have been many attempts to design processors based on dataflow as opposed to the common Von Neumann architecture. MIT’s Tagged Token architecture, the Manchester Prototype Dataflow Computer, Monsoon, and the WaveScalar architecture were all dataflow processor designs. They never gained the popularity of Intel’s Von Neumann microprocessors, not because they wouldn’t work, but because it was impossible for them to keep pace with the ever-increasing clock speeds that Intel, Zilog, and other mass-market manufacturers were able to provide.

From the 1990s until the early 2000s, less research went into dataflow because there was no pressing need: every 18 months a new, faster microprocessor came out, and no one felt a need to change the way things were done. Due to the increasingly graphical capabilities of computers, most of the advances during this period were concentrated in the visual aspects of dataflow. LabVIEW is one of the notable developments of this period.

Then we reached the limits of silicon. Starting around 2005, processor clock speeds stopped increasing, and the only option left was to add more cores to the chip. Parallelism became important again. Developers began looking around for solutions, creating a resurgence of interest in the 40-plus-year-old concepts of dataflow and reactive programming.

1.6 The Purpose of this Book

Dataflow is difficult to learn, not due to inherent complexity but due to the number of variations dataflow can take on and the lack of a standardized vocabulary. Take, for example, the most common of all elements of dataflow: the node. Some call it a node, while others call it a “block”, a “process”, an “action”, an “actor”, or any number of other names. Extend this renaming to the other basic elements and sometimes you are not sure what you are reading about. Half of the work in reading about dataflow is learning the author’s terminology.

Additionally, dataflow does not have a single set of features and capabilities. It is like ordering from a Chinese restaurant. Mix and match as you want but some things just don’t taste right together.

My goal is to describe all of the possible variations in easy to understand terms. Your goal should be to understand the general concepts of dataflow. Then you will be able to apply that knowledge to specific problems with possibly different semantics than those I describe. The purpose of this book is to give you the tools and understanding to work with a multitude of dataflow systems.