A framework for Genetic Programming

Up until now, we’ve done everything by hand. You might notice that some of the work, such as generating a random population of trees and applying reproduction, crossover and mutation to build new generations, is applicable to every genetic programming task.

To help you solve genetic programming problems quickly, we’ve wrapped most of the necessary machinery into a library that abstracts these concerns away. We’re going to introduce that library by taking you through the implementation of a GP model for the Moonlander problem.

Both the generic library and the model to solve the moonlander problem are available on GitHub. They’re called moonlander-gp and moonlander-ast, respectively, and the locations can be found in the resources.

Overview of the framework

The operations that are provided by the framework are:

  • Generating a new population with random trees
  • Evolving the population using the three genetic operations:
    • Reproduction
    • Mutation
    • Crossover

To use the framework, you’ll need to supply two things of your own:

  • A set of AST node types, thereby specifying a grammar for the solution
  • A fitness function to score solutions.

Framework Traits

To work with ASTs in a generic way, we’ve had to implement quite a few traits in Rust. We’ll quickly take you through them, before explaining why you won’t need to implement most of them yourself.

An important constraint for many of these traits is that they cannot use concrete type information: we’ll pick random nodes at runtime, so the types cannot be known statically. Hence, some traits, like Copyable, duplicate similar traits like Clone that rely on static type information.

AstNode

This trait should be implemented by all types that represent a node in the abstract syntax tree. It is used to generically iterate over and construct trees.

Because Any::get_type_id() is not stable, it also contains a method to return an identifier per node type, so that we can match nodes of the same type in different trees during crossover.

pub trait AstNode: Any+Mutatable+Copyable+Sync {
    fn node_type(&self) -> usize;
    fn children(&self) -> Vec<&AstNode>;
    fn replace_child(&self, old_child: &AstNode,
                     new_child: &mut Option<Box<AstNode>>) -> Box<AstNode>;
}

RandNode

This trait should be implemented to construct a random subtree with a root of the given type. It is like rand::Rand, except that the generic Rng type parameter has been replaced with a reference to an Rng trait object, because we’ll need to call this from trait functions on AstNode, and trait object methods cannot be generic.

For the implementation of this trait, you can use the pick![] macro.

The function also takes an argument of type NodeWeights, which is used to control the height of trees that get generated. Whenever there’s a choice to be made between internal and leaf nodes in the tree, the weights on that object will be used to control the relative probabilities. This (roughly) controls the height of the generated subtree.

pub trait RandNode: Sized {
    fn rand(weights: NodeWeights, rng: &mut Rng) -> Self;
}
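To give an idea of how such weights steer the choice between internal and leaf nodes, here is a self-contained sketch. The field and method names are assumptions for illustration, not moonlander-gp’s actual API:

```rust
// Sketch: relative weights for choosing between internal and leaf
// nodes while generating a random subtree. Field and method names
// are illustrative assumptions; the real NodeWeights type lives in
// moonlander-gp.
struct NodeWeights {
    internal: u32,
    leaf: u32,
}

impl NodeWeights {
    /// Decide whether to generate an internal node, given a random
    /// roll in the range [0, internal + leaf).
    fn pick_internal(&self, roll: u32) -> bool {
        roll < self.internal
    }
}
```

Weighting internal nodes heavily tends to produce deeper trees; weighting leaf nodes heavily keeps generated subtrees shallow.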

Mutatable

This trait is used for mutation of a node or a subtree. There is a default implementation for all nodes that uses the RandNode implementation to replace the subtree with a new random subtree. By implementing this trait directly, you can also mutate nodes in a more controlled fashion, such as replacing the operator in a node that represents a binary operation while keeping the operands the same.

pub trait Mutatable {
    /// Return a mutation of this node
    fn mutate(&self, max_height: i32, rng: &mut Rng) -> Box<AstNode>;
}

Copyable

As mentioned before, this is the trait we use to construct a copy of a subtree. The implementation of this trait comes for free for every type that implements Clone.

pub trait Copyable {
    fn copy(&self) -> Box<AstNode>;
}
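The “free for every Clone type” part can be sketched with a blanket implementation. This is a simplified stand-in, not the moonlander-gp source: the trait names are placeholders, and it uses current Rust’s `dyn Trait` syntax for trait objects:

```rust
// Simplified stand-ins for the framework traits (assumptions for
// illustration; the real traits are in moonlander-gp).
trait Node {
    fn node_type(&self) -> usize;
}

trait Copyable {
    fn copy(&self) -> Box<dyn Node>;
}

// Blanket impl: Copyable comes for free for every type that is both
// Clone and a Node. Clone itself is not object-safe (it returns
// Self), which is why a separate trait returning a boxed trait
// object is needed.
impl<T: Clone + Node + 'static> Copyable for T {
    fn copy(&self) -> Box<dyn Node> {
        Box::new(self.clone())
    }
}
```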

impl_astnode!

Fortunately, you can get implementations of most of these traits for free for enums by using the impl_astnode! macro, which implements both AstNode and RandNode. That brings Mutatable along for free, and if you #[derive(Clone)] as well, you also get Copyable for free.

Hence, AST nodes and all of their traits can be defined as succinctly as:

#[derive(Clone)]
pub enum Condition {
    True,
    False,
    And(Box<Condition>, Box<Condition>),
    ...
    Less(Box<Expression>, Box<Expression>),
    LessEqual(Box<Expression>, Box<Expression>),
    ...
}

impl_astnode!(Condition, 0,
              leaf True(),
              leaf False(),
              int And(left, right),
              int Less(left, right),
              int LessEqual(left, right),
              ...);

This defines an enum and some of its cases. The impl_astnode! macro takes the enum name, a node type number, and the various case arms, describing whether they’re internal or leaf nodes, and generates the required traits.

Mark internal nodes with the prefix int, and leaf nodes with the prefix leaf. The generated implementation of RandNode uses this to pick either the provided weight for internal nodes or the one for leaf nodes.

Moonlander

In the moonlander problem, the objective is to evolve a program that will successfully control a spacecraft, landing it safely on the surface of a moon.

First, this requires that we have a spacecraft to control, a surface to land on, and physics to which the lander will be subject.

Simulation

As part of the evaluation of a program (i.e., calculating its fitness), we will run the program through a simulation of the landing. This simulation will have a model of physics. For instance, the moonlander will have a position (x, y) in space, somewhere above the surface. It will have a velocity (vx, vy), and every tick of the simulation, the position will be updated with the velocity. In the same way, the ship will have a rotation and a rotational velocity.

The moonlander can affect its velocities, and hence ultimately its position, by controlling the actuators on the ship. There are three actuators: the main thruster, which boosts the ship forward in its current direction, and the left and right attitude thrusters, which change the ship’s rotational velocity.

When the y position of the moonlander hits 0, signifying that it has reached the surface, we’ll evaluate the ship’s angle and vertical velocity, and deem it to have succeeded when the velocity is low enough and the angle is close enough to vertical.
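Put together, one tick of such a simulation and the landing check might look roughly like this. The field names mirror the sensor values described in this section, but the gravity constant and the success thresholds are made-up illustrations, not the values the moonlander-ast simulation actually uses:

```rust
// A minimal sketch of the simulation physics. Constants and
// thresholds are illustrative assumptions.
struct Ship {
    x: f64,  // horizontal position
    y: f64,  // vertical position (0 = surface)
    vx: f64, // horizontal velocity
    vy: f64, // vertical velocity
    o: f64,  // orientation (0.0 = pointing straight up)
    w: f64,  // rotational velocity
}

const GRAVITY: f64 = -0.1; // assumed downward acceleration per tick

fn tick(ship: &mut Ship) {
    ship.vy += GRAVITY;  // gravity pulls the ship down
    ship.x += ship.vx;   // position is updated with velocity
    ship.y += ship.vy;
    ship.o += ship.w;    // rotation is updated with rotational velocity
}

fn landed_safely(ship: &Ship) -> bool {
    // Touched down, with a low vertical speed and close to vertical.
    ship.y <= 0.0 && ship.vy.abs() < 0.5 && ship.o.abs() < 0.1
}
```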

To make sure the program we’re about to evaluate has enough information to make correct decisions, we’ll equip the ship with virtual “sensors”, allowing it to sense various properties about itself. In this case, we’ll allow the ship access to all properties of the simulation, so that includes the aforementioned (x, y), (vx, vy) and so on.

You can imagine a simulation in which there is information that the control program does not have access to. For example, the exact velocity might not be known. In this case however, we want to give our control program the greatest chance of success, and we’ll supply all information.

The information will be made available to the control program in the SensorData structure:

pub struct SensorData {
    pub x:  Number,   // horizontal position
    pub y:  Number,   // vertical position
    pub vx: Number,   // horizontal velocity
    pub vy: Number,   // vertical velocity
    pub o:  Number,   // rotation
    pub w:  Number,   // rotational velocity
    pub fuel: Number, // remaining fuel
}

Grammar

All computer programs have an associated grammar that describes the space of possible programs. It’s important that this space can, in principle, contain programs that solve the problem.

There is a choice of the style of program to allow: we can consider generating either imperative programs or functional programs. The space of functional programs without recursion is very safe, because programs are guaranteed to terminate, and there are no data dependencies between statements (such as variables having to be declared or initialized).

For this problem we’ll opt to model a simple function that every tick takes the SensorData and outputs a single Command, which is the action to perform that round.

We’ll start off even simpler than the fully general program, and simply generate a program that evaluates to a boolean. We’ll couple this boolean to the main thrusters: firing the thrusters if the program returns true, and doing nothing otherwise.

Obviously, this simple program will not allow us to do attitude control, because there’s no way for us to trigger the Left and Right thrusters, but we can always extend the grammar later.

Let’s start by defining the grammar. We want to be able to express arbitrary conditions. Those conditions will eventually contain numerical expressions, and some of the numbers in those expressions may come from the sensors.

This leads us to the following initial grammar:

Condition:

#[derive(Clone)]
pub enum Condition {
    True, False,

    Not(Box<Condition>),
    Or(Box<Condition>, Box<Condition>), And(Box<Condition>, Box<Condition>),

    Less(Box<Expression>, Box<Expression>), LessEqual(Box<Expression>, Box<Expression>),
    Equal(Box<Expression>, Box<Expression>),
    GreaterEqual(Box<Expression>, Box<Expression>), Greater(Box<Expression>, Box<Expression>),
}

impl_astnode!(Condition, 2,
              leaf True(), leaf False(),
              int Not(cond),
              int Or(left, right), int And(left, right),
              int Less(left, right), int LessEqual(left, right),
              int Equal(left, right),
              int GreaterEqual(left, right), int Greater(left, right));

Expression:

#[derive(Clone)]
pub enum Expression {
    Constant(Number),
    Sensor(Box<Sensor>),
    Plus(Box<Expression>, Box<Expression>), Minus(Box<Expression>, Box<Expression>),
    Multiply(Box<Expression>, Box<Expression>), Divide(Box<Expression>, Box<Expression>),
}

impl_astnode!(Expression, 3,
              leaf Constant((data n random_constant)),
              leaf Sensor(sensor),
              int Plus(left, right), int Minus(left, right),
              int Multiply(left, right), int Divide(left, right));

Sensor:

#[derive(Clone)]
pub enum Sensor { X, Y, Vx, Vy, O, W, Fuel, }

impl_astnode!(Sensor, 0,
              leaf X(), leaf Y(), leaf Vx(), leaf Vy(),
              leaf O(), leaf W(), leaf Fuel());

Evaluation

In every step of the simulation, the program will be given a chance to decide whether or not to fire its thrusters. This means we have to evaluate the AST of the program with the current values of the sensors to produce the output for this decision.

In the moonlander framework, this means implementing the trait EvaluateToCommand for the root node of the program. Evaluation of the AST uses the following traits, whose implementations are quite straightforward (refer to the moonlander-ast source if you’re curious):

trait EvaluateToCommand {
    fn evaluate(&self, sensor_data: &SensorData) -> Command;
}

trait BooleanValue {
    fn is_true(&self, sensor_data: &SensorData) -> bool;
}

trait NumericValue {
    fn num_value(&self, sensor_data: &SensorData) -> Number;
}

The magic that connects the abstract and generic AST to the actual simulation is in the implementation of NumericValue for Sensor:

impl NumericValue for Sensor {
    fn num_value(&self, sensor_data: &SensorData) -> Number {
        match *self {
            Sensor::X    => sensor_data.x,
            Sensor::Y    => sensor_data.y,
            Sensor::Vx   => sensor_data.vx,
            // ...etc...
        }
    }
}
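The other implementations recurse into their subtrees in the obvious way. As a stripped-down, self-contained illustration of how BooleanValue and NumericValue compose (deliberately simplified types, not the moonlander-ast code):

```rust
// Simplified stand-ins for illustration: a tiny Expression/Condition
// grammar and the two evaluation traits. The real moonlander-ast
// types have many more variants.
type Number = f64;

struct SensorData { y: Number }

enum Expression {
    Constant(Number),
    SensorY, // simplified: the real code goes through the Sensor enum
}

enum Condition {
    Less(Box<Expression>, Box<Expression>),
    And(Box<Condition>, Box<Condition>),
}

trait NumericValue {
    fn num_value(&self, sensor_data: &SensorData) -> Number;
}

trait BooleanValue {
    fn is_true(&self, sensor_data: &SensorData) -> bool;
}

impl NumericValue for Expression {
    fn num_value(&self, sensor_data: &SensorData) -> Number {
        match *self {
            Expression::Constant(n) => n,
            Expression::SensorY => sensor_data.y, // reads the sensor
        }
    }
}

impl BooleanValue for Condition {
    fn is_true(&self, sensor_data: &SensorData) -> bool {
        match *self {
            // Conditions recurse into their subtrees:
            Condition::Less(ref a, ref b) =>
                a.num_value(sensor_data) < b.num_value(sensor_data),
            Condition::And(ref a, ref b) =>
                a.is_true(sensor_data) && b.is_true(sensor_data),
        }
    }
}
```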

Fitness

Now that we can run the simulation, and can have the program make decisions inside it that affect the simulation, it’s time to define the criteria that make a program successful.

We do this by writing a fitness function. This function assigns a numerical score to the performance of a program in the simulation. The goal of the genetic algorithm will be to maximize the value of the fitness function. It’s therefore helpful if higher values of the fitness function correspond to better actual performance in ways that we care about!

It’s also helpful if the fitness function increases as monotonically as possible towards the better parts of the program space. If the fitness function returns 1,000 when a program finally meets the victory condition, but 0 everywhere else, the intermediate values will not help guide the evolutionary algorithm towards the more successful part of the program space. The evolutionary algorithm will most likely wander around arbitrary parts of the program space, generating and selecting equivalently successful programs (all scoring 0), until by chance it hits upon a winning program. Depending on how rare successful programs are, this might take quite a while.
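To make the contrast concrete, here is a sketch of a sparse fitness function versus a shaped one. The names, weights, and bonus values are illustrative assumptions, not the actual moonlander-ast scoring:

```rust
// Sketch: a sparse fitness gives evolution no gradient to follow,
// while a shaped one rewards getting *closer* to a good landing.
// Weights and the victory bonus are made-up for illustration.
fn sparse_fitness(landed_safely: bool) -> f64 {
    if landed_safely { 1000.0 } else { 0.0 }
}

fn shaped_fitness(final_vy: f64, final_angle: f64, landed_safely: bool) -> f64 {
    let mut score = 0.0;
    score -= final_vy.abs();           // slower touchdown is better
    score -= 10.0 * final_angle.abs(); // closer to vertical is better
    if landed_safely {
        score += 1000.0;               // still reward outright victory
    }
    score
}
```

Under the shaped function, a program that crashes gently and upright still outscores one that slams into the surface, so selection can tell them apart long before any program actually lands.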

In the framework, the fitness function takes a reference to a program and returns any object that implements the Fitness trait. The only notable quality of the Fitness trait is that it carries something called a ScoreCard:

pub trait Fitness: Send {
    fn score_card(&self) -> &ScoreCard;
}

The ScoreCard represents the actual numerical score of a program, which can be composed of many subscores and penalties. This labeled subdivision is useful for human operators to judge whether the scores assigned by the fitness function make sense.

Fitness is a trait so that any actual object may be substituted. This allows us to collect and export extra data during simulation in a struct of our choosing. In the moonlander-ast project, we use this to collect a trace of all frames of the moonlander during each simulation, so that we can export and visualize those traces later.

We collect the fitness like this:

pub fn score_lander<P>(program: &P, _: &mut Rng, mut sensor_data: SensorData,
                       world: &World) -> LandingTrace
    where P: EvaluateToCommand+AstNode
{
    // ...
    while !sensor_data.hit_ground {
        // ...
    }

    LandingTrace {
        trace: trace,
        score_card: ScoreCard::new(vec![
            ("survival_bonus",   3.0 * frames),
            ("fuel_bonus",       (100.0 * total_fuel / frames)),
            // ...more scores here...
        ])
    }
}

Of course, this gives us the score for one run.

We probably want to run the simulation multiple times under randomized conditions. This will help us avoid overfitting the program to a fixed problem, and generate programs that generalize well.

However, randomized conditions introduce the chance that a potentially good program has bad luck of the draw, performs poorly in its simulation, and gets culled from the population.

To reduce the chances of this happening, we’ll run the simulation multiple times and average the scores. For that we use the function score_lander_multi, which takes a number of rounds to evaluate, calls score_lander N times, and combines the results.

pub fn score_lander_multi<P, G>(n: usize, program: &P, rng: &mut Rng,
                                sensor_data_fn: G, world: &World) -> LandingTrace
    where P : EvaluateToCommand+AstNode,
          G : Fn(&mut Rng) -> SensorData
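The averaging idea itself is simple. As a rough self-contained sketch (score_lander_multi combines full ScoreCards; this illustration, with made-up names, just averages plain numbers across runs):

```rust
// Sketch: run a per-run scoring function n times and average the
// results, so one unlucky draw doesn't doom a good program.
// Names are illustrative, not the moonlander-ast API.
fn score_multi<F>(n: usize, mut score_one_run: F) -> f64
    where F: FnMut(usize) -> f64
{
    let total: f64 = (0..n).map(|run| score_one_run(run)).sum();
    total / n as f64
}
```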