L-Systems

While reading Kenneth Stanley’s paper A Taxonomy for Artificial Embryogeny I kept coming across new techniques I want to implement in biogenesis. L-systems seemed the simplest to try out, so here are the results of various grammars:

These don’t really look like ‘organisms’ yet because I was only testing the capabilities and am still developing rules for body generation. The rules take the most work to think through; the code is incredibly simple. Here is the function that applies the rewrite rules:

View function ‘Apply’ here: http://pastebin.com/QMxjzzTy
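The pastebin code isn’t reproduced here, but the core of a rewrite step is small enough to sketch. The names below are my own, not necessarily those in the linked ‘Apply’ function: each symbol with a production is replaced by its right-hand side, and everything else is copied through unchanged.

```python
def apply_rules(axiom, rules):
    """One L-system rewrite step: replace each symbol in `axiom`
    using `rules`; symbols without a production pass through."""
    return "".join(rules.get(symbol, symbol) for symbol in axiom)

# Example: Lindenmayer's classic algae system A -> AB, B -> A.
rules = {"A": "AB", "B": "A"}
print(apply_rules("A", rules))                       # AB
print(apply_rules(apply_rules("A", rules), rules))   # ABA
```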

The derivation code is a little more complex, but still manageable (this is only the derivation of the ‘Fern’ shown above):

View function ‘Derive’ here: http://pastebin.com/qCpHMhqk
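A derivation is just the rewrite step iterated for some number of generations. As a rough sketch (again with my own names), the fern grammar below is the common Barnsley one and may well differ from the rules in the linked ‘Derive’ function:

```python
def derive(axiom, rules, generations):
    """Apply the rewrite rules `generations` times to the axiom."""
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(symbol, symbol) for symbol in s)
    return s

# A common fern grammar (Barnsley fern); the post's actual rules may differ.
# F = draw forward, +/- = turn, [ ] = push/pop turtle state.
fern_rules = {"X": "F+[[X]-X]-F[-FX]+X", "F": "FF"}
print(derive("X", fern_rules, 2))
```

Rendering the resulting string with a turtle-graphics interpretation of F, +, -, [ and ] is what produces the plant-like images.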

The fact that these were so easy to implement will prove helpful when I build a body evolution system: biogenesis gets dynamic, evolved bodies, and children that look like their parents while retaining the ability to mutate.

The End of Summer

A disappointing end, but an end. I’ve only come so far with my Diet ambitions. I mapped out a lot on paper, but only got as far as creating the first draft of the house map. I found a couple of free tilesets that let me create something above the minimal pixel-art standard I have for the first phase of Diet (a traditional RPG knock-off). Here is a screenshot of the initial map:

As for biogenesis, I have been diligently trying to improve the trainer. The fundamental problems with behavioral complexity lie in two places: the limitations of the current version of NEAT, and the lack of interaction organisms have with the environment. This splits the work evenly, and the former problem has driven me to isolate the training system on its own screen, letting me try all kinds of crazy ideas:

  • “plug and play” training scenarios
  • formally defining ‘serial’ and ‘parallel’ training scenarios
  • using CPPNs and HyperNEAT
  • threading out genome training batches, since no genome training depends on another
  • defining high level goals and have ANNs only access those functions
  • indirect encoding
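The threading idea in particular is easy to sketch: since no genome’s evaluation depends on another’s, a training batch can be farmed out to a worker pool. Everything below is illustrative, not biogenesis code; `evaluate` is a placeholder fitness function.

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate(genome):
    """Placeholder fitness function; the real trainer would run
    the genome's network through a training scenario instead."""
    return sum(genome)

def evaluate_population(population, workers=4):
    """Score every genome in parallel; results keep population order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(evaluate, population))

population = [[1, 4], [9], [2, 2, 2]]
print(evaluate_population(population))  # [5, 9, 6]
```

For CPU-bound evaluation in CPython a process pool would parallelize better than threads, but the shape of the code is the same.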

Only a few of these have been tested since my updates to all the sensors (optimization and improvement). The good news is that, by the time the environment and event systems are in place, the new training system will be able to use them to their full potential.

Lastly, my first attempt at a Ludum Dare will be this weekend. The only thing left to prepare is which engine I’m going to use. It’s looking like GameMaker, but I’m hoping to gather the courage and determination to use a more code-driven approach like pygame or XNA. Results/updates will be posted.

biogenesis – New Training System

Most of my time has gone into biogenesis. I created a branch of our first version and gutted the behavior manager, sensors, and training thread to start from scratch. I got rid of NEAT’s INetworkEvaluators and implemented my own similar interface, which I think will allow for the type of training I have in mind.

The population evaluator now runs in ‘Passes’, where a Pass defines two objects: a system of evaluation, and a space in which training takes place. These were designed under the ‘design by contract’ paradigm, which I’m exploring heavily with this new system.
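To make the Pass idea concrete, here is a minimal sketch in Python. All names (`TrainingSpace`, `Evaluator`, `Pass`) are hypothetical, not the actual classes in the biogenesis branch; the asserts stand in for design-by-contract pre- and postconditions.

```python
from abc import ABC, abstractmethod

class TrainingSpace(ABC):
    """The space a Pass trains in (hypothetical interface)."""
    @abstractmethod
    def reset(self): ...

class Evaluator(ABC):
    """The system of evaluation a Pass uses (hypothetical interface)."""
    @abstractmethod
    def score(self, genome, space): ...

class Pass:
    """A Pass pairs one evaluator with one training space.
    Contract: run() requires a non-empty population and
    guarantees one fitness score per genome."""
    def __init__(self, evaluator, space):
        self.evaluator = evaluator
        self.space = space

    def run(self, population):
        assert population, "precondition: population must be non-empty"
        self.space.reset()
        scores = [self.evaluator.score(g, self.space) for g in population]
        assert len(scores) == len(population), "postcondition: one score per genome"
        return scores

# Trivial concrete implementations, just to show the wiring.
class GenomeLengthEvaluator(Evaluator):
    def score(self, genome, space):
        return len(genome)

class NullSpace(TrainingSpace):
    def reset(self):
        pass

p = Pass(GenomeLengthEvaluator(), NullSpace())
print(p.run([[1, 2], [3]]))  # [2, 1]
```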

The problem I hope this solves is setting up a system where the following training methods are easily implemented with my updated TrainingManager class:

  • Remedial Training
  • Small goal oriented training
  • Simultaneous and Sequential sets of above goals
  • Dynamic input/output neurons depending on the trainee
  • ‘Plug and Play’ training systems

This method was designed to reinforce our original design, which included remedial training and training in sequences of small goals, but in a much more abstract, robust, and well-defined way. Several reasons motivated continuing and strengthening this design:

  • The organisms trained quickly and showed potential, but nothing past basic field navigation, evasion, and food gathering ever emerged, and complex networks rarely evolved.
  • The ideas described in http://www.pnas.org/content/102/39/13773.abstract formally addressed our approach.

The aforementioned paper details how spontaneous modularity arises in evolved networks when evaluation includes frequent sub-goal changes (where the sub-goals share some underlying similarity). This is one approach to the problem of a population getting stuck in a local maximum: “because the fitness landscape changes each time that the goal changes, modularly varying goals can help move the population from these local traps.” My hope is that, by creating an abstracted implementation of the TrainingManager that furthers these ideas, the training will naturally develop unique behavior based on each organism’s abilities once we reach the point of greatly varying organisms.
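The flavor of modularly varying goals is easy to sketch. In the paper’s style, the goals share subproblems (here X xor Y and Z xor W) but combine them differently, and the active goal switches every few generations. This is purely illustrative, not the trainer’s actual goal set:

```python
def goal_and(x, y, z, w):
    """Goal 1: (X xor Y) AND (Z xor W)."""
    return (x ^ y) and (z ^ w)

def goal_or(x, y, z, w):
    """Goal 2: (X xor Y) OR (Z xor W) -- same subgoals, different combiner."""
    return (x ^ y) or (z ^ w)

def current_goal(generation, epoch=20):
    """Alternate between the two goals every `epoch` generations,
    so the fitness landscape shifts while the subgoals stay fixed."""
    return goal_and if (generation // epoch) % 2 == 0 else goal_or
```

Because the subgoals (the xors) never change, networks that modularize them keep most of their fitness across every goal switch, which is what pushes populations out of local traps.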

My tests currently run correctly without sensors, so my next goal is to finish the sensor and behavior management systems and integrate them with the trainer.