Context: where we use this

“Creature Cognitive Replay” covers how we code video-game entities, such as Space-Marines or Martian-Primates, to learn and store ‘knowledge’, such as “where are my favorite forest hiding places?”, “is that large blinking-red object blocking my path dangerous?”, or “how do I rebuild my destroyed wigwam?”.  Creating a believable, living world requires us to separate the omniscient simulation or ‘game’ knowledge from what an individual creature should be able to know, such as the difference between castle guards automatically knowing the player’s position versus seeing/hearing an object move and having to investigate what the sound might be.  While every game’s needs are quite different, the focus here is on which approaches have worked best for simulating 100s to 10,000s of unique creatures in a reasonably balanced ecosystem of producers (plants) and consumers (herbivores & carnivores).  The last three approaches we used were given between 8 & 64 MiB of active ‘current-time’ creature-knowledge-memory for PC desktop applications but could easily move to mobile or console spaces.  The ‘replay’ part of the title covers timeline-based thinking so we can ‘rewind/fast-forward’ our simulation and the knowledge will remain accurate/consistent.  In this discussion, we use ‘knowledge’ as a unit of information and a ‘thought’ as a piece of knowledge accessible to an entity which can change over time.

Goals: what we need

  • Speedy Queries: entities can ask questions about their knowledge to make decisions (Note: ‘decisions’ are made using an interpreted script or pre-coded behaviors)
  • Instancing & Breeding: supports ‘generic’ abstractions such as ‘space-tiger’ vs specific tigers and allows cloning, breeding, and other ways to mix any inherited knowledge when new entities are created at runtime.
  • Trending: can store the expected/default/popular thoughts (everybody knows that SpaceCola increases your speed) vs fringe/unique thoughts (SpaceCola decreases my health)
  • Trust: supports ‘certainty’ to help decide how reliable the knowledge is to the entity
  • Origin: stores knowledge from different sources, such as instinct vs communicated vs sensory vs analytical thought-process
  • Rewind: handles recording/playback of each creature’s knowledge over time to permit simulation rewind and cognitive replay

Solutions: how we tried

Technique:  Straightforward-Object-Oriented Programming ( SOOP ) 
Our first attempt at representing “knowledge per entity” used classic Smalltalk/C++ object-oriented programming with a hierarchy of ‘classes of knowledge’ and a vast virtual-function interface to communicate with each class.  Templates were not supported back when we tried this approach, and we had a huge base of files to handle any interactions with the knowledge and ownership-access.  We had to anticipate, ahead of time, any possible query an entity might want to make so we could match the entity type (Gorilla) with the member functions of the knowledge types ( Mammal, Physical, Jungle_Prowler, Nest_Builder ), and the permutations quickly got out of hand.
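To make the permutation problem concrete, here is a minimal sketch of the SOOP style; the class names below match the examples above, but the member functions and their behaviors are hypothetical illustrations, not the original codebase.

```cpp
// Hypothetical SOOP-style sketch: every query an entity might ever make
// had to exist up front as a virtual on the knowledge base class.
#include <string>
#include <vector>

struct Knowledge {
    virtual ~Knowledge() {}
    // One virtual per anticipated query; the real system had hundreds.
    virtual bool Is_dangerous_to(const std::string& /*entity_type*/) const { return false; }
    virtual std::string Nest_material() const { return ""; }
};

struct Jungle_Prowler : Knowledge {
    bool Is_dangerous_to(const std::string& entity_type) const override {
        return entity_type == "Gorilla";   // behavior fixed at compile time
    }
};

struct Nest_Builder : Knowledge {
    std::string Nest_material() const override { return "branches"; }
};

// An entity like 'Gorilla' aggregates many knowledge classes; every new
// (entity type, query) pairing meant new code and a full recompile.
struct Gorilla {
    std::vector<Knowledge*> knowledge;
};
```

Each new question a designer wanted answered forced another virtual onto `Knowledge` and overrides in every relevant subclass, which is exactly how the file count and compile times ballooned.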
Pros: Easy to code & debug using a design doc for each type of entity.  Given fixed entity behaviors, little to no design change during coding, and few iterations of behavior tweaking, this is a safe, reliable choice.  Rewind/fast-forward is also easy given the fixed structures of the SOOP approach, as we can just difference the bits between two structures and run-length/Huffman-encode those deltas.
Cons: Creating new queries required new code, which was expensive in time and couldn’t be done by a designer.  Since the classes were fixed at runtime, we could not create new knowledge-classes or modify their member types in real-time.  Using over 600 knowledge-class files took a long time to compile and made a ginormous executable.  Given the number of classes involved to instantiate most entities, such as a dog or cat or space-tank, it also took a while to create and destroy creatures, which did not scale well to large numbers.  While straightforward OOP is an appealing approach to begin with, maintaining/evolving the code was not well suited to the real-world development demands of sudden/frequent design changes and complex iterations of behavior tweaking.  After missing two milestones, we reworked our schedule to switch to the TAM approach below.

Technique:  Tuple-Arranged-Minds ( TAM )
Inspired by the admirable and rich AI legacy of functional languages like LISP, Prolog, & Scheme, we decided to implement something similar that interfaced with our existing codebase.  Back then, in the mid-1990s, we simply couldn’t find an ‘open-source’ engine to plug in, so we gave each entity a red-black-tree of linked lists.  Each ‘list’ node had a knowledge-type string such as enemies, goal_location, or favorite_weapon and one or more thought-strings such as ‘robots, mosquitoes’, ‘treehouse’, or ‘shotgun, gasgun, sonic-screwdriver’.  We called these nodes a “thought-tuple” and used them for function-style queries, such as returning ‘no’ for “Does_like( self, mosquitoes )”.  Most importantly, we could combine queries such as “Create_Match_List( my.enemies, my.favorite_weapon, my.enemies.favorite_weapon ) Sort_by Distance( enemies.location, my.location )” to find an ordered list of which enemy we should hunt next to stock up on our favorite ammo.  All of the strings used were stored per game level (a simulation-scene in a particular space and time).  We avoided storing any string duplicates by giving each entity’s thought-tuples fixed indices to strings (similar to LISP’s atoms).  On the downside, we never destroyed knowledge, as reference counting was too expensive, and this resulted in some levels growing too large.  We tried building an ‘amnesia-processor’ to search all entities and reduce any unused knowledge or collapse new thought-tuples into old ones, but this simply generated weird behaviors.  To handle the rewind/fast-forward for cognitive-replay, we kept a list of offsets/deltas to each entity’s tree of thought-tuples, but this required updating the tree for each change.
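A rough sketch of the thought-tuple idea, with `std::map` standing in for the original hand-rolled red-black tree; the ‘likes’ knowledge type and the query helper are illustrative, assuming a Does_like-style lookup like the one quoted above.

```cpp
// Hypothetical TAM-style thought-tuples: each knowledge-type string maps
// to a list of thought-string atoms, and queries walk those strings.
#include <algorithm>
#include <map>
#include <string>
#include <vector>

using Thought_Tuples = std::map<std::string, std::vector<std::string>>;

// A "Does_like( self, thing )" style query: scan the entity's 'likes'
// thought-tuple for a matching string.
bool Does_like(const Thought_Tuples& mind, const std::string& thing) {
    auto it = mind.find("likes");
    if (it == mind.end()) return false;
    return std::find(it->second.begin(), it->second.end(), thing) != it->second.end();
}
```

Every query is tree navigation plus linear string comparison, which shows both the flexibility (any sentence a designer writes becomes data) and why it crawled once hundreds of entities ran compound queries per tick.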
Pros: TAM had tremendous flexibility in creating new knowledge.  If the designer could write it in a sentence, it could be stored for an entity.  TAM let designers edit in realtime using text strings and make any sort of query they could consider.  TAM permitted real-time cloning/breeding of entities, and their behaviors diverged based on what happened to them over time.  TAM had easy life-cycle management of code ( around 15 files ) compared to SOOP.
Cons: Sometimes it was difficult for designers to use and understand the results of their queries.  Although quite flexible, it ran terribly slowly for all but the simplest queries.  Modest queries that ran at 1 or 2 Hz for each creature, such as “Distance( target.location, my.location )”, could back up and stall the main loop when walking the thought-trees for 100s of entities ( Pentium Pro & II era ).  While the thought-tuples delivered some of what we needed to simulate large evolving worlds, the red-black-tree w/ pointers to lists of strings had too much overhead in answering queries.

Technique:  Individual_Mind_Packages ( IMP )
IMP was an approach to collapse the string-storage of TAM’s thought-tuples into pre-defined binary types for faster processing and compact space.  Packaging everything into a linear memory block of ‘thoughts’ per entity greatly improved cache-coherence.  Instead of a red-black tree that matched strings and returned linked lists, we created a hash-key for each type of knowledge and used it to retrieve an offset into that linear memory block or ‘mind-package’ to retrieve the knowledge queried.
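The IMP lookup path reduces to a hash, a table probe, and a read from the linear block.  A minimal sketch, assuming a FNV-1a hash and a float-valued thought; all names and sizes here are illustrative, not the original implementation.

```cpp
// Hypothetical IMP-style lookup: hash the knowledge-type string, find a
// byte offset in a table shared per entity *type*, then read the value
// out of the entity's linear 'mind-package' memory block.
#include <cstdint>
#include <cstring>
#include <string>
#include <unordered_map>
#include <vector>

using Offset_Table = std::unordered_map<uint32_t, size_t>;

uint32_t Hash_key(const std::string& s) {       // FNV-1a, for example
    uint32_t h = 2166136261u;
    for (char c : s) { h ^= (uint8_t)c; h *= 16777619u; }
    return h;
}

// A query compiles down to a few instructions: probe, then memcpy.
float Read_thought_f(const std::vector<uint8_t>& mind_package,
                     const Offset_Table& offsets, const std::string& key) {
    float v = 0.0f;
    auto it = offsets.find(Hash_key(key));
    if (it != offsets.end())
        std::memcpy(&v, &mind_package[it->second], sizeof v);
    return v;
}
```

Note the trade-off visible even in the sketch: the offset table is shared per entity type, so every entity of that type must reserve a slot for every thought the table knows about.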
Pros:  IMP translated most queries into only a few instructions which made it quite speedy compared to TAM while retaining the same flexibility of run-time queries.   Designers really needed those run-time queries to rapidly iterate on behaviors as they made game levels.  IMP gave us fast create/destroy mechanics as we only needed to allocate/free a chunk of memory in a pre-allocated pool.
Cons: Although we saved space by compacting thoughts into various binary types, this approach required each unique entity to allocate, upfront, all the possible knowledge it could ever store.  The upfront cost came from needing to re-use the large array of hash-table offsets per entity type, and that ‘large-array’ was usually bigger than the per-entity knowledge itself.  This limited what an entity could learn/store, such as only 5 friends and 3 enemies, and used the same amount of space as if each entity had learned/stored everything it could.  Since all entities started off using ‘defaults’ or a blended combo (chromosomal-style mixing from two parents), most of the knowledge was being duplicated, unlike in the TAM system.  That made changing any defaults cause a large hiccup as we iterated over all entities that used those values.  IMP is hard to parallelize for large (10,000+) numbers of entities doing lots of queries per update, as we can’t lock each unit of knowledge without increasing its size with lock/unlock semantics.

Survivor: who proved best & why

Technique:  Sparse_Individual_Thought_Tables ( SITT ) 
After analyzing several game levels, we realized that around 2/3s of our entity minds were still using defaults, as many entities would converge on similar thoughts or never really had an experience that changed their initial thoughts.  This led us to extract each entity’s thoughts into database-style tables for each knowledge type.  This compacted per-entity storage by using the ‘atom’ technique from TAM (and indirectly from LISP) for all of the default/inherited/trending information while still giving us unique, changing thoughts.  Now each entity has its own skip-tree of thought-atoms that reference the knowledge as well as its access/ownership traits in the event of any changes.  Cognitive replay becomes much more efficient, as previous thought values are likely to be stored in the same database table, acting like dictionary compression before we even begin to compare the thought deltas.
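The sparse default-vs-override split can be sketched as follows; `std::map` stands in for the per-entity skip-tree, and all names are illustrative.  The point is that the ~2/3 of entities still on defaults store nothing at all.

```cpp
// Hypothetical SITT-style lookup: values live once in a shared,
// database-style table per knowledge type; an entity stores an atom
// only for thoughts that have diverged from the default.
#include <cstdint>
#include <map>
#include <vector>

using Atom = uint32_t;              // index into a shared value table

struct Thought_Table {
    std::vector<float> values;      // shared per-knowledge-type table
    Atom default_atom = 0;          // the value every entity starts with
};

float Get_thought(const Thought_Table& table,
                  const std::map<uint32_t, Atom>& overrides,  // entity's sparse atoms
                  uint32_t knowledge_id) {
    auto it = overrides.find(knowledge_id);
    Atom a = (it != overrides.end()) ? it->second : table.default_atom;
    return table.values[a];
}
```

Changing a default now touches one table entry instead of iterating every entity, and replay diffs compare small atom indices rather than raw values.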
Pros: Having removed the large linear memory block per entity and the large hash-tables per entity type, we can efficiently run on multiple threads by using atomic compare-and-exchange to lock the database knowledge instead of the entire entity.  That means that for a single entity at one time there can be a physics thread updating forces, a thinking thread determining a new set of goal-priorities, a path-thread planning obstacle avoidance, a communication thread interpreting (or misinterpreting) dialog, and a metrics-thread tallying up statistics that this entity remembers.
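A minimal sketch of that per-knowledge locking, using `std::atomic` compare-and-exchange (the original predates C++11 atomics, so treat the field names and granularity as illustrative assumptions).

```cpp
// Lock one knowledge slot, not the whole entity: threads contend per
// thought, so the physics/thinking/path/dialog/metrics threads above
// only collide when touching the exact same piece of knowledge.
#include <atomic>
#include <cstdint>

struct Locked_Knowledge {
    std::atomic<uint32_t> owner{0};   // 0 = unlocked, else claiming thread id
    float value = 0.0f;
};

// Returns true and writes the value only if the slot was free.
bool Try_update(Locked_Knowledge& k, uint32_t thread_id, float new_value) {
    uint32_t expected = 0;
    if (!k.owner.compare_exchange_strong(expected, thread_id))
        return false;                 // another thread holds this thought
    k.value = new_value;
    k.owner.store(0);                 // release the slot
    return true;
}
```

A thread that loses the race simply retries or moves on to its next entity, which keeps the update loop wait-free from the simulation's point of view.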
Cons: For behaviors that access a lot of different thoughts for a single entity, the cache-coherence is poor as many different database tables are loaded.  Requires a lengthy processing step to separate the thought-database tables for batches of ‘game-levels’ or simulation-scenarios.

Future: Work on ecologies with millions of creatures and see how multi-CPU-cores or GPU (OpenCL?) processing does with complicated behaviors and frequent changes to entity thoughts.

 

Survivor_Datastructs

//===
//PSEUDOCODE
//===

//THOUGHT_ACCESS
enum Thought_Access_t
{
    Thought_Access__Universal_k,
    //fixed, accessible by all,
    //generally used for math constants or universal enumerations like
    //the 6 3D cardinal directions (up/down/left/right/ahead/back),
    //names of places, or the 7 colors of the rainbow, etc.

    Thought_Access__Entity_Type_k,
    //thought stored specific to the "abstract description" of the entity,
    //such as all cats, or all siamese-cats, etc.
    //This is where you'd store knowledge that all cats would
    //have, such as 'love mice', 'hate dogs', or 'fear vacuum cleaners'

    Thought_Access__Shared_k,
    //thoughts shared amongst a group, such as 'who is the Orc leader we follow',
    //our 'marching-song', or our 'slogan'.
    //Any change here happens for the entire group

    Thought_Access__Unique_k,
    //thoughts that can be changed for a unique individual,
    //such as favorite_food, suspicion_of_traitor_ID, or hunger_level
};

//===
//THOUGHT_SOURCE
enum Thought_Source_t
{
    Thought_Source__Instinct_k,
    //Inherited instinct,
    //or taught/told by a trusted person so long ago it is equivalent to instinct

    Thought_Source__Sensed_k, //Sensory experience, as in seen, heard, tasted, etc.

    Thought_Source__Communicated_k, //Thought came from dialog or body-language communication

    Thought_Source__Thinking_k,
    //Thought came from examining existing thoughts
    //and generating new ones using code
};

//===
//KNOWLEDGE_FORM
enum Knowledge_Form_t
{
    Form_Number_k,     //Quantity
    Form_Name_k,       //Reference or ID#
    Form_Style_k,      //Quality
    Form_Collection_k, //Aggregate of more information units (deeper type)
};

//===
//KNOWLEDGE_LOCATION
struct Knowledge_Location_t
{
    Database_Table table;
    Database_Key Begin_key; //Begin/End represent a range of keys that can be used
    Database_Key End_key;
};

//===
//ENTITY_THOUGHT
//Each Entity has an array of "Thoughts" for the types of Knowledge it
//can understand
struct Entity_Thought_t
{
    Thought_Access_t access;
    Thought_Source_t source;
    Knowledge_Location_t loc;
    Percent_t Certainty_f;
    Time_t time; //when this thought occurs; helps to interpolate between past and future thoughts
    Linked_list_t Prev_thought; //reference to previous thought
    Linked_list_t Next_thought; //reference to next thought
};

//===
//KNOWLEDGE_UNIT
struct Knowledge_Unit_t
{
    Entity_t owner;
    Knowledge_Form_t form;
    Information_Unit_t value; //int, float, unsigned_int index, etc.
};

//===
//END

How do we code what these beasts know...