Building the server

Now that we’re officially out of stealth mode, I can talk a bit about some of the things I’m working on at Namaste.

I’m currently building the AI architecture for the project and one aspect of this is the need for an AI server. Building servers isn’t particularly new to me, although it has been a while since I worked on network code. But I thought I’d blog about the various frameworks and tools I’m using to build out the server and more importantly to help me debug it.

Debugging AI is hard enough when you can throw debug data around within a single exe. But the fact that this has to run as a server adds an interesting wrinkle when it comes to debugging entity behavior.

The server itself is built using C++ and uses the POCO framework as a method of enabling it to run on both Linux as a daemon and on Windows as a service. One of the benefits of using POCO is that it comes with a set of libraries that allow you to run an embedded web server, which is really useful for debugging. But we’ll get to that in a second.

Now POCO comes with some pretty well-specified logging functionality, which I’m using to create debug logs for the AI. Debug logging is the staple of most AI debugging techniques and can really help, especially when coupled with useful inspection tools. You can see a brilliant example of such a tool in an interview with Mika Vehkala, in which Mika shows a tool set built at IO Interactive that takes a sort of timeline approach to reviewing logs.

I think debug logs are perhaps the most powerful approach, but I tend to think a bit more visually. What I prefer to do is actually show in-game what the AI is doing, in terms of movement selection, path generation, focus of interest, etc. Unfortunately that simply isn’t available to me right now, because I don’t have a direct connection to the client side: all AI communication is done with a game server.

What I needed was some method of displaying debug information, such as a dump of the current behavior tree of a character, plus a representation of the world it is aware of, along with positional and other information.

The great thing about networked code is that you start thinking in terms of distributed systems a lot more. What I needed was a protocol and a client that could attach to the server, pull the information I needed, and display it. I already had the embedded web server, so I thought why not try that?

Using a web client as debug viewer

I’d played around with jQuery and JavaScript about a year ago, when I was looking at using WebKit as a UI for my indie games. At the time I rejected it because of its lack of animation speed, but the techniques of using JavaScript to create a nice user interface have stuck with me. I decided to try a JavaScript-based browser client that would somehow pull data from the web server and display the debug information I needed.

So the first thing I needed to do was pull the data from the server. Luckily, that’s actually quite a simple process. All you really need to do is have a factory class derived from HTTPRequestHandlerFactory that you pass to the web server framework. When you call a specific URL, the request goes to the factory, which then creates a request handler object to handle the request based on the parameters you passed in the URL.
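To make the pattern concrete, here is a minimal sketch of that URL-to-handler dispatch in plain C++. Note this is just the shape of the factory pattern, not POCO’s actual classes; the struct and route names are illustrative.

```cpp
#include <functional>
#include <map>
#include <string>

// A handler takes the full URI (so it can read query parameters) and
// returns the response body. In POCO the factory instead returns a
// HTTPRequestHandler object; a plain function keeps the sketch small.
using Handler = std::function<std::string(const std::string&)>;

struct DebugHandlerFactory {
    std::map<std::string, Handler> routes;

    std::string handle(const std::string& uri) {
        // strip any query parameters before matching the path
        std::string path = uri.substr(0, uri.find('?'));
        auto it = routes.find(path);
        if (it == routes.end())
            return "404 not found";
        return it->second(uri);
    }
};
```

With a map like this, adding a new debug endpoint is just registering another path and handler pair.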

I created a number of methods on the server to respond to the calls I needed. The root “/” call simply presented an interface to the other methods, such as listing the entities for a world, viewing the current running world, etc. With a few test handlers in place, I had to get the data back to the client in a manner that would work for the JavaScript end. The easiest method was to simply return HTML content that displayed the appropriate href links. This worked OK, but it wasn’t very pretty, and I still needed some method of having JavaScript actually understand the object data in a more structured way.
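For the root “/” handler, the response can be as simple as a string of links. A hedged sketch, where the endpoint names beyond “/listentities” are my own placeholders:

```cpp
#include <string>
#include <utility>
#include <vector>

// Build a bare-bones HTML index page linking to the debug endpoints.
// Each pair is (URL path, link text).
std::string buildIndexPage(
    const std::vector<std::pair<std::string, std::string>>& endpoints) {
    std::string html = "<html><body><ul>";
    for (const auto& e : endpoints)
        html += "<li><a href=\"" + e.first + "\">" + e.second + "</a></li>";
    html += "</ul></body></html>";
    return html;
}
```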

Choosing a protocol

I could have used XML data as my protocol, as libraries for that are available both in the POCO framework and my own AI framework, but I chose instead to use the JSON format. I used libjson to encode the data (in this case a list of entities) and passed that out as the response to a “/listentities” URL hit. Unfortunately I hit a snag in that the browser simply wouldn’t let me receive the data. Essentially I was hitting a security feature in the browser, the same-origin policy, which stops a page from reading data from a different origin than the one it was originally loaded from.
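The payload itself is just a “Gobs” array with positional fields, matching what the client-side code reads. A minimal sketch that builds the JSON by hand (in the real server libjson does the encoding; the Entity struct here is illustrative):

```cpp
#include <sstream>
#include <string>
#include <vector>

// Illustrative entity record: id plus the xpos/zpos fields the
// browser client reads out of data.Gobs.
struct Entity { int id; float xpos; float zpos; };

// Encode the entity list as {"Gobs":[{...},{...}]}.
std::string encodeEntities(const std::vector<Entity>& ents) {
    std::ostringstream out;
    out << "{\"Gobs\":[";
    for (size_t i = 0; i < ents.size(); ++i) {
        if (i) out << ",";
        out << "{\"id\":" << ents[i].id
            << ",\"xpos\":" << ents[i].xpos
            << ",\"zpos\":" << ents[i].zpos << "}";
    }
    out << "]}";
    return out.str();
}
```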

I had loaded the browser client UI from a local HTML file, which then references local JavaScript files, as I didn’t want to have to deal with sending those files via the web server embedded in the AI server. After talking with someone else at Namaste, I learnt that you can use JSONP to get around this particular issue. In a nutshell, JSONP is a way to “wrap” JSON data in a script call, which gets parsed on the client as though it were a local script call. If the function name wrapped around the data matches a script function defined on the client, the browser calls that function and passes the data to it.
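On the server side, the JSONP part amounts to one string operation: wrap the JSON payload in a call to the callback the client expects. A sketch (the callback name is whatever the client defines):

```cpp
#include <string>

// Wrap a JSON payload in a script call, e.g.
// wrapJsonp("parseData", "{...}") -> "parseData({...});"
// The client's injected <script> tag then executes this as
// ordinary JavaScript, invoking the named callback with the data.
std::string wrapJsonp(const std::string& callback, const std::string& json) {
    return callback + "(" + json + ");";
}
```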

So the flow is this: when the page loads in the browser, you attach a script call to a button. When the button is clicked, it creates a new script element in the browser DOM whose URL pulls the JSONP data; the browser executes the returned script, which calls a script function as the callback. The callback runs and then removes the element from the DOM. Once the JSON data has been parsed by the receiving function, you can do pretty much whatever JavaScript you want with it. Originally I had some Three.js and WebGL code to render the entities (which is nicer because then I can add different camera views and the rendering is far more responsive).

Here is how the JavaScript code looks:

function parseData(data)
{
    // remove the script element so we can recreate a new one
    document.body.removeChild(scriptElement);

    var nItems = data.Gobs.length;
    //alert(data); // uncomment this for debug

    // draw entity positions
    for( var i = 0; i < nItems; i++ )
    {
        var circle = window.paper.circle(data.Gobs[i].xpos * 1 + 320,
                                         data.Gobs[i].zpos * 1 + 240, 4);
    }
}

$(document).ready(function()
{
    // create raphael canvas to draw on
    window.paper = Raphael(document.getElementById("showdata"), 640, 480);

    // attach a jQuery live event to the button
    $('#getdata-button').live('click', function()
    {
        // use a JSONP hack to get the json data in a viable format
        scriptElement = document.createElement("SCRIPT");
        scriptElement.type = "text/javascript";
        scriptElement.src = "http://localhost:8080/jsonents";
        document.body.appendChild(scriptElement);
    });
});

What is happening is that in the handler for the document “ready” event from jQuery, we first create the Raphael object to draw with, then set up a handler for a click event on a jQuery-selected button. That handler hits a URL on the server, and the server responds with JSON-formatted data wrapped in a call to the callback method. So in this case it might return something like:

parseData( json formatted data here );

You simply ensure that the server wraps the data in a call with the appropriate function name. Essentially, the client runs the script as though it were native to the client.

What next?

My next big task is to add support for rendering the navmesh of the world in the browser and add support for rendering the paths and avoidance vectors of selected agents. But that is for another sprint. I think the technique of using a browser for debugging the AI will be especially useful as our designers play with the AI representations. There is of course a lot of interface to build up and as we’re still a small team, I’m having to balance working on debug functionality with actually delivering the AI behaviour.

In general I’d strongly suggest considering the use of embedded web servers and browser-based code to enhance your toolset. I know that quite a few people are already doing this, and it is relatively easy, using libraries like mongoose, to support a web interface. I expect that as WebGL matures, more people will use this powerful approach. I really want to add some Flash-based UI to the mix, as I think that Mika’s timeline-based approach would work pretty well via Flash. Plus I think that at some point we’ll need to actually add live updating to the server via this interface, and maybe even live editing of the behaviour trees.