Building the server
Now that we’re officially out of stealth mode, I can talk a bit about some of the things I’m working on at Namaste.
I’m currently building the AI architecture for the project and one aspect of this is the need for an AI server. Building servers isn’t particularly new to me, although it has been a while since I worked on network code. But I thought I’d blog about the various frameworks and tools I’m using to build out the server and more importantly to help me debug it.
Debugging AI is hard enough when you can throw debug data around within a single exe. The fact that this has to be a server adds an interesting extra issue for debugging entity behaviour.
The server itself is built using C++ and uses the POCO framework so that it can run both on Linux as a daemon and on Windows as a service. One of the benefits of using POCO is that it comes with a set of libraries that allow you to run an embedded web server, which is really useful for debugging. But we'll get to that in a second.
Now POCO comes with some pretty well-specified logging functionality, which I'm using to create debug logs for the AI. Debug logging is the staple of most AI debugging techniques and can really help, especially when coupled with useful inspection tools. You can see a brilliant example of such a tool in an interview with Mika Vehkala over at aigamedev.com, where he shows a tool set built at IO Interactive that takes a timeline approach to reviewing logs.
I think debug logs are perhaps the most powerful approach, but I tend to think a bit more visually. What I prefer to do is actually show in-game what the AI is doing, in terms of movement selection, path generation, focus of interest and so on. Unfortunately that simply isn't available to me right now, because I don't have a direct connection to the client side; all AI communication goes through a game server.
What I needed was some method of displaying debug information, such as a dump of the current behaviour tree of a character, plus a representation of the world it is aware of, along with positional and other information.
The great thing about networked code is that you start thinking in terms of distributed systems a lot more. What I needed was a protocol and a client that could attach to the server, pull the information I needed and display it. I already had the embedded web server, so I thought why not try that?
Using a web client as debug viewer
So the first thing I needed to do was pull the data from the server. Luckily, that's actually quite a simple process. All you really need to do is derive a factory from HTTPRequestHandlerFactory and pass it to the web server framework. When you call a specific URL, the framework calls the factory, which creates a request handler object to handle the request based on the URL and the parameters you passed in it.
Choosing a protocol
I could have used XML as my protocol, as libraries for that are available both in the POCO framework and in my own AI framework, but I chose JSON instead. I used libjson to encode the data (in this case a list of entities) and passed that out as the response to a “/listentities” URL hit. Unfortunately I hit a snag: the browser simply wouldn't let me receive the data. Essentially I was hitting the browser's same-origin policy, a security feature that stops a page from reading data from a different origin than the one it was originally loaded from.
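Before getting to the workaround, it's worth pinning down the shape of that payload. The server really builds it with libjson, but a hand-rolled encoder shows the structure the client script expects (the field names Gobs, xpos and zpos are the ones the script reads):

```cpp
#include <sstream>
#include <string>
#include <vector>

struct Entity { float xpos; float zpos; };

// Hand-rolled for illustration only; the server actually uses libjson.
std::string entitiesToJson(const std::vector<Entity>& entities)
{
    std::ostringstream out;
    out << "{\"Gobs\":[";
    for (std::size_t i = 0; i < entities.size(); ++i)
    {
        if (i > 0) out << ",";
        out << "{\"xpos\":" << entities[i].xpos
            << ",\"zpos\":" << entities[i].zpos << "}";
    }
    out << "]}";
    return out.str();
}
```

A hit on the entity-list URL then just returns this string with an application/json content type.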
The workaround is the JSONP hack shown in this client-side script (element ids here are illustrative):

$(document).ready(function() {
    // create raphael canvas to draw on
    window.paper = Raphael(document.getElementById("showdata"), 640, 480);

    //attach a jQuery live event to the button
    $("#refresh").live("click", function() {
        // use a JSONP hack to get the json data in a viable format
        // remove the script element so we can recreate a new one
        $("#jsonreq").remove();
        var scriptElement = document.createElement("SCRIPT");
        scriptElement.id = "jsonreq";
        scriptElement.src = "http://localhost:8080/jsonents";
        document.body.appendChild(scriptElement);
    });
});

// callback invoked by the JSONP response from the server
function parseData(data)
{
    //alert(data); //uncomment this for debug
    var nItems = data.Gobs.length;
    // draw entities positions
    for (var i = 0; i < nItems; i++)
        var circle = window.paper.circle(data.Gobs[i].xpos * 1 + 320, data.Gobs[i].zpos * 1 + 240, 4);
}
What is happening is that in the handler for the document “ready” event from jQuery, we first create the Raphael object to draw with, then set up a handler for a click event on a jQuery-based link. That link requests a URL on the server, and the server responds with JSON data wrapped in a call to the callback method. So in this case it might return something like:
parseData( json formatted data here );
You simply ensure that the server wraps the data in the appropriate callback name. Essentially the client runs the returned script as though it were native to the page.
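Server-side, that wrapping is trivial; a sketch (the callback name is whatever the page registered, parseData in this case, and the response should go out with a javascript content type):

```cpp
#include <string>

// Wrap a JSON payload in the callback call the requesting page expects.
std::string wrapJsonp(const std::string& callback, const std::string& json)
{
    return callback + "(" + json + ");";
}
```

So wrapJsonp("parseData", "{\"Gobs\":[]}") yields the parseData({"Gobs":[]}); response shown above.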
My next big task is to add support for rendering the navmesh of the world in the browser and add support for rendering the paths and avoidance vectors of selected agents. But that is for another sprint. I think the technique of using a browser for debugging the AI will be especially useful as our designers play with the AI representations. There is of course a lot of interface to build up and as we’re still a small team, I’m having to balance working on debug functionality with actually delivering the AI behaviour.
In general I’d strongly suggest considering the use of embedded webservers and browser based code to enhance your toolset. I know that quite a few people are already doing this and it is a relatively easy using libraries like mongoose to support a web interface. I expect that as webgl matures it will mean that more people use this powerful approach. I really want to add some flash based UI to the mix as I think that Mika’s timeline based approach would work pretty well via flash. Plus I think that at some point we’ll need to actually add live updating to the server via this interface and maybe even live editing of the behaviour trees.