Lisa Brown’s recent article “Students, Version Control!” got me thinking about what I’d really like to pass on to students. The result is this – a summary of the distinction between what I’ve called “Traditional AI”, which you’ll probably learn about on a typical Computer Science degree course, and “Game AI”, which you probably won’t.


The guys writing AI for games are clueless. The technology they use is old and out of date. I’m going to show them how it should be done.

It’s not an uncommon view amongst people who’ve had a bit of background in academic AI. In fact, once upon a time I was one of these people. I’d read Russell and Norvig’s “Artificial Intelligence: A Modern Approach”, and I had a decent understanding of Genetic Algorithms, Neural Nets, Robotic Control Systems and everything else that a couple of degrees in (for the sake of distinction let’s call it) Traditional AI might teach. I was quite comfortable in the certain knowledge that you could easily write better AI for games, and that this was all that stood between where the industry was then and the most awesome games ever.

Well quite.

Fortunately, I had some great people around to take me back to school. This article aims to pass on the insights they shared with me, so that budding young AI students don’t fall into the same traps that I did – traps I continue to see throughout a chunk of academia.

The Hubris of the “Expert”

This is as close to knowing how to run a farm as I'd got before taking one on as a client.

Once upon a time I worked around the UK as an IT consultant, and one of the reasons I had a lot of success, I think, is that I didn’t make assumptions when working with clients. I don’t know how to pack confectionery – tell me. I don’t know how to run a farm – show me. I’ve never sold or bought a house – explain the process. That approach seems far rarer than it should be; all too often consultants (and even mid-level management) think they know what happens at ground level, and overlook significant parts of the process or misunderstand what is actually required.

There’s a great example I love to bring up about my local high school. The old building was a dilapidated, rundown place, so the council decided to build an entirely new structure for the school on a new lot. Architects were commissioned and a lovely design was created that looked fantastic on paper – huge, high glass walls along the frontage – the full treatment. Nobody at any point in the design process considered that the building was going to be full of teenage miscreants, nor did anyone talk to the cleaning staff about how exactly they might keep what is essentially a big glass wall clean. You can guess how the story ends.

Unfortunately, this kind of attitude extends throughout academia, even to the academics acting as consultants. I was in a meeting not too long ago listening to a progress report on a collaborative project – the new technique worked very well and in simulation showed a significant drop in costs and an increase in performance for the industrial partner. The only caveat, it was mentioned at the end, was that the technique required Equipment A. The company used Equipment B, which couldn’t handle the kind of process that was required. The comment from the lead researcher? “Well, they’ll just have to put Equipment A into all their locations then”.

Knowing is Half the Battle

As soon as I realised that I was falling into the same patterns with my view of Game AI, I decided to learn. I talked to as many people in industry as I could and visited as many studios as would give me the time of day. What I wanted to know was how they dealt with AI and what their biggest challenges were.

What I found out was enlightening. It turned out that in general, most companies do use old AI technology that isn’t what academics consider state of the art. In fact, a few years ago I attended a seminar in London where it was suggested that 90% of Game AI still came down to A* – none of the industry people in the room seemed inclined to argue with that. The situation is getting better now, but as an example, the most frequently used technique in a specific sub-field of Game AI (“Planning”) was lifted almost directly from an approach created in 1973. We might be diversifying our techniques now, but as academics see it, Game AI is still largely in the dark ages.
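For readers who haven’t met it, A* is just best-first graph search guided by an admissible heuristic – which is exactly why it’s cheap enough to dominate Game AI. Here’s a minimal sketch on a 4-connected grid using Manhattan distance as the heuristic; the grid representation and function name are my own illustration, not anyone’s shipping code:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid; cells containing 1 are blocked.
    Manhattan distance is an admissible heuristic for unit-cost moves."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_heap = [(h(start), start)]   # entries are (f-score, position)
    came_from = {}
    g = {start: 0}                    # cheapest known cost to each cell
    while open_heap:
        _, current = heapq.heappop(open_heap)
        if current == goal:
            # Reconstruct the path by walking parent links backwards.
            path = [current]
            while current in came_from:
                current = came_from[current]
                path.append(current)
            return path[::-1]
        x, y = current
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if 0 <= nx < len(grid) and 0 <= ny < len(grid[0]) and not grid[nx][ny]:
                tentative = g[current] + 1
                if tentative < g.get(nxt, float("inf")):
                    came_from[nxt] = current
                    g[nxt] = tentative
                    heapq.heappush(open_heap, (tentative + h(nxt), nxt))
    return None  # goal unreachable
```

On real game maps the graph is usually a navmesh rather than a grid, but the core loop is exactly this.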

Battlefield 3 - The graphics are shinier than ever, and need more processing time!

However, there’s a flip side to the coin. One of the most profound realisations for me was just how indescribably different in scale Game AI is from Traditional AI. Most AI students are used to seeing techniques showcased in desktop applications. The technique will likely be the primary thing their workstation is doing, and if there is a visualisation at all, it will generally be something simplistic – 2D drawings, perhaps. With all that power available, Traditional AI can do quite a lot, but it misses the significant fact that in Game AI, most of your processing time is being spent elsewhere. Per frame, an AI routine might get 1–2ms of execution, and whilst you can do both jiggery and pokery to spread decisions across several frames, you’re still in a massively constrained environment compared to where academia likes to play. To go back to planning (it’s my speciality, I apologise), the academic community holds a biennial competition in which AI planning systems compete to see who can solve problems better. For each distinct problem, each system is given 30 minutes of processor time – this year a multi-core track provided even longer for multithreaded systems. Not only that, but 7GB of RAM was made available. Now, I have to admit (in order to placate my academic friends who might read this) that these are upper bounds, but it gives you a sense of the scale of the problems being tackled in some areas of academia, and a stark contrast with the kind of approach you need to make things work in games.
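To make the “several frames of execution” idea concrete, here’s one common shape it takes: an update loop that drains a queue of small AI work items until a per-frame millisecond budget runs out, leaving the rest for the next frame. This is a sketch of the general pattern with invented names and budget values, not any particular engine’s scheduler:

```python
import time
from collections import deque

class TimeSlicedAI:
    """Spread AI work across frames: each frame, run queued tasks until
    a millisecond budget is spent, then yield back to the renderer.
    (Illustrative structure only; the names are hypothetical.)"""
    def __init__(self, budget_ms=1.0):
        self.budget_ms = budget_ms
        self.tasks = deque()

    def schedule(self, task):
        # A task is a zero-argument callable doing one small step of work.
        self.tasks.append(task)

    def update(self):
        deadline = time.perf_counter() + self.budget_ms / 1000.0
        done = 0
        while self.tasks and time.perf_counter() < deadline:
            self.tasks.popleft()()
            done += 1
        return done  # anything left over waits for the next frame
```

The trick in practice is breaking expensive jobs – a long path query, a plan search – into small resumable steps so that no single task can blow the frame budget on its own.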

In short, as much as many academics and students might consider Game AI to be in the dark ages, there’s a reason it’s that way: that’s the level of processing power available to AI routines! Game AI just doesn’t have the kind of power that many more “state of the art” techniques require. This situation is improving as we move forward, but it will never catch up with a dedicated system like a planning agent, for the simple reason that games will always need to spend some of the available power doing things a dedicated AI system won’t. Until Traditional AI is “solved”, Game AI will by necessity always lag behind.

Some Smoke and Also, Some Mirrors

One of the most startling revelations however, was also one of the most obvious. Game developers don’t get paid to be clever. Even more obvious – Studios survive by selling games. Unfortunately (or fortunately, in some cases…) the strength of your AI system does not pay the bills. I wrote previously about how complex AI systems can sometimes backfire, and there’s a second aspect to that – lost time. Time spent on unnecessary work equates to wasted money, so if a quick hack gets you most of the way to where you need to be, it doesn’t make sense (from a business perspective at least) to spend 3 years researching a technique that is precisely correct. Good enough probably is good enough.

However, talking of selling games: they only sell if they’re entertaining. Again, it’s kind of obvious, but there it is. What academics might term “good” AI probably isn’t entertaining. If we had the horsepower to make everything solvable using minimax or some other exhaustive optimal algorithm, it would make for a very dull experience in which the computer plays a perfect game consistently. So Game AI needs to be beatable, so that players don’t get incredibly frustrated with our games. At the same time, there’s an element of realism required. Soldiers need to act like soldiers, and whilst an “optimal” solution might see a soldier dive from a fifth-storey window, taking a 90% hit to health in doing so, in order to kill an entire squad, that isn’t representative of how a real soldier would act. Whilst that means a soldier character shouldn’t act like the Rambo example suggests, it also means they shouldn’t be tactical geniuses either. One of the best examples of this I can give is from Batman: Arkham Asylum, in which the player’s Batman character fights groups of thugs. The thugs surround Batman and then take it in turns to try to beat him up. If it sounds easy to implement, it is. If it sounds sub-optimal, it is. If it sounds like something the player could easily deal with, it is. Where Traditional AI would see the problem as one of defeating Batman, Game AI instead aims to create a world in which the player is immersed in being Batman, and through that to create an engaging and entertaining experience.
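A common way that kind of turn-taking gets built is a shared “attack token”: only the enemy currently holding the token may attack, while the rest circle and posture. This is a generic sketch of the pattern (Arkham Asylum’s actual implementation isn’t public, and the names here are mine):

```python
class AttackToken:
    """Gate melee attacks so only one enemy swings at a time; everyone
    else waits their turn. A generic sketch of the turn-taking pattern."""
    def __init__(self):
        self.holder = None

    def request(self, attacker):
        # Grant the token only if nobody currently holds it.
        if self.holder is None:
            self.holder = attacker
            return True
        return False

    def release(self, attacker):
        # Only the current holder can hand the token back.
        if self.holder is attacker:
            self.holder = None
```

Each thug asks for the token before starting an attack animation and releases it when the swing lands or is interrupted; tuning how long a thug holds it is a surprisingly effective difficulty knob.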

BF Bad Company 2 - It's possible these guys aren't even trying to hit you.

Finally, it’s OK to cheat. This, I think, is the biggest disconnect between Traditional AI and Game AI, but again it speaks to this concept of entertainment. The player doesn’t care how you make things happen in the game, and they don’t really care how you make the game entertaining for them. They want their soldiers to act like soldiers, but that doesn’t mean they care about how that works behind the scenes. Mikael Hedberg gave a great presentation at the Paris Game AI Conference 2010 that included some of the ways the AI characters “cheat” in Battlefield: Bad Company 2. In one set piece, the player stumbles on an ambush while in a vehicle. To create a cinematic experience, the ambushing soldiers fire away, as do their tanks, but for the most part they are aiming above the player’s head – they physically can’t kill anything. That’s an example of an AI that “cheats” to dumb itself down; we’ve all most likely experienced AI that cheats to improve its performance as well – hence all our anecdotes of impossible instant headshots from ridiculous range and RTS opponents that seem to know our every move. Things are definitely improving in this regard – “positive” cheating is now much more subtle, and the extra knowledge gained can even be used to disadvantage the AI, for example letting the player feel good that they stocked up on a particular item the enemy character doesn’t seem to have “thought” to defend against – the player need never know this was a deliberate choice to let them win. The point is that the game and its entertainment value are king – not the design philosophy of the AI system or its strict adherence to a realistic world model. It doesn’t matter if it is all smoke and mirrors, provided it looks right!
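The “aiming above the player’s head” trick can be as simple as adding a vertical offset to the aim point whenever a shooter is flagged as part of a scripted set piece. A sketch of the idea – the function name, flag, and offset values are invented for illustration, not anything from Bad Company 2’s code:

```python
import random

def pick_aim_point(target_pos, set_piece=False, miss_height=2.0):
    """Choose where an AI shooter aims. In a scripted set piece the shot
    is deliberately lifted well above the target's head, so the scene
    looks dangerous but physically cannot kill the player.
    (Hypothetical names and values, purely illustrative.)"""
    x, y, z = target_pos
    if set_piece:
        # Lift the aim point above the head, with a little jitter so
        # the tracer fire doesn't all converge on one obvious spot.
        return (x, y, z + miss_height + random.uniform(0.0, 0.5))
    return (x, y, z)
```

The same hook works in the other direction: a “cheating up” sniper might shrink its aim error instead of enlarging it.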

The upshot is that Traditional AI and Game AI have very different motivations, and they broadly (certain areas such as UAVs notwithstanding) work in very different environments with different constraints. Traditional AI aims to be smart. Game AI just has to look smart, and typically it doesn’t matter how smart it truly is. That isn’t to say there’s no overlap between the two aims, of course, but it is a different ballgame. It’s important to understand the distinction before you try to claim knowledge of both.

Game AI in the 21st Century

With all of this said, though, it’s an incredibly exciting time to have a good grounding in AI and a passion for the games industry. We’re finally reaching a point where hardware isn’t proving to be as major a bottleneck as it has been in the past. Left 4 Dead showed just a small example of what AI can do in contexts other than “making characters do things”, and already we’re seeing AI Directors become a significant trend across a lot of titles. Academic projects such as Infinite Mario and Galactic Arms Race showcase that we can use AI for content generation as well – not just trees but whole levels and weapon systems, generated not only randomly but specifically tailored to a player’s taste and skill level at runtime. Studios seem increasingly willing to pay attention to AI, to take a punt on something novel and see if they can’t come up with something a bit less mundane.
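To give a flavour of what an AI Director actually does, here’s a toy pacing controller in that spirit: accumulate a rough “intensity” score from player events, let it decay over time, and only unleash the next big fight once the players have had a breather. All names, thresholds and decay rates here are invented for illustration, not Left 4 Dead’s real tuning:

```python
class AIDirector:
    """Toy pacing controller in the spirit of an AI Director: track a
    crude 'intensity' score and hold back spawns while the players are
    still stressed. (All values are made up for illustration.)"""
    def __init__(self, relax_threshold=10.0, decay=0.9):
        self.intensity = 0.0
        self.relax_threshold = relax_threshold
        self.decay = decay

    def on_player_damaged(self, amount):
        # Taking damage pushes the perceived stress level up.
        self.intensity += amount

    def tick(self):
        # Stress fades over time once the players get some quiet.
        self.intensity *= self.decay

    def should_spawn_horde(self):
        # Only throw the next horde once the players have recovered.
        return self.intensity < self.relax_threshold
```

The interesting design point is that the Director is AI about the *experience*, not about any individual character – it decides when things happen rather than what a given zombie does.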

So for all the graduating students who think they know AI – now learn about the mechanics of games. If you’re reading this site that’s a really good start, and I hope this post has helped a little on your way to becoming a serious Game AI professional. Talk to people, try things out for yourself, but most importantly remember – seeing something in a textbook isn’t the same as being able to get it working in the real world! :)


I mentioned right at the start of the article that some people took the time to “re-educate” me, and I’d really like to take a moment to thank them for that. I owe a lot to Alex Champandard (and frankly the whole community) for bringing me into the light, as it were. Chris Preston (Ubisoft Reflections), Jack Potter (Rockstar North) and Duncan Harrison (Ruffian Games) all helped massively in properly contextualising what “Game AI” means. And finally all the academics – those who do it right are great role models, those who do it wrong are a great cautionary tale! :D