For some developers, all this talk of procedural content generation is bittersweet. If you’re generating a world map, for instance, you probably don’t have too many requirements of it. A bit of land, not too many mountains – other than that, the randomness is part of the appeal. But what if you have very specific requirements? Well, then you’d need to test the content you generate to make sure it’s okay to give to the player. For many games this is a tricky and daunting task. How on earth would you automatically test a physics puzzle game, for instance? I’m glad you asked. This week on Reading the Papers – a novel approach to simulation that can test real-time physics-based games (and more).
We’re reading a paper about solving levels in Cut The Rope; specifically, it describes how the authors implemented a solver for the game, despite the fact that it’s complex, physics-driven and real-time. It’s a neat little project in its own right, but the authors are hopeful that the approach could be applied to many other types of game, too.
The authors have done a lot of work on Ropossum, and written multiple papers about different elements of the project. The level generator uses computational evolution to place objects in the game world to form a puzzle, and then can evaluate those levels using a variety of approaches – human designers can give feedback for a mixed-initiative approach; it can use a selection of custom heuristics (more on those in this paper); and as we’re going to see today, it can also automatically verify playability of a level it generated. We’ll no doubt see the rest of the Ropossum system in another column some other day, but for now – solvability.
Procedural content generation is all about the balance between unpredictability and reliability. On the one hand, we want to generate content that no-one has seen before, without needing a human designer to be there to do it. But at the same time, if there’s no human designer to curate the content, we need to be confident that what comes out is usable, and hopefully fun. If we’re generating puzzles, we need to be confident that a solution exists.
For turn-based or grid-based puzzle games, this isn’t too tricky. In fact, artificial intelligence has been looking at this kind of problem for a long time, inventing chess puzzles or producing altered versions of existing boardgames. Making a game turn-based and grid-based means that it’s easy to represent within a computer, and you can easily explore the space of possibilities. When a game is real-time, though, and when it has lots of complex systems at work in each frame like a physics-based game, then the problem gets many orders of magnitude harder.
So the problem is this: given a level design for a game like Cut The Rope, decide whether or not it can be solved, and if possible provide a series of actions that would lead to a solution. The approach proposed in this paper uses a Prolog solver that runs alongside a simulation of the game. At each timestep, the game state is translated into a description which Prolog can understand, and then the solver uses this representation to work out which sensible moves can be taken right now. If you’ve never heard of Prolog before, don’t worry – you’ll get the gist from a quick example. Below is a description of a Cut The Rope level in the paper’s Prolog representation:
candy(527, 140).
velocity_up.
velocity_to_right.
frog(390, 330).
rope(550, 50, 105).
rope(440, 50, 195).
air_cush(507, 195, 4).
reachable(rocket).
Some of these are obvious facts, like candy(X, Y) which states the candy’s co-ordinates in 2D space. Others are more subtle – reachable(C) means that the component C is accessible to the candy given its current trajectory, so if a candy is floating upwards then things below it aren’t reachable. This helps the system infer what moves are sensible or not – if the candy is never going to get near a component no matter how long you wait, then it’s not sensible to activate it.
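To give a flavour of how that trajectory reasoning might work, here’s a small sketch of my own (not the paper’s definition), assuming screen coordinates where Y grows downwards and a hypothetical component_pos/3 predicate that gives each component’s position:

unreachable(Component) :-
    velocity_up,                          % the candy is floating upwards
    candy(_, CandyY),
    component_pos(Component, _, CompY),
    CompY > CandyY.                       % larger Y means lower down the screen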
Now, suppose we’re looking at this description of the game state, and we want to figure out what to do next. There are only a few moves we can make in a given Cut The Rope level, like pushing an air cushion or cutting a rope. How do we know which actions are likely to have an effect? The authors suggest a set of rules which Prolog can check to see if a move is ‘sensible’ currently. So for instance, for pressing an air cushion we might check:
air_cush_press :- air_cush(X, Y, Dir), candy_in_dir_of_air_cush(X, Y), distance(X, Y, 80).
In other words, we can press an air cushion if there is an air cushion in the level (obviously); if the candy is in the direction of the air cushion (otherwise there’s no point); and if the candy is near enough to the cushion to be affected by it. The authors suggest heuristics like this for each of the actions – if you were writing them for your own game, you could invent your own heuristics for whatever mechanics you’re dealing with. Crucially, this set of rules includes a ‘void’ action which equates to waiting, or not taking an action. Most of the time, this is what the simulator will be doing.
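As a purely illustrative sketch (these aren’t the paper’s exact clauses, and candy_attached_to_rope/2 is a made-up predicate), a rope cut and the ‘void’ action might be written in the same style:

rope_cut :- rope(X, Y, _Length), candy_attached_to_rope(X, Y).
void.                                     % waiting is always a legal move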
Running alongside this Prolog system is the physics engine of the game itself. The system works by simulating an interval of time in the game engine, and then sending the game state over to the Prolog system to see which actions, if any, are available to the player. If there’s more than one action available, the system branches its simulation of the game and continues to search, in a depth-first fashion, through the game. The image below shows how a tree of possible moves is built up as the system explores its options. When it reaches a failure state, the system simply backtracks until it finds an option it hasn’t explored yet.
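To make that search concrete, here’s a minimal sketch of how such a depth-first loop could look in Prolog. This is my own illustration rather than code from the paper: sensible_action/2, simulate/3, goal_reached/1 and failure_state/1 are hypothetical predicates standing in for the rule checks and the physics engine.

solve(State, []) :-
    goal_reached(State).                  % candy delivered to the frog
solve(State, [Action|Rest]) :-
    sensible_action(State, Action),       % branch over the moves the rules allow
    simulate(State, Action, Next),        % run the game engine forward an interval
    \+ failure_state(Next),               % prune branches where the candy is lost
    solve(Next, Rest).                    % Prolog backtracks on failure

Because Prolog tries each matching clause in turn and backtracks when a branch fails, the tree of moves described above falls naturally out of the language’s own execution model.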
Of course, if the game is updating sixty times a second then there probably isn’t much going on between any two timesteps. The authors note that if you check with your Prolog system every single frame then the search very quickly slows down. So instead, they implemented a variable timestep which adds a delay after an action has been taken, during which no checks are made with the Prolog engine. Some actions have small delays, because they are often followed very quickly by new actions (like cutting ropes). Others, like firing rockets, have longer delays. This is all unique to whatever game you’re trying to analyse, but these heuristics are normally easy for the game’s designer to figure out.
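As an illustration of the idea (the action names and values here are made up, not taken from the paper), the delays could themselves be expressed as facts the simulator consults before the next Prolog check:

action_delay(rope_cut, 10).               % cuts are often followed quickly by more cuts
action_delay(rocket_fire, 60).            % rockets need longer to play out
action_delay(void, 30).                   % default wait when no action is taken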
So the essence of the system boils down to the relationship between the game’s raw, continuous, real-time simulation, and the elegant logical representation in Prolog. By choosing reasonable time intervals, and having a good intuition of which moves are sensible at any moment, the Prolog system can help tame this huge state space and figure out a sequence of actions that completes a level. Of course, some game designs may not suit this. Cut The Rope only lets the player interact with what’s on screen; there’s no placement of new components or anything complex like that. And many actions in the game actually reduce the possibility space, by popping bubbles or cutting ropes. Nevertheless, the paper clearly describes a really nice framework for solving these kinds of problems. I think it could easily be extended to a wide variety of game types.
I didn’t have time to cover everything mentioned in this particular paper – and as I mentioned in the introduction, there are other papers about Ropossum because it’s such a large piece of research. For example, the evolutionary level generator can work with a human designer to co-operate on a level design. The human designer can fix certain parts of the design and ask the system to redesign the remaining elements, all the while ensuring the results are solvable. Mixed-initiative tools like this are going to be an exciting feature of game design in the future.
I spoke to Noor about the research at AIIDE this year, where this paper was presented. She has a great outlook on this kind of procedural generation, and she and her co-authors seem passionate about tackling these harder generation tasks, where you often need to explore huge, real-time state spaces. We often propose the idea of expert AI players being used to test levels, but this isn’t a good solution for so many reasons. If we can find ways to intelligently explore levels to find ways through them, we give ourselves so many more tools to design and analyse the content we generate. Exciting stuff.
Where To Find More
This paper was written by Mohammad Shaker, Noor Shaker and Julian Togelius. Noor also helps organise the Platformer AI Competition, and she’s made great contributions to understanding platformer level design, which we’ll cover in a future column. All three are very friendly and active on Twitter, and this work is active and ongoing for Mohammad I believe, so if you’re interested in more information I’d recommend getting in touch with him.
One final note on this edition of The Saturday Papers – thanks to some very kind suggestions and the lovely people at AltDevBlog, The Saturday Papers will now be appearing over there as well, under the title ‘Reading The Papers’ – thanks to Jurie Hornemann for the title. Hello to all our new readers. I’m really excited to hear new opinions about the column. Don’t forget, there’s a healthy back catalogue of papers you can read over on my website.