Replaying actions in a game is a surprisingly common feature. There are match replays, of course, but recorded actions are also used in cutscenes, ‘ghost’ players in racing games, and a variety of puzzles, like those for Clank in Ratchet & Clank: A Crack in Time. Many engines support at least a limited ability to record actions in-game, as opposed to recording a movie (the demo command in Source, for example). Unity, however, does not.

One of my current projects is heavily reliant on character replays. I’ve written several replay systems in past projects, and I know how much work they ended up being. I never want to go through that again, so I decided to roll my own generic, adaptable record/replay system for Unity. It went…OK…

(For anyone interested, but not enough to read about it, the script is on GitHub)

Past Experiences

I think the first time I had to record something in Unity was for the beta of a Robert Yang game for a game jam at Babycastles. It’s an arcade-style game with novel controls, and needed some sort of attract mode/tutorial. At first I tried recording a movie with Fraps or something. This never worked: making the quality high enough meant the frame rate took too much of a hit, not to mention that tacking a 40 MB video on to a 3 MB webplayer game was kinda overkill. In the end, I just decided to take down the positions of the two players each frame, and when replaying just snap them to the recorded positions. Simple, and as long as you didn’t look too closely or have a really bad/good framerate, pretty robust. It was good enough for a game jam, anyway.

Another game that Robert and I worked on for a while was Muckraker, about investigative journalism. This involved ‘filming’ certain events and replaying (even editing) them, so some sort of in-game recording system was needed. I quickly rigged up a system where you would drop a ‘Recordable’ script on each GameObject you wanted to record. When the player started filming, a master recorder would take down the position and rotation of each Recordable object within the camera’s view. Our recordings were all short, and we never got too far into production, so this system worked well enough. However, it was unwieldy, completely frame-rate dependent, and inflexible: I couldn’t have two different recordings at once, and couldn’t easily take down any other events.
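For a concrete sense of that approach, here’s a minimal sketch of what such a setup might look like (the class and field names are hypothetical, not the actual Muckraker scripts): each Recordable just stores a transform snapshot every frame while filming, and snaps back through them on playback.

```csharp
using System.Collections.Generic;
using UnityEngine;

public class Recordable : MonoBehaviour
{
    private struct Snapshot
    {
        public Vector3 position;
        public Quaternion rotation;
    }

    private readonly List<Snapshot> frames = new List<Snapshot>();
    private int playbackFrame;

    // Toggled by a master recorder when the player starts/stops 'filming'.
    public bool IsRecording { get; set; }
    public bool IsPlayingBack { get; set; }

    void LateUpdate()
    {
        if (IsRecording)
        {
            // Take down where this object is this frame.
            frames.Add(new Snapshot { position = transform.position, rotation = transform.rotation });
        }
        else if (IsPlayingBack && playbackFrame < frames.Count)
        {
            // Snap straight to the recorded transform; one recorded frame per
            // rendered frame, so this is completely frame-rate dependent.
            Snapshot s = frames[playbackFrame++];
            transform.position = s.position;
            transform.rotation = s.rotation;
        }
    }
}
```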

All rather dire. The next time, I vowed to do things right.

The VCR

More recently, I started working on No Architect, a cooperative FPS platformer/short story/dull-party-adventure. I wanted the co-op part to be available offline too, but it still needed at least some of the quality of live players. Recorded playthroughs could also be easily shared, so I could still have people playing with each other without having to rely on enough players to keep live multiplayer going.

I decided to record only inputs, and deal with the expected inaccuracies at runtime. Mostly because syncing the position and rotation of an object every frame takes up a lot of room, which is precious in web-based games. However, it also gives me the flexibility to mess with the playback. If, say, a wall has appeared, a recorded agent will get stuck against it rather than clip through.

I set up a basic script (InputVCR.cs, for those of you playing along at home) where the user chooses which named buttons/axes to record from those already set up in Unity’s Input Manager. Each frame, the VCR will record the status of these inputs, along with the position of the mouse. (As an aside, I set up my own recording format, but in hindsight should have just used JSON.) Now, you can give this recording to any InputVCR, and it will spit out the same inputs in the same frames. Let’s painfully stretch the VCR metaphor. Say user input is the signal from an aerial. You can plug this input into the VCR so GameObjects (TVs) can get their inputs from it rather than the direct user feed. A GameObject now doesn’t have to know whether it’s getting live input or not, and a VCR can record, pause/rewind, or even swap recordings (VHS tapes) with other VCRs, all without having to care about what uses its output.
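A rough sketch of the recording half of that idea (heavily simplified, and the names here are illustrative rather than the actual InputVCR.cs API):

```csharp
using System.Collections.Generic;
using UnityEngine;

// One frame's worth of recorded input.
public class FrameState
{
    public Vector3 mousePosition;
    public Dictionary<string, bool> buttons = new Dictionary<string, bool>();
    public Dictionary<string, float> axes = new Dictionary<string, float>();
}

public class SimpleInputRecorder : MonoBehaviour
{
    // Names as set up in Unity's Input Manager; which ones to record is up to the user.
    public string[] recordedButtons = { "Jump", "Fire1" };
    public string[] recordedAxes = { "Horizontal", "Vertical" };

    public List<FrameState> recording = new List<FrameState>();

    void Update()
    {
        var frame = new FrameState { mousePosition = Input.mousePosition };

        foreach (string button in recordedButtons)
            frame.buttons[button] = Input.GetButton(button);
        foreach (string axis in recordedAxes)
            frame.axes[axis] = Input.GetAxis(axis);

        recording.Add(frame);
    }
}
```

Playback is the mirror image: instead of reading UnityEngine.Input, the VCR hands back the stored frame for wherever it currently is in the recording.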

Pretty simple, and except for replacing static calls to Input (like Input.GetMouseButtonDown()) with a reference to an InputVCR instance, you don’t have to change existing scripts.
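In a controller script, the change is about this small (a hypothetical example; the method name below mirrors UnityEngine.Input, and the real InputVCR API may differ slightly):

```csharp
using UnityEngine;

public class PlayerJump : MonoBehaviour
{
    // Assigned in the inspector; may be passing through live input or replaying a recording.
    public InputVCR vcr;

    void Update()
    {
        // Was: if (Input.GetButtonDown("Jump")) { ... }
        if (vcr.GetButtonDown("Jump"))
        {
            GetComponent<Rigidbody>().AddForce(Vector3.up * 5f, ForceMode.Impulse);
        }
    }
}
```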

Of course, this didn’t work out so smoothly. Over my recordings of around two minutes, the characters would slowly drift off course as framerate differences started to add up. And in a game about making long jumps onto small platforms, there’s not a whole lot of wiggle room. I added timecodes to each frame, so that playback could slow down or speed up if the recording frame times didn’t match the playback times. This helped somewhat, but if there was a spike during recording or playback, the character would still freak out. For short recordings, or those less reliant on physics calculations, this might be enough, but I needed some sort of position syncing.
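The timecode idea looks roughly like this (an illustrative sketch, not the InputVCR code): the playback side keeps its own clock and advances through the recording until the recorded timecodes catch up, so a framerate mismatch between recording and playback mostly cancels out.

```csharp
using System.Collections.Generic;
using UnityEngine;

public class TimecodePlayback : MonoBehaviour
{
    // Timecode of each recorded frame, in seconds since the recording started.
    public List<float> frameTimes = new List<float>();

    private float playbackTime;
    private int currentFrame;

    // Index of the recorded frame whose inputs should be applied right now.
    public int CurrentFrame { get { return currentFrame; } }

    void Update()
    {
        if (frameTimes.Count == 0)
            return;

        playbackTime += Time.deltaTime;

        // If playback is running slower than the recording, skip ahead past frames
        // we've already overtaken; if it's running faster, hold on the current frame
        // until its timecode is reached.
        while (currentFrame < frameTimes.Count - 1 &&
               frameTimes[currentFrame + 1] <= playbackTime)
        {
            currentFrame++;
        }
    }
}
```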

I added the ability to record position/rotation information with any frame (as well as arbitrary information). For many uses, this would be all you need. Sync the position every second or so, and interpolate back to the desired location if the playback goes off-kilter. Unfortunately, I was playing recordings in a changing environment, and still needed the real-time character to override the recording if, say, a platform disappeared from under their feet. There’s also a bug in Mono that makes StringReader.ReadLine() impossibly slow, so parsing all that extra information was really unpleasant (I fixed it eventually, but at first I thought it was just the large file size). I got around this by recording what platform (if any) the recording was meant to be on, and syncing the location each time the playback landed on the ‘correct’ platform. If a platform was missing during playback, the controller wouldn’t sync and would be allowed to fall, as expected. There are still some inconsistencies during framerate spikes, but at least they won’t kill the character.
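The general-purpose version of that syncing might look something like this (assumed names, a sketch rather than the shipped code): a recorded transform snapshot arrives every second or so, and the playback character is eased back toward it instead of being teleported.

```csharp
using UnityEngine;

public class PlaybackPositionSync : MonoBehaviour
{
    // How aggressively to ease back toward the recorded position.
    public float correctionSpeed = 2f;

    private Vector3 lastSyncedPosition;
    private bool hasSyncTarget;

    // Called by the playback system whenever a frame carrying a position snapshot comes up.
    public void OnSyncFrame(Vector3 recordedPosition)
    {
        lastSyncedPosition = recordedPosition;
        hasSyncTarget = true;
    }

    void LateUpdate()
    {
        if (!hasSyncTarget)
            return;

        if (Vector3.Distance(transform.position, lastSyncedPosition) < 0.05f)
        {
            // Close enough; let the input-driven motion take over again.
            hasSyncTarget = false;
            return;
        }

        // Ease toward the recorded position instead of snapping, so physics-driven
        // playback (jumps, falls) isn't visibly yanked around.
        transform.position = Vector3.Lerp(transform.position, lastSyncedPosition,
                                          correctionSpeed * Time.deltaTime);
    }
}
```

In the platform-based variant described above, the sync would only fire when the playback character actually lands on the platform the recording says it should be on.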

Lessons

Recording stuff is a pain! And, unfortunately, there is no one solution that works for every game. Syncing the position/rotation of an object is guaranteed to be accurate, but the playback is stuck on rails and a changing world won’t affect it. Recording inputs allows for interactive playback, but in many cases won’t stay accurate over time.

I found the best solution is to start with a recording of the input only, and add syncing until the playback is accurate enough for you. If there are events that rely on the playback, you can record them separately from the input to ensure they still happen. InputVCR still does the heavy lifting when dealing with recordings, but I’ve had to accept that you’ll always need a decent amount of human input to get the results you need.
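For those must-happen events, a separate event track alongside the input recording is usually enough (hypothetical names here, not the InputVCR API): each event gets a timecode when recorded and is fired by the playback clock regardless of whether the input-driven character has drifted.

```csharp
using System;
using System.Collections.Generic;
using UnityEngine;

public class EventTrack : MonoBehaviour
{
    [Serializable]
    public struct RecordedEvent
    {
        public float time;     // seconds since recording started
        public string name;
    }

    public List<RecordedEvent> events = new List<RecordedEvent>();

    // Hook up whatever should react to each event during playback.
    public Action<string> onEvent;

    public bool isPlayingBack;

    private float recordingStart;
    private float playbackTime;
    private int nextEvent;

    public void StartRecording()
    {
        recordingStart = Time.time;
        events.Clear();
    }

    public void Record(string eventName)
    {
        events.Add(new RecordedEvent { time = Time.time - recordingStart, name = eventName });
    }

    void Update()
    {
        if (!isPlayingBack)
            return;

        playbackTime += Time.deltaTime;
        while (nextEvent < events.Count && events[nextEvent].time <= playbackTime)
        {
            if (onEvent != null)
                onEvent(events[nextEvent].name);
            nextEvent++;
        }
    }
}
```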

Check out the source on GitHub, or grab the older version from the Asset Store. If you use it or make any changes, I’d love to hear what you did!

Note: Reposted at my own site, grapefruitgames.com