Massively multiplayer games usually have a persistent storage layer that connects to a SQL database. Or, to put it less technically, a way to manage the world and player data on disk instead of somewhere less forgiving, like, say, volatile system memory.

MMOs have a huge amount of data to handle, from digital assets and gameplay information to live user accounts and logs. There is definitely no silver bullet here, so one needs to tread carefully: a bad decision could hurt the development of the game later on.

Now, there are a slew of approaches and options that are beyond the scope of this article. In this particular case, I’m going to concentrate on why we chose PostgreSQL over similar options and what it did for us.

A little background

The year was 2002, and I was part of a team about to embark on the development of an MMO (known as Regnum Online). Since we were using proprietary engine technology, all options were on the table and we had the “luxury” of deciding which frameworks and tools to use.

The game was going to be a 3D MMO, faction based, inspired by games like Asheron’s Call and Dark Age of Camelot. Several of us had already worked with Open Source databases like MySQL and PostgreSQL, and commercial ones like SQL Server or Oracle, so we could go either way. Nonetheless, none of us had used any of them for MMO development, so we were uncertain.

Our requirements

The first thing we did was lay out a set of requirements for the game. When it came to the database, we knew we were going to use some kind of RDBMS for the accounting system and the gameplay database, and since our tool pipeline already used a client-server approach with MySQL on the back end, it made sense to keep using that for development. Our thoughts centered on rapid development, leaving performance issues for later.

These were the things we were concerned about, and they ultimately forged our decision:

  • Cost: This is a no-brainer, but in our case the initial funding was meager, to say the least. The cost included our development databases as well as the live ones, and since we had multiple shards in mind, we expected it to scale up quickly.
  • Robustness: Massive online games have a lot of interactions that could be considered redundant or safe to lose in something like a server crash (although some players would probably disagree). On the other hand, there are parts like player trading, or more obvious things like logging in, creating an account or buying premium items, that require transactions and as many data-reliability features as one can get.
  • Stored procedures: Don’t ask me why, but we were fixed on this one. We wanted stored procedures. You could actually build an entire MMO without using them, but we were used to having the code organized in a certain way. Also, add to that the benefit of caching in some parts.
  • Flexibility: External tools, third-party programs, IDEs and add-ons that would build on the base setup, as well as any ability to extend the basic offering.
  • Familiarity: We didn’t want to train the entire team on a new database engine. We also wanted as much documentation as possible, since we knew there was going to be a lot of tweaking involved.
  • Future-proof: Something that could stand the test of time (yeah, I stole that phrase from Civilization). A database that would be maintained and improved alongside the game, without our intervention.
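To make the robustness point concrete, here is a minimal sketch of the kind of operation that demands transactions — a player trade where an item and gold must change hands atomically. The table and column names are invented for illustration, not taken from the actual game schema:

```sql
-- Hypothetical trade: player 2 buys item 42 from player 1 for 100 gold.
-- Either all three statements commit, or none of them do - no half-trades
-- survive a server crash mid-way.
BEGIN;
UPDATE inventory SET player_id = 2 WHERE item_id = 42 AND player_id = 1;
UPDATE players   SET gold = gold - 100 WHERE player_id = 2;
UPDATE players   SET gold = gold + 100 WHERE player_id = 1;
COMMIT;
```

If any statement fails (say, the seller no longer owns the item), a ROLLBACK leaves both players exactly as they were — which is the whole reason this was a hard requirement.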

The decision

At this point, we had arrived at the conclusion that we were going to need an Open Source database, mostly driven by the cost requirement, as well as community support.

Most of us had experience with PostgreSQL and MySQL. Other options were discussed but immediately discarded due to our lack of familiarity with them; stick to what you know, right?

At the time, MySQL was more widely known than PostgreSQL, and there was a big community behind it. Point for MySQL. On the other hand, PostgreSQL was the only one of the two that had stored procedures. Point for PostgreSQL.

When it came to external tools and IDEs, MySQL definitely had more, from web-based ones (phpMyAdmin was more mature than phpPgAdmin) to all kinds of importers/exporters. PostgreSQL had already been around for quite a while, but its options seemed fewer. Point for MySQL.

Regarding robustness, both had transactions, but MySQL required a specific storage engine (InnoDB) for them. As usual, there were a lot of “PostgreSQL vs MySQL” articles that ended up saying “just use the one that fits”. Coupled with our experience at the time, this was a point for PostgreSQL.

The future-proof requirement leaned toward MySQL. Since there weren’t many developers around who knew PostgreSQL, we doubted how well, and for how long, it would be supported. Point for MySQL.

As you can see, if every option weighs the same, it was a tie. We were also already using MySQL for the centralized pipeline. But then we decided to put everything on a scale, and since “the right tool for the job” was a motto of ours, we made an unorthodox decision.

We went for both.

That’s right: everyone in software development was telling us about consistency, about ease of maintenance, and who knows how many other guidelines they teach you at university. Like rebellious children, we decided to use MySQL for digital assets and content delivery (since blob processing, at the time at least, was faster on MySQL) and PostgreSQL for the account and game databases.

Why choosing PostgreSQL was a good idea

Fast forward 9 years (geez!), and I now see clearly why it was a good decision. Hindsight is always 20/20? You got that right!

To summarize the good reasons:

  • Stored procedures: Or functions, as they are called in PostgreSQL. A great deal of the game database operations were built using functions. One of the great things about them was the choice of languages a function could be written in (for instance PL/pgSQL, Python or plain C). They really helped us keep everything organized and perform complex operations for important events, like creating a character or an item, in an atomic way.
  • Robustness: Referential integrity, foreign keys, constraints, you name it. We needed all those options and more, and we used them all. Also, the fact that it behaves really nicely in a “live” environment (hot is the keyword) was great for keeping good uptime.
  • Really stable development: To be honest, I was personally nervous every time we needed to upgrade the game database when a new PostgreSQL version came out. Granted, we could have kept the previous version, but we were update freaks; we wanted every bit of new functionality the new versions were offering. We also could have been a little more cautious by doing upgrade simulations. I can happily attest that we went from PostgreSQL 7.x to 8.4 over the span of several years and not once did we have a serious issue. There were times when we had to update the database structure to accommodate changes, but that was usually due to our own design mistakes.
  • Future-proof: Okay, we didn’t really know it was going to gain more steam throughout the years. Some of us were really skeptical, but if you looked at the projections and the quality of the updates, there was nothing to worry about.
  • Open source: This was always a good option to have on the table, and of course it was also a good point for MySQL. While we never actually needed to change any of the code, we slept soundly knowing that, given a situation that required it, we could go and get our hands dirty.
  • Performance: There were very vocal people in favor of MySQL. At the time, the motto in the Open Source world was “MySQL for speed, PostgreSQL for advanced operations”. Since we were always behind schedule, there wasn’t any room for stress testing, so we trusted it would be able to handle anything we threw at it. We went from a few thousand rows to several hundred million, spanning several databases and servers. I know that’s not impressive on certain levels (a Google engineer would probably laugh at those numbers), but we were sailing uncharted waters.
    To be fair, we encountered a lot of situations you don’t want to be in. You know, the ones that have to do with “100% CPU load” or “extreme swap usage”. Mostly it was our fault, and it was relatively easy to fix by updating the database design.
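The stored-procedures point above can be sketched with a small PL/pgSQL function. This is a hypothetical example with invented table and column names, not the actual game code, but it shows the shape of the atomic “create a character” operations mentioned earlier:

```sql
-- Hypothetical sketch: create a character and its starting inventory
-- in one atomic call. A PostgreSQL function body runs inside a single
-- transaction, so a failure on either INSERT rolls both back.
CREATE FUNCTION create_character(p_account integer, p_name text)
RETURNS integer AS $$
DECLARE
    v_char_id integer;
BEGIN
    INSERT INTO characters (account_id, name)
         VALUES (p_account, p_name)
      RETURNING character_id INTO v_char_id;

    -- Every new character starts with a basic weapon.
    INSERT INTO inventory (character_id, item_type)
         VALUES (v_char_id, 'rusty_sword');

    RETURN v_char_id;
END;
$$ LANGUAGE plpgsql;
```

The game server then only needs a single call — `SELECT create_character(1, 'Conan');` — instead of orchestrating several statements itself, which is exactly the kind of organization the team wanted.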

Not everything that shines is gold

Yes, we had some issues. A lesson we learned here is that you cannot blame the tool, framework or engine you are using if you didn’t anticipate the problems. In that case, you only have yourself to blame.

Clearly, the good outweighed the bad, since the game is still running after four years with an ever-increasing amount of data. Even so, the issues are worth mentioning.

Fragmentation: Fragmentation is the scary monster of the long-lived-process world. Anything that requests dynamic memory ad nauseam is going to clog up your RAM sooner or later. In the case of PostgreSQL, certain operations don’t reclaim disk space, which is understandable: it couldn’t reach the desired performance if it had to reorder and reclaim everything every time. To solve this, there is a command called “vacuum” that basically defragments the database. It has several options that take more or less time, according to the case.

Now, when you are in development and you can take the database down, there is no problem: you just automate it, or do it manually every time you take the server down or bring it up. Also, when you have a region-locked MMO, there is a time of day when people go to sleep (usually), giving you a window of opportunity while the database is not being hit hard. When you have a multi-region MMO, the plot thickens.

In our case, the symptom was obvious: a) a user complains that the sword he just bought is not in his inventory; b) one of us immediately thinks “Item lag! Activate the alarm, this is not a drill!”; c) somebody in operations is told that the database should be checked (in our small development team, this would usually be the same programmer who was programming and attending support at the same time); d) in 90% of the cases, we had forgotten to turn “autovacuum” on.

To be fair, most of these issues arose because we were learning to operate a live MMO (yeah, it was our first). After a while, we implemented procedures to vacuum at less crowded hours and during updates, and more recently during scheduled maintenance.
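For the curious, the vacuuming described above boils down to a few commands. The table name here is hypothetical, and the autovacuum setting is shown as it appears in `postgresql.conf`:

```sql
-- Reclaim dead-row space in a busy table without taking it offline;
-- plain VACUUM only takes lightweight locks, so it can run on a live game:
VACUUM inventory;

-- VACUUM FULL rewrites the table and returns the space to the OS, but it
-- takes an exclusive lock - only viable during a maintenance window:
VACUUM FULL inventory;

-- Or let the background daemon handle it (the setting we kept forgetting);
-- in postgresql.conf:
--   autovacuum = on
```

The trade-off between the plain and FULL variants is exactly the “less or more time, according to the case” mentioned above.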

Scaling: A big subject in these matters: how to scale horizontally and/or vertically according to demand. At the time, there wasn’t any mature solution that wasn’t commercial, so we were left to build our own. In an MMO you actually have replication at several stages, since you have a copy of the game database at your HQ, in a testing environment (local or live), in the live environment and, in the case of multi-sharded MMOs, across several worlds or dimensions. Since we were mostly jacks-of-all-trades and not specialized in database development, we ended up building a simple web-based synchronization. Even though there were several options for replication, a customized approach catered better to our needs.

…but the kitchen sink: There were a couple of places where we were left wanting more, specifically anything having to do with big databases, such as the log one, and everything that involves processing a big chunk of data in real time. You see, when your database is going to be read a lot, you set it up one way (e.g., indexes). Otherwise, if it’s going to be heavy on writes (inserts/updates/deletes), you set it up another way.

In our case, most of the game database required both at the same time. For instance, the log database was heavy on writes, since every player doing anything worth logging generated entries (from slaying a monster, which doesn’t happen that often, to sending a chat message); but it was also heavy on reads, since we needed to process statistics of all kinds, the support team needed to browse for exploits, the design team needed it for juicy metrics and the game scripts needed it for ranking calculations and active-user processing.

This is one of those challenging situations you get a kick out of when you have the time to do proper research, test and iterate. When you have partners breathing down your neck, along with the marketing department, not so much. We ended up implementing things like table partitioning (which maybe we did wrong; I still have nightmares about it, and I wasn’t even the one implementing it). Where were you, NoSQL, 9 years ago?!
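In the PostgreSQL 8.x era, table partitioning meant inheritance plus CHECK constraints (declarative `PARTITION BY` came much later). A rough sketch of the idea, with a hypothetical log table — this is an illustration of the technique, not our actual setup:

```sql
-- Parent table: queries go here; it holds no rows itself.
CREATE TABLE game_log (
    logged_at timestamp NOT NULL,
    player_id integer,
    event     text
);

-- One child table per month. The CHECK constraint lets the planner
-- (with constraint_exclusion enabled) skip partitions whose date range
-- cannot match the query.
CREATE TABLE game_log_2011_01 (
    CHECK (logged_at >= '2011-01-01' AND logged_at < '2011-02-01')
) INHERITS (game_log);

CREATE INDEX game_log_2011_01_ts ON game_log_2011_01 (logged_at);
```

Routing inserts into the right child required a trigger or rule on the parent, and old months could be dropped wholesale with a single `DROP TABLE` — which is precisely what makes partitioning attractive for an ever-growing log database, and also what makes it easy to get wrong under deadline pressure.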

Conclusion

I’d like to wrap up with one question:

Would I use it for another MMO or would I recommend it to others?

My answer is: definitely yes! PostgreSQL has proven to be a great option for a complex game database and I would use it again in a heartbeat. However, since complex challenges sometimes require complex solutions, I’d probably throw other options into the mix: for instance, a NoSQL store for logging and parts that need cost-effective scaling, and Redis for real-time operations (like achievement processing).

The world of databases has changed a great deal in the last few years. PostgreSQL has grown into a great champion of the Open Source world, and of course it always ends up being a matter of analyzing your own requirements and choosing the right tool for the job.

The landscape is changing excitingly with the advent of NoSQL and document-oriented databases. It’s definitely interesting to sit back and watch what new MMO games, and the updated old ones, come up with to solve complex data storage challenges. Or, even more interesting, to engage oneself in the challenges of this everything-is-real-time-and-connected world.