Archive for the ‘Day Job’ Category

Pirates NDA drops

We just dropped our NDA with the start of open beta today. Hopefully a billion forums will be abuzz with talk of Pirates. :)

This is a pretty big deal around Flying Lab. We’ve been working on the game for more than five years and have been in closed beta for two. Now we’re entering the final stretch before the pre-order head start on January 7th.  (The game “launches” on January 22, but paying customers will gain access on the 7th and get to keep their characters, so that’s our real launch date by every measure that matters.)  It will be nice to have people who have actually played the game get a chance to talk about it.

How to make Microsoft SQL Server cry like a baby

Earlier this year we switched from MySQL to MS SQL Server. I don’t regret the switch at all; MS SQL Server has been far more stable than MySQL was, and has lots of whizzy new features. The MySQL client library was dropping connections under load and then crashing when it reconnected, which is what pushed us to switch in the first place. Well, it turns out that MS SQL Server has some scaling problems of its own. It doesn’t crash, but it does get so slow as to be non-functional. What follows is a guide to making your own installation of SQL Server whimper.

Our server boxes are 8-way 2.6GHz Xeons with 16GB RAM running Windows Server 2003 64-bit and SQL Server Enterprise Edition 64-bit. If your configuration is different your mileage may vary.

Technique #1

We are using a system called the Flogger to record gameplay events into a database. To make this happen, all server processes connect to one central DB and call a stored procedure per event. This works fine when the number of processes is low, as in under 500. When the load on a world instance grows, the number of processes connecting to the flogger DB increases to 1200.

Exactly how long it takes varies from a few hours to a few days, but after a while at this load SQL Server decides that it has had enough and stops accepting new connections. New processes starting up eventually time out, and things generally start going badly on the servers. Once SQL Server starts timing out connections the only way we’ve found to get the database running again is to restart the SQL Server service. While it’s in this state SQL Server is only using moderate system resources.

The way we’re working around this problem is to use files as a buffer between the server processes and the database. Every so often (depending on activity) each process dumps the events it wants to record out to a file. Some time later (well under a second when there’s no load, but potentially longer on a well-loaded cluster) another process that maintains a connection to the flog database reads the file, dumps it to the database, and then deletes the file. This eliminates the need for the game servers to connect to this database at all, so if it decides to go out to lunch the game is unaffected. It also makes the data collection more reliable by putting any backlog into one directory full of files instead of in memory on 1500 different processes spread across five server machines.
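
To make the shape of that workaround concrete, here’s a minimal sketch of the two halves. This isn’t our actual flogger code; the drop-directory name, the one-event-per-line file format, and the WriteEventsToFlogDb call are stand-ins for whatever your event schema and database layer look like.

```cpp
// Sketch of a file-buffered event pipeline (not the real flogger code).
// Game-server side: write events to a local spool file, then atomically
// rename it into a shared drop directory. Loader side: scan the drop
// directory, push each file into the database, and delete it on success.
#include <filesystem>
#include <fstream>
#include <string>
#include <vector>

namespace fs = std::filesystem;

const fs::path kDropDir = "flog_drop";   // hypothetical shared directory

// Hypothetical DB layer; in practice this is where the stored-procedure
// calls against the flog database would live.
bool WriteEventsToFlogDb(const std::vector<std::string>& events)
{
    return !events.empty();              // stub for the sketch
}

// Game process: flush the in-memory event buffer to a uniquely named file.
void FlushEvents(const std::vector<std::string>& events, int processId)
{
    static int flushCount = 0;
    fs::create_directories(kDropDir);
    fs::path tmp = kDropDir / ("proc" + std::to_string(processId) + ".tmp");
    {
        std::ofstream out(tmp);
        for (const auto& e : events)
            out << e << '\n';            // one serialized event per line
    }
    // Rename is atomic on the same volume, so the loader never sees
    // a half-written file.
    fs::path dest = kDropDir / ("proc" + std::to_string(processId) + "_" +
                                std::to_string(flushCount++) + ".flog");
    fs::rename(tmp, dest);
}

// Loader process: the only thing that ever talks to the flog database.
void DrainDropDirectory()
{
    for (const auto& entry : fs::directory_iterator(kDropDir)) {
        if (entry.path().extension() != ".flog")
            continue;                    // skip .tmp files still being written
        std::vector<std::string> events;
        {
            std::ifstream in(entry.path());
            std::string line;
            while (std::getline(in, line))
                events.push_back(line);
        }                                // close the file before deleting it
        if (WriteEventsToFlogDb(events))
            fs::remove(entry.path());    // keep the file if the DB is down
    }
}
```

The nice property is that the only process that can get stuck talking to SQL Server is the loader, and any backlog just sits on disk until it recovers.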

Technique #2

We have another database exhibiting similar problems, though not quite as severely. Each process in a game cluster connects to a shared database called Serverdir and uses the DB to report its status back to operations tools and the “keep everything running” processes. This data is strictly temporary and probably doesn’t belong in a database at all, but Horrible Design Flaws That Are All My Fault aside, it’s just not that many queries and they’re all very simple selects and updates. This shouldn’t be a problem for server hardware as beefy as ours.

That argument doesn’t convince SQL Server, however. After a few days SQL Server pauses for a few minutes. The CPU goes to 0% and no queries return for the entire time it’s paused. Our code responds to that by closing things down because it can’t currently tell the difference between “Query takes over a minute” and “Crashed process.” At that point half the cluster shuts down.

We don’t have a great workaround for this one yet. We’ve been steadily reducing the load on the Serverdir database, but it doesn’t seem to take all that much load to make it happen. Our best bet is to make the code smarter and have it detect these situations: if it just sits tight for a few minutes, everything returns to normal without needing to restart anything. Fortunately it only happens a couple of times a week, so while it’s something we definitely need to fix before launch it isn’t impacting beta testers’ ability to play.
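
For what it’s worth, the “sit tight” idea sketches out to something like the following. The names and numbers (RunServerdirQuery, the five-minute deadline) are assumptions for illustration, not our shipping code; the point is just to treat a timeout as “wait and retry” rather than “the other side is dead.”

```cpp
// Sketch of the "sit tight" idea: instead of treating any query that exceeds
// the normal timeout as a dead connection, keep retrying against a much
// longer deadline so a multi-minute SQL Server stall doesn't take half the
// cluster down with it. Names here are assumptions, not our shipping code.
#include <chrono>
#include <thread>

using namespace std::chrono;

enum class QueryResult { Ok, Timeout, Error };

// Hypothetical wrapper around whatever layer issues the Serverdir query;
// stubbed here, but the real version would run the SQL with that timeout.
QueryResult RunServerdirQuery(const char* /*sql*/, seconds /*perAttempt*/)
{
    return QueryResult::Ok;
}

bool RunWithPatience(const char* sql)
{
    const auto deadline = steady_clock::now() + minutes(5); // outlast the stall
    while (steady_clock::now() < deadline) {
        switch (RunServerdirQuery(sql, seconds(30))) {
        case QueryResult::Ok:    return true;
        case QueryResult::Error: return false;  // real failure, give up
        case QueryResult::Timeout:
            // The database is probably just paused; wait and retry rather
            // than declaring the other side a crashed process.
            std::this_thread::sleep_for(seconds(10));
            break;
        }
    }
    return false; // still stalled after five minutes: now it's a real problem
}
```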

Making an MMO scale is a pain

None of the profiling tools we’re using at the SQL Server or OS levels are much help with either of these problems. Nothing tells us why SQL Server is refusing connections, or why it’s refusing to work on queries. Most database books and websites think that a slow query is one that takes longer than a minute or two, but in our world that’s a dead process and a disappointed customer.

We have made great strides in scalability since the first stress test, but no matter how many things you fix there is always one more waiting to bite you on the ass. *sigh* We’ll get it figured out, and apart from these DB troubles everything is staying up quite well at this point. We have 43 more days until the pre-order head start, so there’s still plenty of time to get through this round of problems. Then we break through into the infinite!

My fix for the flogger scale problem is now ready for a code review, so I’m going home to play Rock Band.

Now hiring for Operations

We are looking to staff up some more in the operations department in preparation for our launch.  If you’ve always wanted to work on an MMO and are a whiz at IT, one of these two openings may be for you:

I’m not actually the one doing the hiring, so reply through the ads if you’re interested.

PotBS Stress Test this weekend

We are running our second stress test this weekend, and so far it’s going quite well.  Fileplanet just opened it up to non-subscribers, so head on over and give the game a try!

Scripting in PotBS

The sweng-gamedev list is all a-flutter with a debate about the merits of scripting in games.  I wrote up a response that describes our experiences and figured I’d share it here too.

We had Lua integrated into the client and wrote much of our UI logic in it. We struggled with bugs in our glue layer, difficulty debugging, and major spikes in our frame times whenever the garbage collector ran. Of course the glue code was terrible to begin with and we were exporting script hooks for much of the game instead of a nice clean interface, so that didn’t help. After a while we started moving away from Lua and began implementing new UI in C++. We’ve now removed all the Lua from the game.

On the gameplay side we use a rich data-driven system that lets designers define an arbitrary list of “requirements” with which they can test almost any condition. When a trigger fires, an object is used, or a skill is activated, an arbitrary list of “results” is activated that is capable of modifying just about any state in the game.  The designers also have a few ways of maintaining persistent state on the characters depending on the circumstances.  This system is working pretty well for us and eliminates the need for any designer-written scripts.
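
If you squint, the shape of that system looks roughly like the sketch below. The class names are illustrative, not our actual ones; the point is that designers compose lists of requirements and results in data, and the code just walks those lists.

```cpp
// Rough sketch of the requirement/result shape described above. Designers
// reference concrete requirements and results in data files; no designer
// ever writes a script.
#include <memory>
#include <vector>

struct GameContext { /* who triggered the event, their target, and so on */ };

// A requirement tests one condition against the current game state.
struct Requirement {
    virtual ~Requirement() = default;
    virtual bool IsMet(const GameContext& ctx) const = 0;
};

// A result applies one change to the game state.
struct Result {
    virtual ~Result() = default;
    virtual void Apply(GameContext& ctx) const = 0;
};

// A trigger, usable object, or skill carries a list of each, built from data.
struct TriggeredAction {
    std::vector<std::unique_ptr<Requirement>> requirements;
    std::vector<std::unique_ptr<Result>> results;

    void Fire(GameContext& ctx) const
    {
        for (const auto& req : requirements)
            if (!req->IsMet(ctx))
                return;                 // any unmet requirement blocks the action
        for (const auto& res : results)
            res->Apply(ctx);            // otherwise apply every result
    }
};
```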

If I ever integrate scripting into an engine again, there are several things I’ll do differently to make it go more smoothly:

  1. No designer scripting.  If designers are writing scripts, You’re Doing It Wrong. Scripts are code, and need to be just as maintainable as all your other code.
  2. A much cleaner API layer between the C++ code and the script code.  Exporting the whole game to Lua was just dumb.
  3. A built-in debugger. Printf-style debugging is so incredibly painful when you’re used to having a rich source-level debugger.
  4. Built-in profiling. All calls across the native/script interface should be timed and memory consumption should be strictly monitored. (A rough sketch of the timing half follows this list.)
  5. Dynamic script loading. Partly it was stupid glue and partly it was just our poor use of the scripts, but the first time around we ended up loading all the scripts when the client booted and couldn’t reload most of them. This is one of the major advantages of scripting and we were missing out on it.
  6. Much more evaluation time. We know a bunch of things to look for the next time around including slow garbage collection, object lifecycle issues, memory corruption in the glue, testability of the scripts in isolation, etc.
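
As a concrete example of item 4, here’s a rough sketch of what timing every native/script boundary crossing could look like. ScriptProfiler is an assumption for illustration, not an existing API in our engine or in Lua; memory tracking would hang off the same scope object.

```cpp
// Sketch of item 4: an RAII timer wrapped around every call across the
// native/script boundary. ScriptProfiler is a hypothetical class, not an
// existing engine or Lua API.
#include <chrono>
#include <string>
#include <unordered_map>

struct CallStats { long long calls = 0; long long totalMicroseconds = 0; };

class ScriptProfiler {
public:
    // Construct one of these immediately before crossing into script;
    // the destructor records the elapsed time when the call returns.
    class Scope {
    public:
        Scope(ScriptProfiler& profiler, std::string name)
            : profiler_(profiler), name_(std::move(name)),
              start_(std::chrono::steady_clock::now()) {}
        ~Scope()
        {
            const auto elapsed = std::chrono::duration_cast<std::chrono::microseconds>(
                std::chrono::steady_clock::now() - start_).count();
            CallStats& stats = profiler_.stats_[name_];
            ++stats.calls;
            stats.totalMicroseconds += elapsed;
        }
    private:
        ScriptProfiler& profiler_;
        std::string name_;
        std::chrono::steady_clock::time_point start_;
    };

    const std::unordered_map<std::string, CallStats>& Stats() const { return stats_; }

private:
    std::unordered_map<std::string, CallStats> stats_;
};

// Usage at each boundary crossing (the name would come from the glue layer):
//   ScriptProfiler::Scope timer(profiler, "ui_inventory_refresh");
//   ... invoke the script function ...
```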

On the other hand, I think writing servers in a higher-level-than-C++ language like C# or Java makes a lot of sense and would save us tons of development time. It’s the dynamically typed language with no debugger that didn’t work well for us.