Archive for the ‘Engineering’ Category

Six degrees of fish

A demonstration of using a SparkFun 6DOF IMU to control models in a scene. The fish on the left uses just the pitch and roll as determined by the accelerometers. The fish on the right uses primarily the gyros and lets the accelerometers correct against bias drift.

The rendering is done in Ogre. The fish are example models that come with the engine.
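
The blend the gyro-driven fish uses is essentially a complementary filter: integrate the gyro rates each frame, then nudge the result a little toward the accelerometer’s gravity estimate so bias drift can’t accumulate. A minimal sketch of that idea (not the actual demo code; the names and the blend factor are invented):

```cpp
#include <cmath>

// Complementary filter sketch (not the original demo code).
// Angles are in radians, gyro rates in rad/s, accelerometer readings in g.
struct Attitude {
    double pitch = 0.0;
    double roll  = 0.0;
};

// Accelerometer-only estimate: treat the measured acceleration as gravity.
// This is what the left-hand fish uses directly.
Attitude attitudeFromAccel(double ax, double ay, double az) {
    Attitude a;
    a.pitch = std::atan2(-ax, std::sqrt(ay * ay + az * az));
    a.roll  = std::atan2(ay, az);
    return a;
}

// Gyro-primary estimate: integrate the rates each frame, then blend a small
// fraction of the accelerometer estimate back in so gyro bias cannot
// accumulate indefinitely.
void updateWithGyro(Attitude& est, double gyroPitchRate, double gyroRollRate,
                    double ax, double ay, double az, double dt) {
    const double kAccelWeight = 0.02;  // assumed blend factor, tuned by eye

    est.pitch += gyroPitchRate * dt;
    est.roll  += gyroRollRate  * dt;

    const Attitude accel = attitudeFromAccel(ax, ay, az);
    est.pitch = (1.0 - kAccelWeight) * est.pitch + kAccelWeight * accel.pitch;
    est.roll  = (1.0 - kAccelWeight) * est.roll  + kAccelWeight * accel.roll;
}
```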

The hard part of continuous deployment

Last week I posted about the sort of changes you would need to make to your deployment process to support updates multiple times a day. I said I would follow up with a description of the technical challenges involved in making your users not hate you if you actually did deploy that often.

The reasons your users will hate frequent deployments are:

  • Non-zero server downtime when a new build is released
  • Being forced to log out of their existing play session for a new build
  • Waiting for patches to download for every little change

I will describe solutions to each of these problems below.

Eliminating Lengthy Server Downtime

The database architecture in Pirates requires that every character in the database be loaded and processed whenever there is a schema change.  From the programmer’s point of view such changes are easy to make, so they occur frequently. Unfortunately this processing is why the Pirates servers take an hour to upgrade. I don’t know how common this kind of processing is in other games, but I know we weren’t the only ones who had to deal with it.

The need to process character data stems from the fact that Pirates stores persistent character data in a SQL database, but the actual data is in opaque blobs. The blobs contain just the field data for each object and do not include any kind of version information, so the whole system can only handle one version of each class at a time.

A better way to go would be to build your persistence layer so that it can handle any number of historical versions of player data. Add a schema version to each blob and register each new schema as it is encountered. When a server needs to read an object it can perform the schema upgrades on the fly and avoid the long, slow upgrade of every object at patch time. If necessary you could even have a process that crawls the database, upgrading inactive characters in the background so that they are ready to go the next time they log in.
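
Here is a rough sketch of what that persistence layer could look like, with invented names (this is not Flying Lab’s actual code): every blob carries the schema version it was written with, and the loader walks a chain of registered upgrade functions until the object reaches the current version.

```cpp
#include <cstdint>
#include <functional>
#include <map>
#include <stdexcept>
#include <string>
#include <utility>
#include <vector>

// Hypothetical sketch: version-tagged blobs upgraded lazily at load time.
using Blob      = std::vector<uint8_t>;
using UpgradeFn = std::function<Blob(const Blob&)>;  // converts version N -> N+1

class SchemaRegistry {
public:
    // Register the conversion from `fromVersion` to `fromVersion + 1`
    // whenever a new schema for `className` ships.
    void registerUpgrade(const std::string& className, uint32_t fromVersion,
                         UpgradeFn fn) {
        upgrades_[{className, fromVersion}] = std::move(fn);
    }

    // Bring a stored blob up to `targetVersion`, one step at a time, when a
    // server loads the object. No offline sweep of the whole database needed.
    Blob upgrade(const std::string& className, uint32_t storedVersion,
                 uint32_t targetVersion, Blob blob) const {
        for (uint32_t v = storedVersion; v < targetVersion; ++v) {
            const auto it = upgrades_.find({className, v});
            if (it == upgrades_.end())
                throw std::runtime_error("no upgrade path for " + className);
            blob = it->second(blob);
        }
        return blob;
    }

private:
    std::map<std::pair<std::string, uint32_t>, UpgradeFn> upgrades_;
};
```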

Supporting multiple concurrent versions

Database updates are the longest section of server downtime at patch time, but they aren’t the only one.  For Pirates (and most MMOs) each world instance must be taken offline so that the new version can be installed on the server hardware and then launched. This adds some downtime for the upgrade, but most painfully, it also disconnects all active players. A better method would be to build your server clusters in such a way that they tolerate multiple simultaneous versions of the code.

It turns out that Guild Wars already supports this.  One of the comments from the last post describes a bit about how this works:

Once the new code was on all the servers in the datacenters, we flipped a switch that notified the live servers that any new ‘instances’ should be started using the latest version of the game code. At this point, any users running older builds of the client got told that they needed to update, and attempts to load into a new instance would be rejected until they updated.
To summarize how the live update worked on the server, I believe we atomically loaded the new game content onto the servers, and then loaded the newest version of the game code into the server processes alongside the older version, because each build was a single loadable DLL. That let us keep old instances alive alongside new instances on the same hardware, and isolated us from some of the hairy problems you’d get from multiple client versions running in the same world.

For players in the PvE parts of the game this method apparently worked quite well. However PvP players could only play with each other when everyone was running the same version of the game, so this caused problems with PvP events. It isn’t perfect, but this is a huge step in the right direction.
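
To make the idea from that comment concrete, here is a rough sketch of keeping several builds of game code loaded in one server process. I’m using POSIX dlopen and invented symbol names; Guild Wars apparently did the equivalent on Windows with one loadable DLL per build.

```cpp
#include <dlfcn.h>

#include <map>
#include <stdexcept>
#include <string>

// Invented sketch: keep several builds of the game code loaded at once.
// Each build ships as one shared library exporting a known factory symbol.
struct GameModule {
    void* handle = nullptr;
    void* (*createInstance)(const char* mapName) = nullptr;
};

class ModuleCache {
public:
    // Load (or reuse) the code for a specific build number. New instances are
    // created with the newest build while old instances keep running against
    // the module they started with.
    GameModule& forVersion(int build) {
        auto it = modules_.find(build);
        if (it != modules_.end()) return it->second;

        const std::string path = "libgamecode." + std::to_string(build) + ".so";
        void* handle = dlopen(path.c_str(), RTLD_NOW | RTLD_LOCAL);
        if (!handle) throw std::runtime_error(dlerror());

        GameModule mod;
        mod.handle = handle;
        mod.createInstance = reinterpret_cast<void* (*)(const char*)>(
            dlsym(handle, "create_instance"));
        if (!mod.createInstance) {
            dlclose(handle);
            throw std::runtime_error("missing create_instance in " + path);
        }
        return modules_[build] = mod;
    }

private:
    std::map<int, GameModule> modules_;
};
```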

The Guild Wars approach worked for them, but it would be less suitable for Pirates, which has eight different types of processes making up its server clusters. However, with some changes to the Pirates Server Directory, multiple versions of those exes could easily co-exist on a single server machine. When a client connects to Pirates it queries the server directory process to find an IP address and port number to connect to. That query already takes the version number into account, and could be adapted to simply never return cluster servers from an older version.
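
A sketch of what that version-aware lookup might be, with invented names (this is not the actual Pirates Server Directory): the directory records the build each zone process is running and only hands out servers that match the client, so instances on older builds simply drain as players move on.

```cpp
#include <cstdint>
#include <optional>
#include <string>
#include <utility>
#include <vector>

// Invented sketch of a version-aware server directory lookup.
struct ServerEntry {
    std::string address;   // "ip:port" handed back to the client
    uint32_t    version;   // build number this process is running
    std::string zone;      // which zone or instance type it hosts
};

class ServerDirectory {
public:
    void registerServer(ServerEntry e) { servers_.push_back(std::move(e)); }

    // Only hand out servers running the same build as the client, so sessions
    // on the old build keep working while new sessions land on the new build.
    std::optional<ServerEntry> find(const std::string& zone,
                                    uint32_t clientVersion) const {
        for (const auto& s : servers_) {
            if (s.zone == zone && s.version == clientVersion)
                return s;
        }
        return std::nullopt;  // no match: tell the client to update
    }

private:
    std::vector<ServerEntry> servers_;
};
```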

Another approach that would eliminate the need for any client downtime for many patches is to make those systems work more like the web. The next time I architect an MMO (at Divide by Zero Games, actually) I intend to move most of what the traditional game client does outside of the typical persistent client-server connection entirely. Systems like mission interaction, guild management, in-game mail, and the like don’t require the same level of responsiveness as moving and attacking. If those systems use non-persistent HTTP connections as their transport, the same protocols can be re-used to support web, mobile, or social network front ends to the same data. For chat, a standardized protocol (Jabber maybe?) will let you use off-the-shelf servers and let chat move in and out of game easily. The more locked down these APIs are, the less likely you are to affect them with your small patches.
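
As a rough illustration (the endpoint URL and response format are entirely made up), here is what checking in-game mail over plain HTTP with libcurl might look like from the client. The exact same request could come from a website or a phone:

```cpp
#include <curl/curl.h>

#include <iostream>
#include <string>

// Hypothetical sketch: fetch a character's mailbox over plain HTTP instead of
// the persistent game connection. The URL and response format are made up.
static size_t appendToString(char* data, size_t size, size_t nmemb, void* out) {
    static_cast<std::string*>(out)->append(data, size * nmemb);
    return size * nmemb;
}

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();
    if (!curl) return 1;

    std::string body;
    curl_easy_setopt(curl, CURLOPT_URL,
                     "https://api.example-mmo.com/v1/characters/42/mail");
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, appendToString);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);

    const CURLcode res = curl_easy_perform(curl);
    if (res == CURLE_OK)
        std::cout << body << "\n";  // e.g. a JSON list of mail headers

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return res == CURLE_OK ? 0 : 1;
}
```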

Eliminating patch time

So you have eliminated all server downtime and even allowed outdated clients to stay online and play after a new version of your game is deployed. If players still have to download a small patch and then spend a minute or two applying it to your giant pack files, they are going to be annoyed. Well, it seems that Guild Wars did a great job here too.

Almost all of the patching in Guild Wars is built right into the client and can be delayed until right before it is actually needed. When you download Guild Wars it is less than 1MB to start.  The first time you run the game it downloads everything you need to launch the game and get into character creation. After character creation it downloads enough to load the first town.  When you leave the first town and go out into the wilderness it loads what it needs for that zone before you leave the loading screen, and so on.  All of this data is stored on the user’s machine, so going back into town is fast.

The natural result of this is that if a user has outdated data when they reach the loading screen for a zone, tiny patches for the updated files are downloaded before they finish zoning. The game never makes the user wait to patch data for sections of the world they aren’t anywhere near. When the client is updated, an up-to-date list of the files in the latest version is downloaded, and the client uses that list to request updated data as necessary.
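
A sketch of that manifest check (the file-to-zone mapping and hash scheme are my assumptions, not how Guild Wars actually stores anything): compare the latest manifest against what is on disk and only queue downloads for the zone the player is about to enter.

```cpp
#include <string>
#include <unordered_map>
#include <vector>

// Assumed manifest format: file path -> { content hash, owning zone }.
// Not Guild Wars' actual on-disk layout.
struct ManifestEntry {
    std::string hash;
    std::string zone;
};

using Manifest    = std::unordered_map<std::string, ManifestEntry>;  // latest, from server
using LocalHashes = std::unordered_map<std::string, std::string>;    // what is on disk

// Files that must be fetched before the loading screen for `zone` finishes.
std::vector<std::string> filesToPatch(const Manifest& latest,
                                      const LocalHashes& local,
                                      const std::string& zone) {
    std::vector<std::string> needed;
    for (const auto& [path, entry] : latest) {
        if (entry.zone != zone)
            continue;  // never make the player wait for far-away content
        const auto it = local.find(path);
        if (it == local.end() || it->second != entry.hash)
            needed.push_back(path);  // missing or stale: queue a tiny patch
    }
    return needed;
}
```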

The next logical step once you have this partial patching system in place is to enable both servers and clients to load new data on the fly without shutting down. If you are doing it right, many of your new builds are going to be data-only and include no code changes. Small changes of that sort could easily be downloaded in the background and then switched on via a broadcast from the servers.
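
A sketch of that switch-on-broadcast idea, with invented names: game systems read content through one accessor, and activating a pre-downloaded, data-only version is a single guarded pointer swap when the broadcast arrives.

```cpp
#include <memory>
#include <mutex>
#include <string>
#include <unordered_map>

// Invented sketch: data-only content that can be activated without a restart.
struct ContentTables {
    int version = 0;
    std::unordered_map<std::string, double> tuningValues;  // prices, rates, etc.
};

class ContentManager {
public:
    // Game systems always read through this accessor.
    std::shared_ptr<const ContentTables> current() const {
        std::lock_guard<std::mutex> lock(mutex_);
        return current_;
    }

    // Called when the servers broadcast that data-only version N is live and
    // the files for N have already been downloaded in the background.
    void activate(std::shared_ptr<const ContentTables> fresh) {
        std::lock_guard<std::mutex> lock(mutex_);
        current_ = std::move(fresh);
    }

private:
    mutable std::mutex mutex_;
    std::shared_ptr<const ContentTables> current_ =
        std::make_shared<ContentTables>();
};
```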

Fixing further technical problems

There are two remaining technical hurdles that are not directly visible to the players but still need to be solved if the latency between making a change and deploying it is to drop to under an hour. These two are strongly tied to the partial patching process described above: slow data file packing, and slow uploads to the datacenter patch servers.

The packed data files on Pirates are rebuilt from scratch every time a build is run. This is an artifact of my hacking in pack file support over a long weekend, and it could be solved with incremental pack file updates. Building the packs from scratch involves opening tens of thousands of files and compressing them into 66 pack files totaling 6GB, which takes 80 minutes. Usually a much smaller number of files has actually changed, so an incremental build would reduce both the file opens and the data packing work itself.
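
An incremental rebuild could be as simple as recording the state of every source file at pack time and only rewriting the pack files whose inputs changed. A sketch, with placeholder structures (timestamps here; content hashes would be more robust):

```cpp
#include <filesystem>
#include <string>
#include <unordered_map>
#include <unordered_set>

namespace fs = std::filesystem;

// Placeholder structures: which pack file each source asset belongs to, and
// the timestamp recorded for each source at the previous pack run.
using PackAssignment = std::unordered_map<std::string, std::string>;  // source -> pack file
using Timestamps     = std::unordered_map<std::string, fs::file_time_type>;

// Decide which pack files actually need rebuilding. Only the packs whose
// inputs changed (or are new) get rewritten; everything else is left alone.
std::unordered_set<std::string> packsToRebuild(const PackAssignment& assignment,
                                               const Timestamps& lastBuild) {
    std::unordered_set<std::string> dirty;
    for (const auto& [source, pack] : assignment) {
        const auto recorded = lastBuild.find(source);
        if (recorded == lastBuild.end() ||
            fs::last_write_time(source) != recorded->second) {
            dirty.insert(pack);
        }
    }
    return dirty;
}
```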

That same incremental process could be applied to the patch deployment process itself. Because of how Flying Lab’s process was arranged with SOE, each packed data file had to be uploaded again in its entirety if it contained any changes. That slowed the transfer to SOE considerably. I am not intimately familiar with the SOE patch servers themselves, but I suspect a similar inefficiency existed on their end when a build was deployed. This could also be eliminated with more metadata about exactly what had changed, and you will need that information for the partial patching above to work anyway.

So is it worth all this trouble?

These changes represent a “never going to happen” amount of work for an existing game. The work involved in eliminating these deployment issues is smaller for a game that is being built with them in mind from the start, but it still isn’t free. Is it worth the expense to allow your game to deploy in an hour instead of seven and a half hours?

I think it is worth it from a purely operational perspective, and here’s why: one of the first few monthly builds released for Pirates included a bug that caused players to lose a ship at random whenever they scuttled a ship. Their client would ask them to confirm that they wanted to scuttle whatever ship they clicked on, but when the request got to the server it would instead delete the first ship in their list. Though the fix was only a one-line code change, it took many hours to deploy. In the meantime quite a few players had deleted their favorite ships and all the cargo in them. We knew about the bug on the morning of patch day, but couldn’t get a new build out for 8-10 hours, which put us into prime time. It caused a lot of bad blood that could have been avoided if we had been able to deploy the fix faster. This is a particularly bad example, but this kind of thing happens all the time with live MMOs. (SWG had a bug go live where the command to launch fireworks would let players launch anything they liked into the air and then delete the object afterward. That included, but was not limited to, fireworks, monsters, buildings, and other players.)

There are plenty of other reasons to be able to patch very quickly, and I may go into them in a future post. I think it’s worth ensuring you are able to push new builds within an hour simply to be able to fix your major screw-ups as quickly as possible and save your players grief.

Continuous Deployment with Thick Clients

(Update: Part two is up now. It details the technical changes that are required to make this work.)

Earlier this week Eric Ries and Timothy Fitz posted on a technique called Continuous Deployment that they use at IMVU.  Timothy describes it like this:

The high level of our process is dead simple: Continuously integrate (commit early and often). On commit automatically run all tests. If the tests pass deploy to the cluster. If the deploy succeeds, repeat.

Those posts seem to be mostly about deploying new code to web servers. I thought it would be interesting to consider what it would take to follow a similar process for an MMO with a big installed client. I will do that in two parts with today’s part being the deployment process itself.  In a future post I will describe the architectural changes you would have to make to not have frequent updates annoy your players enough that they riot and burn down your datacenter (or go play someone else’s game instead.)

As a point of comparison, here’s how patches are typically developed and deployed for Pirates of the Burning Sea:

  1. The team makes the changes. That takes three weeks for monthly milestone builds with a lot of the integration happening in the last week. For a hotfix it usually takes a few hours to get to the root cause of the problem and then half an hour to make the change, build, and test on the developer’s machine.
  2. A build is run on the affected branch. A full build takes an hour and twenty minutes; incremental builds can be ten minutes if no major headers changed. The build, in the form that people on the dev team use, is copied to a shared server at this point.
  3.  Some quick automated tests are run to fail seriously broken builds as early as possible. These take 10 minutes.
  4. The build is packed up for deployment. This takes 80 minutes. The build is now ready for manual testing (since the testers use the packed retail files to be as close to a customer install as possible.)
  5. Slow automated tests run. These take 30 minutes.
  6. (concurrent with 5 if necessary) The testers run a suite of manual tests called BVTs. These take about half an hour. These tests run on each nightly build even if it isn’t going to be deployed to keep people from losing much time to bad builds. Most people wait for the BVTs to finish before upgrading to the previous night’s build.
  7. For milestone builds the testers spend at least two weeks testing all the new stuff. Builds are sent to SOE for localization of new content at the same time.
  8. A release test pass is run with a larger set of tests. These take about three hours.
  9. The build is uploaded to SOE for client patching and to the datacenter for server upgrades. This takes a few minutes for a hotfix or 6-8 hours for a month’s worth of changes.
  10. At this point the build is pushed to the test server. Milestone builds sit there for a couple weeks. Hot fixes might stay for a few hours or even go straight to live in an emergency.
  11. SOE pushes the patch to their patch servers without activating the patch for download. That takes between one and eight hours depending on the size of the patch.
  12. At the pre-determined downtime, the servers are taken offline and SOE is told to activate the patch. This usually happens at 1am PST and doesn’t take any time. Customers can begin downloading the patch at this point.
  13. Flying Lab operations people upgrade all the servers in parallel. This takes up to three hours for milestone builds, but more like an hour for hotfixes. They bring the servers up locked so that they can be tested before people come in.
  14. The customer service team tests each server to make sure it’s pretty much working and then operations unlocks the servers.  That takes about 20 minutes.
  15. If the build was being deployed to the test server, after some days of testing steps 11-14 will happen again for the live servers.

This may seem like a long, slow process, but it actually works pretty well. Pirates has had 11 “monthly” patches since it launched, and they have gone well. Not many other MMOs push content with that frequency. Some of the slow bits (like the upload to SOE and their push to their patchers) are the result of organizational handoffs that would take quite a bit of work to change. Flying Lab also spends a fair amount of time testing manually over and above the automated tests. Those manual tests have kept thousands of bugs from going live, and as far as I’m concerned they are an excellent use of resources. I am not trying to bash Flying Lab in any way, just to provide a baseline for what an MMO deployment process looks like. I suspect many other games have a similar sequence of events going into each patch, and I would love to hear confirmation or rebuttal from people who have experience on other games.

Assuming minimum time for a patch, these are the times we are left with (in hours and minutes):

Pirates Deployment Time

0:10 Run incremental build
0:10 Run quick automated tests
1:20 Pack the build
0:30 Run quick manual tests/slow automated tests
3:00 Run release pass smoke tests
0:05 Upload build for patching
1:00 Build is pushed to patchers
1:00 Servers are upgraded
0:20 A final quick test is made on each server
7:35 Total time to deploy


If it takes seven and a half hours to deploy a new build, you obviously aren’t going to get more than one of them out in an eight-hour work day. Let’s forget for a moment that IMVU is able to do this in 15 minutes and pretend that our target is an hour. For now let’s assume that we will spend 10 minutes on building, 10 minutes on automated testing, 30 minutes on manual testing, and the last 10 minutes actually deploying the build. We will also assume that this is the time to deploy to the test server, and that this is slower than the actual push from test to live.

Without going into detail I am going to assume three architectural changes that make these targets much more likely. First, the way that the game uses packed files of assets will need to change to allow trivial incremental updates to every file in the game without any 80-minute pack process; I will assume that pack time can essentially go away and become a minor part of deployment. Second, unlike Pirates, our theoretical continuously deployed MMO will not require every object in the database to be touched when a new build is deployed, so the hour spent upgrading databases on the servers is reduced to about five minutes of file copying. Third, almost all of the patching process moves into the client itself and out of the launcher, which eliminates both the transfer of bits to Southern California and the push of pre-built patches out to the patcher clusters around the world. I will go into each of these assumptions in my next post, but let’s just pretend for now that they appear magically and reduce our time accordingly.

The amount of time spent on automated testing in this deployment process is 40 minutes. Fortunately that work is relatively easy to spread across multiple machines to reduce the total time involved. Assuming a test farm of at least five machines, we should be able to hit our ten-minute goal.

It is possible that we could have multiple testers working in parallel to perform the manual tests more quickly. There are 230 minutes of manual testing in this sequence, so doing that work in parallel would take roughly 11.5 manual testers, assuming perfect division of labor. There are two things wrong with that statement. The first is that perfect division of labor among 12 people is impossible, so the number is probably closer to 20. The second is that keeping a team of 12 on staff to do nothing but test new builds with tiny incremental changes in them is not worth nearly what it costs you in salaries. A much more likely outcome is that the amount of manual testing in the build process is drastically reduced. In addition, we can get our playerbase involved in the deployment testing if we swap things around a bit.

Trimmed Deployment Time

0:10 Run incremental build
0:10 Run automated tests on five machines
0:05 Upload only changed files to game servers and patch servers
0:05 Deploy build on test server
0:30 Manual smoke test on the public test server
Players can test at the same time.
1:00 Total time to deploy

In fact, if those 30 minutes of manual testing are your bottleneck and you can keep the pipeline full, you can push a fresh build every 30 minutes, or 16 times a day. Forget entirely about pushing to live for a moment and consider what kind of impact that would have on your test server. Your team could focus on fixing issues that players on the test server find while those players are still online. Assuming a small change that you can make in half an hour, it would take only an hour from the start of the work on that fix to when it is visible to players. That pace is fast enough that it would be possible to run experiments with tuning values, prices of items, or even algorithms.

Of course, for any of this to work the entire organization needs to be arranged around responding to player feedback multiple times per day. The real advantage of a rapid deployment system is to make your change -> test -> respond loop faster. If your plans are set a month out and your developers don’t have time to work on anything other than their assigned tasks for that entire time, there is not much point in pushing builds this quickly.

What do you think? Obviously, actually implementing this is more work than moving numbers around in a table. :)

Computer Clubs

I’m old. Well I’m not really that old in the grand scheme of things, I just feel that way when I hang around game developers.

I got my first real computer time in the fall of 1982 by hanging around after school and hacking some stuff in BASIC on the VIC-20 in the library. I was in 5th grade at the time, and was by far the most computer-obsessed person I knew. That Christmas my parents bought me a TI-99/4A and a little black and white TV to hook it up to. Technically the computer was a present for “the family”, but in practice it didn’t really work out that way. I was obsessed with the TI, and wrote all sorts of little games and other programs on it.

A few years later I spent all my accumulated allowance and paper route money on a Commodore 64. The C64 was a big upgrade, and included such advanced features as a floppy drive and a 300 baud modem. It also had the advantage of having a manufacturer that was still in the PC business. (TI abandoned its home computer line shortly after we got ours.) I spent quite a bit of time on the local BBSes, much to the delight of the other four people I shared a phone line with. Once I had a car, I began participating in one of the staples of the personal computer revolution: the computer club.

The local Commodore user’s group met once a month in one of the classrooms at the University of Northern Colorado in Greeley. It was a group of 20-30 people, many of whom came from the university or worked at the local Hewlett-Packard site. Computer enthusiasts were pretty few and far between in those days, and this was one place where we all fit in. Just about everybody in that room was a geeky, sci-fi reading, D&D playing male. Everybody could program to one degree or another, and more than a few knew their way around a soldering iron. Despite all the other things they had in common, it was those last two that brought this group together: everyone wanted to do cool stuff with computers.

I don’t know if that kind of community disappeared or if I just fell out of touch with it. There are millions of programmers these days, and they are usually specialized enough that they barely speak the same language let alone program in it. Being a “hardware guy” now means that you are comfortable plugging together prebuilt components and hunting down device drivers online. The inexorable march of progress has pretty much made the computer itself disappear as something people get excited about. Nobody cares enough about specific platforms these days to even have the sort of trash-talking arguments Commodore and Apple fans used to have with each other.

Does this sort of passionate niche club still exist? The Seattle Robotics Society might fall into that category. They spend their meetings talking about various components to build robots from and what sort of code to put on microcontrollers to make their robots do interesting things. The meetings feature lots of teenagers learning things about robots that they would never have any exposure to at school. There seems to be the same mix of Boeing engineers and college students that the computer clubs had.

What about others? Are there clubs for wearable computer enthusiasts? People who design programming languages? Quantum computing fans? Or are we nearing the end of the innovative period for computing and somewhere there are developing pockets of interest around nanotech or some other technology that doesn’t really exist yet?

It’s funny that I’m so nostalgic for something that was already going extinct by the time I got involved. My experience with the computer clubs was 10-15 years after the Homebrew Computer Club spawned Apple Computer and others. The people I met in the clubs were not entrepreneurs to be, they were more like fans and maybe the occasional shareware developer. It’s been twenty years, and I’ve never seen any of those names show up as leaders of industry.

What about you? Are any of you old enough to have belonged to a computer club?  :)

StackOverflow is amazing

A couple of weeks ago, Jeff Atwood and crew launched the public beta of Stack Overflow. Stack Overflow lets programmers ask questions and other programmers answer them. That’s it.  They just did it with a lot less suck than all the other programming community help sites: The ads are unobtrusive, there is no login requirement just to see an answer, answers are listed from best to worst instead of first to last, and anyone can edit a question or answer to make it better.

For instance, look at this question I asked about boost shared pointers. I have work-arounds for the problem in my code, but figured that there had to be a better way. Turns out that the boost experts on Stack Overflow knew exactly what I needed, and answered within a few hours.  Then some other people read the question, picked the best answer, and by voting it up made that answer appear prominently.  By the time I got back to check to see if my question had been answered, there was a clear winner. To make it even more prominent, I marked that answer as “accepted” and now it’s highlighted.

If you’re a programmer, I suggest you check it out. Next time you’re looking for the answer to a programming question, see if it’s been asked on Stack Overflow. If not, ask your question. I think you’ll be pleased with the results.

(Back in July I joined a company called Divide by Zero.  Now I’m singing the praises of a site called Stack Overflow.  Next thing you know I’ll be renaming my blog “Access violation”. :) )