The hard part of continuous deployment
Last week I posted about the sorts of changes you might need to make to your deployment process to support updates multiple times a day. I said I would follow up with a description of the technical challenges involved in making your users not hate you if you actually did deploy that often.
The reasons your users will hate frequent deployments are:
- Non-zero server downtime when a new build is released
- Being forced to log out of their existing play session for a new build
- Waiting for patches to download for every little change
I will describe solutions to each of these problems below.
Eliminating lengthy server downtime
The database architecture in Pirates requires that every character in the database be loaded and processed whenever there is a schema change. From the programmer’s point of view such changes are easy to make, so they occur frequently. Unfortunately this processing is why the Pirates servers take an hour to upgrade. I don’t know how common this kind of processing is in other games, but I know we weren’t the only ones who had to deal with it.
The need to process character data stems from the fact that Pirates stores persistent character data in a SQL database, but the actual data lives in opaque blobs. The blobs contain just the field data for each object and do not include any kind of version information, so the whole system can only handle one version of each class at a time.
A better way to go is to build your persistence layer so that it can handle any number of historical versions of player data. If you add a schema version to each blob and register each new schema as it is encountered, a server that needs to read an object can perform the schema upgrades on the fly and avoid the long, slow upgrade of every object at patch time. If necessary, you could even have a process that crawls the database upgrading inactive characters in the background so that they are ready to go the next time those players log in.
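As a concrete illustration, here is a minimal sketch in Python of a version-tagged blob with on-the-fly upgrades. The field names and version changes are made up, and a real implementation would live inside your persistence layer rather than in application code:

```python
import json

CURRENT_VERSION = 3

def _v1_to_v2(fields):
    # Hypothetical change: version 2 added bank storage.
    fields = dict(fields)
    fields.setdefault("bank_gold", 0)
    return fields

def _v2_to_v3(fields):
    # Hypothetical change: version 3 renamed ship_list to ships.
    fields = dict(fields)
    fields["ships"] = fields.pop("ship_list", [])
    return fields

UPGRADERS = {1: _v1_to_v2, 2: _v2_to_v3}  # version N -> upgrade to N+1

def save_character(fields):
    """Store the schema version in the blob right next to the field data."""
    return json.dumps({"version": CURRENT_VERSION, "data": fields}).encode()

def load_character(blob):
    """Upgrade an old blob one schema version at a time as it is read."""
    record = json.loads(blob)
    version, fields = record["version"], record["data"]
    while version < CURRENT_VERSION:
        fields = UPGRADERS[version](fields)
        version += 1
    return fields
```

The background crawler mentioned above is then just a loop over inactive rows that calls load_character followed by save_character.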
Supporting multiple concurrent versions
Database updates are the longest portion of server downtime at patch time, but they aren't the only one. For Pirates (and most MMOs), each world instance must be taken offline so that the new version can be installed on the server hardware and then launched. This adds some downtime for the upgrade but, most painfully, it also disconnects all active players. A better method would be to build your server clusters in such a way that they tolerate multiple simultaneous versions of the code.
It turns out that Guild Wars already supports this. One of the comments from the last post describes a bit about how this works:
Once the new code was on all the servers in the datacenters, we flipped a switch that notified the live servers that any new ‘instances’ should be started using the latest version of the game code. At this point, any users running older builds of the client got told that they needed to update, and attempts to load into a new instance would be rejected until they updated.
To summarize how the live update worked on the server, I believe we atomically loaded the new game content onto the servers, and then loaded the newest version of the game code into the server processes alongside the older version, because each build was a single loadable DLL. That let us keep old instances alive alongside new instances on the same hardware, and isolated us from some of the hairy problems you’d get from multiple client versions running in the same world.
For players in the PvE parts of the game this method apparently worked quite well. However, PvP players could only play with each other when everyone was running the same version of the game, which caused problems with PvP events. It isn't perfect, but it is a huge step in the right direction.
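To make the idea concrete, here is a rough sketch of an instance host that keeps several builds loaded at once, starts new instances on the latest build, and lets old instances run out their lifetime on the code they started with. The class names are invented; in Guild Wars each build was a loadable DLL rather than anything like this Python object:

```python
class GameCode:
    """Stand-in for one loaded build of the game code (a DLL in Guild Wars' case)."""
    def __init__(self, build):
        self.build = build

class InstanceHost:
    def __init__(self):
        self.loaded_builds = {}  # build number -> loaded GameCode
        self.latest_build = None
        self.instances = []      # (zone, GameCode) pairs, each pinned to a build

    def load_build(self, build):
        """Load the new build next to the old ones; nothing gets unloaded yet."""
        self.loaded_builds[build] = GameCode(build)
        self.latest_build = build

    def start_instance(self, zone):
        """New instances always start on the latest build."""
        code = self.loaded_builds[self.latest_build]
        self.instances.append((zone, code))
        return code

    def retire_old_builds(self):
        """Unload builds that no longer have any live instances."""
        live = {code.build for _, code in self.instances}
        for build in list(self.loaded_builds):
            if build != self.latest_build and build not in live:
                del self.loaded_builds[build]
```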
The Guild Wars approach worked for them, but it would be less suitable for Pirates, which has eight different types of processes making up its server clusters. However, with some changes to the Pirates Server Directory, multiple versions of those executables could easily co-exist on a single server machine. When a client connects to Pirates it queries the server directory process for an IP address and port number to connect to. That query already takes the version number into account, and it could be adapted to simply never return cluster servers from an older version.
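A version-aware directory query might look something like the sketch below. The registration format and method names are hypothetical, not how the Pirates Server Directory is actually written; the point is only that the lookup can refuse to hand out endpoints from older clusters while existing connections to those clusters carry on untouched:

```python
from collections import defaultdict

class ServerDirectory:
    def __init__(self):
        self.endpoints = defaultdict(list)  # build number -> [(ip, port), ...]
        self.latest_build = None

    def register(self, build, ip, port):
        """Cluster processes register themselves along with the build they run."""
        self.endpoints[build].append((ip, port))
        if self.latest_build is None or build > self.latest_build:
            self.latest_build = build

    def find_server(self, client_build):
        """Never hand out servers from an older build. Clients already connected
        to an old cluster are unaffected; they simply keep their connection."""
        if client_build != self.latest_build:
            return None  # caller tells the client to patch first
        servers = self.endpoints[self.latest_build]
        return servers[0] if servers else None  # a real directory would load-balance
```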
Another approach, one that would eliminate the need for any client downtime for many patches, is to make those systems work more like the web. The next time I architect an MMO (at Divide by Zero Games, actually) I intend to move most of what the traditional game client does outside of the typical persistent client-server connection entirely. Systems like mission interaction, guild management, in-game mail, and the like don't require the same level of responsiveness as moving and attacking. If those systems use non-persistent HTTP connections as their transport, the same protocols can be re-used to support web, mobile, or social network front ends to the same data. For chat, a standardized protocol (Jabber, maybe?) will let you use off-the-shelf servers and let chat move in and out of game easily. The more locked down these APIs are, the less likely you are to affect them with your small patches.
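As a sketch of what moving one of those systems onto HTTP could look like, here is a tiny in-game mail service written with Flask (an arbitrary choice; the routes, payloads, and storage are all invented for illustration):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
MAILBOXES = {}  # character id -> list of messages (stand-in for real storage)

@app.route("/characters/<char_id>/mail", methods=["GET"])
def read_mail(char_id):
    return jsonify(MAILBOXES.get(char_id, []))

@app.route("/characters/<char_id>/mail", methods=["POST"])
def send_mail(char_id):
    message = request.get_json()
    MAILBOXES.setdefault(char_id, []).append(message)
    return jsonify({"status": "sent"}), 201

# The same endpoints can serve the game client, a web site, or a mobile app,
# and they can be redeployed independently of the real-time game servers.
```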
Eliminating patch time
So you have eliminated all server downtime and even allowed outdated clients to stay online and play after a new version of your game is deployed. If players have to download a small patch and then spend a minute or two applying it to your giant pack files, they are still going to be annoyed. Well, it seems that Guild Wars did a great job here too.
Almost all of the patching in Guild Wars is built right into the client and can be delayed until right before it is actually needed. When you download Guild Wars it is less than 1MB to start. The first time you run the game it downloads everything you need to launch the game and get into character creation. After character creation it downloads enough to load the first town. When you leave the first town and go out into the wilderness it loads what it needs for that zone before you leave the loading screen, and so on. All of this data is stored on the user’s machine, so going back into town is fast.
The natural result of this is that if a user has outdated data when they reach the loading screen for a zone, tiny patches for the updated files are downloaded before they finish zoning. The game never makes the user wait to patch data for sections of the world they aren't anywhere near. When a new build goes live, the client downloads an up-to-date list of the files in the latest version and uses it to request updated data as necessary.
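Here is a sketch of what that manifest-driven check might look like at zone load time. The manifest format, the file_hash helper, and fetch_file are assumptions, not Guild Wars' actual implementation:

```python
import hashlib, os

def file_hash(path):
    with open(path, "rb") as f:
        return hashlib.sha1(f.read()).hexdigest()

def files_to_patch(zone_files, manifest, data_dir):
    """zone_files: the files the next zone needs.
    manifest: file name -> hash of the latest version of that file."""
    stale = []
    for name in zone_files:
        local_path = os.path.join(data_dir, name)
        if not os.path.exists(local_path) or file_hash(local_path) != manifest[name]:
            stale.append(name)
    return stale

# During the loading screen, something like:
#   for name in files_to_patch(zone_files, manifest, data_dir):
#       fetch_file(name)   # download just the out-of-date pieces
```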
The next logical step once you have this partial patching system in place is to enable both servers and clients to load new data on the fly without shutting down. If you are doing it right, many of your new builds are going to be data-only and include no code changes. Small changes of that sort could easily be downloaded in the background and then switched on via a broadcast from the servers.
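A data-only hot swap could be as simple as downloading the new data in the background and then atomically switching which version readers see when the broadcast arrives. This sketch and its DataStore class are purely illustrative:

```python
import threading

class DataStore:
    """Holds the game's static data tables and lets a broadcast swap in a
    new data-only build without restarting the process."""
    def __init__(self, version, tables):
        self._lock = threading.Lock()
        self._version = version
        self._tables = tables              # e.g. contents of the loaded pack files

    def activate(self, new_version, new_tables):
        """Called when the 'switch to data build N' broadcast arrives.
        new_tables has already been downloaded in the background."""
        with self._lock:
            self._version = new_version
            self._tables = new_tables      # atomic from the readers' point of view

    def lookup(self, table, key):
        with self._lock:
            return self._tables[table].get(key)
```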
Fixing further technical problems
There are two remaining technical hurdles that are not directly visible to the players but still need to be solved if the latency between making a change and deploying it is to drop to under an hour. These two are strongly tied to the partial patching process described above: slow data file packing, and slow uploads to the datacenter patch servers.
The packed data files on Pirates are rebuilt from scratch every time a build is run. This is an artifact of my hacking in pack file support over a long weekend, and it could be solved with incremental pack file updates. Building them from scratch involves opening tens of thousands of files and compressing them into 66 pack files totaling 6GB, which takes 80 minutes. Usually a much smaller number of files has actually changed, so an incremental build would reduce both the file opens and the data packing work itself.
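An incremental build boils down to remembering what each source file looked like last time and only rebuilding the pack files that are affected. This sketch invents its own state file, pack_for mapping, and repack_one step:

```python
import hashlib, json, os

def hash_file(path):
    with open(path, "rb") as f:
        return hashlib.sha1(f.read()).hexdigest()

def pack_for(path):
    # Placeholder: decide which pack file a given source file belongs to.
    return os.path.basename(os.path.dirname(path)) + ".pack"

def repack_one(pack_name):
    # Placeholder for the real compress-and-write step for one pack file.
    print("rebuilding", pack_name)

def incremental_build(source_root, state_path):
    """Rebuild only the pack files whose source files changed since last build."""
    previous = {}
    if os.path.exists(state_path):
        with open(state_path) as f:
            previous = json.load(f)
    current, dirty_packs = {}, set()
    for root, _, names in os.walk(source_root):
        for name in names:
            path = os.path.join(root, name)
            digest = hash_file(path)
            current[path] = digest
            if previous.get(path) != digest:
                dirty_packs.add(pack_for(path))
    for pack_name in dirty_packs:
        repack_one(pack_name)
    with open(state_path, "w") as f:
        json.dump(current, f)
    return dirty_packs
```

The set of dirty packs it returns is exactly the "what changed" metadata the next paragraph is about.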
That same incremental approach could be applied to the patch deployment process itself. Because of how Flying Lab's process was arranged with SOE, each packed data file had to be uploaded again in its entirety if it had any changes, which slowed the transfer to SOE considerably. I am not intimately familiar with the SOE patch servers themselves, but I suspect a similar inefficiency existed on their end when a build was deployed. This could also be eliminated with more metadata about exactly what had changed, and you will need that information for the partial patching above to work anyway.
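The same hash-comparison trick covers the upload step: keep a manifest of what the patch servers already have and transfer only the packs that differ. The manifest format and the upload callback here are assumptions:

```python
import hashlib

def pack_hash(path):
    with open(path, "rb") as f:
        return hashlib.sha1(f.read()).hexdigest()

def upload_changed_packs(local_packs, remote_manifest, upload):
    """local_packs: pack name -> local path.
    remote_manifest: pack name -> hash the patch servers already have.
    upload: callback that actually transfers one file."""
    changed = []
    for name, path in local_packs.items():
        digest = pack_hash(path)
        if remote_manifest.get(name) != digest:
            upload(name, path)      # only ship the packs that actually changed
            changed.append(name)
    return changed
```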
So is it worth all this trouble?
These changes represent a “never going to happen” amount of work for an existing game. The work involved is smaller for a game that is built with these deployment issues in mind from the start, but it still isn’t free. Is it worth the expense to allow your game to deploy in an hour instead of seven and a half hours?
I think it is worth it from a purely operational perspective, and here’s why: one of the first few monthly builds released for Pirates included a bug that caused players to lose a ship at random whenever they scuttled a ship. Their client would ask them to confirm that they wanted to scuttle whatever ship they clicked on, but when the request got to the server it would instead delete the first ship in their list. Though the fix was only a one-line code change, it took many hours to deploy. In the meantime quite a few players had deleted their favorite ships and all the cargo in them. We knew about the bug on the morning of patch day, but couldn’t get a new build out for 8-10 hours, which put us into prime time. It caused a lot of bad blood that could have been avoided if we had been able to deploy the fix faster. This is a particularly bad example, but this kind of thing happens all the time with live MMOs. (SWG had a bug go live where the command to launch fireworks would let players launch anything they liked into the air and then delete the object afterward. That included, but was not limited to, fireworks, monsters, buildings, and other players.)
There are plenty of other reasons to be able to patch very quickly, and I may go into them in a future post. I think it’s worth ensuring you are able to push new builds within an hour simply to be able to fix your major screw-ups as quickly as possible and save your players grief.