Hard Takeoff – Devlog #0

About a year ago I started working on a board game (code-named “Calvinball” because at first I changed the rules often mid-game). That was an interesting process to go through and one I enjoyed. For the past couple of months I’ve had a new game bouncing around in my head that I’m going to try to make real. This time around I thought I would try to document the process a bit more.

This is the first in what will hopefully become a series of posts on Hard Takeoff, which is the working title for the new game. So far all I have are some vague ideas rattling around in my head:

  • Each player in the game plays a roughly human-equivalent AI.
  • A player wins the game by being the first AI to acquire the resources required to trigger an intelligence explosion.
  • Each player has a slightly different set of requirements to trigger that explosion. Each player also starts with a slightly different set of resources and abilities.
  • The game is about accumulating resources and capabilities as quickly as possible without being so obvious that the pesky humans take notice and try to stop the player.
  • The AIs in question are self-interested, but don’t particularly hate humans. At worst they are indifferent to the goals of humans. (That means they’re more like ELOPe than Skynet.)

In addition to these thematic and gameplay goals, there are also a couple of more practical limitations I’m going to try to impose on the game:

  • The game should be easily portable. I’m going to try to build it with only cards and no other pieces.
  • Game sessions should take about an hour.
  • This should not be a worker placement game because that’s what Calvinball is and I want to try something different.

And with that I’m going to go prototype some cards.  Here goes!

Tools for generating tabletop cards

I’ve been working on a board game for about a year now. The code name is Calvinball because I make so many rule changes in the middle of a game. It’s a worker placement and resource management game. This post is about my latest attempt to improve the tools I use to print the cards for that game.

I’m pretty comfortable in Visio, so I started out with a big Visio document with all the cards in it. This was annoying because I had a zillion instances of everything, and if I wanted to change the way one of the icons looked I would have to go retouch all the cards. So I did some scripting in Visio to automate some of that. Unfortunately, that made each element complex enough that Visio runs out of memory and starts corrupting things. It’s also not nearly procedural enough for my tastes, and it made me do a bunch of redundant dragging and clicking.

When I was trying to bring up a second card game (a coop wilderness exploration thing) I tried something completely different. Using Django and Python I generated the cards in HTML and then printed them from the browser. This kinda worked, but HTML is really not meant to have tight registration on its printed output so getting the front and back of cards to line up was constantly a problem. There were also strange scale and background color problems depending on what options I forgot to pick in the print dialog. So this approach wasn’t very satisfying either.

This weekend I started working on a new technique, and so far that’s going pretty well. I build card templates as SVG files in Inkscape. Then I load those with a Python script, process them a bit and dump out a card-specific SVG file. Then I use more Python to combine blocks of card files into pages. Then I turn those pages into PDFs and use PDFtk to make one big PDF with everything in it. At that point a PDF reader (I use FoxIt) can print everything in one batch.
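The orchestration at the end of that pipeline is thin enough to sketch. BuildPageCommands below is a hypothetical helper, not the actual script, but the inkscape and pdftk command lines it builds match the ones shown later in this post:

```python
# Illustrative sketch of the batch step. Each command list can be
# handed to subprocess.call() once Inkscape and PDFtk are on the path.
def BuildPageCommands( pageCount, outDir = "output" ):
	cmds = []
	for n in range( pageCount ):
		svg = "%s/page%d.svg" % ( outDir, n )
		pdf = "%s/page%d.pdf" % ( outDir, n )
		# Convert each page SVG to a PDF with Inkscape.
		cmds.append( [ "inkscape", "-f", svg, "-A", pdf ] )
	# Concatenate all the page PDFs into one big PDF with PDFtk.
	pdfs = [ "%s/page%d.pdf" % ( outDir, n ) for n in range( pageCount ) ]
	cmds.append( [ "pdftk" ] + pdfs + [ "cat", "output", "%s/allcards.pdf" % outDir ] )
	return cmds
```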

The card templates are pretty straightforward. Here is one of them:

All the information required to render the cards is in one big list of dictionaries. When I get the rest of the cards switched over I’ll make this representation more powerful:

	cards = [
		{
			"params" : {
				"cardtitle" : "Construction Zone",
				"carddescription" : "Players take one victory point when they build a building.",
			},
			"back" : "back_goal",
			"front" : "front_goal",
		},
		{
			"params" : {
				"cardtitle" : "Research Center",
				"cost" : "RRR",
			},
			"back" : "back_goal",
			"front" : "front_goal_donate",
		},
		{
			"params" : {
				"cardtitle" : "Breeding Stock",
				"carddescription" : "Players take one victory point when they opt to not gain a colonist and instead donate it to the galactic core.",
			},
			"back" : "back_goal",
			"front" : "front_goal",
		},
		{
			"params" : {
				"cardtitle" : "One for All",
				"carddescription" : "Players take one victory point when they make a donation.",
			},
			"back" : "back_goal",
			"front" : "front_goal_allforone",
		},
	]

The code to combine the two is too long to paste inline, but the whole Python script to do all of this is here. It walks through every entry in the cards list, loading the front and back templates for each card. Then it tries to find elements for each entry in the “params” dictionary and replace the text of those elements. You may have some difficulty getting this to run on any computer other than mine. Install lxml and svg_stack into Python, and put Inkscape and PDFtk in your path, and you should be most of the way there.

One tricky bit is that the node layout for flowing text is different from a single line of text. This function finds the node with the text in it in each case:

def FindNode( root, name ):
	el = root.findall( ".//n:text[@id='" + name + "']/n:tspan", xmlNamespaces)
	if( el ) :
		return el[0]

	el = root.findall( ".//n:flowRoot[@id='" + name + "']/n:flowPara", xmlNamespaces )
	if( el ):
		return el[0]

	return None
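To see the function in action, here is a self-contained toy version run against a made-up SVG fragment. It uses the stdlib xml.etree.ElementTree instead of lxml, but the findall() calls are identical:

```python
import xml.etree.ElementTree as ET

xmlNamespaces = { "n" : "http://www.w3.org/2000/svg" }

def FindNode( root, name ):
	# Single-line text lives in text/tspan nodes...
	el = root.findall( ".//n:text[@id='" + name + "']/n:tspan", xmlNamespaces )
	if el:
		return el[0]
	# ...while word-wrapped text lives in flowRoot/flowPara nodes.
	el = root.findall( ".//n:flowRoot[@id='" + name + "']/n:flowPara", xmlNamespaces )
	if el:
		return el[0]
	return None

# A made-up fragment showing both node layouts.
svg = """<svg xmlns="http://www.w3.org/2000/svg">
  <text id="cardtitle"><tspan>PLACEHOLDER</tspan></text>
  <flowRoot id="carddescription"><flowPara>PLACEHOLDER</flowPara></flowRoot>
</svg>"""
root = ET.fromstring( svg )
FindNode( root, "cardtitle" ).text = "Research Center"
FindNode( root, "carddescription" ).text = "Players take one victory point."
```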

It’s also worth noting that flowRoot doesn’t seem to work anywhere but Inkscape. Apparently it was in the final spec for some version of SVG that nobody supports. I use it for descriptions on some cards where I need word wrapping.

The other kind of parameterization that’s currently supported is recoloring or hiding the resource icons to match the cost string on the card definition. That just finds paths under a group with a special ID and sets their style to something specific to that resource type:

resourceStyles = {
	"F" : "fill: #16CD22; stroke: #000000; stroke-width: 0.5;",
	"E" : "fill: #D5FF2D; stroke: #000000; stroke-width: 0.5;",
	"M" : "fill: #404040; stroke: #000000; stroke-width: 0.5;",
	"R" : "fill: #FF69F7; stroke: #000000; stroke-width: 0.5;",
	"C" : "fill: #0000F7; stroke: #000000; stroke-width: 0.5;",
}

hiddenResourceStyle = "opacity:0"

def SubstCost( root, cost ):
	for n in range( 0, 15 ):
		style = hiddenResourceStyle
		if( n < len( cost ) ):
			style = resourceStyles[ cost[n] ]

		el = root.findall( ".//n:g[@id='cost" + str(n) + "']/n:path", xmlNamespaces )
		for path in el:
			path.set( "style", style )
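Stripped of the SVG traversal, the per-icon decision is just an index check against the cost string. Here’s a hypothetical StylesForCost helper that restates that logic for a cost of “RRR”:

```python
# Restatement of the SubstCost decision without the SVG walking.
# The "R" style matches the table above; slots is the 15 icon groups.
resourceStyles = { "R" : "fill: #FF69F7; stroke: #000000; stroke-width: 0.5;" }
hiddenResourceStyle = "opacity:0"

def StylesForCost( cost, slots = 15 ):
	styles = []
	for n in range( slots ):
		if n < len( cost ):
			# This icon is part of the cost: color it for its resource.
			styles.append( resourceStyles[ cost[n] ] )
		else:
			# Unused icon slot: hide it.
			styles.append( hiddenResourceStyle )
	return styles

styles = StylesForCost( "RRR" )
```

The first three icons get the research style and the remaining twelve are hidden.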

The card SVG files are all dumped into a directory that contains all the script's output. For my game I write a card back and a card front for every card. Most of the cards in my game are double-sided, so having a shared back between multiple cards is actually kinda rare. You might be able to do something less spammy. Here are all the files generated by my initial test case:

From there the script builds "pages", where a page is a list of SVG filenames. With standard 2.5" x 3.5" cards you can fit 8 cards on an 8.5" x 11" page, so every 8 cards forms a page. The pages alternate between fronts and backs, with the fronts shown top to bottom and the backs shown bottom to top (to match the duplex feature on my printer.) If there isn't an even multiple of 8 cards, blank cards are added to the pages so that each page is exactly 7" x 10". See lines 140-170 in the script for the code that does this pagination.
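Here is a minimal sketch of that pagination scheme. Paginate is a hypothetical standalone helper rather than the actual code from the script, and the exact back-page mirroring depends on your printer’s duplex mode:

```python
# Sketch of the pagination: pad to a multiple of 8 with blanks, then
# emit alternating front and back pages, with each back page reversed
# so it lines up with its front page under duplex printing.
def Paginate( fronts, backs, perPage = 8, blank = "blank.svg" ):
	fronts = list( fronts )
	backs = list( backs )
	while len( fronts ) % perPage:
		fronts.append( blank )
		backs.append( blank )
	pages = []
	for i in range( 0, len( fronts ), perPage ):
		pages.append( fronts[ i : i + perPage ] )
		pages.append( list( reversed( backs[ i : i + perPage ] ) ) )
	return pages
```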

Using the lists of SVG filenames, the script uses a Python module called svg_stack to combine them into bigger SVG files. svg_stack has horizontal and vertical box layouts, but it's pretty easy to combine those two to get a 2x4 grid that fits 8 cards per page. Those cards are then written out to an SVG file that looks like this:

From there it's a quick trip through the Inkscape command line to converting that 7" x 10" SVG into a 7" x 10" PDF:
inkscape -f output\page0.svg -A output\page0.pdf

And once I have a bunch of those files I can cat them all together with pdftk:
pdftk output\*.pdf cat output output\allcards.pdf

Then you can print the whole thing in a PDF reader. In FoxIt I have to make sure to turn off scaling and auto-center on an 8.5"x11.0" page when I print:

Hopefully that brain dump is useful to somebody! I sure wish I'd had something like this to read yesterday morning.

Using heightmaps in the Terrain component in Unity 4.2

This weekend I spent some time playing around with procedural heightmap generation in Unity. Since the documentation for these APIs is pretty sparse, I wanted to post about all the traps I fell into and maybe save other people some time.

Here’s the code I ended up with. ChunkId.chunkSize is a Vector2 that contains {100, 100}. Perlin is a noise generator from this LibNoise port to Unity. This is a method on a custom component I added to each of my Terrain GameObjects.

public void UpdateHeight( Perlin noise, float height )
{
	Terrain terrain = gameObject.GetComponent<Terrain>();
	TerrainData terData = terrain.terrainData;
	Vector3 pos = gameObject.transform.localPosition;

	// SetHeights expects the array dimensioned [height, width] and
	// indexed [y, x] (see Trap #3 below).
	float[,] rHeight = new float[terData.heightmapHeight, terData.heightmapWidth];

	float xStart = (float)id.x * ChunkId.chunkSize.x;
	float yStart = (float)id.y * ChunkId.chunkSize.y;
	float xStep = ChunkId.chunkSize.x / (float)(terData.heightmapWidth - 1);
	float yStep = ChunkId.chunkSize.y / (float)(terData.heightmapHeight - 1);
	for( int x = 0; x < terData.heightmapWidth; x++ )
	{
		for( int y = 0; y < terData.heightmapHeight; y++ )
		{
			float xCurrent = xStart + (float)x * xStep;
			float yCurrent = yStart + (float)y * yStep;
			double v = noise.GetValue( xCurrent, yCurrent, 0 );
			// Remap the roughly -1..1 noise into the 0..1 range (see Trap #2).
			rHeight[ y, x ] = (float)(v + 1.0) * 0.5f;
		}
	}

	terData.SetHeights( 0, 0, rHeight );
}

When I ran the initial version of this code, I ended up with gaps all over the place wherever two terrain chunks met. These are all the things I had to learn to fix those gaps:

Trap #1: Heightmap data is duplicated on the edges

This should have been obvious. It was also an easy fix. That's why there is a -1 in this code:

float xStep = ChunkId.chunkSize.x / (float)(terData.heightmapWidth - 1 );
float yStep = ChunkId.chunkSize.y / (float)(terData.heightmapHeight - 1);

Trap #2: Height values must be in the range 0 to 1

The values you put into your heightmap aren't actually "height" values even though they're floats. They are the parameter that lets the terrain system scale the height between 0 and the "Terrain Height" parameter in the editor. Because the noise generator produces values roughly in the range -1 to 1, I had to scale them like this:

rHeight[ y, x ] = (float)(v + 1.0) * 0.5f;

Values above 1 or below 0 are clamped to 1 or 0 respectively so if you pass them in you will end up with some flattened sections of terrain.

Trap #3: Height arrays are Column-Major

You might have noticed something else that's funny about the line of code I just listed:

rHeight[ y, x ] = (float)(v + 1.0) * 0.5f;

For most of the time I was fighting with this I had the x and y reversed because I assumed that the 2D array expected by SetHeights was row-major just like everything else in the universe. This is the biggest documentation hole I ran into and what cost me the most time.

Trap #4: The "top" neighbor is in the positive Z direction

In order to make the LODs and normals on the transitions between terrain chunks work, Unity keeps track of which terrains border which other terrains. You tell the engine about these relationships via the SetNeighbors call. For no particularly good reason, I was expecting that left was -x, right was +x, top was -z, and bottom was +z. It turns out that I had two of those backwards:

  • Left: -x
  • Right: +x
  • Top: +z
  • Bottom: -z
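For reference, SetNeighbors takes its arguments in left, top, right, bottom order, so with the mapping above the hookup looks something like this. GetChunk is a hypothetical lookup from chunk coordinates to a Terrain; only the argument order comes from the Unity API:

```csharp
// SetNeighbors( left, top, right, bottom ): "top" is the chunk at +z.
// GetChunk is a hypothetical helper mapping chunk coords to a Terrain.
terrain.SetNeighbors(
	GetChunk( id.x - 1, id.y ),     // left:   -x
	GetChunk( id.x, id.y + 1 ),     // top:    +z
	GetChunk( id.x + 1, id.y ),     // right:  +x
	GetChunk( id.x, id.y - 1 ) );   // bottom: -z
```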

7 kinds of Media?

I recently watched this video from AWE 2013 by Tomi Ahonen on why Augmented Reality is the 8th mass media. The part of his talk that didn’t quite sit right with me is his list of types of Mass Media. Here’s the list (including his new 8th type):

  1. Print (books, pamphlets, newspapers, magazines, etc.) from the late 15th century
  2. Recordings (gramophone records, magnetic tapes, cassettes, cartridges, CDs, DVDs) from the late 19th century
  3. Cinema from about 1900
  4. Radio from about 1910
  5. Television from about 1950
  6. Internet from about 1990
  7. Mobile phones from about 2000
  8. Augmented Reality from about now

I have two problems with this list. The first is that software is mostly absent. If I download an app from the app store or install a piece of software off a physical disk, isn’t that a kind of media? Those are arguably covered by “Internet” and “Mobile”, but they’re both basically the same thing and both certainly qualify as “mass”.

My second problem is that the list seems to be an odd mix of content types and distribution mechanisms. Print gets one entry despite having a zillion forms. Audio and video both get two entries even though they’re both distributed in significant ways over the Internet. And Augmented Reality isn’t really either one, it’s sort of a kind of software that might not even be distributed.

I would use a different list:

  1. Print – The written word, including digital words like the ones you are reading right now. This includes still photography and all the flat kinds of art. – from the late 15th century
  2. Audio – Spoken words, music, and other kinds of sound regardless of distribution mechanism. – From about 1900
  3. Video – Moving pictures regardless of the distribution mechanism. – From about 1900
  4. Software – Pretty much anything where you are interacting with an automated system. This became a mass media with the personal computer revolution. – From about 1980

If you need to talk about how this media is transferred you could build a related list of distribution mechanisms:

  1. Physical – Somebody drops a hunk of dead tree on your doorstep or you buy a movie on physical media at a store and carry it home. – Since forever
  2. Radio – An analog or digital signal using radio waves. – Since about 1900
  3. Land Line – An analog or digital signal travelling down a wire or hunk of optical fiber. – Since the late 1800s
  4. Internet – This actually happens on top of land lines and radio, but it abstracts away all of that so well that it probably deserves to be its own distribution mechanism. – Since about 1995 (as a mass thing)
  5. Undistributed – Live performances or one-of-a-kind artifacts. The consumer has to physically go somewhere to experience things that are transferred this way. – Since forever

And if you care about the style of distribution there’s a third list:

  1. Broadcast – One producer, many consumers. The printing press started this, arguably. – Since the late 15th century
  2. Peer to Peer – Many producers, many consumers. For print this would include letter writing. – Since the invention of written language
  3. Many to One – Many producers, one consumer. This is used for things like the census, tax returns, and polls. – Since the invention of governments

Maybe it’s just my engineer-brain talking, but this seems like a much clearer way to express the various types of mass media. The thing is, I’m not sure it’s actually any more useful than that first list. The point of the first list seemed to be to make Internet companies feel good about themselves; later it was expanded to mobile companies, and now it is being expanded to augmented reality companies. Did anybody ever look at that list and gain any kind of insight?

What do you think? Is this sort of breakdown of media types useful?

Secret Innovation

One of the staples of near-future science fiction is organizations working in absolute secrecy to produce big game-changing innovation. I’ve been trying to come up with examples of this in the real world, but haven’t found any.

A few examples from science fiction: (These are kind of spoilers, but not very good ones.)

  • In Daniel Suarez’s Daemon a character named Matthew Sobol invents a new world order in the form of a not-quite-intelligent internet bot. Even the contractors working with Sobol don’t really know what they’re working on until Sobol dies right before the book begins, causing the Daemon to be unleashed.
  • In Ernest Cline’s Ready Player One James Halliday and his company go completely dark for several years and emerge with OASIS, a combination of hardware and software that created a globe-spanning, latency-free shared virtual world for the people of Earth to inhabit.
  • In Kill Decision (also) by Daniel Suarez, a shadowy government contractor builds autonomous drones and deploys them for months before anyone realizes what’s going on.
  • The society-ending technology from Directive 51 by John Barnes is a prime example. The book uses nanotech, extreme mental subversion, and idealist/zealot manipulation to hatch a worldwide society-ending event. (via Jake)
  • In Nexus by Ramez Naam somebody engineers nanobots that link people’s emotions. Then some other people develop software to run on Nexus that give them super-human abilities. All of this happens in secret.

(There are certainly more examples than these. Mention them in the comments and I’ll add them to the list.)

In reality it doesn’t work like this, and there are two reasons for that.  The first is that the secrecy is far from perfect and the world gets a gradually better picture of whatever the thing is as it approaches completion. The second is that in the real world every successful innovation is an improvement on a usually-less-successful innovation that came before it.

The most likely counter-example people will bring up is the iPhone. Didn’t it spring fully formed from the head of Steve Jobs on January 9, 2007?  Well, no. Not even remotely. Apple’s own Newton, the Palm Pilot, the HP 200LX, the Blackberry, and many more were clear forebears of the iPhone. Apple certainly drove the design of that sort of device further than anyone else had. They definitely improved it to the point that millions of people bought them as quickly as they could be manufactured. But they didn’t spring an entirely new kind of device on the world in a surprise announcement. The iPhone was basically a Palm Treo with the suck squeezed out.

Unfortunately as the iPhone demonstrates, “Big game-changing innovation” is not very easy to define. Let’s go with:

A new product or service that is so advanced that society doesn’t have a cultural niche in which to put it.

The Daemon, OASIS, and the swarms of killer drones from Kill Decision certainly fit this definition.  Are there any examples of products or services from the real world that do? If you can think of one, please leave a comment below and tell us about it.