3D Printing and The Humble Toaster

I have been thinking quite a bit about 3D printing lately. Maybe that’s because I’m now surrounded by 3D printed prototypes at work. Maybe it’s because the news in the tech world is full of stories about products and services related to 3D printing. All of this has led me to two conclusions:

  1. In the short to medium term 3D printing will be something that companies use to provide customized goods to consumers, not something that consumers use directly.
  2. 3D printing won’t take off for in-home manufacturing until a printer can build something like a toaster.

A 3D printer is an incredibly powerful tool for prototyping. In the hardware lab at the office we have a couple of Dimension printers, a laser cutter, a PCB mill, a Vinyl cutter, and a fairly complete set of power and hand tools. With those tools, a few McMaster-Carr and Digikey orders, and enough assembly time we can build a prototype of just about anything. A dizzying array of goods has come out of that lab and let us try out things in a few days to a week that would have taken us a few months to a year if we had tried to do the same thing ten years ago.

The problem is that the output of the printer requires additional off-the-shelf parts and significant assembly time to turn an ABS structure into something functional. This is where I think the toaster is a useful example.

The pop-up toaster is arguably the simplest appliance in your kitchen. And yet it is also filled with components that are beyond the reach of 3D printing today.  (You can learn more than you ever wanted to know about how a toaster works here.) Here are several challenges that toasters present for 3D printers:

  1. The whole device is heat-resistant. Toasters heat to about 300 degrees Fahrenheit, but the melting point of ABS plastic is only 221 degrees. Clearly printing the toaster body out of ABS isn’t going to work.
  2. The power cord of a toaster contains both conductive elements and insulating elements. There are also insulating elements scattered around inside the toaster. Extrusion printers can handle the insulating elements, but not the heat resistance. Binding printers (sintering or EBM) can handle the conductive elements, but only print one material at a time so they can’t combine the conductive elements with the insulating elements in one object.
  3. “A toaster’s heating element usually consists of Nichrome ribbon wound on mica strips.” Nichrome is actually used in extrusion printers to heat the plastic. I can’t find any reference to either it or Mica being printable, and that would definitely require multiple materials in a binding printer.

None of these problems are insurmountable. Several 3D printers can already print in multiple materials. They just all happen to be different kinds of extruded plastic. Eventually those printers will figure out how to include metal in their parts. Heat resistance is probably a bigger challenge given how the printers work, but in theory they could use materials with even higher melting points so the resulting products could handle 300 degrees without a problem. And the list of materials that can be printed is growing every year. Eventually these problems will be solved.

That brings me back to the first conclusion, however, because I don’t think those solutions are going to come from the low-end printers you can afford to put in your house. Figuring out how to print conductive and insulating materials in the same batch is a hard problem. Heat resistance is a hard problem (for extrusion printers). Printing new and exotic materials is a hard problem. These are the sorts of things that researchers are going to chew on for a while before expensive first-generation commercial implementations get built.

As MakerBot eats away at the low-end businesses of Objet and Dimension, these bigger companies will move further up-market and add some of these higher-end features. Eventually MakerBot and its ilk will get those features too, but that is going to take a long time, probably ten or more years. In the meantime, the only printers capable of printing “advanced” devices like a toaster will be those that cost tens or hundreds of thousands of dollars. Price alone will keep these printers out of the hands (and garages) of individuals for a long time.

The only people who will be able to afford the next generation of toaster-capable printers are companies that use that capital investment as part of their business. That includes hardware labs that use these as tools for prototyping, but also mass-customization companies that build iPhone covers or print MineCraft levels today. These companies can charge a premium for their products because of the customization so they can amortize the cost of the printers (and learning how to use them) over thousands of customers. They will also be the primary market for companies like Dimension and Objet, so those printer providers will have no reason to drop their prices to a level normal people can afford.

One last random thought before I end this meandering post:  The printers that we’re running in twenty years will bear about as much resemblance to today’s 3D printers as my Nexus 4 bears to my first computer (a TI-99/4A). They will be a combination of binding and extrusion, or something we can’t even imagine now. They will include many elements of a pick-and-place machine to include complex semiconductors in the resulting devices. And, once their utility reaches a certain point, their prices will be in free-fall. The future of 3D printing is bright. I just don’t think it’s going to happen overnight.

How did I do with my 2011 predictions?

A year ago I posted a list of predictions for 2011. Let’s see how I did:

  1. Wrong. Netflix did pick up more content. They are doing some interesting things with Arrested Development, for instance. However, I don’t think people will look back on all the pricing changes, Qwikster, and the general user annoyance as a year when Netflix “kicked ass”.
  2. Correct. 50 Mbit FiOS! Woot! We moved from Seattle to Kirkland into a house where the previous owners bullied Verizon into running the fiber. It’s completely awesome.
  3. Sort-of Correct. It’s not 60%, but Android is up to 43% market share. “Dizzying array” is the only way to describe the number of Android devices out there, though that’s not entirely a positive thing.
  4. Correct. Google and some of their partners announced a plan to improve the update situation. Hopefully that will work out.
  5. Correct. What new glasses? The Vuzix Star 1200s look cool, but $5k is a bit out of the consumer price range.
  6. Correct. This one was sort of a gimme since there’s no way to tell if this is really the start of anything. I’ve seen a bunch of Nissan Leafs around my neighborhood though.
  7. Wrong. I certainly haven’t heard of any such advance. Did I miss one?

4.5 out of 7 isn’t so bad. Some of my predictions were pretty soft-ball though, so I kind of cheated. :)

I think I’m going to skip writing my own 2012 predictions post this year. I would love to hear what you think is going to happen this year. Post them in the comments! (I’ll probably pick the best of them and put up a new post collecting them.)

Edit: @JZig points out that math is hard. 7 – 2.5 = 4.5

Figuring out a common time base

Using the locator beacon network from the simulator for an indoor navigation system that meets my own requirements means that a receiver must be able to determine its distance from multiple nodes without actually sending anything to those nodes. The GPS network accomplishes this by synchronizing all the satellites to a very precise clock. Once the transmitter and the receiver share a common clock, the receiver can easily compute its distance to the transmitter:

  1. Transmitter sends a signal at time X saying “It’s time X!”
  2. Receiver receives that signal at time Y
  3. Distance = ( Y – X ) × speed_of_light
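The arithmetic is trivial, but it’s worth being explicit that the elapsed time gets multiplied by the speed of light to produce a distance. A minimal sketch (the variable names are my own):

```javascript
// One-way ranging with a shared clock. Times in seconds, distance in meters.
const SPEED_OF_LIGHT = 299792458; // m/s

// x = the time the transmitter stamped the signal, y = the time the receiver
// heard it. The gap between them is the signal's flight time.
function oneWayDistance(x, y) {
  return (y - x) * SPEED_OF_LIGHT;
}
```

A signal that takes about 33 nanoseconds to arrive came from roughly 10 meters away, which gives a feel for how precise the shared clock has to be.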

So how do you build a common time base on an ad-hoc network of locator nodes run by a random assortment of people? The short answer is that you don’t. But if you assume a few things about the hardware involved, you can get by with an ad-hoc time base that still lets you compute distance without requiring any kind of central authority.

This assumes a few things about beacons and receivers in the network:

  • Each node (beacon or receiver) has a fixed (and known) amount of receiver lag. This is the time between when a signal hits the antenna and when it’s pushed to whatever internal system can tag it with the internal clock of the node. This is Recv(N) for node N.
  • Each node has a fixed (and known) amount of transmitter lag. This is the time between when a message is timestamped and sent, and when it actually leaves the transmitter’s antenna. This is Send(N) for node N.
  • None of the nodes are lying.

With these assumptions, the system is relatively straightforward. Any node N can compute the translation from its own time base Time(N, t) to some other node M’s time base Time(M, t) by pinging that node:

  1. Node N generates a random number ping_key
  2. Node N broadcasts a message containing ping_key and records ping_time = Time(N, t).
  3. Node M receives that broadcast message and records receipt_time = Time(M, t).
  4. Node M sends out a broadcast of its own with: NodeID(M), ping_key, Send(M), Recv(M), receipt_time, Time(M, reply)
  5. Node N receives the response and notices that ping_key matches its own ping_key.
  6. Node N computes its relative time base with M as follows:
    • Total_time = ( Time(N, reply_receipt) – ping_time ) – ( Time(M, reply) – receipt_time )
    • distance_time = ( Total_time – Send(N) – Recv(N) – Send(M) – Recv(M) ) / 2
    • distance = distance_time × speed_of_light
    • time_base_difference = Time(M, t) – Time(N, t) = receipt_time – ping_time – ( distance_time + Send(N) + Recv(M) )
    • time_base_difference2 = ( Time(M, reply) + distance_time + Send(M) + Recv(N) ) – Time(N, reply_receipt)
  7. Node N stores NodeID(M), time_base_difference, distance in its table of nearby beacons. (time_base_difference and time_base_difference2 should be equal assuming that all the lag numbers are right and the distance hasn’t changed. If they drift apart something is wrong.)
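The bookkeeping in step 6 is easy to get wrong, so here it is as code. This is my own illustrative translation of the steps above, not anything from the simulator; all timestamps are in seconds and the Send/Recv lag constants are assumed known for both nodes.

```javascript
// Node N's view of a completed ping exchange with node M.
const SPEED_OF_LIGHT = 299792458; // m/s

function processPingReply(pingTime, replyReceiptTime,   // stamped on N's clock
                          receiptTime, replyTime,       // stamped on M's clock
                          sendN, recvN, sendM, recvM) { // fixed, known lags
  // Round trip on N's clock, minus M's turnaround between receive and reply.
  const totalTime = (replyReceiptTime - pingTime) - (replyTime - receiptTime);
  // Strip the four fixed lags; what's left is two trips through the air.
  const distanceTime = (totalTime - sendN - recvN - sendM - recvM) / 2;
  const distance = distanceTime * SPEED_OF_LIGHT;
  // Offset of M's clock relative to N's, computed two independent ways.
  const timeBaseDifference =
    receiptTime - pingTime - (distanceTime + sendN + recvM);
  const timeBaseDifference2 =
    (replyTime + distanceTime + sendM + recvN) - replyReceiptTime;
  return { distance, timeBaseDifference, timeBaseDifference2 };
}
```

Feeding in a synthetic exchange where M’s clock runs five seconds ahead and the nodes sit ten meters apart recovers both numbers, and the two offset estimates agree.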

This mechanism enables each beacon to keep a constantly updated time base for every node in range via the same packets it is using to determine distance to those beacons. Beacons can then send out everything they know:

  • Their own location (estimated from GPS and refined via the algorithm in the simulator)
  • Time(beacon, broadcast)
  • For each beacon N in range:
    • Node ID of N
    • Time(N, broadcast)

Here comes the unfortunate part: In order to figure out its own time base relative to a beacon each receiver must ping at least one beacon. Once it has that beacon’s time base it can figure out every other beacon in range of that beacon and work out from there. Because clocks on nodes will drift apart over time this ping will need to be repeated every X minutes. Because the last pinged beacon will eventually be out of range, it should also be repeated whenever the receiver moves more than Y distance. This ping need not contain any identifying information about the receiver, so it shouldn’t have privacy implications, but it will reduce the scalability of the system from receiver_count = infinite to receiver_count = bandwidth / (ping_size * ping_frequency). As long as the pings are relatively small and infrequent that should not be an issue. Multiple receivers attached to the same clock (i.e. multiple receivers carried by the same person) could also share time base information and would not need separate pings.
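That scalability bound is easy to sanity-check with made-up numbers. The figures below are purely illustrative; the formula from the paragraph above is the point.

```javascript
// receiver_count = bandwidth / (ping_size * ping_frequency)
function maxReceivers(bandwidthBitsPerSec, pingSizeBits, pingsPerSec) {
  return Math.floor(bandwidthBitsPerSec / (pingSizeBits * pingsPerSec));
}
```

A 1 Mbit/s shared channel with 512-bit pings sent once a minute supports over a hundred thousand independently pinging receivers, which is why small, infrequent pings keep this from being a practical limit.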

What do you think? Will it work? Are predictable transmit and receive lag even realistic?

Simulating Locator Beacons

Recently I’ve been thinking a lot about how a locator system that satisfies my own requirements could be put together. My current approach is a system of beacons in fixed positions that communicate with each other and broadcast to receivers via radio. This is basically the same system described in Rainbow’s End.

Both receivers and beacons use transmit time to compute distance. Beacons use those distances (and the knowledge that they are all actually in fixed positions) to build an accurate mesh of their relative positions to each other. Those relative positions are fixed to absolute positions by including beacons with very precise known positions in the network.

I wanted to see if I could figure out how to actually find beacon positions relatively and then absolutely based on those few known beacons, so I built a simulator. The beacons use a mass and spring system to push and pull each other around into roughly the right positions. This has the advantage of working without any central authority computing beacon positions. If its neighbors are trustworthy, each beacon can “move” itself around until it finds a stable location. This simulation is in 2D, but there’s nothing preventing it from working just as well in 3D.
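To make the mass-and-spring idea concrete, here is a minimal sketch under my own assumptions about how such a simulator can work: each measured inter-beacon distance becomes a spring whose rest length is that distance, and beacons with unknown positions repeatedly take a small step along the net spring force until the mesh settles. None of these names come from the actual simulator.

```javascript
// beacons: [{ x, y, known }], springs: [{ a, b, restLength }] where a and b
// are indices into beacons and restLength is the measured distance.
function relax(beacons, springs, steps = 1000, gain = 0.1) {
  for (let s = 0; s < steps; s++) {
    for (const { a, b, restLength } of springs) {
      const dx = beacons[b].x - beacons[a].x;
      const dy = beacons[b].y - beacons[a].y;
      const dist = Math.hypot(dx, dy) || 1e-9; // avoid dividing by zero
      // Force proportional to stretch, directed along the spring.
      const f = gain * (dist - restLength);
      const fx = (f * dx) / dist;
      const fy = (f * dy) / dist;
      // Only beacons with unknown positions are allowed to move.
      if (!beacons[a].known) { beacons[a].x += fx; beacons[a].y += fy; }
      if (!beacons[b].known) { beacons[b].x -= fx; beacons[b].y -= fy; }
    }
  }
  return beacons;
}
```

With two known anchors and one unknown beacon whose measured distances put it at (5, 5), the relaxation settles on that position (or its mirror image, depending on the starting guess), which is the same ambiguity a real network resolves by having more than two anchors in range.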

The simulator is in JavaScript, so I’ve just embedded it below. You can find some instructions on how to use it if you scroll down. Comments appreciated!

Select a beacon type from the UI and click anywhere in the frame to add a beacon at that location. Or click Random to add a new beacon.

Units are in pixels, except for the mass and spring weight values which are in Foozles and Smurfs respectively.

You can also use these key equivalents:
k: select Known
u: select Unknown
g: select GPS
r: place a beacon of the selected type at a random location

7 Requirements for an Augmented Reality Positioning System

For me, a positioning system has a few requirements to be appropriate for widespread use in Rainbow’s End-style augmented reality:

  1. The system should scale to any number of mobile devices.
  2. The system should work indoors and outdoors. It should also work underground in places like subway stations.
  3. No one should be able to track the position of devices in the system.
  4. A mobile device should require a warm-up time of less than ten seconds.
  5. A mobile device should be able to determine its position on an ongoing basis with a frequency of at least 30Hz.
  6. A mobile device should be able to pinpoint its position down to 1cm or less.
  7. A mobile device should be able to operate with its positioning system active at all times and still maintain a reasonable battery life.

The closest current contender is GPS. Let’s see how it does on each of those fronts:

  1. So far so good. The GPS satellites don’t care how many receivers there are. GPS has weathered an explosion in the number of receivers over the past ten years and come through just fine.
  2. GPS fails this one. It works outdoors most of the time but indoors only if you are near an equator-facing window. It never works underground.
  3. Since GPS receivers only listen, this is generally true.  The 911-driven remote activation requirements allow some GPS devices to be trackable, but the tracking happens through the phone’s network connection not through the positioning system itself.
  4. GPS manufacturers claim warm-start times under ten seconds. According to TTFF measurements for many models from 2003, some models could warm-start in under ten seconds even then, and things have improved significantly since.
  5. GPS receivers typically send an NMEA position sentence once per second (or 1Hz). SparkFun lists a few GPS components in the 5-10Hz range. It’s not clear if this is a limitation of the system or if GPS has an inherent update frequency limitation, so we’ll assume that improved chipsets will get the frequency up to 30Hz.
  6. GPS completely fails this one. Under ideal circumstances and non-real-time post-processing GPS will get you down to about 2cm. Under normal circumstances the accuracy is more like 10-50m. GPS will tell you what street you’re on (if you assume you’re on a street) or what house you’re in, but it can’t tell you what room you’re in.
  7. Current GPS receivers still draw too much power to leave them on all the time, but Moore’s Law is changing that. They should be always-on in a few more years.

GPS fails in two very important requirements: where you can use it and how accurate it is.  Satellite-based replacements for GPS are likely to have the same failure indoors and underground. If it ever launches, Galileo is supposed to have a commercial encrypted system that provides accuracy down to 1cm, but it still won’t work indoors or underground. Relying on satellite-based positioning is a dead-end for augmented reality.

The other way that AR researchers are tracking position is with a camera-based system. No one has yet built such a system that operates out in the wild, but it would be theoretically possible. A visual tracking system would operate by comparing the stream of images from the camera against a database of images that is stored in the cloud. The exact form of that comparison is a matter of much research. Whether the comparison happens in the cloud or on the mobile device is also an open question. The general form of the system (large database in the cloud and a stream of images from the camera on the mobile device) is pretty stable though. One key assumption here is that the image database for a city-sized area is far too large to download to the mobile device. Let’s see how that does on our requirements:

  1. Because of the requirement that we either stream the camera images to the cloud or the local portion of the database from the cloud to the mobile device, each additional user puts incremental load on the system. The number of users in a local area will be limited by the mobile network bandwidth available to those users. The number of total users of the system will also be limited by the server capacity of the system’s provider, but that end of things can scale out more easily.
  2. This system would work anywhere the database covered. Indoor and underground environments would be fine. Areas where the camera could only see other people (i.e. crowds) would be a problem because the database wouldn’t have anything static to compare against.  If the camera depends on environmental light this system would perform poorly in dark areas (or at night.)
  3. If the camera’s images are streamed to the cloud the system’s provider would know exactly where each device was at all times. If the portion of the database related to a small area is streamed down to the device then the service provider will only be able to locate the device to within that small area. Either way, the provider will know where the user is to within a few hundred feet.
  4. If the camera images are streamed to the cloud, start-up times should be more or less instant. If the database is streamed down to the device it may take a few seconds to get things started, which is well within our tolerance.
  5. Current visual tracking systems have trouble reaching 30Hz, but Moore’s Law should take care of that eventually. For a system that streams the video to the cloud bandwidth can also affect update frequency. Once the link starts filling up with streams from other devices the update frequency goes down for every device.
  6. Visual tracking systems are quite accurate. Finding hard numbers is difficult, but there’s no reason to believe that a visual tracking system would be less accurate than 1cm.
  7. Visual tracking systems are power-hungry at the moment. They require fast cameras, fast network connections, fast CPUs on the mobile devices, and lots of memory. Because so much of the system is unknown, it’s hard to pin down numbers, but I would estimate that we need 100x power reduction before leaving this system on all the time is realistic. That will take Moore’s Law about ten years to accomplish.

If we can solve the low-light and power issues, a visual tracking system would certainly work for a small number of users. Solving the bandwidth constraint for a system that much of the population is using is a more daunting issue. All that bandwidth also makes the system expensive to operate, a cost that will be passed on to end users as either usage fees or advertising. Building a workable, generally available visual tracking system is not an impossible problem, but it’s certainly a difficult one.

Personally, I’m not satisfied with either of these systems. I have thoughts on how to build a better one, but I’ll save those for a future post. What do you think? Am I missing any major requirements? Are any of mine unnecessary? Am I representing GPS or the imagined visual tracking system unfairly? Let me know in the comments!