Using the locator beacon network from the simulator for an indoor navigation system that follows my own requirements means that a receiver must be able to determine its distance from multiple nodes without actually sending anything to those nodes. The GPS network accomplishes this by synchronizing all the satellites on a very precise clock. Once the transmitter and the receiver have a clock in common, the receiver can easily compute its distance to the transmitter:
- Transmitter sends a signal at time X saying “It’s time X!”
- Receiver receives that signal at time Y
- Distance = ( Y – X ) * speed_of_light
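As a quick sketch with made-up timestamps, the shared-clock distance computation is just the elapsed time multiplied by the speed of light:

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def one_way_distance(sent_at: float, received_at: float) -> float:
    """Distance from transmitter to receiver, assuming both share one clock.

    Times are in seconds; the timestamps used below are invented for
    illustration."""
    return (received_at - sent_at) * SPEED_OF_LIGHT

# A signal that took 100 nanoseconds to arrive traveled about 30 meters.
print(one_way_distance(0.0, 100e-9))  # ≈ 29.98 m
```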
So how do you build a common time base on an ad-hoc network of locator nodes run by a random assortment of people? The short answer is that you don’t. But if you assume a few things about the hardware involved, you can come up with an ad-hoc time base that still lets you compute distance without requiring any kind of central authority.
This assumes a few things about beacons and receivers in the network:
- Each node (beacon or receiver) has a fixed (and known) amount of receiver lag. This is the time between when a signal hits the antenna and when it’s pushed to whatever internal system can tag it with the internal clock of the node. This is Recv(N) for node N.
- Each node has a fixed (and known) amount of transmitter lag. This is the time between when a message is timestamped for sending and when it actually leaves the transmitter’s antenna. This is Send(N) for node N.
- None of the nodes are lying.
With these assumptions, the system is relatively straightforward. Any node N can compute the translation from its own time base Time(N, t) to some other node’s time base Time(M, t) by pinging that node:
- Node N generates a random number ping_key
- Node N broadcasts a message containing ping_key and records ping_time = Time(N, t).
- Node M receives that broadcast message and records receipt_time = Time(M, t).
- Node M sends out a broadcast of its own with: NodeID(M), ping_key, Send(M), Recv(M), receipt_time, Time(M, reply)
- Node N receives the response and notices that ping_key matches its own ping_key.
- Node N computes its relative time base with M as follows:
- Total_time = Time(N, reply_receipt) – Time(N, transmission) – ( Time(M, reply) – receipt_time )
- distance_time = (Total_time – Send(N) – Recv(N) – Send(M) – Recv(M) )/2
- distance = distance_time * speed_of_light
- time_base_difference = Time(M, t) – Time(N, t) = Time(M, receipt) – Time(N, transmission) – (distance_time + Send(N) + Recv(M))
- time_base_difference2 = (Time(M, reply) + distance_time + Send(M) + Recv(N) ) – Time(N, reply_receipt)
- Node N stores NodeID(M), time_base_difference, distance in its table of nearby beacons. (time_base_difference and time_base_difference2 should be equal assuming that all the lag numbers are right and the distance hasn’t changed. If they drift apart something is wrong.)
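Node N’s side of that exchange can be sketched in a few lines of code. The function and argument names here are my own invention, and the timestamps in the usage example are made up, but each line follows one of the formulas listed above:

```python
from dataclasses import dataclass

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

@dataclass
class PingResult:
    distance: float               # meters to node M
    time_base_difference: float   # Time(M) - Time(N), from the outbound leg
    time_base_difference2: float  # the same offset, from the reply leg

def process_reply(ping_time, reply_receipt_time, send_n, recv_n,
                  receipt_time, reply_time, send_m, recv_m):
    """Node N's math after receiving M's reply. All times are in seconds."""
    # Round-trip time as seen by N, minus M's turnaround time
    total_time = (reply_receipt_time - ping_time) - (reply_time - receipt_time)
    # One-way flight time, after removing both nodes' transmit/receive lags
    distance_time = (total_time - send_n - recv_n - send_m - recv_m) / 2
    distance = distance_time * SPEED_OF_LIGHT
    # Clock offset, computed independently from each leg of the exchange
    diff = receipt_time - ping_time - (distance_time + send_n + recv_m)
    diff2 = (reply_time + distance_time + send_m + recv_n) - reply_receipt_time
    return PingResult(distance, diff, diff2)

# Invented scenario: ~300 m separation, 5-second clock offset, lags in the
# microsecond range, 10 microseconds of turnaround at M.
result = process_reply(0.0, 2.2e-5, 1e-6, 2e-6,
                       5.000006, 5.000016, 3e-6, 4e-6)
```

If the two offsets disagree, one of the lag constants is wrong or the distance changed mid-exchange, which is exactly the consistency check described above.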
This mechanism enables each beacon to keep a constantly updated time base for every node in range using the same packets it already uses to determine distance to those nodes. Beacons can then broadcast everything they know:
- Their own location (estimated from GPS and refined via the algorithm in the simulator)
- Time(beacon, broadcast)
- For each beacon N in range:
- Node ID of N
- Time(N, broadcast)
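One possible shape for that beacon broadcast, sketched as plain data structures. The field names and types are my own invention (including the latitude/longitude representation of location), not part of any actual wire format:

```python
from dataclasses import dataclass

@dataclass
class NeighborEntry:
    node_id: int           # Node ID of a beacon N in range
    broadcast_time: float  # Time(N, broadcast), in N's own time base

@dataclass
class BeaconBroadcast:
    latitude: float        # beacon's estimated location (GPS-seeded,
    longitude: float       # refined by the simulator's algorithm)
    broadcast_time: float  # Time(beacon, broadcast)
    neighbors: list[NeighborEntry]

# Hypothetical broadcast from a beacon that can hear one neighbor
packet = BeaconBroadcast(45.52, -122.68, 12.5, [NeighborEntry(7, 12.4)])
```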
Here comes the unfortunate part: In order to figure out its own time base relative to a beacon each receiver must ping at least one beacon. Once it has that beacon’s time base it can figure out every other beacon in range of that beacon and work out from there. Because clocks on nodes will drift apart over time this ping will need to be repeated every X minutes. Because the last pinged beacon will eventually be out of range, it should also be repeated whenever the receiver moves more than Y distance. This ping need not contain any identifying information about the receiver, so it shouldn’t have privacy implications, but it will reduce the scalability of the system from receiver_count = infinite to receiver_count = bandwidth / (ping_size * ping_frequency). As long as the pings are relatively small and infrequent that should not be an issue. Multiple receivers attached to the same clock (i.e. multiple receivers carried by the same person) could also share time base information and would not need separate pings.
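As a back-of-envelope check of that scalability formula, with numbers that are purely illustrative guesses rather than measurements of any real radio:

```python
# Illustrative guesses for the scalability limit, not real measurements
bandwidth_bps = 1_000_000   # 1 Mbit/s of shared channel capacity
ping_size_bits = 100 * 8    # a 100-byte ping
ping_interval_s = 300       # each receiver pings once every 5 minutes

# receiver_count = bandwidth / (ping_size * ping_frequency), and
# ping_frequency = 1 / ping_interval, so:
receiver_count = bandwidth_bps * ping_interval_s // ping_size_bits
print(receiver_count)  # 375000 receivers can share the channel
```

Even with these modest numbers the limit is in the hundreds of thousands of receivers per channel, which supports the claim that small, infrequent pings should not be an issue.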
What do you think? Will it work? Are predictable transmit and receive lag even realistic?
Recently I’ve been thinking a lot about how a locator system that satisfies my own requirements could be put together. My current approach is a system of beacons in fixed positions that communicate with each other and broadcast to receivers via radio. This is basically the same system described in Rainbow’s End.
Both receivers and beacons use transmit time to compute distance. Beacons use those distances (and the knowledge that they are all actually in fixed positions) to build an accurate mesh of their relative positions to each other. Those relative positions are fixed to absolute positions by including beacons with very precise known positions in the network.
I wanted to see if I could figure out how to actually find beacon positions relatively and then absolutely based on those few known beacons, so I built a simulator. The beacons use a mass and spring system to push and pull each other into (usually) the right positions. This has the advantage of working without any central authority computing beacon positions. If its neighbors are trustworthy, each beacon can “move” itself around until it finds a stable location. This simulation is in 2D, but there’s nothing preventing it from working just as well in 3D.
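A minimal 2D version of that mass-and-spring relaxation might look like the sketch below. Each beacon treats its measured distance to a neighbor as a spring’s rest length and nudges itself along the spring force; beacons with known positions never move. The spring constant, step size, and beacon layout are all invented for illustration:

```python
import math

def relax(positions, known, measured, spring_k=0.5, iterations=2000):
    """positions: {id: [x, y]}, known: ids of beacons that never move,
    measured: {(a, b): distance} for each pair of beacons in range."""
    for _ in range(iterations):
        for (a, b), rest_len in measured.items():
            (ax, ay), (bx, by) = positions[a], positions[b]
            dx, dy = bx - ax, by - ay
            dist = math.hypot(dx, dy) or 1e-9  # avoid division by zero
            # Hooke's law: force proportional to stretch, along the spring
            f = spring_k * (dist - rest_len) / dist
            if a not in known:
                positions[a][0] += f * dx * 0.1
                positions[a][1] += f * dy * 0.1
            if b not in known:
                positions[b][0] -= f * dx * 0.1
                positions[b][1] -= f * dy * 0.1
    return positions

# Two known beacons 100 units apart and one unknown beacon that measured
# a distance of 100 to each; it should settle near (50, 86.6).
pos = {"A": [0.0, 0.0], "B": [100.0, 0.0], "C": [20.0, 50.0]}
relax(pos, known={"A", "B"},
      measured={("A", "C"): 100.0, ("B", "C"): 100.0})
```

Because every spring update only touches the two beacons it connects, each beacon can run this locally against its own neighbors, which is what lets the mesh settle without a central authority.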
Select a beacon type from the UI and click anywhere in the frame to add a beacon at that location. Or click Random to add a new beacon.
Units are in pixels, except for the mass and spring weight values which are in Foozles and Smurfs respectively.
You can also use these key equivalents:
k: select Known
u: select Unknown
g: select GPS
r: place a beacon of the selected type at a random location
For me, a positioning system has a few requirements to be appropriate for widespread use in Rainbow’s End-style augmented reality:
- The system should scale to any number of mobile devices.
- The system should work indoors and outdoors. It should also work underground in places like subway stations.
- No one should be able to track the position of devices in the system.
- A mobile device should require a warm-up time of less than ten seconds.
- A mobile device should be able to determine its position on an ongoing basis with a frequency of at least 30Hz.
- A mobile device should be able to pinpoint its position down to 1cm or less.
- A mobile device should be able to operate with its positioning system active at all times and still maintain a reasonable battery life.
The closest current contender is GPS. Let’s see how it does on each of those fronts:
- So far so good. The GPS satellites don’t care how many receivers there are. GPS has weathered an explosion in the number of receivers over the past ten years and come through just fine.
- GPS fails this one. It works outdoors most of the time but indoors only if you are near an equator-facing window. It never works underground.
- Since GPS receivers only listen, this is generally true. The 911-driven remote activation requirements allow some GPS devices to be trackable, but the tracking happens through the phone’s network connection not through the positioning system itself.
- GPS manufacturers claim warm-start times under ten seconds. According to TTFF measurements for many models from 2003, some models could warm-start in under ten seconds even then, and things have significantly improved since.
- GPS receivers typically send an NMEA position sentence once per second (or 1Hz). SparkFun lists a few GPS components in the 5-10Hz range. It’s not clear if this is a limitation of the system or if GPS has an inherent update frequency limitation, so we’ll assume that improved chipsets will get the frequency up to 30Hz.
- GPS completely fails this one. Under ideal circumstances and non-real-time post-processing GPS will get you down to about 2cm. Under normal circumstances the accuracy is more like 10-50m. GPS will tell you what street you’re on (if you assume you’re on a street) or what house you’re in, but it can’t tell you what room you’re in.
- Current GPS receivers still draw too much power to leave them on all the time, but Moore’s Law is changing that. They should be always-on in a few more years.
GPS fails two very important requirements: where you can use it and how accurate it is. Satellite-based replacements for GPS are likely to have the same failure indoors and underground. If it ever launches, Galileo is supposed to have a commercial encrypted system that provides accuracy down to 1cm, but it still won’t work indoors or underground. Relying on satellite-based positioning is a dead-end for augmented reality.
The other way that AR researchers are tracking position is with a camera-based system. No one has yet built such a system that operates out in the wild, but it would be theoretically possible. A visual tracking system would operate by comparing the stream of images from the camera against a database of images that is stored in the cloud. The exact form of that comparison is a matter of much research. Whether the comparison happens in the cloud or on the mobile device is also an open question. The general form of the system (large database in the cloud and a stream of images from the camera on the mobile device) is pretty stable though. One key assumption here is that the image database for a city-sized area is far too large to download to the mobile device. Let’s see how that does on our requirements:
- Because of the requirement that we either stream the camera images to the cloud or the local portion of the database from the cloud to the mobile device, each additional user puts incremental load on the system. The number of users in a local area will be limited by the mobile network bandwidth available to those users. The number of total users of the system will also be limited by the server capacity of the system’s provider, but that end of things can scale out more easily.
- This system would work anywhere the database covered. Indoor and underground environments would be fine. Areas where the camera could only see other people (i.e. crowds) would be a problem because the database wouldn’t have anything static to compare against. If the camera depends on environmental light this system would perform poorly in dark areas (or at night.)
- If the camera’s images are streamed to the cloud the system’s provider would know exactly where each device was at all times. If the portion of the database related to a small area is streamed down to the device then the service provider will only be able to locate the device to within that small area. Either way, the provider will know where the user is to within a few hundred feet.
- If the camera images are streamed to the cloud, start-up times should be more or less instant. If the database is streamed down to the device it may take a few seconds to get things started, which is well within our tolerance.
- Current visual tracking systems have trouble reaching 30Hz, but Moore’s Law should take care of that eventually. For a system that streams the video to the cloud bandwidth can also affect update frequency. Once the link starts filling up with streams from other devices the update frequency goes down for every device.
- Visual tracking systems are quite accurate. Finding hard numbers is difficult, but there’s no reason to believe that a visual tracking system would be less accurate than 1cm.
- Visual tracking systems are power-hungry at the moment. They require fast cameras, fast network connections, fast CPUs on the mobile devices, and lots of memory. Because so much of the system is unknown, it’s hard to pin down numbers, but I would estimate that we need 100x power reduction before leaving this system on all the time is realistic. That will take Moore’s Law about ten years to accomplish.
If we can solve the low-light and power issues, a visual tracking system would certainly work for a small number of users. Solving the bandwidth constraint for a system that much of the population is using is a more daunting issue. All that bandwidth also makes the system expensive to operate, a cost that will be passed on to end users as either usage fees or advertising. Building a workable, generally available visual tracking system is not an impossible problem, but it’s certainly a difficult one.
Personally, I’m not satisfied with either of these systems. I have thoughts on how to build a better one, but I’ll save those for a future post. What do you think? Am I missing any major requirements? Are any of mine unnecessary? Am I representing GPS or the imagined visual tracking system unfairly? Let me know in the comments!
In the interest of seeing just how wrong I can be twelve months from now, here is a list of things I think will happen in 2011. This is possibly the worst day of the year to write such a post, what with CES starting on Thursday, but that’s never stopped me before.
- Netflix will continue to kick ass. Their selection of streaming movies and TV shows will explode in 2011, though they will have to pay more for all that content.
- My internet connection will improve. Self-fulfilling prophecy? I hope so! I’ve had 1.5Mb/768kb DSL for ten years. It’s well past time to upgrade. In theory Qwest will be putting 20Mb service into my neighborhood soon, so maybe that’s in my future.
- Android will continue to kick ass and take names. 2011 will see >60% smartphone market share, a dizzying array of tablets and phones, and probably even some netbooks by fall. More and more apps will start to ship on both Android and iOS at the same time.
- Android 3.0 will include improvements for the annoying OS upgrade delays on that platform. Google will come up with some way to apply pressure on handset manufacturers and carriers to deliver the latest version of Android to users in a more timely fashion.
- Still no consumer-level visual pass-through AR glasses. I said it last year, and I’ll keep saying it every year until I’m wrong.
- This will be the year the electric car revolution begins. The Nissan Leaf and Chevy Volt will both sell well and set the stage for the electric cars of 2012 (including the Tesla Model S) to blow the doors off.
- This year will feature one “unthinkable ten years ago” level medical advance. Will it be a cure for cancer? Regrowing limbs from your own stem cells? Repair of severed spinal cords? Pain medication with no side effects? Who knows, but something big is going to happen this year.
And that’s it! If it’s not on this list it’s not going to happen in 2011!
(Think maybe something might happen in 2011 that wasn’t on this list? Please add your own prediction in the comments and we’ll see how you do!)
I realize these “predictions at new years” posts are a little cheesy and that you see them everywhere. I enjoy writing them, so I’m going to do it anyway. This is my look back at my predictions of one year ago to see how I did.
- Correct. It is arguably fair to call STO the only significant MMO launch of 2010. APB sort of fizzled, after all. I haven’t heard much about STO since its launch though… not sure how it’s actually doing.
- Sort of Correct. I could only come up with two cancellations from my list:
- APB was actually cancelled after it came out. That’s the wrong way around.
- The Agency is rumored to be more or less shut down at this point. Nothing’s been announced here and probably never will be.
- Correct. Reports are that they’ve both sold millions of units. Natal (now named Kinect) has also incited thousands of cool Kinect Hack YouTube videos. Dance Central is pretty cool, so at least one great Kinect game is already out.
- Correct. According to this chart the unemployment rate in the US peaked at 10.6% in January 2010.
- Wrong. I’ve seen no evidence that Junaio, Layar, or Wikitude are ready to stray from their AR roots yet. In fact they seem to be doubling down by making the set of things they can position at a GPS location much more complete.
- Correct. There haven’t been any interesting new products in the area of wearable displays. Lots of talk at ARE2010 and elsewhere, but nothing concrete yet.
- Correct. Google Goggles came to the iPhone, but other than that neither company has done anything on the AR front.
- Wrong. There’s no indication that the marketing world (or consumers) are tired of simple AR campaigns. If anything the campaigns are continuing to grow in popularity and complexity.
- Correct. The iPad and iPhone 4 came out. Good thing I didn’t predict how well the iPad would do… I would have massively underestimated it.
- Correct. App store approval times are reported to be under a week these days. They also published the review guidelines, which is a big step up from 2009.
- Sort of wrong. Technically 200,000 is more than 50,000, but I completely underestimated the meteoric rise of Android during 2010. I thought that Android phones would only outsell iPhones until iPhone 4 came out, but they topped the iPhone in May (in the US) and never looked back.
- Correct. Nobody figured out what to do with it so Google mothballed the project.
- Wrong. Wave doesn’t inter-operate with anything. That’s a big part of why it failed in my opinion.
My score was 9 correct and 4 wrong. Better numbers than last year, but I think I made more safe bets for 2010 too. 2010 went pretty much how I expected it would (with the notable exception of Android going gangbusters.)
How was your year? Did anything surprising happen?