Over the past year I’ve spent a lot of time thinking about what piece of the augmented reality ecosystem would be the best to start a business around. I’m still not ready to take that jump so, in my case at least, the answer is still “none yet”. However, in my exploring I keep coming up against a problem:
- The absolute most profitable place to be in augmented reality is the platform provider at the center of everything.
- The profit motives of that platform provider could set the development of AR back by about ten years.
A brief history of the web
Whether by design or happy accident, the format and protocol behind the web (HTML and HTTP) are easy to implement and completely open. This meant that by the time Netscape came along, there were already browsers on the Macintosh (CERN’s and Mosaic), Windows (Mosaic), and X (CERN, Mosaic, Viola, etc.). There were also 200 active web servers, and port 80 accounted for more than 1% of the traffic on the NSF backbone.
That ecosystem meant that Netscape had to remain compatible with what already existed in order to succeed. Sure, they were selling licenses to their own software, which let them cash in on the shocking growth of the web, but the Netscape browser had to work just as well against pages served by HTTPD, IIS, Apache, and any other random web server anyone decided to write. The same thing was true from the other side. Netscape Now! buttons aside, website operators soon had to deal with at least two, and possibly more, different browsers, as well as various versions of each browser.
This made life interesting for web designers, but it was good for the web as a platform. The nature of the web meant that nobody had to convince anyone else to say “Yes” in order to get involved. There is no way that any one company (or any ten companies, for that matter) could have even authorized, let alone managed, all of the initiatives that went on with the web between 1994 and 2000. There was just too much stuff happening.
The open nature of the web allowed the cost of innovation to be spread around to thousands of organizations around the world. It also let anyone with enough cash to buy some hosting try out their big idea. Most of those ideas failed, of course, but when taken as a whole they succeeded beyond anyone’s wildest expectations.
I think that augmented reality has the potential to follow a growth curve with the same shape as the one the web followed. The web had very few institutional barriers standing in the way of its growth, and the AR ecosystem would do well to learn from that.
Open Augmented Reality
If the emerging augmented reality ecosystem wants to grow as quickly as the web it cannot include anyone who must say “Yes” to allow existing users to get a new capability. That implies a few things:
- Anyone can publish content into the system. No gatekeeper vets the quality or appropriateness of what gets published.
- Clients from multiple vendors are able to view that content. Anyone who chooses to can write a new client that works with existing content.
- Servers from multiple vendors are able to respond to requests for data. Choosing server technology is primarily a decision for content providers to make and their choice is invisible to end users.
- The network itself is neutral to the data being transmitted across it. This means that mobile internet providers must not white-list only the content from publishers they have partnerships with.
- There is no single central directory that all content (or every content provider) must be listed in to be available.
Note that this does not require that the software in question be open source. Open source software (in the form of Linux, HTTPD, Apache, Perl, PHP, and others) was instrumental in spreading the web far and wide. However, the personal computer revolution happened with little in the way of open source software and was just as rapid as the spread of the internet.
As VRML and many other standards over the years have taught us, developing a new standard from whole cloth is fraught with peril. It is even more difficult (as in the case of VRML) when there is not an existing standard that the new standard is intended to supplant. The AR community must avoid repeating the history of VRML. Fortunately there are existing standards that lend themselves well to the problems augmented reality developers are trying to solve.
The first of these is good old HTTP. As a transport protocol, HTTP fits the list above very well. The protocol is well understood, decentralized, and available in server or client library form for every platform. Minor new standards for querying location-specific data are already emerging.
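To make that concrete, a location-aware content request can be nothing more than an ordinary HTTP GET with the viewer’s coordinates in the query string. This is only a sketch: the host name and parameter names below are hypothetical, not part of any published standard.

```python
from urllib.parse import urlencode

# Hypothetical AR content endpoint -- the host and parameter names
# here are illustrative, not taken from any existing service.
BASE_URL = "http://ar.example.com/poi"

def build_query(lat, lon, radius_m):
    """Build a plain HTTP GET URL asking for points of interest
    within radius_m metres of (lat, lon)."""
    params = {
        "lat": f"{lat:.6f}",
        "lon": f"{lon:.6f}",
        "radius": radius_m,
    }
    return f"{BASE_URL}?{urlencode(params)}"

print(build_query(52.367600, 4.904100, 250))
```

Because the request is just a URL, any HTTP client library on any platform can make it, and any web server can answer it, which is exactly the decentralization the list above calls for.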
The second current standard that the augmented reality developers can adopt and bend to their will is KML. KML is the file format that Google Earth uses to represent geocoded information. It has support for points, lines, and shapes. KML is an open standard and is supported by many GIS packages in addition to Google Maps and Google Earth. Google has open-sourced its own KML parsing library so there is a place to start there too.
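As an illustration of how little machinery a client needs, here is a minimal KML document describing a single point, parsed with nothing but Python’s standard library. The placemark content is invented for this example; the structure (a Placemark containing a Point with lon,lat coordinates) follows the KML 2.2 standard.

```python
import xml.etree.ElementTree as ET

# A minimal KML document with one Placemark. The landmark itself is
# made up; the element structure follows KML 2.2.
KML = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>Example landmark</name>
    <Point>
      <coordinates>4.9041,52.3676,0</coordinates>
    </Point>
  </Placemark>
</kml>"""

NS = {"kml": "http://www.opengis.net/kml/2.2"}

def placemark_points(kml_text):
    """Return (name, lon, lat) tuples for each Point Placemark.
    Note that KML orders coordinates as lon,lat[,alt]."""
    root = ET.fromstring(kml_text)
    points = []
    for pm in root.iter("{http://www.opengis.net/kml/2.2}Placemark"):
        name = pm.findtext("kml:name", default="", namespaces=NS)
        coords = pm.findtext("kml:Point/kml:coordinates", namespaces=NS)
        if coords:
            lon, lat = coords.strip().split(",")[:2]
            points.append((name, float(lon), float(lat)))
    return points

print(placemark_points(KML))
```

A client that can do this much can already consume geocoded content from any server that emits KML, regardless of who wrote either end.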
Any augmented reality client that supports attaching web content (via URLs) to locations can also take advantage of most other existing web standards for whatever happens to be behind those URLs.
Is this how things are actually going?
So far, I have seen very little discussion of how different augmented reality systems will work together. In large part that is the point of this post. But then there are also very few AR systems that exist outside of laboratories, so we could just be in the bad old proprietary hypertext system days of the late 80s.
So far the AR systems that seem to be designed for lots of different kinds of data (Layar and Seer) have not announced any way for third parties to publish data for their clients. My Twitter exchanges with Raimo at SPRXMobile make me think that Layar is at least thinking about it. Hopefully they will turn out to be as open as I’ve outlined above.
How important do you think open AR standards are? Can an AR solution succeed without them?