The Upcoming Parallel Digital Universe

[Image: Parallel Digital Universe, Google Earth's 3D view of Portland, Oregon]

Today I checked out Google Earth for the first time in a year. It's integrated into the Google Maps application now and renders via WebGL, which is badass in and of itself.

The picture above is of Portland, Oregon. I zoomed in a lot and was able to simulate walking around the streets. As you can see, the visuals are rather funky looking, kinda melty. The meshes they're attached to aren't perfect either, like super low-poly versions of the buildings. I'm pretty sure someone had to build them all by hand. I'd also guess that someone had to go out and take pictures of the sides of buildings, while the rooftops, streets, and other horizontal imagery would have come from satellites.

That's really a lot of work for a finite team of employees to do, isn't it? At least the altitude of the ground is something we've known for a long time, thanks to the folks who do land surveys.

When I was in Y Combinator, one of the companies in my batch had this really badass product called Matterport, which takes imagery or video of the real world, builds 3D meshes from it, and applies textures to those meshes, allowing for some really fast 3D prototyping. Matterport is, however, a hardware solution, and likely won't be as ubiquitous as something like a cellphone.

Every photo taken by a smartphone carries a bunch of metadata (EXIF), which includes information such as GPS coordinates. Our phones are aware of more than the current location, though; they also know their cardinal direction, altitude, acceleration, and even, to an extent, their rotation along the three spatial axes. Say Google gets your permission to use the photos you take and applies them to this virtual world. As you take photos and they're automatically sent to Google, you'd end up seeing your images applied to that imaginary world. As higher-resolution, less blurry versions of an image are uploaded, the “weaker” ones are slowly phased out. Overlapping images could have their opacity adjusted algorithmically to produce the best possible composite. And as people attempt to trick the system with fake images, they'd simply be overwhelmed by correct ones.
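To ground that a bit, here's a minimal sketch of pulling this positional metadata out of a photo in Node.js. It assumes the exifr npm package; the file path and the PhotoPose shape are mine, purely for illustration.

```typescript
// npm install exifr
import exifr from 'exifr';

// A hypothetical shape for the pose data a modern phone embeds in EXIF.
interface PhotoPose {
  latitude: number;
  longitude: number;
  altitude?: number; // meters, when the phone records it
  heading?: number;  // compass direction the camera faced, in degrees
}

async function extractPose(path: string): Promise<PhotoPose | null> {
  // exifr.gps() is a convenience helper returning { latitude, longitude }.
  const gps = await exifr.gps(path);
  if (!gps) return null; // the photo carries no GPS tags

  // Pick the remaining tags of interest out of the full EXIF block.
  const tags = await exifr.parse(path, ['GPSAltitude', 'GPSImgDirection']);
  return {
    latitude: gps.latitude,
    longitude: gps.longitude,
    altitude: tags?.GPSAltitude,
    heading: tags?.GPSImgDirection,
  };
}

// Hypothetical usage: figure out where in the virtual world a photo belongs.
extractPose('./portland-street.jpg').then((pose) => {
  if (pose) console.log(`Anchor photo at ${pose.latitude}, ${pose.longitude}`);
});
```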

Thus, a digital alternate reality could easily be built, crowdsourced from everyone taking simple pictures with their phones. And if I know Google, this data will be available via an API, so it can be used in games, training applications, architectural tools, you name it.
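No such API exists today, of course, so the following is pure speculation: a short sketch of what requesting a crowdsourced mesh tile might look like, with the endpoint, the MeshTile shape, and every field name made up for illustration.

```typescript
// Entirely hypothetical client for the kind of API imagined above.
interface MeshTile {
  vertices: number[]; // flat [x, y, z, ...] array, ready for WebGL buffers
  faces: number[];    // triangle indices into the vertex array
  textureUrl: string; // crowdsourced photo composite for this tile
}

// Fetch the mesh covering a given coordinate from a made-up endpoint.
async function fetchTile(lat: number, lng: number): Promise<MeshTile> {
  const res = await fetch(`https://example.com/v1/world-mesh?lat=${lat}&lng=${lng}`);
  if (!res.ok) throw new Error(`Tile fetch failed: ${res.status}`);
  return res.json() as Promise<MeshTile>;
}

// A game or training app could hand the result straight to WebGL.
fetchTile(45.5152, -122.6784) // downtown Portland
  .then((tile) => console.log(`${tile.faces.length / 3} triangles loaded`));
```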

Taking this a step further, combine it with the Google Glass project: by wearing these Augmented Reality-capable glasses, you'd be able to walk around in a parallel universe. Things which cannot or do not yet exist in the real world could be superimposed in 3D space. One could view a completed version of an under-construction building, sit on the couch next to a distant loved one, play Rock Band on stage in front of a crowded theater, or even interact with an AI-controlled virtual girlfriend. I haven't played it in years, but AR Pokémon would be badass.

While this data could be useful for traditional computing devices, such as a PC or a game console, I see it being most beneficial for Augmented Reality. This 3D alternate reality could itself become a platform, with games built within it rather than it being integrated into games. Using a HUD, one could pick the type of game to play within this parallel world.
