
Spatial Computing, Sci-Fi Style

The technology seen in Minority Report is not as far off as you think

Before PCs and laptops, computers filled entire rooms. In the future, they may be all over your room. There will be no mouse. In fact, there will be little familiar about it at all: the user will stand in front of a series of screens, gesturing with gloved hands, moving images around, touching individual pixels, tracing shapes, and navigating complex fields of data. To the untrained observer, it will look as though they were conducting an imaginary orchestra. Instead of pointing and clicking, the user will just point.

Improbably, this computer already exists. It's powered by g-speak, which isn't an operating system like Windows Vista or Apple's Leopard, but rather a "spatial operating environment."

Sure, it sounds new age, but it's exactly what the makers of the 2002 sci-fi film Minority Report were looking for when they encountered an early version of g-speak at the Massachusetts Institute of Technology's Media Lab. They were in the market for a futuristic technology, one that looked like it could plausibly exist in, say, 2054, and g-speak fit the bill. John Underkoffler, the system's designer, soon found himself on the Minority Report set. Acting as a science advisor, he explained the finer points of the "gestural language" used to navigate the then-nascent interface.

More than just an opportunity to show off the system to the Hollywood set, the film also proved to be an unorthodox turn in the evolution of g-speak.

Sure, the scenes where a furrow-browed Tom Cruise manipulates police forensics data on giant screens were initially shot on blank glass, with the software added in post-production. Still, it was really g-speak. The film's actors were fully trained in the complex workings of the operating environment and were essentially miming real actions. When the film came out, Underkoffler (pictured above) and his collaborators had the best demonstration video of all time, complete with a cameo by Cruise! "Audiences really responded to those scenes," Underkoffler told me. "You could tell, talking to people about it, that they felt like they'd seen something that either was real or should be."

The notion that the spatial operating environment should already exist was a welcome reaction, since the design of g-speak was a step toward making computers more intuitive and logical to the human brain. Catalyzed by the positive response, and the thrill of seeing a fully functional version of their idea come to life, they went straight back to the drawing board. The operating environment had been built twice already: once in the lab at MIT, and once in a high-profile piece of popular media. Both versions had their own limitations: the financial constraints of the academic sphere, which relegated the prototype g-speak to an intellectual parlor game reserved for "grimly serious applications in the field of optics," and the aesthetic standards of the film industry, respectively.

Its use in the film suggested that a spatial operating environment could be applied to a practical purpose (not that predicting crime is practical, but you get the point), and Underkoffler and his collaborators desperately want to see that happen in real life. So they embarked on a third direction: commercial.

Under the moniker Oblong Industries, the creators of g-speak now customize their futuristic computer platform for whoever needs it, including Fortune 50 companies, government agencies, and universities. They also sell g-speak to companies with big-time data issues, like those in telecommunications and network management, financial services, and medical imaging and bioinformatics. That is to say, people for whom the process of reaching into, pointing, poking, and spinning data around might be a much-needed respite from traditional number-crunching, and might provide valuable new insights.
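To give a rough sense of what "reaching into" and pointing at data involves, here is a bare-bones sketch of the geometry a pointing interface generally depends on: track where the hand is and which way it points, cast a ray along that direction, and intersect it with the plane of the display to get a cursor position. It's written in ordinary Python with made-up numbers, and it is not drawn from Oblong's actual software; it's just the textbook ray-plane math that systems like this tend to build on.

# A toy illustration (not Oblong's g-speak): turn a tracked hand into an
# on-screen cursor by intersecting the hand's pointing ray with the screen.
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float

    def sub(self, o):   return Vec3(self.x - o.x, self.y - o.y, self.z - o.z)
    def add(self, o):   return Vec3(self.x + o.x, self.y + o.y, self.z + o.z)
    def scale(self, s): return Vec3(self.x * s, self.y * s, self.z * s)
    def dot(self, o):   return self.x * o.x + self.y * o.y + self.z * o.z

def cursor_from_hand(hand_pos, hand_dir, screen_point, screen_normal):
    """Return the 3D point where the pointing ray hits the screen plane,
    or None if the hand points away from (or parallel to) the display."""
    denom = hand_dir.dot(screen_normal)
    if abs(denom) < 1e-6:      # ray is parallel to the screen
        return None
    t = screen_point.sub(hand_pos).dot(screen_normal) / denom
    if t < 0:                  # pointing away from the screen
        return None
    return hand_pos.add(hand_dir.scale(t))

# Hypothetical example: a hand one meter in front of a screen lying in the
# z = 0 plane, pointing slightly up and to the right.
hit = cursor_from_hand(Vec3(0.0, 0.0, 1.0),   # hand position (meters)
                       Vec3(0.2, 0.1, -1.0),  # pointing direction
                       Vec3(0.0, 0.0, 0.0),   # any point on the screen
                       Vec3(0.0, 0.0, 1.0))   # screen's normal vector
print(hit)  # Vec3(x=0.2, y=0.1, z=0.0) -- where the cursor would land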
After all, computers, with their processors, memory, graphics, and networked view of the world, are offering us increasingly complex possibilities for translating and interacting with 1s and 0s. Yet the way we use computers hasn't changed appreciably since the 1980s: we still click around a screen with a mouse or track pad.

The makers of g-speak know that this sort of control doesn't take advantage of how the human brain works. According to Underkoffler, the brain regions that control muscles, muscle memory, and proprioception (the sense of where your body is in space) evolved to work together with the visual system to deal with spatial situations. "That's why we're all such experts at getting around and manipulating the real world," he says. "So it seems clear to us that computers should work the same way."

Minority Report predicted this technology would be ready by 2054. It's 2008, and it's nearly ready to go. Don't worry, though: a g-speak platform probably won't replace our laptops anytime soon. (For one, not many of us can afford it.) In the future, however, it could allow us to blend virtual reality with computing, immersing ourselves in a Google Map or nearly walking inside a photo album.

As our interconnected world evolves, so will our interfaces, and our newest tools (think multi-touch iPhones and Microsoft's Surface) will reflect our changing needs, one of which, perhaps, is a new desire to be able to reach out and touch information.

(All photos by Will Etling)
