SpaceTop Minority Report Style Computer Is Amazing But My Gimpy Elbow Says, "No Thanks"

Ah, Minority Report. Do you think that a decade ago, Steven Spielberg knew that the tech he revealed to us would become the de facto ideal for the future of computer displays? Well, for some. Specifically the athletic and able-bodied. But the rest of us aging or injured folks look at those displays and think, Ugh, really? That looks painful.

Take, for instance, the stand-up display where Tom Cruise's character spends the majority of the movie with his arms raised at heart level, using sweeping arm and hand gestures to manipulate the controls. The real-world version of that interface, G-Speak, was demoed at TED back in 2010, and it's an astounding piece of technology. And it's great if you:
  • can see
  • have full arm mobility
  • can stand for long periods of time
Obviously, this is not a universal user interface suitable for anyone and everyone. Including me. Since I broke my elbow snowboarding a couple of years ago, my gimpy elbow likes to yell at me when I subject it to overuse. This even includes lying on my back in bed, playing some game on my smartphone as it rests on my chest. Hell, even bowling on the Xbox 360 Kinect hurts after a few rounds. The idea of holding my arms up all day and manipulating a screen using gestures just sounds exhausting.

And then along comes this: the SpaceTop 3D Display. Demoed recently at TED (where else?), SpaceTop, developed by Jinha Lee, is:
a three-dimensional computer interface that allows a user to “reach inside” a computer screen and grab web pages, documents, and videos like real-world objects. More advanced tasks can be triggered with hand gestures. The system is powered by a transparent LED display and a system of two cameras, one tracking the users’ gestures and the other watching her eyes to assess gaze and adjust the perspective on the projection.
Credit: TED via Wired
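For the technically curious: that last bit, watching your eyes "to assess gaze and adjust the perspective on the projection," is what graphics people call an off-axis (head-coupled) projection. Lee's actual implementation isn't described here, but a minimal sketch of the standard technique, assuming the eye-tracking camera hands you an eye position in metres relative to the screen's centre, might look like this:

```python
def off_axis_frustum(eye, screen_w, screen_h, near):
    """Return asymmetric view-frustum edges (left, right, bottom, top)
    at the near plane, for an eye at `eye` = (x, y, z) in metres
    relative to the screen centre (z = distance from screen, > 0).

    Illustrative only -- not SpaceTop's actual code.
    """
    ex, ey, ez = eye
    scale = near / ez  # project the physical screen edges onto the near plane
    left   = (-screen_w / 2 - ex) * scale
    right  = ( screen_w / 2 - ex) * scale
    bottom = (-screen_h / 2 - ey) * scale
    top    = ( screen_h / 2 - ey) * scale
    return left, right, bottom, top

# Eye dead centre, 50 cm from a 40 x 30 cm screen: a symmetric frustum.
print(off_axis_frustum((0.0, 0.0, 0.5), 0.4, 0.3, 0.1))
# Slide your head 10 cm to the right and the frustum skews to match,
# so objects "inside" the screen appear fixed in real space.
print(off_axis_frustum((0.1, 0.0, 0.5), 0.4, 0.3, 0.1))
```

Each frame, you would recompute these edges from the tracked eye position and feed them to something like OpenGL's glFrustum, which is what makes the image shift as your head moves.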
Now, let's be honest. As with all 3D displays, this one looks very impressive, and the technology behind it is remarkable. And the potential uses are manifold: anything that requires visual-spatial manipulation, from design (like architecture and product development) to science (like DNA manipulation and theoretical physics models) to play (gaming and game development). The potential for this type of technology is astounding. Imagine our kids being able to walk up to a "smartboard" at school and just move math equations around. No chalk required. Or having a gesture-driven interface at home to control your entire house: your TV, gaming console, plus your lights and temperature settings... all with a few sweeping gestures. And at the hospital, a doctor could pull up a 3D scan of your internal organs, reach into the image, and point directly to the problem site to show you exactly how the procedure you need will work. Fascinating.

But how does it adapt to become assistive? This has become a recurring question in my mind with all sorts of new tech emerging, including the latest Google Glass. My colleague, @SteveBuell, and I are exploring how Glass and other wearable technologies can be modified to offer assistance to the blind or to people with mobility issues. When I look at developments like gestural interfaces, I can't help but wonder how they can be modified to suit a variety of demographics.

Apparently, I'm not the only one who thinks that this tech is a bit too labour-intensive, as evidenced by this recent article by Tested and this one in Wired, which asks:
"People are used to gently flicking computer mice and grazing keyboards and tablet screens; do they really have the stamina to reach into their computers and flail their arms around?"
No. No, they don't. But you know what they can do? Speak. Use their eyes to point at things. Interactions that require much less physical effort and much less strain overall, and that would be sustainable over the course of a whole work session, even a work day.

These phenomenal physical technologies will be useful in specific circumstances, but we shouldn't expect them to become universal; they just won't meet the needs of the majority. And that should be the next step in their exploration: how do they need to be modified to be useful to everyone? It's taken us a decade to make real gestural tech. If we focus on its accessibility, maybe we can leapfrog ahead and get to a usable, day-to-day version even faster. I look forward to watching the continued developments as they emerge.

Check out SpaceTop in action and then ask yourself: could you do this all day?