New hardware lets any computer run an interactive, 3D interface

https://arstechnica.com/science/201...any-computer-run-an-interactive-3d-interface/


My first experience with a hologram was, like so many other people's, completely fictional: a small, blue figure projected from R2-D2 in the original Star Wars. About a decade later, I got a taste of the real-world state of the art from New York City's Museum of Holography, now closed. Holograms did exist in all their 3D glory, but they were static. You committed to displaying one image when the hologram was made, and that was it. No animated messages from princesses.

But there has been progress since. Holographic displays with actual refresh rates—albeit painfully slow ones—and other approaches have been described, but products based on any of this have yet to appear. Meanwhile, non-holographic approaches to 3D have taken off. TV and movie screens feature 3D viewing with simple glasses but don't allow interactions. Immersive goggles and gear do allow interaction, but only for the people wearing the goggles, which isolates them from anyone nearby.

So we were intrigued when Shawn Frayne, founder of the Brooklyn-based company Looking Glass, offered us the chance to have a look at what they're calling Holoplayer One. It's a 3D projection system that lets users interact with the projected images, all without goggles or glasses. And, perhaps most importantly, it was almost ready for market.

3D in the brain
Non-holographic systems create the illusion of visual depth by taking advantage of how our visual system normally works. Our eyes aren't capable of much in the way of depth perception individually; instead, slight differences in the information obtained by our two eyes are interpreted by the brain to provide information on depth and relative locations. This happens naturally because our eyes are slightly separated. That separation is enough so that they view three-dimensional objects from slightly different distances and perspectives. To appreciate just how much this matters, all you need to do is spend a chunk of your day trying to navigate life with one of your eyes closed.
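
To see why that small separation carries depth information, it helps to look at the underlying geometry: in the standard pinhole-camera approximation used in stereo vision, the distance to a point is inversely proportional to how far it shifts between the two views. The sketch below is just that textbook relationship, not anything specific to the Holoplayer; the 0.063 m baseline is simply the average separation between human eyes.

# Textbook stereo-geometry sketch (not part of the Holoplayer hardware):
# with a pinhole model, depth is inversely proportional to disparity,
# i.e. the horizontal offset of a point between the two eyes' (or cameras') images.

def depth_from_disparity(baseline_m, focal_px, disparity_px):
    """Estimate distance to a point seen from two horizontally offset viewpoints.

    baseline_m   -- separation between the viewpoints (about 0.063 m for human eyes)
    focal_px     -- focal length expressed in pixels
    disparity_px -- horizontal shift of the point between the two images, in pixels
    """
    if disparity_px == 0:
        return float("inf")  # no disparity: the point is effectively at infinity
    return baseline_m * focal_px / disparity_px

# Example: a 10-pixel disparity with a 1000-pixel focal length
# puts the point roughly 6.3 m away for an eye-like baseline.
print(depth_from_disparity(0.063, 1000, 10))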

But it's possible to trick the brain into thinking an image projected from a flat screen has depth. The oldest of these tricks, dating back to the 16th century, is Pepper's Ghost, which relies on a combination of mirrors and some partly reflective glass, as well as distance: you need to be far enough away that other visual cues about depth don't spoil the illusion. This was most famously used to create a "live" performance by the deceased rapper Tupac.

The alternative is to simply feed your two eyes slightly different images. The 3D glasses you get at the movies place different filters in front of each eye; combined with the right content, this ensures that each eye sees a slightly different image. VR goggles simplify matters even further by placing a separate screen in front of each eye.

Shawn Frayne, founder of Looking Glass, indicated that the Holoplayer operates under similar principles. It ensures that, as long as your face is positioned within a set distance (a meter or two) and viewing angle, your two eyes will see different images, allowing your brain to interpret the display as three-dimensional. But it doesn't need any goggles or glasses in order to do so. How does that work?

The foundation of the system is a standard LED display. But the hardware splits that display into 32 overlapping subdisplays, each showing a slightly different perspective on the item that's on-screen. These subdisplays are interlaced, meaning their pixels are mixed in together rather than arranged as individual, discrete images on a grid. Viewed directly, the LED display therefore shows only a fuzzy-looking version of the object.
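
Looking Glass hasn't published its exact interlacing pattern, but the basic idea of weaving many views into one panel can be sketched roughly like this. It's a simplified, hypothetical scheme in which the source view simply cycles across pixel columns; the real layout is certainly more involved.

# Hypothetical illustration of view interlacing -- not Looking Glass's actual layout.
# 32 renders of the same scene, each from a slightly shifted viewpoint, are woven
# into a single framebuffer so that neighboring pixels come from different views.

import numpy as np

NUM_VIEWS = 32
HEIGHT, WIDTH = 1080, 1920

# Pretend each view is a full-resolution render from its own virtual camera.
views = [np.full((HEIGHT, WIDTH, 3), v, dtype=np.uint8) for v in range(NUM_VIEWS)]

interlaced = np.empty((HEIGHT, WIDTH, 3), dtype=np.uint8)
for x in range(WIDTH):
    v = x % NUM_VIEWS              # simplistic rule: the view index cycles across columns
    interlaced[:, x, :] = views[v][:, x, :]

# Viewed directly, 'interlaced' looks like a blur of all 32 perspectives;
# the optics sort the views back out so each eye intercepts a different one.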

The magic happens after the light leaves the LED display. First, it reaches a partially reflective, polarization-sensitive mirror called a beam splitter, which reflects only light of a specific polarization and is oriented to match the polarization of the LED's output. The beam splitter sends that light back into the hardware, toward a region above the LED display covered by a reflective coating that also rotates the light's polarization. When the light comes back out of the device and reaches the beam splitter again, its polarization has changed, so it's no longer reflected, and the images exit the Holoplayer.
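
Stripped of the optics, that description boils down to a simple rule at the beam splitter: reflect light whose polarization matches the LED's, and pass light whose polarization has been rotated. The toy model below tracks that round trip as nothing more than an angle; it's my own simplification of the article's description, not an optical simulation.

# Toy model of the Holoplayer light path as described above --
# a simplification that tracks polarization as a single angle, nothing more.

LED_POLARIZATION = 0     # degrees; the beam splitter is aligned to reflect this

def beam_splitter(polarization_deg):
    """Reflect light that matches the LED's polarization; transmit everything else."""
    return "reflect" if polarization_deg % 180 == LED_POLARIZATION else "transmit"

def coated_mirror(polarization_deg):
    """The reflective coating bounces the light back and rotates its polarization."""
    return (polarization_deg + 90) % 180

# First pass: light from the LED hits the beam splitter and is reflected inward.
pol = LED_POLARIZATION
assert beam_splitter(pol) == "reflect"

# The coated surface sends the light back with its polarization rotated 90 degrees.
pol = coated_mirror(pol)

# Second pass: the rotated light is no longer reflected, so it leaves the device.
assert beam_splitter(pol) == "transmit"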

As a result of all of these reflections, the individual subdisplays end up slightly separated in space. The separation is enough to ensure that your eyes see different images, which your brain then interprets as a three-dimensional view. "Without headgear, we blast out nearly three dozen views at a time," Frayne told Ars, "and your eyes kind of intercept those views that are spilling out of the system into space."

The system is flexible enough to improve with technology. Frayne showed us a version with a higher-resolution display that looked notably better. It's also possible to have the 3D view appear inside the hardware, which also seemed to improve the image quality. But that takes away from the system's other selling point: depth-sensitive interactivity.

Touching 3D
Just above the light-manipulating hardware, the Holoplayer is equipped with an Intel RealSense camera that is capable of tracking fingers and gestures in space. This makes the system interactive, as it can compare the position of a user's finger with the space occupied by displayed items. "We track your fingers over top of the Holoplayer 1 and that tracking is then fed into the application and allows you to manipulate and interact directly with that floating three-dimensional scene," Frayne said.
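
Looking Glass hasn't published the details of that software layer, so the sketch below is only a guess at its shape: each frame, compare the tracked fingertip position against the volume occupied by the displayed objects and fire a handler on anything close enough. All of the names here are invented for illustration; the real Holoplayer and RealSense APIs will look different.

# Hypothetical interaction loop -- the function and object names are made up for
# illustration; the actual Holoplayer SDK and RealSense bindings will differ.

import math

def distance(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def update(fingertip_xyz, scene_objects, touch_radius=0.015):
    """Fire a 'touched' handler on any object the fingertip is close enough to.

    fingertip_xyz -- (x, y, z) of the tracked finger, in the display's coordinate frame
    scene_objects -- objects with a .position (x, y, z) and an .on_touch() handler
    touch_radius  -- how close, in meters, counts as touching (an assumed value)
    """
    for obj in scene_objects:
        if distance(fingertip_xyz, obj.position) < touch_radius:
            obj.on_touch()

# Each frame: read the fingertip position from the depth camera, convert it into the
# same coordinate frame as the floating scene, then call update() with the result.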

Frayne showed off software that lets users draw in 3D with their fingers and other software that acts like a virtual lathe, sculpting down a spinning object (the software can send the output straight to a 3D printer). There was even a game that involved moving blocks around a rotating 3D landscape.

For the game, a standard game controller was outfitted with a stalk topped by a white plastic ball that was easy to track, allowing the integration of gestures and some button-mashing. Right now, Looking Glass is only tracking a single finger, but Frayne said there's no reason that additional digits couldn't be tracked as well (the Intel tech can handle all 10). And, perhaps more dramatically, those fingers don't have to be from the same user.
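
The article doesn't say how that white ball is followed, but a bright, uniform marker is about the easiest thing there is to track in a camera image. A generic approach, assuming OpenCV and a simple brightness threshold rather than whatever Looking Glass actually uses, might look like this:

# Generic bright-marker tracking sketch -- an assumption about how a white ball
# could be followed in a camera frame, not Looking Glass's actual implementation.

import cv2

def find_ball_center(frame_bgr, brightness_threshold=220):
    """Return the (x, y) pixel center of the bright blob, or None if nothing is found."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, brightness_threshold, 255, cv2.THRESH_BINARY)
    moments = cv2.moments(mask)
    if moments["m00"] == 0:
        return None
    return (moments["m10"] / moments["m00"], moments["m01"] / moments["m00"])

# Combined with the depth camera's per-pixel distance, the 2D center becomes a 3D
# position that can drive the same interaction code as a tracked fingertip.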

The projection method is the key to making this something more than a personal interface. Anyone within the effective area of the display will see exactly the same thing as the person using it—and will perceive that user's finger as interacting with projected content in exactly the same way as the user does. "If I touch a spot on a three-dimensional scene—let's say the front of an X-wing floating over the Holoplayer 1—my buddy sitting next to me looking over my shoulder sees my finger coincident with the same tip of that X-wing that I see," Frayne said. "That means we can have a shared experience for how we're interacting with that floating three-dimensional scene without headgear." This opens up the sorts of interactions you can have with the system, and even creates the possibility that more than one user can interact with a project at the same time.

How easy is it to work within this interface? My experience was pretty mixed. When drawing, I could definitely get the system to reflect what I intended. Trying to play the game, however, was a major failure, as it was too hard to judge the depth of the controller relative to the perceived depth of the images. Frayne told us that things do get better with practice, and that younger users tend to have an easier time adjusting.

A future interface?
The hardware Looking Glass is offering right now is not for a casual user—Frayne says they're looking to get it in the hands of developers so that they can start exploring how to effectively use a 3D interface. Still, at $750, it's not too far off the price of some VR headsets on the market, and mass production should bring that price down. The resolution of this version isn't brilliant, but (as mentioned above) this can be increased without changing anything fundamental about its operation.

And there's no question that this is a neat way to experience a 3D interface. It doesn't require any special preparation to get started—no goggles or glasses—and, as the iPhone first demonstrated, using fingers for interaction cuts down greatly on the hassle of manipulating a gesture-based interface. The hardware itself is compact enough that it could easily be adapted to work in things like informational kiosks. And in uses like that, the fact that it's a virtual environment shared with anyone looking could have some big advantages.

There are also some specialty niches where a full 3D environment would seem to be a huge plus, like architecture, manufacturing, and repair shops.

But Frayne is definitely hoping that there will be a general market for a 3D interface. And, while he has some definite ideas of where it might work out, his goal with the developer kit is to let others start thinking about how to apply the technology.
 