Taking a look at 3D interfaces

I’m spending the better part of my summer working on a software engineering research project at Virginia Tech’s computer science department. Coming from a small liberal arts college with an equally small computer science department, it’s quite an interesting experience for me to learn about all the cool things that the various groups, faculty members and grad students are doing. The department has a well-established Research Experience for Undergraduates program for students in the area of human-computer interaction (HCI). Though my work isn’t really HCI, I tag along with the other students in the program and as a result get to attend interesting presentations and discussions by faculty members working on HCI-related problems.

Today we got to listen to a presentation by Dr. Nicholas Polys, the Director of the Visual Computing Group. His presentation was mainly about 3D interfaces, how they were being implemented (on open W3C standards) and how people were using them to solve real problems. I thought the projects he showed us were interesting, but I had one major problem with what we were shown: even though these 3D environments were really well thought out, users would still interact with them via flat 2D monitors and devices like mice and keyboards, which aren’t the best for navigating a complex 3D environment. The way I’d like to see a 3D environment work would be something like the Iron Man movie, where Tony Stark manipulates a 3D projection of his suit design using simple, intuitive and direct hand movements.

Dr. Polys answered my question by suggesting that 3D interfaces help primarily by letting you fit more information into the same display space, and that a fully immersive “move-your-hands” interaction system might be too cumbersome for day-to-day use. But he acknowledged that I had brought up valid questions worth looking into. The presentation was fun, but even better was the demonstration we got later of the Virginia Tech CAVE. The CAVE is an immersive virtual reality environment, but instead of being used for entertainment or simulation, it’s used for the visualization of scientific data.

The system is actually quite simple in concept. Projectors cast images onto large screens on three sides of the user and on the floor below. The images are special in that they take advantage of the fact that our two eyes see slightly different views of the world. Combined with special glasses that synchronize with the projectors to block the corresponding image for each eye, they give the illusion of being in a fully three-dimensional environment. Any number of people can wear these glasses and stand in the CAVE, but there is one ‘pilot’ whose glasses carry sensors that allow for head-movement tracking: the images change to adjust to how the pilot is looking at them. Movement is via a simple pointing device that allows for six degrees of freedom. The CAVE information page has more details and videos as well.
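To make the stereo trick concrete, here’s a minimal sketch of the idea in Python (the names and values are my own illustrations, not anything from the actual CAVE software): the scene is rendered twice per frame, from two virtual cameras offset by roughly the distance between a viewer’s eyes.

```python
import numpy as np

# A toy illustration of CAVE-style stereo: the scene is drawn twice per
# frame, from two virtual cameras offset by the interocular distance, and
# shutter glasses ensure each eye only ever sees its own image.

IOD = 0.065  # typical human interocular distance in metres (an assumption)

def eye_positions(head_pos, head_right):
    """Offset the tracked head position along the head's 'right' axis to
    get a virtual camera position for each eye."""
    half = 0.5 * IOD * head_right / np.linalg.norm(head_right)
    return head_pos - half, head_pos + half  # (left eye, right eye)

# Hypothetical tracker reading: pilot's head 1.7 m up, 'right' axis = +x.
left_eye, right_eye = eye_positions(np.array([0.0, 1.7, 0.0]),
                                    np.array([1.0, 0.0, 0.0]))

# A real renderer would now draw the scene from each position in
# alternating frames; the small disparity between the two images is what
# the brain fuses into the depth illusion.
print("left eye at", left_eye, "right eye at", right_eye)
```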

After interacting with a number of different environments and simulations, I came away with a few observations (which most of my friends agreed with). The most realistic simulations aren’t the fully immersive environments, but rather projections of smaller discrete objects. Our first demonstration was a set of projections of various insects. These seemed very real, and it was quite easy to believe that they weren’t projected onto the walls but were actually occupying the space in between them. I think this was because they were small, discrete objects and because they were rendered with a very high level of detail compared to some of the later demonstrations.

The most interesting demo we saw was a 3D model of the myoglobin molecule. This was representative of the scientific visualization work that the CAVE is primarily used for. It was interesting to fly around the molecule and see how the different atoms and ions connected together. The remaining simulations were all of real-world environments. One was a model of a solar-powered house that Virginia Tech students had designed, and another was a model of a city used for early detection of Alzheimer’s by testing subjects’ ability to navigate streets and complete day-to-day tasks. They were both very interesting, but seemed less real than the bug or molecule simulations. For one thing, the graphics were of much lower quality so that they could be rendered fast enough to stay responsive, and because the scenes filled the entire field of view, it was easier to notice the edges where the walls met, which hindered the illusion considerably.

We all had a really good time, and I’m sure it’s an experience many of us will remember for a long time. In the end, I have mixed feelings about 3D interfaces. The bug simulation has convinced me that they’re very useful for looking at small objects or designs where you’d like to be able to move the object around and look at it from all angles. They’re also good for larger models (such as molecules) as long as you’re not looking for photo-realism. I think they’re definitely worth using when a simple 2D image or even a 3D model on a small screen just doesn’t cut it. However, the technology isn’t quite at the stage where a full-blown immersive simulation like a city can be made to look real enough to be truly satisfying (especially if your standards are the 3D environments you see in modern computer games). Immersive virtual reality technologies like the CAVE are definitely important, and I’m sure more and more scientists will be using them for modelling and visualization work in the near future.

3D environments on the desktop are a somewhat different matter. Using 3D models for things like design is of course very helpful, but as a general paradigm for interaction, I think 3D on the desktop isn’t a very good idea, at least not with the current interface tools that we have. Controlling a 3D setup with a mouse can be very tiresome at times, and I don’t like the idea of having to ‘walk’ through a virtual space to find something when I could find it much faster if it were laid out as a simple menu or set of buttons. Things like BumpTop and Shock Desktop 3D look really cool, but I wonder how easy they would be to use in day-to-day work. Then again, I’m pretty much a confirmed minimalist, so I’m probably biased. I think the way modern desktops work in 2.5D, with 3D-like effects (piling windows on top of each other, transparency, gradients) but a fundamentally 2D interface, is a good way for people to work while they’re looking at 2D screens. Of course, automating those features and making them easy to use is wonderful; Apple’s Exposé is a good example.

What I’d really like to see is some sort of projection technology allowing people to interact with 3D representations in a simple way. That being said, I don’t think that 3D interfaces are going to take over any time soon. The truth is that the simple 2D format is deeply entrenched and works well enough for most intents and purposes. The keyboard and plain text are also a very efficient way of communicating with a computer, and they’re not going away anytime soon (though I would really like to see large touchscreens become cheap). 3D interfaces are an interesting technology and I would love to see them evolve. I have my doubts as to whether or not they’re ready for the mainstream, but they’re certainly worth looking at, especially if you do data visualization of any kind.
