An invisible aura of computing

Thanks to research at Carnegie Mellon, the computing environment of your next meeting room might even recognise who you are!

Have you ever experienced technical difficulties during a presentation? The material you needed to prepare the show was scattered among text, spreadsheet and multimedia files on different computers.

After arriving at the meeting, you realise your computer won’t communicate with the projector. In anticipation of an urgent call you’ve left your cell phone on, only to be interrupted several times by trivial calls.

After tape-recording the meeting, you must fast-forward and rewind the tape again and again to locate relevant information.

Thanks to research at Carnegie Mellon, your next presentation might go more smoothly: the meeting room’s computing environment recognises who you are as soon as you enter and finds your show, automatically transmitting it to the projector. It screens your incoming cell phone calls, flashing ones vital to the meeting on the wall while “tapping” you on the arm (via your wearable computer) for private, urgent ones. The whole meeting is automatically videotaped, transcribed and indexed for easy retrieval of information later.
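The call-screening behaviour described above is, at heart, a simple routing decision based on context. The sketch below is purely illustrative, not Carnegie Mellon's implementation; the names (`Call`, `route_call`, the routing labels) are all invented for this example.

```python
# Hypothetical sketch of Aura-style call screening.
# All names here are assumptions for illustration, not CMU's actual API.
from dataclasses import dataclass

@dataclass
class Call:
    caller: str
    urgent: bool
    meeting_related: bool

def route_call(call: Call, in_meeting: bool) -> str:
    """Decide how an incoming call should surface, given meeting context."""
    if not in_meeting:
        return "ring"              # normal behaviour outside meetings
    if call.meeting_related:
        return "flash_on_wall"     # vital to the meeting: show everyone
    if call.urgent:
        return "tap_on_arm"        # private but urgent: nudge via wearable
    return "voicemail"             # trivial calls never interrupt

# An urgent private call during a meeting taps you on the arm:
print(route_call(Call("boss", urgent=True, meeting_related=False), in_meeting=True))
```

The point of the sketch is that the environment, not the user, holds the context (are we in a meeting? is this call relevant?) and makes the interruption decision.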

An interdisciplinary team at Carnegie Mellon is working to make this a reality through the Aura Project. The project aims to provide a total, seamless computing environment—an “invisible aura” of computing.

Dan Siewiorek, an electrical and computer engineering professor and director of the Human-Computer Interaction Institute (HCII), is part of the Aura Project team. “Distraction is a major issue in our electronic world,” he says. “The things we use have become more complicated, and there are more of them, carrying more information. We’re forced to do too much manipulating and bookkeeping.”

Desktop, handheld, wearable and infrastructure computers will all be linkable, along with devices embedded in the walls. They will allow interaction by voice and gesture as well as by keypads and displays. And this interaction will be at the “task level.”

Siewiorek explains, “Instead of clicking applications or files on a desktop, you won’t even think in terms of applications and files. You’ll think in terms of tasks and information. You’ll tell your computer, ‘I want to make a presentation, and here’s what I want to include.’”
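One way to picture the shift Siewiorek describes: instead of the user opening applications and hunting for files, the system keeps an index of information by topic and assembles what a task needs. The snippet below is a toy illustration of that idea; the index, tags and function names are all hypothetical, invented for this sketch.

```python
# Hypothetical illustration of "task-level" interaction: the user names a
# task and its topics; the system, not the user, locates the relevant files.
# FILE_INDEX and prepare_task are invented for this sketch.

FILE_INDEX = {
    "budget.xls": {"finance", "q3"},
    "slides.ppt": {"q3", "review"},
    "notes.txt": {"personal"},
}

def prepare_task(task: str, topics: set[str]) -> list[str]:
    """Gather every indexed file tagged with any of the requested topics."""
    return sorted(f for f, tags in FILE_INDEX.items() if tags & topics)

# "I want to make a presentation, and here's what I want to include."
print(prepare_task("presentation", {"q3"}))  # ['budget.xls', 'slides.ppt']
```

The user never names a file or an application; the mapping from task to resources is the system's job.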

How close is this dream state? Many pieces exist now or are in development at Carnegie Mellon: wearable computers, speech recognition, smart conference rooms, sophisticated new network-management technologies and more. The Aura game plan is to move various pieces along (for instance, building a simple message-flashing system) while tackling the fundamental work that will pull the pieces together.

The key item is “basic research on the task-level architecture,” Siewiorek says. “That would allow all kinds of devices and applications to become aware of their environment and adapt to it. If we do a good job of that, we like to think that lots of people will develop new applications.”

If momentum builds, Aura’s expanded coverage could someday equal that of cell phones or the Internet—which is precisely the big idea.
