Wednesday, August 04, 2004

e21 - MIT Project Oxygen

e21 wishes to invert this: to bring the power of computation into the human world. As one example, collaborative work should not mean having to use a computer; it should instead mean people collaborating as they always have, with face-to-face interaction and the richness of human communication. The computation ought to be embedded in that familiar environment of human interaction -- meeting rooms alive with speech, gesture, and drawing -- rather than embedding (a limited part of) that interaction in the environment of computation, i.e., people typing, clicking, and dragging. Our notion of human-centered computation thus means computation focused on human social interaction in its familiar context.
We aim to make the computation transparent in several senses. First, it ought to be transparent in the sense that it feels natural: we aim for new forms of human-computer interaction that make working with software feel as familiar and natural as interacting with people. Second, it ought to be transparent because it is embedded, i.e., an invisible part of our everyday environments.
We aim to make the computation intelligent because we believe this is crucial to enabling natural interaction. People find it (relatively!) easy to communicate with each other because of shared knowledge and intelligence. We must give our programs some part of that knowledge and intelligence if interaction with them is to be equally easy and natural.
A significant part of that intelligence will be embedded in a variety of interfaces. We are building perceptual interfaces -- eyes and ears -- so that our environments can watch, listen, and talk to us. We are building sketching and gestural interfaces so that those environments can understand when we draw, point, wave, frown, and look interested.
Our work is organized around a number of specific projects. Work on the intelligent room focuses on equipping spaces with the appropriate hardware (e.g., cameras, projectors, microphone arrays, live-boards) and on developing a major body of infrastructure software -- MetaGlue and the 100 or so computational agents that run the space. Work on perceptual interfaces focuses on building the "eyes and ears" for smart interfaces and environments, endowing the computer with a visual awareness of human users so that it can pay attention to their location, gestures, and expressions. These visual interfaces are being integrated with spoken-language understanding systems, enabling seamless use of speech and "body language" to communicate with computational agents.
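To make the agent-based picture concrete, here is a minimal sketch of how room agents might receive perceptual events over a shared message bus. This is a hypothetical illustration only -- MetaGlue itself is a separate Java-based infrastructure, and every class and method name below is invented for this example rather than taken from it.

```python
# Hypothetical sketch of agents wired to a message bus, in the spirit of a
# multi-agent room infrastructure. These names are invented for illustration;
# this is NOT the MetaGlue API.

from collections import defaultdict
from typing import Callable, DefaultDict, List


class MessageBus:
    """Routes events on named topics (e.g. "speech", "gesture") to agents."""

    def __init__(self) -> None:
        self._subscribers: DefaultDict[str, List[Callable[[str], str]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[str], str]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, payload: str) -> List[str]:
        # Deliver the event to every agent listening on this topic and
        # collect their responses.
        return [handler(payload) for handler in self._subscribers[topic]]


class LightAgent:
    """A toy agent that reacts to spoken commands about the room lights."""

    def on_speech(self, utterance: str) -> str:
        if "lights on" in utterance:
            return "lights: on"
        return "lights: ignored"


bus = MessageBus()
bus.subscribe("speech", LightAgent().on_speech)
responses = bus.publish("speech", "please turn the lights on")
```

In a real room, the "speech" events would come from the spoken-language understanding system and the "gesture" events from the vision interfaces, with many such agents running concurrently rather than a single toy subscriber.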
