Conversational Humanoid

Humanoid robots are one of the fastest-developing areas in robotics. Many research institutes and companies are conducting research in areas such as human-robot interaction and biomimetics. Companies such as Honda and Sony, together with leading institutions in Japan and the US, have developed many interesting humanoid technologies. One such example is the conversational humanoid from the MIT Media Lab.

They are developing autonomous agents capable of having a real-time, face-to-face conversation with a human. These agents are human in form and communicate using both verbal and non-verbal modalities. The team believes that such agents provide a new form of human-computer interface that users can interact with naturally, without training, since they already know how to engage in face-to-face conversation with other people. In addition to providing a platform for evaluating theories of human communicative behavior, these agents can be used in applications ranging from virtual salespeople and support personnel to virtual playmates for children.

Their first-generation system was Animated Conversation, in which two autonomous humanoid animated characters carried on a conversation with each other. While there was no human participant in these dialogues, Animated Conversation was the first system to automatically generate verbal and non-verbal communicative behaviors such as hand gestures, facial displays, intonation, and speech.

The second-generation system was Gandalf, an animated cartoon face on a screen that could answer spoken questions about the solar system. Gandalf sensed the user's motion through a worn electromagnetic tracking system, and thus could respond to non-verbal behavior as well as verbal behavior. Although Gandalf operated in real time, his outputs were simply selected from a library of stock responses.
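To make the "stock responses" idea concrete, here is a minimal sketch (not the actual Gandalf code) of how a system might map a recognized verbal or gestural event to a canned reply; the questions, events, and replies below are invented examples.

```python
# Hypothetical stock-response lookup: each recognized event (a spoken question
# or a detected gesture) maps directly to a pre-written reply.
STOCK_RESPONSES = {
    "what is mars": "Mars is the fourth planet from the Sun.",
    "how many moons does jupiter have": "Jupiter has dozens of known moons.",
    "user_points_at_planet": "That one? Let me tell you about it.",
}

def respond(event: str) -> str:
    """Return a canned reply for a recognized verbal or gestural event."""
    return STOCK_RESPONSES.get(event.lower().strip(), "I'm not sure about that.")

print(respond("What is Mars"))  # -> "Mars is the fourth planet from the Sun."
```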

The next generation has a fully articulated body and senses the user passively through cameras. The agent, named Rea (for Real Estate Agent), plays the role of a real estate salesperson who interacts with users to determine their needs, shows them around virtual properties, and attempts to sell them a house. The team selected real estate sales as the application area because it offers opportunities for both task-oriented and socially oriented conversation. Unlike Gandalf, Rea actually synthesizes her responses, including speech and accompanying hand gestures, based on a grammar, a lexicon, and the communicative context.
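The contrast with Gandalf is that Rea composes an utterance rather than picking one from a list. The following sketch is only a toy illustration of that idea, assuming an invented lexicon that pairs each house feature with a phrase and a gesture label; it is not the MIT team's implementation.

```python
# Hypothetical lexicon: each house feature maps to a phrase and a suggested gesture.
LEXICON = {
    "garden": ("a sunny garden out back", "sweep_arm_wide"),
    "kitchen": ("a newly renovated kitchen", "point_right"),
    "price": ("an asking price well within your budget", "open_palms"),
}

def synthesize_utterance(feature: str, context: dict) -> dict:
    """Compose speech plus an accompanying gesture from the lexicon and context."""
    phrase, gesture = LEXICON[feature]
    # Simple context sensitivity: acknowledge a need the user has already stated.
    if context.get("user_need") == feature:
        speech = f"Since you asked about the {feature}, this house has {phrase}."
    else:
        speech = f"This house also has {phrase}."
    return {"speech": speech, "gesture": gesture}

print(synthesize_utterance("garden", {"user_need": "garden"}))
```

Even in this toy form, the output pairs a generated sentence with a gesture cue, which is the essential difference from a stock-response system.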

Creating a synthetic human is a large undertaking that introduces a wide range of hard research issues. Current research directions the team is pursuing include the recognition of classes of user conversational hand gestures, the synthesis of Rea's hand gestures based on a more detailed understanding of pragmatic information, and the planning of mixed-initiative dialog, including non-task-oriented 'small talk' and conversational storytelling. The research version of Rea runs on a collection of five SGIs and PCs. A German version of Rea has also been developed, and the team is currently working on a PC-based application of the technology in which the agent plays the role of a child's virtual playmate.

The team includes Justine Cassell, Tim Bickmore, Lee Campbell, Hannes Vilhjálmsson, and Hao Yan from the MIT Media Lab.


Reference: http://www.media.mit.edu/gnl/projects/humanoid/