Kismet is a robot built at MIT in the late 1990s with auditory, visual, and expressive systems, intended to participate in human social interaction and to demonstrate simulated human emotion. To interact properly with human beings, it has input devices that give it auditory, visual, and proprioceptive abilities. Kismet simulates emotion through facial expressions, vocalizations, and movement; facial expressions are produced by movements of the ears, eyebrows, eyelids, lips, jaw, and head.
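One way to picture the expressive system is as a table mapping an emotion label to target positions for the facial actuators, with interpolation between poses for smooth transitions. The sketch below is purely illustrative: the actuator names, value ranges, and pose numbers are assumptions, not Kismet's actual control interface.

```python
from dataclasses import dataclass

@dataclass
class FacePose:
    ears: float      # -1 (flattened) .. 1 (perked) -- illustrative range
    eyebrows: float  # -1 (furrowed)  .. 1 (raised)
    lips: float      # -1 (frown)     .. 1 (smile)
    jaw: float       #  0 (closed)    .. 1 (open)

# Hypothetical expression table: each emotion is a target pose that
# low-level motor controllers would drive the servos toward.
EXPRESSIONS = {
    "happy":     FacePose(ears=0.8,  eyebrows=0.4,  lips=0.9,  jaw=0.3),
    "sad":       FacePose(ears=-0.7, eyebrows=-0.2, lips=-0.8, jaw=0.0),
    "surprised": FacePose(ears=0.9,  eyebrows=1.0,  lips=0.2,  jaw=0.8),
}

def blend(a: FacePose, b: FacePose, t: float) -> FacePose:
    """Linearly interpolate between two poses so the face moves smoothly."""
    lerp = lambda x, y: x + t * (y - x)
    return FacePose(lerp(a.ears, b.ears), lerp(a.eyebrows, b.eyebrows),
                    lerp(a.lips, b.lips), lerp(a.jaw, b.jaw))
```

For example, `blend(EXPRESSIONS["sad"], EXPRESSIONS["happy"], 0.5)` yields a pose halfway between sadness and happiness, the kind of intermediate expression that makes a transition look continuous rather than abrupt.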
Four color CCD cameras are mounted on a stereo active vision head; two wide-field-of-view cameras allow Kismet to decide what to pay attention to and to estimate distances. A 0.5-inch CCD foveal camera with an 8 mm focal-length lens is used for higher-resolution post-attentional processing, such as eye detection.
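Distance estimation from a stereo pair typically follows the pinhole model: depth is inversely proportional to the disparity between where a point appears in the two images. A minimal sketch, with made-up focal length, baseline, and disparity values rather than Kismet's actual calibration:

```python
def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Distance in meters to a point seen in both cameras,
    via the pinhole stereo relation Z = f * B / d.

    focal_px     -- focal length expressed in pixels
    baseline_m   -- distance between the two camera centers, meters
    disparity_px -- horizontal shift of the point between images, pixels
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

With an assumed 800-pixel focal length and a 10 cm baseline, a 40-pixel disparity corresponds to a point 2 m away; nearer objects produce larger disparities, which is why a short-baseline head can only resolve depth reliably at close range.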
By wearing a small microphone, a user can influence Kismet's behavior. The auditory signal is carried to a 500 MHz PC running Linux, which runs real-time, low-level speech-processing software developed at MIT by the Spoken Language Systems Group. A 450 MHz PC running NT then processes these features in real time to recognize the spoken affective intent of the caregiver.
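Affective-intent recognition of this kind generally works on prosody rather than words: loudness and pitch contour distinguish, say, approval from prohibition. The sketch below is a toy illustration of that idea, assuming RMS energy and a zero-crossing-rate pitch proxy as the low-level features; the thresholds and class names are invented for illustration and are not the MIT system's actual rules.

```python
import math

def prosody_features(samples: list[float], sample_rate: int) -> dict:
    """Extract two crude prosodic features from a mono audio frame."""
    # RMS energy: overall loudness of the frame.
    energy = math.sqrt(sum(s * s for s in samples) / len(samples))
    # Zero-crossing rate as a rough pitch estimate (two crossings per cycle).
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0)
    pitch_hz = crossings * sample_rate / (2 * len(samples))
    return {"energy": energy, "pitch_hz": pitch_hz}

def classify_affect(features: dict) -> str:
    """Toy decision rule: loud + high pitch -> approval,
    loud + low pitch -> prohibition, quiet -> neutral."""
    if features["energy"] > 0.3:
        return "approval" if features["pitch_hz"] > 250 else "prohibition"
    return "neutral"
```

A real system would use robust pitch tracking and a trained classifier over pitch contours, but the pipeline shape is the same: one stage extracts low-level features from the signal, a second stage maps them to an affective category.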
In addition to the computers mentioned above, Kismet's control network includes four Motorola 68332 microcontrollers, nine 400 MHz PCs, and another 500 MHz PC.