M.I.T. Media Laboratory Vision and Modeling Group Technical Report No. 312
Attention-driven Expression and Gesture Analysis in an Interactive Environment
Trevor Darrell and Alex P. Pentland
To provide natural user interfaces to interactive environments, accurate
and fast recognition of gestures and expressions is needed. We adopt a view-based
gesture recognition strategy that runs in an unconstrained interactive environment,
which uses active vision methods to determine context cues for the view-based
method. Using vision routines already implemented for an interactive environment,
we determine the spatial location of salient body parts and guide an active
camera to obtain foveated images of gestures or expressions. Face recognition
routines are used to estimate the identity of the user and to provide
an index into the best set of view templates to use. The resulting system
combines low-resolution, user-independent processing with high-resolution,
user-specific models, all of which are computed in real time as part of
an interactive environment.
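The high-resolution, user-specific stage described above can be sketched as matching a foveated image patch against a set of view templates. The following is a minimal illustrative sketch, not the authors' implementation: the function names, the use of normalized correlation as the matching score, and the toy data are all assumptions introduced here.

```python
# Illustrative sketch of view-template matching via normalized
# correlation. All names and data are hypothetical placeholders.
import numpy as np

def normalized_correlation(patch, template):
    """Score how well a template matches an equally sized image patch."""
    a = patch - patch.mean()
    b = template - template.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a.ravel() @ b.ravel() / denom) if denom else 0.0

def best_view(foveated_patch, view_templates):
    """Return the label of the best-matching view template."""
    scores = {label: normalized_correlation(foveated_patch, t)
              for label, t in view_templates.items()}
    return max(scores, key=scores.get)

# Toy example: two user-specific gesture templates and a noisy
# foveated observation of one of them.
rng = np.random.default_rng(0)
wave = rng.random((16, 16))
point = rng.random((16, 16))
templates = {"wave": wave, "point": point}

observed = wave + 0.05 * rng.random((16, 16))  # noisy view of "wave"
print(best_view(observed, templates))
```

In a full pipeline, the template set itself would be selected by the face recognition stage, so that each user's own gesture views are matched.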