Integrated Audio/Visual Speaker Detection using Dynamic Bayesian Networks.
Ashutosh Garg, Vladimir Pavlovic, and James M. Rehg
The development of human-computer interfaces poses a challenging problem:
actions and intentions of different users have to be inferred from sequences
of noisy and ambiguous sensory data. Temporal fusion of multiple sensors can
be efficiently formulated using the "language" of dynamic Bayesian networks
(DBNs). The DBN framework allows the power of statistical inference and learning
to be combined with contextual knowledge of the problem. We demonstrate the use
of DBNs in tackling the problem of audio/visual speaker detection.
"Off-the-shelf" visual and audio sensors (face, skin, texture, mouth motion,
and silence detectors) are optimally fused along with contextual information
in a DBN architecture which infers instances when an individual is speaking.
Results obtained in the setup of an actual human-machine interaction system
(Genie Casino Kiosk) demonstrate the superiority of our approach over that of a
static, context-free fusion architecture.
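The core idea, temporal fusion of noisy sensors in a DBN, can be illustrated with a minimal sketch. The example below is a simple two-state hidden Markov model with factored observations (one mouth-motion detector, one audio detector), not the paper's actual architecture; all transition and likelihood values are illustrative assumptions.

```python
# Minimal DBN-style sensor fusion sketch (illustrative, not the
# paper's architecture): a two-state HMM where the hidden state is
# 0 = silent, 1 = speaking, and two binary detectors are assumed
# conditionally independent given the state.

# P(next state | current state): speaking/silence tends to persist.
TRANSITION = [[0.9, 0.1],   # from silent
              [0.2, 0.8]]   # from speaking

# P(detector fires | state); hypothetical reliabilities.
P_MOUTH = [0.2, 0.9]   # mouth-motion detector
P_AUDIO = [0.1, 0.8]   # audio (non-silence) detector

def forward_step(belief, mouth, audio):
    """One step of forward filtering: predict through the
    transition model, then weight by each sensor's likelihood."""
    predicted = [sum(belief[i] * TRANSITION[i][j] for i in range(2))
                 for j in range(2)]
    likelihood = [(P_MOUTH[j] if mouth else 1 - P_MOUTH[j]) *
                  (P_AUDIO[j] if audio else 1 - P_AUDIO[j])
                  for j in range(2)]
    posterior = [predicted[j] * likelihood[j] for j in range(2)]
    z = sum(posterior)
    return [p / z for p in posterior]

# Fuse a short observation sequence: two frames where both detectors
# fire, then one frame where neither does.
belief = [0.5, 0.5]
for mouth, audio in [(1, 1), (1, 1), (0, 0)]:
    belief = forward_step(belief, mouth, audio)
```

Because the belief is propagated through the transition model at every frame, a single contradictory observation shifts but does not instantly flip the estimate; this temporal smoothing is what a static, context-free classifier lacks.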