The ongoing migration of computing and information access from stationary settings to mobile devices, such as Personal Digital Assistants (PDAs), tablet PCs, next-generation mobile phones, and in-car driver assistance systems, poses critical challenges for natural human-computer interaction. Spoken dialogue is a key factor in ensuring natural and user-friendly interaction with such devices, which are meant not only for computer specialists but also for everyday users.
Speech supports hands-free and eyes-free operation and thus becomes a key alternative interaction mode in mobile environments, e.g. in cars, where driver distraction by manually operated devices can be a significant problem. On the other hand, when mobile devices are used in public places, concerns such as privacy may make alternative modalities, such as graphics output and gesture input, possibly in combination with speech, preferable. Researchers' interest is therefore progressively turning to the integration of speech with other modalities, partly to enable more efficient interaction and partly to accommodate different user preferences.
This book:
- combines overview chapters of key areas in spoken multimodal dialogue (systems and components, architectures, and evaluation) with chapters focussed on particular applications or problems in the field;
- focuses on the influence of the environment when building and evaluating an application.
Audience: Computer scientists, engineers, and others working on spoken multimodal dialogue systems in academia and in industry; graduate students and Ph.D. students specialising in spoken multimodal dialogue systems in general, or focusing on the issues these systems raise in mobile environments in particular.