The Handbook of Multimodal-Multisensor Interfaces, Volume 1, by Sharon Oviatt
Contents
Introduction: Scope, Trends, and Paradigm Shift in the Field of Computer Interfaces
Why Multimodal-Multisensor Interfaces Have Become Dominant
Flexible Multiple-Component Tools as a Catalyst for Performance
More Expressively Powerful Tools Are Capable of Stimulating Cognition
One Example of How Multimodal-Multisensor Interfaces Are Changing Today
Insights in the Chapters Ahead
Expert Exchange on Multidisciplinary Challenge Topic
PART I THEORY AND NEUROSCIENCE FOUNDATIONS
Chapter 1 Theoretical Foundations of Multimodal Interfaces and Systems
Sharon Oviatt
1.1 Gestalt Theory: Understanding Multimodal Coherence, Stability, and Robustness
1.2 Working Memory Theory: Performance Advantages of Distributing Multimodal Processing
1.3 Activity Theory, Embodied Cognition, and Multisensory-Multimodal Facilitation of Cognition
Karin H. James, Sophia Vinci-Booher, Felipe Munoz-Rubke
2.2 The Multimodal-Multisensory Body
2.3 Facilitatory Effects on Learning as a Function of Active Interactions
2.4 Behavioral Results in Children
2.5 Neuroimaging Studies in Adults
2.6 Neuroimaging Studies in Developing Populations
2.7 Theoretical Implications: Embodied Cognition
2.8 Implications for Multimodal-Multisensor Interface Design
PART II APPROACHES TO DESIGN AND USER MODELING
Chapter 3 Multisensory Haptic Interactions: Understanding the Sense and Designing for It
Karon E. MacLean, Oliver S. Schneider, Hasti Seifi
3.2 Interaction Models for Multimodal Applications
3.3 Physical Design Space of Haptic Media
3.5 Frontiers for Haptic Design
Focus Questions
References
Chapter 4 A Background Perspective on Touch as a Multimodal (and Multisensor) Construct
Ken Hinckley
4.2 The Duality of Sensors and Modalities
4.3 A Model of Foreground and Background Interaction
4.4 Seven Views of Touch Interaction
4.5 Summary and Discussion
Focus Questions
References
Chapter 5 Understanding and Supporting Modality Choices
Anthony Jameson, Per Ola Kristensson
5.2 Synthesis of Research on Modality Choices
5.3 Brief Introduction to the ASPECT and ARCADE Models
5.4 Consequence-Based Choice
5.5 Trial-and-Error-Based Choice
5.6 Policy-Based Choice
5.7 Experience-Based Choice
5.8 Other Choice Patterns
5.9 Recapitulation and Ideas for Future Research
Focus Questions
References
Stefan Kopp, Kirsten Bergmann
6.1 Introduction
6.2 Multimodal Communication with Speech and Gesture
6.3 Models of Speech and Gesture Production
6.4 A Computational Cognitive Model of Speech and Gesture Production
6.5 Simulation-based Testing
6.6 Summary
Focus Questions
References
Chapter 7 Multimodal Feedback in HCI: Haptics, Non-Speech Audio, and Their Applications
Euan Freeman, Graham Wilson, Dong-Bach Vo, Alex Ng, Ioannis Politis, Stephen Brewster
7.1 Overview of Non-Visual Feedback Modalities
7.2 Applications of Multimodal Feedback: Accessibility and Mobility
7.3 Conclusions and Future Directions
Focus Questions
References
Chapter 8 Multimodal Technologies for Seniors: Challenges and Opportunities
Cosmin Munteanu, Albert Ali Salah
8.2 Senior Users and Challenges
8.3 Specific Application Areas
8.4 Available Multimodal-Multisensor Technologies
8.5 Multimodal Interaction for Older Adults—Usability, Design, and Adoption Challenges
8.6 Conclusions
Focus Questions
References
PART III COMMON MODALITY COMBINATIONS
Chapter 9 Gaze-Informed Multimodal Interaction
Pernilla Qvarfordt
9.2 Eye Movements and Eye Tracking Data Analysis
9.3 Eye Movements in Relation to Other Modalities
9.4 Gaze in Multimodal Interaction and Systems
9.5 Conclusion and Outlook
Focus Questions
References
Chapter 10 Multimodal Speech and Pen Interfaces
Philip R. Cohen, Sharon Oviatt
10.2 Empirical Research on Multimodal Speech and Pen Interaction
10.3 Design Prototyping and Data Collection
10.4 Flow of Signal and Information Processing
10.5 Distributed Architectural Components
10.6 Multimodal Fusion and Semantic Integration Architectures
10.7 Multimodal Speech and Pen Systems
10.8 Conclusion and Future Directions
Focus Questions
References
Chapter 11 Multimodal Gesture Recognition
Athanasios Katsamanis, Vassilis Pitsikalis, Stavros Theodorakis, Petros Maragos
11.1 Introduction
11.2 Multimodal Communication and Gestures
11.3 Recognizing Speech and Gestures
11.4 A System in Detail
11.5 Conclusions and Outlook
Focus Questions
References
Chapter 12 Audio and Visual Modality Combination in Speech Processing Applications
Gerasimos Potamianos, Etienne Marcheret, Youssef Mroueh, Vaibhava Goel, Alexandros Koumbaroulis, Argyrios Vartholomaios, Spyridon Thermos
12.1 Introduction
12.2 Bimodality in Perception and Production of Human Speech
12.3 AVASR Applications and Resources
12.4 The Visual Front-End
12.5 Audio-Visual Fusion Models and Experimental Results
12.6 Other Audio-Visual Speech Applications
12.7 Conclusions and Outlook
Focus Questions
References
PART IV MULTIDISCIPLINARY CHALLENGE TOPIC: PERSPECTIVES ON LEARNING WITH MULTIMODAL TECHNOLOGY
Chapter 13 Perspectives on Learning with Multimodal Technology
Karin H. James, James Lester, Dan Schwartz, Katherine M. Cheng, Sharon Oviatt
13.1 Perspectives from Neuroscience and Human-Centered Interfaces
13.2 Perspectives from Artificial Intelligence and Adaptive Computation
13.3 The Enablers: New Techniques and Models
13.4 Opening Up New Research Horizons
13.5 Conclusion
References