Multimodal Human Communication

Powered by Visual Analytics

Introduction

Human communication is naturally based on the dynamic exchange of information across multiple channels, such as facial expressions, prosody, speech content, and body language. The multimodal data generated during communication have brought new opportunities to intelligent education and smart training services, especially in classroom and presentation training scenarios. Turning vast amounts of raw information into knowledge and improving data utilization have become key to realizing these opportunities. Visualization is directly and closely connected with knowledge expression and is an important means of interpreting complex data.

We are the MMComVis group of HKUST VisLab. To help people gain a deeper understanding of human communication behavior and performance, we have explored this area for years and published many related works in the field. We have also built several useful techniques for human communication training. We aim to harness the power of visual analytics to effectively utilize and explore the multimodal data generated during communication. If you are interested in our work, please feel free to contact us.

Members

NEWS

Publications


A Visual Analytics Approach to Facilitate the Proctoring of Online Exams

CHI, 2021

Authors: Haotian Li, Min Xu, Yong Wang, Huan Wei, Huamin Qu

Links: [pdf]

VoiceCoach: Interactive Evidence-based Training for Voice Modulation Skills in Public Speaking

CHI, 2020

Authors: Xingbo Wang, Haipeng Zeng, Yong Wang, Aoyu Wu, Zhida Sun, Xiaojuan Ma, and Huamin Qu

Links: [pdf]

EmotionCues: Emotion-Oriented Visual Summarization of Classroom Videos

IEEE Transactions on Visualization and Computer Graphics, 2020

Authors: Haipeng Zeng, Xinhuan Shu, Yanbang Wang, Yong Wang, Liguo Zhang, Ting-Chuen Pong, and Huamin Qu

Links: [pdf]

A Survey on Intelligent User Interfaces for the Learning of Verbal Communication Skills

PhD Qualifying Exam Slides, 2020

Author: Xingbo Wang

Links: [slides]

EmoCo: Visual Analysis of Emotion Coherence in Presentation Videos

IEEE Transactions on Visualization and Computer Graphics, 2019

Authors: Haipeng Zeng, Xingbo Wang, Aoyu Wu, Yong Wang, Quan Li, Alex Endert, and Huamin Qu

Links: [pdf]

SpeechLens: Using Visual Analytics to Explore Narration Strategies in TED Talks

IEEE International Conference on Big Data and Smart Computing, 2019

Authors: Linping Yuan, Yuanzhe Chen, Siwei Fu, Aoyu Wu, and Huamin Qu

Links: [pdf]

Multimodal Analysis of Video Collections: Visual Exploration of Presentation Techniques in TED Talks

IEEE Transactions on Visualization and Computer Graphics, 2018

Authors: Aoyu Wu and Huamin Qu

Links: [pdf]

Collaborations

Contact Us

huamin@cse.ust.hk, xingbo.wang@connect.ust.hk

CYT3007, Hong Kong University of Science and Technology, Clear Water Bay, Sai Kung, New Territories, Hong Kong