Multimodal Human Communication

Powered by Visual Analytics


Human communication is naturally based on the dynamic exchange of information across multiple channels, such as facial expressions, prosody, speech content, and body language. The multimodal data generated during communication have brought new inspiration to intelligent education and smart training services, especially in classroom and presentation training scenarios. Turning vast amounts of information into knowledge and improving data utilization are key opportunities in this direction. Visualization is directly and closely connected with knowledge expression and is an important means of interpreting complex data.

We are the MMComVis group of HKUST VisLab. To help people gain a deeper understanding of human communication behavior and performance, we have explored this area for years and published many related works. We have also built useful techniques for human communication training. We aim to harness the power of visual analytics to effectively utilize and explore the multimodal data generated during communication. If you are interested in our work, please feel free to contact us.



  • Apr. 2022: Our paper GestureLens (visual analysis of gestures in presentation videos) is accepted by IEEE TVCG!
  • Jan. 2022: Our paper Persua (visual analytics of argumentation strategies) is accepted by CSCW 2022!
  • Sep. 2021: Our paper M^2Lens (explaining multimodal models) is accepted by IEEE VIS 2021 and receives a Best Paper Honorable Mention!
  • Jul. 2021: Our paper DeHumor (multimodal humor analysis in public speeches) is accepted by IEEE TVCG, 2021!
  • Jan. 2021: Our paper, which presents a visual analytics approach for online proctoring, is accepted by CHI 2021!
  • Nov. 2020: Dr. Haipeng Zeng joins Sun Yat-sen University as a tenure-track assistant professor.
  • Oct. 2020: The LifeHikes app launches! Available globally on both the iOS App Store and the Google Play Store.
  • Sep. 2020: Dr. Yong Wang joins Singapore Management University as a tenure-track assistant professor.
  • Jan. 2020: EmotionCues (emotion-oriented classroom video summarization) is covered by IEEE Spectrum!
  • Dec. 2019: After a two-year journey, our paper EmotionCues is finally accepted by TVCG!
  • Dec. 2019: Our paper VoiceCoach (for voice modulation training) is accepted by CHI 2020! See you in Honolulu!
  • Oct. 2019: Our paper EmoCo (emotion coherence) is accepted by IEEE VIS 2019! See you in Vancouver!
  • Feb. 2019: Our paper SpeechLens is accepted by IEEE BigComp! See you in Japan!
  • Dec. 2018: Our paper TEDVis is accepted by TVCG!
  • Oct. 2018: Our project EmotionCues is highlighted by Japan's NHK TV.



GestureLens: Visual Analysis of Gestures in Presentation Videos

IEEE Transactions on Visualization and Computer Graphics, 2022

Authors: Haipeng Zeng, Xingbo Wang, Yong Wang, Aoyu Wu, Ting Chuen Pong, and Huamin Qu

Links: [pdf]

Persua: A Visual Interactive System to Enhance the Persuasiveness of Arguments in Online Discussion

The 25th ACM Conference On Computer-Supported Cooperative Work And Social Computing (CSCW 2022)

Authors: Meng Xia, Qian Zhu, Xingbo Wang, Fei Nie, Huamin Qu, and Xiaojuan Ma

Links: [pdf]

M^2Lens: Visualizing and Explaining Multimodal Models for Sentiment Analysis

IEEE Transactions on Visualization and Computer Graphics (IEEE VIS 2021)

Authors: Xingbo Wang, Jianben He, Zhihua Jin, Muqiao Yang, Yong Wang, and Huamin Qu

Links: [pdf]

DeHumor: Visual Analytics for Decomposing Humor

IEEE Transactions on Visualization and Computer Graphics, 2021

Authors: Xingbo Wang, Yao Ming, Tongshuang Wu, Haipeng Zeng, Yong Wang, and Huamin Qu

Links: [pdf]

A Visual Analytics Approach to Facilitate the Proctoring of Online Exams

CHI, 2021

Authors: Haotian Li, Min Xu, Yong Wang, Huan Wei, and Huamin Qu

Links: [pdf]

VoiceCoach: Interactive Evidence-based Training for Voice Modulation Skills in Public Speaking

CHI, 2020

Authors: Xingbo Wang, Haipeng Zeng, Yong Wang, Aoyu Wu, Zhida Sun, Xiaojuan Ma, and Huamin Qu

Links: [pdf]

EmotionCues: Emotion-Oriented Visual Summarization of Classroom Videos

IEEE Transactions on Visualization and Computer Graphics, 2020

Authors: Haipeng Zeng, Xinhuan Shu, Yanbang Wang, Yong Wang, Liguo Zhang, Ting-Chuen Pong, and Huamin Qu

Links: [pdf]

A Survey on Intelligent User Interfaces for the Learning of Verbal Communication Skills

PhD Qualifying Exam Slides, 2020

Author: Xingbo Wang

Links: [slides]

EmoCo: Visual Analysis of Emotion Coherence in Presentation Videos

IEEE Transactions on Visualization and Computer Graphics, 2019

Authors: Haipeng Zeng, Xingbo Wang, Aoyu Wu, Yong Wang, Quan Li, Alex Endert, and Huamin Qu

Links: [pdf]

SpeechLens: Using Visual Analytics to Explore Narration Strategies in TED Talks

IEEE International Conference on Big Data and Smart Computing, 2019

Authors: Linping Yuan, Yuanzhe Chen, Siwei Fu, Aoyu Wu, and Huamin Qu

Links: [pdf]

Multimodal Analysis of Video Collections: Visual Exploration of Presentation Techniques in TED Talks

IEEE Transactions on Visualization and Computer Graphics, 2018

Authors: Aoyu Wu and Huamin Qu

Links: [pdf]


Contact Info

Xingbo Wang:

CYT3007, Hong Kong University of Science and Technology, Clear Water Bay, Sai Kung, New Territories, Hong Kong