MuSe '21: Proceedings of the 2nd on Multimodal Sentiment Analysis Challenge
ACM 2021 Proceedings
Publisher:
  • Association for Computing Machinery, New York, NY, United States
Conference:
MM '21: ACM Multimedia Conference, Virtual Event, China, 24 October 2021
ISBN:
978-1-4503-8678-4

Abstract

It is our great pleasure to welcome you to the 2nd Multimodal Sentiment Analysis Challenge and Workshop (MuSe 2021), held in conjunction with ACM Multimedia 2021. The MuSe challenge and associated workshop continue to push the boundaries of integrated audio-visual and text-based sentiment analysis and emotion sensing. In its 2nd edition, we posed the problem of predicting continuous-valued dimensional affect in YouTube reviews and stress-induced scenarios. Further tasks were the classification of five artificially created arousal and valence classes, and the recognition of a fused physio-arousal signal, likewise in a stressful situation.

The mission of the MuSe Challenge and Workshop is to provide a common benchmark for multimodal information processing and to bring together the symbol-based Sentiment Analysis and the signal-based Affective Computing communities, comparing the merits of multimodal fusion for the three core modalities under well-defined conditions. Another motivation is the need to advance sentiment and emotion recognition systems so that they can handle unsegmented, previously unexplored naturalistic behaviour in large amounts of in-the-wild data, as this is exactly the type of data we face in real life. As you will see, these goals have been reached through the selection of the data and the (challenge) contributions.

SESSION: Keynotes
keynote
Getting Really Wild: Challenges and Opportunities of Real-World Multimodal Affect Detection

Affect detection in the "real" wild - where people go about their daily routines in their homes and workplaces - is arguably a different problem than affect detection in the lab or in the "quasi" wild (e.g., YouTube videos). How will our affect ...

keynote
New Directions in Emotion Theory

Emotional intelligence is a fundamental component towards a complete and natural interaction between human and machine. Towards this goal several emotion theories have been exploited in the affective computing domain. Along with the studies developed in ...

SESSION: Papers
research-article
The MuSe 2021 Multimodal Sentiment Analysis Challenge: Sentiment, Emotion, Physiological-Emotion, and Stress

Multimodal Sentiment Analysis (MuSe) 2021 is a challenge focusing on the tasks of sentiment and emotion, as well as physiological-emotion and emotion-based stress recognition through more comprehensively integrating the audio-visual, language, and ...

research-article
Multimodal Emotion Recognition and Sentiment Analysis via Attention Enhanced Recurrent Model

With the proliferation of user-generated videos on online platforms, it becomes particularly important to achieve automatic perception and understanding of human emotion/sentiment from these videos. In this paper, we present our solutions to the MuSe-...

research-article
Multi-modal Fusion for Continuous Emotion Recognition by Using Auto-Encoders

Human stress detection is of great importance for monitoring mental health. The Multimodal Sentiment Analysis Challenge (MuSe) 2021 focuses on emotion, physiological-emotion, and stress recognition as well as sentiment classification by exploiting ...

research-article
Hybrid Multimodal Fusion for Dimensional Emotion Recognition

In this paper, we present our solutions for the MuSe-Stress and MuSe-Physio sub-challenges of the Multimodal Sentiment Analysis Challenge (MuSe) 2021. The goal of the MuSe-Stress sub-challenge is to predict the level of emotional arousal and ...

research-article
Multi-modal Stress Recognition Using Temporal Convolution and Recurrent Network with Positional Embedding

Chronic stress contributes to cancer, cardiovascular disease, depression, and diabetes, and is therefore profoundly harmful to physiological and psychological health. Various works have examined ways to identify, prevent, and manage people's stress conditions by ...

research-article
Multimodal Fusion Strategies for Physiological-emotion Analysis

Physiological-emotion analysis is a novel aspect of automatic emotion analysis. It can support revealing a subject's emotional state, even if he/she consciously suppresses the emotional expression. In this paper, we present our solutions for the MuSe-...

research-article
Fusion of Acoustic and Linguistic Information using Supervised Autoencoder for Improved Emotion Recognition

Automatic recognition of human emotion has a wide range of applications and has always attracted increasing attention. Expressions of human emotions can apparently be identified across different modalities of communication, such as speech, text, mimics, ...

research-article
Multimodal Sentiment Analysis based on Recurrent Neural Network and Multimodal Attention

Automatic estimation of emotional state has a wide range of applications in human-computer interaction. In this paper, we present our solutions for the MuSe-Stress and MuSe-Physio sub-challenges of the Multimodal Sentiment Analysis Challenge (MuSe) 2021. The goal of these two ...

research-article
A Physiologically-Adapted Gold Standard for Arousal during Stress

Emotion is an inherently subjective psycho-physiological human state and to produce an agreed-upon representation (gold standard) for continuously perceived emotion requires time-consuming and costly training of multiple human annotators. With this in ...

research-article
MuSe-Toolbox: The Multimodal Sentiment Analysis Continuous Annotation Fusion and Discrete Class Transformation Toolbox

We introduce the MuSe-Toolbox - a Python-based open-source toolkit for creating a variety of continuous and discrete emotion gold standards. In a single framework, we unify a wide range of fusion methods and propose the novel Rater Aligned Annotation ...
