Quantum-inspired Multimodal Fusion for Video Sentiment Analysis

Published in Information Fusion, 2021

Recommended citation: Qiuchi Li, Dimitris Gkoumas, Christina Lioma, and Massimo Melucci. (2021). "Quantum-inspired Multimodal Fusion for Video Sentiment Analysis." In Information Fusion. https://qiuchili.github.io/files/if2020-1.pdf

We tackle the crucial challenge of fusing different modalities of features for multimodal sentiment analysis. Existing approaches, mainly based on neural networks, largely model multimodal interactions in an implicit, hard-to-interpret manner. We address this limitation by drawing inspiration from quantum theory, which offers principled methods for modeling complicated interactions and correlations. In our quantum-inspired framework, word interactions within a single modality and interactions across modalities are formulated with superposition and entanglement, respectively, at different stages. A complex-valued neural network implementation of the framework achieves results comparable to state-of-the-art systems on two benchmark video sentiment analysis datasets. At the same time, we extract unimodal and bimodal sentiment directly from the model to interpret the entangled decision.

Download paper here
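The core quantum-theoretic ingredients can be illustrated in a few lines. The sketch below (a simplified illustration, not the paper's trained model: the amplitudes, state dimensions, and variable names are all hypothetical) shows a word or modality represented as a superposition, i.e. a unit-norm complex vector, a bimodal state built as a tensor product of unimodal states, and the Born-rule measurement that turns amplitudes into probabilities. Entangled bimodal states are exactly those that cannot be written as such a product.

```python
import numpy as np

def superpose(amplitudes):
    """Normalize complex amplitudes into a unit-norm superposition state."""
    v = np.asarray(amplitudes, dtype=complex)
    return v / np.linalg.norm(v)

# Two illustrative unimodal states, e.g. textual and acoustic features
# of one utterance (values are arbitrary, for demonstration only).
text = superpose([1 + 1j, 0.5])
audio = superpose([0.8, 0.6j])

# A separable bimodal state is the Kronecker (tensor) product of the two;
# entangled states are those NOT expressible as such a product.
bimodal = np.kron(text, audio)

# Born rule: the probability of observing basis state |e_i> is |<e_i|psi>|^2,
# so squared moduli of the amplitudes form a probability distribution.
probs = np.abs(bimodal) ** 2
print(np.isclose(probs.sum(), 1.0))  # probabilities sum to 1
```

In the actual framework such states are produced and measured by complex-valued neural network layers, with the measurement outcomes mapped to sentiment; the tensor-product structure is what lets unimodal and bimodal contributions be read off separately.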