The 2016 Emotional Impact of Movies Task
Register to participate in this challenge on the MediaEval 2016 registration site.

Affective video content analysis aims at the automatic recognition of emotions elicited by videos. It has a large number of applications, including emotion-based personalized content delivery, video indexing, and summarization. While major progress has been achieved in computer vision for visual object detection, scene understanding, and high-level concept recognition, a natural next step is the modeling and recognition of affective concepts. This has recently received increasing interest from research communities such as computer vision and machine learning, with the overall goal of endowing computers with human-like perception capabilities. This task therefore offers researchers a place to compare their approaches to predicting the emotional impact of movies.

This task builds on previous years' editions of the Affect in Multimedia Task: Violent Scenes Detection. It is not necessary to have participated in previous years to be successful in the 2016 task.

There are two subtasks:
1. Global emotion prediction: given a short video clip (around 10 seconds), participants’ systems are expected to predict a score of induced valence (negative-positive) and induced arousal (calm-excited) for the whole clip;
2. Continuous emotion prediction: as the emotion felt during a scene may be influenced by the emotions felt during the previous ones, the purpose here is to consider longer videos and to predict valence and arousal continuously along the video. Thus, a score of induced valence and arousal should be provided for each 1-second segment of the video (a hypothetical run-file sketch follows this list).
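To make the expected output concrete, below is a minimal Python sketch of a run file for the continuous subtask. The column layout (movie id, segment start in seconds, valence, arousal) and the file name are assumptions for illustration only; the official submission format will be specified with the task release.

    import csv

    def write_continuous_run(path, movie_id, valence, arousal):
        """Write one induced valence/arousal pair per 1-second segment."""
        assert len(valence) == len(arousal)
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["movie", "second", "valence", "arousal"])
            for t, (v, a) in enumerate(zip(valence, arousal)):
                writer.writerow([movie_id, t, f"{v:.4f}", f"{a:.4f}"])

    # Example: dummy neutral predictions for a 120-second movie.
    write_continuous_run("run1.csv", "movie_07", [0.0] * 120, [0.0] * 120)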

Target group
This task targets (but is not limited to) researchers in the areas of multimedia information retrieval, machine learning, event-based processing and analysis, affective computing and multimedia content analysis.

Data
The dataset used in this task is the LIRIS-ACCEDE dataset (liris-accede.ec-lyon.fr). It contains videos from a set of 160 professionally made and amateur movies, shared under Creative Commons licenses that allow redistribution.

For the first subtask, 9,800 video excerpts (each around 10 seconds long) are provided with global valence and arousal annotations. For the second subtask, 30 movies are provided with continuous valence and arousal annotations.

Additional data and annotations for both subtasks will be provided as the test set.

In addition to the data, participants will also be provided with general purpose audio and visual content descriptors.
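As an illustration of what such general-purpose descriptors can look like, here is a minimal sketch that computes per-clip color statistics with OpenCV. This is not the official feature set distributed with the task; it only assumes that a clip is available as a readable video file.

    import cv2
    import numpy as np

    def mean_hsv_descriptor(video_path):
        """Return the mean hue, saturation, and value over all frames."""
        cap = cv2.VideoCapture(video_path)
        frame_stats = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break  # end of video
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            frame_stats.append(hsv.reshape(-1, 3).mean(axis=0))
        cap.release()
        return np.mean(frame_stats, axis=0)  # 3-dimensional clip descriptor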

In solving the task, participants are expected to exploit the provided resources. However, the use of external resources (e.g., Internet data) will be allowed in specific runs.

Ground truth and evaluation
Standard evaluation metrics will be used to assess system performance, including Mean Squared Error and the Pearson correlation coefficient.
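For reference, both metrics are straightforward to compute. The following sketch assumes that predictions and ground truth are equal-length sequences of valence (or arousal) scores, and uses NumPy and SciPy; the toy scores are made up for illustration.

    import numpy as np
    from scipy.stats import pearsonr

    def mean_squared_error(y_true, y_pred):
        """Average squared difference between ground truth and predictions."""
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        return float(np.mean((y_true - y_pred) ** 2))

    # Toy example with made-up scores.
    y_true = [0.2, -0.1, 0.5, 0.0]
    y_pred = [0.1, 0.0, 0.4, 0.2]
    print(mean_squared_error(y_true, y_pred))  # lower is better
    print(pearsonr(y_true, y_pred)[0])         # closer to 1 is better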

Recommended reading
[1] Sjöberg, M., Baveye, Y., Wang, H., Quang, V. L., Ionescu, B., Dellandréa, E., Schedl, M., Demarty, C.-H., Chen, L. The MediaEval 2015 Affective Impact of Movies Task. In MediaEval 2015 Workshop, 2015.

[2] Baveye, Y., Dellandréa, E., Chamaret, C., Chen, L. LIRIS-ACCEDE: A Video Database for Affective Content Analysis. In IEEE Transactions on Affective Computing, 2015.

[3] Baveye, Y., Dellandréa, E., Chamaret, C., Chen, L. Deep Learning vs. Kernel Methods: Performance for Emotion Prediction in Videos. In 2015 Humaine Association Conference on Affective Computing and Intelligent Interaction (ACII), 2015.

[4] Eggink, J. A Large Scale Experiment for Mood-Based Classification of TV Programmes. In IEEE ICME, 2012.

[5] Benini, S., Canini, L., Leonardi, R. A Connotative Space for Supporting Movie Affective Recommendation. In IEEE Transactions on Multimedia, 13(6):1356-1370, 2011.

Task organizers
Emmanuel Dellandréa, Ecole Centrale de Lyon, France
(emmanuel.dellandrea@ec-lyon.fr)
Liming Chen, Ecole Centrale de Lyon, France
Yoann Baveye, Université de Nantes, France
Christel Chamaret, Technicolor, France
Mats Sjöberg, University of Helsinki, Finland

Task schedule
1 May 2016 : Development data release
20 June 2016 : Test data release
16 Sept. 2016 (updated deadline): Run submission deadline
30 Sept. 2016: Working notes paper deadline
20-21 Oct. 2016: MediaEval 2016 Workshop, right after ACM MM 2016 in Amsterdam.

Acknowledgments
Visen
VideoSense