Announcement of Data Release
The task has concluded and the data has been released. Please see MediaEval Datasets.

The 2013 Emotion in Music Task (A new "Affect Task")
This is a new task on the emotional characterization of music. It comprises two subtasks. The first subtask, per-song emotion characterization, requires participants to deploy multimodal features to automatically estimate arousal and valence on a 9-point scale for each song. In the second subtask, continuous emotion characterization, the emotional dimensions arousal and valence should be determined for the given song continuously in time; the quantization scale will be per frame (e.g., 20 ms). In contrast to MIREX, we are using Creative Commons-licensed music that can be distributed, and we will permit the use of metadata that can be crawled from the Internet (e.g., last.fm, Twitter). We will also allow participants to run their own code and will not oblige them to submit code to be run by the organizers.
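
Purely as an illustration of what a per-song system might do (not an official baseline), the Python sketch below maps simple audio features to the two dimensions with an off-the-shelf regressor; the file paths, labels, and the use of librosa/scikit-learn are all assumptions and are not part of the task specification.

  # Minimal sketch of the per-song subtask: predict arousal and valence
  # on the 9-point scale from audio features. librosa and scikit-learn
  # are assumptions here, not tools mandated by the task.
  import numpy as np
  import librosa
  from sklearn.ensemble import RandomForestRegressor

  def song_features(path):
      # Summarize a whole clip with mean MFCCs (deliberately simple features).
      y, sr = librosa.load(path, sr=22050, mono=True)
      mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
      return mfcc.mean(axis=1)

  # Hypothetical development data: paths and 9-point [arousal, valence] labels.
  dev_paths = ["dev/song_001.mp3", "dev/song_002.mp3"]
  y_train = np.array([[6.0, 4.5], [3.0, 7.0]])

  X_train = np.vstack([song_features(p) for p in dev_paths])
  model = RandomForestRegressor(n_estimators=200, random_state=0)
  model.fit(X_train, y_train)  # multi-output regression: [arousal, valence]

  test_paths = ["test/song_101.mp3"]
  X_test = np.vstack([song_features(p) for p in test_paths])
  pred = model.predict(X_test)  # one [arousal, valence] pair per test song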

Participants are required to analyze the songs with the goal of recovering arousal and valence scores for each song. They will receive both continuous and per-song annotations for the development set. We are targeting a dataset of 1000 songs for the first year of this task, which will be split between the development set and the test set.

These affective features can be used in recommendation and retrieval platforms. There are already examples of mood-based or emotion-based online radios, e.g., Stereomood (www.stereomood.com). However, these systems do not use a consistent definition of emotion or mood and rely only on user-generated tags. The songs will be collected from the Free Music Archive (FMA, http://freemusicarchive.org/). The annotations will be generated by crowdsourcing.

Target group
Researchers in the areas of multimedia affect or music retrieval.

Data
The songs are from the Free Music Archive and cover different genres of mainstream Western music. We exclude experimental and low-quality music to avoid uncontrollable diversity or controversial annotations. The annotations are collected on Amazon Mechanical Turk, where individual workers provide arousal-valence (A-V) labels for songs. We will also try to recruit trained annotators on site so that each song has at least one "expert" annotation in addition to those from the Turkers. The labels will be collected once per second. Workers will be given detailed instructions describing the A-V space, and only qualified Turkers will be invited to perform the annotations.
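
As a hedged illustration only, the snippet below shows one simple way per-second labels from several workers could be combined into a single consensus track; the averaging rule and the example values are assumptions, not the task's actual annotation pipeline.

  import numpy as np

  # Hypothetical per-second A-V ratings from two workers for one short excerpt:
  # shape (n_workers, n_seconds, 2), last axis = [arousal, valence].
  worker_labels = np.array([
      [[0.2, -0.1], [0.3, 0.0], [0.4, 0.1]],   # worker 1 (made-up values)
      [[0.1, -0.2], [0.2, -0.1], [0.5, 0.2]],  # worker 2 (made-up values)
  ])

  consensus = worker_labels.mean(axis=0)     # mean rating per second, shape (n_seconds, 2)
  disagreement = worker_labels.std(axis=0)   # per-second spread, flags low-agreement seconds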

Ground truth and evaluation
The ground truth is created by human assessors and is provided by the task organizers. Since we are using a dimensional representation of emotions, we will use regression evaluation metrics, e.g., rank correlation and R-squared.
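
For concreteness, the sketch below computes two such metrics with scipy and scikit-learn on made-up per-song scores; the exact metrics and tooling the organizers will use are not fixed by this snippet.

  import numpy as np
  from scipy.stats import spearmanr
  from sklearn.metrics import r2_score

  # Hypothetical ground-truth and predicted scores for one dimension (e.g., arousal).
  y_true = np.array([6.5, 2.0, 4.0, 8.0, 5.5])
  y_pred = np.array([6.0, 3.0, 4.5, 7.0, 5.0])

  rho, _ = spearmanr(y_true, y_pred)   # rank correlation
  r2 = r2_score(y_true, y_pred)        # coefficient of determination (R-squared)
  print("Spearman rho = %.3f, R^2 = %.3f" % (rho, r2))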

Recommended reading
1. Kim, Y. E., Schmidt, E. M., Migneco, R., Morton, B. G., Richardson, P., Scott, J., & Turnbull, D. (2010). Music emotion recognition: A state of the art review. In Proc. ISMIR (pp. 255-266).
2. Yang, Y.-H., & Chen, H.-H. (2012). Machine recognition of music emotion: A review. ACM Transactions on Intelligent Systems and Technology, 3(3).
3. Barthet, M., Fazekas, G., & Sandler, M. (2012). Multidisciplinary perspectives on music emotion recognition: Implications for content and context-based models. In Proc. CMMR (pp. 492-507).

Task organizers (alphabetical)
Erik Schmidt, Drexel University, USA
Mohammad Soleymani, Imperial College London, UK
Yi-Hsuan Yang, Academia Sinica, Taiwan

Note that this task is a "Brave New Task" and 2013 is the first year that it is running in MediaEval. If you sign up for this task, you will be asked to keep in particularly close touch with the task organizers concerning the task goals and the task timeline.

Task schedule
5 June 2013: Development data release (newly updated deadline)
15 June 2013: Test data release
7 September 2013: Run submission due
15 September 2013: Results returned
28 September 2013: Working notes paper deadline