MediaEval 2014 Workshop
The MediaEval benchmarking season culminates with a workshop that brings together researchers who participated in benchmark tasks to report on their findings, discuss their approaches, establish collaborations and plan future work. The MediaEval 2014 Workshop included presentations and posters from MediaEval participants, technical retreats dedicated to individual tasks, and an invited talk. The workshop took place in Barcelona, Catalunya, Spain, on Thursday-Friday 16-17 October 2014.
- Download the program: MediaEval 2014 Workshop Program
- The Working Notes Proceedings of the MediaEval 2014 Benchmark are available at: http://ceur-ws.org/Vol-1263
- For pictures, check out MediaEval on Flickr.
Example of how to cite a paper from the proceedings:
Manchon-Vizuete, D., Gris-Sarabia, I., Giró-i-Nieto, X. UPC at MediaEval 2014 Social Event Detection Task. Working Notes Proceedings of the MediaEval 2014 Workshop, Barcelona, Catalunya, Spain, October 16-17, 2014, CEUR-WS.org, online ceur-ws.org/Vol-1263/mediaeval2014_submission_58.pdf
MediaEval 2014 Organizers
- For the complete list of task organizers who formed the overall organization of the MediaEval 2014 benchmark, please see "Who are we?" and also the individual task pages.
- For a list of the people whose organizational effort made the workshop possible, please see below.
In 2014, MediaEval offered eight classic tasks and three Brave New Tasks:
Synchronization of multi-user Event Media (New!) This task requires participants to automatically create a chronologically-ordered outline of multiple image galleries corresponding to the same event, synchronizing the collections and aligning them along the same time axis, or merging them in the correct order. Read more...
C@merata: Question Answering on Classical Music Scores (New!) In this task, systems take as input a noun phrase (e.g. 'harmonic perfect fifth') and a short score in MusicXML (e.g. J.S. Bach, Suite No. 3 in C Major for Cello, BWV 1009, Sarabande) and return an answer stating the location of the requested feature (e.g. 'Bar 206'). Read more...
Retrieving Diverse Social Images This task requires participants to automatically refine a ranked list of Flickr photos of landmarks using provided visual and textual information. The objective is to select only a small number of photos that are equally representative matches but also diverse representations of the query. Read more...
Search and Hyperlinking This task requires participants to find video segments relevant to an information need and to provide a list of useful hyperlinks for each of these segments. The hyperlinks point to other video segments in the same collection and should allow the user of the system to explore the collection with respect to the current information need in a non-linear fashion. The task focuses on television data provided by the BBC and real information needs from home users. Read more...
QUESST: Query by Example Search on Speech (ex SWS) The task involves searching FOR audio content WITHIN audio content USING an audio content query. This task is particularly interesting for speech researchers in the area of spoken term detection or low-resource speech processing. Read more...
Visual Privacy This task requires participants to implement privacy filtering solutions that provide an optimal balance between obscuring information that personally identifies people in a video, and retaining information that otherwise allows viewers to interpret the video. Read more...
Emotion in Music (an Affect Task) We aim at detecting emotional dynamics of music using its content. Given a set of songs, participants are asked to automatically generate continuous emotional representations in arousal and valence. Read more...
Placing: Geo-coordinate Prediction for Social Multimedia This task requires participants to estimate the geographical coordinates (latitude and longitude) of multimedia items (photos, videos and accompanying metadata), as well as predicting how “placeable” a media item actually is. The Placing Task integrates all aspects of multimedia: textual meta-data, audio, image, video, location, time, users and context. Read more...
Affect Task: Violent Scenes Detection This task requires participants to automatically detect portions of movies depicting violence. Participants are encouraged to deploy multimodal approaches (audio, visual, text) to solve the task. Read more...
Social Event Detection in Web Multimedia This task requires participants to discover, retrieve and summarize social events, within a collection of Web multimedia. Social events are events that are planned by people, attended by people and for which the social multimedia are also captured by people. Read more...
Crowdsourcing: Crowdsorting Multimedia Comments (New!) This task sorts timed-comments added by users to music tracks on SoundCloud. Task participants are provided with a set of noisy labels collected from crowdworkers, and asked to generate a reliable prediction (consensus computation). Optionally, participants can make the prediction by combining crowd input with automatic music content analysis. Read more...
MediaEval is a "bottom-up benchmark", which means that the tasks it offers are highly autonomous and are selected by a grassroots process. Each year, task proposals are submitted by teams who wish to organize a task. We put the proposals we receive into a survey, which is circulated community-wide. Tasks are then selected on the basis of the number of people who express interest in participating, and also of the feasibility of the task organization (i.e., we look for tasks designed such that they can achieve interesting and productive results given the timeframe of the MediaEval season and the available resources).
MediaEval 2014 Workshop Organizing Committee
Xavier Anguera, Telefonica Research, Spain
Mohammad Soleymani, University of Geneva, Switzerland
Xavier Giró-i-Nieto, Universitat Politècnica de Catalunya, Spain
Bogdan Ionescu, University Politehnica of Bucharest, Romania
Saskia Peters, Delft University of Technology, Netherlands
Michael Riegler, Simula Research Lab, Norway
Richard Sutcliffe, University of Essex, UK
Resources and Recognition committee:
Mohammad Soleymani, University of Geneva, Switzerland
Bogdan Ionescu, University Politehnica of Bucharest, Romania
Guillaume Gravier, IRISA, France
Gareth Jones, Dublin City University, Ireland
Martha Larson, Delft University of Technology, Netherlands
Statistics and Research Assistants:
Bogdan Boteanu and Anca Radu, University Politehnica of Bucharest, Romania
Filmmaker: Olivier Van Laere and company
Sandra Avila, Bogdan Boteanu, Shu Chen, Justin Chiu, Tom Collins, Taufik Edy Sutanto, Maria Eskevich, Vaiva Imbrasaite, Irene Gris, Santosh Kesiraju, Daniel Manchon, Amaia Salvador, Emanuele Sansone, Patrick Schwab, Carles Ventura.
General Chair/Main workshop contact: Martha Larson
MediaEval 2014 Community Council: Martha Larson, Gareth Jones, Mohammad Soleymani, Guillaume Gravier, and Bogdan Ionescu (i.e., same as the R&R committee above)
MediaEval 2014 would like to thank the following sponsors and supporters: