New MediaEval Website
This website was used from 2010-2019. For more up-to-date information see the new MediaEval Website.

What is MediaEval?
MediaEval is a benchmarking initiative dedicated to evaluating new algorithms for multimedia access and retrieval. It emphasizes the 'multi' in multimedia and focuses on human and social aspects of multimedia tasks. MediaEval attracts participants who are interested in multimodal approaches to multimedia involving, e.g., speech recognition, multimedia content analysis, music and audio analysis, user-contributed information (tags, tweets), viewer affective response, social networks, temporal and geo-coordinates.

For more information about past years see:

MediaEval 2017 Overview of the Year
The Benchmarking Initiative for Multimedia Evaluation: MediaEval 2016, IEEE Multimedia Vol. 24, No. 1, 93-96, 2016
ERCIM News 97, April 2014
IEEE Speech & Language Processing Technical Committee Newsletter Feb. 2014
SIGMM Records Volume 5, Issue 2, June 2013
ERCIM News 94, July 2013
ERCIM News 88, January 2012
SIGMM Records Volume 3, Issue 2, June 2011
SIGMM Records Volume 2, Issue 2, June 2010

What is the MediaEval Workshop?
The culmination of the yearly MediaEval benchmarking cycle is the MediaEval Workshop. The workshop brings together task participants to present their findings, discuss their approaches, learn from each other, and make plans for future work. The MediaEval Workshop was co-located with the ACM Multimedia conference in 2010, 2013, and 2016 and with the European Conference on Computer Vision in 2012, and it was an official satellite event of the Interspeech conference in 2011 and 2015. The workshop also welcomes attendees who did not participate in specific tasks, but who are interested in multimedia research or in getting involved in MediaEval in the future.

How can I get involved?
MediaEval is an open initiative, meaning that any interested research group is free to sign up and participate. Groups sign up for one or more tasks and then receive task definitions, data sets, and supporting resources, which they use to develop their algorithms. At the very end of the summer, groups submit their results, and in the fall they attend the MediaEval Workshop. See also Why Participate? or watch some videos on the MediaEval video page.

What is a MediaEval task?
A MediaEval task consists of four parts:
  • Data provided to the benchmark participants,
  • A task definition that describes the problem to be solved,
  • Ground truth against which participants’ algorithms are evaluated,
  • An evaluation metric.
MediaEval tasks are oriented towards user needs in specific application settings and, to the extent possible, are based on scenarios of use derived from real-world problems.
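To make the four parts concrete, here is a minimal sketch in Python. It is purely hypothetical and not an official MediaEval resource: the item identifiers, labels, and the accuracy metric are placeholder assumptions, since each real task defines its own data format and evaluation metric. It simply shows how a participant run might be scored against ground truth.

    # Hypothetical example: scoring one participant run against ground truth.
    # Real MediaEval tasks define their own data formats and evaluation metrics.

    # Ground truth: reference labels for the test items (illustrative only).
    ground_truth = {"video_01": "outdoor", "video_02": "indoor", "video_03": "outdoor"}

    # A participant run: the labels predicted by a group's algorithm.
    run = {"video_01": "outdoor", "video_02": "outdoor", "video_03": "outdoor"}

    def accuracy(predictions, truth):
        """Evaluation metric: fraction of items whose predicted label matches the ground truth."""
        correct = sum(1 for item, label in truth.items() if predictions.get(item) == label)
        return correct / len(truth)

    print(f"Run score: {accuracy(run, ground_truth):.2f}")  # prints 0.67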

What is a MediaEval Task Force?
Proposing a task requires creating a task organization team, creating a task design (a task definition that fits the user scenario, together with an evaluation methodology), and laying the groundwork for task logistics (source of data, source of ground truth, evaluation metric). MediaEval Task Forces are groups of people working informally towards a task to be proposed in a future year of MediaEval.

Who runs MediaEval?
MediaEval is a community-driven benchmark that is run by the MediaEval organizing committee, consisting of the task organizers of all the individual tasks in a given year. The overall coordination is carried out by Martha Larson and Gareth Jones, who founded MediaEval in 2008 as VideoCLEF, a track in the CLEF Campaign, together with the other members of the MediaEval Community Council. Martha Larson serves as the overall contact person and the organizing force behind the MediaEval Workshop. MediaEval became an independent benchmarking initiative in 2010 under the auspices of the PetaMedia Network of Excellence. In 2011, it also received support from EIT ICT Labs. It has also received support from various sources in order to offer student travel grants; we would particularly like to thank ELIAS (Evaluating Information Access Systems), an ESF Research Networking Programme. Please refer to the pages of the individual years for complete lists of supporters. Since 2012, MediaEval has run as a fully bottom-up benchmark, in that it is not associated with a single "parent project".

Can I get MediaEval data from past years?
Many MediaEval tasks make data available from past years. Data release is made possible in cases where tasks are able to focus on Creative Commons content. Note that because the test data of one year often become the training data for the next, decisions about the release of the past year's data are often delayed until after the next year's tasks have been decided.

How is MediaEval related to other benchmarks?
MediaEval is distinguished from other benchmarks by its focus on the social and human aspects of multimedia access and retrieval. This leads naturally to an emphasis on "the 'multi' of multimedia": the multitude of information sources that are available to tackle multimedia problems. The MediaEval community fosters collaboration and values bringing researchers together to work on new challenges. Although benchmarks, by their nature, involve quantitative comparison of systems, MediaEval also emphasizes qualitative insight into the algorithms and techniques that participants develop.

MediaEval takes great pains to complement rather than duplicate tasks running in other benchmarks, e.g., the TRECVid video retrieval evaluation. MediaEval actively works to foster cooperation with other benchmarks, for example ImageCLEF, FIRE, and NTCIR. MediaEval is proud of its heritage as the VideoCLEF track of CLEF (Cross Language Evaluation Forum, 2000-2009, and Conference and Labs of the Evaluation Forum, 2010-present). For a historical picture of VideoCLEF within CLEF, check out the CLEF timeline graphic at http://www.clef-initiative.eu/track/series.

How are the MediaEval tasks chosen?
Each year there is an open call for new task proposals. Anyone who is interested in organizing a task can submit a proposal; usually task organizers are teams from multiple institutions. We choose tasks by first formulating a list of proposed tasks and then using a yearly survey to ascertain which tasks are popular enough to run. If you or your project would like to propose or organize a task in future years, please contact Martha Larson (m.a.larson at tudelft.nl).

Are there intellectual property rights (IPR) issues with the data?
MediaEval uses Creative Commons data wherever possible. When we need to make an exception, we make every effort to ensure that participants can license the data through the appropriate channels.

For more information contact Martha Larson m.a.larson (at) tudelft.nl.