MediaEval 2018

MediaEval is a benchmarking initiative dedicated to developing and evaluating new algorithms and technologies for multimedia retrieval, access and exploration. It offers tasks to the research community that are related to human and social aspects of multimedia. MediaEval emphasizes the 'multi' in multimedia and seeks tasks involving multiple modalities, e.g., audio, visual, textual, and/or contextual. Our larger aim is to promote reproducible research that makes multimedia a positive force for society.

Organizing a task: MediaEval tasks are run by autonomous groups of task organizers, who submit task proposals. Please see the MediaEval 2018 Call for Task Proposals for full details. Proposals are selected using a viability check and a survey that confirms community interest. The force that makes MediaEval possible is the vision, dedication and hard work of the MediaEval task organizers: we look forward to welcoming you into the team.

Participating in a task: MediaEval attracts researchers who are interested in community-based benchmarking. This means that they are interested not only in creating solutions to MediaEval tasks, but also in discussing and exchanging ideas with other researchers taking part in MediaEval. We strongly discourage participation in MediaEval if you are only looking to achieve a ranking in a benchmark and are not interested in attending the workshop or in engaging in exchange with others.

MediaEval 2018 Timeline

If you are interested in proposing a task:

Friday 22 December 2017: Indication of Intent
Friday 26 January 2018: Full proposal deadline

If you are interested in participating in a task:
March-May 2018: Registration for task participation
May-June 2018: Development data release
June-July 2018: Test data release
Run submission: End September 2018
Workshop: November 2018

MediaEval 2018 Workshop

The MediaEval 2018 Workshop will be held in November 2018 at EURECOM, Sophia Antipolis, France.

Did you know?
Over its lifetime, MediaEval teamwork and collaboration have given rise to over 700 papers, in the MediaEval workshop proceedings as well as at conferences and in journals. Check out the MediaEval bibliography.

General Information about MediaEval

MediaEval was founded in 2008 as a track called "VideoCLEF" within the CLEF benchmark campaign. In 2010, it became an independent benchmark, and in 2012 it ran for the first time as a fully "bottom-up benchmark", meaning that it is organized for the community, by the community, independently of a "parent" project or organization. The MediaEval benchmarking season culminates with the MediaEval workshop. Participants come together at the workshop to present and discuss their results, build collaborations, and develop future task editions or entirely new tasks. MediaEval co-located itself with CLEF in 2017, with ACM Multimedia in 2010, 2013, and 2016, and with the European Conference on Computer Vision in 2012. It was an official satellite event of Interspeech in 2011 and 2015. Past working notes proceedings of the workshop include:

MediaEval 2015: http://ceur-ws.org/Vol-1436
MediaEval 2016: http://ceur-ws.org/Vol-1739
MediaEval 2017: http://ceur-ws.org/Vol-1984