MediaEval 2018

MediaEval is a benchmarking initiative dedicated to developing and evaluating new algorithms and technologies for multimedia retrieval, access and exploration. It offers tasks to the research community that are related to human and social aspects of multimedia. MediaEval emphasizes the 'multi' in multimedia and seeks tasks involving multiple modalities, e.g., audio, visual, textual, and/or contextual. Our larger aim is to promote reproducible research that makes multimedia a positive force for society.

Organizing a task: MediaEval tasks are run by autonomous groups of task organizers, who submit task proposals. Please see the MediaEval 2018 Call for Task Proposals for full details. Proposals are chosen using a viability check, together with a survey that confirms community interest. The force that makes MediaEval possible is the vision, dedication and hard work of the MediaEval task organizers: we look forward to welcoming you into the team.

2018 Task List
The 2018 task list will be finalized at the end of February. We will continue adding tasks to this provisional list, so that potential participants know what to expect:

Multimedia Satellite Task: Emergency Response for Flooding Events
The purpose of this task is to augment events captured by satellite images with social multimedia content in order to provide a more comprehensive view. In 2018, we will again focus on flooding. The task involves two subtasks: flood detection in satellite images and flood classification in social multimedia. Participants receive data and are required to train classifiers. Fusion of satellite and social multimedia information is encouraged. The task moves forward the state of the art by concentrating on aspects that are important to people, but are not generally studied by multimedia researchers, such as the extent to which an area has been affected by a flood in human terms, for example road access.

Medico Multimedia Task
The goal of the task is efficient processing of medical multimedia data for disease prediction. Participants are provided with images and videos of the human gastrointestinal tract, and are required to develop a classifier that minimizes the necessary resources (processing time, training data). The ground truth labels are created by medical experts. The task differs from existing medical imaging tasks in that it uses only multimedia data (i.e., images and videos) and not medical imaging data (i.e., CT scans). A further innovation is its focus on two non-functional requirements: using as little training data as possible and being computationally efficient. The task lays the basis for automatic, real-time generation of medical reports on the basis of recordings made by capsule endoscopy.

Pixel Privacy Task
This task develops image enhancement approaches that protect user privacy. Specifically, it is dedicated to creating technology that invisibly changes or visibly enhances images in such a way that it is no longer possible to automatically infer the location at which they were taken. The task has two sub-tasks: "geo-protect" and "geo-predict". The "geo-protect" sub-task requires participants to develop protective image enhancements (evaluated with a user study), and the "geo-predict" sub-task requires participants to predict the geo-location of an image despite the protective enhancement (evaluated using the distance to the correct location). This task advances the state of the art in multimedia analysis by investigating the interplay between what users intend to communicate with their images (and must be preserved) and what users do not intend to communicate (and must be protected).

AcousticBrainz Genre Task: Content-based music genre recognition from multiple sources
The goal of our task is to understand how genre classification can address the subjective and culturally-dependent nature of genre categories. Traditionally, genre classification is performed using a single source of ground truth with broad genre categories as class labels. In contrast, this task is aimed at exploring how to combine multiple sources of annotations, each of which is more detailed. Each source has a different genre class space, providing an opportunity to analyze the problem of music genre recognition from new perspectives and with the potential of reducing evaluation bias.

We also anticipate that MediaEval 2018 will run other tasks from 2017: See 2017 Task Page

Participating in a task: MediaEval attracts researchers who are interested in community-based benchmarking. This means that they are not only interested in creating solutions to MediaEval tasks, but also in discussing and exchanging ideas with other researchers who are taking part in MediaEval. Researchers who are primarily looking to achieve a high rank in a benchmark, and are not interested in attending the workshop or in engaging in discussion of techniques and results with other researchers, do not benefit from the unique community-driven character of MediaEval.

MediaEval 2018 Timeline

If you are interested in proposing a task:

Indication of Intent: Friday 26 January 2018
Full proposal deadline: Friday 16 February (updated deadline)

If you are interested in participating in a task:
March-May 2018: Registration for task participation
May-June 2018: Development data release
June-July 2018: Test data release
Run submission: End September 2018
Workshop: Late Oct. or Early Nov. 2018

MediaEval 2018 Workshop

The MediaEval 2018 Workshop will be held late October 2018 at EURECOM, Sophia Antipolis, France.

Did you know?
Over its lifetime, MediaEval teamwork and collaboration have given rise to over 700 papers, in the MediaEval workshop proceedings as well as at conferences and in journals. Check out the MediaEval bibliography.

General Information about MediaEval

MediaEval was founded in 2008 as a track called "VideoCLEF" within the CLEF benchmark campaign. In 2010, it became an independent benchmark, and in 2012 it ran for the first time as a fully "bottom-up benchmark", meaning that it is organized for the community, by the community, independently of a "parent" project or organization. The MediaEval benchmarking season culminates with the MediaEval workshop. Participants come together at the workshop to present and discuss their results, build collaborations, and develop future task editions or entirely new tasks. MediaEval was co-located with CLEF in 2017, with ACM Multimedia in 2010, 2013, and 2016, and with the European Conference on Computer Vision in 2012. It was an official satellite event of Interspeech in 2011 and 2015. Past working notes proceedings of the workshop include:

MediaEval 2015:
MediaEval 2016:
MediaEval 2017: