MediaEval 2016 Multimedia Evaluation Benchmark
Call for Task Proposals
Proposal Deadline: 8 January 2016

MediaEval is a benchmarking initiative dedicated to developing and evaluating new algorithms and technologies for multimedia retrieval, access, and exploration. It offers the research community tasks related to the human and social aspects of multimedia. MediaEval emphasizes the 'multi' in multimedia and seeks tasks involving multiple modalities, including audio, visual, textual, and contextual.

MediaEval is now calling for proposals for tasks to run in the 2016 benchmarking season. Details on the content of the proposal can be found below.

If you are interested in submitting a proposal, you are welcome to start the process by writing an email to Martha Larson m (dot) a (dot) larson (at) tudelft (dot) nl and Gareth Jones gareth (dot) jones (at) computing (dot) dcu (dot) ie to express your intention. They can also answer any questions that you may have, or connect you with other members of the community with similar interests to form a team of task organizers.

Content of the Task Proposal

A task proposal is a .pdf containing the following four elements. Note that there is no specified length for the proposal, but in general proposals do not exceed three pages.

1. Task Description: This is an initial version of how your task would be described on the MediaEval website, should your task proposal be accepted. It consists of the following parts:
  • Task Title: Give your task an informative title.
  • Introduction: Describe the motivating use scenario, i.e., which application(s) motivate the task. State what the task requires of participants.
  • Target Group: Describe the type of researchers who would be interested in participating in the task.
  • Data: Describe the data set, including how the data will be collected and licensed.
  • Evaluation Methodology: Describe the evaluation methodology, including how the ground truth will be created.
  • References and Recommended Reading: List 3-4 references related to the task that teams should have read before attempting the task.
  • Task Organizers: List the members of the organizing team.
For example task descriptions, please see the task links in the sidebar of http://multimediaeval.org/mediaeval2015

2. Task Blurb: Write 2-3 sentences that summarize key information about the task. The blurb should be informative and well-crafted to attract potential participants. A standard pattern is to have each sentence answer, in turn, one of the major questions about the task: First sentence: What are the input and the output of the algorithm that participants need to design? Second sentence: What is the data? Third sentence: How is the task evaluated?

3. Task Organization Team: Write a short paragraph describing the organizing team. Your team should be large enough to handle organizing the task. Teams should consist of members from multiple research sites and multiple projects. A mix of experienced and early-career researchers is recommended. Note that your task team can add members after the proposal has been accepted.

4. Survey Questions: Write a list of 3-5 questions that you would like to include on the survey. These questions help you ascertain the preferences of the research community regarding the task formulation, the data set design, and the evaluation methodology. For examples of the types of questions asked by tasks, please have a look at the .pdf of the MediaEval 2013 survey.

Proposal deadline: 8 January 2016

Please email your proposal (as a .pdf) to Martha Larson m (dot) a (dot) larson (at) tudelft (dot) nl and Gareth Jones gareth (dot) jones (at) computing (dot) dcu (dot) ie

Additional information on proposing a MediaEval Task

MediaEval offers two types of tasks: Brave New Tasks, which open up a new multimedia challenge, and General Tasks, which address already-established research topics. Task proposals are accepted on the basis of the existence of a community of task supporters (i.e., researchers who are interested in the task and plan to participate in it). Support is determined using a survey, which is circulated widely to the multimedia research community. Tasks must also be viable given the design of the task and the resources available to the task organization team.

All participants must sign a MediaEval usage agreement; see last year’s agreement for an example of task-specific licensing. MediaEval prefers to use Creative Commons data wherever possible.

In MediaEval, tasks are run autonomously by the task organization team. However, each task must respect the overall schedule:
  • May 1: First date for development data release
  • July 1: Latest date for test data release
  • Mid-Sept: Run submission
  • 30 Sept: Deadline for two-page working notes papers
  • 20-21 October: MediaEval 2016 Workshop (in the Netherlands, just after ACM Multimedia 2016)

We encourage task proposers to join forces with colleagues from other institutions and other projects to create an organizing team large enough to bear the burden of data set generation, results evaluation, and working notes paper review.

MediaEval was founded in 2008 as a track called "VideoCLEF" within the CLEF benchmark campaign. In 2010, it became an independent benchmark, and in 2012 it ran for the first time as a fully "bottom-up benchmark", meaning that it is organized for the community, by the community, independently of a "parent" project. The MediaEval benchmarking season culminates with the MediaEval workshop. Participants come together at the workshop to present and discuss their results, build collaborations, and develop future task editions or entirely new tasks. Past working notes proceedings of the workshop include:

MediaEval 2012: http://ceur-ws.org/Vol-807
MediaEval 2013: http://ceur-ws.org/Vol-1043
MediaEval 2014: http://ceur-ws.org/Vol-1263
MediaEval 2015: http://ceur-ws.org/Vol-1436

Example tasks that have run in past years are:
  • Placing Task: Predict the geo-coordinates of user-contributed photos.
  • Query by Example Search on Speech Task (QUESST): Search FOR audio content WITHIN audio content USING an audio content query.
  • C@merata: Querying musical scores.
  • Search and Hyperlinking: Multi-modal search and automated hyperlinking of user-generated and commercial video.
  • Multimodal Person Discovery in Broadcast TV
  • Verifying Multimedia Use: Does a tweet with an image reflect reality?
  • Violent Scenes Detection Task: Automatically detect violence in movies.