MediaEval Moves Towards 2015

MediaEval 2015 Call for Task Proposals
Proposal deadline: 22 December 2014

MediaEval is a benchmarking initiative dedicated to evaluating new algorithms for multimedia access and retrieval. It offers tasks and datasets to the research community that are related to human and social aspects of multimedia. MediaEval also emphasizes the 'multi' in multimedia (speech, audio, visual content, tags, users, context).

MediaEval is now calling for proposals for tasks to run in the 2015 benchmarking season. The proposal consists of a description of the motivation for the task and the challenges that task participants must address, as well as information on the data and evaluation methodology to be used. The proposal also includes a statement of how the task is related to MediaEval (i.e., its human or social component), and how it extends the state of the art in an area related to multimedia indexing, search, or other technologies that support users in accessing multimedia collections.

Task proposals are chosen on the basis of their feasibility, their match with the topical focus of MediaEval, and the outcome of a survey circulated to the wider multimedia research community. The first data release for MediaEval 2015 tasks is due May 1, and the MediaEval workshop, at which participants report task outcomes, will be held 14-15 September in Germany as a satellite event of Interspeech 2015.

For more information about MediaEval, contact Martha Larson m (dot) a (dot) larson (at) tudelft (dot) nl

Content of the Task Proposal

If you are interested in submitting a proposal, you are welcome to start the process by writing an email to Martha Larson m (dot) a (dot) larson (at) tudelft (dot) nl and Gareth Jones gareth (dot) jones (at) computing (dot) dcu (dot) ie to express your intention. They can also answer any questions that you may have, or connect you with other members of the community with similar interests to form a team of task organizers.

A task proposal is a .pdf containing the following four elements. Note that there is no specified length for the proposal, but in general proposals do not exceed three pages.

1. Task Description: This is an initial version of how your task would be described on the MediaEval website, should your task proposal be accepted. It consists of the following parts:
  • Task Title: Give your task an informative title.
  • Introduction: Describe the motivating use scenario, i.e., which application(s) motivate the task. State what the task requires of participants.
  • Target Group: Describe the type of researchers who would be interested in participating in the task.
  • Data: Describe the data set, including how the data will be collected and licensed.
  • Evaluation Methodology: Describe the evaluation methodology, including how the ground truth will be created.
  • References and recommended reading: List 3-4 references related to the task that teams should read before attempting it.
  • List of task organizers.
For example task descriptions, please see the task links in the sidebar of http://multimediaeval.org/mediaeval2014

2. Task Blurb: Write 2-3 sentences that summarize key information on the task. The blurb should be informative and well-crafted to attract potential participants. A standard pattern is to have each sentence answer in turn the major questions about the task: First sentence: What are the input and the output of the algorithm that participants need to design for the task? Second sentence: What is the data? Third sentence: How is the task evaluated?

3. Task Organization Team: Write a short paragraph describing the organizing team. Your team should be large enough to handle organizing the task. Teams should consist of members from multiple research sites and multiple projects. A mix of experienced and early-career researchers is recommended. Note that your task team can add members after the proposal has been accepted.

4. Survey Questions: Write a list of 3-5 questions that you would like to include on the survey. These questions help you ascertain the research community's preferences concerning the design of the task formulation, the data set, and the evaluation methodology. For examples of the types of questions asked by tasks, please have a look at the .pdf of the MediaEval 2013 survey.

Proposal deadline: 22 December 2014

Please email your proposal (as a .pdf) to Martha Larson m (dot) a (dot) larson (at) tudelft (dot) nl and Gareth Jones gareth (dot) jones (at) computing (dot) dcu (dot) ie

Additional information on proposing a MediaEval Task

MediaEval offers two types of tasks: Brave New Tasks, which open up a new multimedia challenge, and General Tasks, which address already-established research topics. Task proposals are accepted on the basis of the existence of a community of task supporters (i.e., researchers who are interested in the task and plan to participate). Support is determined using a survey, which is circulated widely to the multimedia research community. Tasks must also be viable given the design of the task and the resources available to the task organization team.

All participants must sign a MediaEval usage agreement; see last year’s agreement for an example of task-specific licensing. MediaEval prefers to use Creative Commons data wherever possible.

In MediaEval, tasks are run autonomously by the task organization team. However, they must respect the overall schedule:
  • May 1: Latest date for development data release
  • June 1: Latest date for test data release
  • Mid-August: Run submission
  • Friday 28 August: Deadline for two-page working notes papers
This schedule is a bit earlier than in previous years. Task proposers should take care that someone on the task organizing team is available during the last weeks of August and the first weeks of September to evaluate runs and edit the working notes papers.

We encourage task proposers to join forces with colleagues from other institutions and other projects to create an organizing team large enough to bear the burden of data set generation, results evaluation, and working notes paper review.

MediaEval was founded in 2008 as a track called "VideoCLEF" within the CLEF benchmark campaign. In 2010, it became an independent benchmark and in 2012 it ran for the first time as a fully "bottom-up benchmark", meaning that it is organized for the community, by the community, independently of a "parent" project. The MediaEval benchmarking season culminates with the MediaEval workshop. Participants come together at the workshop to present and discuss their results, build collaborations, and develop future task editions or entirely new tasks. Past working notes proceedings of the workshop include:

MediaEval 2012: http://ceur-ws.org/Vol-807
MediaEval 2013: http://ceur-ws.org/Vol-1043
MediaEval 2014: http://ceur-ws.org/Vol-1263

Example tasks that have run in past years are:
  • Placing Task: Predict the geo-coordinates of user-contributed photos.
  • Tagging Task: Automatically assign tags to user-generated videos.
  • Query by Example Search on Speech Task (QUESST): Search FOR audio content WITHIN audio content USING an audio content query.
  • Search and Hyperlinking: Multi-modal search and automated hyperlinking of user-generated and commercial video.
  • Social Event Detection: Find multimedia items related to a particular event within a social multimedia collection.
  • Violent Scenes Detection Task: Automatically detect violence in movies.