MediaEval offers shared tasks in multimedia retrieval and analysis. Successful approaches make use of multiple modalities and go beyond visual content to also exploit audio, text, and other contextual information. For each task, participants receive a task definition, task data, and accompanying resources (dependent on task) such as keyframes, visual features, and social metadata.
Participating teams submit results, which are evaluated. The teams then write up their results and present them at the MediaEval 2016 workshop. This year’s workshop will be held 20-21 October, right after ACM Multimedia in Amsterdam, the Netherlands. Over its lifetime, MediaEval teamwork and collaboration have given rise to nearly 600 publications in the workshop proceedings, at conferences, and in journals.
MediaEval 2016 is offering the following tasks:
*Verifying Multimedia Use Task* Detect misleading or deceptive images and videos shared on Twitter.
*Emotional Impact of Movies Task* Infer the affective impact of film clips based on audio-visual content.
*C@merata: Querying Musical Scores with English Noun Phrases Task* Information retrieval on musical scores.
*Predicting Media Interestingness Task* Infer interesting frames and segments of movies (using audio, visual features, text).
*Zero Cost Speech Recognition Task (formerly QUESST)* Train the best possible speech recognition system (for Vietnamese) using only free resources.
*Placing Task* Automatically estimate the locations of photos and videos (using audio, visual features, text).
*Multimodal Person Discovery in Broadcast TV Task* Automatically name the people occurring in broadcast content (audio, visual features, text).
*Retrieving Diverse Social Images Task* Diversify image results lists (text, visual features).
*Context of Multimedia Experience Task* Predict multimedia content suitable for watching in stressful situations.
***Participation in MediaEval is open to all interested research groups.*** Click the “MediaEval 2016 registration page” link at: http://multimediaeval.org/mediaeval2016
***MediaEval 2016 Timeline*** (dates vary slightly from task to task; see the individual task pages for the individual deadlines: http://www.multimediaeval.org/mediaeval2016)
May-June: Release of development/training data.
June-July: Release of test data.
Early-Mid September: Run submission.
End September: Participants submit their 2-page working notes papers.
20-21 October: MediaEval 2016 Workshop, right after ACM Multimedia in Amsterdam, the Netherlands.
*Contact* For questions or additional information, please contact Martha Larson (m.a.larson at tudelft.nl) or visit http://www.multimediaeval.org