Announcement of Data Release
The task has concluded and the data has been released. Please see MediaEval Datasets.
The 2013 Visual Privacy Task
This task builds on the 2012 Privacy Task and aims at finding ways to protect the privacy of people in videos. The insights from last year's Privacy Task showed that, regardless of the methods used for filtering out the elements of the video image that can potentially reveal the identity of the data subject, the best results were achieved by participants using superior object detectors. Hence, this year participants will be provided with the object detections.
Accordingly, participants will be invited to propose methods whereby elements of the image of persons featured in video frames can be obscured so as to render them unrecognizable. This is intended to ensure that a person appearing in a video frame cannot be visually identified from the obscured image. This obscuring serves to protect the privacy of any person whose picture is captured, knowingly or unknowingly, in a video frame.
The main elements to be obscured are faces, ethnicity, gender, and accessories. Participants are free to filter other elements, such as distinctive clothing or gait, but should keep in mind that as much information as possible should be retained in order to keep the video understandable.
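As an illustration only (the task does not prescribe any particular filter), pixelation is one of the simplest obscuring techniques: each block of pixels inside a region of interest is replaced by its average value, destroying fine detail such as facial features while keeping the overall silhouette visible. A minimal grayscale sketch:

```python
def pixelate_region(frame, box, block=8):
    """Obscure a rectangular region of a grayscale frame by pixelation.

    frame: list of rows (lists) of integer pixel values, modified in place.
    box: (x, y, w, h) region to obscure, in pixel coordinates.
    block: side length of the averaging blocks; larger means stronger obscuring.
    """
    x, y, w, h = box
    for by in range(y, y + h, block):
        for bx in range(x, x + w, block):
            # Collect the pixels of this block (clipped to the region).
            ys = range(by, min(by + block, y + h))
            xs = range(bx, min(bx + block, x + w))
            vals = [frame[r][c] for r in ys for c in xs]
            avg = sum(vals) // len(vals)
            # Overwrite every pixel in the block with the block average.
            for r in ys:
                for c in xs:
                    frame[r][c] = avg
    return frame
```

In practice the box would come from the provided object detections, and the trade-off mentioned above is controlled by the block size: a block covering the whole face removes identity cues entirely, while smaller blocks retain more scene information.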
Since the resulting partly obscured videos must nonetheless remain available for viewing, an optimal balance should be struck: however extensive the masking of identity, the categorical identity of a masked data subject (i.e., as a human being) and the type of activity evident in the original video frame should still be recognisable to the viewer. Furthermore, the proposed obscuring techniques should take into consideration the acceptability and attractiveness of the resulting obscured and scrambled regions.
This task is of interest to researchers in image processing and privacy protection techniques, including those interested in assessing the impact of various levels of privacy protection in video images. Note that although the 2013 task builds on the 2012 task, it is designed to be independent of it: it is not necessary to have participated in 2012 to participate in 2013.
The data set will consist of about 60 high-resolution video files with an average length of 20 seconds each. The scenes will be varied, with a mixture of indoor and outdoor scenarios as well as some night-time videos. The people featured in the videos will be performing various actions, such as exchanging objects, talking, fighting, or simply walking by.
Ground truth and evaluation
The ground truth will be created manually by the task organizers and will consist of annotations of the bounding boxes containing the objects of interest (e.g., people, faces, accessories).
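A bounding-box annotation of this kind can be represented by a small record per object per frame. The field names below are hypothetical (the official release defines its own schema); this is only a sketch of the information the ground truth carries:

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    """One ground-truth region in a video frame.

    Hypothetical format for illustration; the official annotations
    released by the task organizers define their own schema.
    """
    frame: int   # frame index within the clip
    label: str   # object of interest, e.g. "person", "face", "accessory"
    x: int       # top-left corner of the bounding box (pixels)
    y: int
    w: int       # box width in pixels
    h: int       # box height in pixels

# Example: a face annotated in the first frame of a clip.
ann = Annotation(frame=0, label="face", x=120, y=40, w=64, h=64)
```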
The resulting obscured videos will be evaluated using object detection algorithms and metrics based on human perception of salience in images (e.g., shape, brightness, density, and color) and visual appropriateness. As a complement to these official metrics, a number of key submitted runs will also be evaluated through user studies aimed at developing a deeper understanding of user perceptions of appropriateness in privacy protection. This will result in a more realistic evaluation of the privacy protection systems and allow participants to obscure specific elements that are not taken into account in the objective evaluation.
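Detection-based evaluation of this kind typically compares boxes found by a detector on the obscured video against the ground-truth boxes using an overlap measure. The standard measure in object detection is intersection-over-union (IoU); the task's official metrics may differ, so the following is only an illustrative sketch:

```python
def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes.

    Returns a value in [0, 1]: 1.0 for identical boxes, 0.0 for
    disjoint boxes. A successfully obscured region should yield a
    low IoU between ground truth and post-filtering detections.
    """
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    # Width and height of the intersection rectangle (may be negative).
    iw = min(ax + aw, bx + bw) - max(ax, bx)
    ih = min(ay + ah, by + bh) - max(ay, by)
    if iw <= 0 or ih <= 0:
        return 0.0
    inter = iw * ih
    # Union = sum of areas minus the double-counted intersection.
    return inter / (aw * ah + bw * bh - inter)
```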
Recommended reading
Senior, A., Privacy Protection in a Video Surveillance System. In: Privacy Protection in Video Surveillance, Springer, 2009.
Dufaux, F. and Ebrahimi, T., A Framework for the Validation of Privacy Protection Solutions in Video Surveillance. In: Proceedings of the 2010 IEEE International Conference on Multimedia and Expo (ICME), pp. 66-71, July 2010.
Dufaux, F. and Ebrahimi, T., Scrambling for Privacy Protection in Video Surveillance Systems. IEEE Transactions on Circuits and Systems for Video Technology, Vol. 18, No. 8, pp. 1168-1174, 2008.
Task organizers
Tomas Piatrik, Queen Mary University of London, UK (tomas.piatrik at eecs.qmul.ac dot uk)
Atta Badii, University of Reading, UK (atta.badii at reading.ac dot uk)
Christian Fedorczak, Thales Security Solutions & Services (christian.fedorczak at thalesgroup dot com)
Ahmed Al-Obaidi, University of Reading, UK
Mathieu Einig, University of Reading, UK
Task schedule
June 3rd: data set release
September 1st: test data release
September 13th: run submission
September 25th: results returned
September 28th: working notes paper deadline