|What is this project about?|
|What is the analog gap?|
|Many countries define legal rights regarding a person’s own image. However, these rights are not easy for a person to enforce. The person’s image may have been captured unintentionally by a photographer without the person noticing that his/her picture was taken, the person may simply not know the photographer, or the person may not know when, where, and in which context his/her picture was published. This lack of knowledge can prevent the person from exercising his/her legal rights. Moreover, the person has no way to inform potential or actual picture takers of his/her self-chosen restrictions on how the image shall be handled. Likewise, a conscientious photographer may not have the chance to ask all the people whose images he/she captured for their consent to use them. In any case, the person’s right to control how his/her image is used is lost due to a gap in the communication and control path from the person to the photographer and/or publisher of the photo.|
|How do you encode this information?|
|A modular visual coding system is used to convey the policy information across the communication gap described above. The policy is embedded in the visual information of the photograph itself (e.g., as part of the clothing), making it an inseparable part of the picture, so that it is highly likely to survive along the publishing path. Under favorable conditions, this information is hidden well enough to go unnoticed by the human eye.|
|What does this encoding look like?|
|Many garments are made with some pattern or print. By varying the appearance of this pattern slightly, information can be encoded. This information can then be extracted automatically by social networks, publishing websites, and search engines.|
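As a rough illustration of the idea, one could imagine encoding bits as subtle variations in the widths of a repeating stripe pattern. The following Python sketch is purely hypothetical (the constants and the stripe-width scheme are assumptions for illustration, not the actual P3F coding system):

```python
# Hypothetical sketch: hide bits in small variations of a repeating
# stripe pattern. Illustration only -- NOT the actual P3F code.

BASE_WIDTH = 10  # nominal stripe width in pixels (assumed)
DELTA = 1        # subtle deviation, ideally below visual notice (assumed)

def encode(bits):
    """Map each bit to a stripe slightly wider (1) or narrower (0) than nominal."""
    return [BASE_WIDTH + DELTA if b else BASE_WIDTH - DELTA for b in bits]

def decode(widths):
    """Recover bits by comparing each measured stripe to the nominal width."""
    return [1 if w > BASE_WIDTH else 0 for w in widths]

bits = [1, 0, 1, 1, 0]
assert decode(encode(bits)) == bits
```

A real system would of course need error correction and robustness against perspective distortion, compression, and rescaling; this sketch only conveys the basic principle of modulating a visible pattern.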
|So you are using DRM techniques for the privacy of ordinary people instead of for the benefit of the big content industries?|
|Yes, if you like.|
|Can the big internet companies be forced to obey my encoded privacy restrictions?|
|Maybe. There are a few examples in the past where exactly this has been done.
After a public outcry shortly after the introduction of Google Street View, the service started to blur faces and license plates. In Germany, Google additionally agreed to provide an opt-out feature after the Minister of Justice of Rhineland-Palatinate, the data protection supervisor for Schleswig-Holstein, and Germany’s Federal Consumer Protection Minister threatened the company with legal action. Since 2009, German homeowners have been able to have the images of their homes blurred.
Another example is the integration of a banknote detection algorithm into popular software (e.g., Photoshop and PaintShop Pro), several printers, several scanners, and most color copying machines. In 2004, the Central Bank Counterfeit Deterrence Group (founded by the G10) published a Counterfeit Deterrence System software module for detecting banknotes, which has subsequently found its way into many products even though it is available only as a closed-source module and there is no legal obligation for companies to include it.
In any case, if P3F becomes an accepted standard, no one handling pictures professionally will be able to claim that he/she did not know about your wishes regarding your own picture.|
|So, you are a robots.txt for real world objects?|
|With P3F you can restrict the usage of your personal image in more ways than just excluding it from search engines. Our framework consists of three simple person-related restrictions and two picture-wide restrictions.
The Do not Search flag specifies that the user does not want to be found through an internal or external search engine using a person-specific keyword. This includes the person’s real name, user name, birth date, and any other indexable data. Furthermore, it includes other images (e.g., “find similar faces,” “find other pictures of the same user”) or joined data (e.g., “other customers who bought this product,” “friends of the person”). In the case of Facebook, the user accepts being identified (“tagged”) in a photo but does not want this photo to show up if someone searches for his/her name or visits his/her timeline.
The Do not Identify flag specifies that the user does not want to be identified in a picture. This includes automatic face identification as well as manual name tagging by other users. If this information should become available by other means despite this specification, it is not to be included in a search index.
The Do not Publish flag specifies that the user does not want any pictures of him or her published. If the person is not the main subject (e.g., his/her image was captured unintentionally), his or her face should be blurred, pixelated, or covered to make identification impossible. The publisher (e.g., a newspaper editor, blog writer, or uploading social network user) can also crop the picture to exclude the person in question. A modern publishing system can blur faces automatically in accordance with the P3F policy.|
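The three person-related restrictions above lend themselves to a bitmask representation in a decoder. The following Python sketch shows one possible way a picture-handling system might model and query them; the flag names and helper functions are illustrative assumptions, not a specified P3F API:

```python
# Hypothetical model of the three person-related P3F restrictions as a
# bitmask. Names and helpers are assumptions for illustration.
from enum import Flag, auto

class P3FFlag(Flag):
    NONE = 0
    DO_NOT_SEARCH = auto()    # exclude from person-specific search results
    DO_NOT_IDENTIFY = auto()  # no automatic face identification or name tagging
    DO_NOT_PUBLISH = auto()   # blur/pixelate/crop before publication

def may_index_in_search(policy: P3FFlag) -> bool:
    """May a search engine return this picture for person-specific queries?"""
    return not (policy & P3FFlag.DO_NOT_SEARCH)

def must_anonymize(policy: P3FFlag) -> bool:
    """Must a publisher blur, pixelate, or crop the person before publishing?"""
    return bool(policy & P3FFlag.DO_NOT_PUBLISH)

policy = P3FFlag.DO_NOT_SEARCH | P3FFlag.DO_NOT_PUBLISH
assert not may_index_in_search(policy)
assert must_anonymize(policy)
```

A combined policy is simply the bitwise OR of the individual flags, so a decoder can extract all restrictions from a few bits of the visual code.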
|Under what license are the artefacts of this project distributed?|
|Artefacts will be distributed under a Creative Commons (CC BY-SA) license; source code will be published under the GPLv2 or a similar license. Scientific publications will be offered for free download under the preprint/open-access/self-publishing license of the appropriate conference. Citations are welcome.|