Wired’s Matt Burgess has written recently about the rise of fake pornography created using artificial intelligence software, something that I didn’t know existed (and now rather wish I hadn’t found out about):
A small community on Reddit has created and fine-tuned a desktop application that uses machine learning to morph non-sexual photos and transplant them seamlessly into pornographic videos.
The FacesApp, created by Reddit user DeepFakesApp, uses fairly rudimentary machine learning technology to graft a face onto still frames of a video and string a whole clip together. To date, most creations are short videos of high-profile female actors.
The piece goes on to discuss the various potential legal restrictions or remedies which might be available to prevent or remove content created this way. Specifically within a UK context, Matt quotes lawyer Max Campbell:
“It may amount to harassment or a malicious communication,” he explains. “Equally, the civil courts recognise a concept of ‘false privacy’, that is to say, information which is false, but which is nevertheless private in nature.” There are also copyright issues around the re-use of images and video that weren’t created by the person re-using them.
However, what I think this analysis misses is that the manipulation of digital images of identifiable individuals lands this sort of sordid practice squarely in the field of data protection. Data protection law relates to “personal data” – information relating to an identifiable person – and “processing” thereof. “Processing” is (inter alia)
any operation…which is performed upon personal data, whether or not by automatic means, such as…adaptation or alteration…disclosure by transmission, dissemination or otherwise making available…
That pretty much seems to encapsulate the activities being undertaken here. The people making these videos would be considered data controllers (persons who determine the purposes and means of the processing), and subject to data protection law, with the caveat that, currently, European data protection law, as a matter of general principle, only applies to processing undertaken by controllers established in the European Union. (In passing, I would note that the exemption for processing done in the course of a purely personal or household activity would not apply to the extent that the videos are being distributed and otherwise made public).
Personal data must be processed “fairly”, and, as a matter of blinding obviousness, it is hard to see any way in which the processing here could conceivably be fair.
Whether victims of this odious sort of behaviour will find it easy to assert their rights, or bring claims, against the creators is another matter. But it does seem to me that here, unlike in some other cases, data protection law (within a European context/jurisdiction) potentially provides a primary initial means of confronting the behaviour.
The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.