Category Archives: fairness
As far as I know the Information Commissioner has never investigated this issue (I’ve made an FOI request to find out more), but this, on the Mishcon site, is an overview of the key issue.
My firm is acting for the students, and there’s a link to the detailed grounds in this explanatory piece.
One thing in particular struck me about the statement from the Information Commissioner’s Office (ICO) in response to the huge distress and uncertainty facing thousands of students and their families, following the announcement of A-level grades:
Anyone with any concerns about how their data has been handled should raise those concerns with the exam boards first, then report to us if they are not satisfied
In some ways, this is standard. Even the ICO’s “contact us” page leads a potential complainant through various stages, and anyone who has not yet raised their concerns by “contacting the [offending] organisation in writing” is first told to “Raise your concern with the organisation handling your information”.
Whilst I can understand the reason for this general approach (the ICO’s resources are limited, and many complaints can no doubt be resolved at source), it is difficult to reconcile it with what the law requires the ICO to do. Article 77 GDPR says that a supervisory authority must handle complaints lodged by a data subject, and investigate, to the extent appropriate, the subject matter of the complaint. There is no caveat, no exemption. It does leave open the option for the ICO to handle a complaint, and choose not to investigate it at all, but that is not what the ICO is doing here (and in its general approach).
But it must be said that sometimes, as it is permitted to under Articles 57 and 58, the ICO does conduct investigations of its own volition. It also has a range of powers, including the power to give an opinion to parliament and/or the government. Given that its Norwegian counterpart has indicated it will take strong action against the International Baccalaureate Organisation, I am hopeful that, as a new week of uncertainty for students approaches, the ICO will take this particular bit between its teeth and properly investigate such a pressing issue.
The views in this post (and indeed most posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.
The first principle of GDPR says that personal data shall be processed in a transparent manner. Articles 13 and 14 set out what information must be provided to data subjects to comply with that principle; where the data are collected directly from the data subject, that information must be provided at the time of collection.
As the Information Commissioner’s Office (ICO) says:
Individuals have the right to be informed about the collection and use of their personal data. This is a key transparency requirement under the GDPR. [emphasis added]
Getting the right to be informed correct can help you to comply with other aspects of the GDPR and build trust with people, but getting it wrong can leave you open to fines and lead to reputational damage
If you read the ICO’s Guide to GDPR, it is largely predicated on the understanding that privacy notices will be made available to data subjects, effectively as a prerequisite to overall compliance.
So, one thing a data controller must – surely – prioritise (and have prioritised, in advance of GDPR becoming applicable in May 2018) is the preparation and giving of appropriate privacy notices, including to its own employees.
With that in mind, I was interested, surprised, astounded, well-and-truly-gobsmacked to see an admission, on the “WhatDoTheyKnow” website, that the ICO itself has – almost a year on from GDPR’s start – not yet prepared a GDPR privacy notice for its own staff, let alone given them one:
I can confirm we do not currently hold the information you have requested. The privacy notice for ICO employees is currently under construction.
As getting the right to be informed wrong can leave one open to fines (as well as reputational damage), one wonders whether the ICO is considering fining itself for this fundamental infringement of a fundamental right.
The views in this post (and indeed all posts on this blog, unless they indicate otherwise) are my personal ones, and do not represent the views of any organisation I am involved with.
Wired’s Matt Burgess has written recently about the rise of fake pornography created using artificial intelligence software, something that I didn’t know existed (and now rather wish I hadn’t found out about):
A small community on Reddit has created and fine-tuned a desktop application that uses machine learning to morph non-sexual photos and transplant them seamlessly into pornographic videos.
The FacesApp, created by Reddit user DeepFakesApp, uses fairly rudimental machine learning technology to graft a face onto still frames of a video and string a whole clip together. To date, most creations are short videos of high-profile female actors.
The piece goes on to discuss the various potential legal restrictions or remedies which might be available to prevent or remove content created this way. Specifically within a UK context, Matt quotes lawyer Max Campbell:
“It may amount to harassment or a malicious communication,” he explains. “Equally, the civil courts recognise a concept of ‘false privacy’, that is to say, information which is false, but which is nevertheless private in nature.” There are also copyright issues for the re-use of images and video that wasn’t created by a person.
However, what I think this analysis misses is that the manipulation of digital images of identifiable individuals lands this sort of sordid practice squarely in the field of data protection. Data protection law relates to “personal data” – information relating to an identifiable person – and “processing” thereof. “Processing” is (inter alia)
any operation…which is performed upon personal data, whether or not by automatic means, such as…adaptation or alteration…disclosure by transmission, dissemination or otherwise making available…
That pretty much seems to encapsulate the activities being undertaken here. The people making these videos would be considered data controllers (persons who determine the purposes and means of the processing), and subject to data protection law, with the caveat that, currently, European data protection law, as a matter of general principle, only applies to processing undertaken by controllers established in the European Union. (In passing, I would note that the exemption for processing done in the course of a purely personal or household activity would not apply to the extent that the videos are being distributed and otherwise made public).
Personal data must be processed “fairly”, and, as a matter of blinding obviousness, it is hard to see any way in which the processing here could conceivably be fair.
Whether victims of this odious sort of behaviour will find it easy to assert their rights, or bring claims, against the creators is another matter. But it does seem to me that here, unlike in some other cases, data protection law (within a European context/jurisdiction) potentially provides a primary initial means of confronting the behaviour.
The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.