Category Archives: Data Protection

Disclosing details of successful candidates from jobs

[reposted from my LinkedIn account]

Jones v Secretary of State for Health And Social Care [2024] EWCA Civ 1568

A question for data protection advisers. If you are asked by an unsuccessful candidate for a job what the age, gender and ethnic origin of the successful candidate was, do you disclose? (And what is your Article 6 basis and Article 9 UK GDPR condition for doing so?)

These questions are prompted by an interesting employment case in the Court of Appeal.

The appellant, who self-describes as black Caribbean, interviewed for a business development role at Public Health England (PHE) on 28 March 2019 but was not told, despite chasing, until 3 July 2019 that he had been unsuccessful. This was already outside the primary three month limitation period for bringing a claim in the employment tribunal (ET).

He then asked PHE for the “age, gender and ethnic origin” of the successful candidate, explaining that he needed the information to decide whether or not to make a claim in the ET.

It is not entirely clear what then happened: it’s suggested that PHE initially refused, but told the claimant he could make an FOI request, and there is also a suggestion that he was told that if he provided proof of his identity they would provide the information. In any event, he was not informed until much later in the proceedings that the successful candidate was white British.

His ET claim for discrimination was, therefore, submitted out of time. The ET can only extend the time for such a claim where it is “just and equitable” to do so, and, here, the ET held that it was not: he put off making his claim “because he was on an information gathering exercise. He was looking for the evidence to bolster his claim…Despite the Claimant’s criticisms, the respondent did in fact provide him with information and an explanation of its actions quite early on in the chronology. It gave him enough information to know that there was a claim for him to make if he wanted to present it to the Tribunal”. And, in any case, the ET dismissed the claim on its merits.

On appeal to the Employment Appeal Tribunal (EAT) the claimant submitted that it had been perverse of the ET to refuse to exercise its discretion to extend the time for making the application, but the EAT held that the ET had made no error of law in that regard.

The Court of Appeal felt differently: it was wrong for the ET to have held that the claimant had had, much earlier, the “raw materials” on which to formulate his claim, and although it was correct that he was looking for information to bolster his claim, this ought not to have been held against him. “The information he was seeking about the ethnicity of the successful candidate was an essential part of his claim”.

Accordingly, the ET’s decision not to extend time under the “just and equitable” test was perverse; the order of the EAT upholding that decision was set aside, and the case on the merits was remitted to the EAT.

And I guess my answer to my own questions at the start of this post would be: one or both of Articles 6(1)(c) and 6(1)(f), and Article 9(2)(f). But in all those cases, it’s going to be difficult for the controller to make the appropriate call on whether the request for information means that it’s necessary to make the disclosure, or whether it’s just a frivolous or aimless request.

The views in this post (and indeed most posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.


Filed under access to information, Data Protection, employment, judgments

Data protection claims against persons unknown

[reposted from my LinkedIn account]

Chirkunov v Person(s) Unknown & Ors [2024] EWHC 3177 (KB)

This is an important, and quite withering, judgment from Mr Justice Nicklin, which ends with a suggestion that, from now on, applications for permission to serve a Claim Form on ‘Persons Unknown’ out of the jurisdiction in claims in the Media and Communications List should not be dealt with without a hearing, unless a Master or Judge directs that a hearing is not necessary. The judgment records that, before hand-down, Nicklin J consulted the Judges in charge of the MAC List, and they endorsed his suggestion as the practice now to be followed in that List.

The judgment is on an application to serve, out of the jurisdiction, a data protection claim on two persons unknown (the publishers of two websites said to infringe the data protection rights of the claimant). The claimant initially applied for the orders to be made without a hearing, but Mrs Justice Steyn gave directions for there to be a hearing.

Nicklin J was clearly unimpressed by the limited efforts the claimant and his lawyers had made to identify/locate the defendants, noting that the Norwich Pharmacal procedures had been available to the claimant, and concluded that “the Claimant has simply chosen not to pursue several avenues of investigation, including applications for Norwich Pharmacal relief. The basis for this decision is unpersuasive and unimpressive. On the evidence that has been provided, I am left with a very clear impression that the Claimant thought that he could avail himself of a simple short-cut – avoiding the cost of further investigations to identify the Defendants – by the expedient of issuing a claim against ‘Persons Unknown’”.

For this and other reasons the judge was also unwilling to give permission to serve out on persons unknown. Although such litigation can serve a purpose in some blackmail/cyber attack cases, for instance to “obtain interim remedies which can be used to counter the defendant’s threat to publish information that forms the basis of the blackmail/extortion threat”, he was not prepared to permit “litigation against someone who cannot be identified other than a description of his/her role, and with no indication of the state in which s/he is domiciled”.

Also notable was the judge’s approach to the part of the application which sought a declaration that the personal data on the website was inaccurate. The claimant was not “entitled” to such a declaration, and, in fact (Cleary v Marston Holdings and Aven v Orbis applied) declarations are not provided for under the data protection legislation and not generally granted in such litigation. The judge had “real difficulty in imagining the circumstances in which the Court would grant a declaration of “inaccuracy” in a data protection claim following a default judgment”.

The application was refused on all grounds.



Filed under Data Protection, judgments, Norwich Pharmacal

Could the right to erasure in data protection law break AI?

[reposted from my LinkedIn account]

I ask this only partly in jest.

The story of how ChatGPT refused to acknowledge the existence of “David Mayer” and some others, perhaps (probably?) because people with those names had exercised their erasure rights (such as the right under Article 17 of the GDPR and the UK GDPR), raises an interesting question: if a sufficient number of people made such requests, would the LLM begin to fail?

If so, a further question of rights arises. If I, Jon Baines, exercise my erasure right against ChatGPT (or another platform/LLM), and it suppresses any processing of the words “Jon Baines”, what effect might that have on my namesake Jon Baines, and his travel company? Or Jon the Ocean Specialist working on the Ocean Watch program?

Because the words “Jon Baines”, in isolation, are not my personal data. In isolation, they do not relate to me. A crude response to an erasure request, just as with any of the other crude approaches of which AI is capable (for instance in relation to accuracy), runs the risk of interfering with others’ rights, including their right to operate a business and our rights to freedom of expression.
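The risk can be sketched in a few lines of purely illustrative Python. To be clear, this is not a claim about how ChatGPT or any real platform actually implements erasure; it is a toy model of the kind of crude string-matching suppression the “David Mayer” episode hints at, with an invented function name and name list:

```python
# Hypothetical sketch of crude, string-based erasure suppression.
# A real system would (one hopes) attempt entity disambiguation;
# this one simply blocks any output containing a suppressed name.

SUPPRESSED_NAMES = {"jon baines"}  # names subject to (supposed) erasure requests

def crude_filter(text: str) -> str:
    """Withhold any output that contains a suppressed name, however unrelated."""
    for name in SUPPRESSED_NAMES:
        if name in text.lower():
            return "[output withheld]"
    return text

# The erasure request came from one Jon Baines, but the filter cannot
# tell namesakes apart, so a reference to the travel company is also lost:
print(crude_filter("Jon Baines Tours operates small-group travel."))
# prints: [output withheld]
```

The point of the sketch is that the suppression key is a string, not a person: anything short of resolving *which* Jon Baines a passage relates to will sweep up third parties’ information along with the requester’s.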

I don’t have an answer, but this is one more point, and possible flaw, in AI which will no doubt play out over the coming years.



Filed under AI, Data Protection, erasure

The Data Protection Act 2018 does not “implement” the GDPR

They are separate instruments and the GDPR, pre-Brexit, did not require implementation – as a Regulation of the European Parliament and of the Council of the European Union, it had direct effect.

Since Brexit, by the effect of, among other laws, the European Union (Withdrawal) Act 2018 and the Data Protection, Privacy and Electronic Communications (Amendments etc.) (EU Exit) Regulations 2019, we now have a retained-and-assimilated domestic version of the GDPR, called the UK GDPR.

Most processing of personal data is subject to the UK GDPR. The Data Protection Act 2018 deals with processing that is not subject to it, such as by law enforcement and security service agencies. It also provides some of the conditions and exemptions in relation to processing under the UK GDPR.

[None of this is new, and none of it will be unknown to genuine practitioners in the field, but I’m posting it here as a convenient sign to tap, at appropriate moments.]


Filed under Data Protection, Data Protection Act 2018, GDPR, UK GDPR

Banks to be required to snoop on customers’ accounts

[reposted from my LinkedIn account]

A recently announced “DWP Fraud, Error and Debt Bill” will propose obligations on banks and financial institutions to “examine their own data sets to highlight where someone may not be eligible for the benefits they are being paid” and share relevant information with the Department for Work and Pensions (DWP).

This appears to be a new approach to the broad powers which would have been conferred on the DWP under clause 131 and schedule 11 of the shelved Data Protection and Digital Information Bill. Under those provisions the DWP would have been able to require banks and financial institutions to give general access to customer accounts (rather than on a targeted basis) for the purpose of identifying benefit fraud. Although the proposed powers were subject to a fair deal of criticism on the grounds of disproportionality, they remained in the final version of the bill which would almost certainly have been enacted if Mr Sunak had called a later election.

The DWP Fraud, Error and Debt Bill (which has not yet been introduced into Parliament but will be this session – so probably by Spring 2025) will propose an “Eligibility Verification measure” which, in figurative terms, will result in supply-side snooping on accounts (i.e. by the banks themselves) rather than the demand-side snooping the previous bill would have introduced.

We will have to wait for the details, but one thing is certain – this will require a lot of algorithmic automation, no doubt AI-driven, and the potential for errors will need to be addressed and mitigated.

It will also, surely, be a significant cost burden on banks and financial institutions. Whilst it’s generally hard to muster much sympathy in those circumstances, here we must consider the risk that the lowest-cost, highest-efficiency models which will be adopted may be the least protective of customers’ banking privacy and data protection rights.



Filed under Data Protection, data sharing, DWP, Privacy

ICO Annual Reports 1985 to date

I’ve had to retrieve a lot of these from the National Archives Web Archive.

The sixteenth report looks like it was co-published with the Australian Commissioner.

All reports are published under the Open Government Licence by which the licensor grants a worldwide, royalty-free, perpetual, non-exclusive licence to use the information, subject to conditions.


Filed under Data Protection, Freedom of Information, Information Commissioner

Can directors and trustees of charities be controllers?

[reposted from LinkedIn]

Savva v Leather Inside Out & Ors [2024] EWHC 2867 (KB); Sam Jacobs of Doughty Street Chambers, instructed by Forsters LLP, for the defendants (the applicants in the instant application)

Is it the case that a director or trustee of a charity (which is a controller) cannot be a controller? That, in effect, was one of the grounds of an application by two defendants to strike out and grant summary judgment in a claim arising from alleged failures to comply with subject access requests.

The claim arises from a dispute between the claimant, a former prisoner, employed by a subsidiary of a charity (“Leather Inside Out” – currently in administration), and the charity itself. The claim is advanced against the charity, but also against the charity’s founder and two trustees, who are said on the claim form to be controllers of the claimant’s data, in addition to, or jointly with, the charity.

In a solid judgment, Deputy Master Alleyne rejected the suggestion that such natural persons could not be controllers: the term is given a broad definition in Article 4(7) UK GDPR, and “includes a natural or legal person, public authority, agency or other body and that there may be joint controllers. On plain reading of the provisions, it is incorrect to suggest that an allegation of joint controllers is, per se, not a legally recognisable claim” (re Southern Pacific Loans applied).

However, on the specific facts of this case, the pleading of the claimant (the respondent to the strike out application) failed “to allege any decisions or acts in respect of personal data which were outside the authority of the trustees as agents for [the charity]…the Respondent’s submissions demonstrated he wrongly conflated the immutable fact that a legal person must have a natural person through whom its decisions are carried into effect, with his case that the natural person must be assuming the defined status of data controller in their personal capacity”. That was not the case here – the founder and the trustees had not acted other than as agents for the charity.

Accordingly, the strike out application succeeded (notably, though, the Deputy Master said he had reached his conclusion “not without some caution”).

Assuming the claim goes forward to trial, therefore, it can only be advanced against the charity, as sole controller.




Filed under charities, controller, Data Protection, judgments, subject access, UK GDPR

Dismissed FE teacher’s data protection, MOPI, HRA claims fail

[reposted from LinkedIn]

Claims in misuse of private information, data protection and for breach of the Human Rights Act, by a dismissed further education teacher against Tameside College and three employees are struck out/subject to summary judgment for the defendant.

The claimant was initially suspended after evidence came to light that he had been dismissed from previous roles. The College’s investigation involved the sending of reference requests to two previous employers, and was also informed by disclosures of Facebook and WhatsApp messages which revealed the teacher had, contrary to instruction, communicated with students on social media whilst suspended, and “sent a threatening message to a WhatsApp Group chat comprising members of staff”.

The deputy master found that, in relation to the misuse of private information claims, although the claimant had a reasonable expectation of privacy in the social media messages, “those expectations were greatly outweighed by the need to investigate those messages for the purposes of the disciplinary process”. These claims were accordingly subject to summary judgment for the defendant.

The data protection and human rights claims against individual employees were bound to fail, as they were neither data controllers nor public authorities.

As to the data protection claim against the college, a previous determination by the ICO that the sending of the reference requests was not fair and transparent, because it was contrary to the claimant’s expectations, was wrong: it was “plain that it ought to have been well within the Claimant’s reasonable expectation that, in order to investigate whether he had failed to disclose the fact of his dismissal from those two institutions, each would be contacted and asked about it.”

The college’s processing was lawful under Article 6(1)(b) and (c) of the UK GDPR: “The processing was necessary for the purposes of the contract of employment between the [college] and the Claimant and for the performance of the [college’s] obligations to its other staff, and to safeguard and promote the welfare of its students.” The various safeguarding legal duties and obligations on the college established a clear legal basis for the processing.

Similarly, the human rights claims against the college, which included complaints of unlawful monitoring and surveillance, were bound to fail: “There is no real prospect of establishing a breach of Article 8 for the same reasons that there is no real prospect of establishing misuse of private information. The alleged breaches of Articles 10 and 11 appear to relate to the College’s instructions to the Claimant not to communicate with other staff except with permission. The instruction was plainly a reasonable one made for a legitimate purpose.”

Accordingly, the data protection and Human Rights Act claims were struck out.



Filed under Data Protection, employment, Further education, human rights, Information Commissioner, judgments, LinkedIn Post, misuse of private information

Pacini & Geyer v Dow Jones – at the interface between libel and data protection

[reposted from LinkedIn]

This is an important judgment on preliminary issues (the second preliminary issues judgment in the case – the first was on an unsuccessful strike out application by the defendants) in a data protection claim brought by two businessmen against Dow Jones, in relation to articles in the Wall Street Journal in 2017 and 2018. The claim is for damages and for erasure of personal data which is said to be inaccurate.

It is believed to be the first time in a data protection claim that a court has been required to determine the meaning of personal data as a preliminary issue in an accuracy claim.

Determination of meaning is, of course, something that is common in defamation claims. The judgment is a fascinating, but complex, analysis of the parallels between determining the meaning of personal data in a publication and determining the meaning of allegedly defamatory statements in a publication. Although the judge is wary of importing rules of defamation law, such as the “single meaning rule” and the “repetition rule”, a key part of the discussion is taken up by them.

The single meaning rule, whereby “the court must identify the single meaning of a publication by reference to the response of the ordinary reader to the entire publication” (NT 1 & NT 2 v Google LLC [2018] EWHC 799 (QB)) is potentially problematic in a data protection claim such as this where the claimants argue that it is not the ordinary reader they are concerned about, but a reader who might be a potential business investor.

Similarly, it is not at all clear that the repetition rule, which broadly seeks to avoid a defamatory loophole by which someone argues “but I’m only reporting what someone else said – their words might be defamatory, but mine merely report the fact that they said them”, should carry over to data protection claims, not least because what will matter in defamation claims is the factual matrix at the time of publication, whereas with data protection claims “a claim for inaccuracy may be made on the basis that personal data are inaccurate at the time of the processing complained of, including because they have become misleading or out of date, regardless of whether they were accurate at the time of original publication. In that event, what matters is the factual matrix at the time when relief is sought” (at 66).

Nonetheless, and in a leap I can’t quite follow on a first reading of the judgment, but which seems to be on the basis that the potential problems raised can be addressed at trial when fairness of processing (rather than accuracy) arises, the judge decides to determine meaning on a single meaning/repetition rule basis (at 82-84).

There’s a huge amount to take in though, and the judgment demands close reading (and re-reading). If a full trial and judgment ensue, the case will probably be a landmark one.



Filed under accuracy, Data Protection, Data Protection Act 2018, judgments, UK GDPR

ICO, Clearview AI and Tribunal delays

[reposted from LinkedIn]

On 28 October the Information Commissioner’s Office (ICO) made the following statement in respect of the November 2023 judgment of the First-tier Tribunal allowing Clearview AI’s appeal against the ICO’s £7.5m fine, posting it as an update to its original announcement about appealing:

The Commissioner has renewed his application for permission to appeal the First-tier Tribunal’s judgment to the Upper Tribunal, having now received notification that the FTT refused permission of the application filed in November 2023.

It is extraordinary that it has taken 11 months to get to this point.

So what does this mean?

If a party (here, the ICO) wishes to appeal a judgment of the First-tier Tribunal (FTT) to the Upper Tribunal (UT), it must first make an application to the FTT itself, which must decide “as soon as practicable” whether to grant permission to appeal its own judgment (rules 42 and 43 of the Tribunal Procedure (First-tier Tribunal) (General Regulatory Chamber) Rules 2009).

If the FTT refuses permission to appeal (as has happened here), the application may be “renewed” (i.e. made again) directly to the UT itself (rule 21(2) of the Tribunal Procedure (Upper Tribunal) Rules 2008).

So, here, after 11 months (“as soon as practicable”?) the ICO has just had its initial application refused, and is now going to make an application under rule 21(2) of the UT Rules.

The ICO’s wording in its statement is slightly odd though: it talks of “having now received notification” that the FTT “refused” (not, say, “has now refused”) the November 2023 application. The tense used half implies that the refusal happened at the time and they’ve only just been told. If so, something must have gone badly wrong at the Tribunal.



Filed under Data Protection, GDPR, Information Commissioner, Information Tribunal, judgments, Upper Tribunal