Category Archives: Data Protection

Personal use of work devices – an Irish judgment

A frequent headache for data protection practitioners and lawyers is how to separate (conceptually and actually) professional and personal information on work devices and accounts. It is a rare employer (and an even rarer employee) who doesn’t encounter a mix of the two categories.

But, if I use, say, my work phone to send a couple of text messages (as I did on Saturday after the stupid SIM in my personal phone decided to stop working), who is the controller of the personal data involved in that activity? I’d be minded to say that I am (and that my employer becomes, at most, a processor).

That is also the view taken by the High Court in Ireland, in an interesting recent judgment.

The applicant was an employee of the Health Service Executive (HSE), and did not, in this case, have authority or permission to use his work phone for personal use. He nonetheless did so, and then claimed that a major data breach in 2021 at the HSE led to his personal email account and a cryptocurrency account being hacked, with a resultant loss of €1,400. He complained to the Irish Data Protection Commission (DPC), which said that, as his personal use was not authorised, the HSE was not the controller in respect of the personal data at issue.

The applicant sought judicial review of the DPC decision. This of course meant the application would only succeed if it met the high bar of showing that the DPC had acted unlawfully or irrationally. That bar was not met, with the judge holding that:

The DPC did not purport to adopt an unorthodox interpretation of the definition of data controller. Instead, against the backdrop of the factual matrix before it, it found that the HSE had not “determined the purposes and means of the processing” of the data relating to the Gmail, Yahoo, Fitbit and Binance accounts accessed by the applicant on his work phone. That finding appears to me to be self-evident, where that use of the phone clearly was not authorised by the HSE.

I think that has to be correct. But I’m not sure I quite accept the full premise, because I think that even if the HSE had authorised personal use, the legal position would be the same (although possibly not quite as unequivocally so).

I’m genuinely interested in others’ thoughts, though.

The views in this post (and indeed most posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.

Leave a comment

Filed under controller, Data Protection, employment, GDPR, Ireland, judgments, Uncategorized

Subject access, Leeds United, and ****

[reposted from my LinkedIn account]

You’d have thought most football fans would be keen to prove they’d not attended a Leeds United match [#bantz], but when Melvyn Flower was told by the club he couldn’t renew his season ticket for next season, because he’d not used his current one often enough, he resorted to data protection law to vindicate his support for the club.

The information disclosed to him showed that he attended matches on all the occasions the club had said he hadn’t.

I don’t quite understand how the club searched for and disclosed his personal data without realising its mistake when doing so (maybe he asked for footage from a specific camera near his reserved seat). But in any case, it’s a nice little story, and topped off with an excellent point from Mr Flower:

Why would I buy a season ticket and not go this season, of all seasons, given the **** I’ve sat through since 1978?

1 Comment

Filed under Data Protection, not-entirely-serious, Sport, subject access

Machine learning lawful basis on a case-by-case approach – really?

The Information Commissioner’s Office has published its response to the government’s consultation on Copyright and AI. There’s an interesting example in it of an “oh really?!” statement.

The government proposes that, when it comes to text and data mining (TDM) of datasets that contain copyright works, a broad exception to copyright protection should apply, under which “AI developers would be able to train on material to which they have lawful access, but only to the extent that right holders had not expressly reserved their rights”. Effectively, rights holders would have to opt out of “allowing” their works to be mined.

This is highly controversial, and may be the reason that the Data (Use and Access) Bill has stalled slightly in its passage through Parliament. When the Bill was in the Lords, Baroness Kidron successfully introduced a number of amendments in relation to the use of copyright works for training AI models, saying that she feared that the government’s proposals in its consultation “would transfer [rights holders’] hard-earned property from them to another sector without compensation, and with it their possibility of a creative life, or a creative life for the next generation”. Although the government managed to have the Baroness’s amendments removed at committee stage in the Commons, the debate rumbles on.

The ICO’s response to the consultation notes the government’s preferred option of a broad TDM exception, with opt-out, but says that, where personal data is contained in the training data, such an exception would not “in and of itself constitute a determination of the lawful basis for any personal data processing that may be involved under data protection law”. This must be correct: an Article 6(1) UK GDPR lawful basis will still be required. But it goes on to say “the lawfulness of processing would need to be evaluated on a case-by-case basis”. A straightforward reading of this is that for each instance of personal data processing when training a model on a dataset, a developer would have to identify a lawful basis. But this, inevitably, would negate the whole purpose of using machine learning on the data. What I imagine the ICO intended to mean was that a developer should identify a broad, general lawful basis for each dataset. But a) I don’t think that’s what the words used mean, and b) I struggle to reconcile that approach with the fact that a developer is very unlikely to know exactly what personal data is in a training dataset, before undertaking TDM – so how can they properly identify a lawful basis?
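To make the practical difficulty concrete, below is a minimal Python sketch of the two readings. It is purely illustrative and rests on my own assumptions: every function name is hypothetical, and identify_lawful_basis() stands in for a legal assessment that cannot in reality be reduced to code.

```python
# Purely illustrative sketch – all names are hypothetical, and
# identify_lawful_basis() is a placeholder for a legal judgment,
# not something a function can actually decide.

def contains_personal_data(record: str) -> bool:
    # In reality a developer often cannot know this before mining –
    # which is the difficulty noted above.
    return "@" in record  # crude stand-in for a detector

def identify_lawful_basis(item: object) -> str | None:
    # Placeholder for an Article 6(1) UK GDPR assessment.
    return "legitimate interests"

def train_case_by_case(dataset: list[str]) -> None:
    # The literal reading: a lawful-basis determination for each
    # instance of processing – unworkable at training-corpus scale.
    for record in dataset:
        if contains_personal_data(record) and identify_lawful_basis(record) is None:
            continue  # this record could not lawfully be processed
        pass  # ingest record for training

def train_per_dataset(dataset: list[str]) -> None:
    # The reading the ICO presumably intended: one broad determination
    # for the dataset as a whole, made before mining begins.
    if identify_lawful_basis(dataset) is not None:
        for record in dataset:
            pass  # ingest record for training
```

Neither version resolves the underlying problem – that the contents of a large scraped dataset are generally unknown before mining – but the contrast shows why the choice of reading matters.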

I should stress that these are complex and pressing issues. I don’t have answers. But opponents of the consultation are likely to jump on anything they can.

The views in this post (and indeed most posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.

1 Comment

Filed under AI, Data Protection, datasets, DUAB, Information Commissioner, Lawful basis, parliament, Uncategorized

The legality of data processing in the course of litigation

There is some very convoluted litigation taking place, at the centre of which is a witness statement prepared by a solicitor acting for a number of insurance companies defending personal injury claims arising from road traffic accidents (RTAs). Part of the argument (and a satellite claim) has now come to be about compliance with data protection law.

Five original claims were made for damages arising from RTAs. The defendant insurance companies were represented by law firm DWF, and one of DWF’s solicitors prepared a witness statement containing an analysis of claims data collected by DWF in relation to a number of claims submitted by claimants represented by the solicitors who acted on behalf of the five claimants. The statement sought to show that, in an unusually high number of the claims, claimants had been referred for further psychological assessment to a doctor who in 100% of those cases diagnosed a psychiatric condition, and in two-thirds of those cases said that the recovery period would be over two years. In short, a large number of claimants in the relevant RTAs appeared to develop long-term psychiatric conditions.

The claimant sought, unsuccessfully, to have the witness statement excluded, although the judge (on appeal) noted that it would be “for the Judge at trial to make of this evidence what they will [although] there are questions as to the extent to which this evidence assists without more in proving fundamental dishonesty”.

Notwithstanding this, an initial 317 claims (now reduced to three) were then made by people whose personal data was accepted to have been processed by DWF for the purposes of preparing the witness statement above. The claims here are for various breaches of the UK GDPR (such as excessive processing, and lack of fairness, lawful basis and transparency).

In a judgment handed down on 1 April, on an application by the claimants for specific disclosure in the UK GDPR claim (and an application by the defendant to amend its defence and strike out a witness statement of the claimants’ solicitor), Mrs Justice Eady DBE dismissed the disclosure applications (made under various headings), on the basis that much of the information would clearly be privileged material, or not relevant, or that the application was a fishing expedition.

If this gets to trial, though, it will be interesting. This sort of processing of personal data routinely takes place in the course of (non-data-protection) private litigation, and it is generally not assumed that any issues of illegality arise. Any ultimate findings would be notable for litigators, and for those who need to advise them on data protection compliance.

The views in this post (and indeed most posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.

Leave a comment

Filed under Data Protection, judgments, litigation, UK GDPR

A new data protection duty?

I’ve been looking in more detail at the recent subject access judgment in Ashley v HMRC. One key point of general application stands out for me: in some cases (i.e. where it is necessary for intelligibility purposes) a controller has a duty to provide contextual information in addition to copies of personal data.

As the judge put it:

Article 15(1) and 15(3), read with Article 12(1) and (2) of the UK GDPR, did require the Defendant to go beyond providing a copy of the Claimant’s personal data where contextual information was necessary for that personal data to be intelligible in the sense of enabling the data subject to exercise their rights conferred by the UK GDPR effectively. It follows that insofar as the Defendant did not adopt this approach, it was in breach of this duty.

And although she couched the following as “guidance” for HMRC when reconsidering the request, I feel it has general application:

…it is unlikely that providing an extract that simply comprises the Claimant’s name or his initials or other entirely decontextualised personal data of that sort, will amount to compliance with this obligation.

In arriving at this conclusion the judge drew in part on both pre- and post-Brexit case law of the Court of Justice of the European Union. Most notably, she decided to have regard to case C-487/21: even though post-Brexit CJEU case law does not bind the domestic courts, the effect of section 6(2) of the European Union (Withdrawal) Act 2018 is that courts may have regard to it where it is relevant to the matter before them.

Of course, there are also times when merely providing a snippet in the form of a name constitutes a failure to provide all of the personal data in scope (omitting the final five words of “Jon Baines works at Mishcon de Reya” would be to omit some of my personal data). But the “context duty” seems to me to go further, and creates, where it is necessary, an obligation to provide information beyond what is in the source documents.

Most of the other points in the judgment, as important as they were to the facts, and as interesting as they are (particularly on the concept of “relating to” in the definition of “personal data”), will not necessarily change things for most data subjects and controllers.

But this “context duty” feels to me to be an advancement of the law. And I suspect controllers can now expect to see data subjects and their lawyers, when making subject access requests (or when challenging responses), begin to argue that the “context duty” applies.

The views in this post (and indeed most posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.

Leave a comment

Filed under Data Protection, GDPR, judgments, subject access, UK GDPR

O’Carroll v Meta – what now for targeted adverts on Facebook?

Following the news that claimant Tanya O’Carroll and defendant Meta have settled ahead of what was likely to be a landmark data protection case, what are the implications?

Ms O’Carroll argued that advertising served to her on Facebook, because it was targeted at her, met the definition of “direct marketing” under section 122(5) of the Data Protection Act 2018 (“the communication (by whatever means) of advertising or marketing material which is directed to particular individuals”) and thus the processing of her personal data for the purposes of serving that direct marketing was subject to the absolute right to object under Article 21(2) and (3) UK GDPR.

Meta had disputed that the advertising was direct marketing.

The “mutually agreed statement” from Ms O’Carroll says “In agreeing to conclude the case, Meta Platforms, Inc. has agreed that it will not display any direct marketing ads to me on Facebook, will not process my data for direct marketing purposes and will not undertake such processing (including any profiling) to the extent it is related to such direct marketing”.

One concludes from this that Meta will, at least insofar as the UK GDPR applies to its processing, now comply with any Article 21(2) objection, and, indeed, that is how it is being reported.

But will the upshot of this be that Meta will introduce ad-free services in the UK, but for a charge (because its advertising revenues will be likely to drop if people object to targeted ads)? It is indicating so, with a statement saying “Facebook and Instagram cost a significant amount of money to build and maintain, and these services are free for British consumers because of personalised advertising. Like many internet services, we are exploring the option of offering people based in the UK a subscription and will share further information in due course”.

The ICO intervened in the case and has uploaded a summary of its arguments, which were supportive of Ms O’Carroll’s case; her lawyers AWO Agency have also posted an article on the news.

The views in this post (and indeed most posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.

Leave a comment

Filed under Data Protection, Data Protection Act 2018, facebook, Information Commissioner, marketing, Meta, Right to object, UK GDPR

Why is the ICO so quiet about prosecutions?

Not infrequently, I get contacted (personally and professionally) by individuals who are concerned that their personal data has been compromised in circumstances that may constitute the criminal offence of unlawfully “obtaining” or “retaining” personal data under section 170 of the Data Protection Act 2018.

In many cases, there is not much I can bring to the table. If an offence has been committed then this is a matter for the prosecutor. Normally, for data protection offences, this is the Information Commissioner’s Office.

But what strikes me is that there appears to be no information on the ICO website for anyone who wants to report an alleged or potential offence. Their “For the public” pages don’t cover the scenario, and all of the data protection complaints information there is predicated on the assumption that the individual will be complaining about a data controller’s compliance (whereas, in a section 170 offence, the controller has more the status of “victim”).

In fact, the best I can find is one brief reference (at page 61) in a lengthy guide to the DPA 2018, aimed at “organisations and individuals who are already familiar with data protection law”, which doesn’t actually explain that the offences described can be prosecuted by the ICO.

Dr David Erdos has recently highlighted both the low number of ICO prosecutions, and the rather slapdash way in which the ICO appears to be handling information about them. But the section 170 provisions are criminal ones for a reason: they will sometimes involve the most distressing and serious interferences with people’s data protection and privacy rights.

Surely the ICO should pay more attention to such incidents, and assist concerned data subjects (or others) who might want to report potential offences?

The views in this post (and indeed most posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.

Leave a comment

Filed under Data Protection, Data Protection Act 2018, Information Commissioner, offences

Concerns over the Public Authorities (Fraud, Error and Recovery) Bill

When it comes to proposed legislation, most data protection commentary has understandably been on the Data (Use and Access) Bill, but it’s important also to note some of the provisions of the Public Authorities (Fraud, Error and Recovery) Bill, introduced in the Commons on 25 January.

The abandoned Tory Data Protection and Digital Information Bill would have conferred powers on the DWP to inspect bank accounts for evidence of fraud. To his credit, the Information Commissioner John Edwards, in evidence given on that earlier Bill, had warned about the “significant intrusion” those powers would have created, and said that he had not seen evidence to assure him that they were proportionate. This may be a key reason why they didn’t reappear in the DUA Bill.

The Public Authorities (Fraud, Error and Recovery) Bill does, however, at clause 74 and schedule 3, propose that the DWP will be able to require banks to search their own data to identify whether recipients of Universal Credit, ESA and Pension Credit meet criteria for investigation for potential fraud.

But such investigative powers are only as good as the data, and the data governance, in place. And as the redoubtable John Pring of Disability News Service reports, many disabled activists are rightly concerned about the potential for damaging errors. In evidence to the Bill Committee one activist noted that “even if there was an error rate of just 0.1 per cent during this process, that would still mean thousands of people showing up as ‘false positives’, even if it just examined those on means-tested benefits”.
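The arithmetic behind that observation is easy to check. As a rough illustration (the claimant figure below is an assumed round number for the sake of the sum, not an official statistic or a figure from the evidence):

```python
# Illustrative arithmetic only: the claimant count is an assumed
# round figure, not taken from the Bill or the Committee evidence.
claimants = 6_000_000   # assumed number of people on the relevant benefits
error_rate = 0.001      # the 0.1% error rate posited in evidence

false_positives = claimants * error_rate
print(f"{false_positives:,.0f} people wrongly flagged")  # 6,000 people wrongly flagged
```

Even at a very low error rate, the absolute numbers come out as the activist describes: thousands of false positives.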

The Bill does not appear to confer any specific role on the Information Commissioner in this regard, although there will be an independent reviewer, and – again, creditably – the Commissioner has said that although he could not be the reviewer himself, he would expect to be involved.

It is also worth reading the concerns of the Public Law Project, contained in written evidence to the Bill committee.

The views in this post (and indeed most posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.

Leave a comment

Filed under accuracy, Data Protection, data sharing, Information Commissioner

Clarity needed on NHS publication of reports into homicides

[reposted from my LinkedIn account]

Does the law need clarifying on the publication of reviews into homicides by those receiving mental health services from the NHS?

The Times led recently on stories that NHS England was refusing to publish the full independent report into the health care and treatment of Valdo Calocane prior to his manslaughter of three people in Nottingham in 2023. NHSE apparently argued that data protection and patient confidentiality concerns prevented them publishing anything but a summary. Under pressure from victims’ families, and the media, NHSE about-turned, and the full report is reported to contain damning details of failings in Calocane’s treatment which were not in the summary version.

Now The Times reports that this is part of a pattern, since last year, of failure to publish full reviews of homicides by mental health patients, contrary to previous practice. It says that NHSE received legal advice that the practice “could breach data protection rules and the killers’ right to patient confidentiality”. The charity Hundred Families talks of cases where the names of victims, or even the identity of the NHS Trust involved, are not published.

Of course, without seeing the advice, it is difficult to comment with any conviction, but I did write in recent days about how the law can justify publication where it is “necessary for a protective function”, such as exposing malpractice or failures in services. And it’s important to note that, in many cases, such reports show failings that mean that the killers themselves were let down by the inadequacy of their treatment: publication can surely, in some cases, cast light on this so that similar failings don’t happen in the future. In any case, guidance says that those preparing reports should do so with a view to their being published, and so confidentiality concerns should be taken into account in the drafting.

However, if NHSE remains concerned about the legality of publication, and if its legal advice continues to say that data protection and medical confidentiality law militate against disclosure, it strikes me that this might call for Parliament to legislate. I also believe that it would be welcome if the Information Commissioner’s Office issued a statement on the legal issues arising.

The views in this post (and indeed most posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.

Leave a comment

Filed under access to information, Confidentiality, Data Protection, Information Commissioner, NHS

Is the legal sector really suffering a flood of data breaches?

[reposted from my LinkedIn account]

There have been various articles in the media recently, reporting a significant rise in personal data breaches reported by the legal sector to the Information Commissioner’s Office. I have some real doubts about the figures.

An example article says:

A new analysis of data from the Information Commissioner’s Office (ICO) by NetDocuments has revealed a sharp increase in data breaches across the UK legal sector. In the period between Q3 2023 and Q2 2024, the number of identified data breaches in the UK legal sector rose by 39% (2,284 cases were reported to the ICO, compared to 1,633 the previous year)

But something didn’t seem right about those numbers. The ICO say that they have received 60,607 personal data breach reports since their current reporting methods began in Q2 2019 (see their business intelligence visualised database), so it seemed remarkable to suggest that the legal sector was scoring so highly. And, indeed, when I look at the ICO BI data for self-reported personal data breaches, filtered for the legal sector, I see only 197 reported in Q3 2023, and, coincidentally, 197 in Q2 2024 (see attached visuals) – an increase from one relatively low number to another relatively low number of precisely 0%.
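For anyone who wants to check my working, the percentage-change sums are trivial – here they are in Python, using the figures quoted in the article alongside the figures I read off the ICO dashboard:

```python
def pct_change(old: float, new: float) -> float:
    """Percentage change from old to new."""
    return (new - old) / old * 100

# Figures quoted in the NetDocuments analysis:
print(f"{pct_change(1633, 2284):.1f}%")  # 39.9% – roughly the claimed 39%

# Figures I read off the ICO dashboard (legal sector, Q3 2023 vs Q2 2024):
print(f"{pct_change(197, 197):.1f}%")    # 0.0%
```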

A serious question to those more proficient with data than I am – am I missing something?

If I’m not, I really think the ICO should issue some sort of corrective statement.

The views in this post (and indeed most posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.

Leave a comment

Filed under Data Protection, data security, Information Commissioner, personal data breach