Category Archives: Let’s Blame Data Protection

Consent is not the only basis

In 2017 I attended a free event run by a “GDPR consultancy”. The presenter confidently told us that we were going to have to get consent from customers in order to process their personal data. One attendee said they worked at the DWP, so how were they going to get consent from benefits claimants who didn’t want to disclose their income, to which the presenter rather awkwardly said “I think that’s one you’ll have to discuss with your lawyers”. Another attendee, who was now most irritated that he’d taken time out from work for this, could hold his thoughts in no longer, and rudely announced that this was complete nonsense.

That attendee was the – much ruder in those days – 2017 version of me.

I never imagined (although I probably should have done) that eight years on the same nonsense would still be spouted.

Just as the Data Protection Act 2018 did not implement the GDPR in the UK (despite the embarrassing government page which, until recently and despite people raising it countless times, said so), and just as the GDPR does not limit its protections to “EU citizens”, so the GDPR and the UK GDPR do not require consent for all processing.

Anyone who says so has not applied a smidgeon of thought or research to the question, and is probably taking content from generative AI, which, on the time-honoured principle of garbage-in, garbage-out, has been in part trained on the existing nonsense. To realise why it’s garbage, they should just start with the DWP example above and work outwards from there.

Consent is one of the six lawful bases, any one or more of which can justify processing. No one basis is better than, or takes precedence over, the others.

To those who know this, I apologise for having to write it down, but I want to have a sign to tap for any time I see someone amplifying the garbage on LinkedIn.

The views in this post (and indeed most posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.

Filed under Data Protection, Data Protection Act 2018, DWP, GDPR, Let's Blame Data Protection, UK GDPR

Crowdstrike and personal data breaches: loss vs unavailability

I ran a poll on LinkedIn in recent days which asked “If a controller temporarily can’t access personal data on its systems because of the Crowdstrike/MSFT incident is it a personal data breach?” 

I worded the question carefully.

50% of the 100-odd people who voted said “no” and 50% said “yes”. The latter group are wrong. I say this with some trepidation because there are people in that group whose opinion I greatly respect. 

But here’s why they, and, indeed, the Information Commissioner’s Office and the European Data Protection Board, are wrong.

Article 4(12) of the GDPR/UK GDPR defines a “personal data breach”. This means that it is a thing in itself. And that is why I try always to use the full term, or abbreviate it, as I will here, to “PDB”. 

This is about the law, and in law, words are important. To refer to a PDB as the single word “breach” is a potential cause of confusion, and both the ICO and the EDPB guidance are infected by and diminished by sloppy conflation of the terms “personal data breach” and “breach”. In English, at least, and in English law, the word “breach” will often be used to refer to a contravention of a legal obligation: a “breach of the law”. (And in information security terminology, a “breach” is generally used to refer to any sort of security breach.) But a “breach” is not coterminous with a “personal data breach”.

And a PDB is not a breach of the law: it is a neutral thing. It is also crucial to note that nowhere do the GDPR/UK GDPR say that there is an obligation on a person (whether controller or processor) not to experience a PDB, and nowhere do GDPR/UK GDPR create liability for failing to prevent one occurring. This does not mean that where a PDB has occurred because of an infringement of other provisions which do create obligations and do confer liability (primarily Article 5(1)(f) and Article 32) there is no potential liability. But not every PDB arises from an infringement of those provisions.

The Article 4(12) definition is “a breach of security leading to the accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access to, personal data transmitted, stored or otherwise processed”. Let us break that down:

  • A breach of security…
  • leading to [one or more of]
  • accidental or unlawful…
  • 1. destruction of…
  • 2. loss of…
  • 3. alteration of…
  • 4. unauthorised disclosure of…
  • 5. unauthorised access to…
  • personal data processed.

If an incident is not a breach of security, then it’s not a PDB. And if it is a breach of security but doesn’t involve personal data, it’s not a PDB. But even if it is a breach of security, and involves personal data, it’s only a PDB if one of the eventualities I’ve numbered 1 to 5 occurs.
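For those who think in code, the three-limb test just described can be sketched as a simple boolean check. This is purely my own illustrative sketch (the function and variable names are invented for demonstration, and it is emphatically not legal advice):

```python
# An illustrative sketch of the Article 4(12) three-limb test described
# above. Names and structure are my own, not any official schema.

# The five eventualities listed in Article 4(12)
ARTICLE_4_12_EVENTUALITIES = {
    "destruction",
    "loss",
    "alteration",
    "unauthorised disclosure",
    "unauthorised access",
}

def is_personal_data_breach(breach_of_security: bool,
                            involves_personal_data: bool,
                            eventualities: set) -> bool:
    """Return True only if all three limbs of the definition are met."""
    if not breach_of_security:
        return False  # not a breach of security -> not a PDB
    if not involves_personal_data:
        return False  # no personal data involved -> not a PDB
    # at least one of the five listed eventualities must have occurred
    return bool(eventualities & ARTICLE_4_12_EVENTUALITIES)

# Temporary unavailability is not one of the five eventualities, so on
# the argument above it does not, by itself, make an incident a PDB:
print(is_personal_data_breach(True, True, {"temporary unavailability"}))  # False
print(is_personal_data_breach(True, True, {"loss"}))                      # True
```

The point the sketch makes is the structural one: all three limbs must be satisfied, and "unavailability" simply does not appear in the set of eventualities.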

Note that nowhere in 1 to 5 is there “unavailability of…” or “loss of access to…”. 

Now, both the ICO and the EDPB read into the words “loss of…personal data…” the meaning, or potential meaning, of “loss of availability of personal data”. But in both cases they appear to do so in the context of saying, in terms, “loss of availability is Article 4(12) ‘loss’ because it can cause harm to data subjects”. I don’t dispute, and nor will many millions of people affected by the Crowdstrike incident, that unavailability of personal data can cause harm. But to me, “loss” means loss: I had something, and I no longer have it. I believe that that is how a judge in the England and Wales courts would read the plain words of Article 4(12), and decide that if the legislator had intended “loss” to mean something more than the plain meaning of “loss” – so that it included a meaning of “temporary lack of access to” – then the legislator would have said so.

Quite frankly, I believe the ICO and EDPB guidance are reading into the plain wording of the law a meaning which they would like to see, and they are straining that plain wording beyond what is permissible.

The reason, of course, that this has some importance is that Article 33 of the GDPR/UK GDPR provides that “in the case of” (note the neutral, “passive” language) a PDB, a controller must in general make a notification to the supervisory authority (which, in the UK, is the ICO), and Article 34 provides that where a PDB is likely to result in a high risk to the rights and freedoms of natural persons, those persons should be notified. If a PDB has not occurred, no obligation to make such notifications arises. That does not mean, of course, that notifications cannot be made, through an exercise of discretion (let’s forget for the time being – because they silently resiled from the point – that the ICO once bizarrely and cruelly suggested that unnecessary Article 33 notifications might be a contravention of the GDPR accountability principle).

It might well be that the actions or omissions leading to a PDB would constitute an infringement of Articles 5(1)(f) and 32, but if an incident does not meet the definition in Article 4(12), then it’s not a PDB, and no notification obligation arises. (Note that this is an analysis of the position under the GDPR/UK GDPR – I am not dealing with whether notification obligations to any other regulator arise.)

I can’t pretend I’m wholly comfortable saying to 50% of the data protection community, and to the ICO and EDPB, that they’re wrong on this point, but I’m comfortable that I have a good arguable position, and that it’s one that a judge would, on balance, agree with.

If I’m right, maybe the legislator of the GDPR/UK GDPR missed something, and maybe availability issues should be contained within the Article 4(12) definition. If so, there’s nothing to stop both the UK and the EU legislators amending Article 4(12) accordingly. And if I’m wrong, there’s nothing to stop them amending it to make it clearer. In the UK, in particular, with a new, energised government, a new Minister for Data Protection, and a legislative agenda that will include bills dealing with data issues, this would be relatively straightforward. Let’s see.

And I would not criticise any controller which decided it was appropriate to make an Article 33 notification. It might, on balance, be the prudent thing for some affected controllers to do. The 50/50 split on my poll indicates the level of uncertainty on the part of the profession. One also suspects that the ICO and the EU supervisory authorities might get a lot of precautionary notifications.

Heck, I’ll say it – if anyone wants to instruct me and my firm to advise, both on law and on legal strategy – we would of course be delighted to do so.

The views in this post (and indeed most posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.

Filed under Data Protection, EDPB, GDPR, Information Commissioner, Let's Blame Data Protection, LinkedIn Post, personal data breach, UK GDPR

SRA, data protection and the solicitors roll

In August 2022 the Solicitors Regulation Authority (SRA) announced plans to change its rules and reinstate the annual “keeping of the roll” exercise. Until 2014, all solicitors without practising certificates were required to complete an application each year and pay an administration fee if they wished to remain on the roll. This requirement was dispensed with in 2014 in part because the annual process was seen as burdensome for solicitors.

One of the justifications now for reintroducing the keeping of the roll is given by the SRA as

There are also requirements under the General Data Protection Regulation (GDPR) 2016 [sic] and the seven principles that govern the holding and retention of data. Under GDPR we have responsibility as a data controller to ensure we maintain accurate data relating to individuals and we are processing it fairly and lawfully.

What is slightly odd is that when, in 2014, the SRA proposed to scrap the keeping of the roll, it was not troubled by the observations of the then Information Commissioner about the importance of accuracy and privacy of information. In its reply to the then Commissioner’s consultation response it said that it had “fully considered the issues” and

We consider that the availability of the SRA’s online system, mySRA, to non-practising solicitors as a means of keeping their details up to date, serves to mitigate the possibility of data become inaccurate…To further mitigate the risk of deterioration of the information held on the roll, the SRA can include reminders to keep contact details up to date in standard communications sent to solicitors.

If that was the position in 2014, it is difficult to understand why it is any different today. The data protection principles – including the “accuracy principle” – in the UK GDPR (not in fact the “GDPR 2016” that the SRA refers to) are effectively identical to those in the prior Data Protection Act 1998.

If the SRA was not concerned by data protection considerations in 2014 but is so now, one might argue that it should explain why. The Information Commissioner does not appear to have responded to the consultation this time around, so there is no indication that his views swayed the SRA.

If the SRA was concerned about the risk of administrative fines (potentially larger under the UK GDPR than under the Data Protection Act 1998) it should have reassured itself that any such fines must be proportionate (Article 83(1) UK GDPR) and by the fact that the Commissioner has repeatedly stressed that he is not in the business of handing out fines for minor infringements to otherwise responsible data controllers.

I should emphasise that data protection considerations were not the only ones taken into account by the SRA, and I don’t wish to discuss whether, in the round, the decision to reintroduce the keeping of the roll was correct or not (Joshua Rozenberg has written on this, and the effect on him). But I do feel that the arguments around data protection show a confused approach to that particular issue.

The views in this post (and indeed most posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.

Filed under accuracy, Data Protection, Information Commissioner, Let's Blame Data Protection, UK GDPR

Data protection nonsense on gov.uk

It feels like a while since I randomly picked on some wild online disinformation about data protection, but when you get an itch, you gotta scratch, and this page of government guidance for businesses – “Get your business ready to employ staff: step by step” – specifically on “Personal data an employer can keep about an employee” certainly got me itching. It starts off sensibly enough by saying that

Employers must keep their employees’ personal data safe, secure and up to date.

This is true (Article 5(1)(f) and part of 5(1)(c) UK GDPR). And the page goes on to list some information that can be “kept” (for which I charitably read “processed”) without employees’ permission, such as: name, address, date of birth, sex, education and qualifications, work experience, National Insurance number, tax code, emergency contact details, employment history with the organisation, employment terms and conditions, any accidents connected with work, any training taken, any disciplinary action. All pretty inoffensive, although I’m not sure what it’s trying to achieve. But then…oh my. Then, it says

Employers need their employees’ permission to keep certain types of ‘sensitive’ data

We could stop there really, and snigger cruelly. Consent (aka “permission”) as a condition for processing personal data is complicated and quite frankly to be avoided if possible. It comes laden with quite strict requirements. The Information Commissioner puts it quite well

Consent is appropriate if you can offer people real choice and control over how you use their data, and want to build their trust and engagement. But if you cannot offer a genuine choice, consent is not appropriate. If you would still process the personal data without consent, asking for consent is misleading and inherently unfair…employers and other organisations in a position of power over individuals should avoid relying on consent unless they are confident they can demonstrate it is freely given

And let’s consider the categories of personal data the government page thinks employers should get “permission” to “keep”: race and ethnicity, religion, political membership or opinions, trade union membership, genetics [sic], biometrics, health and medical conditions, sexual history or orientation.

But how quickly would an employer’s wheels grind to a halt if it couldn’t process personal data on an employee’s health “without her permission”? It would be unable to refer her to occupational health if she didn’t “permit” it. It would be unable to keep a record of her sickness absence if she withdrew her consent (consent should be as easy to withdraw as it is to give: see Article 7(3)). During the COVID pandemic, it would have been unable to keep a record of whether she had tested positive or not, if she said she didn’t want a record kept.

It’s nonsense, of course. There’s a whole range of gateways (plus a whole Schedule of the Data Protection Act 2018) which provide conditions for processing special categories of data without having to get someone’s consent. They include pressing social imperatives, like compliance with public health law, and promotion of equality of treatment and safeguarding of children or other vulnerable people. The conditions don’t apply across the board, but the point is that employees’ permission – their consent – is rarely, if ever, required when there is another compelling reason for processing their data.

I don’t really understand what need, what gap, the government page is trying to fill, but the guidance is pretty calamitous. It is only likely to lead to confusion for employers and employees, and risks pitting them against each other, with disputes arising amidst that confusion.

BAH!

Now, that felt better. Like I say, sometimes it’s good to scratch that itch.

The views in this post (and indeed most posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.

Filed under consent, Data Protection, Data Protection Act 2018, Let's Blame Data Protection, UK GDPR

Search and (don’t) destroy

Martin Lewis’s Money Saving Expert (MSE) site reports that over £1m is apparently held by Highways England (HE) in respect of Dartford Crossing pre-paid online accounts (Freedom of Information requests were apparently used to establish the amount). It is of course by no means uncommon for money to lie dormant in accounts – for instance, banks across the world hold fantastic sums which never get claimed. MSE itself suggests elsewhere that the total amount in the UK alone might be around £15bn – but what these FOI requests to HE also revealed is an approach to retention of personal data which may not comply with HE’s legal obligations.

People appear to have received penalty charges after assuming that their pre-paid accounts – in credit when they were last used – would still cover the crossing charge (even where the drivers had been informed that their accounts had been closed for lack of use). MSE reports the case of Richard Riley, who

had been notified by email that his account would be closed, but he’d wrongly assumed it would be reactivated when he next made the crossing (this is only the case if you cross again within 90 days of being notified). On looking into it further, Richard also realised he had £16 in his closed account

However, HE apparently explained to MSE that

…it’s unable to reopen automatically closed accounts or automatically refund account-holders because it has to delete personal data to comply with data protection rules.

This cannot be right. Firstly, as the MSE article goes on to explain, if someone suspects or discovers that they have credit in a closed Dartford Crossing account, they can telephone HE and “any money will be paid back to the debit or credit card which was linked to the account. If this isn’t possible, a refund will be issued by cheque.”

So HE must retain some personal data which enables them to confirm whose money it is that they hold. But if it is true that HE feels that data protection law requires them to delete personal data which would otherwise enable them to refund account-holders when accounts are closed, then I fear that they are misreading two of the key principles of that law.

Article 5(1)(e) of the UK GDPR (the “storage limitation principle”) requires that personal data be “kept in a form which permits identification of data subjects for no longer than is necessary for the purposes for which the personal data are processed” (emphasis added), and Article 5(1)(c) (the “data minimisation principle”) requires that personal data be “limited to what is necessary in relation to the purposes for which they are processed” (emphasis added). Both of these make clear that where personal data is still needed for the purposes for which it is processed, then it can (and should) be retained. And when one adds the point, under Article 5(1)(c), that personal data should also be “adequate” for the purposes for which it is processed, it becomes evident that unnecessary deletion of personal data which causes a detriment or damage to the data subject can in itself be an infringement.

This matter is, of course, on a much lower level of seriousness than, for instance, the unnecessary destruction of landing cards of members of the Windrush Generation, or recordings of witnesses in the Irish Mother and Baby Homes inquiry, but it strikes me that it is – in general – a subject that is crying out for guidance (and where necessary enforcement) by the Information Commissioner. Too many people feel, it seems, that “data protection” means they have to delete, or erase or destroy personal data.

Sometimes, that is the worst thing to do.

The views in this post (and indeed most posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.

Filed under accuracy, adequacy, Data Protection, Information Commissioner, Let's Blame Data Protection, UK GDPR

Virgin on the ridiculous

UPDATE 15.12.14: I think the comments on this piece take it further, and I do accept (as I did at the time, in fact) that the “password” in question was not likely to relate to customers’ accounts.
END UPDATE.

I got into a rather odd exchange over the weekend with the people running the Virgin Media twitter account. It began when, as is my wont, I was searching for tweets about “data protection” and noticed an exchange in which someone had asked Virgin Media whether their sales people rang customers and asked them to give their passwords. Virgin Media kindly appeared to confirm they did, and that

it’s for security as we can’t make any changes without data protection being passed

I asked for clarification, and this exchange ensued

[ME] Is it true your sales people call customers and ask for their account passwords? If so, are these unsolicited calls?

[VM] Yes this is true, our sales team would call and before entering your account, would need you to pass account security. I understand for your own security purposes why you wouldn’t feel great doing this, i’d be the same. If you give us a call on 150/03454541111 we can get this cleared up. Let me know how you get on

[ME] Thanks. Not a customer. Just interested in what seems like questionable practice being defended under guise of data protection

[VM] We contact our customers if there upgrade is due, or for a heath check on accounts, and a few other instances, but I get where your coming from [sic]

There’s nothing unlawful about this practice, and I assume that the accounts in question are service and not financial ones, but it doesn’t accord with normal industry practice. Moreover, one is warned often enough about the risks of phishing calls asking for account passwords. If a legitimate company requires or encourages its sales staff to do this, it adds to a culture of unnecessary risk. There are better ways of verifying identity, as their social media person seems to accept, when they say “I understand for your own security purposes why you wouldn’t feel great doing this, i’d be the same”.

One thing I’m certain about, though, is that it isn’t any part of “passing data protection” (unless they mean bypassing) to make outbound calls and ask for customer passwords.

On a final note, and in admiration of bare-faced cheek, I highlight the end of my exchange with Virgin Media

If you want, as your not a customer, you can check out our brill offers here [removed] maybe we could save you a few pounds?

That’s an offer I most certainly can refuse.

(By the way, as it’s an official Virgin Media account, I’ve taken what I was told on Twitter at face value. If I have misunderstood any of their policies on this I’d be happy to correct).

UPDATE:

Virgin Media’s Twitter account appears to have confirmed to me a) that they do ask for customers’ passwords on outbound sales calls, and b) that they see nothing wrong with it. And rather hilariously, they say that “we can discuss further” if I will “pop a few details” on their web form for social media enquiries. No thanks.

Filed under Data Protection, Let's Blame Data Protection, marketing, nuisance calls, PECR, social media

The Moanliness of the Long-distance Runner

Another in the Let’s Blame Data Protection series, in which I waste a lot of energy on something not really worth the effort

The Bournemouth Daily Echo reports that

Hundreds of disgruntled runners who took part in the inaugural Bournemouth Marathon Festival have accused event organisers of withholding information by failing to provide full race results.

and, with rather dull predictability, there’s a familiar apparent culprit

GSi Events Ltd, the team behind the BMF, has published the top ten runners in the various age categories, but is refusing to publish all the results on the grounds of data protection.

But does data protection law really prevent publication of this sort of information? The answer, I think, is “no”, and the reason for this is tied to issues of fairness and consent.

The first data protection principle, in Schedule One of the Data Protection Act 1998 (DPA), says that personal data (broadly, information relating to an identifiable individual) must be “processed” (publication is one form of processing) fairly and lawfully.

The concept of fairness is not an easy one to grasp or define, but helpfully the DPA provides a gloss on it, which, to paraphrase, is that if people are properly informed about how their data is going to be processed (who is doing the processing, and for what purpose) then a key element of “fairness” is met. The Information Commissioner’s Privacy Notices Code of Practice explains

A privacy notice should be genuinely informative. Properly and thoughtfully drawn up, it can make your organisation more transparent and should reassure people that they can trust you with their personal information

The first data protection principle goes on to say that (in particular) personal data shall not be processed unless at least one of the conditions in Schedule 2 of the Act is met (and Schedule 3, in the case of higher-category sensitive personal data). One of those conditions is

The data subject has given his consent to the processing.

“Consent” is not defined in the DPA, but it is given a definition in the EC Data Protection Directive, to which the DPA gives domestic effect. The Directive says that consent

shall mean any freely given specific and informed indication of his wishes by which the data subject signifies his agreement to personal data relating to him being processed

“Specific” and “signifies” are generally taken to mean that implied consent is not valid in this context (although the practice of implying consent to processing is widespread). Nonetheless, it seems clear that, with a privacy notice, sensibly drafted, the organisers of the Bournemouth Marathon could easily have said to those registering to race “your race result/time will be published, unless you object”. When one looks at the actual privacy notice, however, such a term is absent.

I suppose that means one could argue that, under the current privacy notice, publishing the race details would be in breach of the DPA. I suppose I could also construct a counter-argument to that to the effect that publication is necessary in pursuance of legitimate interests of the race organisers (for instance to show that it was a real flipping race) when balanced against the legitimate interests of the racers.

But ultimately, come on, it’s just silly to blame data protection: the vast, vast majority of people take part in a marathon knowing that it’s a public event, where they’ll gather plaudits or attract ridicule. Any expectation of privacy of race results is effectively non-existent.

Publish the damn race results, take the infinitesimal risk of someone complaining (a complaint which no one, i.e. the Information Commissioner and the courts, will take seriously or be able to offer a remedy to) and sort your privacy notice out for next year.

Filed under Data Protection, Let's Blame Data Protection

Let’s Blame Data Protection – the Gove files

Thanks to Tim Turner, for letting me blog about the FOI request he made which gives rise to this piece

On the 12th September the Education Secretary, Michael Gove, in an op-ed piece in the Telegraph, sub-headed “No longer will the quality, policies and location of care homes be kept a secret” said

A year ago, when the first shocking cases of sexual exploitation in Rochdale were prosecuted, we set up expert groups to help us understand what we might do better…Was cost a factor? Did we need to spend more? There was a lack of clarity about costs. And – most worrying of all – there was a lack of the most basic information about where these homes existed, who was responsible for them, and how good they were….To my astonishment, when I tried to find out more, I was met with a wall of silence

And he was in no doubt about where the blame lay (no guesses…)

The only responsible body with the information we needed was Ofsted, which registers children’s homes – yet Ofsted was prevented by “data protection” rules, “child protection” concerns and other bewildering regulations from sharing that data with us, or even with the police. Local authorities could only access information via a complex and time-consuming application process – and some simply did not bother…[so] we changed the absurd rules that prevented information being shared

This seemed a bit odd. Why on earth would “data protection” rules prevent disclosure of location, ownership and standards of children’s homes? I could understand that there were potentially child protection concerns in the too-broad-sharing of information about locations (and I don’t find that “bewildering”) but data protection rules, as laid out in the Data Protection Act 1998 (DPA), only apply to information relating to identifiable individuals. Tim Turner took it upon himself to delve deeper. He made a freedom of information request to the Department for Education, asking

1) Which ‘absurd’ rules was Mr. Gove referring to in the first statement?

2) What changes were made that Mr. Gove referred to in the second statement?

3) Mr Gove referred to ‘Data Protection’ rules. As part of the process that he is describing, has any problem been identified with the Data Protection Act?

Fair play to the DfE – they responded within the statutory timescales, explaining

Regulation 7(5) of the Care Standards Act 2000 (Registration) (England) Regulations 2010 …prohibited Ofsted from disclosing parts of its register of children’s homes to any body other than to a local authority where a home is located. Whatever the original intention behind this limitation, it represented a barrier preventing Ofsted from providing information about homes’ locations to local police forces, which have explicit responsibilities for safeguarding all children in their area…we introduced an amendment to Regulation 7 with effect from April 2013

But their response also revealed what had been very obvious all along: this had nothing to do with data protection rules:

the reference to “data protection” rules in Mr Gove’s article involved the Regulations discussed above, made under section 36 of the Care Standards Act 2000. His comments were not intended as a reference to the Data Protection Act 1998

This is disingenuous: “data protection” has a very clear and statutory context, and to extend it to more broadly mean “information sharing” is misleading and pointless. One could perhaps understand it if Gove had said this in an oral interview, but his piece will have been checked carefully before publication, and personally I am in no doubt that blaming data protection has a political dimension. The government is determined, for some right reasons, and some wrong ones, to make the sharing of public sector data easier, and data protection does, sometimes – and rightly – present an obstacle to this, when the data in question is personal data and the sharing is potentially unfair or unlawful. Anything which associates “data protection” with a risk to child safety serves to represent it as bureaucratic and dangerous, and serves the government agenda.

And the rather delicious irony of all this – as pointed out on twitter by Rich Greenhill – is that the “absurd rules” (the Care Standards Act 2000 (Registration) (England) Regulations 2010) criticised by Gove were made on 24 August 2010. And the Secretary of State who made these absurd rules was, of course, the Right Honourable Michael Gove MP.

How absurd.

Filed under Data Protection, data sharing, Freedom of Information, Let's Blame Data Protection, transparency

Let’s blame Data Protection: Part Two

“The leader of the council wishes to make the names of the debtors public, but the Data Protection Act of 1998 prohibits their publication.”

So says an article from the Blackpool Gazette, when quoting a council report (which I haven’t yet been able to find) which appears to have indicated that

The council has been forced to write off £1.68m in owed business rates going back around the last six years

The council leader is reported to have said

Several names appear more than once, owing vast sums of money to the council…Several high-profile business owners, who always seemed to have a lot to say about how the town is run, seem to have no qualms about disappearing owing us tens of thousands of pounds…We are very dogged and tenacious when it comes to pursuing debtors, and clearly need to continue to be.

but

What I do find very frustrating is that I am not able to publish the names of these people

This puzzles me: names of businesses will not, as a general rule, constitute personal data under section 1(1) of the Data Protection Act 1998 (DPA), which defines personal data as

data which relate to a living individual who can be identified—
(a) from those data, or
(b) from those data and other information which is in the possession of, or is likely to come into the possession of, the data controller

Even if individuals can be identified from disclosure of the names of defaulting businesses, it is perhaps the case that the information will not be considered to be personal data, especially following the precedent of the Court of Appeal in Durant, where it was held that, for information to be personal data, it

should have the putative data subject as its focus rather than…some transaction or event in which he may have figured or have had an interest

It is interesting to note that the Information Commissioner’s Office (ICO), in guidance which appears to have been withdrawn, said

Information about people who run businesses, and the businesses they run, will often be covered by the Act. This is because information about a person’s business, activities, possessions, and so on is generally personal information about that person

although, in a rather circular argument,

Business information that does not identify individuals is not covered by the Act

What I think is being got at is that, for example, information consisting of “Richard Hannay is a fifty-year-old black man who runs Imaging Solutions Ltd, which made a £1.2m profit last year” is potentially Richard Hannay’s personal data throughout, whereas “Imaging Solutions Ltd made a £1.2m profit” is unlikely to be Hannay’s personal data when considered in isolation, even though one can easily find out that he is the sole director.

In another, more specific scenario, it might be more easily argued that the names of businesses are personal data. This is where someone is conducting business as a sole trader. The ICO’s apparently withdrawn guidance said

Information about a sole trader’s business will be personal information about him

I’m not sure I would be so unequivocal, but as a general proposition it’s not objectionable.

However, even if business information is personal data, the DPA does not necessarily prevent disclosure of it. In fact, the DPA permits disclosure of any and all types of personal data, as long as it is in compliance with the Act. In short, if disclosure is fair and lawful and relevant provisions permit it, then it will be in compliance with the Act. And, helpfully for the council, there is a specific provision relating to personal data “processed for…the assessment or collection of any tax or duty”. This exemption permits disclosure where not disclosing would be likely to prejudice the collection of the tax in question. Additionally, the sixth condition of Schedule 2 of the DPA provides that, if it is “necessary for the purposes of legitimate interests pursued by the data controller”, personal data may be processed, provided the processing is not “unwarranted…by reason of prejudice to the rights and freedoms or legitimate interests of the data subject”.

This will not give carte blanche to disclosure of personal data (if personal data it is) of owners of defaulting businesses, but it is certainly arguable in this instance that disclosure would assist the collection of the tax (and, therefore, non-disclosure could prejudice it), and that the balancing exercise required by the sixth Schedule 2 condition would fall in favour of disclosure.

So, a) I doubt that the withheld information is personal data, and, even if it is, b) disclosure would be in compliance with the DPA.

One thing is certain: the DPA does not prohibit publication of this information, and, to the extent that it might be engaged, I would not see it as a barrier to disclosure. It might even help the council in its aim to be “dogged and tenacious when it comes to pursuing debtors”.
But it’s so much easier to blame Data Protection.

Filed under Data Protection, Let's Blame Data Protection

An unshared perspective

Paul Gibbons, FOI Man, has blogged about data-sharing, questioning whether an over-cautious approach to sharing of health data is damaging. Paul says

What I’m increasingly worried about is what appears to be a widely held and instinctive view that any sharing of personal data – and even data that has been anonymised – is necessarily a “bad thing”.

I’ve got to say, in all the time I’ve worked in the field of information rights I’ve never come across anyone who actually thinks that, let alone articulates it (in my experience the only people who say it are those who seek to misrepresent it). The Data Protection Act 1998 (DPA) and Directive 95/46/EC, to which it gives effect, do not act as a default bar to sharing of data. There may be circumstances under which compliance with the law means that sharing of personal data cannot happen, but the converse is true – there will be times when sharing is lawful, necessary and proportionate.

Paul’s prime example of what he sees as (to adopt the title of his piece) “a disproportionate fear of ‘Big Brother’ preventing us from seeing the big picture” is the “predictable outcry” about the care:data programme, whereby the Health and Social Care Information Centre will, through the exercise of certain provisions in the Health and Social Care Act 2012, extract enormous amounts of health and social care information from local systems to centralised ones. The first step in this is the GP Extraction Service (GPES), whereby information relating to medical conditions, treatments and diagnoses, together with each patient’s NHS number, date of birth, postcode, gender, ethnicity and other information, will be uploaded routinely. The information will then be made available to a range of organisations, sometimes including private companies, sometimes in ostensibly anonymised form, sometimes in identifiable form, for a variety of purposes. This will happen to your medical records unless you opt out (and if you think you’ve already done so, you probably haven’t – those who objected to the creation of a summary care record will have to go through another opt-out process). And this week we were informed that there will be no national campaign to alert patients to the GPES – the responsibility (and liability) will lie with GP practices themselves. (Anyone wanting to understand this complex and less-than-transparent process must read and follow the superb MedConfidential.)

I accept that, on one view, this amassing of health and social care data could be seen as a good thing: as Paul suggests, medical research, for instance, is a hugely important area. And the NHS Commissioning Board identifies the following desired outcomes from care:data

– support patients’ choice of service provider and treatment by making comparative data publicly available
– advance customer services, with confidence that services are planned around the patient
– promote greater transparency, for instance in support of local service planning
– improve outcomes, by monitoring against the Outcomes Frameworks
– increase accountability in the health service by making data more widely available
– drive economic growth through the effective use of linked data

But how realistic are these? And what are the attendant risks or detriments? Paul says

central medical records for all NHS patients…would mean that when you turned up at a hospital far from home, as I have done myself, doctors would have access to your medical records and history. Believe me, when you are in pain and desperate to be treated, the last thing that you want to do is to answer questions about your medical history

With great respect, the ideal of a centralised system whereby medics can provide emergency treatment to patients by accessing electronic records is never going to be more than a myth. Put another way – would Paul be happy trusting his life to the accuracy of an electronic record that might or might not say, for instance, whether he is allergic to aspirin? Treatment of patients is a matter of diagnosis, and emergency diagnoses will never be made solely, if at all, on the basis of records.

Security of information, and risks of identification of individuals, are other key concerns. Paul says Daniel Barth-Jones identifies “deficiencies in [reidentification] studies”, but I think what Barth-Jones is actually arguing is that the risks of reidentification are real, and that they must be accurately reported and balanced against the likelihood of their happening.

But ultimately I have two major conceptual concerns about care:data and what it implies. The first is that, yes, I am instinctively distrustful of the agglomeration of sensitive personal data in identifiable form in mass processing systems: history has taught us to be this way, so I don’t see this, as Paul appears to, as a “fashionable” mistrust (and, for instance, the Joseph Rowntree Foundation’s exemplary Database State report is now over six years old). The second is that patient-medic confidentiality exists, and has existed for a very long time, for a reason: if patients are not certain that their intimate medical details are confidential, they might be reluctant to speak candidly to their doctor. In fact, they might not even visit their doctor at all.

Filed under Confidentiality, Data Protection, data sharing, human rights, Let's Blame Data Protection