Category Archives: Directive 95/46/EC

The wrong test for anonymisation?

UPDATE: 23.01.15 The ICO has responded [.doc file] to my request for a review of their decision. I drew their attention to the arguments on this page but they don’t even mention them, let alone provide a counter-analysis, in dismissing my complaints (“Having reviewed the matter, I agree with the explanations provided”). I am invited by the ICO to consider taking my own legal action. I understand that the ICO and I might have differing views on a DPA matter, but what I find difficult to accept is the refusal even to enter into a discussion with me about the detailed arguments I’ve made. END UPDATE

In February this year I asked the Information Commissioner’s Office (ICO) to investigate reports that Hospital Episode Statistics (HES) data had apparently been sold to an actuarial society by the NHS Information Centre (NHSIC), the predecessor to the Health and Social Care Information Centre (HSCIC). Specifically I requested, as a data subject can under s42 of the Data Protection Act 1998 (DPA), that the ICO assess whether it was likely or not that the processing of my personal data by NHSIC and others had been in compliance with the DPA.

Nine months later, I was still awaiting the outcome. But a clue to how the assessment would turn out was contained in the text of Sir Nick Partridge’s six month review of various data releases by NHSIC (his original report in June seemed to me to point to multiple potential DPA contraventions). In the review document he says

Six investigations have been separately instigated by the HSCIC or Information Commissioner’s Office (ICO) and shared with both parties as these focussed on whether individuals were at risk of being identified. In the cases it has investigated, the ICO has upheld the HSCIC approach and informed us that it has “seen no evidence to suggest that re-identification has occurred or is reasonably likely to occur.”
And sure enough, after chasing the ICO for the outcome of my nine-month wait, I received this (in oddly formatted text, which rather whiffed of a lot of cutting-and-pasting)
Following the recent issue regarding HSCIC, PA Consulting, and Google we investigated the issue of whether HES data could be considered personal data. This detailed work involved contacting HSCIC, PA Consulting, and Google and included the analysis of the processes for the extraction and disclosure of HES data both generally and in that case in particular. We concluded that we did not consider that the HES dataset constitutes personal data. Furthermore we also investigated whether this information had been linked to other data to produce “personal data” which was subject to the provisions of the Act. We have no evidence that there has been any re-identification either on the part of PA Consulting or Google. We also noted that HSCIC have stated that the HES dataset does not include individual level patient data even at a pseudonymised level. Our view is that the data extracted and provided to PA Consulting did not identify any individuals and there was no reasonable likelihood that re-identification would be possible.
I have added the emphasis to the words “reasonable likelihood” above. They appear in similar terms in the Partridge Review, and they struck me as rather odd. An awful lot of analysis has taken and continues to take place on the subject of when personal data can be “rendered fully anonymous in the sense that it is information from which the data subject is no longer identifiable” (Lord Hope’s dicta in Common Services Agency v Scottish Information Commissioner [2008] UKHL 47). Some of that analysis has been academic, some takes the form of “soft law” guidance, for instance Opinion 05/2014 of the Article 29 Working Party, and the ICO Anonymisation Code of Practice. The former draws on the Data Protection Directive 95/46/EC, and notes that

Recital 26 signifies that to anonymise any data, the data must be stripped of sufficient elements such that the data subject can no longer be identified. More precisely, that data must be processed in such a way that it can no longer be used to identify a natural person by using “all the means likely reasonably to be used”

Anonymisation has also been subject to judicial analysis, notably in the Common Services Agency case, but, even more importantly, in the judgment of Mr Justice Cranston in Department of Health v Information Commissioner ([2011] EWHC 1430). The latter case, involving the question of disclosure of late-term abortion statistics, is by no means an easy judgment to parse (ironically so, given that it makes roughly the same observation about the Common Services Agency case). The judge held that the First-tier Tribunal had been wrong to say that the statistics in question were personal data, but that it had on the evidence been entitled to say that “the possibility of identification by a third party from these statistics was extremely remote”. The fact that the possibility of identification by a third party was extremely remote meant that “the requested statistics were fully anonymised” (¶55). I draw from this that for personal data to be anonymised in statistical format the possibility of identification of individuals by a third party must be extremely remote. The ICO’s Anonymisation Code, however, says of the case:

The High Court in the Department of Health case above stated that the risk of identification must be greater than remote and reasonably likely for information to be classed as personal data under the DPA [emphasis added]

But this seems to me to be an impermissible description of the case – the High Court did not state what the ICO says it stated – the phrases “greater than remote” and “reasonably likely” do not appear in the judgment. And that phrase “reasonably likely” is one that, as I say, makes its way into the Partridge Review, and the ICO’s assessment of the lawfulness of the HES data “sale”.

I begin to wonder if the ICO has taken the phrase from recital 26 of the Directive, which talks about the need to consider “all the means likely reasonably to be used” to identify an individual, and transformed it into a position from which, if identification is not reasonably likely, it will accept that data are anonymised. This cannot be right: there is a world of difference between a test which asks whether the possibility of identification is “extremely remote” and one which asks whether it is “reasonably likely”.
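
It is worth being concrete about what re-identification involves, because it shows why the gap between those two tests matters. The Python sketch below is purely illustrative – the field names and values are my own invention, and bear no relation to the actual structure of HES extracts – but it shows that where quasi-identifiers (dates, locations, ages) survive in a pseudonymised dataset, re-identification is nothing more exotic than a join against information someone already holds:

```python
# A minimal, hypothetical sketch of a "linkage attack" on pseudonymised
# episode-level data. All field names and values are invented for
# illustration; this is not the structure of any real HES extract.

# Pseudonymised extract: direct identifiers removed, quasi-identifiers kept
episodes = [
    {"pseudo_id": "a91f", "postcode_district": "SW1A",
     "birth_year": 1954, "admitted": "2012-03-07"},
    {"pseudo_id": "c22d", "postcode_district": "LS6",
     "birth_year": 1988, "admitted": "2012-05-19"},
]

# "Other information" an attacker might hold about a known individual
# (a neighbour, a colleague, a detail in a local news report)
known = {"name": "X", "postcode_district": "SW1A",
         "birth_year": 1954, "admitted": "2012-03-07"}

# Re-identification is simply a join on the surviving quasi-identifiers
matches = [e for e in episodes
           if e["postcode_district"] == known["postcode_district"]
           and e["birth_year"] == known["birth_year"]
           and e["admitted"] == known["admitted"]]

if len(matches) == 1:
    # A unique match ties the pseudonym back to a named individual
    print(f"{known['name']} is probably record {matches[0]['pseudo_id']}")
```

Whether such a match is “reasonably likely” or “extremely remote” depends entirely on how much auxiliary information is plausibly available to a third party – which is precisely why the choice of test is not a semantic quibble.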

I do not have a specific right to a review of the section 42 assessment decision that the processing of my personal data was likely in compliance with NHSIC’s obligations under the DPA, but I have asked for one. I am aware of course that others complained (après moi, le déluge) notably, in March, FIPR, MedConfidential and Big Brother Watch. I suspect they will also be pursuing this.

In October this year I attended an event at which the ICO’s Iain Bourne spoke. Iain was a key figure in the drawing up of the ICO’s Anonymisation Code, and I took the rather cheeky opportunity to ask about the HES investigations. He said that his initial view was that NHSIC had been performing good anonymisation practice. This reassured me at the time, but now, after considering this question of whether the Anonymisation Code (and the ICO) adopts the wrong test on the risks of identification, I am less reassured. Maybe “reasonably likely that an individual can be identified” is an appropriate test for determining when data is no longer anonymised, and becomes personal data, but it does not seem to me that the authorities support it.

Postscript: Back in August of this year I alerted the ICO to the fact that a local authority had published open data sets which enabled individuals to be identified (for instance, social care and housing clients). More than four months later the data is still up (despite the ICO saying they would raise the issue with the council): is this perhaps because the council has argued that the risk of identification is not “reasonably likely”?

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.

3 Comments

Filed under anonymisation, care.data, Data Protection, Directive 95/46/EC, Information Commissioner, NHS

Russell Brand and the domestic purposes exemption in the Data Protection Act

Was a now-deleted tweet by Russell Brand, revealing a journalist’s private number, caught by data protection law?

Data protection law applies to anyone who “processes” (which includes “disclosure…by transmission”) “personal data” (data relating to an identifiable living individual) as a “data controller” (the person who determines the purposes for which and the manner in which the processing occurs). Rather dramatically, in strict terms, this means that most individuals actually and regularly process personal data as data controllers. And nearly everyone would be caught by the obligations under the Data Protection Act 1998 (DPA), were it not for the exemption at section 36. This provides that

Personal data processed by an individual only for the purposes of that individual’s personal, family or household affairs (including recreational purposes) are exempt from the data protection principles and the provisions of Parts II and III

Data protection nerds will spot that exemption from the data protection principles and Parts II and III of the DPA is effectively an exemption from the whole Act. So in general terms individuals who restrict their processing of personal data to domestic purposes are outwith the DPA’s ambit.

The extent of this exemption in terms of publication of information on the internet is subject to some disagreement. On one side is the Information Commissioner’s Office (ICO), which says in its guidance that it applies when an individual uses an online forum purely for domestic purposes; on the other side is the Court of Justice of the European Union (and me), which said in the 2003 Lindqvist case that

The act of referring, on an internet page, to various persons and identifying them by name or by other means, for instance by giving their telephone number…constitutes ‘the processing of personal data…[and] is not covered by any of the exceptions…in Article 3(2) of Directive 95/46 [section 36 of the DPA transposes Article 3(2) into domestic law]

Either way, it is clear that publishing personal data on the internet for reasons not purely domestic constitutes an act of processing to which the DPA applies (let us assume that the act of publishing was a deliberate one, determined by the publisher). So when the comedian Russell Brand today decided to tweet a picture of a journalist’s business card, with an arrow pointing towards the journalist’s mobile phone number (which was not, for what it’s worth, already in the public domain – I checked with a Google search) he was processing that journalist’s personal data (note that data relating to an individual’s business life is still their personal data). Can he avail himself of the DPA domestic purposes exemption? No, says the CJEU, of course, following Lindqvist. But no, also, would surely say the ICO: this act by Brand was not purely domestic. Brand has 8.7 million Twitter followers – I have no doubt that some will have taken the tweet as an invitation to call the journalist. It is quite possible that some of those calls will be offensive, or abusive, or even threatening.

Whilst I have been drafting this blog post Brand has deleted the tweet: that is to his credit. But of course, when you have so many millions of followers, the damage is already done – the picture is saved to hard drives, is mirrored by other sites, is emailed around. And, I am sure, the journalist will have to change his number, and maybe not much harm will have been caused, but the tweet was nasty, and unfair (although I have no doubt Brand was provoked in some way). If it was unfair (and lacking a legal basis for the publication) it was in contravention of the first data protection principle which requires that personal data be processed fairly and lawfully and with an appropriate legitimating condition. And because – as I submit – Brand cannot plead the domestic purposes exemption, it was in contravention of the DPA. However, whether the journalist will take any private action, and whether the ICO will take any enforcement action, I doubt.

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.

2 Comments

Filed under Data Protection, Directive 95/46/EC, Information Commissioner, journalism, social media

The Crown Estate and behavioural advertising

A new app for Regent Street shoppers will deliver targeted behavioural advertising – is it processing personal data?

My interest was piqued by a story in the Telegraph that

Regent Street is set to become the first shopping street in Europe to pioneer a mobile phone app which delivers personalised content to shoppers during their visit

Although this sounds like my idea of hell, it will no doubt appeal to some people. It appears that a series of Bluetooth beacons will deliver mobile content (for which, read “targeted behavioural advertising”) to the devices of users who have installed the Regent Street app. Users will indicate their shopping preferences, and a profile of them will be built by the app.

Electronic direct marketing in the UK is ordinarily subject to compliance with The Privacy and Electronic Communications (EC Directive) Regulations 2003 (“PECR”). However, the definition of “electronic mail” in PECR is “any text, voice, sound or image message sent over a public electronic communications network or in the recipient’s terminal equipment until it is collected by the recipient and includes messages sent using a short message service”. In 2007 the Information Commissioner, upon receipt of advice, changed his previous stance that Bluetooth marketing would be caught by PECR, to one under which it would not be caught, because Bluetooth does not involve a “public electronic communications network”. Nonetheless, general data protection law relating to consent to direct marketing will still apply, and the Direct Marketing Association says

Although Bluetooth is not considered to fall within the definition of electronic mail under the current PECR, in practice you should consider it to fall within the definition and obtain positive consent before using it

This reference to “positive consent” reflects the definition of consent in the Data Protection Directive, which says that it is

any freely given specific and informed indication of his wishes by which the data subject signifies his agreement to personal data relating to him being processed

And that word “informed” is where I start to have a possible problem with this app. Ever one for thoroughness, I decided to download it, to see what sort of privacy information it provided. There wasn’t much, but in the Terms and Conditions (which don’t appear to be viewable until you download the app) it did say

The App will create a profile for you, known as an autoGraph™, based on information provided by you using the App. You will not be asked for any personal information (such as an email address or phone number) and your profile will not be shared with third parties

autograph (don’t forget the™) is software which, in its words “lets people realise their interests, helping marketers drive response rates”, and it does so by profiling its users

In under one minute without knowing your name, email address or any personally identifiable information, autograph can figure out 5500 dimensions about you – age, income, likes and dislikes – at over 90% accuracy, allowing businesses to serve what matters to you – offers, programs, music… almost anything

Privacy types might notice the jarring words in that blurb. Apparently the software can quickly “figure out” thousands of potential identifiers about a user, without knowing “any personally identifiable information”. To me, that’s effectively saying “we will create a personally identifiable profile of you, without using any personally identifiable information”. The fact of the matter is that people’s likes, dislikes, preferences, choices etc (and does this app capture device information, such as IMEI?) can all be used to build up a picture which renders them identifiable. It is trite law that “personal data” is data which relate to a living individual who can be identified from those data or from those data and other information which is in the possession of, or is likely to come into the possession of, the data controller. The Article 29 Working Party (made up of representatives from the data protection authorities of each EU member state) delivered an Opinion in 2010 on online behavioural advertising which stated that

behavioural advertising is based on the use of identifiers that enable the creation of very detailed user profiles which, in most cases, will be deemed personal data
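
The Working Party’s point is easy to demonstrate arithmetically. The Python sketch below is illustrative only – the dimensions and their sizes are invented by me, and I have no knowledge of autoGraph’s actual internals – but it shows that even eight ten-valued dimensions (a tiny fraction of the claimed 5500) are enough to give almost every user in a population of a million a combination shared by nobody else:

```python
# Illustrative sketch: how quickly "non-identifying" profile dimensions make
# users unique. The dimensions and sizes are invented, not autoGraph's.
import random
from collections import Counter

random.seed(0)
dimension_sizes = [10] * 8   # e.g. age band, income band, six taste/habit scales
population = 1_000_000

# Give each simulated user a random value on each dimension and count how
# often every full combination occurs across the population
counts = Counter(
    tuple(random.randrange(size) for size in dimension_sizes)
    for _ in range(population)
)
unique_users = sum(1 for c in counts.values() if c == 1)

print(f"Possible combinations: {10 ** 8:,}")
print(f"Users whose full profile is shared with nobody else: "
      f"{unique_users:,} of {population:,}")
# On a run like this, roughly 99% of the million users have a one-of-a-kind
# profile; a unique profile acts as an identifier as soon as it is combined
# with anything else (a device ID, a location, a purchase).
```

Real profile dimensions are correlated, which reduces their distinguishing power somewhat, but with thousands of dimensions rather than eight the effect is overwhelming.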

If this app is, indeed, processing personal data, then I would suggest that the limited Terms and Conditions (which users are not even pointed to when they download the app, let alone invited to agree to them) are inadequate to mean that a user is freely giving specific and informed consent to the processing. And if the app is processing personal data to deliver electronic marketing, failure to comply with PECR might not matter, but failure to comply with the Data Protection Act 1998 brings potential liability to legal claims and enforcement action.

The Information Commissioner last year produced good guidance on Privacy in Mobile Apps which states that

Users of your app must be properly informed about what will happen to their personal data if they install and use the app. This is part of Principle 1 in the DPA which states that “Personal data shall be processed fairly and lawfully”. For processing to be fair, the user must have suitable information about the processing and they must be told about the purposes

The relevant data controller for Regent Street Online happens to be The Crown Estate. On the day that the Queen sent her first tweet, it is interesting to consider the extent to which her own property company are in compliance with their obligations under privacy laws.

This post has been edited as a result of comments on the original, which highlighted that PECR does not, in strict terms, apply to Bluetooth marketing

4 Comments

Filed under consent, Data Protection, Directive 95/46/EC, Information Commissioner, marketing, PECR, Privacy, tracking

Dancing to the beat of the Google drum

With rather wearying predictability, certain parts of the media are in uproar about the removal by Google of search results linking to a positive article about a young artist. Roy Greenslade, in the Guardian, writes

The Worcester News has been the victim of one of the more bizarre examples of the European court’s so-called “right to be forgotten” ruling.

The paper was told by Google that it was removing from its search archive an article in praise of a young artist.

Yes, you read that correctly. A positive story published five years ago about Dan Roach, who was then on the verge of gaining a degree in fine art, had to be taken down.

Although no one knows who made the request to Google, it is presumed to be the artist himself, as he had previously asked the paper itself to remove the piece, on the basis that he felt it didn’t reflect the work he is producing now. But there is a bigger story here, and in my opinion it’s one of Google selling itself as an unwilling censor, and of the media uncritically buying it.

Firstly, Google had no obligation to remove the results. The judgment of the Court of Justice of the European Union (CJEU) in the Google Spain case was controversial, and problematic, but its effect was certainly not to oblige a search engine to respond to a takedown request without considering whether it has a legal obligation to do so. What it did say was that, although as a rule data subjects’ rights to removal override the interest of the general public in having access to the information delivered by a search query, there may be particular reasons why the balance might go the other way.

Furthermore, even if the artist here had a legitimate complaint that the results constituted his personal data, and that the continued processing by Google was inadequate, inaccurate, excessive or continuing for longer than was necessary (none of which, I would submit, would actually be likely to apply in this case), Google could simply refuse to comply with the takedown request. At that point, the requester would be left with two options: sue, or complain to the Information Commissioner’s Office (ICO). The former option is an interesting one (and I wonder if any such small claims cases will be brought in the County Court) but I think in the majority of cases people will be likely to take the latter. However, if the ICO receives a complaint, it appears that the first thing it is likely to do is refer the person to the publisher of the information in question. In a blog post in August the Deputy Commissioner David Smith said

We’re about to update our website* with advice on when an individual should complain to us, what they need to tell us and how, in some cases, they might be better off pursuing their complaint with the original publisher and not just the search engine [emphasis added]

This is in line with their new approach to handling complaints by data subjects – which is effectively telling them to go off and resolve it with the data controller in the first instance.

Even if the complaint does make its way to an ICO case officer, what that officer will be doing is assessing – pursuant to section 42 of the Data Protection Act 1998 (DPA) – “whether it is likely or unlikely that the processing has been or is being carried out in compliance with the provisions of [the DPA]”. What the ICO is not doing is determining an appeal. An assessment of “compliance not likely” is no more than that – it does not oblige the data controller to take action (although it may be accompanied by recommendations). An assessment of “compliance likely”, moreover, leaves an aggrieved data subject with no other option but to attempt to sue the data controller. Contrary to what Information Commissioner Christopher Graham said at the recent Rewriting History debate, there is no right of appeal to the Information Tribunal in these circumstances.

Of course the ICO could, in addition to making a “compliance not likely” assessment, serve Google with an enforcement notice under section 40 DPA requiring them to remove the results. An enforcement notice does have proper legal force, and it is a criminal offence not to comply with one. But they are rare creatures. If the ICO does ever serve one on Google things will get interesting, but let’s not hold our breath.

So, simply refusing to take down the results would, certainly in the short term, cause Google no trouble, nor attract any sanction.

Secondly (sorry, that was a long “firstly”) Google appear to have notified the paper of the takedown, in the same way they notified various journalists of takedowns of their pieces back in June this year (with, again, the predictable result that the journalists were outraged, and republicised the apparently taken down information). The ICO has identified that this practice by Google may in itself constitute unfair and unlawful processing: David Smith says

We can certainly see an argument for informing publishers that a link to their content has been taken down. However, in some cases, informing the publisher has led to the complained about information being republished, while in other cases results that are taken down will link to content that is far from legitimate – for example to hate sites of various sorts. In cases like that we can see why informing the content publisher could exacerbate an already difficult situation and could in itself have a very detrimental effect on the complainant’s privacy

Google is a huge and hugely rich organisation. It appears to be trying to chip away at the CJEU judgment by making it look ridiculous. And in doing so it is cleverly using the media to help portray it as a passive actor – victim, along with the media, of censorship. As I’ve written previously, Google is anything but passive – it has algorithms which prioritise certain results above others, for commercial reasons, and it will readily remove search results upon receipt of claims that the links are to copyright material. Those elements of the media who are expressing outrage at the spurious removal of links might take a moment to reflect whether Google is really as interested in freedom of expression as they are, and, if not, why it is acting as it is.

*At the time of writing this advice does not appear to have been made available on the ICO website.

5 Comments

Filed under Data Protection, Directive 95/46/EC, enforcement, Information Commissioner, Privacy

ICO indicates that (non-recreational) bloggers must register with them

I think I am liable to register with the ICO, and so are countless others. But I also think this means there needs to be a debate about what this, and future plans for levying a fee on data controllers, mean for freedom of expression

Recently I wrote about whether I, as a blogger, had a legal obligation to register with the Information Commissioner’s Office (ICO) the fact that I was processing personal data (and the purposes for which it was processed). As I said at the time, I asked the ICO whether I had such an obligation, and they said

from the information you have provided it would be unlikely that you would be required to register in respect of your blogs and tweets

However, I asked them for clarification on this point. I noted that I couldn’t see any exemption from the obligation to register, unless it was the general exemption (at section 36) from the Data Protection Act 1998 (DPA) where the processing is only for “domestic purposes”, which include “recreational purposes”. I noted that, as someone writing a semi-professional blog, I could hardly rely on the fact I do this only for recreational purposes. The ICO’s reply is illuminating

if you were blogging only for your own recreational purposes, it would be unlikely that you would need to register as a data controller. However, you have explained that your blogging is not just for recreational purposes. If you are sharing your views in order to further some other purpose, and this is likely to impact on third parties, then you should consider registering.

I know this is couched in rather vague terms – “if”…”likely”…”consider” – but it certainly suggests that merely being a non-professional blogger does not exempt me from having to register with a statutory regulator.

Those paying careful attention might understand the implications of this: millions of people every day share their views online, in order to further some purpose, in a way that “is likely to impact on third parties”. That is just what poor Bodil Lindqvist was doing when she was convicted in the Swedish courts, and in 2003 the Court of Justice of the European Union held that, under the European Data Protection Directive, she had been processing personal data as a data controller, and consequently had legal obligations under data protection law to process data fairly, i.e. not to write about a fellow churchgoer’s broken leg etc. without informing her or giving her an opportunity to object.

And there, in my last paragraph, you have an example of me processing personal data – I have published (i.e. processed) sensitive (i.e. criminal conviction) personal data (i.e. of an identifiable individual). I am a data controller. Surely I have to register with the ICO? Section 17 of the DPA says that personal data must not be processed unless an entry in respect of the data controller is included in the register maintained by the ICO, unless an exemption applies. The “domestic purposes” exemption doesn’t wash – the ICO has confirmed that[1] – and none of the other exemptions apply. I have to register.

But if I have to register (and I will, because if I continue to process personal data without a registration I am potentially committing a criminal offence) then so, surely, do the millions of other people throughout the country, and throughout the jurisdiction of the data protection directive, who publish personal data on the internet not solely for recreational purposes – all the citizen bloggers, campaigning tweeters, community facebookers and many, many others…

To single people out would be unfair, so I’m not going to identify individuals who I think potentially fall into these categories, with the following exception. In 2011 Barnet Council was roundly ridiculed for complaining to the ICO about the activities of a blogger who regularly criticised the council and its staff on his blog[2]. The Council asked the ICO to determine whether the blogger in question had failed in his legal obligation to register with the ICO in order to legitimise his processing of personal data. The ICO’s response was

If the ICO were to take the approach of requiring all individuals running a blog to notify as a data controller … it would lead to a situation where the ICO is expected to rule on what is acceptable for one individual to say about another. Requiring all bloggers to register with this office and comply with the parts of the DPA exempted under Section 36 (of the Act) would, in our view, have a hugely disproportionate impact on freedom of expression.

But subsequently, the ICO was taken to task in the High Court (in unrelated proceedings) over this general stance of being “expected to rule on what is acceptable for one individual to say about another”, with the judge saying

I do not find it possible to reconcile the views on the law expressed [by the ICO] with authoritative statements of the law. The DPA does envisage that the Information Commissioner should consider what it is acceptable for one individual to say about another, because the First Data Protection Principle requires that data should be processed lawfully

And if the ICO now accepts that at least those bloggers (like the one in the Barnet case) who are not blogging solely for recreational purposes might be required to register, that possibly indicates a fundamental change.

In response to my last blog post on this subject someone asked “why ruffle feathers?”. But I think this should lead to a societal debate: is it an unacceptable infringement of the principles of freedom of expression for the law to require registration with a state regulator before one can share one’s (non-recreational) views about individuals online? Or is it necessary for this legal restraint to be in place, to seek to protect individuals’ privacy rights?

European data protection reforms propose the removal of the general obligation for a data controller to register with a data protection authority, but in the UK proposals are being made (because of the loss of ICO fee income that would come with this removal) that there be a levy on data controllers.

If such proposals come into effect it is profoundly important that there is indeed a debate about the terms on which the levy is made – or else we could all end up being liable to pay a tax to allow us to talk online.

[1] On a strict reading of the law, and the CJEU judgment in Lindqvist, the distinction between recreational and non-recreational expressions online does not exist, and any online expression about an identifiable individual would constitute processing of personal data. The “recreational” distinction does not exist in the Data Protection Directive, and is solely a domestic provision.

[2] A confession: I joined in the ridicule, but was disabused of my error by the much better-informed Tim Turner. Not that I don’t think the Council’s actions were ill-judged.

10 Comments

Filed under Data Protection, Directive 95/46/EC, Information Commissioner, social media

Do bloggers need to register with the ICO?

A strict reading of data protection law suggests many (if not all) bloggers should register with the ICO, even though the latter disagrees. And, I argue, the proposal for an Information Rights Levy runs the risk of being notification under a different name

Part III of the Data Protection Act 1998 (DPA) gives domestic effect to Article 18 of the European Data Protection Directive (the Directive). It describes the requirement that data controllers notify the fact that they are processing personal data, and the details of that processing, to the Information Commissioner’s Office (ICO). It is, on one view, a rather quaint throwback to the days when processing of personal data was seen as an activity undertaken by computer bureaux (a term found in the predecessor Data Protection Act 1984). However, it is law which is very much in force, and processing personal data without a valid notification, in circumstances where the data controller had an obligation to notify, is a criminal offence (section 21(1) DPA). Moreover, it is an offence which is regularly prosecuted by the ICO (eleven such prosecutions so far this year).

These days, it is remarkably easy to find oneself in the position of being a data controller (“a person who (either alone or jointly or in common with other persons) determines the purposes for which and the manner in which any personal data are, or are to be, processed”). There are, according to the ICO, more than 370,000 data controllers registered. Certainly, if you are a commercial enterprise which in any way electronically handles personal data of customers or clients it is almost inevitable that you will be a data controller with an obligation to register. The exemptions to registering are laid out in regulations, and are quite restrictive – they are, in the main, the following (wording taken from the ICO Notification Handbook)

Data controllers who only process personal information for: staff administration (including payroll); advertising, marketing and public relations (in connection with their own business activity); and accounts and records.
Some not-for-profit organisations.
Maintenance of a public register.
Processing personal information for judicial functions.
Processing personal information without an automated system such as a computer.
But there is one other, key exemption. This is not within the notification regulations, but at section 36 of the DPA itself, and it exempts personal data from the whole of the Act if it is
processed by an individual only for the purposes of that individual’s personal, family or household affairs (including recreational purposes)
Thus, if you, for instance, keep a record of your children’s medical histories on your home computer, you are not caught by the DPA at all (and not required to notify with the ICO).

Where this becomes interesting (it does become interesting, honestly) is when the very expansive interpretation the ICO gives to this “domestic purposes exemption” is considered in view of the extent to which people’s domestic affairs – including recreational purposes – now take place in a more public sphere, whereby large amounts of information are happily published by individuals on social media. As I have written elsewhere, the Court of Justice of the European Union (CJEU) held in 2003, in the Lindqvist case, that the publishing of information on the internet could not be covered by the relevant domestic purposes exemption in the Directive. The ICO and the UK have, ever since, been in conflict with this CJEU authority, a point illustrated by the trenchant criticism delivered in the High Court in the judgment by Tugendhat J in The Law Society v Kordowski.

But I think there is an even more stark illustration of the implications of an expansive interpretation of the section 36 exemption, and I provide it. On this blog I habitually name and discuss identifiable individuals – this is processing of personal data, and I determine the purposes for which, and the manner in which, this personal data is processed. Accordingly, I become a data controller, according to the definitions at section 1(1) of the DPA. So, do I need to notify my processing with the ICO? The answer, according to the ICO, is “no”. They tell me

from the information you have provided it would be unlikely that you would be required to register in respect of your blogs and tweets
But I don’t understand this. I cannot see any exemption which applies to my processing – unless it is section 36. But in what way can I seriously claim that I am processing personal data only for my domestic (including recreational) purposes? Yes, blogging about information rights is partly a recreation to me (some might say that makes me odd) but I cannot pretend that I have no professional aims and purposes in doing so. Accordingly, the processing cannot only be for domestic purposes.

I have asked the ICO to confirm what, in their view, exempts me from notification. I hope they can point me to something I have overlooked, because, firstly, anything that avoids my having to pay an annual notification fee of £35 would be welcome, and secondly, I find it rather uncomfortable to be on the receiving end of my own personal analysis that I’m potentially committing a criminal offence, even if the lead prosecutor assures me I’m not.

The point about the notification fee leads me on to a further issue. As I say above, notification is in some ways rather quaint – it harks back to days when processing of personal data was a specific, discrete activity, and looks odd in a world where, with modern technology, millions of activities every day meet the definition of “processing personal data”. No doubt for these reasons, the concept of notification with a data protection authority is missing from the draft General Data Protection Regulation (GDPR) currently slouching its way through the European legislative process. However, a proposal by the ICO suggests that, at least in the domestic sphere, notification (in another guise) might remain under new law.

The ICO, faced with the fact that its main funding stream (the annual notification fees from those 370,000-plus data controllers) would disappear if the GDPR is passed in its proposed form, is lobbying for an “information rights levy”. Christopher Graham said earlier this year

I would have thought an information rights levy, paid for by public authorities and data controllers [is needed]. We would be fully accountable to Parliament for our spending.

and the fact that this proposal made its way into the ICO’s Annual Report, with Graham saying that Parliament needs to “get on with the task” of establishing the levy, suggests that it might well be something the Ministry of Justice agrees with. As the MoJ would be first in line to have to make up the funding shortfall if a levy wasn’t introduced, it is not difficult to imagine it becoming a reality.

On one view, a levy makes perfect sense – a “tax” on those who process personal data. But looked at another way, it will potentially become another outmoded means of defining what a data controller is. One cannot imagine that, for instance, bloggers and other social media users will be expected to pay it, so it is likely that, in effect, those data controllers whom the ICO currently expects to notify will be those who are required to pay the levy. One imagines, also, that pour encourager les autres, it might be made a criminal offence not to pay the levy in circumstances where a data controller should pay it but fails to do so. In reality, will it just be a mirror-image of the current notification regime?

And will I still be analysing my own blogging as being processing that belongs to that regime, but with the ICO, for pragmatic, if not legally sound, reasons, deciding the opposite?

1 Comment

Filed under Data Protection, Directive 95/46/EC, Europe, GDPR, parliament

DVLA, disability and personal data

Is the DVLA’s online vehicle-checker risking the exposure of sensitive personal data of registered keepers of vehicles?

The concept of “personal data”, in the Data Protection Act 1998 (DPA) (and, beyond, in the European Data Protection Directive 95/46/EC) can be a slippery one. In some cases, as the Court of Appeal recognised in Edem v The Information Commissioner & Anor [2014] EWCA Civ 92 where it had to untangle a mess that the First-tier Tribunal had unnecessarily got itself into, it is straightforward: someone’s name is their personal data. In other cases, especially those which engage the second limb of the definition in section 1(1) of the DPA (“[can be identified] from those data and other information which is in the possession of, or is likely to come into the possession of, the data controller”), it can be profoundly complex (see the House of Lords in Common Services Agency v Scottish Information Commissioner (Scotland) [2008] UKHL 47, a judgment which, six years on, still makes data protection practitioners wake up in the night screaming).

When I first looked at the reports that the DVLA’s Vehicle Tax Check service enabled people to see whether the registered owner of a car was disabled, I thought this might fall into the complex category of data protection issues. On reflection, I think it’s relatively straightforward.

I adopt the excellent analysis by the benefitsandwork.co.uk site

A new vehicle check service on the DVLA website allows visitors to find out whether their neighbours are receiving the higher rate of the mobility component of disability living allowance (DLA) or either rate of the mobility component of personal independence payment (PIP)…The information that DVLA are making available is not about the vehicle itself. Instead they are publishing personal information about the benefits received by the individual who currently owns the car or for whom the car is solely used.

It’s difficult to argue against this, although it appears the DVLA are trying, because they responded to the initial post by saying

The Vehicle Enquiry Service does not include any personal data. It allows people to check online what information DVLA holds about a vehicle, including details of the vehicle’s tax class to make sure that local authorities and parking companies do not inadvertently issue parking penalties where parking concessions apply. There is no data breach – the information on a vehicle’s tax class that is displayed on the Vehicle Enquiry Service does not constitute personal data. It is merely a descriptive word for a tax class

but, as benefitsandwork say, that is only true insofar as the DVLA are publishing the tax band of the car; when they are publishing that the car belongs to a tax-exempt category for reasons of the owner’s disability, they are publishing something about the registered keeper (or someone they care for, or regularly drive), and that is sensitive personal data.

What DVLA is doing is not publishing the car’s tax class – that remains the same whoever the owner is – they are publishing details of the exempt status of the individual who currently owns it. That is personal data about the individual, not data about the vehicle

As the Information Commissioner’s guidance (commended by Moses LJ in Edem) says

Is the data being processed, or could it easily be processed, to learn, record or decide something about an identifiable individual? Or, as an incidental consequence of the processing, could you learn or record something about an identifiable individual, or could the processing have an impact on, or affect, an identifiable individual?

Ultimately benefitsandwork’s example (where someone was identified from this information) unavoidably shows that the information can be personal data: if someone can search the registration number of a neighbour’s car, and find out that the registered keeper is exempt from paying the road fund licence for reasons of disability, that information will be the neighbour’s personal data, and it will have been disclosed to them unfairly, and in breach of the DPA (because no condition for the disclosure in Schedule 3 exists).
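
To put the same point concretely: the sketch below is purely hypothetical – the registrations and tax classes are invented, and this is not the DVLA’s actual service or data – but it shows how a nominally vehicle-level lookup, combined with the everyday knowledge of whose car is whose (the “other information” limb of section 1(1)), yields sensitive information about a person:

```python
# Hypothetical illustration of the benefitsandwork scenario. The
# registrations and tax classes below are invented, not DVLA data.

# What a public vehicle-enquiry service might return, keyed by registration
vehicle_tax_class = {
    "AB12CDE": "Disabled",    # tax-exempt class granted for disability
    "FG34HIJ": "Petrol car",
}

# A registration plate is visible on the street, and a neighbour knows
# exactly who keeps that vehicle: that is the "other information"
neighbours_registration = "AB12CDE"

if vehicle_tax_class[neighbours_registration] == "Disabled":
    # Data about the vehicle has just disclosed something about a person:
    # the keeper (or someone they care for) has a disability-related exemption
    print("Registered keeper appears to be exempt by reason of disability")
```

The lookup key is data about the vehicle; the inference it licenses is sensitive personal data about an identifiable individual.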

I hope the DVLA will rethink.

11 Comments

Filed under Confidentiality, Data Protection, Directive 95/46/EC, disability, Information Commissioner, Privacy

We’re looking into it

The news is awash with reports that the UK Information Commissioner’s Office (ICO) is “opening an investigation” into Facebook’s rather creepy research experiment, in conjunction with US universities, in which it apparently altered the users’ news feeds to elicit either positive or negative emotional responses. Thus, the BBC says “Facebook faces UK probe over emotion study”, SC Magazine says “ICO probes Facebook data privacy” and the Financial Times says “UK data regulator probes Facebook over psychological experiment”.

As well as prompting one to question some journalists’ obsession with probes, this also leads one to look at the basis for these stories. It appears to lie in a quote from an ICO spokesman, given, I think, originally to the online IT news outlet The Register

The Register asked the office of the UK’s Information Commissioner if it planned to probe Facebook following widespread criticism of its motives.

“We’re aware of this issue, and will be speaking to Facebook, as well as liaising with the Irish data protection authority, to learn more about the circumstances,” a spokesman told us.
So, the ICO is aware of the issue and will be speaking to Facebook and to the Irish Data Protection Commissioner’s office. This doesn’t quite match up to the rather hyperbolic news headlines. And there’s a good reason for this – the ICO is highly unlikely to have any power to investigate, let alone take action. Facebook, along with many other tech/social media companies, has its non-US headquarters in Ireland. This is partly for taxation reasons and partly because of access to high-skilled, relatively low-cost labour. However, some companies – Facebook is one, LinkedIn another – have a further reason, evidenced by the legal agreements that users enter into: because the agreement is with “Facebook Ireland”, Ireland is deemed to be the relevant jurisdiction for data protection purposes. And, fairly or not, the Irish data protection regime is generally perceived to be relatively “friendly” towards business.

These jurisdictional issues are by no means clear-cut – in 2013 a German data protection authority tried to exercise powers to stop Facebook imposing a “real name only” policy.

Furthermore, as the Court of Justice of the European Union recognised in the recent Google Spain case, the issue of territorial responsibilities and jurisdiction can be highly complex. The Court held there that, as Google had

[set] up in a Member State a branch or subsidiary which is intended to promote and sell advertising space offered by that engine and which orientates its activity towards the inhabitants of that Member State

it was processing personal data in that Member State (Spain). Facebook does have a large UK corporate office with some responsibility for sales. It is just possible that this could give the ICO, as domestic data protection authority, some power to investigate. And if or when the draft European General Data Protection Regulation gets passed, fundamental shifts could take place, extending even, under Article 3(2), to bringing data controllers outside the EU within jurisdiction, where they are offering goods or services to (or monitoring) data subjects in the EU.

But the question here is really whether the ICO will assert any purported power to investigate, when the Irish DPC is much more clearly placed to do so (albeit with terribly limited resources). I think it’s highly unlikely, despite all the media reports. In fact, if the ICO does investigate, and it leads to any sort of enforcement action, I will eat my hat*.

*I reserve the right to specify what sort of hat

Leave a comment

Filed under Data Protection, Directive 95/46/EC, enforcement, facebook, journalism, social media, Uncategorized

Google is not a library, Dr Cavoukian

The outgoing Ontario Information and Privacy Commissioner Ann Cavoukian, whose time in office has been hugely, and globally, influential (see in particular Privacy by Design) has co-written (with Christopher Wolf) an article strongly criticising the judgment of the Court of Justice of the European Union (CJEU) in the Google Spain case.

For anyone who has been in the wilderness for the last few weeks, in Google Spain the CJEU ruled that Google Spain, as a subsidiary of Google Inc. operating on Spanish territory, was covered by the obligations of the European Data Protection Directive 95/46/EC, that it was operating as an entity that processed personal data in the capacity of a data controller, and that it was accordingly required to consider applications from data subjects for removal of search returns. Thus, what is loosely called a “right to be forgotten” is seen already to exist in the current data protection regime.

Many have written on this landmark CJEU ruling (I commend in particular Dr David Erdos’s take, on the UK Constitutional Law Blog) and I am not here going to go into any great detail, but what I did take issue with in the Cavoukian and Wolf piece was the figurative comparison of Google with a public library:

A man walks into a library. He asks to see the librarian. He tells the librarian there is a book on the shelves of the library that contains truthful, historical information about his past conduct, but he says he is a changed man now and the book is no longer relevant. He insists that any reference in the library’s card catalog and electronic indexing system associating him with the book be removed, or he will go to the authorities…

…The government agent threatens to fine or jail the librarian if he does not comply with the man’s request to remove the reference to the unflattering book in the library’s indexing system.

Is this a scenario out of George Orwell’s Nineteen Eighty-Four? No, this is the logical extension of a recent ruling from Europe’s highest court

(I pause briefly to say that if I never see another reference to Orwell in the context of privacy debate I will die a happy man).

I’m fond of analogies but Cavoukian’s and Wolf’s one (or maybe it’s a metaphor?) is facile. I think it could more accurately say

A man walks into a library. He sees that, once again, the library has chosen, because of how it organises its profit-making activities, to give great prominence to a book which contains information about his past conduct, which is no longer relevant, and which it is unfair to highlight. He asks them to give less prominence to it.

Cavoukian and Wolf accept that there should be a right to remove “illegal defamatory” content if someone posts it online, but feel that the issue of links to “unflattering, but accurate” information should be explored using “other solutions”. (I pause again to note that “unflattering” is an odd and loaded word to use here: Mr Gonzalez, in the Google Spain case, was concerned about out-of-date information about bankruptcy, and other people who might want to exercise a right to removal of links might be concerned by much worse than “unflattering” information).

I don’t disagree that other solutions should be explored to the issue of the persistence or re-emergence of old information which data subjects reasonably no longer wish to be known, but people are entitled to use the laws which exist to pursue their aims, and the application by the CJEU of data protection law to the issues pleaded was, to an extent, uncontroversial (is Google a data controller? If it is, what are its obligations to respect a request to desist from processing?).

Cavoukian and Wolf criticise the CJEU for failing to provide sufficient instruction on how “the right to be forgotten” should be applied, and for failing to consider whether “online actors other than search engines have a duty to ‘scrub’ the Internet of unflattering yet truthful facts”, but a court can only consider the issues pleaded before it, and these weren’t. Where I do agree with them is in their criticism of the apparent failure by the CJEU, when giving effect to the privacy rights in Article 8 of the European Convention on Human Rights, and Articles 7 and 8 of the Charter of Fundamental Rights of the European Union, to consider adequately, if at all, the countervailing rights to freedom of expression in Article 10 of the former and Article 11 of the latter. In this respect, the prior Opinion of the Advocate General was perhaps to be preferred.

The key word in my replacement library analogy above is “chosen”. Google is not a passive and inert indexing system. Rather, it is a dynamic and commercially-driven system which uses complex algorithms to determine which results appear against which search terms. It already exercises editorial control over results, and will remove some which it is satisfied are clearly unlawful or which constitute civil wrongs such as breach of copyright. Is it so wrong that (if it gives appropriate weight to the (sometimes) competing considerations of privacy and freedom of expression) it should be required to consider a request to remove unfair and outdated private information?


2 Comments

Filed under Data Protection, Directive 95/46/EC, Europe, human rights, Privacy

Nominal damages give rise to distress compensation under the Data Protection Act – AB v Ministry of Justice

An award of nominal DPA damages in the High Court.

Whether, or in what circumstances, compensation may be awarded to a claimant who shows a contravention by a data controller of any of the requirements of the Data Protection Act 1998 (DPA), is a much-debated issue. It is also, occasionally, litigated. One key aspect is when compensation for distress might be awarded.

Section 13 of the DPA provides, so far as is relevant here, that

(1)An individual who suffers damage by reason of any contravention by a data controller of any of the requirements of this Act is entitled to compensation from the data controller for that damage.

(2)An individual who suffers distress by reason of any contravention by a data controller of any of the requirements of this Act is entitled to compensation from the data controller for that distress if—

(a)the individual also suffers damage by reason of the contravention

The general interpretation of this has been that compensation for distress, in the absence of pecuniary damage, is not available. The leading case on this is Johnson v The Medical Defence Union Ltd (2) [2006] EWHC 321 and on appeal Johnson v Medical Defence Union [2007] EWCA Civ 262, with Buxton LJ saying in the latter

section 13 distress damages are only available if damage in the sense of pecuniary loss has been suffered

However in allowing an appeal in Murray v Big Pictures (UK) Ltd [2008] EWCA Civ 446, and directing that the case go to trial, the Court of Appeal was prepared to consider a different view

It seems to us to be at least arguable that the judge [in the first instance] has construed ‘damage’ too narrowly, having regard to the fact that the purpose of the Act was to enact the provisions of the relevant Directive

But that case was ultimately settled before trial, and the issue left undecided.

Clearly, the decision in Johnson is potentially controversial, especially in cases (of which Johnson was not one) where the UK’s obligations under the European Data Protection Directive, and data subjects’ associated rights under the European Convention on Human Rights and the Charter of Fundamental Rights of the European Union, are taken into account. This much was recognised by Tugendhat J, in giving permission to the applicants in Vidal-Hall & Ors v Google Inc [2014] EWHC 13 (QB) to serve on Google Inc out of the jurisdiction. He noted (¶83-104) academic statements on the issue, as well as the European Commission’s view that the UK DPA wrongly restricts “[t]he right to compensation for moral damage when personal information is used inappropriately”, and said

This is a controversial question of law in a developing area, and it is desirable that the facts should be found. It would therefore be the better course in the present case that I should not decide this question on this application.

I shall therefore not decide it. However, in case it is of any assistance in the future, my preliminary view of the question is that Mr Tomlinson’s submissions are to be preferred, and so that damage in s.13 does include non-pecuniary damage

This is a fascinating point, and detailed judicial consideration of it would be welcomed (it may also be at issue in the impending case of Steinmetz v Global Witness Ltd) but, in the meantime, a question exists as to whether nominal pecuniary damage opens the door to awards for distress. In Johnson, the cost of a £10.50 breakfast had opened the door, but this was actual (if minor) damage. Last year, the Court of Appeal avoided having to decide the issue when the defendant conceded the point in Halliday v Creation Consumer Finance Ltd (CCF) [2013] EWCA Civ 333 (about which I blogged last year). However, in a very recent judgment, AB v Ministry of Justice [2014] EWHC 1847 (QB), which takes some wading through, Mr Justice Baker does appear to have proceeded on the basis that nominal damages do give rise to distress compensation.

The case involves an (anonymous) partner in a firm of solicitors who, as a result of events involving the coroner following his wife’s tragic death, made a series of subject access requests (under the provisions of section 7 DPA). The Ministry of Justice (MoJ) did not, it seems, necessarily handle these well, nor in accordance with their obligations under the DPA, and when it came to remedying these contraventions (which consisted of delayed responses) the judge awarded nominal damages of £1.00, before moving on to award £2250 for distress caused by the delays.

What is not clear from the judgment is to what extent the judge considered the MoJ’s submission that compensation for distress was only available if an individual has also suffered damage. The answer may lie in the fact that, although he awarded nominal damages, the judge accepted that AB had suffered (actual) damage but had “not sought to quantify his time or expense”. Query, therefore, whether this is a case of purely nominal damage.

One hopes that Vidal-Hall and Global Witness will provide the occasions to determine these matters. One notes, however, the vigour with which both cases are being litigated by the parties: it may be some time before the issue is settled once and for all.

Leave a comment

Filed under damages, Data Protection, Directive 95/46/EC, human rights