Tag Archives: directive 95/46/EC

Why what Which did wears my patience thin

Pre-ticked consent boxes and unsolicited emails from the Consumers’ Association

Which?, the brand name of the Consumers’ Association, publishes a monthly magazine. In an era of social media and online reviews, its mix of consumer news and product ratings might seem rather old-fashioned, but it is still (according to its own figures [1]) Britain’s best-selling monthly magazine. Its rigidly paywalled website means that one must generally subscribe to get at the magazine’s contents. That’s fair enough (although after my grandmother died several years ago, we found piles of unread, even unopened, copies of Which? She had apparently signed up to a regular Direct Debit payment, probably to receive a “free gift”, and had never cancelled it: so one might draw one’s own conclusion about how many of Which?’s readers are regular subscribers for similar reasons).

In line with its general “locked-down” approach, Which?’s recent report into the sale of personal data was, except for snippets, not easy to access, but it got a fair bit of media coverage. Intrigued, I bit: I subscribed to the magazine. This post is not about the report, however, although the contents of the report drive the irony of what happened next.

As I went through the online sign-up process, I arrived at that familiar type of page where the subject of future marketing is broached. Which? had headlined their report “How your data could end up in the hands of scammers”, so it struck me as amusing, but also irritating, that the marketing options section of the sign-up process came with a pre-ticked box:

[Image: screenshot of the Which? sign-up page, showing a pre-ticked marketing consent box]

As guidance from the Information Commissioner’s Office makes clear, pre-ticked boxes are not a good way to get consent from someone to future marketing:

Some organisations provide pre-ticked opt-in boxes, and rely on the user to untick it if they don’t want to consent. In effect, this is more like an opt-out box, as it assumes consent unless the user clicks the box. A pre-ticked box will not automatically be enough to demonstrate consent, as it will be harder to show that the presence of the tick represents a positive, informed choice by the user.

The Article 29 Working Party goes further, saying in its opinion on unsolicited communications for marketing purposes that inferring consent to marketing from the use of pre-ticked boxes is not compatible with the data protection directive. By extension, therefore, any marketing subsequently sent on the basis of a pre-ticked box will be a contravention of the data protection directive (and, in the UK, the Data Protection Act 1998) and the ePrivacy directive (in the UK, the Privacy and Electronic Communications (EC Directive) Regulations 2003 (PECR)).

Notwithstanding this, I certainly did not want to consent to receive subsequent marketing, so, as well as making a smart-arse tweet, I unticked the box. However, to my consternation, if not my huge surprise, I have subsequently received several marketing emails from Which? They do not have my consent to send these, so they are manifestly in contravention of regulation 22 of PECR.

It’s not clear how this has happened. Could it be a deliberate tactic by Which? to ignore subscribers’ wishes? One presumes not: Which? says it “exists to make individuals as powerful as the organisations they deal with in their daily live [sic]” – deliberately ignoring clear expressions regarding consent would hardly sit well with that mission statement. So is it a general website glitch – which means that those expressions are lost in the sign-up process? If so, how many individuals are affected? Or is it just a one-off glitch, affecting only me?

Let’s hope it’s the last. Because the ignoring or overriding of expressions of consent, and the use of pre-ticked boxes for gathering consent, are among the key things which fuel the trade in, and disrespect for, personal data. The fact that I’ve experienced this issue with a charity which exists to represent consumers, as a result of my wish to read their report into misuse of personal data, is shoddy, to say the least.

I approached Which? for a comment, and a spokesman said:

We have noted all of your comments relating to new Which? members signing up, including correspondence received after sign-up, and we are considering these in relation to our process.

I appreciate the response, although I’m not sure it really addresses my concerns.

[1] Which? Annual Report 2015/2016

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.


Filed under consent, Data Protection, Directive 95/46/EC, Information Commissioner, marketing, PECR, spam, subject access

Any Safe Harbor in a storm…?

UPDATE: The ICO has contacted me to say that it actually selected SnapSurveys because they offered clients the option of hosting survey responses on UK servers, and it has checked with SnapSurveys that this remains the case. I have been pointed to http://www.snapsurveys.com/survey-software/security-accessibility-and-professional-outline/ which confirms this point.

So the answer to my question

Is the ICO making unlawful transfers of personal data to the US?

appears, I’m pleased to confirm, to be “no”.

Earlier this week the Information Commissioner’s Office (ICO) published a blogpost by Deputy Commissioner David Smith, entitled The US Safe Harbor – breached but perhaps not destroyed!

“Don’t panic” says David to those data controllers who are currently relying on Safe Harbor as a means of ensuring that personal data transferred by them to the United States has adequate protection (in line with the requirements of Article 25 of the European Data Protection Directive, and the eighth data protection principle in Schedule 1 to the UK’s Data Protection Act 1998 (DPA)). He is referring, of course, to the recent decision of the Court of Justice of the European Union in Schrems. Data controllers should, he says, “take stock” and “make their own minds up”:

businesses in the UK don’t have to rely on Commission decisions on adequacy. Although you won’t get the same degree of legal certainty, UK law allows you to rely on your own adequacy assessment. Our guidance tells you how to go about doing this.  Much depend [sic] here on the nature of the data that you are transferring and who you are transferring it to but the big question is can you reduce the risks to the personal data, or rather the individuals whose personal data it is, to a level where the data are adequately protected after transfer? The Safe Harbor can still play a role here.

Smith also refers to a recent statement by the Article 29 Working Party – the grouping of representatives of the various European data protection authorities, of which he is a member – and describes “the substance of the statement being measured, albeit expressed strongly”. What he doesn’t say is how unequivocal it is in saying that

transfers that are still taking place under the Safe Harbour decision after the CJEU judgment are unlawful

And this is particularly interesting because, as I discovered today, the ICO itself appears (still) to be making transfers under Safe Harbor. I reported a nuisance call using its online tool (in doing so I included some sensitive personal data about a family member) and noticed that the tool was operated by SnapSurveys. The ICO’s own website privacy notice says

We collect information volunteered by members of the public about nuisance calls and texts using an online reporting tool hosted by Snap Surveys. This company is a data processor for the ICO and only processes personal information in line with our instructions.

while SnapSurveys’ privacy policy explains that

Snap Surveys NH, Inc. complies with the U.S. – E.U. Safe Harbor framework

This does not unambiguously say that SnapSurveys are transferring the personal data of those submitting reports to the ICO to the US under Safe Harbor – it is possible that the ICO has set up some bespoke arrangement with its processor, under which they process that specific ICO data within the European Economic Area – but it strongly suggests it.

It is understandable that a certain amount of regulatory leeway and leniency should be offered to data controllers who have relied on Safe Harbor until now – to that extent I agree with the light-touch approach of the ICO. But if it is really the case that people’s personal data are actually being transferred by the regulator to the US, three weeks after the striking down of the European Commission’s 2000 decision that Safe Harbor provided adequate protection, serious issues arise. I will be asking the ICO for confirmation about this, and whether, if it is indeed making these transfers, it has undertaken its own adequacy assessment.

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.



Filed under 8th principle, Data Protection, Directive 95/46/EC, Information Commissioner, safe harbor

Vidal-Hall v Google, and the rise of data protection ambulance-chasing

Everyone knows the concept of ambulance chasers – personal injury lawyers who seek out victims of accidents or negligence to help/persuade the latter to make compensation claims. With today’s judgment in the Court of Appeal in the case of Vidal-Hall & Ors v Google [2015] EWCA Civ 311 one wonders if we will start to see data protection ambulance chasers, arriving at the scene of serious “data breaches” with their business cards.

This is because the Court has made a definitive ruling on the issue, discussed several times previously on this blog, of whether compensation can be claimed under the Data Protection Act 1998 (DPA) in circumstances where a data subject has suffered distress but no tangible, pecuniary damage. Section 13 of the DPA provides that

(1) An individual who suffers damage by reason of any contravention by a data controller of any of the requirements of this Act is entitled to compensation from the data controller for that damage.

(2) An individual who suffers distress by reason of any contravention by a data controller of any of the requirements of this Act is entitled to compensation from the data controller for that distress if—

(a) the individual also suffers damage by reason of the contravention

This differs from the wording of the European Data Protection Directive 95/46/EC, which, at Article 23(1), says

Member States shall provide that any person who has suffered damage as a result of an unlawful processing operation or of any act incompatible with the national provisions adopted pursuant to this Directive is entitled to receive compensation from the controller for the damage suffered

It can be seen that, in the domestic statutory scheme, “distress” is distinct from “damage”, but in the Directive there is just a single category of “damage”. The position until relatively recently, following Johnson v Medical Defence Union [2007] EWCA Civ 262, had been that “damage” meant pecuniary damage, and this in turn meant, as Buxton LJ said in that case, that “section 13 distress damages are only available if damage in the sense of pecuniary loss has been suffered”. So, absent pecuniary damage, no compensation for distress was available (except in certain specific circumstances involving processing of personal data for journalistic, literary or artistic purposes). But this, said Lord Dyson and Lady Justice Sharp in a joint judgment, was wrong, and, in any case, they were not bound by Johnson because the relevant remarks in that case were in fact obiter. In fact, they said, section 13(2) DPA was incompatible with Article 23 of the Directive:

What is required in order to make section 13(2) compatible with EU law is the disapplication of section 13(2), no more and no less. The consequence of this would be that compensation would be recoverable under section 13(1) for any damage suffered as a result of a contravention by a data controller of any of the requirements of the DPA

As Christopher Knight says, in a characteristically fine and exuberant piece on the Panopticon blog, “And thus, section 13(2) was no more”.

And this means a few things. It certainly means that it will be much easier for an aggrieved data subject to bring a claim for compensation against a data controller which has contravened its obligations under the DPA in circumstances where there is little, or no, tangible or pecuniary damage, but only distress. It also means that we may well start to see the rise of data protection ambulance chasers – the DPA may not give rise to massive settlements, but it is a relatively easy claim to make – a contravention is often effectively a matter of fact, or is found to be such by the Information Commissioner, or is conceded/admitted by the data controller – and there is the prospect of group litigation (in 2013 Islington Council settled claims brought jointly by fourteen claimants following disclosure of their personal data to unauthorised third parties – the settlement totalled £43,000).

I mentioned in that last paragraph that data controllers sometimes concede or admit to contraventions of their obligations under the DPA. Indeed, they are expected to by the Information Commissioner, and the draft European General Data Protection Regulation proposes to make it mandatory to do so, and to inform data subjects. And this is where I wonder if we might see another effect of the Vidal-Hall case – if data controllers know that by owning up to contraventions they may be exposing themselves to multiple legal claims for distress compensation, they (or their shareholders, or insurers) may start to question why they should do this. Breach notification may be seen as even more of a risky exercise than it is now.

There are other interesting aspects to the Vidal-Hall case – misuse of private information is, indeed, a tort, allowing service of the claims against Google outside the jurisdiction, and there are profound issues regarding the definition of personal data which are undecided and, if they go to trial, will be extremely important – but the disapplying of section 13(2) DPA looks likely to have profound effects for data controllers, for data subjects, for lawyers and for the landscape of data protection litigation in this country.

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.


Filed under Breach Notification, damages, Data Protection, Directive 95/46/EC, GDPR, Information Commissioner

A data protection justice gap?

On the 4th March the Supreme Court handed down judgment in the conjoined cases of Catt and T v Commissioner of Police of the Metropolis ([2015] UKSC 9). Almost unanimously (there was one dissenting opinion in Catt) the appeals by the Met were allowed. In brief, the judgments held that the retention of historical criminal conviction data was proportionate. But what I thought was particularly interesting was the suggestion (at paragraph 45) by Lord Sumption (described to me recently as “by far the cleverest man in England”) that T‘s claim at least had been unnecessary:

[this] was a straightforward dispute about retention which could have been more appropriately resolved by applying to the Information Commissioner. As it is, the parties have gone through three levels of judicial decision, at a cost out of all proportion to the questions at stake

and as this blog post suggests, there was certainly a hint that costs might flow in future towards those who choose to litigate rather than apply to the Information Commissioner’s Office (ICO).

But I think there’s a potential justice gap here. Last year the ICO consulted on changing how it handled concerns from data subjects about handling of their personal data. During the consultation period Dr David Erdos wrote a guest post for this blog, arguing that

The ICO’s suggested approach is hugely problematic from a rule of law point of view. Section 42 of the Data Protection Act [DPA] is crystal clear that “any person who is, or believes himself to be, directly affected by any processing of personal data” may make a request for assessment to the ICO “as to whether it is likely or unlikely that the processing has been or is being carried out in compliance with the provisions” of the Act. On receiving such a request the Commissioner “shall make an assessment” (s. 42 (1)) (emphasis added). This duty is an absolute one

but the ICO’s response to the consultation suggested that

We are…planning to make much greater use of the discretion afforded to us under section 42 of the legislation…so long as a data controller has provided an individual with a clear explanation of their processing of personal information, they are unlikely to need to describe their actions again to us if the matter in question does not appear to us to represent a serious issue or we don’t believe there is an opportunity for the data controller to improve their information rights practice

which is problematic, as section 42 confers a discretion on the ICO only as to the manner in which an assessment shall be made. Section 42(3) describes some matters to which he may have regard in determining the manner, and these include (so are not exhaustive) “the extent to which the request appears to him to raise a matter of substance”. I don’t think “a matter of substance” gets close to being the same as “a serious issue”: a matter can surely be non-serious yet still of substance. So if the discretion afforded to the ICO under section 42 as to the manner of the assessment includes a discretion to rely solely on prior correspondence between the data controller and the data subject, this is not specified in (and can only be inferred from) section 42.

Moreover, and interestingly, Article 28(4) of the European Data Protection Directive, which is transposed in section 42 DPA, confers no such discretion as to the manner of assessment, and this may well have been one of the reasons the European Commission began protracted infraction proceedings against the UK (see Chris Pounder’s blog posts, passim).

Nonetheless, the outcome of the ICO consultation was indeed a new procedure for dealing with data subjects’ concerns. Their website now says

Should I raise my concern with the ICO?

If the organisation has been unable, or unwilling, to resolve your information rights concern, you can raise the matter with us. We will use the information you have provided, including the organisation’s response to your concerns, to decide if your concern provides an opportunity to improve information rights practice.

If we think it does provide that opportunity, we will take appropriate action

“Improving information rights practice” refers to the ICO’s general duties under section 51 DPA. What is notable by its absence there, though, is any statement of the ICO’s general duty, under section 42, to make an assessment as to whether it is likely or unlikely that the processing has been or is being carried out in compliance with the provisions of the DPA.

Lord Sumption in Catt (at 34) also said that “Mr Catt could have complained about the retention of his personal data to the Information Commissioner”. This is true, but would the ICO have actually done anything? Would it have represented a “serious issue”? Possibly not – Lord Sumption describes the background to Mrs T’s complaints as a “minor incident” and the retention of her data as a “straightforward dispute”. But if there are hints from the highest court of the land that bringing judicial review proceedings on data protection matters might result in adverse costs, because a complaint to the ICO is available, and if the ICO meanwhile shows reluctance to consider complaints and concerns from aggrieved data subjects, is there an issue with access to data protection justice? Is there a privacy justice gap?

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.


Filed under Data Protection, Directive 95/46/EC, Information Commissioner

Are we all journalists?

The ICO has said that Global Witness can claim the data protection exemption for journalism, regarding their investigations into BSGR. This fascinating case continues to raise difficult and important questions.

Data protection law rightly gives strong protection to journalism; this is something that the 2012 Leveson inquiry dealt with in considerable detail, but, as the inquiry’s terms of reference were expressly concerned with “the press”, with “commercial journalism”, it didn’t really grapple with the rather profound question of “what is journalism?” But the question does need to be asked, because in the balancing exercise between privacy and freedom of expression too much weight afforded to one side can result in detriment to the other. If personal privacy is given too much weight, freedom of expression is weakened, but equally if “journalism” is construed too widely, and the protection afforded to journalism is consequently too wide, then privacy rights of individuals will suffer.

In 2008 the Court of Justice of the European Union (CJEU) was asked, in the Satamedia case, to consider the extent of the exemption from a large part of data protection law for processing of personal data for “journalistic” purposes. Article 9 of the European Data Protection Directive (the Directive) provides that

Member States shall provide for exemptions or derogations…for the processing of personal data carried out solely for journalistic purposes or the purpose of artistic or literary expression only if they are necessary to reconcile the right to privacy with the rules governing freedom of expression.

and recital 37 says

Whereas the processing of personal data for purposes of journalism or for purposes of literary or artistic expression, in particular in the audiovisual field, should qualify for exemption from the requirements of certain provisions of this Directive in so far as this is necessary to reconcile the fundamental rights of individuals with freedom of information and notably the right to receive and impart information

In Satamedia one of the questions the CJEU was asked to consider was whether the publishing of public-domain taxpayer data by two Finnish companies could be “regarded as the processing of personal data carried out solely for journalistic purposes within the meaning of Article 9 of the directive”. To this, the Court replied “yes”

Article 9 of Directive 95/46 is to be interpreted as meaning that the activities [in question], must be considered as activities involving the processing of personal data carried out ‘solely for journalistic purposes’, within the meaning of that provision, if the sole object of those activities is the disclosure to the public of information, opinions or ideas [emphasis added]

One can see that, to the extent that Article 9 is transposed effectively in domestic legislation, it affords significant and potentially wide protection for “journalism”. In the UK it is transposed as section 32 of the Data Protection Act 1998 (DPA). This provides that

Personal data which are processed only for the special purposes are exempt from any provision to which this subsection relates if—

(a) the processing is undertaken with a view to the publication by any person of any journalistic, literary or artistic material,

(b) the data controller reasonably believes that, having regard in particular to the special importance of the public interest in freedom of expression, publication would be in the public interest, and

(c) the data controller reasonably believes that, in all the circumstances, compliance with that provision is incompatible with the special purposes.

where “the special purposes” are one or more of “the purposes of journalism”, “artistic purposes”, and “literary purposes”. Section 32 DPA exempts data processed for the special purposes from all of the data protection principles (save the 7th, data security, principle) and, importantly, from the provisions of sections 7 and 10. Section 7 is the “subject access” provision, and normally requires a data controller, upon receipt of a written request by an individual, to inform them if their personal data is being processed, and, if it is, to give the particulars and to “communicate” the data to the individual. Section 10 broadly allows a data subject to object to processing which is likely to cause substantial damage or substantial distress, and to require the data controller to cease (or not begin) processing (and the data controller must either comply or state reasons why it will not). Personal data processed for the special purposes are, therefore, exempt from subject access and from the right to prevent processing likely to cause damage or distress. It is not difficult to see why – if the subject of, say, investigative journalism could find out what a journalist was doing, and prevent her from doing it, freedom of expression would be inordinately harmed.

The issue of the extent of the journalistic data protection exemption came into sharp focus towards the end of last year, when Benny Steinmetz and three other claimants employed by or associated with mining and minerals group Benny Steinmetz Group Resources (BSGR) brought proceedings in the High Court under the DPA seeking orders that would require campaigning group Global Witness to comply with subject access requests by the claimants, and to cease processing their data. The BSGR claimants had previously asked the Information Commissioner’s Office (ICO), pursuant to the latter’s duties under section 42 DPA, to assess the likelihood of the lawfulness of Global Witness’s processing, and the ICO had determined that it was unlikely that Global Witness were complying with their obligations under the DPA.

However, under section 32(4) DPA, if, in any relevant proceedings, the data controller claims (or it appears to the court) that the processing in question was for the special purposes and with a view to publication, the court must stay the proceedings in order for the ICO to consider whether to make a specific “special purposes” determination. Such a determination would be (under section 45 DPA) that the processing was not for the special purposes nor was it with a view to publication, and it would result in a “special information notice”. Such a stay was applied to the BSGR proceedings and, on 15 December, after some considerable wait, the ICO conveyed to the parties that it was “satisfied that Global Witness is only processing the personal data requested … for the purposes of journalism”. Accordingly, no special information notice was served, and the proceedings remain stayed. Although media reports (e.g. Guardian and Financial Times) talk of appeals and tribunals, no direct appeal right exists for a data subject in these circumstances, so, if, as seems likely, BSGR want to revive the proceedings, they will presumably have to apply to have the stay lifted and/or issue judicial review proceedings against the ICO.

The case remains fascinating. It is easy to applaud a decision in which a plucky environmental campaign group claims journalistic data protection exemption regarding its investigations of a huge mining group. But would people be so quick to support, say, a fascist group which decided to investigate and publish private information about anti-fascist campaigners? Could that group also gain data protection exemption claiming that the sole object of their processing was the disclosure to the public of information, opinions or ideas? Global Witness say that

The ruling confirms that the Section 32 exemption for journalism in the Data Protection Act applies to anyone engaged in public-interest reporting, not just the conventional media

but it is not immediately clear from where they import the “public-interest” aspect – this does not appear, at least not in explicit terms, in either the Directive or the DPA. It is possible that it can be inferred, when one considers that processing for special purposes which is not in the public interest might constitute an interference with respect for data subjects’ fundamental rights and freedoms (per recital 2 of the Directive). And, of course, with talk about public interest journalism, we walk straight back into the arguments provoked by the Leveson inquiry.

Furthermore, one notes that the Directive talks about exemption for processing of personal data carried out solely for journalistic purposes, and the DPA says “personal data which are processed only for the special purposes are exempt…”. This was why I emphasised the words in the Satamedia judgment quoted above, which talks similarly of the exemption applying if the “sole object of those activities is the disclosure to the public of information, opinions or ideas”. One might ask whether a campaigning group’s sole or only purpose for processing personal data is for journalism. Might they not, in processing the data, be trying to achieve further ends? Might, in fact, one say that the people who engage solely in the disclosure to public of information, opinions or ideas are in fact those we more traditionally think of in these terms…the press, the commercial journalists?

P.S. Global Witness have uploaded a copy of the ICO’s decision letter. This clarifies that the ICO was satisfied that Global Witness was processing for the special purposes because the processing was part of “campaigning journalism”, even though the proposed future publication of the information “forms part of a wider campaign to promote a particular cause”. This chimes with the ICO’s data protection guidance for the media, but it will be interesting to see whether it is challenged on the basis that it doesn’t support a view that the processing is “only” or “solely” for the special purposes.

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.


Filed under Data Protection, Directive 95/46/EC, Information Commissioner, journalism, Leveson

The wrong test for anonymisation?

UPDATE: 23.01.15 The ICO has responded [.doc file] to my request for a review of their decision. I drew their attention to the arguments on this page but they don’t even mention them, let alone provide a counter-analysis, in dismissing my complaints (“Having reviewed the matter, I agree with the explanations provided”). I am invited by the ICO to consider taking my own legal action. I understand that the ICO and I might have differing views on a DPA matter, but what I find difficult to accept is the refusal even to enter into a discussion with me about the detailed arguments I’ve made. END UPDATE

In February this year I asked the Information Commissioner’s Office (ICO) to investigate reports that Hospital Episode Statistics (HES) data had apparently been sold to an actuarial society by the NHS Information Centre (NHSIC), the predecessor to the Health and Social Care Information Centre (HSCIC). Specifically I requested, as a data subject can under s42 of the Data Protection Act 1998 (DPA), that the ICO assess whether it was likely or not that the processing of my personal data by NHSIC and others had been in compliance with the DPA.

Nine months later, I was still awaiting the outcome. But a clue to how the assessment would turn out was contained in the text of Sir Nick Partridge’s six month review of various data releases by NHSIC (his original report in June seemed to me to point to multiple potential DPA contraventions). In the review document he says

Six investigations have been separately instigated by the HSCIC or Information Commissioner’s Office (ICO) and shared with both parties as these focussed on whether individuals were at risk of being identified. In the cases it has investigated, the ICO has upheld the HSCIC approach and informed us that it has “seen no evidence to suggest that re-identification has occurred or is reasonably likely to occur.”

And sure enough, after chasing the ICO for the outcome of my nine-month wait, I received this (in oddly formatted text, which rather whiffed of a lot of cutting-and-pasting)

Following the recent issue regarding HSCIC, PA Consulting, and Google we investigated the issue of whether HES data could be considered personal data. This detailed work involved contacting HSCIC, PA Consulting, and Google and included the analysis of the processes for the extraction and disclosure of HES data both generally and in that case in particular. We concluded that we did not consider that the HES dataset constitutes personal data.Furthermore we also investigated whether this information had been linked to other data to produce “personal data” which was subject to the provisions of the Act. We have no evidence that there has been any re-identification either on the part of PA Consulting or Google. We also noted that HSCIC have stated that the HES dataset does not include individual level patient data even at a pseudonymised level. Our view is that the data extracted and provided to PA Consulting did not identify any individuals and there was no reasonable likelihood that re-identification would be possible.
I have added the emphasis to the words “reasonable likelihood” above. They appear in similar terms in the Partridge Review, and they struck me as rather odd. An awful lot of analysis has taken, and continues to take, place on the subject of when personal data can be “rendered fully anonymous in the sense that it is information from which the data subject is no longer identifiable” (Lord Hope’s dicta in Common Services Agency v Scottish Information Commissioner [2008] UKHL 47). Some of that analysis has been academic, and some takes the form of “soft law” guidance, for instance Opinion 05/2014 of the Article 29 Working Party, and the ICO Anonymisation Code of Practice. The former draws on the Data Protection Directive 95/46/EC, and notes that

Recital 26 signifies that to anonymise any data, the data must be stripped of sufficient elements such that the data subject can no longer be identified. More precisely, that data must be processed in such a way that it can no longer be used to identify a natural person by using “all the means likely reasonably to be used”

Anonymisation has also been subject to judicial analysis, notably in the Common Services Agency case, but, even more importantly, in the judgment of Mr Justice Cranston in Department of Health v Information Commissioner ([2011] EWHC 1430). The latter case, involving the question of disclosure of late-term abortion statistics, is by no means an easy judgment to parse (ironically so, given that it makes roughly the same observation about the Common Services Agency case). The judge held that the First-tier Tribunal had been wrong to say that the statistics in question were personal data, but that it had on the evidence been entitled to say that “the possibility of identification by a third party from these statistics was extremely remote”. The fact that the possibility of identification by a third party was extremely remote meant that “the requested statistics were fully anonymised” (¶55). I draw from this that for personal data to be anonymised in statistical format the possibility of identification of individuals by a third party must be extremely remote. The ICO’s Anonymisation Code, however, says of the case:

The High Court in the Department of Health case above stated that the risk of identification must be greater than remote and reasonably likely for information to be classed as personal data under the DPA [emphasis added]

But this seems to me to be an impermissible description of the case – the High Court did not state what the ICO says it stated – the phrases “greater than remote” and “reasonably likely” do not appear in the judgment. And that phrase “reasonably likely” is one that, as I say, makes its way into the Partridge Review, and the ICO’s assessment of the lawfulness of the HES data “sale”.

I begin to wonder if the ICO has taken the phrase from recital 26 of the Directive, which talks about the need to consider “all the means likely reasonably to be used” to identify an individual, and transformed it into a position from which, if identification is not reasonably likely, it will accept that data are anonymised. This cannot be right: there is a world of difference between a test which asks whether the possibility of identification is “extremely remote” and one which asks whether it is “reasonably likely”.
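To make the gap between those two thresholds concrete, here is a minimal sketch of one common way of reasoning about re-identification risk in a pseudonymised dataset: the size of the smallest group of records sharing the same quasi-identifiers (the “k” of k-anonymity). The field names and sample records are my own invention, not anything drawn from HES or the ICO’s investigation:

```python
from collections import Counter

# Hypothetical pseudonymised records: no names, but quasi-identifiers
# of the kind an episode-level health dataset might carry.
records = [
    {"year_of_birth": 1954, "postcode_district": "SW1A", "admission_month": "2013-02"},
    {"year_of_birth": 1954, "postcode_district": "SW1A", "admission_month": "2013-02"},
    {"year_of_birth": 1987, "postcode_district": "LS6",  "admission_month": "2013-03"},
]

def smallest_equivalence_class(rows, quasi_identifiers):
    """Return k, the size of the smallest group of records sharing the
    same quasi-identifier values. k == 1 means at least one record is
    unique on those fields, so linking it to another dataset (electoral
    roll, social media) could single out one individual."""
    groups = Counter(tuple(row[q] for q in quasi_identifiers) for row in rows)
    return min(groups.values())

k = smallest_equivalence_class(
    records, ["year_of_birth", "postcode_district", "admission_month"]
)
print(f"k = {k}")  # k = 1 here: the 1987/LS6 record is unique in this toy data
```

Whether a unique record makes identification “reasonably likely”, or leaves it “extremely remote”, depends on what other data a motivated intruder holds – which is precisely why the choice between the two tests matters.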

I do not have a specific right to a review of the section 42 assessment decision that the processing of my personal data was likely in compliance with NHSIC’s obligations under the DPA, but I have asked for one. I am aware of course that others complained (après moi, le déluge), notably, in March, FIPR, MedConfidential and Big Brother Watch. I suspect they will also be pursuing this.

In October this year I attended an event at which the ICO’s Iain Bourne spoke. Iain was a key figure in the drawing up of the ICO’s Anonymisation Code, and I took the rather cheeky opportunity to ask about the HES investigations. He said that his initial view was that NHSIC had been performing good anonymisation practice. This reassured me at the time, but now, after considering this question of whether the Anonymisation Code (and the ICO) adopts the wrong test on the risks of identification, I am less reassured. Maybe “reasonably likely that an individual can be identified” is an appropriate test for determining when data is no longer anonymised, and becomes personal data, but it does not seem to me that the authorities support it.

Postscript Back in August of this year I alerted the ICO to the fact that a local authority had published open data sets which enabled individuals to be identified (for instance, social care and housing clients). More than four months later the data is still up (despite the ICO saying they would raise the issue with the council): is this perhaps because the council has argued that identification is not “reasonably likely”?

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.


Filed under anonymisation, care.data, Data Protection, Directive 95/46/EC, Information Commissioner, NHS

Russell Brand and the domestic purposes exemption in the Data Protection Act

Was a now-deleted tweet by Russell Brand, revealing a journalist’s private number, caught by data protection law?

Data protection law applies to anyone who “processes” (which includes “disclosure…by transmission”) “personal data” (data relating to an identifiable living individual) as a “data controller” (the person who determines the purposes for which and the manner in which the processing occurs). Rather dramatically, in strict terms, this means that most individuals actually and regularly process personal data as data controllers. And nearly everyone would be caught by the obligations under the Data Protection Act 1998 (DPA), were it not for the exemption at section 36. This provides that

Personal data processed by an individual only for the purposes of that individual’s personal, family or household affairs (including recreational purposes) are exempt from the data protection principles and the provisions of Parts II and III

Data protection nerds will spot that exemption from the data protection principles and Parts II and III of the DPA is effectively an exemption from the whole Act. So in general terms individuals who restrict their processing of personal data to domestic purposes are outwith the DPA’s ambit.

The extent of this exemption in terms of publication of information on the internet is subject to some disagreement. On one side is the Information Commissioner’s Office (ICO) who say in their guidance that it applies when an individual uses an online forum purely for domestic purposes, and on the other side are the Court of Justice of the European Union (and me) who said in the 2003 Lindqvist case that

The act of referring, on an internet page, to various persons and identifying them by name or by other means, for instance by giving their telephone number…constitutes ‘the processing of personal data…[and] is not covered by any of the exceptions…in Article 3(2) of Directive 95/46 [section 36 of the DPA transposes Article 3(2) into domestic law]

Nonetheless, it is clear that publishing personal data on the internet for reasons not purely domestic constitutes an act of processing to which the DPA applies (let us assume that the act of publishing was a deliberate one, determined by the publisher). So when the comedian Russell Brand today decided to tweet a picture of a journalist’s business card, with an arrow pointing towards the journalist’s mobile phone number (which was not, for what it’s worth, already in the public domain – I checked with a Google search), he was processing that journalist’s personal data (note that data relating to an individual’s business life is still their personal data). Can he avail himself of the DPA domestic purposes exemption? No, says the CJEU, of course, following Lindqvist. But no, also, would surely say the ICO: this act by Brand was not purely domestic. Brand has 8.7 million Twitter followers – I have no doubt that some will have taken the tweet as an invitation to call the journalist. It is quite possible that some of those calls will be offensive, or abusive, or even threatening.

Whilst I have been drafting this blog post Brand has deleted the tweet: that is to his credit. But of course, when you have so many millions of followers, the damage is already done – the picture is saved to hard drives, is mirrored by other sites, is emailed around. And, I am sure, the journalist will have to change his number. Maybe not much harm will have been caused, but the tweet was nasty, and unfair (although I have no doubt Brand was provoked in some way). If it was unfair (and lacking a legal basis for the publication) it was in contravention of the first data protection principle, which requires that personal data be processed fairly and lawfully and with an appropriate legitimating condition. And because – as I submit – Brand cannot plead the domestic purposes exemption, it was in contravention of the DPA. However, whether the journalist will take any private action, and whether the ICO will take any enforcement action, I doubt.

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.


Filed under Data Protection, Directive 95/46/EC, Information Commissioner, journalism, social media

The Crown Estate and behavioural advertising

A new app for Regent Street shoppers will deliver targeted behavioural advertising – is it processing personal data?

My interest was piqued by a story in the Telegraph that

Regent Street is set to become the first shopping street in Europe to pioneer a mobile phone app which delivers personalised content to shoppers during their visit

Although this sounds like my idea of hell, it will no doubt appeal to some people. It appears that a series of Bluetooth beacons will deliver mobile content (for which, read “targeted behavioural advertising”) to the devices of users who have installed the Regent Street app. Users will indicate their shopping preferences, and a profile of them will be built by the app.

Electronic direct marketing in the UK is ordinarily subject to compliance with The Privacy and Electronic Communications (EC Directive) Regulations 2003 (“PECR”). However, the definition of “electronic mail” in PECR is “any text, voice, sound or image message sent over a public electronic communications network or in the recipient’s terminal equipment until it is collected by the recipient and includes messages sent using a short message service”. In 2007 the Information Commissioner, upon receipt of advice, changed his previous stance that Bluetooth marketing would be caught by PECR, to one under which it would not be caught, because Bluetooth does not involve a “public electronic communications network”. Nonetheless, general data protection law relating to consent to direct marketing will still apply, and the Direct Marketing Association says

Although Bluetooth is not considered to fall within the definition of electronic mail under the current PECR, in practice you should consider it to fall within the definition and obtain positive consent before using it

This reference to “positive consent” reflects the definition of consent in the Data Protection Directive, which says that it is

any freely given specific and informed indication of his wishes by which the data subject signifies his agreement to personal data relating to him being processed

And that word “informed” is where I start to have a possible problem with this app. Ever one for thoroughness, I decided to download it, to see what sort of privacy information it provided. There wasn’t much, but in the Terms and Conditions (which don’t appear to be viewable until you download the app) it did say

The App will create a profile for you, known as an autoGraph™, based on information provided by you using the App. You will not be asked for any personal information (such as an email address or phone number) and your profile will not be shared with third parties

autograph (don’t forget the™) is software which, in its words “lets people realise their interests, helping marketers drive response rates”, and it does so by profiling its users

In under one minute without knowing your name, email address or any personally identifiable information, autograph can figure out 5500 dimensions about you – age, income, likes and dislikes – at over 90% accuracy, allowing businesses to serve what matters to you – offers, programs, music… almost anything

Privacy types might notice the jarring words in that blurb. Apparently the software can quickly “figure out” thousands of potential identifiers about a user, without knowing “any personally identifiable information”. To me, that’s effectively saying “we will create a personally identifiable profile of you, without using any personally identifiable information”. The fact of the matter is that people’s likes, dislikes, preferences, choices etc (and does this app capture device information, such as IMEI?) can all be used to build up a picture which renders them identifiable. It is trite law that “personal data” is data which relate to a living individual who can be identified from those data or from those data and other information which is in the possession of, or is likely to come into the possession of, the data controller. The Article 29 Working Party (made up of representatives from the data protection authorities of each EU member state) delivered an Opinion in 2010 on online behavioural advertising which stated that

behavioural advertising is based on the use of identifiers that enable the creation of very detailed user profiles which, in most cases, will be deemed personal data
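The arithmetic behind that view is worth spelling out. Even with no name or email address, a profile needs surprisingly few independent dimensions before it is likely to be unique among everyone on Earth – a back-of-the-envelope calculation (my own illustration, nothing to do with how autoGraph actually works) makes the point:

```python
import math

world_population = 7.2e9  # roughly, at the time of writing

# Each independent yes/no attribute roughly halves the pool of people
# who match a profile. How many such attributes before the expected
# number of matches falls below one, i.e. the profile is unique?
bits_needed = math.log2(world_population)
print(f"{bits_needed:.1f} independent binary attributes suffice")  # ~32.7

# Expected number of people still matching after 33 attributes:
remaining = world_population / 2 ** 33
print(f"expected matches after 33 attributes: {remaining:.2f}")  # ~0.84
```

If around 33 truly independent yes/no answers can in principle single out one person in the whole world, software claiming 5,500 dimensions of likes, dislikes and demographics has identifying power to spare, even though none of those dimensions is a name or a phone number.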

If this app is, indeed, processing personal data, then I would suggest that the limited Terms and Conditions (which users are not even pointed to when they download the app, let alone invited to agree to) are inadequate to mean that a user is freely giving specific and informed consent to the processing. And if the app is processing personal data to deliver electronic marketing, failure to comply with PECR might not matter, but failure to comply with the Data Protection Act 1998 brings potential exposure to legal claims and enforcement action.

The Information Commissioner last year produced good guidance on Privacy in Mobile Apps which states that

Users of your app must be properly informed about what will happen to their personal data if they install and use the app. This is part of Principle 1 in the DPA which states that “Personal data shall be processed fairly and lawfully”. For processing to be fair, the user must have suitable information about the processing and they must be told about the purposes

The relevant data controller for Regent Street Online happens to be The Crown Estate. On the day that the Queen sent her first tweet, it is interesting to consider the extent to which her own property company are in compliance with their obligations under privacy laws.

This post has been edited as a result of comments on the original, which highlighted that PECR does not, in strict terms, apply to Bluetooth marketing


Filed under consent, Data Protection, Directive 95/46/EC, Information Commissioner, marketing, PECR, Privacy, tracking

Dancing to the beat of the Google drum

With rather wearying predictability, certain parts of the media are in uproar about the removal by Google of search results linking to a positive article about a young artist. Roy Greenslade, in the Guardian, writes

The Worcester News has been the victim of one of the more bizarre examples of the European court’s so-called “right to be forgotten” ruling.

The paper was told by Google that it was removing from its search archive an article in praise of a young artist.

Yes, you read that correctly. A positive story published five years ago about Dan Roach, who was then on the verge of gaining a degree in fine art, had to be taken down.

Although no one knows who made the request to Google, it is presumed to be the artist himself, as he had previously asked the paper itself to remove the piece, on the basis that he felt it didn’t reflect the work he is producing now. But there is a bigger story here, and in my opinion it’s one of Google selling itself as an unwilling censor, and of the media uncritically buying it.

Firstly, Google had no obligation to remove the results. The judgment of the Court of Justice of the European Union (CJEU) in the Google Spain case was controversial, and problematic, but its effect was certainly not to oblige a search engine to respond to a takedown request without considering whether it has a legal obligation to do so. What it did say was that, although as a rule data subjects’ rights to removal override the interest of the general public having access to the information delivered by a search query, there may be particular reasons why the balance might go the other way.

Furthermore, even if the artist here had a legitimate complaint that the results constituted his personal data, and that the continued processing by Google was inadequate, inaccurate, excessive or continuing for longer than was necessary (none of which, I would submit, would actually be likely to apply in this case), Google could simply refuse to comply with the takedown request. At that point, the requester would be left with two options: sue, or complain to the Information Commissioner’s Office (ICO). The former option is an interesting one (and I wonder if any such small claims cases will be brought in the County Court) but I think in the majority of cases people will be likely to take the latter. However, if the ICO receives a complaint, it appears that the first thing it is likely to do is refer the person to the publisher of the information in question. In a blog post in August the Deputy Commissioner David Smith said

We’re about to update our website* with advice on when an individual should complain to us, what they need to tell us and how, in some cases, they might be better off pursuing their complaint with the original publisher and not just the search engine [emphasis added]

This is in line with their new approach to handling complaints by data subjects – which is effectively telling them to go off and resolve it with the data controller in the first place.

Even if the complaint does make its way to an ICO case officer, what that officer will be doing is assessing – pursuant to section 42 of the Data Protection Act 1998 (DPA) – “whether it is likely or unlikely that the processing has been or is being carried out in compliance with the provisions of [the DPA]”. What the ICO is not doing is determining an appeal. An assessment of “compliance not likely” is no more than that – it does not oblige the data controller to take action (although it may be accompanied by recommendations). An assessment of “compliance likely”, moreover, leaves an aggrieved data subject with no other option but to attempt to sue the data controller. Contrary to what Information Commissioner Christopher Graham said at the recent Rewriting History debate, there is no right of appeal to the Information Tribunal in these circumstances.

Of course the ICO could, in addition to making a “compliance not likely” assessment, serve Google with an enforcement notice under section 40 DPA requiring them to remove the results. An enforcement notice does have proper legal force, and it is a criminal offence not to comply with one. But they are rare creatures. If the ICO does ever serve one on Google things will get interesting, but let’s not hold our breath.

So, simply refusing to take down the results would, certainly in the short term, cause Google no trouble, nor attract any sanction.

Secondly (sorry, that was a long “firstly”) Google appear to have notified the paper of the takedown, in the same way they notified various journalists of takedowns of their pieces back in June this year (with, again, the predictable result that the journalists were outraged, and republicised the apparently taken down information). The ICO has identified that this practice by Google may in itself constitute unfair and unlawful processing: David Smith says

We can certainly see an argument for informing publishers that a link to their content has been taken down. However, in some cases, informing the publisher has led to the complained about information being republished, while in other cases results that are taken down will link to content that is far from legitimate – for example to hate sites of various sorts. In cases like that we can see why informing the content publisher could exacerbate an already difficult situation and could in itself have a very detrimental effect on the complainant’s privacy

Google is a huge and hugely rich organisation. It appears to be trying to chip away at the CJEU judgment by making it look ridiculous. And in doing so it is cleverly using the media to help portray it as a passive actor – victim, along with the media, of censorship. As I’ve written previously, Google is anything but passive – it has algorithms which prioritise certain results above others, for commercial reasons, and it will readily remove search results upon receipt of claims that the links are to copyright material. Those elements of the media who are expressing outrage at the spurious removal of links might take a moment to reflect whether Google is really as interested in freedom of expression as they are, and, if not, why it is acting as it is.

*At the time of writing this advice does not appear to have been made available on the ICO website.


Filed under Data Protection, Directive 95/46/EC, enforcement, Information Commissioner, Privacy

ICO indicates that (non-recreational) bloggers must register with them

I think I am liable to register with the ICO, and so are countless others. But I also think this means there needs to be a debate about what this, and future plans for levying a fee on data controllers, mean for freedom of expression

Recently I wrote about whether I, as a blogger, had a legal obligation to register with the Information Commissioner’s Office (ICO) the fact that I was processing personal data (and the purposes for which it was processed). As I said at the time, I asked the ICO whether I had such an obligation, and they said

from the information you have provided it would be unlikely that you would be required to register in respect of your blogs and tweets

However, I asked them for clarification on this point. I noted that I couldn’t see any exemption from the obligation to register, unless it was the general exemption (at section 36) from the Data Protection Act 1998 (DPA) where the processing is only for “domestic purposes”, which include “recreational purposes”. I noted that, as someone writing a semi-professional blog, I could hardly rely on the fact I do this only for recreational purposes. The ICO’s reply is illuminating

if you were blogging only for your own recreational purposes, it would be unlikely that you would need to register as a data controller. However, you have explained that your blogging is not just for recreational purposes. If you are sharing your views in order to further some other purpose, and this is likely to impact on third parties, then you should consider registering.

I know this is couched in rather vague terms – “if”…”likely”…”consider” – but it certainly suggests that merely being a non-professional blogger does not exempt me from having to register with a statutory regulator.

Those paying careful attention might understand the implications of this: millions of people every day share their views online, in order to further some purpose, in a way that “is likely to impact on third parties”. When poor Bodil Lindqvist got convicted in the Swedish courts in 2003 that is just what she was doing, and the Court of Justice of the European Union held that, under the European Data Protection Directive, she was processing personal data as a data controller, and consequently had legal obligations under data protection law to process data fairly, i.e. by not writing about a fellow churchgoer’s broken leg etc. without informing them/giving them an opportunity to object.

And there, in my last paragraph, you have an example of me processing personal data – I have published (i.e. processed) sensitive (i.e. criminal conviction) personal data (i.e. of an identifiable individual). I am a data controller. Surely I have to register with the ICO? Section 17 of the DPA says that personal data must not be processed unless an entry in respect of the data controller is included in the register maintained by the ICO, unless an exemption applies. The “domestic purposes” exemption doesn’t wash – the ICO has confirmed that [1] – and none of the other exemptions apply. I have to register.

But if I have to register (and I will, because if I continue to process personal data without a registration I am potentially committing a criminal offence) then so, surely, do the millions of other people throughout the country, and throughout the jurisdiction of the data protection directive, who publish personal data on the internet not solely for recreational purposes – all the citizen bloggers, campaigning tweeters, community facebookers and many, many others…

To single people out would be unfair, so I’m not going to identify individuals who I think potentially fall into these categories, with the following exception. In 2011 Barnet Council was roundly ridiculed for complaining to the ICO about the activities of a blogger who regularly criticised the council and its staff on his blog [2]. The Council asked the ICO to determine whether the blogger in question had failed in his legal obligation to register with the ICO in order to legitimise his processing of personal data. The ICO’s response was

If the ICO were to take the approach of requiring all individuals running a blog to notify as a data controller … it would lead to a situation where the ICO is expected to rule on what is acceptable for one individual to say about another. Requiring all bloggers to register with this office and comply with the parts of the DPA exempted under Section 36 (of the Act) would, in our view, have a hugely disproportionate impact on freedom of expression.

But the ICO was subsequently taken to task in the High Court (albeit in unrelated proceedings) over this general stance about being “expected to rule on what is acceptable for one individual to say about another”, with the judge saying

I do not find it possible to reconcile the views on the law expressed [by the ICO] with authoritative statements of the law. The DPA does envisage that the Information Commissioner should consider what it is acceptable for one individual to say about another, because the First Data Protection Principle requires that data should be processed lawfully

And if the ICO now accepts that at least those bloggers (like the one in the Barnet case) who are not blogging solely for recreational purposes might be required to register, that possibly indicates a fundamental change.

In response to my last blog post on this subject someone asked “why ruffle feathers?”. But I think this should lead to a societal debate: is it an unacceptable infringement of the principles of freedom of expression for the law to require registration with a state regulator before one can share one’s (non-recreational) views about individuals online? Or is it necessary for this legal restraint to be in place, to seek to protect individuals’ privacy rights?

European data protection reforms propose the removal of the general obligation for a data controller to register with a data protection authority, but in the UK proposals are being made (because of the loss of ICO fee income that would come with this removal) that there be a levy on data controllers.

If such proposals come into effect it is profoundly important that there is indeed a debate about the terms on which the levy is made – or else we could all end up being liable to pay a tax to allow us to talk online.

[1] On a strict reading of the law, and the CJEU judgment in Lindqvist, the distinction between recreational and non-recreational expressions online does not exist, and any online expression about an identifiable individual would constitute processing of personal data. The “recreational” distinction does not exist in the data protection directive, and is solely a domestic provision.

[2] A confession: I joined in the ridicule, but was disabused of my error by the much better-informed Tim Turner. Not that I don’t think the Council’s actions were ill-judged.



Filed under Data Protection, Directive 95/46/EC, Information Commissioner, social media