Category Archives: Data Protection

We’re looking into it

The news is awash with reports that the UK Information Commissioner’s Office (ICO) is “opening an investigation” into Facebook’s rather creepy research experiment, in conjunction with US universities, in which it apparently altered the users’ news feeds to elicit either positive or negative emotional responses. Thus, the BBC says “Facebook faces UK probe over emotion study”, SC Magazine says “ICO probes Facebook data privacy” and the Financial Times says “UK data regulator probes Facebook over psychological experiment”.

As well as prompting one to question some journalists’ obsession with probes, this also leads one to look at the basis for these stories. It appears to lie in a quote from an ICO spokesman, given (I think originally) to the online IT news outlet The Register:

The Register asked the office of the UK’s Information Commissioner if it planned to probe Facebook following widespread criticism of its motives.

“We’re aware of this issue, and will be speaking to Facebook, as well as liaising with the Irish data protection authority, to learn more about the circumstances,” a spokesman told us.
So, the ICO is aware of the issue and will be speaking to Facebook and to the Irish Data Protection Commissioner’s office. This doesn’t quite match up to the rather hyperbolic news headlines. And there’s a good reason for this – the ICO is highly unlikely to have any power to investigate, let alone take action. Facebook, along with many other tech/social media companies, has its non-US headquarters in Ireland. This is partly for taxation reasons and partly because of access to highly skilled, relatively low-cost labour. However, some companies – Facebook is one, LinkedIn another – have a further reason, evidenced by the legal agreements that users enter into: because the agreement is with “Facebook Ireland”, Ireland is deemed to be the relevant jurisdiction for data protection purposes. And, fairly or not, the Irish data protection regime is generally perceived to be relatively “friendly” towards business.
 
These jurisdictional issues are by no means clear-cut – in 2013 a German data protection authority tried to exercise powers to stop Facebook imposing a “real name only” policy.
 
Furthermore, as the Court of Justice of the European Union recognised in the recent Google Spain case, the issue of territorial responsibilities and jurisdiction can be highly complex. The Court held there that, as Google had
 
[set] up in a Member State a branch or subsidiary which is intended to promote and sell advertising space offered by that engine and which orientates its activity towards the inhabitants of that Member State
 
it was processing personal data in that Member State (Spain). Facebook does have a large UK corporate office with some responsibility for sales. It is just possible that this could give the ICO, as domestic data protection authority, some power to investigate. And if or when the draft European General Data Protection Regulation gets passed, fundamental shifts could take place, extending even, under Article 3(2) to bringing data controllers outside the EU within jurisdiction, where they are offering goods or services to (or monitoring) data subjects in the EU.
 
But the question here is really whether the ICO will assert any purported power to investigate, when the Irish DPC is much more clearly placed to do so (albeit with terribly limited resources). I think it’s highly unlikely, despite all the media reports. In fact, if the ICO does investigate, and it leads to any sort of enforcement action, I will eat my hat*.
 
*I reserve the right to specify what sort of hat

Leave a comment

Filed under Data Protection, Directive 95/46/EC, enforcement, facebook, journalism, social media, Uncategorized

A green light for publishing FOI requesters’ names? I hope not

The Information Commissioner’s Office (ICO) today issued a statement about the data protection implications of public authorities publishing the names of people who have made requests under the Freedom of Information Act 2000 (FOIA). It was issued to journalist Jules Mattsson (it may have been issued to others) and I credit him for pursuing it. It arose out of concerns expressed on Twitter yesterday that a council had uploaded a disclosure log in which the names of requesters were unredacted*.

When the Justice Committee undertook its post-legislative scrutiny of FOIA in 2012 it made a recommendation (¶82) that names of requesters be published in disclosure logs

it can be argued that someone seeking to exercise freedom of information rights should be willing for the fact they have requested such information to be in the public domain; we therefore recommend that where the information released from FOI requests is published in a disclosure log, the name of the requestor should be published alongside it

But this was rejected by the government in its response to the report (¶25)

The Government does not share the view that publishing the names of requesters in disclosure logs would be beneficial in terms of burdens. Such a move would have implications for the data protection of requesters…

Tim Turner blogged in his usual meticulous style on these data protection implications yesterday, and I am not going to rehearse the points he makes. Indeed, the ICO in its statement more or less agrees with Tim’s comments on fairness and necessity when it comes to the publication of requesters’ names

Individuals who make…requests must have their details handled fairly. Many people who have made a request would not expect to have their name linked to published details of the request they have made. If a public authority is considering publishing this information then they must consider why publishing the requester’s name is necessary. While there is a need for authorities to be transparent about the [FOI] process, in most cases this would not extend to releasing people’s name simply to deter requesters

There then follow some (correct) observations that journalists and politicians might have different expectations, before the statement says

At the very least people should be told that their details will be published and given the opportunity to explain to the council why their name should not be disclosed. If having raised it with the authority a person is not happy with the way their details have been handled then we may be able to help

So what the ICO appears to be doing is agreeing that there are data protection implications, but, as long as authorities give requesters a privacy notice, announcing that they’re not going to do anything (unless people complain). It’s not often I take issue with the excellent Matt Burgess, who runs FOI Directory, but he claims that “the ICO has criticised the Council”. With respect, I don’t see any targeted criticism in the ICO’s statement, and I fear some public authorities will see it as a green light to publishing names.

A source informs me that an ICO spokesman has said that they are going to be in touch with the council in question, to find out the full details. The statement may, however, simply reflect the ICO’s new, largely reactive (as opposed to proactive) approach to data protection concerns (described on my blog by Dr David Erdos as having worrying implications for the rule of law), and I fear it risks the exposure of the personal data of large numbers of people exercising their right to information under a statutory scheme which, at heart, is meant to be applicant-blind. As the ICO implies, this could have the effect of deterring some requesters, and this would be, in the words of the always perceptive Rich Greenhill, a type of reverse chilling effect for FOIA.

*I’m not going to link to the information: I don’t think its publication is fair.

UPDATE: 05.07.14

The Council appears to have taken the information down, with Jules Mattsson reporting on 3 July that they are reviewing the publication of requesters’ names.

6 Comments

Filed under Data Protection, Freedom of Information, Information Commissioner

I DON’T KNOW WHAT I’M DOING

As surprising as it always is to me, I’m occasionally reminded that I don’t know everything. But when I’m shown not to know how my own website works, it’s more humbling.

A commenter on one of my blog posts recently pointed out the number of tracking applications which were in operation. I had no idea. (I’ve disabled (most of) them now).

And someone has just pointed out (and some others have confirmed) that, when visiting my blog on their iPhone, it asks them whether they want to tell me their current location. I have no idea why. (I’m looking into it).

These two incidents illustrate a few things to me.

Firstly, for all my pontificating about data protection, and – sometimes – information security, I’m not particularly technically literate: this is a wordpress.com blog, which is the off-the-peg version, with lots of things embedded/enabled by default. Ideally, I would run and host my own site, but I do this entirely in my own time, with no funding at all.

Secondly, and following on from the first, I am one among billions of people who run web applications without knowing a great deal about the code that they’re based on. In a world of (possibly deliberately coded) back-door and zero-day vulnerabilities this isn’t that surprising. If even experts can be duped, what hope for the rest of us?

Thirdly, and more prosaically, I had naively assumed that, in inviting people to read and interact with my blog, I was doing so in the capacity of a data controller: determining the purposes for which and the manner in which their personal data was to be processed. (I had even considered notifying the processing with the Information Commissioner, although I know that they would (wrongly) consider I was exempt under section 36 of the Data Protection Act 1998.) But if I don’t even know what my site is doing, in what way can I be said to determine the data processing purposes and manner? And if I can’t, then should I stop doing it? I don’t like to be nominally responsible for activities I can’t control.

Fourthly, and finally, can anyone tell me why my out-of-control blog is asking users to give me their location, and how I can turn the damned thing off?

UPDATE: 30.06.14

The consensus from lots and lots of helpful and much-appreciated comments seems to be a) that this location thingy is embedded in the wordpress software (maybe the theme software), and b) I should migrate to self-hosting.

The latter option sounds good, but I have to remind people that I DON’T KNOW WHAT I’M DOING.

UPDATE:05.07.14

The rather excellent Rich Greenhill seems to have identified the problem (I trust his judgement, but haven’t confirmed this). He says “WordPress inserts mobile-only getCurrentPosition from aka-cdn-nsDOTadtechusDOTcom/…DAC.js via adsDOTmopubDOTcom in WP ad script”…”Basically, WordPress inserts ads; but, for mobile devices only, the imported ad code also attempts to detect geo coordinates”.

So it does look like I, and other wordpress.com bloggers who can’t afford the “no ads” option, are stuck with this unless or until we can migrate away.
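
Rich Greenhill’s description can be sketched in code. This is a hypothetical illustration only – the function names and user-agent patterns are my invention, not the actual WordPress or ad-network script – but it shows how imported ad code might ask only mobile visitors for their coordinates via the Geolocation API:

```typescript
// Hypothetical sketch of the behaviour described above: an ad script that
// requests geolocation, but only from browsers that look mobile.
// Names and patterns are illustrative, not the real WordPress/ad code.

function isMobileUA(ua: string): boolean {
  // The crude user-agent sniff many ad scripts use to detect mobile devices
  return /iPhone|iPad|iPod|Android|Mobile/i.test(ua);
}

function maybeRequestLocation(ua: string): boolean {
  // navigator.geolocation only exists in a browser; guard so this is a
  // no-op anywhere else
  const nav = (globalThis as any).navigator;
  if (!isMobileUA(ua) || !nav?.geolocation) {
    return false; // desktop visitor, or no Geolocation API: no prompt
  }
  // This call is what triggers the browser's "share your location?" prompt
  nav.geolocation.getCurrentPosition(
    (pos: any) => {
      // In the scenario described, coordinates would be attached to the
      // ad request for geo-targeting
      console.log(pos.coords.latitude, pos.coords.longitude);
    },
    () => {
      // Visitor declined: the ad is simply served without location data
    }
  );
  return true;
}
```

The data protection point is that the prompt comes from third-party code the blogger never chose, saw, or could configure.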

UPDATE: 11.07.14

We are informed that the code which asks (some) mobile users for their location when browsing this blog has now been corrected. Please let me know if it isn’t.

3 Comments

Filed under Data Protection, Information Commissioner, Personal, social media, tracking

Google is not a library, Dr Cavoukian

The outgoing Ontario Information and Privacy Commissioner Ann Cavoukian, whose time in office has been hugely, and globally, influential (see in particular Privacy by Design) has co-written (with Christopher Wolf) an article strongly criticising the judgment of the Court of Justice of the European Union (CJEU) in the Google Spain case.

For anyone who has been in the wilderness for the last few weeks: in Google Spain the CJEU ruled that Google Spain, as a subsidiary of Google Inc. operating on Spanish territory, was covered by the obligations of the European Data Protection Directive 95/46/EC; that it was operating as an entity that processed personal data in the capacity of a data controller; and that it was accordingly required to consider applications from data subjects for removal of search returns. Thus, what is loosely called a “right to be forgotten” is seen already to exist in the current data protection regime.

Many have written on this landmark CJEU ruling (I commend in particular Dr David Erdos’s take, on the UK Constitutional Law Blog) and I am not here going to go into any great detail, but what I did take issue with in the Cavoukian and Wolf piece was the figurative comparison of Google with a public library:

A man walks into a library. He asks to see the librarian. He tells the librarian there is a book on the shelves of the library that contains truthful, historical information about his past conduct, but he says he is a changed man now and the book is no longer relevant. He insists that any reference in the library’s card catalog and electronic indexing system associating him with the book be removed, or he will go to the authorities…

…The government agent threatens to fine or jail the librarian if he does not comply with the man’s request to remove the reference to the unflattering book in the library’s indexing system.

Is this a scenario out of George Orwell’s Nineteen Eighty-Four? No, this is the logical extension of a recent ruling from Europe’s highest court

(I pause briefly to say that if I never see another reference to Orwell in the context of privacy debate I will die a happy man).

I’m fond of analogies, but Cavoukian and Wolf’s (or maybe it’s a metaphor?) is facile. I think it could more accurately say

A man walks into a library. He sees that, once again, the library has chosen, because of how it organises its profit-making activities, to give great prominence to a book which contains information about his past conduct, which is no longer relevant, and which it is unfair to highlight. He asks them to give less prominence to it.

Cavoukian and Wolf accept that there should be a right to remove “illegal defamatory” content if someone posts it online, but feel that the issue of links to “unflattering, but accurate” information should be explored using “other solutions”. (I pause again to note that “unflattering” is an odd and loaded word to use here: Mr Gonzalez, in the Google Spain case, was concerned about out-of-date information about bankruptcy, and other people who might want to exercise a right to removal of links might be concerned by much worse than “unflattering” information).

I don’t disagree that other solutions should be explored to the issue of the persistence or reemergence of old information which data subjects reasonably no longer wish to be known, but people are entitled to use the laws which exist to pursue their aims, and the application by the CJEU of data protection law to the issues pleaded was, to an extent, uncontroversial (is Google a data controller? if it is, what are its obligations to respect a request to desist from processing?)

Cavoukian and Wolf criticise the CJEU for failing to provide sufficient instruction on how “the right to be forgotten” should be applied, and for failing to consider whether “online actors other than search engines have a duty to ‘scrub’ the Internet of unflattering yet truthful facts”, but a court can only consider the issues pleaded before it, and these weren’t. Where I do agree with them is in their criticism of the apparent failure by the CJEU, when giving effect to the privacy rights in Article 8 of the European Convention on Human Rights, and Articles 7 and 8 of the Charter of Fundamental Rights of the European Union, to consider adequately, if at all, the countervailing rights to freedom of expression in Article 10 of the former and Article 11 of the latter. In this respect, the prior Opinion of the Advocate General was perhaps to be preferred.

The key word in my replacement library analogy above is “chosen”. Google is not a passive and inert indexing system. Rather, it is a dynamic and commercially-driven system which uses complex algorithms to determine which results appear against which search terms. It already exercises editorial control over results, and will remove some which it is satisfied are clearly unlawful or which constitute civil wrongs such as breach of copyright. Is it so wrong that (if it gives appropriate weight to the (sometimes) competing considerations of privacy and freedom of expression) it should be required to consider a request to remove unfair and outdated private information?

2 Comments

Filed under Data Protection, Directive 95/46/EC, Europe, human rights, Privacy

The Partridge Review reveals apparently huge data protection breaches

Does the Partridge Review of NHS transfers of hospital episode patient data point towards one of the biggest DPA breaches ever?

In February this year Tim Kelsey, NHS England’s National Director for Patients and Information, and vocal cheerleader for the care.data initiative, assured the public, in an interview on the Radio 4 Today programme, that in the twenty five years that Hospital Episode Statistics (HES) have been shared with other organisations

the management of the hospital episode database…there has never been a single example of that data being compromised, the privacy of patients being compromised…

When pressed by medConfidential‘s Phil Booth about this, and about risks of reidentification from the datasets, Tim repeated that no patient’s privacy had been compromised.

Some of us doubted this, as news of specific incidents of data loss emerged, and even more so as further news emerged suggesting that there had been transfers (a.k.a. sale) of huge amounts of potentially identifiable patient data to, for instance, the Institute and Faculty of Actuaries. The latter news led me to ask the Information Commissioner’s Office (ICO) to assess the lawfulness of this processing, an assessment which has not been completed four months later.

However, with the publication on 17 June of Sir Nick Partridge’s Review of Data Releases by the NHS Information Centre, one questions the basis for Tim’s assertions. Sir Nick commissioned PwC to analyse a total of 3059 data releases between 2005 and 2013 (when the NHS Information Centre (NHSIC) ceased to exist and was replaced by the Health and Social Care Information Centre (HSCIC)). The summary report to the Review says that

It disappoints me to report that the review has discovered lapses in the strict arrangements that were supposed to be in place to ensure that people’s personal data would never be used improperly

and it reveals a series of concerning and serious failures of data governance, including

  • lack of detailed records between 1 April 2005 and 31 March 2009
  • two cases of data that was apparently released without a proper record remaining of which organisation received the data
  • [no] evidence that Northgate [the NHSIC contractor responsible for releases] got permission from the NHS IC before making releases as it was supposed to do
  • PwC could not find records to confirm full compliance in about 10% of the sample

Sir Nick observes that

 the system did not have the checks and balances needed to ensure that the appropriate authority was always in place before data was released. In many cases the decision making process was unclear and the records of decisions are incomplete.

and crucially

It also seems clear that the responsibilities of becoming a data controller, something that happens as soon as an organisation receives data under a data sharing agreement, were not always clear to those who received data. The importance of data controllers understanding their responsibilities remains vital to the protection of people’s confidentiality

(This resonates with my concern, in my request to the ICO to assess the transfer of data from HES to the actuarial society, about what the legal basis was for the latter’s processing).

Notably, Sir Nick dispenses with the idea that data such as HES was anonymised:

The data provided to these other organisations under data sharing agreements is not anonymised. Although names and addresses are normally removed, it is possible that the identity of individuals may be deduced if the data is linked to other data

And if it was not anonymised, then the Data Protection Act 1998 (DPA) is engaged.
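
Sir Nick’s point about linkage is easy to demonstrate. What follows is a toy illustration with invented data and field names (it is not based on the actual HES schema): stripping names and addresses does not prevent re-identification if the remaining fields can be joined against another dataset that does carry names.

```typescript
// Invented toy data: a "de-identified" release still carrying
// quasi-identifiers (postcode, date of birth, sex)...
interface Episode { postcode: string; dob: string; sex: string; diagnosis: string; }
interface RollEntry { postcode: string; dob: string; sex: string; name: string; }

const released: Episode[] = [
  { postcode: "LS1 4DY", dob: "1965-03-02", sex: "M", diagnosis: "fracture" },
  { postcode: "SW1A 1AA", dob: "1980-11-20", sex: "F", diagnosis: "asthma" },
];

// ...and a second, openly available dataset (say, an electoral-roll extract)
const roll: RollEntry[] = [
  { postcode: "LS1 4DY", dob: "1965-03-02", sex: "M", name: "John Smith" },
];

// Joining on the shared quasi-identifiers re-attaches a name to a record
// that was supposedly anonymised
function reidentify(episodes: Episode[], register: RollEntry[]): string[] {
  const matches: string[] = [];
  for (const e of episodes) {
    for (const r of register) {
      if (r.postcode === e.postcode && r.dob === e.dob && r.sex === e.sex) {
        matches.push(`${r.name} -> ${e.diagnosis}`);
      }
    }
  }
  return matches;
}
```

On this toy data, `reidentify(released, roll)` re-attaches John Smith to his hospital episode, which is precisely why “names and addresses are normally removed” falls well short of anonymisation.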

All of this indicates a failure to take appropriate technical and organisational measures against unauthorised or unlawful processing of personal data, which the perspicacious among you will identify as one of the key statutory obligations placed on data controllers by the seventh data protection principle in the DPA.

Sir Nick may say

It is a matter of fact that no individual ever complained that their confidentiality had been breached as a result of data being shared or lost by the NHS IC

but simply because no complaint was made (at the time – complaints certainly have been made since concerns started to be raised) does not mean that the seventh principle was not contravened, in a serious way. And a serious contravention of the DPA of a kind likely to cause substantial damage or substantial distress can potentially lead to the ICO serving a monetary penalty notice (MPN) to a maximum of £500,000 (at least for contraventions after April 2010, when the ICO’s powers commenced).

The NHSIC is no more (although, as Sir Nick says, HSCIC “inherited many of the NHS IC’s staff and procedures”). But that has not stopped the ICO serving MPNs on successor organisations in circumstances where their predecessors committed the contravention. One waits with interest to see whether the ICO will take any enforcement action, but I think it’s important that they consider doing so, because, even though Sir Nick makes nine very sensible recommendations to HSCIC, one could be forgiven – having been given clear assurances previously, by the likes of Tim Kelsey and others – for having reservations as to the future governance of our confidential medical data. I would suggest it is imperative that HSCIC know that their processing of personal data is now subject to close oversight by all relevant regulatory bodies.

2 Comments

Filed under care.data, Confidentiality, Data Protection, data sharing, Information Commissioner, monetary penalty notice, NHS, Privacy

A public interest test in the Data Protection Act?

Mr Justice Cranston has suggested that there is a public interest factor when considering whether disclosure of personal data would be “fair” processing. I’m not sure that is right.

The first data protection principle (DPP1) in Schedule 1 of the Data Protection Act 1998 (DPA) says that personal data must be processed “fairly” (and lawfully). But what does “fairly” mean?

In an interesting recent case (AB v A Chief Constable [2014] EWHC 1965 (QB)) the High Court determined that, on the very specific facts, it would not be fair, in terms of DPP1, and common law legitimate expectation, for a Chief Constable to send a second, non-standard, reference to the new employer of a senior police officer who was subject to disciplinary investigation. (The judgment merits close reading – this was by no means a statement of general principle about police references). The reason it would not be fair was because the officer in question had tendered his resignation upon the sending of the initial, anodyne, reference, and the force had terminated misconduct proceedings:

He was thus in the position that for the Force to send the second reference would most likely leave him without employment and without the opportunity to refute the gross misconduct allegations. In these special circumstances it would be a breach of the Data Protection Act 1998 and undermine his legitimate expectations for the second reference to be sent [¶94]

Something in particular struck me about the judge’s analysis of DPP1, although, given the outcome, it was not determinative. He rejected a submission from the claimant officer that the duty of fairness in the DPP1 and the European Data Protection Directive was a duty to be fair primarily to the data subject. Rather, correctly identifying that the privacy rights in the Directive and the DPA are grounded in article 8 of the European Convention on Human Rights and in general principles of EU law, he held that

The rights to private and family life in Article 8 are subject to the countervailing public interests set out in Article 8(2). So it is here: assessing fairness involves a balancing of the interests of the data subject in non-disclosure against the public interest in disclosure [¶75]

I am not sure this is right. Recital 28 of the Directive says

Whereas any processing of personal data must be lawful and fair to the individuals concerned [emphasis added]

and recital 38 suggests that whether processing is “fair” is in large part dependent on whether the data subject is made aware of the processing and the circumstances under which it takes place. These recitals give way to the descriptions in Articles 10 and 11 which both talk about “fair processing in respect of the data subject” (again, emphasis added). Similarly Part II of Schedule One to the DPA provides interpretation to DPP1, and says that in determining whether personal data are processed fairly

regard is to be had to the method by which they are obtained, including in particular whether any person from whom they are obtained is deceived or misled as to the purpose or purposes for which they are to be processed

Admittedly this introduces “any person”, which could be someone other than the data subject, but more general considerations of public interest are absent. It is also notable that the Information Commissioner’s position in guidance seems predicated solely on the belief that it is the data subject’s interests that are engaged in an analysis of “fairness”. The guidance does concede that processing might cause some detriment to the individual without being unfair, but I do not think this is the same as taking into account a public interest in disclosure.

To the extent that a public interest test does manifest itself in DPP1, it is normally held to be in the conditions in Schedules 2 and 3. DPP1 says that, in addition to the obligation to process personal data fairly and lawfully, a condition in Schedule 2 (and, for sensitive personal data, Schedule 3) must be met. Many of these conditions contain tests as to whether the processing is “necessary”, and that “necessity test” constitutes a proportionality test, as described by Latham LJ in Corporate Officer of the House of Commons v The Information Commissioner & Ors [2008] EWHC 1084 (Admin)

‘necessary’…should reflect the meaning attributed to it by the European Court of Human Rights when justifying an interference with a recognised right, namely that there should be a pressing social need and that the interference was both proportionate as to means and fairly balanced as to ends

To import a public interest test into the word “fairly” in DPP1 seems to me to be a potentially radical step, especially when disclosures of personal data under the Freedom of Information Act 2000 (FOIA) are being considered. As I say – I doubt that this is correct, but I would welcome any contrary (or concurring) opinions.

(By the way, I at first thought there was a more fundamental error in the judgment: the judge found that a rule of law was engaged which ordinarily would have required the Chief Constable to send the second reference:

the public law duty of honesty and integrity would ordinarily have demanded that the Chief Constable send the Regulatory Body something more than the anodyne reference about the claimant [¶93]

If a rule of law necessitates disclosure of personal data, then the exemption at section 35 DPA removes the requirement to process that data fairly and lawfully. However, I think the answer lies in the use of the word “ordinarily”: in this instance the doctrine of legitimate expectation (which the claimant could rely upon) meant that the public law duty to send the second reference didn’t apply. So section 35 DPA wasn’t engaged.)


7 Comments

Filed under Confidentiality, Data Protection, human rights, police

Virgin on the ridiculous

UPDATE 15.12.14: I think the comments on this piece take it further, and I do accept (as I did at the time, in fact) that the “password” in question was not likely to relate to customers’ accounts.
END UPDATE.

I got into a rather odd exchange over the weekend with the people running the Virgin Media twitter account. It began when, as is my wont, I was searching for tweets about “data protection” and noticed an exchange in which someone had asked Virgin Media whether their sales people rang customers and asked them to give their passwords. Virgin Media kindly appeared to confirm they did, and that

it’s for security as we can’t make any changes without data protection being passed

I asked for clarification, and this exchange ensued

[ME] Is it true your sales people call customers and ask for their account passwords? If so, are these unsolicited calls?

[VM] Yes this is true, our sales team would call and before entering your account, would need you to pass account security. I understand for your own security purposes why you wouldn’t feel great doing this, i’d be the same. If you give us a call on 150/03454541111 we can get this cleared up. Let me know how you get on

[ME] Thanks. Not a customer. Just interested in what seems like questionable practice being defended under guise of data protection

[VM] We contact our customers if there upgrade is due, or for a heath check on accounts, and a few other instances, but I get where your coming from [sic]

There’s nothing unlawful about this practice, and I assume that the accounts in question are service and not financial ones, but it doesn’t accord with normal industry practice. Moreover, one is warned often enough about the risks of phishing calls asking for account passwords. If a legitimate company requires or encourages its sales staff to do this, it adds to a culture of unnecessary risk. There are better ways of verifying identity, as their social media person seems to accept, when they say “I understand for your own security purposes why you wouldn’t feel great doing this, i’d be the same”.

One thing I’m certain about, though, is that it isn’t any part of “passing data protection” (unless they mean bypassing) to make outbound calls and ask for customer passwords.

On a final note, and in admiration of bare-faced cheek, I highlight the end of my exchange with Virgin Media

If you want, as your not a customer, you can check out our brill offers here [removed] maybe we could save you a few pounds?

That’s an offer I most certainly can refuse.

(By the way, as it’s an official Virgin Media account, I’ve taken what I was told on Twitter at face value. If I have misunderstood any of their policies on this I’d be happy to correct).

UPDATE:

Virgin Media’s Twitter account appears to have confirmed to me a) that they do ask for customers’ passwords on outbound sales calls, and b) that they see nothing wrong with it. And rather hilariously, they say that “we can discuss further” if I will “pop a few details” on their web form for social media enquiries. No thanks.

12 Comments

Filed under Data Protection, Let's Blame Data Protection, marketing, nuisance calls, PECR, social media

Ticking off Neelie Kroes (sort of)

In which I take issue with the European Commission V-P about what the Consumer Rights Directive says about pre-ticked boxes

I found myself retweeting what I think was a rather misleading message from the Vice-President of the European Commission, Neelie Kroes. Her tweet said

You know those annoying “pre-ticked boxes” on shopping/travel websites? They’re banned in #EU from today http://europa.eu/rapid/press-release_IP-14-655_en.htm#eCommerce

I thought this was very interesting, particularly in light of my recent post about consent to electronic marketing being implied when people forget to untick such boxes. The EU press release itself does say at one point

Under the new EU rules…consumers can now rely on…A ban on pre-ticked boxes on the internet, as for example when they buy plane tickets

But, it earlier says

The new rules also ban…pre-ticked boxes on websites *for charging additional payments* (for example when buying plane tickets online)

The emphasis I’ve added in that last quote is crucial. What DIRECTIVE 2011/83/EU OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL of 25 October 2011 on consumer rights actually proscribes is the contractual binding of a consumer to any payment in addition to the original remuneration agreed on if

the trader has not obtained the consumer’s express consent but has inferred it by using default options which the consumer is required to reject in order to avoid the additional payment

So, as the press release explains,

When shopping online –for example when buying a plane ticket – you may be offered additional options during the purchase process, such as travel insurance or car rental. These additional services may be offered through so-called pre-ticked boxes. Consumers are currently often forced to untick those boxes if they do not want these extra services. With the new Directive, pre-ticked boxes will be banned across the European Union.

I happen to think that that text should more properly say “With the new Directive, pre-ticked boxes of this sort will be banned across the European Union”.

So, no ban on pre-ticked boxes themselves, just on those which purport to bind a consumer to an additional payment under a contract.

The Directive has been implemented in the UK by The Consumer Contracts (Information, Cancellation and Additional Charges) Regulations 2013 and the associated Enterprise Act 2002 (Part 8 EU Infringements) Order 2013, the former of which says (at regulation 40)

Under a contract between a trader and a consumer, no payment is payable in addition to the remuneration agreed for the trader’s main obligation unless, before the consumer became bound by the contract, the trader obtained the consumer’s express consent. There is no express consent (if there would otherwise be) for the purposes of this paragraph if consent is inferred from the consumer not changing a default option (such as a pre-ticked box on a website)

Having said all this, I do think it is interesting that clearly-defined concepts of “express consent” are making their way into European and domestic legislation. And in due course, we may even find that, for instance, electronic marketing will be restrained unless similarly clearly-defined express consent is given. But not just yet.

Update: Ms Kroes kindly replied to me, saying it’s difficult to get a message across in 140 characters. So true.

Filed under Data Protection, Europe, marketing, PECR

Nominal damages give rise to distress compensation under the Data Protection Act – AB v Ministry of Justice

An award of nominal DPA damages in the High Court.

Whether, or in what circumstances, compensation may be awarded to a claimant who shows a contravention by a data controller of any of the requirements of the Data Protection Act 1998 (DPA), is a much-debated issue. It is also, occasionally, litigated. One key aspect is when compensation for distress might be awarded.

Section 13 of the DPA provides, so far as is relevant here, that

(1) An individual who suffers damage by reason of any contravention by a data controller of any of the requirements of this Act is entitled to compensation from the data controller for that damage.

(2) An individual who suffers distress by reason of any contravention by a data controller of any of the requirements of this Act is entitled to compensation from the data controller for that distress if—

(a) the individual also suffers damage by reason of the contravention

The general interpretation of this has been that compensation for distress, in the absence of pecuniary damage, is not available. The leading case on this is Johnson v The Medical Defence Union Ltd (2) [2006] EWHC 321 and on appeal Johnson v Medical Defence Union [2007] EWCA Civ 262, with Buxton LJ saying in the latter

section 13 distress damages are only available if damage in the sense of pecuniary loss has been suffered

However in allowing an appeal in Murray v Big Pictures (UK) Ltd [2008] EWCA Civ 446, and directing that the case go to trial, the Court of Appeal was prepared to consider a different view

It seems to us to be at least arguable that the judge [in the first instance] has construed ‘damage’ too narrowly, having regard to the fact that the purpose of the Act was to enact the provisions of the relevant Directive

But that case was ultimately settled before trial, and the issue left undecided.

Clearly, the decision in Johnson is potentially controversial, especially in cases (of which Johnson was not one) where the UK’s obligations under the European Data Protection Directive, and data subjects’ associated rights under the European Convention on Human Rights and the Charter of Fundamental Rights of the European Union, are taken into account. This much was recognised by Tugendhat J, in giving permission to the applicants in Vidal-Hall & Ors v Google Inc [2014] EWHC 13 (QB) to serve on Google Inc out of jurisdiction. He noted (¶83-104) academic statements on the issue, as well as the European Commission’s view that the UK DPA wrongly restricts “[t]he right to compensation for moral damage when personal information is used inappropriately”, and said

This is a controversial question of law in a developing area, and it is desirable that the facts should be found. It would therefore be the better course in the present case that I should not decide this question on this application.

I shall therefore not decide it. However, in case it is of any assistance in the future, my preliminary view of the question is that Mr Tomlinson’s submissions are to be preferred, and so that damage in s.13 does include non-pecuniary damage

This is a fascinating point, and detailed judicial consideration of it would be welcome (it may also be at issue in the impending case of Steinmetz v Global Witness Ltd). But, in the meantime, a question exists as to whether nominal pecuniary damage opens the door to awards for distress. In Johnson, the cost of a £10.50 breakfast had opened the door, but this was actual (if minor) damage. Last year, the Court of Appeal avoided having to decide the issue when the defendant conceded the point in Halliday v Creation Consumer Finance Ltd (CCF) [2013] EWCA Civ 333 (about which I blogged last year). However, in a very recent judgment, AB v Ministry of Justice [2014] EWHC 1847 (QB), which takes some wading through, Mr Justice Baker does appear to have proceeded on the basis that nominal damages do give rise to distress compensation.

The case involves an (anonymous) partner in a firm of solicitors who, as a result of events involving the coroner following his wife’s tragic death, made a series of subject access requests (under the provisions of section 7 DPA). The Ministry of Justice (MoJ) did not, it seems, necessarily handle these well, nor in accordance with their obligations under the DPA, and when it came to remedying these contraventions (which consisted of delayed responses) the judge awarded nominal damages of £1.00, before moving on to award £2250 for distress caused by the delays.

What is not clear from the judgment is to what extent the judge considered the MoJ’s submission that compensation for distress was only available if an individual has also suffered damage. The answer may lie in the fact that, although he awarded nominal damages, the judge accepted that AB had suffered (actual) damage but had “not sought to quantify his time or expense”. Query, therefore, whether this is a case of purely nominal damage.

One hopes that Vidal-Hall and Global Witness provide the occasion to determine these matters. One notes, however, the vigour with which both cases are being litigated by the parties: it may be some time before the issue is settled once and for all.

Filed under damages, Data Protection, Directive 95/46/EC, human rights

Piles of cash for claiming against spammers? I’m not so sure

I am not a lawyer, but I’m pretty certain that most commercial litigation strategies will be along the lines of “don’t waste lots of money fighting a low-value case which sets no precedent”. And I know it is a feature of such litigation that some companies will not even bother defending such cases, calculating that doing so will cost the company much more, with no other gain.

With this in mind, one notes the recent case of Sky News producer Roddy Mansfield. His employer itself reported (in a piece with a sub-heading “John Lewis is prosecuted…”, which is manifestly not the case – this was a civil matter) that

John Lewis has been ordered to pay damages for sending “spam” emails in a privacy ruling that could open the floodgates for harassed consumers.

Roddy Mansfield, who is a producer for Sky News, brought the case under EU legislation that prohibits businesses from sending marketing emails without consent

The case appears to have been brought under regulation 30 of The Privacy and Electronic Communications (EC Directive) Regulations 2003 (PECR). Those regulations, as the title suggests, give effect to the UK’s obligations under the snappily titled Directive 2002/58/EC of the European Parliament and of the Council of 12 July 2002 concerning the processing of personal data and the protection of privacy in the electronic communications sector. Regulation 30(1) of PECR provides that

A person who suffers damage by reason of any contravention of any of the requirements of these Regulations by any other person shall be entitled to bring proceedings for compensation from that other person for that damage

It appears that Mr Mansfield created an account on the John Lewis website, and omitted to “untick” a box which purported to convey his consent to John Lewis sending him marketing emails. It further appears that in the County Court Mr Mansfield successfully argued that the subsequent sending of such emails was in breach of regulation 22(2), which provides in relevant part that

a person shall neither transmit, nor instigate the transmission of, unsolicited communications for the purposes of direct marketing by means of electronic mail unless the recipient of the electronic mail has previously notified the sender that he consents for the time being to such communications being sent…

Assuming that this accurately reflects what happened, I think Mr Mansfield was probably correct to argue that John Lewis had breached the regulations: the Information Commissioner’s Office (ICO) guidance states that

Some organisations provide pre-ticked opt-in boxes, and rely on the user to untick it if they don’t want to consent. In effect, this is more like an opt-out box, as it assumes consent unless the user clicks the box. A pre-ticked box will not automatically be enough to demonstrate consent, as it will be harder to show that the presence of the tick represents a positive, informed choice by the user

For a detailed exposition of the PECR provisions in play, see Tim Turner’s excellent recent blog post on this same story.

I’ve used the word “appears” quite a bit in this post, because there are various unknowns in this story. One of the main missing pieces of information is the actual amount of damages awarded to Mr Mansfield. Unless (and it is not the case here) exemplary or aggravated damages are available, an award will only act as compensation. It has been said that

The central purpose of a civil law award of damages is to compensate the claimant for the damage, loss or injury he or she has suffered as a result of another’s acts or omissions, and to put the claimant in the same position as he or she would have been but for the injury, loss or damage, so far as this is possible

So I doubt very much whether the award to Mr Mansfield was anything other than a small sum (so the albeit tongue-in-cheek Register reference to a PILE OF CASH is very probably way off the mark). I have asked him via his Twitter account for details, but have had no reply as yet.

Perhaps the most important aspect of this story, though, is the extent to which it indicates how the courts might interpret the relevant consent provisions of PECR. As a County Court case it sets no precedent and, unless someone decides to pay for a transcript of the hearing, we are very unlikely to get any written judgment or law report. But the principles at stake are profound ones, concerning how electronic marketing communications can lawfully be sent, and what “consent” means in this context.

The issue will not go away, and, although I suspect (referring back to my opening paragraph) that John Lewis chose not to appeal because the costs of doing so would have vastly outweighed the costs of settling the matter by paying the required damages, it would greatly benefit from some proper consideration by a higher court.

And another important aspect of the story is whether behaviours might change as a result. Maybe they have: I see that John Lewis, no doubt aware that others might take up the baton passed on by Mr Mansfield, have quietly amended their “create an account” page, so that the opt-in box is no longer pre-ticked.

UPDATE: 7 June

In a comment below, a pseudonymous commenter suggests that the damages award was indeed tiny – £10 plus £25 costs. The comment also suggests that John Lewis tried to argue that they were permitted to send the emails by virtue of the “soft opt-in” provisions of regulation 22(3) PECR, perhaps spuriously arguing that Mr Mansfield and they were in negotiations for a sale.

Filed under damages, Data Protection, Information Commissioner, marketing, PECR