Author Archives: Jon Baines

What’s so foolish about FOI?

The television presenter Phillip Schofield took to Twitter recently to draw attention to a Freedom of Information (FOI) request to Avon and Somerset Police. He did so because the request had asked about the cost to the force of Mr Schofield’s attendance at an open day.

Message to Tom Hodder .. No Fee!! My bro works for the police, it was a family day out!

I’ve no problem with his drawing attention to it, nor with his naming the person, but I thought it was rather unpleasant that he chose to use the hashtags #WastingPoliceTime #Fool. As Mr Schofield, and the response on WhatDoTheyKnow.com, say, the cost was nil, but I don’t suppose Mr Hodder was to know that: Mr Schofield was described on his own employer’s site as having been invited to attend, and he promotes himself as someone for hire for “personal appearances”. I didn’t know Mr Schofield’s brother works for the police, and I suspect Mr Hodder didn’t either.

Wasting Police Time is a term used to describe a criminal offence. What Mr Hodder was doing was exercising his statutory right to ask a public authority for information (in this instance about the expenditure of public funds), and I see nothing wrong in what he asked (nor, indeed, in the response by the police). I am sure Mr Schofield wasn’t seriously suggesting the commission of a criminal offence, but his use of the term, and the epithet “fool”, seem mean-spirited. And, of course, as he might have expected, many of his fans jumped to his defence and verbally attacked Mr Hodder.

All this seems rather ironic when one considers Mr Schofield’s involvement in 2012 in another “transparency” story. This was when he confronted the prime minister with a list of alleged child sex abusers which he had found online, but which he failed to shield from the studio cameras – a stunt which Jonathan Dimbleby described as “cretinous”. This led to his employer having to pay the late Lord McAlpine (whose name was on the list) £125,000 to settle a defamation claim. Even the apology which followed the incident had a mean-spirited air about it, when Mr Schofield appeared to blame the cameraman.

Mr Schofield has one of the largest followings on Twitter (2.99 million, at the time of writing). People with that sort of following carry some responsibility, and if they criticise named individuals they should do so fairly. I think it would be in order if he apologised to Mr Hodder.

2 Comments

Filed under Freedom of Information, police, social media

I DON’T KNOW WHAT I’M DOING

As surprising as it always is to me, I’m occasionally reminded that I don’t know everything. But when I’m shown not to know how my own website works, it’s more humbling.

A commenter on one of my blog posts recently pointed out the number of tracking applications which were in operation. I had no idea. (I’ve disabled (most of) them now).

And someone has just pointed out (and some others have confirmed) that, when visiting my blog on their iPhone, it asks them whether they want to tell me their current location. I have no idea why. (I’m looking into it).

These two incidents illustrate a few things to me.

Firstly, for all my pontificating about data protection, and – sometimes – information security, I’m not particularly technically literate: this is a wordpress.com blog, which is the off-the-peg version, with lots of things embedded/enabled by default. Ideally, I would run and host my own site, but I do this entirely in my own time, with no funding at all.

Secondly, and following on from the first, I am one among billions of people who run web applications without knowing a great deal about the code that they’re based on. In a world of (possibly deliberately coded) back-door and zero-day vulnerabilities this isn’t that surprising. If even experts can be duped, what hope for the rest of us?

Thirdly, and more prosaically, I had naively assumed that, in inviting people to read and interact with my blog, I was doing so in the capacity of data controller: determining the purposes for which and the manner in which their personal data was to be processed. (I had even considered notifying the processing to the Information Commissioner, although I know that they would (wrongly) consider I was exempt under section 36 of the Data Protection Act 1998.) But if I don’t even know what my site is doing, in what way can I be said to determine the data processing purposes and manner? And if I can’t, should I stop doing it? I don’t like to be nominally responsible for activities I can’t control.

Fourthly, and finally, can anyone tell me why my out-of-control blog is asking users to give me their location, and how I can turn the damned thing off?

UPDATE: 30.06.14

The consensus from lots and lots of helpful and much-appreciated comments seems to be a) that this location thingy is embedded in the wordpress software (maybe the theme software), and b) I should migrate to self-hosting.

The latter option sounds good, but I have to remind people that I DON’T KNOW WHAT I’M DOING.

UPDATE:05.07.14

The rather excellent Rich Greenhill seems to have identified the problem (I trust his judgement, but haven’t confirmed this). He says “WordPress inserts mobile-only getCurrentPosition from aka-cdn-nsDOTadtechusDOTcom/…DAC.js via adsDOTmopubDOTcom in WP ad script”…”Basically, WordPress inserts ads; but, for mobile devices only, the imported ad code also attempts to detect geo coordinates”.
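If Rich’s diagnosis is right, the mechanism is something like the following sketch. To be clear, the names and structure here are hypothetical, for illustration only – this shows the general pattern of mobile-only geolocation in embedded ad code, not the actual WordPress or AdTech source:

```javascript
// Hypothetical sketch of ad code that, for mobile user agents only,
// asks the browser for the visitor's location.

function looksMobile(userAgent) {
  // Crude user-agent sniffing of the kind ad scripts commonly use
  return /iPhone|iPad|Android|Mobile/i.test(userAgent);
}

function maybeRequestLocation(nav, onCoords) {
  // Desktop visitors, or browsers without geolocation support, are skipped
  if (!looksMobile(nav.userAgent) || !nav.geolocation) {
    return false;
  }
  // This call is what triggers the browser's "share your location?" prompt
  nav.geolocation.getCurrentPosition(function (pos) {
    onCoords(pos.coords.latitude, pos.coords.longitude);
  });
  return true;
}
```

On a live page the first argument would be the browser’s real `navigator` object; the `getCurrentPosition` call is the point at which the browser throws up its location dialog, which would explain why only (some) mobile visitors ever see it.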

So it does look like I, and other wordpress.com bloggers who can’t afford the “no ads” option, are stuck with this unless or until we can migrate away.

UPDATE: 11.07.14

We are informed that the code which asks (some) mobile users for their location when browsing this blog has now been corrected. Please let me know if it isn’t.

3 Comments

Filed under Data Protection, Information Commissioner, Personal, social media, tracking

Google is not a library, Dr Cavoukian

The outgoing Ontario Information and Privacy Commissioner Ann Cavoukian, whose time in office has been hugely, and globally, influential (see in particular Privacy by Design) has co-written (with Christopher Wolf) an article strongly criticising the judgment of the Court of Justice of the European Union (CJEU) in the Google Spain case.

For anyone who has been in the wilderness for the last few weeks, in Google Spain the CJEU ruled that Google Spain, as a subsidiary of Google Inc. operating on Spanish territory, was covered by the obligations of the European Data Protection Directive 95/46/EC, that it was operating as an entity that processed personal data in the capacity of a data controller, and that it was accordingly required to consider applications from data subjects for removal of search returns. Thus, what is loosely called a “right to be forgotten” is seen already to exist in the current data protection regime.

Many have written on this landmark CJEU ruling (I commend in particular Dr David Erdos’s take, on the UK Constitutional Law Blog) and I am not here going to go into any great detail, but what I did take issue with in the Cavoukian and Wolf piece was the figurative comparison of Google with a public library:

A man walks into a library. He asks to see the librarian. He tells the librarian there is a book on the shelves of the library that contains truthful, historical information about his past conduct, but he says he is a changed man now and the book is no longer relevant. He insists that any reference in the library’s card catalog and electronic indexing system associating him with the book be removed, or he will go to the authorities…

…The government agent threatens to fine or jail the librarian if he does not comply with the man’s request to remove the reference to the unflattering book in the library’s indexing system.

Is this a scenario out of George Orwell’s Nineteen Eighty-Four? No, this is the logical extension of a recent ruling from Europe’s highest court

(I pause briefly to say that if I never see another reference to Orwell in the context of privacy debate I will die a happy man).

I’m fond of analogies, but Cavoukian and Wolf’s (or maybe it’s a metaphor?) is facile. I think it could more accurately say

A man walks into a library. He sees that, once again, the library has chosen, because of how it organises its profit-making activities, to give great prominence to a book which contains information about his past conduct, which is no longer relevant, and which it is unfair to highlight. He asks them to give less prominence to it.

Cavoukian and Wolf accept that there should be a right to remove “illegal defamatory” content if someone posts it online, but feel that the issue of links to “unflattering, but accurate” information should be explored using “other solutions”. (I pause again to note that “unflattering” is an odd and loaded word to use here: Mr Gonzalez, in the Google Spain case, was concerned about out-of-date information about bankruptcy, and other people who might want to exercise a right to removal of links might be concerned by much worse than “unflattering” information).

I don’t disagree that other solutions should be explored to the issue of the persistence or reemergence of old information which data subjects reasonably no longer wish to be known, but people are entitled to use the laws which exist to pursue their aims, and the application by the CJEU of data protection law to the issues pleaded was, to an extent, uncontroversial (is Google a data controller? if it is, what are its obligations to respect a request to desist from processing?)

Cavoukian and Wolf criticise the CJEU for failing to provide sufficient instruction on how “the right to be forgotten” should be applied, and for failing to consider whether “online actors other than search engines have a duty to ‘scrub’ the Internet of unflattering yet truthful facts”, but a court can only consider the issues pleaded before it, and these weren’t. Where I do agree with them is in their criticism of the apparent failure by the CJEU, when giving effect to the privacy rights in Article 8 of the European Convention on Human Rights, and Articles 7 and 8 of the Charter of Fundamental Rights of the European Union, to consider adequately, if at all, the countervailing rights to freedom of expression in Article 10 of the former and Article 11 of the latter. In this respect, the prior Opinion of the Advocate General was perhaps to be preferred.

The key word in my replacement library analogy above is “chosen”. Google is not a passive and inert indexing system. Rather, it is a dynamic and commercially-driven system which uses complex algorithms to determine which results appear against which search terms. It already exercises editorial control over results, and will remove some which it is satisfied are clearly unlawful or which constitute civil wrongs such as breach of copyright. Is it so wrong that (if it gives appropriate weight to the (sometimes) competing considerations of privacy and freedom of expression) it should be required to consider a request to remove unfair and outdated private information?

2 Comments

Filed under Data Protection, Directive 95/46/EC, Europe, human rights, Privacy

Wading through the rules: fairness for litigants in the Information Tribunal

Any judicial system needs to have rules to ensure effective and efficient case management: failure to do so risks delays, backlogs and, ultimately, breaches of natural justice and Article 6 Convention rights. Thus, we have the civil, the criminal, and the family procedure rules, and, within the tribunal system, the 2008 Upper Tribunal Rules, and a whole host of First-tier Tribunal Rules (the ones relating to Information Rights cases are the General Regulatory Chamber Rules 2009 (TPR)). In addition, there are Practice Notes (such as one for “Closed Material in Information Rights Cases”) and a range of forms and guidance. There are even specific “Guidance notes for individuals representing themselves in freedom of information appeals in the general regulatory chamber of the first-tier tribunal” (which I shall call the “LiP Guidance” (with LiP meaning Litigant in Person)). (Interestingly, the only copy of this I can find online is hosted on a third party site.)

For such litigants in person, these sources of rules and guidance (and the navigating of them) are essential but complicated. A neat illustration of this point comes in a recent judgment of the Upper Tribunal on a Freedom of Information Act 2000 (FOIA) case.

In the First-tier Tribunal (FTT) a Mr Matthews had sought to appeal the Information Commissioner’s (IC) decision notice that the Department for Business, Innovation and Skills (DBIS) didn’t hold the majority of information sought about the tendering process for the delivery of marketing workshops from Business Link West Midlands, and that what it did hold was exempt from disclosure under section 40(2) of FOIA. Mr Matthews, referring to the LiP Guidance (at paragraph 16) asked for, and expected, an oral hearing.

However, in responding to the notice of appeal, the IC applied successfully, under rule 8(2)(a) of the TPR, to “strike out” one ground of appeal, and under rule 8(3)(c) to “strike out” the remainder.

Lawyers, and those who deal in this subject regularly, recognise that to “strike out” all grounds of appeal means the appeal is no more. But others might sympathise with Mr Matthews, who did not have any help on this matter from the LiP Guidance, and who, when asked by the Upper Tribunal judge, explained that what he had thought it meant was

that the way in which he had written his grounds out may be stuck through or altered, or sent back to him to change, but that the appeal itself would continue

So, we have Mr Matthews, still expecting an appeal with a hearing, but getting neither.

But was he entitled to a hearing, not of his substantive appeal, but to determine whether his appeal should be struck out? This was what was, in the main, at issue in the Upper Tribunal.

Rule 32(3) of the TPR says that the general rule that the FTT must hold a hearing before disposing of an appeal need not apply when deciding whether to strike out a party’s case. It does not preclude a hearing, though, but, rather, leaves it to the FTT’s discretion. In this instance the Upper Tribunal judge decided that the FTT erred in law in not exercising its discretion to hold a hearing and, alternatively or additionally, for failing to give any reasons for not holding a hearing.

Accordingly, the case is remitted to the FTT for it to hold an oral hearing of the strike-out application.

This might seem a very convoluted and unimportant judgment, but it shows the Upper Tribunal is alive to the difficulties faced by lay self-represented litigants in what should be more of an inquisitorial, rather than adversarial, system. And it shows, as have other cases before it (see for instance Dransfield v IC & Devon County Council, and IICUS v IC & BIS & Ray) that the Upper Tribunal is not unwilling to remit cases to the FTT on grounds of procedural unfairness.

3 Comments

Filed under Freedom of Information, Information Commissioner, Information Tribunal, Upper Tribunal

Social media crimes at least 50% of front line policing? I don’t think so

UPDATE: The BBC have now amended the headline, but, as FullFact point out, there are still concerns about the accuracy of the story.

What looks like a silly and hyperbolic BBC headline about crimes on social media is getting a lot of coverage. On social media. Here I question whether it’s accurate. On social media.

Trailing the always excellent Joshua Rozenberg programme Law in Action, the BBC has run a story with a headline saying

Social media crimes ‘at least half’ of front-line policing

And Law in Action’s own page on the broadcast in question also says

Chief Constable Alex Marshall, head of the College of Policing…estimates that as much as half of a front-line officer’s daily workload is spent dealing with calls related to online disputes

I know the BBC has to publicise itself, and maybe the programme itself will support the assertions made, but the quotes attributed to Mr Marshall don’t do so. He says

[Reports of crime involving social media are] a real problem for people working on the front line of policing, and they deal with this every day…So in a typical day where perhaps they deal with a dozen calls, they might expect that at least half of them, whether around antisocial behaviour or abuse or threats of assault may well relate to social media, Facebook, Twitter or other forms

So what he’s actually saying is that of the dozen or so calls that a front line officer receives a day, about half “may well” relate to social media. Now, I may be naive, but surely a front line police officer’s workload is about an awful lot more than receiving calls. Even if a call is often the precursor to further actions, Mr Marshall doesn’t suggest that the calls about social media inevitably lead to such further action. In fact, I would be amazed if they did, and, indeed, other remarks attributed to Mr Marshall and an unnamed officer suggest that many of these calls relate to obviously non-criminal matters, and the clear implication is that they will lead to no further action whatsoever.

Crimes involving or committed on social media are a serious societal and policing issue, and I am sure Law in Action itself will consider this in its usual measured and serious way, but for the BBC to suggest that the issue takes up more than half of front line policing resource seems to me to be hyperbolic and irresponsible.

Leave a comment

Filed under BBC, police, social media

The Partridge Review reveals apparently huge data protection breaches

Does the Partridge Review of NHS transfers of hospital episode patient data point towards one of the biggest DPA breaches ever?

In February this year Tim Kelsey, NHS England’s National Director for Patients and Information, and vocal cheerleader for the care.data initiative, assured the public, in an interview on the Radio 4 Today programme, that in the twenty five years that Hospital Episode Statistics (HES) have been shared with other organisations

the management of the hospital episode database…there has never been a single example of that data being compromised, the privacy of patients being compromised…

When pressed by medConfidential‘s Phil Booth about this, and about risks of reidentification from the datasets, Tim repeated that no patient’s privacy had been compromised.

Some of us doubted this, as news of specific incidents of data loss emerged, and even more so as further news emerged suggesting that there had been transfers (a.k.a. sale) of huge amounts of potentially identifiable patient data to, for instance, the Institute and Faculty of Actuaries. The latter news led me to ask the Information Commissioner’s Office (ICO) to assess the lawfulness of this processing, an assessment which has not been completed four months later.

However, with the publication on 17 June of Sir Nick Partridge’s Review of Data Releases by the NHS Information Centre one questions the basis for Tim’s assertions. Sir Nick commissioned PwC to analyse a total of 3059 data releases between 2005 and 2013 (when the NHS Information Centre (NHSIC) ceased to exist, and was replaced by the Health and Social Care Information Centre (HSCIC)). The summary report to the Review says that

It disappoints me to report that the review has discovered lapses in the strict arrangements that were supposed to be in place to ensure that people’s personal data would never be used improperly

and it reveals a series of concerning and serious failures of data governance, including

  • lack of detailed records between 1 April 2005 and 31 March 2009
  • two cases of data that was apparently released without a proper record remaining of which organisation received the data
  • [no] evidence that Northgate [the NHSIC contractor responsible for releases] got permission from the NHS IC before making releases as it was supposed to do
  • PwC could not find records to confirm full compliance in about 10% of the sample

Sir Nick observes that

the system did not have the checks and balances needed to ensure that the appropriate authority was always in place before data was released. In many cases the decision making process was unclear and the records of decisions are incomplete.

and crucially

It also seems clear that the responsibilities of becoming a data controller, something that happens as soon as an organisation receives data under a data sharing agreement, were not always clear to those who received data. The importance of data controllers understanding their responsibilities remains vital to the protection of people’s confidentiality

(This resonates with my concern, in my request to the ICO to assess the transfer of data from HES to the actuarial society, about what the legal basis was for the latter’s processing).

Notably, Sir Nick dispenses with the idea that data such as HES was anonymised:

The data provided to these other organisations under data sharing agreements is not anonymised. Although names and addresses are normally removed, it is possible that the identity of individuals may be deduced if the data is linked to other data

And if it was not anonymised, then the Data Protection Act 1998 (DPA) is engaged.

All of this indicates a failure to take appropriate technical and organisational measures against unauthorised or unlawful processing of personal data, which the perspicacious among you will identify as one of the key statutory obligations placed on data controllers by the seventh data protection principle in the DPA.

Sir Nick may say

It is a matter of fact that no individual ever complained that their confidentiality had been breached as a result of data being shared or lost by the NHS IC

but simply because no complaint was made (at the time – complaints certainly have been made since concerns started to be raised) does not mean that the seventh principle was not contravened, in a serious way. And a serious contravention of the DPA of a kind likely to cause substantial damage or substantial distress can potentially lead to the ICO serving a monetary penalty notice (MPN) to a maximum of £500,000 (at least for contraventions after April 2010, when the ICO’s powers commenced).

The NHSIC is no more (although as Sir Nick says, HSCIC “inherited many of the NHS IC’s staff and procedures”). But that has not stopped the ICO serving MPNs on successor organisations in circumstances where their predecessors committed the contravention. One waits with interest to see whether the ICO will take any enforcement action, but I think it’s important that they consider doing so, because, even though Sir Nick makes nine very sensible recommendations to HSCIC, one could be forgiven – having been given clear assurances previously, by the likes of Tim Kelsey and others – for having reservations as to future governance of our confidential medical data. I would suggest it is imperative that HSCIC know that their processing of personal data is now subject to close oversight by all relevant regulatory bodies.


2 Comments

Filed under care.data, Confidentiality, Data Protection, data sharing, Information Commissioner, monetary penalty notice, NHS, Privacy

A public interest test in the Data Protection Act?

Mr Justice Cranston has suggested that there is a public interest factor when considering whether disclosure of personal data would be “fair” processing. I’m not sure that is right.

The first data protection principle (DPP1) in Schedule 1 of the Data Protection Act 1998 (DPA) says that personal data must be processed “fairly” (and lawfully). But what does “fairly” mean?

In an interesting recent case (AB v A Chief Constable [2014] EWHC 1965 (QB)) the High Court determined that, on the very specific facts, it would not be fair, in terms of DPP1, and common law legitimate expectation, for a Chief Constable to send a second, non-standard, reference to the new employer of a senior police officer who was subject to disciplinary investigation. (The judgment merits close reading – this was by no means a statement of general principle about police references). The reason it would not be fair was because the officer in question had tendered his resignation upon the sending of the initial, anodyne, reference, and the force had terminated misconduct proceedings:

He was thus in the position that for the Force to send the second reference would most likely leave him without employment and without the opportunity to refute the gross misconduct allegations. In these special circumstances it would be a breach of the Data Protection Act 1998 and undermine his legitimate expectations for the second reference to be sent [¶94]

Something in particular struck me about the judge’s analysis of DPP1, although, given the outcome, it was not determinative. He rejected a submission from the claimant officer that the duty of fairness in the DPP1 and the European Data Protection Directive was a duty to be fair primarily to the data subject. Rather, correctly identifying that the privacy rights in the Directive and the DPA are grounded in article 8 of the European Convention on Human Rights and in general principles of EU law, he held that

The rights to private and family life in Article 8 are subject to the countervailing public interests set out in Article 8(2). So it is here: assessing fairness involves a balancing of the interests of the data subject in non-disclosure against the public interest in disclosure [¶75]

I am not sure this is right. Recital 28 of the Directive says

Whereas any processing of personal data must be lawful and fair to the individuals concerned [emphasis added]

and recital 38 suggests that whether processing is “fair” is in large part dependent on whether the data subject is made aware of the processing and the circumstances under which it takes place. These recitals give way to the descriptions in Articles 10 and 11 which both talk about “fair processing in respect of the data subject” (again, emphasis added). Similarly Part II of Schedule One to the DPA provides interpretation to DPP1, and says that in determining whether personal data are processed fairly

regard is to be had to the method by which they are obtained, including in particular whether any person from whom they are obtained is deceived or misled as to the purpose or purposes for which they are to be processed

Admittedly this introduces “any person”, which could be someone other than the data subject, but more general considerations of public interest are absent. It is also notable that the Information Commissioner’s position in guidance seems predicated solely on the belief that it is the data subject’s interests that are engaged in an analysis of “fairness”. The guidance does concede that processing might cause some detriment to the individual without it being unfair, but I do not think this is the same as taking into account the public interest in disclosure.

To the extent that a public interest test does manifest itself in DPP1, it is normally held to be in the conditions in Schedules 2 and 3. DPP1 says that, in addition to the obligation to process personal data fairly and lawfully, a condition in Schedule 2 (and, for sensitive personal data, Schedule 3) must be met. Many of these conditions contain tests as to whether the processing is “necessary”, and that “necessity test” constitutes a proportionality test, as described by Latham LJ in Corporate Officer of the House of Commons v The Information Commissioner & Ors [2008] EWHC 1084 (Admin)

‘necessary’…should reflect the meaning attributed to it by the European Court of Human Rights when justifying an interference with a recognised right, namely that there should be a pressing social need and that the interference was both proportionate as to means and fairly balanced as to ends

To import a public interest test into the word “fairly” in DPP1 seems to me to be a potentially radical step, especially when disclosures of personal data under the Freedom of Information Act 2000 (FOIA) are being considered. As I say – I doubt that this is correct, but I would welcome any contrary (or concurring) opinions.

(By the way, I at first thought there was a more fundamental error in the judgment: the judge found that a rule of law was engaged which ordinarily would have required the Chief Constable to send the second reference:

the public law duty of honesty and integrity would ordinarily have demanded that the Chief Constable send the Regulatory Body something more than the anodyne reference about the claimant [¶93]

If a rule of law necessitates disclosure of personal data, then the exemption at section 35 DPA removes the requirement to process that data fairly and lawfully. However, I think the answer lies in the use of the word “ordinarily”: in this instance the doctrine of legitimate expectation (which the claimant could rely upon) meant that the public law duty to send the second reference didn’t apply. So section 35 DPA wasn’t engaged.)

7 Comments

Filed under Confidentiality, Data Protection, human rights, police

Virgin on the ridiculous

UPDATE 15.12.14: I think the comments on this piece take it further, and I do accept (as I did at the time, in fact) that the “password” in question was not likely to relate to customers’ accounts.
END UPDATE.

I got into a rather odd exchange over the weekend with the people running the Virgin Media twitter account. It began when, as is my wont, I was searching for tweets about “data protection” and noticed an exchange in which someone had asked Virgin Media whether their sales people rang customers and asked them to give their passwords. Virgin Media kindly appeared to confirm they did, and that

it’s for security as we can’t make any changes without data protection being passed

I asked for clarification, and this exchange ensued

[ME] Is it true your sales people call customers and ask for their account passwords? If so, are these unsolicited calls?

[VM] Yes this is true, our sales team would call and before entering your account, would need you to pass account security. I understand for your own security purposes why you wouldn’t feel great doing this, i’d be the same. If you give us a call on 150/03454541111 we can get this cleared up. Let me know how you get on

[ME] Thanks. Not a customer. Just interested in what seems like questionable practice being defended under guise of data protection

[VM] We contact our customers if there upgrade is due, or for a heath check on accounts, and a few other instances, but I get where your coming from [sic]

There’s nothing unlawful about this practice, and I assume that the accounts in question are service and not financial ones, but it doesn’t accord with normal industry practice. Moreover, one is warned often enough about the risks of phishing calls asking for account passwords. If a legitimate company requires or encourages its sales staff to do this, it adds to a culture of unnecessary risk. There are better ways of verifying identity, as their social media person seems to accept, when they say “I understand for your own security purposes why you wouldn’t feel great doing this, i’d be the same”.

One thing I’m certain about, though, is that it isn’t any part of “passing data protection” (unless they mean bypassing) to make outbound calls and ask for customer passwords.

On a final note, and in admiration of bare-faced cheek, I highlight the end of my exchange with Virgin Media

If you want, as your not a customer, you can check out our brill offers here [removed] maybe we could save you a few pounds?

That’s an offer I most certainly can refuse.

(By the way, as it’s an official Virgin Media account, I’ve taken what I was told on Twitter at face value. If I have misunderstood any of their policies on this I’d be happy to correct).

UPDATE:

Virgin Media’s Twitter account appears to have confirmed to me a) that they do ask for customers’ passwords on outbound sales calls, and b) that they see nothing wrong with it. And rather hilariously, they say that “we can discuss further” if I will “pop a few details” on their web form for social media enquiries. No thanks.

12 Comments

Filed under Data Protection, Let's Blame Data Protection, marketing, nuisance calls, PECR, social media

Ticking off Neelie Kroes (sort of)

In which I take issue with the European Commission V-P about what the Consumer Rights Directive says about pre-ticked boxes

I found myself retweeting what I think was a rather misleading message from the Vice-President of the European Commission, Neelie Kroes. Her tweet said

You know those annoying “pre-ticked boxes” on shopping/travel websites? They’re banned in #EU from today http://europa.eu/rapid/press-release_IP-14-655_en.htm#eCommerce

I thought this was very interesting, particularly in light of my recent post about the implying of consent to electronic marketing if people forget to untick such boxes. The EU press release itself does say at one point

Under the new EU rules…consumers can now rely on…A ban on pre-ticked boxes on the internet, as for example when they buy plane tickets

But, it earlier says

The new rules also ban…pre-ticked boxes on websites for charging additional payments (for example when buying plane tickets online)

The words “for charging additional payments” in that last quote are crucial (the press release’s emphasis does not survive here). What Directive 2011/83/EU of the European Parliament and of the Council of 25 October 2011 on consumer rights actually proscribes is the contractual binding of a consumer to any payment in addition to the original remuneration agreed on if

the trader has not obtained the consumer’s express consent but has inferred it by using default options which the consumer is required to reject in order to avoid the additional payment

So, as the press release explains,

When shopping online –for example when buying a plane ticket – you may be offered additional options during the purchase process, such as travel insurance or car rental. These additional services may be offered through so-called pre-ticked boxes. Consumers are currently often forced to untick those boxes if they do not want these extra services. With the new Directive, pre-ticked boxes will be banned across the European Union.

I happen to think that that text should more properly say “With the new Directive, pre-ticked boxes of this sort will be banned across the European Union”.

So, no ban on pre-ticked boxes themselves, just on those which purport to bind a consumer to an additional payment under a contract.

The Directive has been implemented in the UK by the Consumer Contracts (Information, Cancellation and Additional Charges) Regulations 2013 and the associated Enterprise Act 2002 (Part 8 EU Infringements) Order 2013, the former of which says (at regulation 40)

Under a contract between a trader and a consumer, no payment is payable in addition to the remuneration agreed for the trader’s main obligation unless, before the consumer became bound by the contract, the trader obtained the consumer’s express consent… There is no express consent (if there would otherwise be) for the purposes of this paragraph if consent is inferred from the consumer not changing a default option (such as a pre-ticked box on a website)

Having said all this, I do think it is interesting that clearly-defined concepts of “express consent” are making their way into European and domestic legislation. And in due course, we may even find that, for instance, electronic marketing will be restrained unless similarly clearly-defined express consent is given. But not just yet.

Update: Ms Kroes kindly replied to me, saying it’s difficult to get a message across in 140 characters. So true.

Leave a comment

Filed under Data Protection, Europe, marketing, PECR

The Ministry of Poor Record Keeping?

If the Ministry of Justice really can’t search the text of emails for information, how can it comply with the FOI Code of Practice on Records Management?

In performing his functions under the Freedom of Information Act 2000 (FOIA) the Information Commissioner (IC) must promote the observance by public authorities of codes of practice issued under section 45 and section 46 of FOIA. Section 46 provides for a code of practice to be issued by the Lord Chancellor as to desirable practice for public authorities for the keeping, management and destruction of their records. A code was duly issued by the then Lord Chancellor Lord Irvine in 2002.

So, when deciding whether, for instance, a public authority has complied with its obligations under part 1 of FOIA (i.e. has it properly responded to a request for information?) the IC should, I submit, take into account where necessary whether the authority is complying with the Records Management Code.

With this in mind, consider the Ministry of Justice’s (MoJ) reported response to an FOI request for any mentions on its systems of the Howard League for Penal Reform. As Ian Dunt reports, the MoJ said that

On this occasion, the cost of determining whether we hold the information would exceed the limit set by the Freedom of Information Act

I have seen the MoJ response in question, and I accept that it is legitimate for a public authority to refuse to disclose information if the costs of determining whether it is held exceed the limit prescribed by regulations (although authorities have an obligation under section 16 FOIA to advise and assist applicants as to how they might reframe their request to fall within the cost limit, and the MoJ have failed to do this). However, while the response refers to a necessity to search paper records, it also says

A manual search is required as central search functions (for example, those on email systems) would not identify all correspondence – for example, if the Howard League for Penal Reform was mentioned in the body of the text

This appears to suggest, as Ian says, that “they can only search electronically for the headline of an email, not the body of a message”.

If this is true (which seems extraordinary, but one must assume it is, because intentionally concealing information which should otherwise be disclosed under FOIA is an offence), it would appear to be contrary to the desirable practice in the Records Management Code, which says that

Records systems should be designed to meet the authority’s operational needs and using them should be an integral part of business operations and processes. Records systems should…enable quick and easy retrieval of information. With digital systems this should include the capacity to search for information requested under [FOIA]

It would be most interesting if the Howard League were to refer this to the IC for a decision. The IC rarely these days mentions the Records Management Code, but as the Code itself says

Records and information are the lifeblood of any organisation. They are the basis on which decisions are made, services provided and policies developed and communicated

Not only does poor records management affect compliance with FOIA (and other legal obligations), but it is not conducive to the reduction of back-office costs, developing new ways of working, and driving economies of scale (all things, of course, which the current Lord Chancellor prays in aid of his potentially devastating changes to legal aid provision).

p.s. As @Unity_MoT points out on Twitter, if the MoJ struggles to search its systems to respond to FOIA requests, how does it undertake searches when responding to subject access requests under section 7 of the Data Protection Act 1998? See e.g. page 17 of the IC Code of Practice on Subject Access:

Not only should your systems have the technical capability to search for the information necessary to respond to a SAR, but they should also operate by reference to effective records management policies


Leave a comment

Filed under Freedom of Information, Information Commissioner, records management