Tag Archives: ICO

Chris Graham and the cost of FOI tribunals

When Information Commissioner (IC) Christopher Graham speaks, people listen. And so they should: he is the statutory regulator of the Freedom of Information Act 2000 (FOIA), whose role is “to uphold information rights in the public interest”. A speech by Graham is likely to be examined carefully, to see if it gives indications of future developments, and this is why I am slightly concerned by a particular section of his recent speech at an event in Scotland looking at ten years of the Scottish FOI Act.

The section in question dealt with his envy of his Scottish counterparts. They, he observed, have relatively greater resources, and the Scottish Information Commissioner, unlike him, has a constitutional status that bolsters her independence. But he also envied

the simple and straightforward appeals mechanism in the Scottish legislation. The Scottish Commissioner’s decision is final, subject only to an appeal to the Court of Session on a point of law.

By contrast, in England, Wales and Northern Ireland, under section 57 of FOIA, there is a right of appeal to a tribunal (the First-tier Tribunal (Information Rights)). Under section 58(2) the Tribunal may review any finding of fact by the IC – this means that the Tribunal is able to substitute its own view for that of the commissioner. In Scotland, by contrast, as Graham indicates, the commissioner’s decision is only able to be overturned if it was wrong as a matter of law.

But there is another key difference arising from the different appellate systems: an appeal to the Tribunal is free, whereas in Scotland an application to the Court of Session requires a fee to be paid (currently £202). Moreover, a court is a different creature to a tribunal: the latter aims to “adopt procedures that are less complicated and more informal” and, as Sir Andrew Leggatt noted in his key 2001 report Tribunals for Users: One System, One Service

Tribunals are intended to provide a simple, accessible system of justice where users can represent themselves

It is very much easier for a litigant to represent herself in the Information Tribunal than it would be in a court.

Clearly, the situation as it currently obtains in England, Wales and Northern Ireland – a free right of appeal to a Tribunal which can take a merits view of the case – will lead to more appeals, but isn’t that rather the point? There should be a straightforward way of challenging the decisions of a regulator on access to information matters. Graham bemoans that he is “having to spend too much of my very limited resources on Tribunals and lawyers”. I would have more sympathy if this were purely wasted expenditure – if the appeals made were futile and changed nothing – but the figures don’t bear this out. Graham says that this year there have been 179 appeals; I don’t know where his figures are from, but from a rough totting-up of the cases listed on the Tribunal’s website I calculated that there have been about 263 decisions promulgated this year, of which 42 were successful. So, very far from showing an appeal to be a futile exercise, these figures suggest that roughly one in six was successful (at least in the first instance). What is also notable, though, is the small but significant number of consent orders – nine this year. A consent order results where the parties no longer contest the proceedings and agree on terms to conclude them. It is speculation on my part, but I would be very interested to know how many of those nine orders resulted from the IC deciding, on the arguments submitted, that his position was no longer sustainable.

What I’m getting at is that the IC doesn’t always get things right in the first instance; therefore, a right of appeal to an independent fact-finding tribunal is a valuable one for applicants. I think it is something we should be proud of, and we should feel sorry for FOI applicants in Scotland who are forced into court litigation (and proving an error of law) in order to challenge a decision there.

Ultimately, the clue to Graham’s disapproval of the right of appeal to Tribunal lies in the words “limited resources”. I do sympathise with his position – FOI regulation is massively underfunded by the government, and I rather suspect that, with better resourcing, Graham would take a different view. But I think his speech was particularly concerning because the issue of whether there should be a fee for bringing a case in the Tribunal was previously raised by the government, in its response to post-legislative scrutiny of FOIA. Things have gone rather quiet on this since, but might Graham’s speech herald the revival of such proposals?

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.

2 Comments

Filed under access to information, Freedom of Information, Information Commissioner, Information Tribunal

Hidden data in FOI disclosures

The Hackney Gazette reports that details of 15,000 residents have been published on the internet after Hackney Council apparently inadvertently disclosed the data when responding to a Freedom of Information (FOI) request made using the WhatDoTheyKnow site.

This is not the first time that such apparently catastrophic inadvertent disclosures have happened through WhatDoTheyKnow, and, indeed, in 2012 MySociety, who run the site, issued a statement following a similar incident with Islington Council. As that made clear

responses sent via WhatDoTheyKnow are automatically published online without any human intervention – this is the key feature that makes this site both valuable and popular

It is clearly the responsibility of the authorities in question to ensure that no hidden or exempt information is included in FOI disclosures via WhatDoTheyKnow, or indeed, in FOI disclosures in general. A failure to have appropriate organisational and technical safeguards in place can lead to enforcement action by the Information Commissioner’s Office for contraventions of the Data Protection Act 1998 (DPA): Islington ended up with a monetary penalty notice of £70,000 for their incident, which involved 2,000 people. Although the number of data subjects involved is not the only factor the ICO will take into account when deciding what action to take, it is certainly a relevant one: 15,000 affected individuals is a hell of a lot.

What concerns me is that this sort of thing keeps happening. We don’t know the details of this incident yet, but with such large numbers of data subjects involved it seems likely that it will have involved some sort of dataset, and I would not be at all surprised if it involved purportedly masked or hidden data, such as in a pivot table [EDIT – I’m given to understand that this incident involved cached data in MS Excel]. Around the time of the Islington incident the ICO’s Head of Policy Steve Wood published a blog post drawing attention to the risks. A warning also appears in a small piece on a generic ICO page about request handling, which says

take care when using pivot tables to anonymise data in a spreadsheet. The spreadsheet will usually still contain the detailed source data, even if this is hidden and not immediately visible at first glance. Consider converting the spreadsheet to a plain text format (such as CSV) if necessary.

This is fine, but does it go far enough? Last year I wrote on the Guardian web site, and called for greater efforts to be made to highlight the issue. I think that what I wrote then still holds

The ICO must work with the government to offer advice direct to chief executives and those responsible for risk at councils and NHS bodies (and perhaps other bodies, but these two sectors are probably the highest risk ones). So far these disclosure errors do not appear to have led to harm to those individuals whose private information was compromised, but, without further action, I fear it is only a matter of time.

Time will tell whether this Hackney incident results in a finding of DPA contravention, and ICO enforcement, but in the interim I wish the word would get spread around about how to avoid disclosing hidden data in spreadsheets.
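
For what it’s worth, the safest course is the one the ICO guidance hints at: before disclosure, export only what is visibly on the face of the spreadsheet to a plain text format. The sketch below is purely illustrative (it assumes the Python openpyxl library and an invented file name, and it is no substitute for checking the output by eye), but it shows the principle: a CSV built only from visible cell values cannot carry hidden sheets, hidden rows or columns, or a cached pivot-table source.

import csv
from openpyxl import load_workbook

# Illustrative sketch only: write the visible cell values of each visible
# worksheet out to CSV, so hidden sheets, hidden rows/columns and cached
# pivot source data are left out of the disclosed file.
wb = load_workbook("foi_response.xlsx", data_only=True)  # cached values, not formulas

for ws in wb.worksheets:
    if ws.sheet_state != "visible":  # skips hidden and "very hidden" sheets
        continue
    with open(f"{ws.title}.csv", "w", newline="") as out:
        writer = csv.writer(out)
        for row in ws.iter_rows():
            if ws.row_dimensions[row[0].row].hidden:
                continue  # skip hidden rows
            writer.writerow(
                cell.value
                for cell in row
                if not ws.column_dimensions[cell.column_letter].hidden
            )

Even then, whatever comes out should be reviewed before publication; the point is simply that a plain text file holds nothing beyond what can be seen.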

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.

3 Comments

Filed under Data Protection, Freedom of Information, Information Commissioner, monetary penalty notice

FOI disclosure of personal data: balancing of interests

In June this year I blogged about the case of AB v A Chief Constable (Rev 1) [2014] EWHC 1965 (QB). In that case, Mr Justice Cranston had held that, when determining whether personal data is being or has been processed “fairly” (pursuant to the first principle of Schedule One of the Data Protection Act 1998 (DPA))

assessing fairness involves a balancing of the interests of the data subject in non-disclosure against the public interest in disclosure [¶75]

I was surprised by this reading of an interests balance into the first principle, and said so in my post. Better people than I disagreed, and I certainly am even less sure now than I was of the correctness of my view.

In any case, the binding authority of the High Court rather trumps my meanderings, and it is cited in a recent decision of the First-tier Tribunal (Information Rights) in support of a ruling that the London Borough of Merton Council must disclose, under the Freedom of Information Act 2000 (FOIA), an email sent to a cabinet member of that council by Stephen Hammond MP. The Tribunal, in overturning the decision of the Information Commissioner, considered the private interests of Mr Hammond, including the fact that he had objected to the disclosure, but felt that these did not carry much weight:

we do not consider anything in the requested information to be particularly private or personal and that [sic] this substantially weakens the weight of interest in nondisclosure…We accept that Mr Hammond has objected to the disclosure, which in itself carries some weight as representing his interests. However, asides from an expectation of a general principle of non-disclosure of MP correspondence, we have not been given any reason for this. We have been given very little from the Commissioner to substantiate why Members of Parliament would have an expectation that all their correspondence in relation to official work remain confidential

and balanced against these were the public interests in disclosure, including

no authority had been given for the statement [in the ICO’s decision notice] that MPs expect that all correspondence to remain confidential…[;]…withholding of the requested information was not compatible with the principles of accountability and openness, whereby MPs should subject themselves to public scrutiny, and only withhold information when the wider public interest requires it…[;]…the particular circumstances of this case [concerning parking arrangements in the applicant’s road] made any expectation of confidentiality unreasonable and strongly indicated that disclosure would be fair

The arguments weighed, said the Tribunal, strongly in favour of disclosure.

A further point fell to be considered, however: for processing of personal data to be fair and lawful (per the first data protection principle) there must be met, beyond any general considerations, a condition in Schedule Two DPA. The relevant one, condition 6(1), requires that

The processing is necessary for the purposes of legitimate interests pursued by the data controller or by the third party or parties to whom the data are disclosed, except where the processing is unwarranted in any particular case by reason of prejudice to the rights and freedoms or legitimate interests of the data subject

It has to be noted that “necessary” here in the DPA imports a human rights proportionality test and it “is not synonymous with ‘indispensable’…[but] it implies the existence of a ‘pressing social need'” (The Sunday Times v United Kingdom (1979) 2 EHRR 245). The Tribunal, in what effectively was a reiteration of the arguments about general “fairness”, accepted that the condition would be met in this case, citing the applicant’s arguments, which included the fact that

disclosure is necessary to meet the public interest in making public what Mr Hammond has said to the Council on the subject of parking in Wimbledon Village, and that as an elected MP, accountable to his constituents, disclosure of such correspondence cannot constitute unwarranted prejudice to his interests.

With the exception of certain names within the requested information, the Tribunal ordered disclosure.  Assessing “fairness” now, following Mr Justice Cranston, and not following me, clearly does involve balancing the interests of the data subject against the public interest in disclosure.

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.

1 Comment

Filed under Data Protection, Freedom of Information, Information Commissioner, Information Tribunal

The wrong test for anonymisation?

UPDATE: 23.01.15 The ICO has responded [.doc file] to my request for a review of their decision. I drew their attention to the arguments on this page but they don’t even mention them, let alone provide a counter-analysis, in dismissing my complaints (“Having reviewed the matter, I agree with the explanations provided”). I am invited by the ICO to consider taking my own legal action. I understand that the ICO and I might have differing views on a DPA matter, but what I find difficult to accept is the refusal even to enter into a discussion with me about the detailed arguments I’ve made. END UPDATE

In February this year I asked the Information Commissioner’s Office (ICO) to investigate reports that Hospital Episode Statistics (HES) data had apparently been sold to an actuarial society by the NHS Information Centre (NHSIC), the predecessor to the Health and Social Care Information Centre (HSCIC). Specifically I requested, as a data subject can under s42 of the Data Protection Act 1998 (DPA), that the ICO assess whether it was likely or not that the processing of my personal data by NHSIC and others had been in compliance with the DPA.

Nine months later, I was still awaiting the outcome. But a clue to how the assessment would turn out was contained in the text of Sir Nick Partridge’s six month review of various data releases by NHSIC (his original report in June seemed to me to point to multiple potential DPA contraventions). In the review document he says

Six investigations have been separately instigated by the HSCIC or Information Commissioner’s Office (ICO) and shared with both parties as these focussed on whether individuals were at risk of being identified. In the cases it has investigated, the ICO has upheld the HSCIC approach and informed us that it has “seen no evidence to suggest that re-identification has occurred or is reasonably likely to occur.”

And sure enough, after chasing the ICO for the outcome of my nine-month wait, I received this (in oddly formatted text, which rather whiffed of a lot of cutting-and-pasting)

Following the recent issue regarding HSCIC, PA Consulting, and Google we investigated the issue of whether HES data could be considered personal data. This detailed work involved contacting HSCIC, PA Consulting, and Google and included the analysis of the processes for the extraction and disclosure of HES data both generally and in that case in particular. We concluded that we did not consider that the HES dataset constitutes personal data.Furthermore we also investigated whether this information had been linked to other data to produce “personal data” which was subject to the provisions of the Act. We have no evidence that there has been any re-identification either on the part of PA Consulting or Google. We also noted that HSCIC have stated that the HES dataset does not include individual level patient data even at a pseudonymised level. Our view is that the data extracted and provided to PA Consulting did not identify any individuals and there was no reasonable likelihood that re-identification would be possible.
I have added the emphasis to the words “reasonable likelihood” above. They appear in similar terms in the Partridge Review, and they struck me as rather odd. An awful lot of analysis has taken and continues to take place on the subject of when personal data can be “rendered fully anonymous in the sense that it is information from which the data subject is no longer identifiable” (Lord Hope’s dicta in Common Services Agency v Scottish Information Commissioner [2008] UKHL 47). Some of that analysis has been academic, some takes the form of “soft law” guidance, for instance Opinion 05/2014 of the Article 29 Working Party, and the ICO Anonymisation Code of Practice. The former draws on the Data Protection Directive 95/46/EC, and notes that

Recital 26 signifies that to anonymise any data, the data must be stripped of sufficient elements such that the data subject can no longer be identified. More precisely, that data must be processed in such a way that it can no longer be used to identify a natural person by using “all the means likely reasonably to be used”

Anonymisation has also been subject to judicial analysis, notably in the Common Services Agency case, but, even more key, in the judgment of Mr Justice Cranston in Department of Health v Information Commissioner ([2011] EWHC 1430). The latter case, involving the question of disclosure of late-term abortion statistics, is by no means an easy judgment to parse (ironically so, given that it makes roughly the same observation of the Common Services Agency case). The judge held that the First-tier Tribunal had been wrong to say that the statistics in question were personal data, but that it had on the evidence been entitled to say that “the possibility of identification by a third party from these statistics was extremely remote”. The fact that the possibility of identification by a third party was extremely remote meant that “the requested statistics were fully anonymised” (¶55). I draw from this that for personal data to be anonymised in statistical format the possibility of identification of individuals by a third party must be extremely remote. The ICO’s Anonymisation Code, however, says of the case:

The High Court in the Department of Health case above stated that the risk of identification must be greater than remote and reasonably likely for information to be classed as personal data under the DPA [emphasis added]

But this seems to me to be an impermissible description of the case – the High Court did not state what the ICO says it stated – the phrases “greater than remote” and “reasonably likely” do not appear in the judgment. And that phrase “reasonably likely” is one that, as I say, makes its way into the Partridge Review, and the ICO’s assessment of the lawfulness of HES data “sale”.

I begin to wonder if the ICO has taken the phrase from recital 26 of the Directive, which talks about the need to consider “all the means likely reasonably to be used” to identify an individual, and transformed it into a position from which, if identification is not reasonably likely, it will accept that data are anonymised. This cannot be right: there is a world of difference between a test which considers whether the possibility of identification is “extremely remote” and one which considers whether it is “reasonably likely”.

I do not have a specific right to a review of the section 42 assessment decision that the processing of my personal data was likely in compliance with NHSIC’s obligations under the DPA, but I have asked for one. I am aware of course that others complained (après moi, le déluge), notably, in March, FIPR, MedConfidential and Big Brother Watch. I suspect they will also be pursuing this.

In October this year I attended an event at which the ICO’s Iain Bourne spoke. Iain was a key figure in the drawing up of the ICO’s Anonymisation Code, and I took the rather cheeky opportunity to ask about the HES investigations. He said that his initial view was that NHSIC had been performing good anonymisation practice. This reassured me at the time, but now, after considering this question of whether the Anonymisation Code (and the ICO) adopts the wrong test on the risks of identification, I am less reassured. Maybe “reasonably likely that an individual can be identified” is an appropriate test for determining when data is no longer anonymised, and becomes personal data, but it does not seem to me that the authorities support it.

Postscript Back in August of this year I alerted the ICO to the fact that a local authority had published open data sets which enabled individuals to be identified (for instance, social care and housing clients). More than four months later the data is still up (despite the ICO saying they would raise the issue with the council): is this perhaps because the council has argued that the risk of identification is not “reasonably likely”?

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.

3 Comments

Filed under anonymisation, care.data, Data Protection, Directive 95/46/EC, Information Commissioner, NHS

Russell Brand and the domestic purposes exemption in the Data Protection Act

Was a now-deleted tweet by Russell Brand, revealing a journalist’s private number, caught by data protection law?

Data protection law applies to anyone who “processes” (which includes “disclosure…by transmission”) “personal data” (data relating to an identifiable living individual) as a “data controller” (the person who determines the purposes for which and the manner in which the processing occurs). Rather dramatically, in strict terms, this means that most individuals actually and regularly process personal data as data controllers. And nearly everyone would be caught by the obligations under the Data Protection Act 1998 (DPA), were it not for the exemption at section 36. This provides that

Personal data processed by an individual only for the purposes of that individual’s personal, family or household affairs (including recreational purposes) are exempt from the data protection principles and the provisions of Parts II and III

Data protection nerds will spot that exemption from the data protection principles and Parts II and III of the DPA is effectively an exemption from the whole Act. So in general terms individuals who restrict their processing of personal data to domestic purposes are outwith the DPA’s ambit.

The extent of this exemption in terms of publication of information on the internet is subject to some disagreement. On one side is the Information Commissioner’s Office (ICO) who say in their guidance that it applies when an individual uses an online forum purely for domestic purposes, and on the other side are the Court of Justice of the European Union (and me) who said in the 2003 Lindqvist case that

The act of referring, on an internet page, to various persons and identifying them by name or by other means, for instance by giving their telephone number…constitutes ‘the processing of personal data…[and] is not covered by any of the exceptions…in Article 3(2) of Directive 95/46 [section 36 of the DPA transposes Article 3(2) into domestic law]

Nonetheless, it is clear that publishing personal data on the internet for reasons not purely domestic constitutes an act of processing to which the DPA applies (let us assume that the act of publishing was a deliberate one, determined by the publisher). So when the comedian Russell Brand today decided to tweet a picture of a journalist’s business card, with an arrow pointing towards the journalist’s mobile phone number (which was not, for what it’s worth, already in the public domain – I checked with a Google search) he was processing that journalist’s personal data (note that data relating to an individual’s business life is still their personal data). Can he avail himself of the DPA domestic purposes exemption? No, says the CJEU, of course, following Lindqvist. But no, also, would surely say the ICO: this act by Brand was not purely domestic. Brand has 8.7 million twitter followers – I have no doubt that some will have taken the tweet as an invitation to call the journalist. It is quite possible that some of those calls will be offensive, or abusive, or even threatening.

Whilst I have been drafting this blog post Brand has deleted the tweet: that is to his credit. But of course, when you have so many millions of followers, the damage is already done – the picture is saved to hard drives, is mirrored by other sites, is emailed around. And, I am sure, the journalist will have to change his number, and maybe not much harm will have been caused, but the tweet was nasty, and unfair (although I have no doubt Brand was provoked in some way). If it was unfair (and lacking a legal basis for the publication) it was in contravention of the first data protection principle which requires that personal data be processed fairly and lawfully and with an appropriate legitimating condition. And because – as I submit –  Brand cannot plead the domestic purposes exemption, it was in contravention of the DPA. However, whether the journalist will take any private action, and whether the ICO will take any enforcement action, I doubt.

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.

2 Comments

Filed under Data Protection, Directive 95/46/EC, Information Commissioner, journalism, social media

Do your research. Properly

Campaigning group Big Brother Watch have released a report entitled “NHS Data Breaches”. It purports to show the extent of such “breaches” within the NHS. However, it fails properly to define its terms, and uses very questionable methodology. Most worryingly, I think this sort of flawed research could lead to a reluctance on the part of public sector data controllers to monitor and record data security incidents.

As I checked my news alerts over a mug of contemplative coffee last Friday morning, the first thing I noticed was an odd story from a Bedfordshire news outlet:

Bedford Hospital gets clean bill of health in new data protection breach report, unlike neighbouring counties…From 2011 to 2014 the hospital did not breach the data protection act once, unlike neighbours Northampton where the mental health facility recorded 346 breaches, and Cambridge University Hospitals which registered 535 (the third worst in the country).

Elsewhere I saw that one NHS Trust had apparently breached data protection law 869 times in the same period, but many others, like Bedford Hospital, had not done so once. What was going on – are some NHS Trusts so much worse in terms of legal compliance than others? Are some staffed by people unaware of and unconcerned about patient confidentiality? No. What was going on was that campaigning group Big Brother Watch had released a report with flawed methodology, a misrepresentation of the law and flawed conclusions, which I fear could actually lead to poorer data protection compliance in the future.

I have written before about the need for clear terminology when discussing data protection compliance, and of the confusion which can be caused by sloppiness. The data protection world is very fond of the word “breach”, or “data breach”, and it can be a useful term to describe a data security incident involving compromise or potential compromise of personal data, but the confusion arises because it can also be used to describe, or assumed to apply to, a breach of the law, a breach of the Data Protection Act 1998 (DPA). But a data security incident is not necessarily a breach of a legal obligation in the DPA: the seventh data protection principle in Schedule One requires that

Appropriate technical and organisational measures shall be taken [by a data controller] against unauthorised or unlawful processing of personal data and against accidental loss or destruction of, or damage to, personal data

And section 4(4) of the DPA obliges a data controller to comply with the Schedule One data protection principles. This means that when appropriate technical and organisational measures are taken but unauthorised or unlawful processing, or accidental loss or destruction of, or damage to, personal data nonetheless occurs, the data controller is not in breach of its obligations (at least under the seventh principle). This distinction between a data security incident, and a breach, or contravention, of legal obligations, is one that the Information Commissioner’s Office (ICO) itself has sometimes failed to appreciate (as the First-tier Tribunal found in the Scottish Borders Council case EA/2012/0212). Confusion only increases when one takes into account that under The Privacy and Electronic Communications (EC Directive) Regulations 2003 (PECR) which are closely related to the DPA, and which deal with data security in – broadly – the telecoms arena, there is an actual legislative provision (regulation 2, as amended) which talks in terms of a “personal data breach”, which is

a breach of security leading to the accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access to, personal data transmitted, stored or otherwise processed in connection with the provision of a public electronic communications service

and regulation 5A obliges a relevant data controller to inform the ICO when there has been a “personal data breach”. It is important to note, however, that a “personal data breach” under PECR will not be a breach, or contravention, of the seventh DPA data protection principle, provided the data controller took appropriate technical and organisational measures to safeguard the data.

Things get even more complex when one bears in mind that the draft European General Data Protection Regulation proposes a similar approach as PECR, and defines a “personal data breach” in similar terms as above (simply removing the words “in connection with the provision of a public electronic communications service“).

Notwithstanding this, the Big Brother Watch report is entitled “NHS Data Breaches”, so one would hope that it would have been clear about its own terms. It has led to a lot of coverage, with media outlets picking up on headline-grabbing claims of “7225 breaches” in the NHS between 2011 and 2014, which is equivalent to “6 breaches a day”. But when one looks at the methodology used, serious questions are raised about the research. It used Freedom of Information requests to all NHS Trusts and Bodies, and the actual request was in the following terms

1. The number of a) medical personnel and b) non-medical personnel that have been convicted for breaches of the Data Protection Act.

2. The number of a) medical personnel and b) non-medical personnel that have had their employment terminated for breaches of the Data Protection Act.

3. The number of a) medical personnel and b) non-medical personnel that have been disciplined internally but have not been prosecuted for breaches of the Data Protection Act.

4. The number of a) medical personnel and b) non-medical personnel that have resigned during disciplinary procedures.

5. The number of instances where a breach has not led to any disciplinary action.

The first thing to note is that, in broad terms, the only way that an individual NHS employee can “breach the Data Protection Act” is by committing a criminal offence under section 55 of unlawfully obtaining personal data without the consent of the (employer) data controller. All the other relevant legal obligations under the DPA are ones attaching to the NHS body itself, as data controller. Thus, by section 4(4) the NHS body has an obligation to comply with the data protection principles in Schedule One of the DPA, not individual employees. And so, except in the most serious of cases, where an employee acts without the consent of the employer to unlawfully obtain personal data, individual employees, whether medical or non-medical personnel, cannot as a matter of law “breach the Data Protection Act”.

One might argue that it is easy to infer that what Big Brother Watch meant to ask for was information about the number of times when actions of individual employees meant that their employer NHS body had breached its obligations under the DPA, and, yes, that is probably what was meant, but the incorrect terms and lack of clarity vitiated the purported research from the start. This is because NHS bodies have to comply with the NHS/Department of Health Information Governance Toolkit. This toolkit actually requires NHS bodies to record serious data security incidents even where those incidents did not, in fact, constitute a breach of the body’s obligations under the DPA (i.e. incidents might be recorded which were “near misses” or which did not constitute a failure of the obligation to comply with the seventh, data security, principle).

The results Big Brother Watch got in response to their ambiguous and inaccurately termed FOI request show that some NHS bodies clearly interpreted it expansively, to encompass all data security incidents, while others – those with zero returns in any of the fields, for instance – clearly interpreted it restrictively. In fact, in at least one case an NHS Trust highlighted that its return included “near misses”, but these were still categorised by Big Brother Watch as “breaches”.

And this is not unimportant: data security and data protection are of immense importance in the NHS, which has to handle huge amounts of highly sensitive personal data, often under challenging circumstances. Awful contraventions of the DPA do occur, but so too do individual and unavoidable instances of human error. The best data controllers will record and act on the latter, even though they don’t give rise to liability under the DPA, and they should be applauded for doing so. Naming and shaming NHS bodies on the basis of such flawed research methodology might well achieve Big Brother Watch’s aim of publicising its call for greater sanctions for criminal offences, but I worry that it might lead to some data controllers being wary of recording incidents, for fear that they will be disclosed and misinterpreted in the pursuit of questionable research.

1 Comment

Filed under Data Protection, Freedom of Information, Information Commissioner, NHS

Samaritans cannot deny being data controller for #samaritansradar

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.

So, Samaritans continue to support the #samaritansradar app, about which I, and many others, have already written. A large number of people suffering from, or with experience of, mental health problems have pleaded with Samaritans to withdraw the app, which monitors the tweets of the people one follows on twitter, applies an algorithm to identify tweets from potentially vulnerable people, and emails that information to the app user, all without the knowledge of the person involved. As Paul Bernal has eloquently said, this is not really an issue about privacy, and nor is it about data protection – it is about the threat many vulnerable people feel from the presence of the app. Nonetheless, privacy and data protection law are, in part, about the rights of the vulnerable; last night (4 November) Samaritans issued their latest sparse statement, part of which dealt with data protection:

We have taken the time to seek further legal advice on the issues raised. Our continuing view is that Samaritans Radar is compliant with the relevant data protection legislation for the following reasons:

o   We believe that Samaritans are neither the data controller or data processor of the information passing through the app

o   All information identified by the app is available on Twitter, in accordance with Twitter’s Ts&Cs (link here). The app does not process private tweets.

o   If Samaritans were deemed to be a data controller, given that vital interests are at stake, exemptions from data protection law are likely to apply

It is interesting that there is reference here to “further” legal advice: none of the previous statements from Samaritans had given any indication that legal or data protection advice had been sought prior to the launch of the app. It would be enormously helpful to discussion of the issue if Samaritans actually disclosed their advice, but I doubt very much that they will do so. Nonetheless, their position appears to be at odds with the legal authorities.

In May this year the Court of Justice of the European Union (CJEU) gave its ruling in the Google Spain case. The most widely covered aspect of that case was, of course, the extent of a right to be forgotten – a right to require Google to remove search terms in certain specified cases. But the CJEU also was asked to rule on the question of whether a search engine, such as Google, was a data controller in circumstances in which it engages in the indexing of web pages. Before the court Google argued that

the operator of a search engine cannot be regarded as a ‘controller’ in respect of that processing since it has no knowledge of those data and does not exercise control over the data

and this would appear to be a similar position to that adopted by Samaritans in the first bullet point above. However, the CJEU dismissed Google’s argument, holding that

the operator of a search engine ‘collects’ such data which it subsequently ‘retrieves’, ‘records’ and ‘organises’ within the framework of its indexing programmes, ‘stores’ on its servers and, as the case may be, ‘discloses’ and ‘makes available’ to its users in the form of lists of search results…It is the search engine operator which determines the purposes and means of that activity and thus of the processing of personal data that it itself carries out within the framework of [the activity at issue] and which must, consequently, be regarded as the ‘controller’ in respect of that processing

Inasmuch as I understand how it works, I would submit that #samaritansradar, while not a search engine as such, collects data (personal data), records and organises it, stores it on servers and discloses it to its users in the form of a result. The app has been developed by and launched by Samaritans, it carries their name and seeks to further their aims: it is clearly “their” app, and they are, as clearly, a data controller with attendant legal responsibilities and liabilities. In further proof of this Samaritans introduced, after the app launch and in response to outcry, a “whitelist” of twitter users who have specifically informed Samaritans that they do not want their tweets to be monitored (update on 30 October). If Samaritans are effectively saying they have no role in the processing of the data, how on earth would such a whitelist be expected to work?

And it’s interesting to consider the apparent alternative view that they are implicitly putting forward. If they are not data controller, then who is? The answer must be the users who download and run the app, who would attract all the legal obligations that go with being a data controller. The Samaritans appear to want to back out of the room, leaving app users to answer all the awkward questions.1

Also very interesting is that Samaritans clearly accept that others might have a different view to theirs on the issue of controllership; they suggest that if they were held to be a data controller they would avail themselves of “exemptions” in data protection law relating to “vital interest” to legitimise their activities. One presumes this to be a reference to certain conditions in Schedule 2 and 3 of the Data Protection Act 1998 (DPA). Those schedules contain conditions which must be met, in order for the processing of, respectively, personal data and sensitive personal data, to be fair and lawful. As we are here clearly talking about sensitive personal data (personal data relating to someone’s physical or mental health is classed as sensitive), let us look at the relevant condition in Schedule 3:

The processing is necessary—
(a) in order to protect the vital interests of the data subject or another person, in a case where—
(i) consent cannot be given by or on behalf of the data subject, or
(ii) the data controller cannot reasonably be expected to obtain the consent of the data subject, or
(b) in order to protect the vital interests of another person, in a case where consent by or on behalf of the data subject has been unreasonably withheld

Samaritans’ alternative defence founders on the first four words: in what way can this processing be necessary to protect vital interests? The Information Commissioner’s Office explains that this condition only applies

in cases of life or death, such as where an individual’s medical history is disclosed to a hospital’s A&E department treating them after a serious road accident

The evidence suggests this app is actually delivering a very large number of false positives (as it’s based on what seems to be a crude keyword algorithm, this is only to be expected; the toy example below illustrates why). Given that, and, indeed, given that Samaritans have – expressly – no control over what happens once the app notifies a user of a concerning tweet, it is absolutely preposterous to suggest that the processing is necessary to protect people’s vital interests. Moreover, the condition above also explains that it can only be relied on where consent cannot be given by the data subject or the controller cannot reasonably be expected to obtain consent. Nothing prevents Samaritans from operating an app which would do the same thing (flag a tweet of concern) but base it on a consent model, whereby someone agrees that their tweets will be monitored in that way. Indeed, such a model would fit better with Samaritans’ stated aim of allowing people to “lead the conversation at their own pace”. It is clear, nonetheless, that consent could be sought for this processing, but that Samaritans have failed to design an app which allows it to be sought.
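
Samaritans have not published how the algorithm works, so what follows is no more than a toy sketch of naive keyword matching, with an entirely invented phrase list, to show why false positives are inevitable with this sort of approach: a matcher like this has no sense of context, tone or irony.

# Toy illustration only: the real Samaritans Radar algorithm has not been
# published, and these phrases are invented for the example.
WORRYING_PHRASES = ["can't go on", "give up", "had enough"]

def flags_tweet(text: str) -> bool:
    """Return True if any 'worrying' phrase appears anywhere in the tweet."""
    text = text.lower()
    return any(phrase in text for phrase in WORRYING_PHRASES)

# A genuinely concerning tweet and two entirely harmless ones are flagged alike:
print(flags_tweet("I really can't go on like this"))                 # True
print(flags_tweet("That referee should just give up and go home"))   # True
print(flags_tweet("Had enough coffee today to power a small town"))  # True

On matching of that kind, all three tweets would be pushed to followers as alerts, which is entirely consistent with the volume of false positives being reported.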

The Information Commissioner’s Office is said to be looking into the issues raised by Samaritans’ app. It may be that it will only be through legal enforcement action that it will actually be – as I think it should – removed. But it would be extremely sad if it came to that. It should be removed voluntarily by Samaritans, so they can rethink, re-programme, take full legal advice, but – most importantly – listen to the voices of the most vulnerable, who feel so threatened and betrayed by the app.

1 On a strict and nuanced analysis of data protection law users of the app probably are data controllers, acting as joint ones with Samaritans. However, given the regulatory approach of the Information Commissioner they would probably be able to avail themselves of the general exemption from all of the DPA for processing which is purely domestic (although even that is arguably wrong). These are matters for another blog post however, and the fact that users might be held to be data controllers doesn’t alter the fact that Samaritans are, and in a much clearer way.

43 Comments

Filed under consent, Data Protection, Information Commissioner, Privacy, social media

No harm done

Why does nobody listen to me?

Quite a few media outlets and commentators have picked up on the consultation by the Department for Culture, Media and Sport I blogged about recently. The consultation is about the possibility of legislative change to make it easier for the Information Commissioner’s Office (ICO) to “fine” (in reality, serve a civil monetary penalty notice on) people or organisations who commit serious contraventions of ePrivacy law in sending unsolicited electronic marketing messages (aka spam calls, texts, emails etc).

However, almost every report I have seen has missed a crucial point. So, we have The Register saying “ICO to fine UNBIDDEN MARKETEERS who cause ‘ANXIETY’…Inconvenience, annoyance also pass the watchdog’s stress test”, and Pinsent Masons, Out-Law.com saying “Unsolicited marketing causing ‘annoyance, inconvenience or anxiety’ could result in ICO fine”. We even have 11KBW’s formidable Christopher Knight saying

the DCMS has just launched a consultation exercise on amending PECR with a view to altering the test from “substantial damage or distress” to causing “annoyance, inconvenience or anxiety”

But none of these spot that the preferred option of DCMS and the ICO is actually to go further, and give the ICO the power to serve a monetary penalty notice even when no harm has been shown at all

Remove the existing legal threshold of “substantial damage and distress” (this is the preferred option of both ICO and DCMS. There would be no need to prove “substantial damage and distress”, or any other threshold such as ‘annoyance, inconvenience or anxiety’…

So yes, this is a blog post purely to moan about the fact that people haven’t read my previous post. It’s my blog and I’ll cry if I want to.

UPDATE:

Chris Knight is so formidable that he’s both updated the Panopticon post and pointed out the oddness of option 3 being preferred when nearly all of the consultation paper is predicated on option 2 being victorious.

Leave a comment

Filed under Information Commissioner, marketing, monetary penalty notice, PECR, spam texts

Samaritans Radar – serious privacy concerns raised

UPDATE: 31 October

It appears Samaritans have silently tweaked their FAQs (so the text near the foot of this post no longer appears). They now say tweets will only be retained by the app for seven (as opposed to thirty) days, and have removed the words saying the app will retain a “Count of flags against a Twitter Users Friends ID”. Joe Ferns said on Twitter that the inclusion of this in the original FAQs was “a throw back to a stage of the development where that was being considered”. Samaritans also say “The only people who will be able to see the alerts, and the tweets flagged in them, are followers who would have received these Tweets in their current feed already”, but this does not absolve them of their data controller status: a controller does not need to access data in order to determine the purposes for which and the manner in which personal data are processed, and Samaritans are still doing this. Moreover, this changing of the FAQs, with no apparent change to the position that those whose tweets are processed get no fair processing notice whatsoever, makes me more concerned that this app has been released without adequate assessment of its impact on people’s privacy.

END UPDATE

UPDATE: 30 October

Susan Hall has written a brilliant piece expanding on mine below, and she points out that section 12 of the Data Protection Act 1998 in terms allows a data subject to send a notice to a data controller requiring it to ensure no automated decisions are taken by processing their personal data for the purposes of evaluating matters such as their conduct. It seems to me that is precisely what “Samaritans Radar” does. So I’ve sent the following to Samaritans

Dear Samaritans

This is a notice pursuant to section 12 Data Protection Act 1998. Please ensure that no decision is taken by you or on your behalf (for instance by the “Samaritans Radar” app) based solely on the processing by automatic means of my personal data for the purpose of evaluating my conduct.

Thanks, Jon Baines @bainesy1969

I’ll post here about any developments.

END UPDATE

Samaritans have launched a Twitter App “to help identify vulnerable people”. I have only ever had words of praise and awe about Samaritans and their volunteers, but this time I think they may have misjudged the effect, and the potential legal implications of “Samaritans Radar”. Regarding the effect, this post from former volunteer @elphiemcdork is excellent:

How likely are you to tweet about your mental health problems if you know some of your followers would be alerted every time you did? Do you know all your followers? Personally? Are they all friends? What if your stalker was a follower? How would you feel knowing your every 3am mental health crisis tweet was being flagged to people who really don’t have your best interests at heart, to put it mildly? In this respect, this app is dangerous. It is terrifying to think that anyone can monitor your tweets, especially the ones that disclose you may be very vulnerable at that time

As for the legal implications, it may well be that Samaritans are processing sensitive personal data, in circumstances where there may not be a legal basis to do so. And some rather worrying misconceptions have accompanied the app launch. The first and most concerning of these is in the FAQs prepared for the media. In reply to the question “Isn’t there a data privacy issue here? Is Samaritans Radar spying on people?” the following answer is given

All the data used in the app is public, so user privacy is not an issue. Samaritans Radar analyses the Tweets of the people you follow, which are public Tweets. It does not look at private Tweets

The idea that, because something is in the public domain, it cannot engage privacy issues is a horribly simplistic one, and if that constitutes the impact assessment undertaken, then serious questions have to be asked. Moreover, it doesn’t begin to consider the data protection considerations: personal data is personal data, whether it’s in the public domain or not. A tweet from an identified tweeter is inescapably the personal data of that person, and, if it is, or appears to be, about the person’s physical or mental health, then it is sensitive personal data, afforded a higher level of protection under the Data Protection Act 1998 (DPA). It would appear that Samaritans, as the legal person who determines the purposes for which, and the manner in which, the personal data are processed (i.e. they have produced an app which identifies a tweet on the basis of words, or sequences of words, and pushes it to another person), are acting as a data controller. As such, any processing has to be in accordance with their obligation to abide by the data protection principles in Schedule One of the DPA. The first principle says that personal data must be processed fairly and lawfully, and that a condition for processing contained in Schedule Two (and for sensitive personal data Schedules Two and Three) must be met. Looking only at Schedule Three, I struggle to see the condition which permits the app to identify a tweet, decide that it is from a potentially suicidal person and send it as such to a third party. The one condition which might apply, the fifth (“The information contained in the personal data has been made public as a result of steps deliberately taken by the data subject”), is undercut by the fact that the data in question is not just the public tweet, but the “package” of that tweet with the fact that the app (not the tweeter) has identified it as a potential call for help.

The reliance on “all the data used in the app is public, so user privacy is not an issue” has carried through in messages sent on twitter by Samaritans Director of Policy, Research and Development, Joe Ferns, in response to people raising concerns, such as

existing Twitter search means anyone can search tweets unless you have set to private. #SamaritansRadar is like an automated search

Again, this misses the point that it is not just “anyone” doing a search on twitter, it is an app in Samaritans name which specifically identifies (in an automated way) certain tweets as of concern, and pushes them to third parties. Even more concerning was Mr Ferns’ response to someone asking if there was a way to opt out of having their tweets scanned by the app software:

if you use Twitter settings to mark your tweets private #SamaritansRadar will not see them

What he is actually suggesting there is that to avoid what some people clearly feel are intrusive actions they should lock their account and make it private. And, of course, going back to @elphiemcdork’s points, it is hard to avoid the conclusion that those who will do this might be some of the most vulnerable people.

A further concern is raised (one which confirms the data controller point above) about retention and reuse of data. The media FAQ states

Where will all the data be stored? Will it be secure? The data we will store is as follows:
• Twitter User ID – a unique ID that is associated with a Twitter account
• All Twitter User Friends ID’s – The same as above but for all the users friends that they follow
• Any flagged Tweets – This is the data associated with the Tweet, we will store the raw data for the Tweet as well
• Count of flags against a Twitter Users Friends ID – We store a count of flags against an individual User
• To prevent the Database growing exponentially we will remove flagged Tweets that are older than 30 days.

So it appears that Samaritans will be amassing data on unwitting twitter users, and in effect profiling them. This sort of data is terrifically sensitive, and no indication is given regarding the location of this data, or the security measures in place to protect it.
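
To make the profiling point concrete, here is a rough reconstruction in code of the records the FAQ describes. It is a hypothetical sketch based only on the wording quoted above (the actual implementation has not been published, and the names are mine, not Samaritans’), but it shows that what is being kept amounts to a per-person tally of “flagged” behaviour:

from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Hypothetical reconstruction of the storage described in the media FAQ;
# the field names and structure are my own invention, not Samaritans'.

@dataclass
class FlaggedTweet:
    tweet_id: str
    author_id: str      # the "Twitter User Friends ID" of the person who tweeted
    raw_tweet: str      # the FAQ says the raw data for the Tweet is stored too
    flagged_at: datetime

@dataclass
class AppUserRecord:
    user_id: str                                                # app user's Twitter ID
    friend_ids: list[str] = field(default_factory=list)        # everyone they follow
    flag_counts: dict[str, int] = field(default_factory=dict)  # tally of flags per friend
    flagged_tweets: list[FlaggedTweet] = field(default_factory=list)

def purge_old_flags(record: AppUserRecord, now: datetime, days: int = 30) -> None:
    # The FAQ says flagged Tweets older than 30 days are removed "to prevent the
    # Database growing exponentially"; nothing is said about purging the counts.
    cutoff = now - timedelta(days=days)
    record.flagged_tweets = [t for t in record.flagged_tweets if t.flagged_at >= cutoff]

Note that, on the FAQ’s own terms, the per-friend count of flags appears to persist even after the flagged tweets themselves are purged, which is precisely the sort of profile of an unwitting individual that gives rise to the concerns above.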

The Information Commissioner’s Office recently produced some good guidance for app developers on Privacy in Mobile Apps. The guidance commends the use of Privacy Impact Assessments when developing apps. I would be interested to know if one was undertaken for Samaritans Radar, and, if so, how it dealt with the serious concerns that have been raised by many people since its launch.

This post was amended to take into account the observations in the comments by Susan Hall, to whom I give thanks. I have also since seen a number of excellent blog posts dealing with wider concerns. I commend, in particular, this by Adrian Short and this by @latentexistence.

33 Comments

Filed under consent, Data Protection, Information Commissioner, Privacy, social media

DCMS consulting on lower threshold for “fining” spammers

UPDATE: 08.11.14

Rich Greenhill has spotted another odd feature of this consultation. Options one and two both use the formulation “the contravention was deliberate or the person knew or ought to have known that there was a risk that the contravention would occur”, however, option three omits the words “…or ought to have known”. This is surely a typo, because if it were a deliberate omission it would effectively mean that penalties could not be imposed for negligent contraventions (only deliberate or wilful contraventions would qualify). I understand Rich has asked DCMS to clarify this, and will update as and when he hears anything.

END UPDATE

UPDATE: 04.11.14

An interesting development of this story was how many media outlets and commentators reported that the consultation was about lowering the threshold to “likely to cause annoyance, inconvenience or anxiety”, ignoring in the process that the preferred option of DCMS and ICO was for no harm threshold at all. Christopher Knight, on 11KBW’s Panopticon blog kindly amended his piece when I drew this point to his attention. He did, however observe that most of the consultation paper, and DCMS’s website, appeared predicated on the assumption that the lower-harm threshold was at issue. Today, Rich Greenhill informs us all that he has spoken to DCMS, and that their preference is indeed for a “no harm” approach: “Just spoke to DCMS: govt prefers PECR Option 3 (zero harm), its PR is *wrong*”. How very odd.

END UPDATE

The Department for Culture, Media and Sport (DCMS) has announced a consultation on lowering the threshold for imposing financial sanctions on those who unlawfully send electronic direct marketing. They’ve called it a “Nuisance calls consultation”, which, although they explain that it applies equally to nuisance text messages, emails etc., doesn’t adequately describe what could be an important development in electronic privacy regulation.

When, a year ago, the First-tier Tribunal (FTT) upheld the appeal by spam texter Christopher Niebel against the £300,000 monetary penalty notice (MPN) served on him by the Information Commissioner’s Office (ICO), it put the latter in an awkward position. And when the Upper Tribunal dismissed the ICO’s subsequent appeal, there was binding authority on the limits to the ICO’s power to serve MPNs for serious breaches of the Privacy and Electronic Communications (EC Directive) Regulations 2003 (PECR). There was no dispute that, per the mechanism at section 55A of the Data Protection Act 1998 (DPA), adopted by PECR by virtue of regulation 31, Niebel’s contraventions were serious and deliberate, but what was at issue was whether they were “of a kind likely to cause substantial damage or substantial distress”. The FTT held that they were not – no substantial damage would be likely to arise and when it came to distress

the effect of the contravention is likely to be widespread irritation but not widespread distress…we cannot construct a logical likelihood of substantial distress as a result of the contravention.

When the Upper Tribunal agreed with the FTT, and the ICO’s Head of Enforcement said it had “largely [rendered] our power to issue fines for breaches of PECR involving spam texts redundant”, it seemed clear that, for the time being at least, there was in effect a green light for spam texters, and, by extension, other spam electronic marketers. The DCMS consultation is in response to calls for a change in the law from the ICO and others, such as the All Party Parliamentary Group (APPG) on Nuisance Calls, the Direct Marketing Association and Which?.

The consultation proposes three options – 1) do nothing, 2) lower the threshold from “likely to cause substantial damage or substantial distress” to “likely to cause annoyance, inconvenience or anxiety”, or 3) remove the threshold altogether, so any serious and deliberate (or reckless) contravention of the PECR provisions would attract the possibility of a monetary penalty. The third option is the one favoured by DCMS and the ICO.

If either of the second or third options is ultimately enacted, this could, I feel, lead to a significant reduction in the prevalence of spam marketing. The consultation document notes that (despite the fact that the MPN was overturned on appeal) the number of unsolicited spam SMS text messages sent fell significantly after the Niebel MPN was served. A robust and prominent campaign of enforcement under a legislative scheme which makes it much easier to impose penalties of up to £500,000, and much more difficult to appeal them, could put many spammers out of business, and discourage others. This will be subject, of course, both to the willingness and the resources of the ICO. The consultation document notes that there might be “an expectation that [MPNs] would be issued by the ICO in many more cases than its resources permit” but the ICO has said (according to the document) that it is “ready and equipped to investigate and progress a significant number of additional cases with a view to taking greater enforcement action including issuing more CMPs”.

There appears to be little resistance (as yet, at least) to the idea of lowering or removing the penalty threshold. Given that, and given the ICO’s apparent willingness to take on the spammers, we may well see a real and significant attack on the scourge. Of course, this only applies to identifiable spammers in the domestic jurisdiction – let’s hope it doesn’t just drive an increase in non-traceable, overseas spam.

3 Comments

Filed under Data Protection, enforcement, Information Commissioner, Information Tribunal, marketing, monetary penalty notice, nuisance calls, PECR, spam texts, Upper Tribunal