Category Archives: Data Protection

Watching the detective

The ICO might be mostly powerless to take action against the operators of the Russian web site streaming unsecured web cams, but the non-domestic users of the web cams could be vulnerable to enforcement action

The Information Commissioner’s Office (ICO) warned yesterday of the dangers of failing to secure web cams which are connected to the internet. This was on the back of stories about a Russian-based web site which aggregates feeds from thousands of compromised cameras worldwide.

This site was drawn to my attention a few weeks ago, and, although I tweeted obliquely about it, I thought it best not to identify it because of the harm it could potentially cause. However, although most news outlets didn’t identify the site, the cat is now, as they say, out of the bag. No doubt this is why the ICO chose to issue sensible guidance on network security in its blog post.

I also noticed that the Information Commissioner himself, Christopher Graham, rightly pointed to the difficulties in shutting down the site, and the fact that it is users’ responsibility to secure their web cams:

It is not within my jurisdiction, it is not within the European Union, it is Russia.

I will do what I can but don’t wait for me to have sorted this out.

This is, of course, true, and domestic users of web cams would do well to note the advice. Moreover, this is just the latest of these aggregator sites to appear. But news reports suggested that some of the 500-odd (or was it 2000-odd?) feeds on the site from the UK were from cameras of businesses or other non-domestic users (I saw a screenshot, for instance, of a feed from a pizza takeaway). Those users, if their web cams are capturing images of identifiable individuals, are processing personal data in the role of a data controller. And they can’t claim the exemption in the Data Protection Act 1998 (DPA) that applies to processing for purely domestic purposes. They must, therefore, comply with the seventh data protection principle, which requires them to take appropriate measures to safeguard against unauthorised or unlawful processing of personal data. Allowing one’s web cam to be compromised and its feed streamed on a Russian website is a pretty good indication that one is not complying with the seventh principle. Serious contraventions of the obligation to comply with the data protection principles can, of course, lead to ICO enforcement action, such as monetary penalty notices, to a maximum of £500,000.

The ICO is not, therefore, completely powerless here. Arguably it should be (maybe it is?) looking at the feeds on the site to determine which are from non-domestic premises, and looking to take appropriate enforcement action against them. So to that extent, one is rather watching Mr Graham, to see if he can sort this out.

Filed under Data Protection, Information Commissioner, Privacy

The voluntary data controller

One last post on #samaritansradar. I hope.

I am given to understand that Samaritans, having pulled their benighted app, have begun responding to the various legal notices people served on them under the Data Protection Act 1998 (specifically, these were notices under section 7 (subject access), section 10 (right to prevent processing likely to cause damage or distress) and section 12 (rights in relation to automated processing)). I tweeted my section 12 notice, but I doubt I’ll get a response to that, because they’ve never engaged with me once on twitter or elsewhere.

However, I have been shown a response to a section 7 request (which I have permission to blog about), and it continues to raise questions about Samaritans’ handling of this matter (and indeed, their legal advice – which hasn’t been disclosed, or even really hinted at). The response, in relevant part, says

We are writing to acknowledge the subject access request that you sent to Samaritans via DM on 6 November 2014.  Samaritans has taken advice on this matter and believe that we are not a data controller of information passing through the Samaritans Radar app. However, in response to concerns that have been raised, we have agreed to voluntarily take on the obligations of a data controller in an attempt to facilitate requests made as far as we can. To this end, whilst a Subject Access Request made under the Data Protection Act can attract a £10 fee, we do not intend to charge any amount to provide information on this occasion.

So, Samaritans continue to deny being data controller for #samaritansradar, although they continue also merely to give assertions, not any legal analysis. But, notwithstanding their belief that they are not a controller, they are taking on the obligations of a data controller.

I think they need to be careful. A person who knowingly discloses personal data without the consent of the data controller potentially commits a criminal offence under section 55 DPA. One can’t just step in, grab personal data and start processing it, without acting in breach of the law. Unless one is a data controller.

And, in seriousness, this purported adoption of the role of “voluntary data controller” just bolsters the view that Samaritans have been data controllers from the start, for reasons laid out repeatedly on this blog and others. They may have acted as joint data controller with users of the app, but I simply cannot understand how they can claim not to have been determining the purposes for which and the manner in which personal data were processed. And if they were, they were a data controller.

 

Filed under Data Protection, social media

Do your research. Properly

Campaigning group Big Brother Watch have released a report entitled “NHS Data Breaches”. It purports to show the extent of such “breaches” within the NHS. However, it fails properly to define its terms, and uses very questionable methodology. Most worryingly, I think this sort of flawed research could lead to a reluctance on the part of public sector data controllers to monitor and record data security incidents.

As I checked my news alerts over a mug of contemplative coffee last Friday morning, the first thing I noticed was an odd story from a Bedfordshire news outlet:

Bedford Hospital gets clean bill of health in new data protection breach report, unlike neighbouring counties…From 2011 to 2014 the hospital did not breach the data protection act once, unlike neighbours Northampton where the mental health facility recorded 346 breaches, and Cambridge University Hospitals which registered 535 (the third worst in the country).

Elsewhere I saw that one NHS Trust had apparently breached data protection law 869 times in the same period, but many others, like Bedford Hospital, had not done so once. What was going on – are some NHS Trusts so much worse in terms of legal compliance than others? Are some staffed by people unaware and unconcerned about patient confidentiality? No. What was going on was that campaigning group Big Brother Watch had released a report with flawed methodology, a misrepresentation of the law and flawed conclusions, which I fear could actually lead to poorer data protection compliance in the future.

I have written before about the need for clear terminology when discussing data protection compliance, and about the confusion which can be caused by sloppiness. The data protection world is very fond of the word “breach”, or “data breach”, and it can be a useful term to describe a data security incident involving compromise or potential compromise of personal data, but the confusion arises because it can also be used to describe, or assumed to apply to, a breach of the law, a breach of the Data Protection Act 1998 (DPA). But a data security incident is not necessarily a breach of a legal obligation in the DPA: the seventh data protection principle in Schedule One requires that

Appropriate technical and organisational measures shall be taken [by a data controller] against unauthorised or unlawful processing of personal data and against accidental loss or destruction of, or damage to, personal data

And section 4(4) of the DPA obliges a data controller to comply with the Schedule One data protection principles. This means that when appropriate technical and organisational measures are taken but unauthorised or unlawful processing, or accidental loss or destruction of, or damage to, personal data nonetheless occurs, the data controller is not in breach of its obligations (at least under the seventh principle). This distinction between a data security incident, and a breach, or contravention, of legal obligations, is one that the Information Commissioner’s Office (ICO) itself has sometimes failed to appreciate (as the First-tier Tribunal found in the Scottish Borders Council case EA/2012/0212). Confusion only increases when one takes into account that under The Privacy and Electronic Communications (EC Directive) Regulations 2003 (PECR), which are closely related to the DPA, and which deal with data security in – broadly – the telecoms arena, there is an actual legislative provision (regulation 2, as amended) which talks in terms of a “personal data breach”, which is

a breach of security leading to the accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access to, personal data transmitted, stored or otherwise processed in connection with the provision of a public electronic communications service

and regulation 5A obliges a relevant data controller to inform the ICO when there has been a “personal data breach”. It is important to note, however, that a “personal data breach” under PECR will not be a breach, or contravention, of the seventh DPA data protection principle, provided the data controller took appropriate technical and organisational measures to safeguard the data.

Things get even more complex when one bears in mind that the draft European General Data Protection Regulation proposes a similar approach to PECR, and defines a “personal data breach” in similar terms to those above (simply removing the words “in connection with the provision of a public electronic communications service”).

Notwithstanding this, the Big Brother Watch report is entitled “NHS Data Breaches”, so one would hope that it would have been clear about its own terms. It has led to a lot of coverage, with media outlets picking up on headline-grabbing claims of “7225 breaches” in the NHS between 2011 and 2014, which is equivalent to “6 breaches a day”. But when one looks at the methodology used, serious questions are raised about the research. It used Freedom of Information requests to all NHS Trusts and Bodies, and the actual request was in the following terms

1. The number of a) medical personnel and b) non-medical personnel that have been convicted for breaches of the Data Protection Act.

2. The number of a) medical personnel and b) non-medical personnel that have had their employment terminated for breaches of the Data Protection Act.

3. The number of a) medical personnel and b) non-medical personnel that have been disciplined internally but have not been prosecuted for breaches of the Data Protection Act.

4. The number of a) medical personnel and b) non-medical personnel that have resigned during disciplinary procedures.

5. The number of instances where a breach has not led to any disciplinary action.

The first thing to note is that, in broad terms, the only way that an individual NHS employee can “breach the Data Protection Act” is by committing a criminal offence under section 55 of unlawfully obtaining personal data without the consent of the (employer) data controller. All the other relevant legal obligations under the DPA are ones attaching to the NHS body itself, as data controller. Thus, by section 4(4) the NHS body has an obligation to comply with the data protection principles in Schedule One of the DPA, not individual employees. And so, except in the most serious of cases, where an employee acts without the consent of the employer to unlawfully obtain personal data, individual employees, whether medical or non-medical personnel, cannot as a matter of law “breach the Data Protection Act”.

One might argue that it is easy to infer that what Big Brother Watch meant to ask for was information about the number of times when actions of individual employees meant that their employer NHS body had breached its obligations under the DPA, and, yes, that is probably what was meant, but the incorrect terms and lack of clarity vitiated the purported research from the start. This is because NHS bodies have to comply with the NHS/Department of Health Information Governance Toolkit. This toolkit actually requires NHS bodies to record serious data security incidents even where those incidents did not, in fact, constitute a breach of the body’s obligations under the DPA (i.e. incidents might be recorded which were “near misses” or which did not constitute a failure of the obligation to comply with the seventh, data security, principle).

The results Big Brother Watch got in response to their ambiguous and inaccurately termed FOI request show that some NHS bodies clearly interpreted it expansively, to encompass all data security incidents, while others – those with zero returns in any of the fields, for instance – clearly interpreted it restrictively. In fact, in at least one case an NHS Trust highlighted that its return included “near misses”, but these were still categorised by Big Brother Watch as “breaches”.

And this is not unimportant: data security and data protection are of immense importance in the NHS, which has to handle huge amounts of highly sensitive personal data, often under challenging circumstances. Awful contraventions of the DPA do occur, but so too do individual and unavoidable instances of human error. The best data controllers will record and act on the latter, even though they don’t give rise to liability under the DPA, and they should be applauded for doing so. Naming and shaming NHS bodies on the basis of such flawed research methodology might well achieve Big Brother Watch’s aim of publicising its call for greater sanctions for criminal offences, but I worry that it might lead to some data controllers being wary of recording incidents, for fear that they will be disclosed and misinterpreted in the pursuit of questionable research.

Filed under Data Protection, Freedom of Information, Information Commissioner, NHS

So farewell then #samaritansradar…

…or should that be au revoir?

With an interestingly timed announcement (18:00 on a Friday evening) Samaritans conceded that they were pulling their much-heralded-then-much-criticised app “Samaritans Radar”, and, as if some of us didn’t feel conflicted enough criticising such a normally laudable charity, their Director of Policy Joe Ferns managed to get a dig in, hidden in what was purportedly an apology:

We are very aware that the range of information and opinion, which is circulating about Samaritans Radar, has created concern and worry for some people and would like to apologise to anyone who has inadvertently been caused any distress

So, you see, it wasn’t the app, and the creepy feeling of having all one’s tweets closely monitored for potentially suicidal expressions, which caused concern and worry and distress – it was all those nasty people expressing a range of information and opinion. Maybe if we’d all kept quiet the app could have continued on its unlawful and unethical merry way.

However, although the app has been pulled, it doesn’t appear to have gone away

We will…be testing a number of potential changes and adaptations to the app to make it as safe and effective as possible for both subscribers and their followers

There is a survey at the foot of this page which seeks feedback and comment. I’ve completed it, and would urge others to do so. I’ve also given my name and contact details, because one of my main criticisms of the launch of the app was that there was no evidence that Samaritans had taken advice from anyone on its data protection implications – and I’m happy to give such advice for no fee. As Paul Bernal says, “[Samaritans] need to talk to the very people who brought down the app: the campaigners, the Twitter activists and so on”.

Data protection law’s place in our digital lives is of profound importance, and of profound interest to me. Let’s not forget that its genesis in the 1960s and 1970s was in the concerns raised by the extraordinary advances that computing brought to data analysis. For me some of the most irritating counter-criticism during the recent online debates about Samaritans Radar was from people who equated what the app did to mere searching of tweets, or searching for keywords. As I said before, the sting of this app lay in the overall picture – it was developed, launched and promoted by Samaritans – and in the overall processing of data which went on – it monitored tweets, identified potentially worrying ones and pushed this information to a third party, all without the knowledge of the data subject.

But also irritating were comments from people who told us that other organisations do similar analytics, for commercial reasons, so why, the implication went, shouldn’t Samaritans do it for virtuous ones? It is no secret that an enormous amount of analysis takes place of information on social media, and people should certainly be aware of this (see Adrian Short’s excellent piece here for some explanation), but the fact that it can and does take place a) doesn’t mean that it is necessarily lawful, nor that the law is impotent within the digital arena, and b) doesn’t mean that it is necessarily ethical. And for both those reasons Samaritans Radar was an ill-judged experiment that should never have taken place as it did. If any replacement is to be both ethical and lawful a lot of work, and a lot of listening, needs to be done.

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.

Filed under Data Protection, social media

Samaritans cannot deny being data controller for #samaritansradar

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.

So, Samaritans continue to support the #samaritansradar app, about which I, and many others, have already written. A large number of people suffering from, or with experience of, mental health problems have pleaded with Samaritans to withdraw the app, which monitors the tweets of the people one follows on twitter, applies an algorithm to identify tweets from potentially vulnerable people, and emails that information to the app user, all without the knowledge of the person involved. As Paul Bernal has eloquently said, this is not really an issue about privacy, and nor is it about data protection – it is about the threat many vulnerable people feel from the presence of the app. Nonetheless, privacy and data protection law, in part, are about the rights of the vulnerable; last night (4 November) Samaritans issued their latest sparse statement, part of which dealt with data protection:

We have taken the time to seek further legal advice on the issues raised. Our continuing view is that Samaritans Radar is compliant with the relevant data protection legislation for the following reasons:

o   We believe that Samaritans are neither the data controller or data processor of the information passing through the app

o   All information identified by the app is available on Twitter, in accordance with Twitter’s Ts&Cs (link here). The app does not process private tweets.

o   If Samaritans were deemed to be a data controller, given that vital interests are at stake, exemptions from data protection law are likely to apply

It is interesting that there is reference here to “further” legal advice: none of the previous statements from Samaritans had given any indication that legal or data protection advice had been sought prior to the launch of the app. It would be enormously helpful to discussion of the issue if Samaritans actually disclosed their advice, but I doubt very much that they will do so. Nonetheless, their position appears to be at odds with the legal authorities.

In May this year the Court of Justice of the European Union (CJEU) gave its ruling in the Google Spain case. The most widely covered aspect of that case was, of course, the extent of a right to be forgotten – a right to require Google to remove search terms in certain specified cases. But the CJEU also was asked to rule on the question of whether a search engine, such as Google, was a data controller in circumstances in which it engages in the indexing of web pages. Before the court Google argued that

the operator of a search engine cannot be regarded as a ‘controller’ in respect of that processing since it has no knowledge of those data and does not exercise control over the data

and this would appear to be a similar position to that adopted by Samaritans in the first bullet point above. However, the CJEU dismissed Google’s argument, holding that

the operator of a search engine ‘collects’ such data which it subsequently ‘retrieves’, ‘records’ and ‘organises’ within the framework of its indexing programmes, ‘stores’ on its servers and, as the case may be, ‘discloses’ and ‘makes available’ to its users in the form of lists of search results…It is the search engine operator which determines the purposes and means of that activity and thus of the processing of personal data that it itself carries out within the framework of [the activity at issue] and which must, consequently, be regarded as the ‘controller’ in respect of that processing

Inasmuch as I understand how it works, I would submit that #samaritansradar, while not a search engine as such, collects data (personal data), records and organises it, stores it on servers and discloses it to its users in the form of a result. The app has been developed and launched by Samaritans; it carries their name and seeks to further their aims: it is clearly “their” app, and they are, as clearly, a data controller with attendant legal responsibilities and liabilities. In further proof of this, Samaritans introduced, after the app launch and in response to outcry, a “whitelist” of twitter users who have specifically informed Samaritans that they do not want their tweets to be monitored (update on 30 October). If Samaritans are effectively saying they have no role in the processing of the data, how on earth would such a whitelist be expected to work?

And it’s interesting to consider the apparent alternative view that they are implicitly putting forward. If they are not the data controller, then who is? The answer must be the users who download and run the app, who would attract all the legal obligations that go with being a data controller. The Samaritans appear to want to back out of the room, leaving app users to answer all the awkward questions.¹

Also very interesting is that Samaritans clearly accept that others might have a different view to theirs on the issue of controllership; they suggest that if they were held to be a data controller, they would avail themselves of “exemptions” in data protection law relating to “vital interest” to legitimise their activities. One presumes this to be a reference to certain conditions in Schedules 2 and 3 of the Data Protection Act 1998 (DPA). Those schedules contain conditions which must be met in order for the processing of, respectively, personal data and sensitive personal data, to be fair and lawful. As we are here clearly talking about sensitive personal data (personal data relating to someone’s physical or mental health is classed as sensitive), let us look at the relevant condition in Schedule 3:

The processing is necessary—
(a) in order to protect the vital interests of the data subject or another person, in a case where—
(i) consent cannot be given by or on behalf of the data subject, or
(ii) the data controller cannot reasonably be expected to obtain the consent of the data subject, or
(b) in order to protect the vital interests of another person, in a case where consent by or on behalf of the data subject has been unreasonably withheld

Samaritans’ alternative defence founders on the first four words: in what way can this processing be necessary to protect vital interests? The Information Commissioner’s Office explains that this condition only applies

in cases of life or death, such as where an individual’s medical history is disclosed to a hospital’s A&E department treating them after a serious road accident

The evidence suggests this app is actually delivering a very large number of false positives (as it’s based on what seems to be a crude keyword algorithm, this is only to be expected). Given that, and, indeed, given that Samaritans have – expressly – no control over what happens once the app notifies a user of a concerning tweet, it is absolutely preposterous to suggest that the processing is necessary to protect people’s vital interests. Moreover, the condition above also explains that it can only be relied on where consent cannot be given by the data subject or the controller cannot reasonably be expected to obtain consent. Nothing prevents Samaritans from operating an app which would do the same thing (flag a tweet of concern) but basing it on a consent model, whereby someone agrees that their tweets will be monitored in that way. Indeed, such a model would fit better with Samaritans’ stated aim of allowing people to “lead the conversation at their own pace”. It is clear, nonetheless, that consent could be sought for this processing, but that Samaritans have failed to design an app which allows it to be sought.

The Information Commissioner’s Office is said to be looking into the issues raised by Samaritans’ app. It may be that it will only be through legal enforcement action that it will actually be – as I think it should – removed. But it would be extremely sad if it came to that. It should be removed voluntarily by Samaritans, so they can rethink, re-programme, take full legal advice, but – most importantly – listen to the voices of the most vulnerable, who feel so threatened and betrayed by the app.

¹On a strict and nuanced analysis of data protection law, users of the app probably are data controllers, acting as joint ones with Samaritans. However, given the regulatory approach of the Information Commissioner, they would probably be able to avail themselves of the general exemption from all of the DPA for processing which is purely domestic (although even that is arguably wrong). These are matters for another blog post, however, and the fact that users might be held to be data controllers doesn’t alter the fact that Samaritans are, and in a much clearer way.

Filed under consent, Data Protection, Information Commissioner, Privacy, social media

Samaritans Radar – serious privacy concerns raised

UPDATE: 31 October

It appears Samaritans have silently tweaked their FAQs (so the text near the foot of this post no longer appears). They now say tweets will only be retained by the app for seven (as opposed to thirty) days, and have removed the words saying the app will retain a “Count of flags against a Twitter Users Friends ID”. Joe Ferns said on Twitter that the inclusion of this in the original FAQs was “a throw back to a stage of the development where that was being considered”. Samaritans also say “The only people who will be able to see the alerts, and the tweets flagged in them, are followers who would have received these Tweets in their current feed already”, but this does not absolve them of their data controller status: a controller does not need to access data in order to determine the purposes for which and the manner in which personal data are being processed, and they are still doing this. Moreover, this changing of the FAQs, with no apparent change to the position that those whose tweets are processed get no fair processing notice whatsoever, makes me more concerned that this app has been released without adequate assessment of its impact on people’s privacy.

END UPDATE

UPDATE: 30 October

Susan Hall has written a brilliant piece expanding on mine below, and she points out that section 12 of the Data Protection Act 1998 in terms allows a data subject to send a notice to a data controller requiring it to ensure no automated decisions are taken by processing their personal data for the purposes of evaluating matters such as their conduct. It seems to me that is precisely what “Samaritans Radar” does. So I’ve sent the following to Samaritans

Dear Samaritans

This is a notice pursuant to section 12 Data Protection Act 1998. Please ensure that no decision is taken by you or on your behalf (for instance by the “Samaritans Radar” app) based solely on the processing by automatic means of my personal data for the purpose of evaluating my conduct.

Thanks, Jon Baines @bainesy1969

I’ll post here about any developments.

END UPDATE

Samaritans have launched a Twitter App “to help identify vulnerable people”. I have only ever had words of praise and awe about Samaritans and their volunteers, but this time I think they may have misjudged the effect, and the potential legal implications of “Samaritans Radar”. Regarding the effect, this post from former volunteer @elphiemcdork is excellent:

How likely are you to tweet about your mental health problems if you know some of your followers would be alerted every time you did? Do you know all your followers? Personally? Are they all friends? What if your stalker was a follower? How would you feel knowing your every 3am mental health crisis tweet was being flagged to people who really don’t have your best interests at heart, to put it mildly? In this respect, this app is dangerous. It is terrifying to think that anyone can monitor your tweets, especially the ones that disclose you may be very vulnerable at that time

As for the legal implications, it seems that Samaritans may well be processing sensitive personal data, in circumstances where there may not be a legal basis to do so. And some rather worrying misconceptions have accompanied the app launch. The first and most concerning of these is in the FAQs prepared for the media. In reply to the question “Isn’t there a data privacy issue here? Is Samaritans Radar spying on people?” the following answer is given

All the data used in the app is public, so user privacy is not an issue. Samaritans Radar analyses the Tweets of the people you follow, which are public Tweets. It does not look at private Tweets

The idea that, because something is in the public domain, it cannot engage privacy issues is a horribly simplistic one, and if that constitutes the impact assessment undertaken, then serious questions have to be asked. Moreover, it doesn’t begin to consider the data protection considerations: personal data is personal data, whether it’s in the public domain or not. A tweet from an identified tweeter is inescapably the personal data of that person, and, if it is, or appears to be, about the person’s physical or mental health, then it is sensitive personal data, afforded a higher level of protection under the Data Protection Act 1998 (DPA). It would appear that Samaritans, as the legal person who determines the purposes for which, and the manner in which, the personal data are processed (i.e. they have produced an app which identifies a tweet on the basis of words, or sequences of words, and pushes it to another person), are acting as a data controller. As such, any processing has to be in accordance with their obligation to abide by the data protection principles in Schedule One of the DPA. The first principle says that personal data must be processed fairly and lawfully, and that a condition for processing contained in Schedule Two (and, for sensitive personal data, Schedules Two and Three) must be met. Looking only at Schedule Three, I struggle to see the condition which permits the app to identify a tweet, decide that it is from a potentially suicidal person and send it as such to a third party. The one condition which might apply, the fifth (“The information contained in the personal data has been made public as a result of steps deliberately taken by the data subject”), is undercut by the fact that the data in question is not just the public tweet, but the “package” of that tweet with the fact that the app (not the tweeter) has identified it as a potential call for help.
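Samaritans have not published how the app actually decides which tweets to flag, so the following is no more than an illustrative sketch of the kind of crude keyword matching described above; the trigger phrases, names and output fields are all my invention. What it shows is that the output is not the bare public tweet, but a new package of tweet plus inference, assembled according to rules the tweeter knows nothing about.

# Hypothetical sketch only: the real Samaritans Radar algorithm has not been published.
# It illustrates crude keyword matching, and the "package" of tweet-plus-inference
# that is pushed to a third party without the tweeter's knowledge.

TRIGGER_PHRASES = ["want to die", "can't go on", "hate myself", "no way out"]  # invented

def flag_tweet(author, text):
    """Return a flag 'package' if the tweet matches any trigger phrase, otherwise None."""
    matches = [phrase for phrase in TRIGGER_PHRASES if phrase in text.lower()]
    if not matches:
        return None
    return {
        "author": author,                       # the (unwitting) data subject
        "tweet": text,                          # the public tweet itself
        "matched_phrases": matches,             # why the app flagged it
        "inference": "possible call for help",  # the new, sensitive, information
    }

# Crude matching of this kind will inevitably generate false positives:
print(flag_tweet("@example_user", "Monday mornings make me want to die, lol"))

Even on this toy version, it is the operator of the algorithm, not the tweeter, who determines the manner in which the data are processed.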

The reliance on “all the data used in the app is public, so user privacy is not an issue” has carried through in messages sent on twitter by Samaritans’ Director of Policy, Research and Development, Joe Ferns, in response to people raising concerns, such as

existing Twitter search means anyone can search tweets unless you have set to private. #SamaritansRadar is like an automated search

Again, this misses the point that it is not just “anyone” doing a search on twitter: it is an app in Samaritans’ name which specifically identifies (in an automated way) certain tweets as of concern, and pushes them to third parties. Even more concerning was Mr Ferns’ response to someone asking if there was a way to opt out of having their tweets scanned by the app software:

if you use Twitter settings to mark your tweets private #SamaritansRadar will not see them

What he is actually suggesting there is that, to avoid what some people clearly feel are intrusive actions, they should lock their account and make it private. And, of course, going back to @elphiemcdork’s points, it is hard to avoid the conclusion that those who will do this might be some of the most vulnerable people.

A further concern is raised (one which confirms the data controller point above) about retention and reuse of data. The media FAQ states

Where will all the data be stored? Will it be secure? The data we will store is as follows:
• Twitter User ID – a unique ID that is associated with a Twitter account
• All Twitter User Friends ID’s – The same as above but for all the users friends that they follow
• Any flagged Tweets – This is the data associated with the Tweet, we will store the raw data for the Tweet as well
• Count of flags against a Twitter Users Friends ID – We store a count of flags against an individual User
• To prevent the Database growing exponentially we will remove flagged Tweets that are older than 30 days.

So it appears that Samaritans will be amassing data on unwitting twitter users, and in effect profiling them. This sort of data is terrifically sensitive, and no indication is given regarding the location of this data, or the security measures in place to protect it.
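Taking the FAQ at its word, the stored data would amount to something like the following structure. This is purely my own sketch: the field names and types are guesses from the FAQ wording, not anything Samaritans have published.

# Illustrative only: a minimal model of the stored data as described in the FAQ above.
# All field names and types are assumptions drawn from that wording.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, List

@dataclass
class FlaggedTweet:
    tweet_id: str
    raw_tweet: dict        # "we will store the raw data for the Tweet as well"
    flagged_at: datetime   # needed to remove flagged Tweets "older than 30 days"

@dataclass
class RadarSubscriber:
    twitter_user_id: str                                         # the app user
    friend_ids: List[str] = field(default_factory=list)          # everyone they follow
    flagged_tweets: Dict[str, List[FlaggedTweet]] = field(default_factory=dict)
    flag_counts: Dict[str, int] = field(default_factory=dict)    # friend ID -> running tally

The count of flags is the telling field: a running tally of how often each (unwitting) person has been flagged is, in substance, a longitudinal record of their apparent mental state.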

The Information Commissioner’s Office recently produced some good guidance for app developers on Privacy in Mobile Apps. The guidance commends the use of Privacy Impact Assessments when developing apps. I would be interested to know if one was undertaken for Samaritans Radar, and, if so, how it dealt with the serious concerns that have been raised by many people since its launch.

This post was amended to take into account the observations in the comments by Susan Hall, to whom I give thanks. I have also since seen a number of excellent blog posts dealing with wider concerns. I commend, in particular, this by Adrian Short and this by @latentexistence

 

 

Filed under consent, Data Protection, Information Commissioner, Privacy, social media

DCMS consulting on lower threshold for “fining” spammers

UPDATE: 08.11.14

Rich Greenhill has spotted another odd feature of this consultation. Options one and two both use the formulation “the contravention was deliberate or the person knew or ought to have known that there was a risk that the contravention would occur”, however, option three omits the words “…or ought to have known”. This is surely a typo, because if it were a deliberate omission it would effectively mean that penalties could not be imposed for negligent contraventions (only deliberate or wilful contraventions would qualify). I understand Rich has asked DCMS to clarify this, and will update as and when he hears anything.

END UPDATE

UPDATE: 04.11.14

An interesting development of this story was how many media outlets and commentators reported that the consultation was about lowering the threshold to “likely to cause annoyance, inconvenience or anxiety”, ignoring in the process that the preferred option of DCMS and the ICO was for no harm threshold at all. Christopher Knight, on 11KBW’s Panopticon blog, kindly amended his piece when I drew this point to his attention. He did, however, observe that most of the consultation paper, and DCMS’s website, appeared predicated on the assumption that the lower-harm threshold was at issue. Today, Rich Greenhill informs us all that he has spoken to DCMS, and that their preference is indeed for a “no harm” approach: “Just spoke to DCMS: govt prefers PECR Option 3 (zero harm), its PR is *wrong*”. How very odd.

END UPDATE

The Department for Culture, Media and Sport (DCMS) has announced a consultation on lowering the threshold for the imposition of financial sanctions on those who unlawfully send electronic direct marketing. They’ve called it a “Nuisance calls consultation”, which, although they explain that it applies equally to nuisance text messages, emails etc., doesn’t adequately describe what could be an important development in electronic privacy regulation.

When, a year ago, the First-tier Tribunal (FTT) upheld the appeal by spam texter Christopher Niebel against the £300,000 monetary penalty notice (MPN) served on him by the Information Commissioner’s Office (ICO), it put the latter in an awkward position. And when the Upper Tribunal dismissed the ICO’s subsequent appeal, there was binding authority on the limits to the ICO’s power to serve MPNs for serious breaches of the Privacy and Electronic Communications (EC Directive) Regulations 2003 (PECR). There was no dispute that, per the mechanism at section 55A of the Data Protection Act 1998 (DPA), adopted by PECR by virtue of regulation 31, Niebel’s contraventions were serious and deliberate, but what was at issue was whether they were “of a kind likely to cause substantial damage or substantial distress”. The FTT held that they were not – no substantial damage would be likely to arise and when it came to distress

the effect of the contravention is likely to be widespread irritation but not widespread distress…we cannot construct a logical likelihood of substantial distress as a result of the contravention.

When the Upper Tribunal agreed with the FTT, and the ICO’s Head of Enforcement said it had “largely [rendered] our power to issue fines for breaches of PECR involving spam texts redundant”, it seemed clear that, for the time being at least, there was in effect a green light for spam texters, and, by extension, other spam electronic marketers. The DCMS consultation is in response to calls for a change in the law from the ICO and others, such as the All Party Parliamentary Group (APPG) on Nuisance Calls, the Direct Marketing Association and Which?.

The consultation proposes three options – 1) do nothing, 2) lower the threshold from “likely to cause substantial damage or substantial distress” to “likely to cause annoyance, inconvenience or anxiety”, or 3) remove the threshold altogether, so any serious and deliberate (or reckless) contravention of the PECR provisions would attract the possibility of a monetary penalty. The third option is the one favoured by DCMS and the ICO.

If either of the second or third options is ultimately enacted, this could, I feel, lead to a significant reduction in the prevalence of spam marketing. The consultation document notes that (despite the fact that the MPN was overturned on appeal) the number of unsolicited spam SMS text messages sent fell significantly after the Niebel MPN was served. A robust and prominent campaign of enforcement under a legislative scheme which makes it much easier to impose penalties to a maximum of £500,000, and much more difficult to appeal them, could put many spammers out of business, and discourage others. This will be subject, of course, both to the willingness and the resources of the ICO. The consultation document notes that there might be “an expectation that [MPNs] would be issued by the ICO in many more cases than its resources permit” but the ICO has said (according to the document) that it is “ready and equipped to investigate and progress a significant number of additional cases with a view to taking greater enforcement action including issuing more CMPs”.

There appears to be little resistance (as yet, at least) to the idea of lowering or removing the penalty threshold. Given that, and given the ICO’s apparent willingness to take on the spammers, we may well see a real and significant attack on the scourge. Of course, this only applies to identifiable spammers in the domestic jurisdiction – let’s hope it doesn’t just drive an increase in non-traceable, overseas spam.

 

 

Filed under Data Protection, enforcement, Information Commissioner, Information Tribunal, marketing, monetary penalty notice, nuisance calls, PECR, spam texts, Upper Tribunal

If at first you don’t succeed…

The Information Commissioner’s Office (ICO) has uploaded to its website (24 October) two undertakings for breaches of data controllers’ obligations under the Data Protection Act 1998 (DPA). Undertakings are part of the ICO’s suite of possible enforcement actions against controllers.

One undertaking was signed by Gwynedd Council, after incidents in which social care information was posted to the wrong address, and a social care file went missing in transit between two sites. The other, more notably, was signed by the Disclosure and Barring Service (DBS), who signed a previous undertaking in March this year, after failing to amend a question (“e55”) on its application form which had been rendered obsolete by legislative changes. The March undertaking noted that

Question e55 of the application form asked the individuals ‘Have you ever been convicted of a criminal offence or received a caution, reprimand or warning?’ [Some applicants] responded positively to this question even though it was old and minor caution/conviction information that would have been filtered under the legislation. The individual’s positive response to question e55 was then seen by prospective employers who withdrew their job offers

This unnecessary disclosure was, said the ICO, unfair processing of sensitive personal data, and the undertaking committed DBS to amend the question on the form by the end of March.

However, the latest undertaking reveals that

application forms which do not contain the necessary amendments remain in circulation. This is because a large number of third party organisations are continuing to rely on legacy forms issued prior to the amendment of question e55. In the Commissioner’s view, the failure to address these legacy forms could be considered to create circumstances under which the unfair processing of personal data arises

The March undertaking had also committed DBS to ensure that supporting information provided to those bodies with access to the form be

kept under review to ensure that they continue to receive up to date, accurate and relevant guidance in relation to filtered matters

One might cogently argue that part of that provision of up-to-date guidance should have involved ensuring that those bodies destroyed old, unamended forms. And if one did argue that successfully, one would arrive at the conclusion that DBS could be in breach of the March undertaking for failing to do so. Breach of an undertaking does not automatically result in more serious sanctions, but they are available to the ICO, in the form of monetary penalties and enforcement notices. DBS might consider themselves lucky to have been given a second (or third?) chance, under which they must, by the end of the year at the latest, ensure that unamended legacy application forms are either rejected or removed from circulation.

One final point I would make is that no press release appears to have been put out about yesterday’s undertakings, nothing is on the ICO’s home page, and there wasn’t even a tweet from their twitter account. A large part of a successful enforcement regime is publicising when action has been taken. The ICO’s own policy on this says

Publicising our enforcement and regulatory activities is an important part of our role as strategic regulator, and a deterrent for potential offenders

Letting “offenders” off the publicising hook runs the risk of diminishing that deterrent effect.

Filed under Data Protection, enforcement, Information Commissioner, undertaking

The Crown Estate and behavioural advertising

A new app for Regent Street shoppers will deliver targeted behavioural advertising – is it processing personal data?

My interest was piqued by a story in the Telegraph that

Regent Street is set to become the first shopping street in Europe to pioneer a mobile phone app which delivers personalised content to shoppers during their visit

Although this sounds like my idea of hell, it will no doubt appeal to some people. It appears that a series of Bluetooth beacons will deliver mobile content (for which, read “targeted behavioural advertising”) to the devices of users who have installed the Regent Street app. Users will indicate their shopping preferences, and a profile of them will be built by the app.

Electronic direct marketing in the UK is ordinarily subject to compliance with The Privacy and Electronic Communications (EC Directive) Regulations 2003 (“PECR”). However, the definition of “electronic mail” in PECR is “any text, voice, sound or image message sent over a public electronic communications network or in the recipient’s terminal equipment until it is collected by the recipient and includes messages sent using a short message service”. In 2007 the Information Commissioner, upon receipt of advice, changed his previous stance that Bluetooth marketing would be caught by PECR, to one under which it would not be caught, because Bluetooth does not involve a “public electronic communications network”. Nonetheless, general data protection law relating to consent to direct marketing will still apply, and the Direct Marketing Association says

Although Bluetooth is not considered to fall within the definition of electronic mail under the current PECR, in practice you should consider it to fall within the definition and obtain positive consent before using it

This reference to “positive consent” reflects the definition in the Data Protection directive, which says that it is

any freely given specific and informed indication of his wishes by which the data subject signifies his agreement to personal data relating to him being processed

And that word “informed” is where I start to have a possible problem with this app. Ever one for thoroughness, I decided to download it, to see what sort of privacy information it provided. There wasn’t much, but in the Terms and Conditions (which don’t appear to be viewable until you download the app) it did say

The App will create a profile for you, known as an autoGraph™, based on information provided by you using the App. You will not be asked for any personal information (such as an email address or phone number) and your profile will not be shared with third parties

autograph (don’t forget the™) is software which, in its words “lets people realise their interests, helping marketers drive response rates”, and it does so by profiling its users

In under one minute without knowing your name, email address or any personally identifiable information, autograph can figure out 5500 dimensions about you – age, income, likes and dislikes – at over 90% accuracy, allowing businesses to serve what matters to you – offers, programs, music… almost anything

Privacy types might notice the jarring words in that blurb. Apparently the software can quickly “figure out” thousands of potential identifiers about a user, without knowing “any personally identifiable information”. To me, that’s effectively saying “we will create a personally identifiable profile of you, without using any personally identifiable information”. The fact of the matter is that people’s likes, dislikes, preferences, choices etc (and does this app capture device information, such as IMEI?) can all be used to build up a picture which renders them identifiable. It is trite law that “personal data” is data which relate to a living individual who can be identified from those data or from those data and other information which is in the possession of, or is likely to come into the possession of, the data controller. The Article 29 Working Party (made up of representatives from the data protection authorities of each EU member state) delivered an Opinion in 2010 on online behavioural advertising which stated that

behavioural advertising is based on the use of identifiers that enable the creation of very detailed user profiles which, in most cases, will be deemed personal data
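To illustrate why the absence of a name or email address is beside the point, here is a toy example (every value in it is invented, and I have no idea what the app actually collects): a handful of ostensibly anonymous attributes, taken together, behave as a stable identifier.

# Toy example only: a profile built without any name, email or phone number can
# still single out, and be used to recognise, one individual. All values are invented.
import hashlib
import json

profile = {
    "age_band": "35-44",
    "income_band": "40k-60k",
    "likes": ["independent coffee", "vinyl records", "trail running"],
    "dislikes": ["queuing", "pop-up ads"],
    "device_model": "iPhone 5s",                     # does the app capture device info?
    "beacons_seen": ["RS-017", "RS-042", "RS-101"],  # hypothetical beacon identifiers
}

# The combination of attributes is, in practice, unique and stable enough to act as
# a fingerprint: the same shopper next week produces the same value, which is all
# that is needed to recognise and target them again.
fingerprint = hashlib.sha256(json.dumps(profile, sort_keys=True).encode()).hexdigest()
print(fingerprint[:16])

Whether or not any single attribute is “personally identifiable”, the resulting profile is exactly the sort of detailed picture of an individual that the Working Party had in mind.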

If this app is, indeed, processing personal data, then I would suggest that the limited Terms and Conditions (which users are not even pointed to when they download the app, let alone invited to agree to) are inadequate to mean that a user is freely giving specific and informed consent to the processing. And if the app is processing personal data to deliver electronic marketing, failure to comply with PECR might not matter, but failure to comply with the Data Protection Act 1998 brings potential liability to legal claims and enforcement action.

The Information Commissioner last year produced good guidance on Privacy in Mobile Apps which states that

Users of your app must be properly informed about what will happen to their personal data if they install and use the app. This is part of Principle 1 in the DPA which states that “Personal data shall be processed fairly and lawfully”. For processing to be fair, the user must have suitable information about the processing and they must to be told about the purposes

The relevant data controller for Regent Street Online happens to be The Crown Estate. On the day that the Queen sent her first tweet, it is interesting to consider the extent to which her own property company are in compliance with their obligations under privacy laws.

This post has been edited as a result of comments on the original, which highlighted that PECR does not, in strict terms, apply to Bluetooth marketing

Filed under consent, Data Protection, Directive 95/46/EC, Information Commissioner, marketing, PECR, Privacy, tracking

Clegg calls for a data protection public interest defence (where there already is one)

UPDATE: 22.10.14

It appears that Clegg’s comments were in the context of proposed amendments to the Crime and Criminal Justice Bill, and the Guardian reports that

The amendments propose a new defence for journalists who unlawfully obtain personal data (section 55 of the Data Protection Act) where they do so as part of a story that is in the public interest

But I’m not sure how this could add anything to the existing section 55 provisions which I discuss below, which mean that an offence is not committed if “the obtaining, disclosing or procuring [of personal data without the consent of the data controller] was justified as being in the public interest” – it will be interesting to see the wording of the amendments.

Interestingly it seems that another proposed amendment would be to introduce custodial sentences for section 55 offences. One wonders if the elevated public interest protections for journalists are a sop to the press, who have long lobbied against custodial sentences for this offence.

END UPDATE.

In an interesting development of the tendency of politicians to call for laws which aren’t really necessary, Nick Clegg has apparently called for data protection law to be changed to what it already says

The Telegraph reports that Nick Clegg has called for changes to data protection, bribery and other laws to “give journalists more protection when carrying out their job”. The more informed of you will have spotted the error here: data protection law at least already carries a strong exemption for journalistic activities. Clegg is quoted as saying

There should be a public interest defence put in law – you would probably need to put it in the Data Protection Act, the Bribery Act, maybe one or two other laws as well – where you enshrine a public interest defence for the press so that where you are going after information and you are being challenged, you can set out a public interest defence to do so

Section 32 of the Data Protection Act 1998 provides an exemption to almost all of a data controller’s obligations under the Act regarding the processing of personal data if

(a) the processing is undertaken with a view to the publication by any person of any journalistic…material,

(b) the data controller reasonably believes that, having regard in particular to the special importance of the public interest in freedom of expression, publication would be in the public interest, and

(c) the data controller reasonably believes that, in all the circumstances, compliance with that provision is incompatible with [the publication by any person of any journalistic…material]

This provision (described as “extremely wide” at Bill stage¹) was considered at length in Part H of the report of the Leveson Inquiry into the Culture, Practices and Ethics of the Press, which looked at the press and data protection. Indeed, Leveson recommended section 32 be amended and narrowed in scope. Notably, he recommended that the current subjective test (“the data controller reasonably believes”) should be changed so that section 32 could only be relied on if inter alia “objectively the likely interference with privacy resulting from the processing of the data is outweighed by the public interest in publication” (emphasis added). I know we’ve all forgotten about Leveson now, and the Press look on the report as though it emerged, without context, from some infernal pit, but even so, I’m surprised Mr Clegg is calling for the introduction of a provision that’s already there.

Perhaps, one might pipe up, he was talking about the section 55 DPA offence provisions (indeed, the sub-heading to the Telegraph article does talk in terms of journalists being protected “when being prosecuted”). So let’s look at that: section 55(2)(d) provides in terms that the elements of the offence of unlawful obtaining etc of personal data are not made out if

 in the particular circumstances the obtaining, disclosing or procuring was justified as being in the public interest

So, we have not just a public interest defence to a prosecution, but, even stronger, a public interest provision which means an offence is not even committed if the acts were justified as being in the public interest.

Maybe Mr Clegg thinks that public interest provision should be made even stronger when journalists are involved. But I’m not sure it realistically could be. Nonetheless, I await further announcements with interest.

¹Hansard, HC, vol 315, col 602, 2 July 1998 (as cited in Philip Coppel QC’s evidence to the Leveson Inquiry).

Filed under Data Protection, journalism, Leveson