ICO: Samaritans Radar failed to comply with Data Protection Act

I’ve written so much previously on this blog about Samaritans Radar, the misguided Twitter app launched last year by Samaritans, that I’d thought I wouldn’t have to do so again. However, this is just a brief update on the outcome of the investigation by the Information Commissioner’s Office (ICO) into whether Samaritans were obliged to comply with data protection law when running the app, and, if so, the extent to which they did comply.

To recap, the app monitored the timelines of those the user followed on Twitter, and, if certain trigger words or phrases were tweeted, would send an email alert to the user. This was intended to be a “safety net” so that potential suicidal cries for help were not missed. But what was missed by Samaritans was the fact that those whose tweets were being monitored in this way would have no knowledge of it, and that this could lead to a feeling of uncertainty and unease in some of the very target groups they sought to protect. People with mental health issues raised concerns that the app could actually drive people off Twitter, where there were helpful and supportive networks of users, often tweeting the phrases and words the app was designed to pick up.
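Samaritans never published the app’s internals, so the following is a minimal, hypothetical sketch of the kind of keyword-trigger pipeline being described; the trigger phrases, function names and data shapes are all invented for illustration:

```python
# Hypothetical sketch only: the real trigger phrases and matching logic
# used by Samaritans Radar were never made public.

TRIGGER_PHRASES = ["want to die", "can't go on", "hate myself"]  # illustrative

def looks_concerning(tweet_text: str) -> bool:
    """Return True if the tweet contains any trigger phrase."""
    text = tweet_text.lower()
    return any(phrase in text for phrase in TRIGGER_PHRASES)

def radar_scan(subscriber_email: str, followed_tweets: list[dict]) -> list[str]:
    """Build email alerts for a subscriber from the tweets of people they follow.

    The point at the heart of the controversy is visible in the code: the
    author of the tweet appears only as data. Nothing here notifies them,
    or seeks their consent, before their tweet is flagged to a third party.
    """
    alerts = []
    for tweet in followed_tweets:
        if looks_concerning(tweet["text"]):
            alerts.append(
                f"To {subscriber_email}: @{tweet['author']} may be going "
                f"through a tough time: {tweet['url']}"
            )
    return alerts
```

Even in so crude a sketch the one-sidedness is apparent: the subscriber gets an email, and the person whose tweet was scanned gets nothing.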

Furthermore, questions were raised, by me and many others, about the legality of the app under data protection law. So I made a request to the ICO under the Freedom of Information Act for

any information – such as an assessment of legality, correspondence etc. – which you hold about the “Samaritans Radar” app which Samaritans recently launched, then withdrew in light of serious legal and ethical concerns being raised

After an initial refusal because their investigation was ongoing, the ICO have now disclosed a considerable amount of information. Within it is the substantive assessment I sought, in the form of a letter from the Group Manager for Government and Society to Samaritans. I think it is important to post it in full, and I do so below. I don’t have much to add, other than that it vindicates the legal position put forward at the time by me and others (notably Susan Hall and Tim Turner).

19 December 2014

Samaritans Radar app

Many thanks for coming to our office and explaining the background to the development of the Radar application and describing how it worked. We have now had an opportunity to consider the points made at the meeting, as well as study the information provided in earlier teleconferences and on the Samaritans’ website. I am writing to let you know our conclusions on how the Data Protection Act applies to the Radar application.

We recognise that the Radar app was developed with the best of intentions and was withdrawn shortly after its launch but, as you know, during its operation we received a number of queries and concerns about the application. We have been asked for our view on whether personal data was processed in compliance with data protection principles and whether the Samaritans are data controllers. You continue to believe that you are not data controllers and that personal data has not been processed, so I am writing to explain in detail our conclusions on these points.

Personal data

Personal data is data that relates to an identifiable living individual. It is our well-established position that data which identifies an individual, even without a name associated with it, may be personal data where it is processed to learn or record something about that individual, or where the processing of that information has an impact upon that individual. According to the information you have provided, the Radar app was a web-based application that used a specially designed algorithm that searched for specific keywords within the Twitter feeds of subscribers to the Radar app. When words indicating distress were detected within a Tweet, an email alert was automatically sent from the Samaritans to the subscriber saying Radar had detected someone they followed who may be going through a tough time and provided a link to that individual’s Tweet. The email asked the subscriber whether they were worried about the Tweet and if yes, they were re-directed to the Samaritans’ website for guidance on the best way of providing support to a follower who may be distressed. According to your FAQs, you also stored Twitter User IDs, Twitter User friends’ IDs, all tagged Tweets including the raw data associated with them and a count of flags against an individual Twitter user’s friends’ ID. These unique identifiers are personal data, in that they can easily be linked back to identifiable individuals.

Based on our understanding of how the application worked, we have reached the conclusion that the Radar service did involve processing of personal data. It used an algorithm to search for words that triggered an automated decision about an individual, at which point it sent an email alert to a Radar subscriber. It singled out an individual’s data with the purpose of differentiating them and treating them differently. In addition, you also stored information about all the Tweets that were tagged.

Data controller

We are aware of your view that you “are neither the data controller nor data processor of the information passing through the app”.

The concept of a data controller is defined in section 1 of the Data Protection Act 1998 (the DPA) as

“a person who (either alone or jointly or in common with other persons) determines the purposes for which and the manner in which any personal data are, or are to be, processed”

We have concluded that the Radar service has involved processing of personal data. We understand that you used the agency [redacted] to develop and host the application. We are not fully aware of the role of [redacted] but given your central role in setting up and promoting the Radar application, we consider that the Samaritans have determined the manner and purpose of the processing of this personal data and as such you are data controllers. If you wish to be reminded of the approach we take in this area you may find it helpful to consult our guidance on data controllers and data processors. Here’s the link: https://ico.org.uk/media/about-the-ico/documents/1042555/data-controllers-and-data-processors-dp-guidance.pdf

Sensitive personal data

We also discussed whether you had processed sensitive personal data. You explained that the charity did deal with people seeking help for many different reasons and the service was not aimed at people with possible mental health issues. However the mission of the Samaritans is to alleviate emotional distress and reduce the incidence of suicidal feelings and suicidal behaviours. In addition, the stated aims of the Radar project, the research behind it and the information provided in the FAQs all emphasise the aim of helping vulnerable people online and using the app to detect someone who is suicidal. For example, you say “research has shown there is a strong correlation between “suicidal tweets” and actual suicides and with Samaritans Radar we can turn a social net into a safety net”. Given the aims of the project, it is highly likely that some of the tweets identified to subscribers included information about an individual’s mental health or other medical information and therefore would have been sensitive personal data.

At our meetings you said that even if you were processing sensitive personal data then Schedule 3 condition 5 (“The information contained in the personal data has been made public as a result of steps deliberately taken by the data subject”) was sufficient to legitimise this processing. Our guidance in our Personal Information Online Code of Practice makes it clear that although people post personal information in a way that becomes publicly visible, organisations still have an overarching duty to handle it fairly and to comply with the rules of data protection. The Samaritans are well respected in this field and receiving an email from your organisation carries a lot of weight. Linking an individual’s tweet to an email alert from the Samaritans is unlikely to be perceived in the same light as the information received in the original Tweet — not least because of the risk that people’s tweets were flagged when they were not in any distress at all.

Fair processing

Any processing of personal data must be fair and organisations must consider the effect of the processing on the individuals concerned and whether the processing would be within their reasonable expectations. You indicated that although you had undertaken some elements of an impact assessment, you had not carried out a full privacy impact assessment. You appear to have reached the conclusion that since the Tweets were publicly visible, you did not need to fully consider the privacy risks. For example, on your website you say that “all the data is public, so user privacy is not an issue. Samaritans Radar analyses the Tweets of people you follow, which are public Tweets. It does not look at private Tweets.”

It is our view that if organisations collect information from the internet and use it in a way that’s unfair, they could still breach the data protection principles even though the information was obtained from a publicly available source. It is particularly important that organisations should consider the data protection implications if they are planning to use analytics to make automated decisions that could have a direct effect on individuals. Under section 12 of the Data Protection Act, individuals have certain rights to prevent decisions being taken about them that are solely based on automated processing of their personal data. The quality of the data being used as a basis for these decisions may also be an issue.

We note that the application was a year in development and that you used leading academics in linguistics to develop your word search algorithm. You also tested the application on a large number of people, although, as we discussed, most if not all of these were connected to the project in some way and many were enthusiastic to see the project succeed. As our recent paper on Big Data explains, it is not so much a question of whether the data accurately records what someone says but rather to what extent that information provides a reliable basis for drawing conclusions. Commentators expressed concern at the apparent high level of false positives involving the Radar App (figures in the media suggest only 4% of email alerts were genuine). This raises questions about whether a system operating with such a low success rate could represent fair processing and indicates that many Tweets were being flagged up unnecessarily.

Since you did not consider yourselves to be data controllers, you have not sought the consent of, or provided fair processing notices to, the individuals whose Tweets you flagged to subscribers. It seems unlikely that it would be within people’s reasonable expectations that certain words and phrases from their Tweets would trigger an automatic email alert from the Samaritans saying Radar had detected someone who may be going through a tough time. Our Personal Information Online Code of Practice says it is good practice to only use publicly available information in a way that is unlikely to cause embarrassment, distress or anxiety to the individual concerned. Organisations should only use their information in a way they are likely to expect and to be comfortable with. Our advice is that if in doubt about this, and you are unable to ask permission, you should not collect their information in the first place.

Conclusion

Based on our observations above, we have reached the conclusion that the Radar application did risk causing distress to individuals and was unlikely to be compliant with the Data Protection Act.

We acknowledge that the Samaritans did take responsibility for dealing with the many concerns raised about the application very quickly. The application was suspended on 7 November and we welcomed [redacted] assurances on 14 November that not only was the application suspended but it would not be coming back in anything like its previous form. We also understand that there have been no complaints that indicate that anyone had suffered damage and distress in the very short period the application was in operation.

We do not want to discourage innovation but it is important that organisations should consider privacy throughout the development and implementation of new projects. Failing to do so risks undermining people’s trust in an organisation. We strongly recommend that if you are considering further projects involving the use of online information and technologies you should carry out and publish a privacy impact assessment. This will help you to build trust and engage with the wider public. Guidance on this can be found in our PIA Code of Practice. We also recommend that you look at our paper on Big Data and Data Protection and our Personal Information Online Code of Practice. Building trust and adopting an ethical approach to such projects can also help to ensure you handle people’s information in a fair and transparent way. We would be very happy to advise the Samaritans on data protection compliance in relation to any future projects.

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.


Russell Brand and the domestic purposes exemption in the Data Protection Act

Was a now-deleted tweet by Russell Brand, revealing a journalist’s private number, caught by data protection law?

Data protection law applies to anyone who “processes” (which includes “disclosure…by transmission”) “personal data” (data relating to an identifiable living individual) as a “data controller” (the person who determines the purposes for which and the manner in which the processing occurs). Rather dramatically, in strict terms, this means that most individuals actually and regularly process personal data as data controllers. And nearly everyone would be caught by the obligations under the Data Protection Act 1998 (DPA), were it not for the exemption at section 36. This provides that

Personal data processed by an individual only for the purposes of that individual’s personal, family or household affairs (including recreational purposes) are exempt from the data protection principles and the provisions of Parts II and III

Data protection nerds will spot that exemption from the data protection principles and Parts II and III of the DPA is effectively an exemption from the whole Act. So in general terms individuals who restrict their processing of personal data to domestic purposes are outwith the DPA’s ambit.

The extent of this exemption in terms of publication of information on the internet is subject to some disagreement. On one side is the Information Commissioner’s Office (ICO) who say in their guidance that it applies when an individual uses an online forum purely for domestic purposes, and on the other side are the Court of Justice of the European Union (and me) who said in the 2003 Lindqvist case that

The act of referring, on an internet page, to various persons and identifying them by name or by other means, for instance by giving their telephone number…constitutes ‘the processing of personal data’…[and] is not covered by any of the exceptions…in Article 3(2) of Directive 95/46 [section 36 of the DPA transposes Article 3(2) into domestic law]

Nonetheless, it is clear that publishing personal data on the internet for reasons not purely domestic constitutes an act of processing to which the DPA applies (let us assume that the act of publishing was a deliberate one, determined by the publisher). So when the comedian Russell Brand today decided to tweet a picture of a journalist’s business card, with an arrow pointing towards the journalist’s mobile phone number (which was not, for what it’s worth, already in the public domain – I checked with a Google search) he was processing that journalist’s personal data (note that data relating to an individual’s business life is still their personal data). Can he avail himself of the DPA domestic purposes exemption? No, says the CJEU, of course, following Lindqvist. But no, also, would surely say the ICO: this act by Brand was not purely domestic. Brand has 8.7 million twitter followers – I have no doubt that some will have taken the tweet as an invitation to call the journalist. It is quite possible that some of those calls will be offensive, or abusive, or even threatening.

Whilst I have been drafting this blog post Brand has deleted the tweet: that is to his credit. But of course, when you have so many millions of followers, the damage is already done – the picture is saved to hard drives, is mirrored by other sites, is emailed around. And, I am sure, the journalist will have to change his number, and maybe not much harm will have been caused, but the tweet was nasty, and unfair (although I have no doubt Brand was provoked in some way). If it was unfair (and lacking a legal basis for the publication) it was in contravention of the first data protection principle which requires that personal data be processed fairly and lawfully and with an appropriate legitimating condition. And because – as I submit – Brand cannot plead the domestic purposes exemption, it was in contravention of the DPA. However, whether the journalist will take any private action, and whether the ICO will take any enforcement action, I doubt.

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.


So farewell then #samaritansradar…

…or should that be au revoir?

With an interestingly timed announcement (18:00 on a Friday evening) Samaritans conceded that they were pulling their much-heralded-then-much-criticised app “Samaritans Radar”, and, as if some of us didn’t feel conflicted enough criticising such a normally laudable charity, their Director of Policy Joe Ferns managed to get a dig in, hidden in what was purportedly an apology:

We are very aware that the range of information and opinion, which is circulating about Samaritans Radar, has created concern and worry for some people and would like to apologise to anyone who has inadvertently been caused any distress

So, you see, it wasn’t the app, and the creepy feeling of having all one’s tweets closely monitored for potentially suicidal expressions, which caused concern and worry and distress – it was all those nasty people expressing a range of information and opinion. Maybe if we’d all kept quiet the app could have continued on its unlawful and unethical merry way.

However, although the app has been pulled, it doesn’t appear to have gone away

We will…be testing a number of potential changes and adaptations to the app to make it as safe and effective as possible for both subscribers and their followers

There is a survey at the foot of this page which seeks feedback and comment. I’ve completed it, and would urge others to do so. I’ve also given my name and contact details, because one of my main criticisms of the launch of the app was that there was no evidence that Samaritans had taken advice from anyone on its data protection implications – and I’m happy to provide such advice for no fee. As Paul Bernal says, “[Samaritans] need to talk to the very people who brought down the app: the campaigners, the Twitter activists and so on”.

Data protection law’s place in our digital lives is of profound importance, and of profound interest to me. Let’s not forget that its genesis in the 1960s and 1970s was in the concerns raised by the extraordinary advances that computing brought to data analysis. For me some of the most irritating counter-criticism during the recent online debates about Samaritans Radar was from people who equated what the app did to mere searching of tweets, or searching for keywords. As I said before, the sting of this app lay in the overall picture – it was developed, launched and promoted by Samaritans – and in the overall processing of data which went on – it monitored tweets, identified potentially worrying ones and pushed this information to a third party, all without the knowledge of the data subject.

But also irritating were comments from people who told us that other organisations do similar analytics, for commercial reasons, so why, the implication went, shouldn’t Samaritans do it for virtuous ones? It is no secret that an enormous amount of analysis takes place of information on social media, and people should certainly be aware of this (see Adrian Short’s excellent piece here for some explanation), but the fact that it can and does take place a) doesn’t mean that it is necessarily lawful, nor that the law is impotent within the digital arena, and b) doesn’t mean that it is necessarily ethical. And for both those reasons Samaritans Radar was an ill-judged experiment that should never have taken place as it did. If any replacement is to be both ethical and lawful a lot of work, and a lot of listening, needs to be done.

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.


Samaritans cannot deny being data controller for #samaritansradar

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.

So, Samaritans continue to support the #samaritansradar app, about which I, and many others, have already written. A large number of people suffering from, or with experience of, mental health problems have pleaded with Samaritans to withdraw the app, which monitors the tweets of the people one follows on twitter, applies an algorithm to identify tweets from potentially vulnerable people, and emails that information to the app user, all without the knowledge of the person involved. As Paul Bernal has eloquently said, this is not really an issue about privacy, and nor is it about data protection – it is about the threat many vulnerable people feel from the presence of the app. Nonetheless, privacy and data protection law, in part, are about the rights of the vulnerable; last night (4 November) Samaritans issued their latest sparse statement, part of which dealt with data protection:

We have taken the time to seek further legal advice on the issues raised. Our continuing view is that Samaritans Radar is compliant with the relevant data protection legislation for the following reasons:

o   We believe that Samaritans are neither the data controller nor data processor of the information passing through the app

o   All information identified by the app is available on Twitter, in accordance with Twitter’s Ts&Cs (link here). The app does not process private tweets.

o   If Samaritans were deemed to be a data controller, given that vital interests are at stake, exemptions from data protection law are likely to apply

It is interesting that there is reference here to “further” legal advice: none of the previous statements from Samaritans had given any indication that legal or data protection advice had been sought prior to the launch of the app. It would be enormously helpful to discussion of the issue if Samaritans actually disclosed their advice, but I doubt very much that they will do so. Nonetheless, their position appears to be at odds with the legal authorities.

In May this year the Court of Justice of the European Union (CJEU) gave its ruling in the Google Spain case. The most widely covered aspect of that case was, of course, the extent of a right to be forgotten – a right to require Google to remove links from its search results in certain specified cases. But the CJEU also was asked to rule on the question of whether a search engine, such as Google, was a data controller in circumstances in which it engages in the indexing of web pages. Before the court Google argued that

the operator of a search engine cannot be regarded as a ‘controller’ in respect of that processing since it has no knowledge of those data and does not exercise control over the data

and this would appear to be a similar position to that adopted by Samaritans in the first bullet point above. However, the CJEU dismissed Google’s argument, holding that

the operator of a search engine ‘collects’ such data which it subsequently ‘retrieves’, ‘records’ and ‘organises’ within the framework of its indexing programmes, ‘stores’ on its servers and, as the case may be, ‘discloses’ and ‘makes available’ to its users in the form of lists of search results…It is the search engine operator which determines the purposes and means of that activity and thus of the processing of personal data that it itself carries out within the framework of [the activity at issue] and which must, consequently, be regarded as the ‘controller’ in respect of that processing

Inasmuch as I understand how it works, I would submit that #samaritansradar, while not a search engine as such, collects data (personal data), records and organises it, stores it on servers and discloses it to its users in the form of a result. The app has been developed by and launched by Samaritans, it carries their name and seeks to further their aims: it is clearly “their” app, and they are, as clearly, a data controller with attendant legal responsibilities and liabilities. In further proof of this Samaritans introduced, after the app launch and in response to outcry, a “whitelist” of twitter users who have specifically informed Samaritans that they do not want their tweets to be monitored (update on 30 October). If Samaritans are effectively saying they have no role in the processing of the data, how on earth would such a whitelist be expected to work?

And it’s interesting to consider the apparent alternative view that they are implicitly putting forward. If they are not data controller, then who is? The answer must be the users who download and run the app, who would attract all the legal obligations that go with being a data controller. The Samaritans appear to want to back out of the room, leaving app users to answer all the awkward questions.1

Also very interesting is that Samaritans clearly accept that others might have a different view to theirs on the issue of controllership; they suggest that if they were held to be a data controller they would avail themselves of “exemptions” in data protection law relating to “vital interest” to legitimise their activities. One presumes this to be a reference to certain conditions in Schedules 2 and 3 of the Data Protection Act 1998 (DPA). Those schedules contain conditions which must be met in order for the processing of, respectively, personal data and sensitive personal data to be fair and lawful. As we are here clearly talking about sensitive personal data (personal data relating to someone’s physical or mental health is classed as sensitive), let us look at the relevant condition in Schedule 3:

The processing is necessary—
(a) in order to protect the vital interests of the data subject or another person, in a case where—
(i) consent cannot be given by or on behalf of the data subject, or
(ii) the data controller cannot reasonably be expected to obtain the consent of the data subject, or
(b) in order to protect the vital interests of another person, in a case where consent by or on behalf of the data subject has been unreasonably withheld

Samaritans’ alternative defence founders on the first four words: in what way can this processing be necessary to protect vital interests? The Information Commissioner’s Office explains that this condition only applies

in cases of life or death, such as where an individual’s medical history is disclosed to a hospital’s A&E department treating them after a serious road accident

The evidence suggests this app is actually delivering a very large number of false positives (as it’s based on what seems to be a crude keyword algorithm, this is only to be expected). Given that, and, indeed, given that Samaritans have – expressly – no control over what happens once the app notifies a user of a concerning tweet, it is absolutely preposterous to suggest that the processing is necessary to protect people’s vital interests. Moreover, the condition above also explains that it can only be relied on where consent cannot be given by the data subject or the controller cannot reasonably be expected to obtain consent. Nothing prevents Samaritans from operating an app which would do the same thing (flag a tweet of concern) but basing it on a consent model, whereby someone agrees that their tweets will be monitored in that way. Indeed, such a model would fit better with Samaritans’ stated aim of allowing people to “lead the conversation at their own pace”. It is clear, nonetheless, that consent could be sought for this processing, but that Samaritans have failed to design an app which allows it to be sought.
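To put that failure rate in concrete terms, here is the arithmetic, as an illustration only, assuming the figure reported in the media that only around 4% of email alerts were genuine:

```python
# Illustrative arithmetic only, assuming the reported figure that around
# 4% of Radar's email alerts were genuine.
alerts_sent = 1000                        # hypothetical number of alerts
genuine = round(alerts_sent * 0.04)       # 40 alerts reflecting real distress
false_positives = alerts_sent - genuine   # 960 tweets flagged unnecessarily

print(f"Of {alerts_sent} alerts: {genuine} genuine, {false_positives} false positives")
```

On those figures, for every person genuinely in distress, some twenty-four others would have been flagged to a third party as potentially suicidal when they were not. It is very hard to see how processing with that profile could be “necessary” to protect anyone’s vital interests.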

The Information Commissioner’s Office is said to be looking into the issues raised by Samaritans’ app. It may be that it will only be through legal enforcement action that it will actually be – as I think it should – removed. But it would be extremely sad if it came to that. It should be removed voluntarily by Samaritans, so they can rethink, re-programme, take full legal advice, but – most importantly – listen to the voices of the most vulnerable, who feel so threatened and betrayed by the app.

1On a strict and nuanced analysis of data protection law users of the app probably are data controllers, acting as joint ones with Samaritans. However, given the regulatory approach of the Information Commissioner they would probably be able to avail themselves of the general exemption from all of the DPA for processing which is purely domestic (although even that is arguably wrong). These are matters for another blog post, however, and the fact that users might be held to be data controllers doesn’t alter the fact that Samaritans are, and in a much clearer way


Samaritans Radar – serious privacy concerns raised

UPDATE: 31 October

It appears Samaritans have silently tweaked their FAQs (so the text near the foot of this post no longer appears). They now say tweets will only be retained by the app for seven (as opposed to thirty) days, and have removed the words saying the app will retain a “Count of flags against a Twitter Users Friends ID”. Joe Ferns said on Twitter that the inclusion of this in the original FAQs was “a throw back to a stage of the development where that was being considered”. Samaritans also say “The only people who will be able to see the alerts, and the tweets flagged in them, are followers who would have received these Tweets in their current feed already”, but this does not absolve them of their data controller status: a controller does not need to access data in order to determine the purposes for which and the manner in which personal data are processed, and they are still doing this. Moreover, this changing of the FAQs, with no apparent change to the position that those whose tweets are processed get no fair processing notice whatsoever, makes me more concerned that this app has been released without adequate assessment of its impact on people’s privacy.

END UPDATE

UPDATE: 30 October

Susan Hall has written a brilliant piece expanding on mine below, and she points out that section 12 of the Data Protection Act 1998 in terms allows a data subject to send a notice to a data controller requiring it to ensure no automated decisions are taken by processing their personal data for the purposes of evaluating matters such as their conduct. It seems to me that is precisely what “Samaritans Radar” does. So I’ve sent the following to Samaritans

Dear Samaritans

This is a notice pursuant to section 12 Data Protection Act 1998. Please ensure that no decision is taken by you or on your behalf (for instance by the “Samaritans Radar” app) based solely on the processing by automatic means of my personal data for the purpose of evaluating my conduct.

Thanks, Jon Baines @bainesy1969

I’ll post here about any developments.

END UPDATE

Samaritans have launched a Twitter App “to help identify vulnerable people”. I have only ever had words of praise and awe about Samaritans and their volunteers, but this time I think they may have misjudged the effect, and the potential legal implications, of “Samaritans Radar”. Regarding the effect, this post from former volunteer @elphiemcdork is excellent:

How likely are you to tweet about your mental health problems if you know some of your followers would be alerted every time you did? Do you know all your followers? Personally? Are they all friends? What if your stalker was a follower? How would you feel knowing your every 3am mental health crisis tweet was being flagged to people who really don’t have your best interests at heart, to put it mildly? In this respect, this app is dangerous. It is terrifying to think that anyone can monitor your tweets, especially the ones that disclose you may be very vulnerable at that time

As for the legal implications, it seems to be potentially the case that Samaritans are processing sensitive personal data, in circumstances where there may not be a legal basis to do so. And some rather worrying misconceptions have accompanied the app launch. The first and most concerning of these is in the FAQs prepared for the media. In reply to the question “Isn’t there a data privacy issue here? Is Samaritans Radar spying on people?” the following answer is given

All the data used in the app is public, so user privacy is not an issue. Samaritans Radar analyses the Tweets of the people you follow, which are public Tweets. It does not look at private Tweets

The idea that, because something is in the public domain, it cannot engage privacy issues is a horribly simplistic one, and if that constitutes the impact assessment undertaken, then serious questions have to be asked. Moreover, it doesn’t begin to consider the data protection considerations: personal data is personal data, whether it’s in the public domain or not. A tweet from an identified tweeter is inescapably the personal data of that person, and, if it is, or appears to be, about the person’s physical or mental health, then it is sensitive personal data, afforded a higher level of protection under the Data Protection Act 1998 (DPA). It would appear that Samaritans, as the legal person who determines the purposes for which, and the manner in which, the personal data are processed (i.e. they have produced an app which identifies a tweet on the basis of words, or sequences of words, and pushes it to another person) are acting as a data controller. As such, any processing has to be in accordance with their obligation to abide by the data protection principles in Schedule One of the DPA. The first principle says that personal data must be processed fairly and lawfully, and that a condition for processing contained in Schedule Two (and for sensitive personal data Schedules Two and Three) must be met. Looking only at Schedule Three, I struggle to see the condition which permits the app to identify a tweet, decide that it is from a potentially suicidal person and send it as such to a third party. The one condition which might apply, the fifth (“The information contained in the personal data has been made public as a result of steps deliberately taken by the data subject”) is undercut by the fact that the data in question is not just the public tweet, but the “package” of that tweet with the fact that the app (not the tweeter) has identified it as a potential call for help.

The reliance on “all the data used in the app is public, so user privacy is not an issue” has carried through in messages sent on twitter by Samaritans Director of Policy, Research and Development, Joe Ferns, in response to people raising concerns, such as

existing Twitter search means anyone can search tweets unless you have set to private. #SamaritansRadar is like an automated search

Again, this misses the point that it is not just “anyone” doing a search on twitter, it is an app in Samaritans name which specifically identifies (in an automated way) certain tweets as of concern, and pushes them to third parties. Even more concerning was Mr Ferns’ response to someone asking if there was a way to opt out of having their tweets scanned by the app software:

if you use Twitter settings to mark your tweets private #SamaritansRadar will not see them

What he is actually suggesting there is that, to avoid what some people clearly feel are intrusive actions, they should lock their account and make it private. And, of course, going back to @elphiemcdork’s points, it is hard to avoid the conclusion that those who will do this might be some of the most vulnerable people.

A further concern is raised (one which confirms the data controller point above) about retention and reuse of data. The media FAQ states

Where will all the data be stored? Will it be secure? The data we will store is as follows:
• Twitter User ID – a unique ID that is associated with a Twitter account
• All Twitter User Friends ID’s – The same as above but for all the users friends that they follow
• Any flagged Tweets – This is the data associated with the Tweet, we will store the raw data for the Tweet as well
• Count of flags against a Twitter Users Friends ID – We store a count of flags against an individual User
• To prevent the Database growing exponentially we will remove flagged Tweets that are older than 30 days.
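It may help to write those stored items out as a data structure. The following is a hypothetical reconstruction, based solely on the FAQ wording above; the names and types are invented:

```python
from dataclasses import dataclass, field

# Hypothetical reconstruction of the per-user record the app would need,
# based only on the FAQ wording quoted above. Names and types are invented.

@dataclass
class FlaggedTweet:
    tweet_id: int
    raw_data: dict      # "we will store the raw data for the Tweet as well"
    flagged_at: str     # some timestamp is needed to purge tweets after 30 days

@dataclass
class MonitoredFriend:
    """One record per followed account (built without that person's knowledge)."""
    twitter_user_id: int        # unique ID associated with a Twitter account
    flag_count: int = 0         # running count of flags against this individual
    flagged_tweets: list[FlaggedTweet] = field(default_factory=list)
```

Each field is linkable back to an identifiable individual, and the flag count accumulates over time.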

So it appears that Samaritans will be amassing data on unwitting twitter users, and in effect profiling them. This sort of data is terrifically sensitive, and no indication is given regarding the location of this data, or the security measures in place to protect it.

The Information Commissioner’s Office recently produced some good guidance for app developers on Privacy in Mobile Apps. The guidance commends the use of Privacy Impact Assessments when developing apps. I would be interested to know if one was undertaken for Samaritans Radar, and, if so, how it dealt with the serious concerns that have been raised by many people since its launch.

This post was amended to take into account the observations in the comments by Susan Hall, to whom I give thanks. I have also since seen a number of excellent blog posts dealing with wider concerns. I commend, in particular, this by Adrian Short and this by @latentexistence

Monitoring of blogs and lawful/unlawful surveillance

Tim Turner wrote recently about the data protection implications of the monitoring of Sara Ryan’s blog by Southern Health NHS Trust. Tim’s piece is an exemplary analysis of how the processing of personal data which is in the public domain is still subject to compliance with the Data Protection Act 1998 (DPA):

there is nothing in the Data Protection Act that says that the public domain is off-limits. Whatever else, fairness still applies, and organisations have to accept that if they want to monitor what people are saying, they have to be open about it

But it is not just data protection law which is potentially engaged by the Trust’s actions. Monitoring of social media and networks by public authorities for the purposes of gathering intelligence might well constitute directed surveillance, bringing us explicitly into the area of human rights law. Sir Christopher Rose, the Chief Surveillance Commissioner, said in his most recent annual report

my commissioners remain of the view that the repeat viewing of individual “open source” sites for the purpose of intelligence gathering and data collation should be considered within the context of the protection that RIPA affords to such activity

“RIPA” there of course refers to the complex Regulation of Investigatory Powers Act 2000 (RIPA) (parts of which were reputedly “intentionally drafted for maximum obscurity”)1. What is not complex, however, is to note which public authorities are covered by RIPA when they engage in surveillance activities. A 2006 statutory instrument2 removed NHS Trusts from the list (at Schedule One of RIPA) of relevant public authorities whose surveillance was authorised by RIPA. Non-inclusion on the Schedule One lists doesn’t as a matter of fact or law mean that a public authority cannot undertake surveillance. This is because of the rather odd provision at section 80 of RIPA, which effectively explains that surveillance is lawful if carried out in accordance with RIPA, but surveillance not carried out in accordance with RIPA is not ipso facto unlawful. As the Investigatory Powers Tribunal put it, in C v The Police and the Home Secretary IPT/03/32/H

Although RIPA provides a framework for obtaining internal authorisations of directed surveillance (and other forms of surveillance), there is no general prohibition in RIPA against conducting directed surveillance without RIPA authorisation. RIPA does not require prior authorisation to be obtained by a public authority in order to carry out surveillance. Lack of authorisation under RIPA does not necessarily mean that the carrying out of directed surveillance is unlawful.

But it does mean that where surveillance is not specifically authorised by RIPA questions would arise about its legality under Article 8 of the European Convention on Human Rights, as incorporated into domestic law by the Human Rights Act 1998. The Tribunal in the above case went on to say

the consequences of not obtaining an authorisation under this Part may be, where there is an interference with Article 8 rights and there is no other source of authority, that the action is unlawful by virtue of section 6 of the 1998 Act.3

So, when the Trust was monitoring Sara Ryan’s blog, was it conducting directed surveillance (in a manner not authorised by RIPA)? RIPA describes directed surveillance as covert (and remember, as Tim Turner pointed out – no notification had been given to Sara) surveillance which is “undertaken for the purposes of a specific investigation or a specific operation and in such a manner as is likely to result in the obtaining of private information about a person (whether or not one specifically identified for the purposes of the investigation or operation)” (there is a further third limb which is not relevant here). One’s immediate thought might be that no private information was obtained or intended to be obtained about Sara, but one must bear in mind that, by section 26(10) of RIPA “‘private information’, in relation to a person, includes any information relating to his private or family life” (emphasis added). This interpretation of “private information” of course is to be read alongside the protection afforded to the respect for one’s private and family life under Article 8. The monitoring of Sara’s blog, and the matching of entries in it against incidents in the ward on which her late son, LB, was placed, unavoidably resulted in the obtaining of information about her and LB’s family life. This, of course, is the sort of thing that Sir Christopher Rose warned about in his most recent report, in which he went on to say

In cash-strapped public authorities, it might be tempting to conduct on line investigations from a desktop, as this saves time and money, and often provides far more detail about someone’s personal lifestyle, employment, associates, etc. But just because one can, does not mean one should.

And one must remember that he was talking about cash-strapped public authorities whose surveillance could be authorised under RIPA. When one remembers that this NHS Trust was not authorised to conduct directed surveillance under RIPA, one struggles to avoid the conclusion that monitoring was potentially in breach of Sara’s and LB’s human rights.

1See footnote to Caspar Bowden’s submission to the Intelligence and Security Committee
2The Regulation of Investigatory Powers (Directed Surveillance and Covert Human Intelligence Sources) (Amendment) Order 2006
3This passage was apparently lifted directly from the explanatory notes to RIPA


Brooks Newmark, the press, and “the other woman”

UPDATE: 30.09.14 Sunday Mirror editor Lloyd Embley is reported by the BBC and other media outlets to have apologised for the use of women’s photos (it transpires that two women’s images were appropriated), saying

We thought that pictures used by the investigation were posed by models, but we now know that some real pictures were used. At no point has the Sunday Mirror published any of these images, but we would like to apologise to the women involved for their use in the investigation

What I think is interesting here is the implicit admission that (consenting) models could have been used in the fake profiles. Does this mean, therefore, that the processing of the (non-consenting) women’s personal data was not done in the reasonable belief that it was in the public interest?

Finally, I think it’s pretty shoddy that former Culture Secretary Maria Miller resorts to victim-blaming, and misses the point, when she is reported to have said that the story “showed why people had to be very careful about the sorts of images they took of themselves and put on the internet”

END UPDATE.

With most sex scandals involving politicians, there is “the other person”. For every Profumo, a Keeler; for every Mellor, a de Sancha; for every Clinton, a Lewinsky. More often than not the rights and dignity of these others are trampled in the rush to revel in outrage at the politicians’ behaviour. But in the latest, rather tedious, such scandal, the person whose rights have been trampled was not even “the other person”, because there was no other person. Rather, it was a Swedish woman* whose image was appropriated by a journalist without her permission or even her knowledge. This raises the question of whether such use, by the journalist, and the Sunday Mirror, which ran the exposé, was in accordance with their obligations under data protection and other privacy laws.

The story run by the Sunday Mirror told of how a freelance journalist set up a fake social media profile, purportedly of a young PR girl called Sophie with a rather implausible interest in middle-aged Tory MPs. He apparently managed to snare the Minister for Civil Society and married father of five, Brooks Newmark, and encourage him into sending explicit photographs of himself. The result was that the newspaper got a lurid scoop, and the Minister subsequently resigned. Questions are being asked about the ethics of the journalism involved, and there are suggestions that this could be the first difficult test for IPSO, the new Independent Press Standards Organisation.

But for me much the most unpleasant part of this unpleasant story was that the journalist appears to have decided to attach to the fake twitter profile the image of a Swedish woman. It’s not clear where he got this from, but it is understood that the same image had apparently already appeared on several fake Facebook accounts (it is not suggested, I think, that the same journalist was responsible for those accounts). The woman is reported to be distressed at the appropriation:

It feels really unpleasant…I have received lot of emails, text messages and phone calls from various countries on this today. It feels unreal…I do not want to be exploited in this way and someone has used my image like this feels really awful, both for me and the others involved in this. [Google translation of original Swedish]

Under European and domestic law the image of an identifiable individual is their personal data. Anyone “processing” such data as a data controller (“the person who (either alone or jointly or in common with other persons) determines the purposes for which and the manner in which any personal data are, or are to be, processed”) has to do so in accordance with the law. Such processing as happened here, both by the freelance journalist, when setting up and operating the social media account(s), and by the Sunday Mirror, in publishing the story, is covered by the UK Data Protection Act 1998 (DPA). This will be the case even though the person whose image was appropriated is in Sweden. The DPA requires, among other things, that processing of personal data be “fair and lawful”. It affords aggrieved individuals the right to bring civil claims for compensation for damage and distress arising from contraventions of data controllers’ obligations under the DPA. It also affords them the right to ask the Information Commissioner’s Office (ICO) for an assessment of the likelihood (or not) that processing was in compliance with the DPA.

However, section 32 of the DPA also gives journalism a very broad exemption from almost all of the Act, if the processing is undertaken with a view to publication, and the data controller reasonably believes that publication would be in the public interest and that compliance with the DPA would be incompatible with the purposes of journalism. As the ICO says

The scope of the exemption is very broad. It can disapply almost all of the DPA’s provisions, and gives the media a significant leeway to decide for themselves what is in the public interest

The two data controllers here (the freelancer and the paper) would presumably have little problem satisfying a court, or the ICO, that when it came to processing of Brooks Newmark’s personal data, they acted in the reasonable belief that the public interest justified the processing. But one wonders to what extent they even considered the processing of (and associated intrusion into the private life of) the Swedish woman whose image was appropriated. Supposing they didn’t even consider this processing – could they say that they reasonably believed it to have been in the public interest?

These are complex questions, and the breadth and ambit of the section 32 exemption are likely to be tested in litigation between the mining and minerals company BSG and the campaigning group Global Witness (currently stalled/being considered at the ICO). But even if a claim or complaint under DPA would be a tricky one to make, there are other legal issues raised. Perhaps in part because of the breadth of the section 32 DPA exemption (and perhaps because of the low chance of significant damages under the DPA), claims of press intrusion into private lives are more commonly brought under the cause of action of “misuse of private information”, confirmed – it would seem – as a tort, in the ruling of Mr Justice Tugendhat in Vidal Hall and Ors v Google Inc [2014] EWHC 13 (QB), earlier this year. Damage awards for successful claims in misuse of private information have been known to be in the tens of thousands of pounds – most notably recently an award of £10,000 for Paul Weller’s children, after photographs taken covertly and without consent had been published in the Mail Online.

IPSO expects journalists to abide by the Editors’ Code, Clause 3 of which says

i) Everyone is entitled to respect for his or her private and family life, home, health and correspondence, including digital communications.

ii) Editors will be expected to justify intrusions into any individual’s private life without consent. Account will be taken of the complainant’s own public disclosures of information

and the ICO will take this Code into account when considering complaints about journalistic processing of personal data. One notes that “account will be taken of the complainant’s own public disclosures of information”, but one hopes that this would not be seen to justify the unfair and unethical appropriation of images found elsewhere on the internet.

*I’ve deliberately, although rather pointlessly – given their proliferation in other media – avoided naming the woman in question, or posting her photograph


Twitter timeline changes – causing offence?

@jamesrbuk: Well that’s jarring: Twitter just put a tweet into my feed showing a still from the James Foley beheading video, from account I don’t follow

When the Metropolitan Police put out a statement last week suggesting that merely viewing (absent publication, incitement etc) the video of the beheading of James Foley might itself be a criminal offence, they were rightly challenged on the basis for this (conclusion: there wasn’t a valid one).

But what about a company which actively, by the coding of its software, communicates stills from the video to unwilling recipients? That seems to be the potential (and actual, in the case of James Ball in the tweet quoted above) effect of recent changes Twitter has made to its user experience. Tweets are now posted to users’ timelines which are not from people followed, nor from followers of people followed

when we identify a Tweet, an account to follow, or other content that’s popular or relevant, we may add it to your timeline. This means you will sometimes see Tweets from accounts you don’t follow. We select each Tweet using a variety of signals, including how popular it is and how people in your network are interacting with it. Our goal is to make your home timeline even more relevant and interesting

I’m not clear on the algorithm that is used to select which unsolicited tweets are posted to a timeline, but the automated nature of it raises issues, I would argue, about Twitter’s responsibility and potential legal liability for the tweets’ appearance, particularly if the tweets are offensive to the recipient.
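Twitter has not published its selection logic, but going only on its own description (popularity, plus how one’s network interacts with a tweet), a selector of that general shape might look like the following; this is a purely illustrative sketch, with invented signals and weights:

```python
# Purely illustrative: Twitter's actual signals and weights are not public.

def relevance_score(tweet: dict, user_network: set) -> float:
    """Combine raw popularity with engagement from the user's own network."""
    popularity = tweet["retweets"] + tweet["favourites"]
    network_engagement = sum(
        1 for account in tweet["engaged_accounts"] if account in user_network
    )
    return 0.5 * popularity + 2.0 * network_engagement  # weights invented

def tweets_to_inject(candidates: list[dict], user_network: set) -> list[dict]:
    """Pick non-followed tweets to insert into a user's timeline.

    Note what is absent: no check on the *content* of the tweet. A
    sufficiently popular still from a beheading video passes straight
    through a selector of this kind.
    """
    return [t for t in candidates if relevance_score(t, user_network) > 100.0]
```

The question is then whether, by running such a selector, Twitter “causes” an offensive message to be sent within the meaning of the legislation below.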

Section 127 of the Communications Act 2003 says

A person is guilty of an offence if he—

(a) sends by means of a public electronic communications network a message or other matter that is grossly offensive or of an indecent, obscene or menacing character; or

(b) causes any such message or matter to be so sent.

The infamous case of DPP v Chambers dealt with this provision, and although Paul Chambers was, thankfully, successful in appealing his ridiculous conviction for sending a menacing message, the High Court accepted that a tweet is a message sent by means of a public electronic communications network for the purposes of the Communications Act 2003 (¶25).

A still of the beheading video certainly has the potential to be grossly offensive, and also obscene. The original tweeter might possibly be risking the commission of a criminal offence in originally tweeting it, but what of Twitter, inserting it into an unwilling recipient’s timeline?

Similarly, section 2 of the Terrorism Act 2006 creates an offence if a person is reckless as to whether the distribution or circulation of a terrorist publication constitutes a direct or indirect encouragement or other inducement to the commission, preparation or instigation of acts of terrorism (it’s possible this is the offence the Met were – oddly – hinting at in their statement).

I’m not a criminal lawyer (I’m not even a lawyer) so I don’t know whether the elements of the offence are made out, nor whether there are jurisdictional or other considerations in play, but it does strike me that the changes Twitter has made have the potential to produce grossly offensive results.


ICO indicates that (non-recreational) bloggers must register with them

I think I am liable to register with the ICO, and so are countless others. But I also think this means there needs to be a debate about what this, and future plans for levying a fee on data controllers, mean for freedom of expression

Recently I wrote about whether I, as a blogger, had a legal obligation to register with the Information Commissioner’s Office (ICO) the fact that I was processing personal data (and the purposes for which it was processed). As I said at the time, I asked the ICO whether I had such an obligation, and they said

from the information you have provided it would be unlikely that you would be required to register in respect of your blogs and tweets

However, I asked them for clarification on this point. I noted that I couldn’t see any exemption from the obligation to register, unless it was the general exemption (at section 36) from the Data Protection Act 1998 (DPA) where the processing is only for “domestic purposes”, which include “recreational purposes”. I noted that, as someone writing a semi-professional blog, I could hardly rely on the fact I do this only for recreational purposes. The ICO’s reply is illuminating

if you were blogging only for your own recreational purposes, it would be unlikely that you would need to register as a data controller. However, you have explained that your blogging is not just for recreational purposes. If you are sharing your views in order to further some other purpose, and this is likely to impact on third parties, then you should consider registering.

I know this is couched in rather vague terms – “if”…”likely”…”consider” – but it certainly suggests that merely being a non-professional blogger does not exempt me from having to register with a statutory regulator.

Those paying careful attention might understand the implications of this: millions of people every day share their views online, in order to further some purpose, in a way that “is likely to impact on third parties”. When poor Bodil Lindqvist was convicted in the Swedish courts in 2003, that is just what she was doing, and the Court of Justice of the European Union held that, under the European Data Protection Directive, she was processing personal data as a data controller, and consequently had legal obligations under data protection law to process data fairly, i.e. by not writing about a fellow churchgoer’s broken leg etc. without informing them or giving them an opportunity to object.

And there, in my last paragraph, you have an example of me processing personal data – I have published (i.e. processed) sensitive (i.e. criminal conviction) personal data (i.e. of an identifiable individual). I am a data controller. Surely I have to register with the ICO? Section 17 of the DPA says that personal data must not be processed unless an entry in respect of the data controller is included in the register maintained by the ICO, or an exemption applies. The “domestic purposes” exemption doesn’t wash – the ICO has confirmed that [1] – and none of the other exemptions apply. I have to register.

But if I have to register (and I will, because if I continue to process personal data without a registration I am potentially committing a criminal offence) then so, surely, do the millions of other people throughout the country, and throughout the jurisdiction of the data protection directive, who publish personal data on the internet not solely for recreational purposes – all the citizen bloggers, campaigning tweeters, community facebookers and many, many others…
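
To make the shape of that reasoning explicit, here is a toy sketch of the registration decision as I read sections 17 and 36; the function and its parameters are my own illustrative invention, not anything found in the statute.

def must_register(processes_personal_data: bool,
                  purely_recreational: bool,
                  other_exemption_applies: bool) -> bool:
    # My reading of the DPA 1998: section 17 prohibits processing without
    # a register entry unless an exemption applies; section 36 exempts
    # processing only for domestic (including recreational) purposes
    if not processes_personal_data:
        return False
    if purely_recreational or other_exemption_applies:
        return False
    return True

# A semi-professional blogger, on the ICO's stated position:
must_register(processes_personal_data=True,
              purely_recreational=False,
              other_exemption_applies=False)  # True

Run it with the same inputs for any campaigning tweeter or community facebooker and the answer is the same – which is rather the point.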

To single people out would be unfair, so I’m not going to identify individuals who I think potentially fall into these categories, with the following exception. In 2011 Barnet Council was roundly ridiculed for complaining to the ICO about the activities of a blogger who regularly criticised the council and its staff on his blog [2]. The Council asked the ICO to determine whether the blogger in question had failed in his legal obligation to register with the ICO in order to legitimise his processing of personal data. The ICO’s response was

If the ICO were to take the approach of requiring all individuals running a blog to notify as a data controller … it would lead to a situation where the ICO is expected to rule on what is acceptable for one individual to say about another. Requiring all bloggers to register with this office and comply with the parts of the DPA exempted under Section 36 (of the Act) would, in our view, have a hugely disproportionate impact on freedom of expression.

But the ICO was subsequently taken to task in the High Court (albeit in unrelated proceedings) over this general stance about being “expected to rule on what is acceptable for one individual to say about another”, with the judge saying

I do not find it possible to reconcile the views on the law expressed [by the ICO] with authoritative statements of the law. The DPA does envisage that the Information Commissioner should consider what it is acceptable for one individual to say about another, because the First Data Protection Principle requires that data should be processed lawfully

And if the ICO now accepts that at least those bloggers (like the one in the Barnet case) who are not blogging solely for recreational purposes might be required to register, it possibly indicates a fundamental change.

In response to my last blog post on this subject someone asked “why ruffle feathers?”. But I think this should lead to a societal debate: is it an unacceptable infringement of the principles of freedom of expression for the law to require registration with a state regulator before one can share one’s (non-recreational) views about individuals online? Or is it necessary for this legal restraint to be in place, to seek to protect individuals’ privacy rights?

European data protection reforms propose the removal of the general obligation for a data controller to register with a data protection authority, but in the UK proposals are being made (because of the loss of ICO fee income that would come with this removal) that there be a levy on data controllers.

If such proposals come into effect it is profoundly important that there is indeed a debate about the terms on which the levy is made – or else we could all end up being liable to pay a tax to allow us to talk online.

[1] On a strict reading of the law, and the CJEU judgment in Lindqvist, the distinction between recreational and non-recreational expressions online does not exist, and any online expression about an identifiable individual would constitute processing of personal data. The “recreational” distinction does not exist in the data protection directive, and is solely a domestic provision.

[2] A confession: I joined in the ridicule, but was disabused of my error by the much better-informed Tim Turner. Not that I don’t still think the Council’s actions were ill-judged.


Filed under Data Protection, Directive 95/46/EC, Information Commissioner, social media

Lords’ Committee on Social Media and Criminal Offences – lacking a DPA expert?

In its generally sensible report on Social Media and Criminal Offences the House of Lords’ Communications Committee dealt with the subject of “Revenge Porn” (defined as “the electronic publication or distribution of sexually explicit material (principally images) of one or both of the couple, the material having originally been provided consensually for private use”, which seems to me worryingly to miss a key factor – that the publication or distribution will be done with harmful intent). The committee considered what criminal offences might be engaged by this hateful practice, but also observed (¶41) that

a private remedy is already available to the victim. Images of people are covered by the Data Protection Act 1998 (as “personal data”), and so is information about people which is derived from images. Images of a person count as “sensitive personal data” under the Act if they relate to “sexual life”. Under the Act, a data subject may require a data controller not to process the data in a manner that is “causing or is likely to cause substantial damage or substantial distress to him or to another”.

This is all true, but the next bit is not

The Information Commissioner may award compensation to a person so affected 

The Information Commissioner (IC) has no such powers, and one wonders where the committee got this impression (maybe they confused the IC’s enforcement powers with the powers of the Local Government Ombudsman to make recommendations (such as payment of compensation)). Where someone wishes to complain about the processing of their personal data, their only direct right (as regards the IC) is to ask him (pursuant to section 42) to assess whether the data controller’s processing was likely to have complied with its obligations under the Data Protection Act 1998 (DPA). All the substantive rights given to data subjects under the DPA (such as access to data, rectification, ceasing of processing, compensation etc.) are enforceable only by the data subject through the courts. Moreover, “revenge porn” cases would involve the data subject requesting the data controller (who in most cases will be the person who has uploaded the images/content in question) to desist. This could clearly be a course of action fraught with difficulties.

The Committee goes on to point to another civil remedy – “An individual may also apply to the High Court for a privacy injunction to prevent or stop the publication of material relating to a person’s sexual life” – but observes (¶44) that

We are concerned that the latter remedy is available only to those who can afford access to the High Court. It would be desirable to provide a proportionately more accessible route to judicial intervention

Whilst remedies under the DPA are available through the County Court (or the Sheriff Court in Scotland), rather than the High Court, this still involves expenditure, especially if the case is not amenable to the small claims track, and also involves potential exposure to costs in the event that the claim is unsuccessful.

Furthermore, in the event that the IC were asked to consider a complaint about “revenge porn”, it might be borne in mind that he is reluctant to rule on matters regarding publication of private information on the internet. Section 36 of the DPA provides an exemption from the Act where the processing is only for “domestic purposes”. The Committee correctly says (¶41)

Personal data “processed by an individual only for the purposes of that individual’s personal, family or household affairs (including recreational purposes)” are exempt from this provision but the European Court of Justice has determined that posting material on the internet is not part of one’s “personal, family or household affairs”

And the Committee cites in support of this the Court of Justice of the European Union’s judgment in the case of Lindqvist. But the IC has traditionally been reluctant to grapple fully with the implications of Lindqvist, and, as I have noted previously, his guidance Social networking and online forums – when does the DPA apply?, which says

the ‘domestic purposes’ exemption…will apply whenever an individual uses an online forum purely for domestic purposes

is manifestly at odds with the CJEU’s ruling.

I would greatly hope that, if asked to consider the legality of the posting of “revenge porn”, the IC would not decline jurisdiction on the basis of the section 36 exemption, but his position on section 36 is problematic when it comes to regulation and enforcement of social media.

It is rather to be regretted that the Lords’ Committee was not better informed on these particular aspects of its report.


Filed under Data Protection, Information Commissioner, social media