Category Archives: social media

Where’s the Tories’ privacy notice? (just don’t mention the footballer)

The Conservative Party, no doubt scrabbling to gather perceived support for its contentious immigration policies and measures, is running a web and social media campaign. The web page encourages those visiting it to “back our plan and send a message” to other parties:

Further down the page, visitors are invited to “send Labour a message”:

Clicking on either of the red buttons in those screenshots results in a pop-up form, on which one can say whether or not one supports the Tory plans (in the screenshot below, I’ve selected “no”).

One is then required to give one’s name, email address and postcode, and there is a tick box against text saying “I agree to the Conservative Party, and the wider Conservative Party, using the information I provide to keep me updated via email about the Party’s campaigns and opportunities to get involved”.

There are two things to note.

First, the form appears to submit whether one ticks the “I agree” box or not.

Second, and in any case, none of the links to “how we use your data”, or the “privacy policy”, or the “terms and conditions” works.

So anyone submitting their special category data (information about one’s views on a political party’s policies on immigration is personal data revealing political opinions, and so Article 9 UK GDPR applies) has no idea whatsoever how it will subsequently be processed by the Tories.

I suppose there is an argument that anyone who happens upon this page, and chooses to submit the form, has a good idea what is going on (although that is by no means certain, and people could quite plausibly think that it offers an opportunity to provide views contrary to the Tories’). In any event, it would seem potentially to meet the definition of “plugging” (political lobbying under the guise of research) which the ICO deals with in its direct marketing guidance.

More fundamentally, the absence of any working links to privacy notice information means, unavoidably, that the lawfulness of any subsequent processing is vitiated.

It’s the sort of thing I would hope the ICO is alive to (I’ve seen people on social media saying they have complained to the ICO). But I won’t hold my breath: many years ago I wrote about how such data abuse was rife across the political spectrum, and little if anything has changed.

And finally, the most remarkable thing of all is that I’ve written a whole post on what is a pressing and high-profile issue without once mentioning Gary Lineker.

The views in this post (and indeed most posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.

Leave a comment

Filed under Data Protection, Information Commissioner, marketing, PECR, privacy notice, social media, spam, UK GDPR

Public houses, private comms

Wetherspoons delete their entire customer email database. Deliberately.

In a very interesting development, the pub chain JD Wetherspoon have announced that they will stop sending monthly newsletters by email, and are deleting their database of customer email addresses.

Although the only initial evidence of this was the screenshot of the email communication (above), the company have confirmed to me on their Twitter account that the email is genuine.

Wetherspoons say the reason for the deletion is that they feel that email marketing of this kind is “too intrusive”, and that, instead of communicating marketing by email, they will “continue to release news stories on [their] website” and customers will be able to keep up to date by following them on Facebook and Twitter.

This is interesting for a couple of reasons. Firstly, companies such as Flybe and Honda have recently discovered that an email marketing database can be a liability if it is not clear whether the customers in question have consented to receive marketing emails (a requirement under the Privacy and Electronic Communications (EC Directive) Regulations 2003 (PECR)). In March Flybe received a monetary penalty of £70,000 from the Information Commissioner’s Office (ICO) after sending more than 3.3 million emails with the title ‘Are your details correct?’ to people who had previously told them they didn’t want to receive marketing emails. These, said the ICO, were themselves marketing emails, and the sending of them was a serious contravention of PECR. Honda, less egregiously, sent 289,790 emails when they did not know whether or not the recipients had consented to receive marketing emails. This too, said the ICO, was unlawful marketing, as the burden of proof was on Honda to show that they had recipients’ consent to send the emails, and they could not. The result was a £13,000 monetary penalty.

There is no reason to think Wetherspoons were concerned about the data quality (in terms of whether people had consented to marketing) of their own email marketing database, but it is clear from the Flybe and Honda cases that a bloated database with email details of people who have not consented to marketing (or where it is unclear whether they have) is potentially a liability under PECR (and related data protection law). It is a liability both because any marketing emails sent are likely to be unlawful (and potentially attract a monetary penalty) and because, if it cannot be used for marketing, what purpose does it serve? If none, then it constitutes a huge amount of personal data, held for no ostensible purpose, which would be in contravention of the fifth principle in Schedule 1 to the Data Protection Act 1998.

For this reason, I can understand why some companies might take a commercial and risk-based decision not to retain email databases – if something brings no value, and significant risk, then why keep it?

But there is another reason Wetherspoons’ rationale is interesting: they are clearly aiming now to use social media channels to market their products. Normally, one thinks of advertising on social media as not aimed at or delivered to individuals, but as technology has advanced, so has the ability for social media marketing to become increasingly targeted. In May this year it was announced that the ICO were undertaking “a wide assessment of the data-protection risks arising from the use of data analytics”. This was on the back of reports that adverts on Facebook were being targeted by political groups towards people on the basis of data scraped from Facebook and other social media. Although we don’t know what the outcome of this investigation by the ICO will be (and I understand some of the allegations are strongly denied by entities alleged to be involved) what it does show is that stopping your e-marketing on one channel won’t necessarily stop you having privacy and data protection challenges on another.

And that’s before we even get on to the small fact that European ePrivacy law is in the process of being rewritten. Watch that space.

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.

Leave a comment

Filed under consent, Data Protection, marketing, monetary penalty notice, PECR, social media, spam

ICO: Samaritans Radar failed to comply with Data Protection Act

I’ve written so much previously on this blog about Samaritans Radar, the misguided Twitter app launched last year by Samaritans, that I’d thought I wouldn’t have to do so again. However, this is just a brief update on the outcome of the investigation by the Information Commissioner’s Office (ICO) into whether Samaritans were obliged to comply with data protection law when running the app, and, if so, the extent to which they did comply.

To recap, the app monitored the timelines of those the user followed on Twitter, and, if certain trigger words or phrases were tweeted, would send an email alert to the user. This was intended to be a “safety net” so that potential suicidal cries for help were not missed. But what was missed by Samaritans was the fact that those whose tweets were being monitored in this way would have no knowledge of it, and that this could lead to a feeling of uncertainty and unease in some of the very target groups they sought to protect. People with mental health issues raised concerns that the app could actually drive people off Twitter, where there were helpful and supportive networks of users, often tweeting the phrases and words the app was designed to pick up.
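To make the mechanics concrete, here is a minimal sketch of the kind of keyword-trigger monitoring described. It is purely illustrative: the actual algorithm, developed with academic linguists, was never published, and every phrase and name in it is an assumption of mine.

# Illustrative sketch only: the real Radar algorithm was never published,
# and these trigger phrases are invented for the example.

TRIGGER_PHRASES = [
    "want to die",
    "hate myself",
    "can't go on",
    "tired of being alone",
]

def flag_tweet(tweet_text: str) -> bool:
    """Return True if the tweet contains any trigger phrase."""
    return any(phrase in tweet_text.lower() for phrase in TRIGGER_PHRASES)

# The app then emailed an alert to the subscriber, linking to the flagged
# tweet, all without the tweeter's knowledge.
if flag_tweet("This weather makes me want to die, lol"):  # a false positive
    print("ALERT: someone you follow may be going through a tough time")

Matching of this kind treats irony, hyperbole and song lyrics exactly like genuine distress, which is consistent with the very high rate of false positives discussed later in this post.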

Furthermore, questions were raised, by me and many others, about the legality of the app under data protection law. So I made a request to the ICO under the Freedom of Information Act for

any information – such as an assessment of legality, correspondence etc. – which you hold about the “Samaritans Radar” app which Samaritans recently launched, then withdrew in light of serious legal and ethical concerns being raised

After an initial refusal because their investigation was ongoing, the ICO have now disclosed a considerable amount of information. Within it is the substantive assessment I sought, in the form of a letter from the Group Manager for Government and Society to Samaritans. I think it is important to post it in full, and I do so below. I don’t have much to add, other than it vindicates the legal position put forward at the time by me and others (notably Susan Hall and Tim Turner).

19 December 2014

Samaritans Radar app

Many thanks for coming to our office and explaining the background to the development of the Radar application and describing how it worked. We have now had an opportunity to consider the points made at the meeting, as well as study the information provided in earlier teleconferences and on the Samaritans’ website. I am writing to let you know our conclusions on how the Data Protection Act applies to the Radar application.

We recognise that the Radar app was developed with the best of intentions and was withdrawn shortly after its launch but, as you know, during its operation we received a number of queries and concerns about the application. We have been asked for our view on whether personal data was processed in compliance with data protection principles and whether the Samaritans are data controllers. You continue to believe that you are not data controllers and that personal data has not been processed, so I am writing to explain in detail our conclusions on these points.

Personal data

Personal data is data that relates to an identifiable living individual. It is our well-established position that data which identifies an individual, even without a name associated with it, may be personal data where it is processed to learn or record something about that individual, or where the processing of that information has an impact upon that individual. According to the information you have provided, the Radar app was a web-based application that used a specially designed algorithm that searched for specific keywords within the Twitter feeds of subscribers to the Radar app. When words indicating distress were detected within a Tweet, an email alert was automatically sent from the Samaritans to the subscriber saying Radar had detected someone they followed who may be going through a tough time and provided a link to that individual’s Tweet. The email asked the subscriber whether they were worried about the Tweet and if yes, they were re-directed to the Samaritans’ website for guidance on the best way of providing support to a follower who may be distressed. According to your FAQs, you also stored Twitter User IDs, Twitter User friends’ IDs, all tagged Tweets including the raw data associated with it and a count of flags against an individual Twitter user’s friends’ ID. These unique identifiers are personal data, in that they can easily be linked back to identifiable individuals.

Based on our understanding of how the application worked, we have reached the conclusion that the Radar service did involve processing of personal data. It used an algorithm to search for words that triggered an automated decision about an individual, at which point it sent an email alert to a Radar subscriber. It singled out an individual’s data with the purpose of differentiating them and treating them differently. In addition, you also stored information about all the Tweets that were tagged.

Data controller

We are aware of your view that you “are neither the data controller nor data processor of the information passing through the app”.

The concept of a data controller is defined in section 1 of the Data Protection Act 1998 (the DPA) as

“a person who (either alone or jointly or in common with other persons) determines the purposes for which and the manner in which any personal data are, or are to be, processed”

We have concluded that the Radar service has involved processing of personal data. We understand that you used the agency [redacted] to develop and host the application. We are not fully aware of the role of [redacted] but given your central role in setting up and promoting the Radar application, we consider that the Samaritans have determined the manner and purpose of the processing of this personal data and as such you are data controllers. If you wish to be reminded of the approach we take in this area you may find it helpful to consult our guidance on data controllers and data processors. Here’s the link: https://ico.org.uk/media/about-the-ico/documents/1042555/data-controllers-and-data-processors-dp-guidance.pdf

Sensitive personal data

We also discussed whether you had processed sensitive personal data. You explained that the charity did deal with people seeking help for many different reasons and the service was not aimed at people with possible mental health issues. However the mission of the Samaritans is to alleviate emotional distress and reduce the incidence of suicidal feelings and suicidal behaviours. In addition, the stated aims of the Radar project, the research behind it and the information provided in the FAQs all emphasise the aim of helping vulnerable people online and using the app to detect someone who is suicidal. For example, you say “research has shown there is a strong correlation between “suicidal tweets” and actual suicides and with Samaritans Radar we can turn a social net into a safety net”. Given the aims of the project, it is highly likely that some of the tweets identified to subscribers included information about an individual’s mental health or other medical information and therefore would have been sensitive personal data.

At our meetings you said that even if you were processing sensitive personal data then Schedule 3 condition 5 (“The information contained in the personal data has been made public as a result of steps deliberately taken by the data subject”) was sufficient to legitimise this processing. Our guidance in our Personal Information Online Code of Practice makes it clear that although people post personal information in a way that becomes publicly visible, organisations still have an overarching duty to handle it fairly and to comply with the rules of data protection. The Samaritans are well respected in this field and receiving an email from your organisation carries a lot of weight. Linking an individual’s tweet to an email alert from the Samaritans is unlikely to be perceived in the same light as the information received in the original Tweet — not least because of the risk that people’s tweets were flagged when they were not in any distress at all.

Fair processing

Any processing of personal data must be fair and organisations must consider the effect of the processing on the individuals concerned and whether the processing would be within their reasonable expectations. You indicated that although you had undertaken some elements of an impact assessment, you had not carried out a full privacy impact assessment. You appear to have reached the conclusion that since the Tweets were publicly visible, you did not need to fully consider the privacy risks. For example, on your website you say that “all the data is public, so user privacy is not an issue. Samaritans Radar analyses the Tweets of people you follow, which are public Tweets. It does not look at private Tweets.”

It is our view that if organisations collect information from the internet and use it in a way that’s unfair, they could still breach the data protection principles even though the information was obtained from a publicly available source. It is particularly important that organisations should consider the data protection implications if they are planning to use analytics to make automated decisions that could have a direct effect on individuals. Under section 12 of the Data Protection Act, individuals have certain rights to prevent decisions being taken about them that are solely based on automated processing of their personal data. The quality of the data being used as a basis for these decisions may also be an issue.

We note that the application was a year in development and that you used leading academics in linguistics to develop your word search algorithm. You also tested the application on a large number of people, although, as we discussed, most if not all of these were connected to the project in some way and many were enthusiastic to see the project succeed. As our recent paper on Big Data explains, it is not so much a question of whether the data accurately records what someone says but rather to what extent that information provides a reliable basis for drawing conclusions. Commentators expressed concern at the apparent high level of false positives involving the Radar app (figures in the media suggest only 4% of email alerts were genuine). This raises questions about whether a system operating with such a low success rate could represent fair processing and indicates that many Tweets were being flagged up unnecessarily.

Since you did not consider yourselves to be data controllers, you have not sought the consent of, or provided fair processing notices to, the individuals whose Tweets you flagged to subscribers. It seems unlikely that it would be within people’s reasonable expectations that certain words and phrases from their Tweets would trigger an automatic email alert from the Samaritans saying Radar had detected someone who may be going through a tough time. Our Personal Information Online Code of Practice says it is good practice to only use publicly available information in a way that is unlikely to cause embarrassment, distress or anxiety to the individual concerned. Organisations should only use their information in a way they are likely to expect and to be comfortable with. Our advice is that if in doubt about this, and you are unable to ask permission, you should not collect their information in the first place.

Conclusion

Based on our observations above, we have reached the conclusion that the Radar application did risk causing distress to individuals and was unlikely to be compliant with the Data Protection Act.

We acknowledge that the Samaritans did take responsibility for dealing with the many concerns raised about the application very quickly. The application was suspended on 7 November and we welcomed [redacted] assurances on 14 November that not only was the application suspended but it would not be coming back in anything like its previous form. We also understand that there have been no complaints that indicate that anyone had suffered damage and distress in the very short period the application was in operation.

We do not want to discourage innovation but it is important that organisations should consider privacy throughout the development and implementation of new projects. Failing to do so risks undermining people’s trust in an organisation. We strongly recommend that if you are considering further projects involving the use of online information and technologies you should carry out and publish a privacy impact assessment. This will help you to build trust and engage with the wider public. Guidance on this can be found in our PIA Code of Practice. We also recommend that you look at our paper on Big Data and Data Protection and our Personal Information Online Code of Practice. Building trust and adopting an ethical approach to such projects can also help to ensure you handle people’s information in a fair and transparent way. We would be very happy to advise the Samaritans on data protection compliance in relation to any future projects.

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.

18 Comments

Filed under Data Protection, Information Commissioner, social media

Russell Brand and the domestic purposes exemption in the Data Protection Act

Was a now-deleted tweet by Russell Brand, revealing a journalist’s private number, caught by data protection law?

Data protection law applies to anyone who “processes” (which includes “disclosure…by transmission”) “personal data” (data relating to an identifiable living individual) as a “data controller” (the person who determines the purposes for which and the manner in which the processing occurs). Rather dramatically, in strict terms, this means that most individuals actually and regularly process personal data as data controllers. And nearly everyone would be caught by the obligations under the Data Protection Act 1998 (DPA), were it not for the exemption at section 36. This provides that

Personal data processed by an individual only for the purposes of that individual’s personal, family or household affairs (including recreational purposes) are exempt from the data protection principles and the provisions of Parts II and III

Data protection nerds will spot that exemption from the data protection principles and Parts II and III of the DPA is effectively an exemption from the whole Act. So in general terms individuals who restrict their processing of personal data to domestic purposes are outwith the DPA’s ambit.

The extent of this exemption in terms of publication of information on the internet is subject to some disagreement. On one side is the Information Commissioner’s Office (ICO) who say in their guidance that it applies when an individual uses an online forum purely for domestic purposes, and on the other side are the Court of Justice of the European Union (and me) who said in the 2003 Lindqvist case that

The act of referring, on an internet page, to various persons and identifying them by name or by other means, for instance by giving their telephone number…constitutes ‘the processing of personal data’…[and] is not covered by any of the exceptions…in Article 3(2) of Directive 95/46 [section 36 of the DPA transposes Article 3(2) into domestic law]

Nonetheless, it is clear that publishing personal data on the internet for reasons not purely domestic constitutes an act of processing to which the DPA applies (let us assume that the act of publishing was a deliberate one, determined by the publisher). So when the comedian Russell Brand today decided to tweet a picture of a journalist’s business card, with an arrow pointing towards the journalist’s mobile phone number (which was not, for what it’s worth, already in the public domain – I checked with a Google search) he was processing that journalist’s personal data (note that data relating to an individual’s business life is still their personal data). Can he avail himself of the DPA domestic purposes exemption? No, says the CJEU, of course, following Lindqvist. But no, also, would surely say the ICO: this act by Brand was not purely domestic. Brand has 8.7 million twitter followers – I have no doubt that some will have taken the tweet as an invitation to call the journalist. It is quite possible that some of those calls will be offensive, or abusive, or even threatening.

Whilst I have been drafting this blog post Brand has deleted the tweet: that is to his credit. But of course, when you have so many millions of followers, the damage is already done – the picture is saved to hard drives, is mirrored by other sites, is emailed around. And, I am sure, the journalist will have to change his number, and maybe not much harm will have been caused, but the tweet was nasty, and unfair (although I have no doubt Brand was provoked in some way). If it was unfair (and lacking a legal basis for the publication) it was in contravention of the first data protection principle which requires that personal data be processed fairly and lawfully and with an appropriate legitimating condition. And because – as I submit – Brand cannot plead the domestic purposes exemption, it was in contravention of the DPA. However, whether the journalist will take any private action, and whether the ICO will take any enforcement action, I doubt.

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.

2 Comments

Filed under Data Protection, Directive 95/46/EC, Information Commissioner, journalism, social media

Naming and shaming the innocent

Around this time last year I wrote two blog posts about two separate police forces’ decisions to tweet the names of drivers charged with (but not – yet, at least – convicted of) drink driving offences. In the latter example, Staffordshire police were actually using the hashtag #drinkdriversnamedontwitter, and I argued that

If someone has merely been charged with an offence, it is contrary to the ancient and fundamental presumption of innocence to shame them for that fact. Indeed, I struggle to understand how it doesn’t constitute contempt of court to do so, or to suggest that someone who has not been convicted of drink-driving is a drink driver. Being charged with an offence does not inevitably lead to conviction. I haven’t been able to find statistics relating to drink-driving acquittals, but in 2010 16% of all defendants dealt with by magistrates’ courts were either acquitted or not proceeded against

The Information Commissioner’s Office investigated whether there had been a breach of the first principle of Schedule One of the Data Protection Act 1998 (DPA), which requires that processing of personal data be “fair and lawful”, but decided to take no action after Staffs police agreed not to use the hashtag again, saying

Our concern was that naming people who have only been charged alongside the label ‘drink-driver’ strongly implies a presumption of guilt for the offence. We have received reassurances from Staffordshire Police the hashtag will no longer be used in this way and are happy with the procedures they have in place. As a result, we will be taking no further action.

But my first blog post had raised questions about whether the mere naming of those charged was in accordance with the same DPA principle. Newspaper articles talked of naming and “shaming”, but where is the shame in being charged with an offence? I wondered why Sussex police didn’t correct those newspapers who attributed the phrase to them.

And this year, Sussex police, as well as neighbouring Surrey, and Avon and Somerset, are doing the same thing: naming drivers charged with drink driving offences on twitter or elsewhere online. The media happily describe this as a “naming and shaming” tactic, and I have not seen the police disabusing them, although Sussex police did at least enter into a dialogue with me and others on twitter, in which they assured us that their actions were in pursuit of open justice, and that they were not intending to shame people. However, this doesn’t appear to tally with the understanding of the Sussex Police and Crime Commissioner, who said earlier this year

I am keen to find out if the naming and shaming tactic that Sussex Police has adopted is actually working

But I also continue to question whether the practice is in accordance with police forces’ obligations under the DPA. Information relating to the commission or alleged commission by a person of an offence is that person’s sensitive personal data, and for processing to be fair and lawful a condition in both Schedule Two and, particularly, Schedule Three must be met. And I struggle to see which Schedule Three condition applies – the closest is probably

The processing is necessary…for the administration of justice

But “necessary”, in the DPA, imports a proportionality test of the kind required by human rights jurisprudence. The High Court, in the MPs’ expenses case, cited the European Court of Human Rights, in The Sunday Times v United Kingdom (1979) 2 EHRR 245, to the effect that

while the adjective “necessary”, within the meaning of article 10(2) [of the European Convention on Human Rights] is not synonymous with “indispensable”, neither has it the flexibility of such expressions as “admissible”, “ordinary”, “useful”, “reasonable” or “desirable” and that it implies the existence of a “pressing social need.”

and went on to hold, therefore, that “necessary” in the DPA

should reflect the meaning attributed to it by the European Court of Human Rights when justifying an interference with a recognised right, namely that there should be a pressing social need and that the interference was both proportionate as to means and fairly balanced as to ends

So is there a pressing social need to interfere with the rights of people charged with (and not convicted of) an offence, in circumstances where the media and others portray the charge as a source of shame? Is it proportionate and fairly balanced to do so? One consideration might be whether the same police forces name all people charged with an offence. If the intent is to promote open justice, then it is difficult to see why one charging decision should merit online naming, and others not. But is the intent really to promote open justice? Or is it to dissuade others from drink-driving? Supt Richard Corrigan of Avon and Somerset police says

This is another tool in our campaign to stop people driving while under the influence of drink or drugs. If just one person is persuaded not to take to the road as a result, then it is worthwhile as far as we are concerned.

and Sussex police’s Chief Inspector Natalie Moloney says

I hope identifying all those who are to appear in court because of drink or drug driving will act as a deterrent and make Sussex safer for all road users

which firstly fails to use the word “alleged” before “drink or drug driving”, and secondly – like Supt Corrigan’s – suggests that the purpose of naming is not to promote open justice, but rather to deter drink drivers.

Deterring drink driving is certainly a worthy public aim (and I stress that I have no sympathy whatsoever with those convicted of such offences) but should the sensitive personal data of those who have not been convicted of any offence be used to their detriment in pursuance of that aim?

I worry that unless such naming practices are scrutinised, and challenged when they are unlawful and unfair, the practice will spread, and social “shame” will be encouraged to be visited on the innocent. I hope the Information Commissioner investigates.

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.

3 Comments

Filed under Data Protection, human rights, Information Commissioner, Open Justice, police, social media

The voluntary data controller

One last post on #samaritansradar. I hope.

I am given to understand that Samaritans, having pulled their benighted app, have begun responding to the various legal notices people served on them under the Data Protection Act 1998 (specifically, notices under section 7 (subject access), section 10 (right to prevent processing likely to cause damage or distress) and section 12 (rights in relation to automated processing)). I tweeted my section 12 notice, but I doubt I’ll get a response to that, because they’ve never engaged with me once on twitter or elsewhere.

However, I have been shown a response to a section 7 request (which I have permission to blog about) and it continues to raise questions about Samaritans’ handling of this matter (and indeed, their legal advice – which hasn’t been disclosed, or even really hinted at). The response, in relevant part, says

We are writing to acknowledge the subject access request that you sent to Samaritans via DM on 6 November 2014.  Samaritans has taken advice on this matter and believe that we are not a data controller of information passing through the Samaritans Radar app. However, in response to concerns that have been raised, we have agreed to voluntarily take on the obligations of a data controller in an attempt to facilitate requests made as far as we can. To this end, whilst a Subject Access Request made under the Data Protection Act can attract a £10 fee, we do not intend to charge any amount to provide information on this occasion.

So, Samaritans continue to deny being data controller for #samaritansradar, although they continue also merely to give assertions, not any legal analysis. But, notwithstanding their belief that they are not a data controller, they are taking on the obligations of one.

I think they need to be careful. A person who knowingly discloses personal data without the consent of the data controller potentially commits a criminal offence under section 55 DPA. One can’t just step in, grab personal data and start processing it, without acting in breach of the law. Unless one is a data controller.

And, in seriousness, this purported adoption of the role of “voluntary data controller” just bolsters the view that Samaritans have been data controllers from the start, for reasons laid out repeatedly on this blog and others. They may have acted as joint data controller with users of the app, but I simply cannot understand how they can claim not to have been determining the purposes for which and the manner in which personal data were processed. And if they were, they were a data controller.


Leave a comment

Filed under Data Protection, social media

So farewell then #samaritansradar…

…or should that be au revoir?

With an interestingly timed announcement (18:00 on a Friday evening) Samaritans conceded that they were pulling their much-heralded-then-much-criticised app “Samaritans Radar”, and, as if some of us didn’t feel conflicted enough criticising such a normally laudable charity, their Director of Policy Joe Ferns managed to get a dig in, hidden in what was purportedly an apology:

We are very aware that the range of information and opinion, which is circulating about Samaritans Radar, has created concern and worry for some people and would like to apologise to anyone who has inadvertently been caused any distress

So, you see, it wasn’t the app, and the creepy feeling of having all one’s tweets closely monitored for potentially suicidal expressions, which caused concern and worry and distress – it was all those nasty people expressing a range of information and opinion. Maybe if we’d all kept quiet the app could have continued on its unlawful and unethical merry way.

However, although the app has been pulled, it doesn’t appear to have gone away

We will…be testing a number of potential changes and adaptations to the app to make it as safe and effective as possible for both subscribers and their followers

There is a survey at the foot of this page which seeks feedback and comment. I’ve completed it, and would urge others to do so. I’ve also given my name and contact details, because one of my main criticisms of the launch of the app was that there was no evidence that Samaritans had taken advice from anyone on its data protection implications – and I’m happy to offer such advice for no fee. As Paul Bernal says, “[Samaritans] need to talk to the very people who brought down the app: the campaigners, the Twitter activists and so on”.

Data protection law’s place in our digital lives is of profound importance, and of profound interest to me. Let’s not forget that its genesis in the 1960s and 1970s was in the concerns raised by the extraordinary advances that computing brought to data analysis. For me some of the most irritating counter-criticism during the recent online debates about Samaritans Radar was from people who equated what the app did to mere searching of tweets, or searching for keywords. As I said before, the sting of this app lay in the overall picture – it was developed, launched and promoted by Samaritans – and in the overall processing of data which went on – it monitored tweets, identified potentially worrying ones and pushed this information to a third party, all without the knowledge of the data subject.

But also irritating were comments from people who told us that other organisations do similar analytics, for commercial reasons, so why, the implication went, shouldn’t Samaritans do it for virtuous ones? It is no secret that an enormous amount of analysis takes place of information on social media, and people should certainly be aware of this (see Adrian Short’s excellent piece here for some explanation), but the fact that it can and does take place a) doesn’t mean that it is necessarily lawful, nor that the law is impotent within the digital arena, and b) doesn’t mean that it is necessarily ethical. And for both those reasons Samaritans Radar was an ill-judged experiment that should never have taken place as it did. If any replacement is to be both ethical and lawful a lot of work, and a lot of listening, needs to be done.

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.

7 Comments

Filed under Data Protection, social media

Samaritans cannot deny being data controller for #samaritansradar

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.

So, Samaritans continue to support the #samaritansradar app, about which I, and many others, have already written. A large number of people suffering from, or with experience of, mental health problems have pleaded with Samaritans to withdraw the app, which monitors the tweets of the people one follows on twitter, applies an algorithm to identify tweets from potentially vulnerable people, and emails that information to the app user, all without the knowledge of the person involved. As Paul Bernal has eloquently said, this is not really an issue about privacy, and nor is it about data protection – it is about the threat many vulnerable people feel from the presence of the app. Nonetheless, privacy and data protection law, in part, are about the rights of the vulnerable; last night (4 November) Samaritans issued their latest sparse statement, part of which dealt with data protection:

We have taken the time to seek further legal advice on the issues raised. Our continuing view is that Samaritans Radar is compliant with the relevant data protection legislation for the following reasons:

•   We believe that Samaritans are neither the data controller or data processor of the information passing through the app

•   All information identified by the app is available on Twitter, in accordance with Twitter’s Ts&Cs (link here). The app does not process private tweets.

•   If Samaritans were deemed to be a data controller, given that vital interests are at stake, exemptions from data protection law are likely to apply

It is interesting that there is reference here to “further” legal advice: none of the previous statements from Samaritans had given any indication that legal or data protection advice had been sought prior to the launch of the app. It would be enormously helpful to discussion of the issue if Samaritans actually disclosed their advice, but I doubt very much that they will do so. Nonetheless, their position appears to be at odds with the legal authorities.

In May this year the Court of Justice of the European Union (CJEU) gave its ruling in the Google Spain case. The most widely covered aspect of that case was, of course, the extent of a right to be forgotten – a right to require Google to remove search results in certain specified cases. But the CJEU was also asked to rule on the question of whether a search engine, such as Google, was a data controller in circumstances in which it engages in the indexing of web pages. Before the court Google argued that

the operator of a search engine cannot be regarded as a ‘controller’ in respect of that processing since it has no knowledge of those data and does not exercise control over the data

and this would appear to be a similar position to that adopted by Samaritans in the first bullet point above. However, the CJEU dismissed Google’s argument, holding that

the operator of a search engine ‘collects’ such data which it subsequently ‘retrieves’, ‘records’ and ‘organises’ within the framework of its indexing programmes, ‘stores’ on its servers and, as the case may be, ‘discloses’ and ‘makes available’ to its users in the form of lists of search results…It is the search engine operator which determines the purposes and means of that activity and thus of the processing of personal data that it itself carries out within the framework of [the activity at issue] and which must, consequently, be regarded as the ‘controller’ in respect of that processing

Inasmuch as I understand how it works, I would submit that #samaritansradar, while not a search engine as such, collects data (personal data), records and organises it, stores it on servers and discloses it to its users in the form of a result. The app has been developed by and launched by Samaritans, it carries their name and seeks to further their aims: it is clearly “their” app, and they are, as clearly, a data controller with attendant legal responsibilities and liabilities. In further proof of this Samaritans introduced, after the app launch and in response to outcry, a “whitelist” of twitter users who have specifically informed Samaritans that they do not want their tweets to be monitored (update on 30 October). If Samaritans are effectively saying they have no role in the processing of the data, how on earth would such a whitelist be expected to work?

And it’s interesting to consider the apparent alternative view that they are implicitly putting forward. If they are not data controller, then who is? The answer must be the users who download and run the app, who would attract all the legal obligations that go with being a data controller. The Samaritans appear to want to back out of the room, leaving app users to answer all the awkward questions.1

Also very interesting is that Samaritans clearly accept that others might have a different view to theirs on the issue of controllership; they suggest that if they were held to be a data controller they would avail themselves of “exemptions” in data protection law relating to “vital interests” to legitimise their activities. One presumes this to be a reference to certain conditions in Schedules 2 and 3 of the Data Protection Act 1998 (DPA). Those schedules contain conditions which must be met in order for the processing of, respectively, personal data and sensitive personal data, to be fair and lawful. As we are here clearly talking about sensitive personal data (personal data relating to someone’s physical or mental health is classed as sensitive), let us look at the relevant condition in Schedule 3:

The processing is necessary—
(a) in order to protect the vital interests of the data subject or another person, in a case where—
(i) consent cannot be given by or on behalf of the data subject, or
(ii) the data controller cannot reasonably be expected to obtain the consent of the data subject, or
(b) in order to protect the vital interests of another person, in a case where consent by or on behalf of the data subject has been unreasonably withheld

Samaritans’ alternative defence founders on the first four words: in what way can this processing be necessary to protect vital interests? The Information Commissioner’s Office explains that this condition only applies

in cases of life or death, such as where an individual’s medical history is disclosed to a hospital’s A&E department treating them after a serious road accident

The evidence suggests this app is actually delivering a very large number of false positives (as it’s based on what seems to be a crude keyword algorithm, this is only to be expected). Given that, and, indeed, given that Samaritans have – expressly – no control over what happens once the app notifies a user of a concerning tweet, it is absolutely preposterous to suggest that the processing is necessary to protect people’s vital interests. Moreover, the condition above also explains that it can only be relied on where consent cannot be given by the data subject or the controller cannot reasonably be expected to obtain consent. Nothing prevents Samaritans from operating an app which would do the same thing (flag a tweet of concern) but basing it on a consent model, whereby someone agrees that their tweets will be monitored in that way. Indeed, such a model would fit better with Samaritans’ stated aim of allowing people to “lead the conversation at their own pace”. It is clear, nonetheless, that consent could be sought for this processing, but that Samaritans have failed to design an app which allows it to be sought.
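It is worth seeing how small a difference this would make in design terms. The sketch below shows the opt-in gate a consent model implies; every name in it is hypothetical, and nothing is drawn from Samaritans’ actual code.

# Hypothetical sketch of a consent-based design; none of these names come
# from Samaritans' actual app.

consented_user_ids: set[str] = set()  # users who have expressly opted in

def register_consent(twitter_user_id: str) -> None:
    """Record that a user has agreed to have their tweets monitored."""
    consented_user_ids.add(twitter_user_id)

def handle_tweet(twitter_user_id: str, tweet_text: str) -> None:
    """Analyse a tweet only where the tweeter has opted in."""
    if twitter_user_id not in consented_user_ids:
        return  # no consent: the tweet is never analysed or stored
    # ...keyword analysis and alerting would go here

Such a gate is the inverse of the “whitelist” Samaritans in fact introduced, which requires people to opt out of monitoring they may never have known was happening.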

The Information Commissioner’s Office is said to be looking into the issues raised by Samaritans’ app. It may be that it will only be through legal enforcement action that it will actually be – as I think it should – removed. But it would be extremely sad if it came to that. It should be removed voluntarily by Samaritans, so they can rethink, re-programme, take full legal advice, but – most importantly – listen to the voices of the most vulnerable, who feel so threatened and betrayed by the app.

1On a strict and nuanced analysis of data protection law users of the app probably are data controllers, acting as joint ones with Samaritans. However, given the regulatory approach of the Information Commissioner they would probably be able to avail themselves of the general exemption from all of the DPA for processing which is purely domestic (although even that is arguably wrong). These are matters for another blog post however, and the fact that users might be held to be data controllers doesn’t alter the fact that Samaritans are, and in a much clearer way

43 Comments

Filed under consent, Data Protection, Information Commissioner, Privacy, social media

Samaritans Radar – serious privacy concerns raised

UPDATE: 31 October

It appears Samaritans have silently tweaked their FAQs (so the text near the foot of this post no longer appears). They now say tweets will only be retained by the app for seven (as opposed to thirty) days, and have removed the words saying the app will retain a “Count of flags against a Twitter Users Friends ID”. Joe Ferns said on Twitter that the inclusion of this in the original FAQs was “a throw back to a stage of the development where that was being considered”. Samaritans also say “The only people who will be able to see the alerts, and the tweets flagged in them, are followers who would have received these Tweets in their current feed already”, but this does not absolve them of their data controller status: a controller does not need to access data in order to determine the purposes for which and the manner in which personal data are processed, and Samaritans are still doing this. Moreover, this changing of the FAQs, with no apparent change to the position that those whose tweets are processed get no fair processing notice whatsoever, makes me more concerned that this app has been released without adequate assessment of its impact on people’s privacy.

END UPDATE

UPDATE: 30 October

Susan Hall has written a brilliant piece expanding on mine below, and she points out that section 12 of the Data Protection Act 1998 in terms allows a data subject to send a notice to a data controller requiring it to ensure no automated decisions are taken by processing their personal data for the purposes of evaluating matters such as their conduct. It seems to me that is precisely what “Samaritans Radar” does. So I’ve sent the following to Samaritans

Dear Samaritans

This is a notice pursuant to section 12 Data Protection Act 1998. Please ensure that no decision is taken by you or on your behalf (for instance by the “Samaritans Radar” app) based solely on the processing by automatic means of my personal data for the purpose of evaluating my conduct.

Thanks, Jon Baines @bainesy1969

I’ll post here about any developments.

END UPDATE

Samaritans have launched a Twitter App “to help identify vulnerable people”. I have only ever had words of praise and awe about Samaritans and their volunteers, but this time I think they may have misjudged the effect, and the potential legal implications of “Samaritans Radar”. Regarding the effect, this post from former volunteer @elphiemcdork is excellent:

How likely are you to tweet about your mental health problems if you know some of your followers would be alerted every time you did? Do you know all your followers? Personally? Are they all friends? What if your stalker was a follower? How would you feel knowing your every 3am mental health crisis tweet was being flagged to people who really don’t have your best interests at heart, to put it mildly? In this respect, this app is dangerous. It is terrifying to think that anyone can monitor your tweets, especially the ones that disclose you may be very vulnerable at that time

As for the legal implications, Samaritans may well be processing sensitive personal data, in circumstances where there may not be a legal basis to do so. And some rather worrying misconceptions have accompanied the app launch. The first and most concerning of these is in the FAQs prepared for the media. In reply to the question “Isn’t there a data privacy issue here? Is Samaritans Radar spying on people?” the following answer is given

All the data used in the app is public, so user privacy is not an issue. Samaritans Radar analyses the Tweets of the people you follow, which are public Tweets. It does not look at private Tweets

The idea that, because something is in the public domain it cannot engage privacy issues is a horribly simplistic one, and if that constitutes the impact assessment undertaken, then serious questions have to be asked. Moreover, it doesn’t begin to consider the data protection considerations: personal data is personal data, whether it’s in the public domain or not. A tweet from an identified tweeter is inescapably the personal data of that person, and, if it is, or appears to be, about the person’s physical or mental health, then it is sensitive personal data, afforded a higher level of protection under the Data Protection Act 1998 (DPA). It would appear that Samaritans, as the legal person who determines the purposes for which, and the manner in which, the personal data are processed (i.e. they have produced an app which identifies a tweet on the basis of words, or sequences of words, and pushes it to another person) are acting as a data controller. As such, any processing has to be in accordance with their obligation to abide by the data protection principles in Schedule One of the DPA. The first principle says that personal data must be processed fairly and lawfully, and that a condition for processing contained in Schedule Two (and for sensitive personal data Schedule Two and Three) must be met. Looking only at Schedule Three, I struggle to see the condition which permits the app to identify a tweet, decide that it is from a potentially suicidal person and send it as such to a third party. The one condition which might apply, the fifth “The information contained in the personal data has been made public as a result of steps deliberately taken by the data subject” is undercut by the fact that the data in question is not just the public tweet, but the “package” of that tweet with the fact that the app (not the tweeter) has identified it as a potential call for help.

The reliance on “all the data used in the app is public, so user privacy is not an issue” has carried through in messages sent on twitter by Samaritans’ Director of Policy, Research and Development, Joe Ferns, in response to people raising concerns, such as

existing Twitter search means anyone can search tweets unless you have set to private. #SamaritansRadar is like an automated search

Again, this misses the point that it is not just “anyone” doing a search on twitter, it is an app in Samaritans name which specifically identifies (in an automated way) certain tweets as of concern, and pushes them to third parties. Even more concerning was Mr Ferns’ response to someone asking if there was a way to opt out of having their tweets scanned by the app software:

if you use Twitter settings to mark your tweets private #SamaritansRadar will not see them

What he is actually suggesting there is that, to avoid what some people clearly feel are intrusive actions, they should lock their account and make it private. And, of course, going back to @elphiemcdork’s points, it is hard to avoid the conclusion that those who will do this might be some of the most vulnerable people.

A further concern is raised (one which confirms the data controller point above) about retention and reuse of data. The media FAQ states

Where will all the data be stored? Will it be secure? The data we will store is as follows:
• Twitter User ID – a unique ID that is associated with a Twitter account
• All Twitter User Friends ID’s – The same as above but for all the users friends that they follow
• Any flagged Tweets – This is the data associated with the Tweet, we will store the raw data for the Tweet as well
• Count of flags against a Twitter Users Friends ID – We store a count of flags against an individual User
• To prevent the Database growing exponentially we will remove flagged Tweets that are older than 30 days.

So it appears that Samaritans will be amassing data on unwitting twitter users, and in effect profiling them. This sort of data is terrifically sensitive, and no indication is given regarding the location of this data, and security measures in place to protect it.
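To picture what this amassing involves, here is a hypothetical sketch of the records described in the FAQ above; the field names are my assumptions, not Samaritans’ actual schema.

# Hypothetical sketch of the records the media FAQ describes; the field
# names are assumptions, not Samaritans' actual schema.
from dataclasses import dataclass, field

@dataclass
class FlaggedTweet:
    tweet_id: str
    raw_tweet: str   # "we will store the raw data for the Tweet as well"
    flagged_on: str  # flagged Tweets older than 30 days are removed

@dataclass
class MonitoredUser:
    twitter_user_id: str  # unique ID associated with a Twitter account
    friend_ids: list[str] = field(default_factory=list)  # everyone they follow
    flag_count: int = 0   # running count of flags against this user
    flagged_tweets: list[FlaggedTweet] = field(default_factory=list)

Even without a name attached, identifiers like these link straight back to individuals: they are personal data, whether or not the underlying tweets were public.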

The Information Commissioner’s Office recently produced some good guidance for app developers on Privacy in Mobile Apps. The guidance commends the use of Privacy Impact Assessments when developing apps. I would be interested to know if one was undertaken for Samaritans Radar, and, if so, how it dealt with the serious concerns that have been raised by many people since its launch.

This post was amended to take into account the observations in the comments by Susan Hall, to whom I give thanks. I have also since seen a number of excellent blog posts dealing with wider concerns. I commend, in particular, this by Adrian Short and this by @latentexistence


33 Comments

Filed under consent, Data Protection, Information Commissioner, Privacy, social media

Monitoring of blogs and lawful/unlawful surveillance

Tim Turner wrote recently about the data protection implications of the monitoring of Sara Ryan’s blog by Southern Health NHS Trust. Tim’s piece is an exemplary analysis of how the processing of personal data which is in the public domain is still subject to compliance with the Data Protection Act 1998 (DPA):

there is nothing in the Data Protection Act that says that the public domain is off-limits. Whatever else, fairness still applies, and organisations have to accept that if they want to monitor what people are saying, they have to be open about it

But it is not just data protection law which is potentially engaged by the Trust’s actions. Monitoring of social media and networks by public authorities for the purposes of gathering intelligence might well constitute directed surveillance, bringing us explicitly into the area of human rights law. Sir Christopher Rose, the Chief Surveillance Commissioner, said in his most recent annual report

my commissioners remain of the view that the repeat viewing of individual “open source” sites for the purpose of intelligence gathering and data collation should be considered within the context of the protection that RIPA affords to such activity

“RIPA” there of course refers to the complex Regulation of Investigatory Powers Act 2000 (RIPA) (parts of which were reputedly “intentionally drafted for maximum obscurity”)1. What is not complex, however, is to note which public authorities are covered by RIPA when they engage in surveillance activities. A 2006 statutory instrument2 removed NHS Trusts from the list (at Schedule One of RIPA) of relevant public authorities whose surveillance was authorised by RIPA. Non-inclusion on the Schedule One lists doesn’t as a matter of fact or law mean that a public authority cannot undertake surveillance. This is because of the rather odd provision at section 80 of RIPA, which effectively explains that surveillance is lawful if carried out in accordance with RIPA, but surveillance not carried out in accordance with RIPA is not ipso facto unlawful. As the Investigatory Powers Tribunal put it, in C v The Police and the Home Secretary IPT/03/32/H

Although RIPA provides a framework for obtaining internal authorisations of directed surveillance (and other forms of surveillance), there is no general prohibition in RIPA against conducting directed surveillance without RIPA authorisation. RIPA does not require prior authorisation to be obtained by a public authority in order to carry out surveillance. Lack of authorisation under RIPA does not necessarily mean that the carrying out of directed surveillance is unlawful.

But it does mean that where surveillance is not specifically authorised by RIPA questions would arise about its legality under Article 8 of the European Convention on Human Rights, as incorporated into domestic law by the Human Rights Act 1998. The Tribunal in the above case went on to say

the consequences of not obtaining an authorisation under this Part may be, where there is an interference with Article 8 rights and there is no other source of authority, that the action is unlawful by virtue of section 6 of the 1998 Act.3

So, when the Trust was monitoring Sara Ryan’s blog, was it conducting directed surveillance (in a manner not authorised by RIPA)? RIPA describes directed surveillance as covert (and remember, as Tim Turner pointed out, no notification had been given to Sara) surveillance which is “undertaken for the purposes of a specific investigation or a specific operation and in such a manner as is likely to result in the obtaining of private information about a person (whether or not one specifically identified for the purposes of the investigation or operation)” (there is a further, third limb which is not relevant here). One’s immediate thought might be that no private information was obtained or intended to be obtained about Sara, but one must bear in mind that, by section 26(10) of RIPA, “‘private information’, in relation to a person, includes any information relating to his private or family life” (emphasis added). This interpretation of “private information” of course is to be read alongside the protection afforded to the respect for one’s private and family life under Article 8. The monitoring of Sara’s blog, and the matching of entries in it against incidents on the ward on which her late son, LB, was placed, unavoidably resulted in the obtaining of information about her and LB’s family life. This, of course, is the sort of thing that Sir Christopher Rose warned about in his most recent report, in which he went on to say

In cash-strapped public authorities, it might be tempting to conduct on line investigations from a desktop, as this saves time and money, and often provides far more detail about someone’s personal lifestyle, employment, associates, etc. But just because one can, does not mean one should.

And one must remember that he was talking about cash-strapped public authorities whose surveillance could be authorised under RIPA. When one remembers that this NHS Trust was not authorised to conduct directed surveillance under RIPA, one struggles to avoid the conclusion that monitoring was potentially in breach of Sara’s and LB’s human rights.

1See footnote to Caspar Bowden’s submission to the Intelligence and Security Committee
2The Regulation of Investigatory Powers (Directed Surveillance and Covert Human Intelligence Sources) (Amendment) Order 2006
3This passage was apparently lifted directly from the explanatory notes to RIPA

3 Comments

Filed under Data Protection, human rights, NHS, Privacy, RIPA, social media, surveillance, surveillance commissioner