The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.
So, Samaritans continue to support the #samaritansradar app, about which I, and many others, have already written. A large number of people suffering from, or with experience of, mental health problems have pleaded with Samaritans to withdraw the app, which monitors the tweets of the people one follows on Twitter, applies an algorithm to identify tweets from potentially vulnerable people, and emails that information to the app user, all without the knowledge of the person involved. As Paul Bernal has eloquently said, this is not really an issue about privacy, nor about data protection – it is about the threat many vulnerable people feel from the presence of the app. Nonetheless, privacy and data protection law are, in part, about the rights of the vulnerable; last night (4 November) Samaritans issued their latest sparse statement, part of which dealt with data protection:
We have taken the time to seek further legal advice on the issues raised. Our continuing view is that Samaritans Radar is compliant with the relevant data protection legislation for the following reasons:
• We believe that Samaritans are neither the data controller or data processor of the information passing through the app
• All information identified by the app is available on Twitter, in accordance with Twitter’s Ts&Cs (link here). The app does not process private tweets.
• If Samaritans were deemed to be a data controller, given that vital interests are at stake, exemptions from data protection law are likely to apply
It is interesting that there is reference here to “further” legal advice: none of the previous statements from Samaritans had given any indication that legal or data protection advice had been sought prior to the launch of the app. It would be enormously helpful to discussion of the issue if Samaritans actually disclosed their advice, but I doubt very much that they will do so. Nonetheless, their position appears to be at odds with the legal authorities.
In May this year the Court of Justice of the European Union (CJEU) gave its ruling in the Google Spain case. The most widely covered aspect of that case was, of course, the extent of a right to be forgotten – a right to require Google to remove certain search results in specified cases. But the CJEU was also asked to rule on the question of whether a search engine, such as Google, was a data controller in circumstances in which it engages in the indexing of web pages. Before the court Google argued that
the operator of a search engine cannot be regarded as a ‘controller’ in respect of that processing since it has no knowledge of those data and does not exercise control over the data
and this would appear to be a similar position to that adopted by Samaritans in the first bullet point above. However, the CJEU dismissed Google’s argument, holding that
the operator of a search engine ‘collects’ such data which it subsequently ‘retrieves’, ‘records’ and ‘organises’ within the framework of its indexing programmes, ‘stores’ on its servers and, as the case may be, ‘discloses’ and ‘makes available’ to its users in the form of lists of search results…It is the search engine operator which determines the purposes and means of that activity and thus of the processing of personal data that it itself carries out within the framework of [the activity at issue] and which must, consequently, be regarded as the ‘controller’ in respect of that processing
Inasmuch as I understand how it works, I would submit that #samaritansradar, while not a search engine as such, collects data (personal data), records and organises it, stores it on servers and discloses it to its users in the form of a result. The app has been developed and launched by Samaritans, it carries their name and seeks to further their aims: it is clearly “their” app, and they are, as clearly, a data controller with attendant legal responsibilities and liabilities. In further proof of this, Samaritans introduced, after the app launch and in response to outcry, a “whitelist” of Twitter users who have specifically informed Samaritans that they do not want their tweets to be monitored (update on 30 October). If Samaritans are effectively saying they have no role in the processing of the data, how on earth would such a whitelist be expected to work?
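To make the mapping onto the CJEU’s verbs concrete, here is a minimal sketch – in Python, with every name and trigger phrase invented for illustration, since the actual implementation has not been published – of the kind of pipeline the app appears to operate:

```python
# Hypothetical sketch only: the real Samaritans Radar code is not public,
# and the trigger phrases below are invented for illustration.

TRIGGER_PHRASES = ["help me", "hate myself", "can't go on"]

stored_alerts = []  # retained on the operator's servers ('stores')

def flag_tweet(text: str) -> bool:
    """'Organises' the collected tweets into flagged / not flagged."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in TRIGGER_PHRASES)

def radar_pipeline(followed_tweets: list[tuple[str, str]], subscriber_email: str) -> None:
    """'Collects' each (author, text) tweet from accounts the subscriber follows."""
    for author, text in followed_tweets:
        if flag_tweet(text):
            alert = {"author": author, "tweet": text, "to": subscriber_email}
            stored_alerts.append(alert)  # 'records' and 'stores'
            # 'Discloses' / 'makes available' to the app user; the tweet's
            # author is never notified that this has happened.
            print(f"Email to {subscriber_email}: possible concerning tweet "
                  f"by @{author}: {text!r}")

radar_pipeline([("someone", "I just hate myself today")], "subscriber@example.com")
```

Whoever writes and runs that loop determines the “purposes and means” of the processing; on the Google Spain test, that is the controller.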
And it’s interesting to consider the apparent alternative view that they are implicitly putting forward. If they are not data controller, then who is? The answer must be the users who download and run the app, who would attract all the legal obligations that go with being a data controller. The Samaritans appear to want to back out of the room, leaving app users to answer all the awkward questions. [1]
Also very interesting is that Samaritans clearly accept that others might take a different view to theirs on the issue of controllership; they suggest that if they were held to be a data controller they would avail themselves of “exemptions” in data protection law relating to “vital interests” to legitimise their activities. One presumes this to be a reference to certain conditions in Schedules 2 and 3 of the Data Protection Act 1998 (DPA). Those schedules contain conditions which must be met in order for the processing of, respectively, personal data and sensitive personal data to be fair and lawful. As we are here clearly talking about sensitive personal data (personal data relating to someone’s physical or mental health is classed as sensitive), let us look at the relevant condition in Schedule 3:
The processing is necessary—
(a) in order to protect the vital interests of the data subject or another person, in a case where—
(i) consent cannot be given by or on behalf of the data subject, or
(ii) the data controller cannot reasonably be expected to obtain the consent of the data subject, or
(b) in order to protect the vital interests of another person, in a case where consent by or on behalf of the data subject has been unreasonably withheld
Samaritans’ alternative defence founders on the first four words: in what way can this processing be necessary to protect vital interests? The Information Commissioner’s Office explains that this condition only applies
in cases of life or death, such as where an individual’s medical history is disclosed to a hospital’s A&E department treating them after a serious road accident
The evidence suggests this app is actually delivering a very large number of false positives (as it’s based on what seems to be a crude keyword algorithm, this is only to be expected). Given that, and, indeed, given that Samaritans have – expressly – no control over what happens once the app notifies a user of a concerning tweet, it is absolutely preposterous to suggest that the processing is necessary to protect people’s vital interests. Moreover, the condition above also provides that it can only be relied on where consent cannot be given by the data subject or the controller cannot reasonably be expected to obtain consent. Nothing prevents Samaritans from operating an app which would do the same thing (flag a tweet of concern) but based on a consent model, whereby someone agrees that their tweets will be monitored in that way. Indeed, such a model would fit better with Samaritans’ stated aim of allowing people to “lead the conversation at their own pace”. Consent clearly could be sought for this processing; Samaritans have simply failed to design an app which allows it to be sought.
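Returning to the false-positive point: a toy matcher (again with invented phrases – the real trigger list has not been published) shows why a keyword approach both over-fires on figures of speech and misses genuine distress:

```python
# Invented trigger list, purely to illustrate the failure modes of keyword
# matching; the app's actual keyword list is not public.
TRIGGER_PHRASES = ["want to die", "hate myself"]

def flag_tweet(text: str) -> bool:
    return any(phrase in text.lower() for phrase in TRIGGER_PHRASES)

print(flag_tweet("Monday meetings make me want to die"))     # True: a false positive
print(flag_tweet("I'm really struggling and need someone"))  # False: genuine distress missed
```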
The Information Commissioner’s Office is said to be looking into the issues raised by Samaritans’ app. It may be that only legal enforcement action will actually get the app removed – as I think it should be. But it would be extremely sad if it came to that. It should be removed voluntarily by Samaritans, so they can rethink, re-programme, take full legal advice, but – most importantly – listen to the voices of the most vulnerable, who feel so threatened and betrayed by the app.
[1] On a strict and nuanced analysis of data protection law, users of the app probably are data controllers, acting as joint ones with Samaritans. However, given the regulatory approach of the Information Commissioner, they would probably be able to avail themselves of the general exemption from all of the DPA for processing which is purely domestic (although even that is arguably wrong). These are matters for another blog post, however, and the fact that users might be held to be data controllers doesn’t alter the fact that Samaritans are, and in a much clearer way.
What has the ICO said about it?
They’ve merely said “we’re aware of concerns raised about this app and are contacting the Samaritans to find out more about how the app works”
The Information Commissioner’s view is now essential; if they agree with the Samaritans’ position, it will suggest a Durant-style rewrite of the common understanding of how data protection works. The ICO cannot flunk or fudge this. We need a clear decision, unambiguously expressed.
I have to go now, as a large pig has just started to flex its wings.
It might be used to abuse people, but we hope it won’t. We have no evidence that this will help anyone, but it’s a vital interest. We don’t control or process the data, but you can opt out by messaging us. Is that clear?
You left out the bit about how the app can make a vital difference and save lives, but without conveying any information that you wouldn’t have received anyway
As has been said until some of us are blue in the face, this app does more than convey information one would have received anyway. It applies an algorithm to tweets, and categorises certain of them as being from someone potentially vulnerable. It then highlights this tweet to the user of the app. All of this without the subject being aware.
I understand what Doremus is getting at. If it conveys only information that people would have received anyway, they cannot hope to hide behind the vital interest argument. You can’t have it both ways.
I’d also say that there could be an argument that, just by following somebody, a tweet is not being ‘received’. Personally, I would only count it as being received if it was an @ mention.
I completely agree about the assessment of a data controller – as the court says, the search engine is a data controller, and in this instance, so are the Samaritans. To contest otherwise is ludicrous. The claim of exemption around consent also seems bizarre.
However, in the case of the search provider there is another interesting facet – the fact that a search index can access some information doesn’t establish that the controller has a legal right to use the information, that the content has a legal right to exist, or that it can legally be displayed out of context.
Just because you put something on the internet, doesn’t automatically mean you provide a licence for others to use it. It may just be defamatory, but finding a jurisdiction to remove it may be hard. And a search index may surface “ancient” information as if it was current.
But Twitter has terms of service that state “you grant us a worldwide, non-exclusive, royalty-free license (with the right to sublicense) to use, copy, reproduce, process, adapt, modify, publish, transmit, display and distribute such Content”. So issues of licence don’t occur – if you tweet something, you provide a licence to use it, providing it isn’t misrepresented. This licence specifically includes the right to make it “available to other companies, organizations or individuals”.
I’m not attempting to make a judgement here, but it would be far, far easier to make the case that this licence provides Samaritans with the right to process data, than any of the claims that they are actually making.
We can ask the Samaritans to withdraw the app, and we can ask the ICO to evaluate the legality of it. However, we can only do so because it has been publicised. How many other applications may be out there that do similar or worse, that can be operating on our data without us even knowing they exist? As well as operating without our knowledge, it could be that they are doing so entirely legally, because we have provided a licence to do so.
Yes, Samaritans (and Twitter) should be thinking very carefully about the level of trust they want to have, when defending this app. But to make a real difference to the protection of vulnerable people, there needs to be more control over the licence and use of the content you provide within Twitter itself.
The license is provided to Twitter. No license has been granted by the end user to the Samaritans to use this data. It does not excuse or nullify their obligations under the Data Protection Act. Twitter may share the tweets, but our implicit consent to that does not extend the license to the third party to do with as they wish. All it arguably does is absolve Twitter of any liability for misuse.
The license explicitly contains the right to sub-license. So they can grant license to a third-party for all of the rights that we have granted to Twitter. Twitter is likely to argue that this is precisely the kind of scenario that this provision is intended to cover – that a user of the API can do everything that we’ve consented Twitter to do – subject to their developer T&Cs – without seeking the explicit permission of the people tweeting.
Whether the license granted to Twitter in the first instance is sufficient to cover this processing, and whether it is sufficient to be deemed consent under the DPA, may be debatable.
It is impossible for consent to have been provided in this way when they do not make the data subject aware of decisions being made about data they have collected. The Twitter terms and conditions helpfully remind us of our rights and that we own any content that we provide to the service that was ours to put there. That’s nice. That also means that I have the right to demand that Samaritans do not keep any of my personal data that was provided through Twitter or from any other channel. I am not able to exercise that right because the processing of my data is not disclosed. Their answer to this (an afterthought) was to create an opt-out list. That’s insufficient, because in order for that to work they still need to store data about me. A bit of a Catch-22. There is no way that this forms the basis of consent. As Tim Turner said to me earlier, it is not a freely given and informed indication of the subject’s wishes. This is generally not a problem for most people when the data is used in ways similar to how it was originally conveyed, or for anonymous marketing purposes. It gets very different very quickly in sensitive scenarios such as the current one.
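To make the Catch-22 concrete (a sketch with made-up names, not a claim about the actual code): honouring an opt-out requires the operator to hold, and consult on every check, an identifier for the very person who opted out.

```python
# Sketch of the opt-out Catch-22 (all names made up): the whitelist only
# works if the operator permanently stores an identifier for each person
# who opted out -- i.e. it must process personal data about exactly the
# people who asked it not to.

whitelist: set[str] = set()

def opt_out(twitter_id: str) -> None:
    whitelist.add(twitter_id)  # the data subject's identifier is now held

def should_process(twitter_id: str) -> bool:
    return twitter_id not in whitelist  # every check consults the stored data

opt_out("@vulnerable_user")
print(should_process("@vulnerable_user"))  # False -- but only because the ID is stored
```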
Sorry, but no. You can grant a licence, without even being specifically aware of who you are granting the licence to, or what purpose they will actually use it for – providing they remain within the terms of the licence.
If the licence is sufficient to satisfy the conditions of consent, then it is. That somebody might reasonably say later that I didn’t knowingly consent to a specific type of use by a specific entity that didn’t exist at the time would be irrelevant in those circumstances.
Incidentally, have a look at section VII (A) here: https://dev.twitter.com/overview/terms/agreement-and-policy
“User Protection. You will not knowingly: 1) allow or assist any government entities, law enforcement, or other organizations to conduct surveillance on Content or obtain information on Twitter’s users or their Tweets that would require a subpoena, court order, or other valid legal process, or that would otherwise have the potential to be inconsistent with our users’ reasonable expectations of privacy; … “
I am aware of the developer T&Cs, and have raised them as a possible route to explore.
However, the first part regarding subpoena / court order appears to be irrelevant, given that these are “public” tweets.
Which brings us down to a “reasonable expectation of privacy”. I would agree that the Samaritans’ use can be classed as not fulfilling that criterion.
But also note that these are terms and conditions between Twitter and a third party, not an end user contract. Twitter could use them to deny access to a third party, and you could petition Twitter to consider this. It may not have any legal standing for – say – a class action case.
“User Protection. You will not knowingly: 1) allow or assist any government entities, law enforcement, or other organizations to conduct surveillance on Content or obtain information on Twitter’s users or their Tweets that would require a subpoena, court order, or other valid legal process, or that would otherwise have the potential to be inconsistent with our users’ reasonable expectations of privacy; … “
Lawyers can argue about how to interpret this. I’m not sure how the addition or removal of commas might change it. But, “or other valid legal process” can certainly include the workings of the Data Protection Act, so even if that clause must be read alongside “to conduct surveillance on Content”, I would say it’s caught. We clearly have surveillance on Content that is contrary to the DPA, since the Content is then stored and processed with automatic decisions made, all without the consent or even knowledge of the data subject.
Lawyers can argue about it, but as these are developer terms, not legislation, I can’t see it having any legal ramifications beyond a dispute between Twitter and a “third party partner”. But in this case, I read “valid legal process” to mean an equivalent of a court order, not a set of principles.
You may not even be able to argue that they are conducting surveillance, as they aren’t looking at the outputs – but they could be assisting others (the users that sign up) to conduct surveillance. That’s still only relevant to the agreement between Twitter and the third party, not the DPA.
Also, note that in Schedule 3 of the DPA (relating to processing of sensitive data):
“The information contained in the personal data has been made public as a result of steps deliberately taken by the data subject.”
It is unequivocal that by tweeting the data subject has taken steps deliberately to make that data public. That would make the DPA a dead duck as far as this is concerned (although again, not something that was cited in defence).
Graham, it makes no difference that the data subject is deliberately making the information public. The data subject is doing that in a way that s/he controls, and can delete the content if they no longer want it to be out there. Deleting the content will not have prevented the followers who subscribe from getting alerts when the content was pushed out. So in effect, the subscriber will be receiving information they would not have seen otherwise – if, say, the data subject was speaking to only a handful of people in the middle of the night. Not all of the information was deliberately made public by the data subject. A judgement was made by a third party that there was a mental health dimension, and that was added to the context, increasing the sensitivity. This is particularly true in cases of false positives.
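In concrete terms (again a hypothetical sketch, not the actual implementation): once an alert has been pushed out, deleting the tweet recalls nothing.

```python
# Hypothetical: an alert already emailed to a subscriber outlives the tweet
# that triggered it. All data here is invented for illustration.

timeline = [{"id": 1, "text": "I hate myself"}]                   # the 'public' tweet
sent_alerts = [{"tweet_id": 1, "to": "subscriber@example.com"}]   # already pushed

def delete_tweet(tweet_id: int) -> None:
    """Removes the tweet from the timeline -- but not from anyone's inbox."""
    timeline[:] = [t for t in timeline if t["id"] != tweet_id]

delete_tweet(1)
print(timeline)     # [] -- the data subject has withdrawn the tweet
print(sent_alerts)  # the alert persists; the disclosure cannot be undone
```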
The developer terms are not binding between Twitter and the end user, but they impact the fairness of the processing. The DPA is very much active here.
You can’t say that an article specifically cited in the DPA has no relevance to the DPA. I’m not saying that the data controller is not subject to the DPA, but rather that if they meet the tests of the DPA, they are complying with it – and this is one of those tests.
If a data subject deliberately makes the information public, then a data controller is entitled to do anything which that legally allows them to do at the time – even if it can’t be undone later. The argument that you can limit which followers see that content by deleting it is flimsy – you can’t necessarily know who will and won’t see it.
In this case, all the information was deliberately made public by the data subject at the time it was collected and processed. That has to be considered when applying the DPA.
I addressed the fifth condition of Sch 3 in my earlier post on this. One has to consider what personal data and what processing we are actually dealing with:
” The one condition which might apply, the fifth “The information contained in the personal data has been made public as a result of steps deliberately taken by the data subject” is undercut by the fact that the data in question is not just the public tweet, but the “package” of that tweet with the fact that the app (not the tweeter) has identified it as a potential call for help.”
Quite so. What they are in fact doing is creating sensitive personal data without consent by tying data to a judgement about it. And the judgement is likely to often be wrong, so a person is identified as potentially having mental health problems when there are none. Incorrect sensitive personal data shared is just as bad as correct sensitive personal data unlawfully shared.
Wouldn’t another viable line of argument be that my licence agreement is contingent upon Twitter enforcing their 3rd-party T&Cs in a reasonable and responsible manner?
That may be possible, although the assessment of “reasonable” could be vague.
Also, note that Twitter can only enforce their T&Cs if they are aware of an application not obeying them (they wouldn’t in this case be able to determine that from the API usage). And we can only petition Twitter to consider that the T&Cs aren’t being upheld when we are aware of an application and what it is doing.
Not all applications are going to be as well publicised or scrutinized.
So relying on the T&Cs to protect user privacy would not be adequate in the general case.
Twitter may not have to enforce those T&Cs for the license argument to have traction. Part of their argument is that they can do this because everything is provided publicly on Twitter anyway. If they are in violation, then that argument is weakened.
“3. Respect Users’ Control and Privacy, a, i:
a) Get the user’s express consent before you do any of the following:
i) Take any actions on a user’s behalf, including posting Content, following/unfollowing other users, modifying profile information, or adding hashtags or other data to the user’s Tweets. A user authenticating through your Service does not constitute user consent.”
There you have it. If a user authenticating through the service does not constitute user consent, then certainly there is no implied consent from other users to have actions taken on their behalf, such as alerting their followers to a trigger activated by a tweet. So there is no consent under the Data Protection Act, and the developer terms of service appear to be violated as well.
https://dev.twitter.com/overview/terms/agreement-and-policy#I._Guiding_Principles
d) and e) from the same section of the terms:
“d) If your Service allows users to post Content to your Service and Twitter, then, before publishing to the Service:
i. Explain how you will use the Content;
ii. Obtain proper permission to use the Content; and
iii. Continue to use such Content in accordance with this Policy in connection with the Content.
e) Display your Service’s privacy policy to users before download, installation or sign up of your application. You must comply with your privacy policy, which must clearly disclose the information you collect from users and how you use and share that information, including with Twitter.”
The app posts content to their service, in that it sends an email containing or referring to the content, which is publishing. None of these conditions have been met with regard to the user who owns the content.
e) they comply with respect to the user signing up for the service.
d) This concerns content being supplied by the signed-up user to the service for use on Twitter – e.g. a web-based Twitter client that allows users to write posts. It is saying that you have to obtain that content in a way that is consistent with the Twitter ecosystem, so that it can fit into it. I don’t believe that it applies to content being taken from Twitter to be used in the third-party service.
e) that’s not good enough when you are processing someone else’s data in a hidden and altogether unexpected way.
d) I suppose it doesn’t allow users of the service to post content to the service, but we’re really seeing that the terms are written assuming that the user of the service is the one supplying the content. That’s been a fair assumption up until now. If not the letter, then the spirit of the terms is violated. The content being consumed by the users of the service is not the original content. It is derived from the original content with additional context supplied by the service.
e) I believe that this only applies to the signing up user. Usage of content provided by other users would be covered by the licence that they agree to in using the service – that’s why the licence terms are there.
d) It’s written that way, as consumption of the content is meant to be covered by the licence granted. The content displayed is not modified in any way that is inconsistent with the T&C. It’s arguable how much context is being added, or whether that is relevant.
One thing is clear: this is the first application that has meaningfully used content in a way that challenges people’s assumptions about the terms of use. But “fair assumption up until now” doesn’t mean it ever was the case.
I believe this only relates to making changes in Twitter, not the use of content supplied by Twitter.
That’s probably true, but it’s arguable.
Reblogged this on marisa Feathers and commented:
Important conversation for Twitter users who live with mental illness.
A really interesting exchange. I don’t think there’s a realistic chance of directly asserting third party rights through the developer terms, but they might well be persuasive when it comes to an assessment of fairness of processing under the first data protection principle: processing of personal data which might be held to contravene (or stretch) developer terms is unlikely to be fair if it is adversely affecting data subjects’ rights.
What is sadly becoming clear is that this sort of legal/technical analysis, which really needs to be undertaken by the Information Commissioner, is going to be necessary, as Samaritans show no sign of listening to the broader ethical arguments.
Pingback: Samaritans Radar and the big questions… | Quantumplations
Here’s why they can’t claim to have consent, Graham. Paragraph 56.
See the ICO’s report, big-data-and-data-protection.pdf:
“56. If an organisation is relying on people’s consent as the condition for processing their personal data, then that consent must be freely given, specific and informed. This means people must be able to understand what the organisation is going to do with their data and there must be a clear indication that they consent to it. If an organisation has collected personal data for one purpose and then decides to start analysing it for completely different purposes (or to make it available for others to do so) then it needs to make its users aware of this. This is particularly important if the organisation is planning to use the data for a purpose that is not apparent to the individual because it is not obviously connected with their use of a service. For example, if a social media company were selling on the wealth of personal data of its users to another company for other purposes.”
Paragraph 69 further undermines their arguments for not needing consent.
“69. In our view, a key factor in deciding whether a new purpose is incompatible with the original purpose is whether it is fair. In particular, this means considering how the new purpose affects the privacy of the individuals concerned and whether it is within their reasonable expectations that their data could be used in this way. If, for example, information that people have put on social media is going to be used to assess their health risks or their credit worthiness, or to market certain products to them, then unless they are informed of this and asked to give their consent, it is unlikely to be either fair or compatible. Where the new purpose would be otherwise unexpected, and it involves making decisions about them as individuals, then in most cases the organisation concerned will need to seek specific consent, in addition to establishing whether the new purpose is incompatible with the original reason for processing the data.”
I look forward to your fisking of this.
Is Facebook’s news feed a fair use of data? That’s a non-transparent algorithm highlighting status updates from your connections based on what it perceives the consumer (not the provider(s)) might consider most relevant. It was introduced without consent and applied to data that was collected in ignorance of it. If it had fallen within jurisdiction, should that have been considered a DPA issue?
The logical model of what Radar is doing is no different – except “relevance” is defined as things that indicate the subject may be in need of help and support. At what point does it cross the line to be unfair?
But to get back to quoting obligations in processing data:
“CHOICE: An organization must offer individuals the opportunity to choose (opt out) whether their personal information is (a) to be disclosed to a third party(1) or (b) to be used for a purpose that is incompatible with the purpose(s) for which it was originally collected or subsequently authorized by the individual. Individuals must be provided with clear and conspicuous, readily available, and affordable mechanisms to exercise choice.
For sensitive information (i.e. personal information specifying medical or health conditions, racial or ethnic origin, political opinions, religious or philosophical beliefs, trade union membership or information specifying the sex life of the individual), they must be given affirmative or explicit (opt in) choice if the information is to be disclosed to a third party or used for a purpose other than those for which it was originally collected or subsequently authorized by the individual through the exercise of opt in choice. In any case, an organization should treat as sensitive any information received from a third party where the third party treats and identifies it as sensitive.”
That doesn’t come from the DPA – it comes from the Safe Harbor principles, which Twitter is signed up to, and has a “current” filing. If the DPA applies to Samaritans’ handling of the data, then Twitter is probably failing to comply with its Safe Harbor filing by not providing a means to opt out.
“Is Facebook’s news feed a fair use of data?”
I don’t know. Probably, since it is not a long way off from the original purpose. I’m not aware of any serious complaints about it. If there were, then that’s something Facebook should pay close attention to. The logical model of Radar may be similar, but it defies reasonable expectation and it concerns sensitive personal data that it either actually creates or makes more sensitive through its processing.
James O’Malley has mentioned some of the unexpected things Facebook has done that annoyed users: http://www.techdigest.tv/2014/11/the-samaritans-radar-app-is-a-reminder-of-how-much-data-about-us-is-public-and-how-we-really-have-no-control-over-how-it-is-used.html
They are not going to be called out on it unless someone decides it is serious enough to bring to the attention of information authorities. That is a real indicator of whether or not the new purpose is compatible.
Good point about Safe Harbor. I completely agree that Twitter needs to step up here and take more responsibility for how apps process data in incompatible ways. That doesn’t alleviate the responsibilities that Samaritans must face up to though.
Pingback: So farewell then #samaritansradar… | informationrightsandwrongs
Pingback: Not Listening: Blog Round-Up On The Awful #SamaritansRadar App | Quiet Riot Girl
Pingback: Suspending Samaritans Radar: inadequate mitigation of risks from suspension | jonmendel
Pingback: ICO confirm they are considering enforcement action over #samaritansradar app | informationrightsandwrongs
Pingback: ICO: Samaritans Radar failed to comply with Data Protection Act | informationrightsandwrongs
@ Graham: “User Protection. You will not [use the Twitter API to] knowingly: 1) allow or assist any government entities, law enforcement, or other organizations to conduct surveillance on Content or obtain information on Twitter’s users or their Tweets that would require a subpoena, court order, or other valid legal process, or that would otherwise have the potential to be inconsistent with our users’ reasonable expectations of privacy; … “
Does that make it any clearer? And yes, click-wrap ‘agreements’ have been found to be enforceable in a court of law, the use of the service being deemed to be sufficient consideration to form a contract.