Category Archives: consent

ICO finds Lib Dems in breach of ePrivacy law

A few months ago, when I entered my email address on the Liberal Democrats’ website to say that I agreed with the statement 

Girls should never be cut. We must end FGM

I hoped I wouldn’t subsequently receive spam emails promoting the party. However, I had no way of knowing whether I would, because there was no obvious statement explaining what would happen with my email address. What was clear was that I had not given specific consent to receive such emails.

Nonetheless, I did get them, and continue to do so – emails purportedly from Nick Clegg, from Paddy Ashdown and from others, promoting their party and sometimes soliciting donations.

I happen to think the compiling of a marketing database by use of serious and emotive subjects such as female genital mutilation is extraordinarily tasteless. It’s also manifestly unlawful in terms of the Lib Dems’ obligations under the Privacy and Electronic Communications (EC Directive) Regulations 2003 (PECR), which require specific consent to have been given before marketing emails can be sent to individuals.

On the lawfulness point I am pleased to say the Information Commissioner’s Office (ICO) agrees with me. Having considered my complaint they have said:

I have reviewed your correspondence and the organisations website, and it appears that their current practices would fail to comply with the requirements of the PECR. This is because consent is not knowingly given, clear and specific….As such, we have written to the organisation to remind them of their obligations under the PECR and ensure that valid consent is obtained from individuals.

Great. I’m glad they agree – casual disregard of PECR seems to be rife throughout politics. As I’ve written recently, the Labour Party, UKIP and Plaid Cymru have also spammed my dedicated email account. But I also asked the ICO to consider taking enforcement action (as is my right under regulation 32 of PECR). Disappointingly, they have declined to do so, saying:

enforcement action is not taken routinely and it is our decision whether to take it. We cannot take enforcement action in every case that is reported to us

It’s also disappointing that they don’t say why this is their decision. I know they cannot take enforcement action in every case reported to them, which is why I requested it in this specific case.

However, I will be interested to see whether the outcome of this case changes the Lib Dems’ approach. Maybe it will, but, as I say, they are by no means the only offenders, and enforcement action by the ICO might just have helped to address this wider problem.

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.


Filed under consent, enforcement, Information Commissioner, marketing, PECR, spam, Uncategorized

The Lib Dems’ digital rights bill – an empty promise?

On the 11th of April the Liberal Democrats announced that they would introduce a “Digital Rights Bill” if they were to form part of a coalition government in the next parliament. Among the measures the bill would contain, they said, would be

Beefed up powers for the Information Commissioner to fine and enforce disciplinary action on government bodies if they breach data protection laws

Legal rights to compensation for consumers when companies make people sign up online to deliberately misleading and illegible terms & conditions

I found this interesting because the Lib Dems have recently shown themselves particularly unconcerned with digital rights contained in ePrivacy laws. Specifically, they have shown a lack of compliance with the requirement at regulation 22 of the Privacy and Electronic Communications (EC Directive) Regulations 2003 (PECR). This regulation forbids the sending of direct marketing by email unless the recipient has notified the sender that she consents to the email being sent. The European directive to which PECR give effect specifies that “consent” should be taken to have been given only by use of

any appropriate method enabling a freely given specific and informed indication of the user’s wishes, including by ticking a box when visiting an Internet website

And the Information Commissioner’s Office (ICO), which regulates PECR, explains in guidance [pdf] that

the person must understand what they are consenting to. Organisations must make sure they clearly and prominently explain exactly what the person is agreeing to, if this is not obvious. Including information in a dense privacy policy or hidden in ‘small print’ which is hard to find, difficult to understand, or rarely read will not be enough to establish informed consent…consent must be a positive expression of choice. It does not necessarily have to be a proactive declaration of consent – for example, consent might sometimes be given by submitting an online form, if there was a clear and prominent statement that this would be taken as agreement and there was the option to opt out. But organisations cannot assume consent from a failure to opt out
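The distinction the guidance draws – a positive opt-in counts, while silence or an untouched pre-ticked box does not – can be sketched as a tiny decision rule. This is a hypothetical illustration of the principle, not the ICO’s actual test:

```python
# A toy model of when PECR-style consent to email marketing is valid,
# based on the ICO guidance quoted above. The function and its inputs
# are invented for illustration only.

def consent_given(clear_statement_shown: bool, user_action: str) -> bool:
    """user_action is one of: 'ticked_box', 'left_preticked_box', 'none'."""
    # Consent needs a clear, prominent explanation of what is agreed to...
    if not clear_statement_shown:
        return False
    # ...and a positive expression of choice by the user. Leaving a
    # pre-ticked box alone is merely a failure to opt out, not consent.
    return user_action == "ticked_box"

# An actively ticked box under a clear statement: valid consent.
print(consent_given(True, "ticked_box"))          # → True
# A pre-ticked box the user never touched: no consent.
print(consent_given(True, "left_preticked_box"))  # → False
# No marketing statement at all: no consent, whatever the user did.
print(consent_given(False, "none"))               # → False
```

On this model, submitting an email address to a petition that says nothing about marketing fails at the first check, before the question of opting in even arises.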

But in July last year I began conducting an experiment. I put my name (actually, typed my email address) to a statement on the Lib Dem website saying

Girls should never be cut. We must end FGM

I gave no consent to the sending of direct email marketing from the Lib Dems, and, indeed, the Lib Dems didn’t even say they would send direct email marketing as a result of my submitting the email address (and, to be clear, the ICO takes the (correct) view [pdf] that promotion of a political party meets the PECR, and Data Protection Act, definition of “marketing”). Yet since October last year they have sent me 23 unsolicited emails constituting direct marketing. I complained directly to the Lib Dems, who told me

we have followed the policies we have set out ion [sic] our privacy policy which follow the guidance we have been given by the ICO

which hardly explains how they feel they have complied with their legal obligations, and I will be raising this as a complaint with the ICO. I could take the route of making a claim under regulation 30 of PECR, but this requires that I must have suffered “damage”.

By way of comparison, around the same time I also submitted my email address, in circumstances in which I was not consenting to future receipt of email marketing, to other major parties. To their credit, none of the Conservatives, the SNP or the Greens has sent any unsolicited marketing. However, Labour have sent 8 emails, Plaid Cymru 10 and UKIP, the worst offenders, 37 (there is little that is more nauseating, by the way, than receiving an unsolicited email from Nigel Farage addressing one as “Friend”).

I rather suspect that, consciously or not, some political parties have decided that the risk of legal or enforcement action (and possibly the apparent ambiguity – although really there is none – about the meaning of “consent”) is so low that it is worth adopting a marketing strategy like this. Maybe that’s a sensible act of political pragmatism. But it stinks, and the Lib Dems’ cavalier approach to ePrivacy compliance makes me completely doubt the validity and sincerity of Nick Clegg’s commitment to

enshrine into law our rights as citizens of this country to privacy, to stop information about us being abused online

And, as Pat Walshe noticed the other day, even the Lib Dems’ own website advert inviting support for their proposed Digital Rights Bill has a pre-ticked box (in non-compliance with ICO guidance) for email updates. One final point: I note that clicking on the link in the first paragraph of this post, to the Lib Dems’ announcement of the proposed Bill, opens up, or attempts to open up, a pdf file of a consultation paper. This might just be a coding error, but it’s an odd, and dodgy, piece of script.



Filed under consent, Data Protection, Information Commissioner, marketing, PECR, spam

A cookie for your health problems

Imagine this. You enter a shop (let’s call it Shop A) to browse, and you look at an item of interest (let’s call it Item Q). While you do so, unbeknown to you, a shop assistant places a sticker on your back, revealing that you looked at this item, and when and where. You leave and a few days later enter another shop, where a shop assistant says “I understand a few days ago you were interested in Item Q, here are some similar items you might be interested in”.

You might initially think “how helpful”, but afterwards you might start to wonder how the second shop knew about your interest, and to think that it’s a bit off that they seemed to have been able to track your movements and interests.

But try this as well. You go to your doctor, because you’re concerned about a medical condition – let’s say you fear you may have a sexually transmitted disease. As you leave, the doctor secretly puts a sticker on your back saying when and where you visited and what you were concerned about. You later visit a pharmacy to buy your lunch. While you queue to pay, an assistant approaches you and says openly “I understand you’ve been making enquiries recently about STDs – here are some ointments we sell”.

The perceptive reader may by now have realised I am clunkily trying to illustrate by analogy how cookies, and particularly tracking cookies, work. We have all come to curse the cookie warning banners we encounter on websites based in Europe, but the law mandating them (or at least mandating the gaining of some sort of consent to receive cookies) was introduced for a reason. As the Article 29 Working Party of European Data Protection Authorities noted in 2011

Many public surveys showed, and continue to show, that the average internet user is not aware that his/her behaviour is being tracked with the help of cookies or other unique identifiers, by whom or for what purpose. This lack of awareness contrasts sharply with the increasing dependence of many European citizens on access to internet for ordinary everyday activities
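By way of illustration, the “sticker on your back” can be modelled in a few lines of code. This is a deliberately simplified sketch – the site names are invented, and real trackers work via HTTP Set-Cookie headers sent on third-party requests – but the linkage mechanism is essentially the same:

```python
# A toy simulation of a third-party tracking cookie: one ad network
# ("the tracker") is embedded on two unrelated sites, and a single
# identifier stored in the browser links visits across both of them.
import uuid

class Tracker:
    """Plays the role of the third-party ad network."""
    def __init__(self):
        self.profiles = {}  # cookie id -> list of (site, item) visits

    def serve(self, cookie, site, item):
        # On first contact, issue a unique identifier to the browser
        # (the equivalent of a Set-Cookie response header).
        if cookie is None:
            cookie = str(uuid.uuid4())
            self.profiles[cookie] = []
        # The browser returns the cookie on every later request, so
        # visits to different sites accumulate in one profile.
        self.profiles[cookie].append((site, item))
        return cookie

tracker = Tracker()
browser_cookie = None  # the "sticker on your back", not yet applied

# Visit Shop A and look at Item Q; the embedded tracker sets its cookie.
browser_cookie = tracker.serve(browser_cookie, "shop-a.example", "Item Q")
# Days later, visit an unrelated site; the same cookie comes back.
browser_cookie = tracker.serve(browser_cookie, "shop-b.example", "Item R")

# The tracker now holds a cross-site browsing history for one person.
print(tracker.profiles[browser_cookie])
# → [('shop-a.example', 'Item Q'), ('shop-b.example', 'Item R')]
```

Neither “shop” ever learns the visitor’s name, but the tracker can still build and exploit a profile – which is precisely the behaviour the user surveys above suggest most people are unaware of.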

The amendments to the 2002 EC Directive, implemented in domestic law by amendment regulations to the Privacy and Electronic Communications (EC Directive) Regulations 2003, aimed to ensure that there was “an adequate level of privacy protection and security of personal data transmitted or processed in connection with the use of electronic communications networks” (recital 63). And Article 5 of the Directive specified that

Member States shall ensure that the storing of information, or the gaining of access to information already stored, in the terminal equipment of a subscriber or user is only allowed on condition that the subscriber or user concerned has given his or her consent, having been provided with clear and comprehensive information, in accordance with Directive 95/46/EC [the 1995 Data Protection Directive], inter alia, about the purposes of the processing

Of course, the requirement that users of electronic communications networks should give consent to the storing of or gaining access to information stored in their terminal equipment (i.e. that they should consent to the serving of cookies) has not been an easy one to implement, and even the Information Commissioner’s Office in 2013 rowed back on attempts to gather explicit consent, claiming that there was now no need because people were more aware of the existence of cookies. But I made what to me was an interesting observation recently when I was asked to advise on a cookie notice for a private company: it appeared to me, as I compared competitors’ sites, that those which had a prominent cookie banner warning actually looked more professional than those that didn’t. So despite my client’s wariness about having a banner, it seemed to me that, ironically, it would actually be of some professional benefit.

I digress.

Just what cookies are and can achieve is brought sharply home in a piece on the Fast Company website, drawing on the findings of a doctoral research student at the University of Pennsylvania. The paper, and the article, describe the use of web analytics, often in the form of information gathered from tracking cookies, for marketing in the health arena in the US. Tim Libert, the paper’s author, discovered that

over 90% of the 80,000 health-related pages he looked at on the Internet exposed user information to third parties. These pages included health information from commercial, nonprofit, educational, and government websites…Although personal data is anonymized from these visits, they still lead to targeted advertisements showing up on user’s computers for health issues, as well as giving advertisers leads (which can be deciphered without too much trouble) that a user has certain health issues and what issues those are

The US lacks, of course, federal laws like PECR and the DPA which seek – if imperfectly – to regulate the use of tracking and other cookies. But given that enforcement of the cookie provisions of PECR is largely non-existent, are there similar risks to the privacy of web users’ health information in the UK?



Filed under consent, cookies, Data Protection, PECR

Attend ICO DP conference, get unsolicited marketing from a hotel…

I greatly enjoyed yesterday’s (2 March 2015) Data Protection Practitioner Conference run by the Information Commissioner’s Office. I was representing NADPO on our stand, and the amount of interest was both gratifying and illustrative of the importance of having a truly representative body for professionals working in the field of information rights. NADPO were at pains – in running our prize draw (winners picked at random on stage by Information Commissioner Christopher Graham) – to make sure we let participants know what would or would not happen with their details. Feedback from delegates about this was also positive, and I’m pleased at least one privacy professional picked up on it.  Therefore the irony of the following events is not lost on me.

I’d stayed overnight on Sunday, in a Macdonald hotel I booked through the agency Expedia. Naturally, I’m not one to encourage the sending to me of direct electronic marketing, and as the unsolicited sending of such marketing is contrary to regulation 22 of the Privacy and Electronic Communications (EC Directive) Regulations 2003 I didn’t expect to receive any, either from the agent or the hotel. Yet yesterday I did receive some, from the hotel group. So I’ve sent them this complaint:

I booked the hotel through your agent, Expedia.co.uk. As a professional working in the field of privacy and data protection I always make sure I opt out of any electronic marketing. Hence, when making my booking, I checked the Expedia box which said

“Check the box if you do not want to receive emails from Expedia with travel deals, special offers, and other information”.

However, I also consulted their privacy policy, which says:

“Expedia.co.uk may share your information with [suppliers] such as hotel, airline, car rental, and activity providers, who fulfill your travel reservations. Throughout Expedia.co.uk, all services provided by a third-party supplier are described as such. We encourage you to review the privacy policies of any third-party travel supplier whose products you purchase through Expedia.co.uk. Please note that these suppliers also may contact you as necessary to obtain additional information about you, facilitate your travel reservation, or respond to a review you may submit.”

I then consulted Macdonald Hotels’ privacy policy, but this seems to relate only to your website, and is silent on the use of clients’ data passed on by an agent.

Accordingly, I cannot be said to have consented to the sending by you to me of electronic marketing. Yet yesterday at 13.07 I received an email saying “Thank you for registering with Macdonald Hotels and Resorts…As a member of our mailing list you will shortly start to receive [further unsolicited electronic marketing].”

Ironically enough, I was in Manchester to attend the annual Data Protection Practitioners’ Conference run by the Information Commissioner’s Office (ICO). As you will be aware, the ICO regulates compliance with the Privacy and Electronic Communications (EC Directive) Regulations 2003 (PECR). Before I raise a complaint with the ICO I would appreciate a) your removing me from any marketing database b) not receiving any further unsolicited marketing, and c) receiving your comments regarding your apparent breach of your legal obligations.

Each instance of unsolicited marketing is at best one of life’s minor irritants, but the frequency with which it happens leads me to worry that some companies treat compliance with their legal obligations at best as a game in which they try to trick customers into agreeing to receive marketing, and at worst as unnecessary. It may be that I received this particular unsolicited marketing from Macdonald Hotels by mistake (although that in itself might raise data protection concerns about the handling and accuracy of customer data) but it happens too often. The media have rightly picked up on the forthcoming changes to PECR which will make it easier for the ICO to take enforcement action over serious contraventions, but, sadly, I don’t see the lower-level, less serious contraventions decreasing.



Filed under consent, Data Protection, Information Commissioner, marketing, PECR

Labour’s “HowManyOfMe” – legitimate use of the electoral register?

Is Labour’s shiny new web widget “HowManyOfMe” compliant with the party’s obligations under electoral and ePrivacy law?

Regulations 102 and 106 of the Representation of the People (England and Wales) Regulations 2001 (as amended)1 mean that registered political parties can apply for a copy of the full electoral register, but they can only supply, disclose or make use of the information therein for “electoral purposes”. As far as I can see “electoral purposes” is nowhere defined, and, accordingly, I suspect it permits relatively broad interpretation, but, nevertheless, it clearly limits the use a political party can make of electoral registration information.

With this in mind, it is worth considering whether the apparent use of such information by the Labour Party, in a new website widget, is a use which can be described as “for electoral purposes”. The widget in question invites people to submit their name (or indeed anyone else’s), email address and postcode and it will tell you how many voters in the country have that name. Thus, I find that there are 393 voters who have the name “Christopher Graham”. The widget then encourages users to register to vote. In small print underneath it says

in case you’re interested, this tool uses an aggregate figure from the electoral register and we’ve taken steps to protect the privacy of individuals

Well, I am interested. I’m interested to know whether this use of the electoral register is purely for electoral purposes. If it is, if its purpose is to encourage people to register to vote, then why does it need an email address? The widget goes on to say

The Labour Party and its elected representatives may contact you about issues we think you may be interested in or with campaign updates. You may unsubscribe at any point. You can see our privacy policy here.

But if they are using the electoral register to encourage people to give up email addresses which may then receive political marketing, surely this is stretching the use of “for electoral purposes” too far? Moreover, and despite the small print privacy notice, and the almost-hidden link to a generic privacy policy, any emails received by individuals will be likely to be sent in contravention of Labour’s obligations under The Privacy and Electronic Communications (EC Directive) Regulations 2003 (PECR), which give effect to the UK’s obligations under Directive 2002/58/EC. This is because regulation 22 of PECR prohibits, in terms, the sending of electronic direct marketing (and promotion of a political party constitutes such marketing) without the prior consent of the recipient. Consent, the Directive tells us, must be “a freely given specific and informed indication of the user’s wishes”.  A vague description, as the widget here gives us, of what may happen if one submits an email address, and a statement about unsubscribing, do not legitimise any subsequent sending of direct marketing.

The email address I used is one I reserve for catching spammers; I’ve not received anything yet, but I expect to do so. I would be prepared to argue that any email I receive cannot be said to relate to the electoral purposes which permit use of the electoral register, and will be sent in contravention of PECR. As I said recently, one of the key battlegrounds in the 2015 general election will be online, and unless action is taken to restrain abuse of people’s personal information, things will get nasty.

1 The legislation.gov.uk website doesn’t provide updated (“consolidated”) versions of secondary legislation, so there’s no point in linking to their version of the regulations.



Filed under consent, Data Protection, marketing, PECR, privacy notice

Online privacy – a general election battleground

It’s becoming increasingly clear that one of the key battlegrounds in the 2015 General Election will be online. The BBC’s Ross Hawkins reports that the Conservatives are spending large amounts each month on Facebook advertising, and Labour and UKIP, while not having the means to spend as much, are ramping up their online campaigning. But, as Hawkins says

the aim is not to persuade people to nod thoughtfully while they stare at a screen. They want consumers of their online media to make donations or, even better, to get their friends’ support or to knock on doors in marginal constituencies…[but] for all the novelties of online marketing, email remains king. Those Tory Facebook invoices show that most of the money was spent encouraging Conservative supporters to hand over their email addresses. Labour and the Conservatives send emails to supporters, and journalists, that appear to come from their front benchers, pleading for donations

I know this well, because in July last year, after growing weary of blogging about questionable compliance with ePrivacy laws by all the major parties and achieving nothing, I set a honey trap: I submitted an email address to the Conservative, Labour, LibDem, Green, UKIP, SNP and Plaid Cymru websites. In each case I was apparently agreeing with a proposition (such as the particularly egregious LibDem FGM example), giving no consent to reuse, and in each case there was no clear privacy notice which accorded with the Information Commissioner’s Office’s Privacy Notices Code of Practice (I do not, and nor does the ICO, at least if one refers to that Code, accept that a generic website privacy policy is sufficient in cases like this). Since then, the fictional, and trusting but naive, Pam Catchers (geddit??!!) has received over 60 emails, from all parties contacted. A lot of them begin, “Friend, …” and exhort Pam to perform various types of activism. Of course, as a fictional character, Pam might have trouble enforcing her rights, or complaining to the ICO, but the fact is that this sort of bad, and illegal, practice, is rife.

To be honest, I thought Pam would receive more than this number of unsolicited emails (but I’m probably more cynical than her). But the point is that each of these emails was sent in breach of the parties’ obligations under the Privacy and Electronic Communications (EC Directive) Regulations 2003 (PECR), which demand that recipients of electronic direct marketing communications must have given explicit consent prior to the sending. By extension, therefore, the parties are also in breach of the Data Protection Act 1998 (DPA), which, in requiring “fair” processing of personal data, makes clear that a valid privacy notice must be given in order to achieve this.

The ICO makes clear that promotion by a political party can constitute direct marketing, and has previously taken enforcement action to try to ensure compliance. It has even produced guidance for parties about their PECR and DPA obligations. This says

In recent years we have investigated complaints about political parties and referendum campaigners using direct marketing, and on occasion we have used our enforcement powers to prevent them doing the same thing again. Failure to comply with an enforcement notice is a criminal offence.

But by “recent” I think they are referring to action taken at least six years ago.

A data controller’s compliance, or lack thereof, with data protection laws in one area is likely to be indicative of its attitude to compliance elsewhere. Surely the time has come for the ICO at least to remind politicians that online privacy rights are not to be treated with contempt?



Filed under consent, Data Protection, enforcement, Information Commissioner, marketing, PECR, privacy notice

Samaritans cannot deny being data controller for #samaritansradar


So, Samaritans continue to support the #samaritansradar app, about which I, and many others, have already written. A large number of people suffering from, or with experience of, mental health problems have pleaded with Samaritans to withdraw the app, which monitors the tweets of the people one follows on twitter, applies an algorithm to identify tweets from potentially vulnerable people, and emails that information to the app user, all without the knowledge of the person involved. As Paul Bernal has eloquently said, this is not really an issue about privacy, and nor is it about data protection – it is about the threat many vulnerable people feel from the presence of the app. Nonetheless, privacy and data protection law, in part, are about the rights of the vulnerable; last night (4 November) Samaritans issued their latest sparse statement, part of which dealt with data protection:

We have taken the time to seek further legal advice on the issues raised. Our continuing view is that Samaritans Radar is compliant with the relevant data protection legislation for the following reasons:

o   We believe that Samaritans are neither the data controller or data processor of the information passing through the app

o   All information identified by the app is available on Twitter, in accordance with Twitter’s Ts&Cs (link here). The app does not process private tweets.

o   If Samaritans were deemed to be a data controller, given that vital interests are at stake, exemptions from data protection law are likely to apply

It is interesting that there is reference here to “further” legal advice: none of the previous statements from Samaritans had given any indication that legal or data protection advice had been sought prior to the launch of the app. It would be enormously helpful to discussion of the issue if Samaritans actually disclosed their advice, but I doubt very much that they will do so. Nonetheless, their position appears to be at odds with the legal authorities.

In May this year the Court of Justice of the European Union (CJEU) gave its ruling in the Google Spain case. The most widely covered aspect of that case was, of course, the extent of a right to be forgotten – a right to require Google to remove certain search results in specified cases. But the CJEU was also asked to rule on the question of whether a search engine, such as Google, was a data controller in circumstances in which it engages in the indexing of web pages. Before the court Google argued that

the operator of a search engine cannot be regarded as a ‘controller’ in respect of that processing since it has no knowledge of those data and does not exercise control over the data

and this would appear to be a similar position to that adopted by Samaritans in the first bullet point above. However, the CJEU dismissed Google’s argument, holding that

the operator of a search engine ‘collects’ such data which it subsequently ‘retrieves’, ‘records’ and ‘organises’ within the framework of its indexing programmes, ‘stores’ on its servers and, as the case may be, ‘discloses’ and ‘makes available’ to its users in the form of lists of search results…It is the search engine operator which determines the purposes and means of that activity and thus of the processing of personal data that it itself carries out within the framework of [the activity at issue] and which must, consequently, be regarded as the ‘controller’ in respect of that processing

Inasmuch as I understand how it works, I would submit that #samaritansradar, while not a search engine as such, collects data (personal data), records and organises it, stores it on servers and discloses it to its users in the form of a result. The app has been developed by and launched by Samaritans, it carries their name and seeks to further their aims: it is clearly “their” app, and they are, as clearly, a data controller with attendant legal responsibilities and liabilities. In further proof of this, Samaritans introduced, after the app launch and in response to outcry, a “whitelist” of twitter users who have specifically informed Samaritans that they do not want their tweets to be monitored (update on 30 October). If Samaritans are effectively saying they have no role in the processing of the data, how on earth would such a whitelist be expected to work?

And it’s interesting to consider the apparent alternative view that they are implicitly putting forward. If they are not the data controller, then who is? The answer must be the users who download and run the app, who would attract all the legal obligations that go with being a data controller. The Samaritans appear to want to back out of the room, leaving app users to answer all the awkward questions.

Also very interesting is that Samaritans clearly accept that others might have a different view to theirs on the issue of controllership; they suggest that if they were held to be a data controller they would avail themselves of “exemptions” in data protection law relating to “vital interests” to legitimise their activities. One presumes this to be a reference to certain conditions in Schedules 2 and 3 of the Data Protection Act 1998 (DPA). Those schedules contain conditions which must be met, in order for the processing of, respectively, personal data and sensitive personal data, to be fair and lawful. As we are here clearly talking about sensitive personal data (personal data relating to someone’s physical or mental health is classed as sensitive), let us look at the relevant condition in Schedule 3:

The processing is necessary—
(a) in order to protect the vital interests of the data subject or another person, in a case where—
(i) consent cannot be given by or on behalf of the data subject, or
(ii) the data controller cannot reasonably be expected to obtain the consent of the data subject, or
(b) in order to protect the vital interests of another person, in a case where consent by or on behalf of the data subject has been unreasonably withheld

Samaritans’ alternative defence founders on the first four words: in what way can this processing be necessary to protect vital interests? The Information Commissioner’s Office explains that this condition only applies

in cases of life or death, such as where an individual’s medical history is disclosed to a hospital’s A&E department treating them after a serious road accident

The evidence suggests the app is actually delivering a very large number of false positives (as it appears to be based on a crude keyword algorithm, this is only to be expected). Given that, and given that Samaritans have – expressly – no control over what happens once the app notifies a user of a concerning tweet, it is absolutely preposterous to suggest that the processing is necessary to protect people’s vital interests. Moreover, the condition above can only be relied on where consent cannot be given by the data subject, or the controller cannot reasonably be expected to obtain consent. Nothing prevents Samaritans from operating an app which would do the same thing (flag a tweet of concern) but on a consent model, whereby someone agrees that their tweets will be monitored in that way. Indeed, such a model would fit better with Samaritans’ stated aim of allowing people to “lead the conversation at their own pace”. Consent clearly could be sought for this processing; Samaritans have simply failed to design an app which seeks it.
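Samaritans have not published how the app decides which tweets to flag, so the following Python sketch is purely illustrative – the phrase list and function are my own invention, not the app’s code. It shows why context-blind keyword matching inevitably generates false positives:

```python
# Illustrative only: a context-blind keyword matcher of the kind the
# app's behaviour suggests. The phrases below are hypothetical.
CONCERNING_PHRASES = {"hate myself", "can't go on", "depressed"}

def flags_tweet(text: str) -> bool:
    """Flag a tweet if it contains any listed phrase, with no
    analysis of context, irony, quotation or hyperbole."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in CONCERNING_PHRASES)

# A genuine expression of distress is flagged:
assert flags_tweet("I hate myself and I don't know who to talk to")
# But so are hyperbole and jokes - false positives which the app
# then pushes to followers as if they were calls for help:
assert flags_tweet("So depressed that my team lost again")
assert flags_tweet("I can't go on watching this awful film")
```

Distinguishing these cases reliably is a hard natural-language problem; a consent-based design would at least mean that any false positive was one the tweeter had agreed to risk.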

The Information Commissioner’s Office is said to be looking into the issues raised by Samaritans’ app. It may be that only legal enforcement action will see it removed – as I think it should be. But it would be extremely sad if it came to that. The app should be removed voluntarily by Samaritans, so they can rethink, re-programme, take full legal advice and – most importantly – listen to the voices of the most vulnerable, who feel so threatened and betrayed by it.

1On a strict and nuanced analysis of data protection law, users of the app probably are data controllers, acting as joint ones with Samaritans. However, given the regulatory approach of the Information Commissioner, they would probably be able to avail themselves of the general exemption from the whole of the DPA for processing which is purely domestic (although even that is arguably wrong). These are matters for another blog post, however, and the fact that users might be held to be data controllers doesn’t alter the fact that Samaritans are, and in a much clearer way.


Filed under consent, Data Protection, Information Commissioner, Privacy, social media

Samaritans Radar – serious privacy concerns raised

UPDATE: 31 October

It appears Samaritans have silently tweaked their FAQs (so the text near the foot of this post no longer appears). They now say tweets will only be retained by the app for seven (as opposed to thirty) days, and have removed the words saying the app will retain a “Count of flags against a Twitter Users Friends ID”. Joe Ferns said on Twitter that the inclusion of this in the original FAQs was “a throw back to a stage of the development where that was being considered”. Samaritans also say “The only people who will be able to see the alerts, and the tweets flagged in them, are followers who would have received these Tweets in their current feed already”, but this does not absolve them of their data controller status: a controller does not need to access data in order to determine the purposes for which, and the manner in which, personal data are processed, and Samaritans are still doing this. Moreover, this changing of the FAQs, with no apparent change to the position that those whose tweets are processed get no fair processing notice whatsoever, makes me more concerned that this app was released without adequate assessment of its impact on people’s privacy.

END UPDATE

UPDATE: 30 October

Susan Hall has written a brilliant piece expanding on mine below, and she points out that section 12 of the Data Protection Act 1998 in terms allows a data subject to send a notice to a data controller requiring it to ensure no automated decisions are taken by processing their personal data for the purposes of evaluating matters such as their conduct. It seems to me that is precisely what “Samaritans Radar” does. So I’ve sent the following to Samaritans

Dear Samaritans

This is a notice pursuant to section 12 Data Protection Act 1998. Please ensure that no decision is taken by you or on your behalf (for instance by the “Samaritans Radar” app) based solely on the processing by automatic means of my personal data for the purpose of evaluating my conduct.

Thanks, Jon Baines @bainesy1969

I’ll post here about any developments.

END UPDATE

Samaritans have launched a Twitter App “to help identify vulnerable people”. I have only ever had words of praise and awe about Samaritans and their volunteers, but this time I think they may have misjudged the effect, and the potential legal implications of “Samaritans Radar”. Regarding the effect, this post from former volunteer @elphiemcdork is excellent:

How likely are you to tweet about your mental health problems if you know some of your followers would be alerted every time you did? Do you know all your followers? Personally? Are they all friends? What if your stalker was a follower? How would you feel knowing your every 3am mental health crisis tweet was being flagged to people who really don’t have your best interests at heart, to put it mildly? In this respect, this app is dangerous. It is terrifying to think that anyone can monitor your tweets, especially the ones that disclose you may be very vulnerable at that time

As for the legal implications, it appears that Samaritans may be processing sensitive personal data in circumstances where there may be no legal basis to do so. And some rather worrying misconceptions have accompanied the app launch. The first and most concerning of these is in the FAQs prepared for the media. In reply to the question “Isn’t there a data privacy issue here? Is Samaritans Radar spying on people?” the following answer is given

All the data used in the app is public, so user privacy is not an issue. Samaritans Radar analyses the Tweets of the people you follow, which are public Tweets. It does not look at private Tweets

The idea that, because something is in the public domain, it cannot engage privacy issues is a horribly simplistic one, and if that constitutes the impact assessment undertaken, then serious questions have to be asked. Moreover, it doesn’t begin to consider the data protection issues: personal data is personal data, whether it’s in the public domain or not. A tweet from an identified tweeter is inescapably the personal data of that person, and, if it is, or appears to be, about the person’s physical or mental health, then it is sensitive personal data, afforded a higher level of protection under the Data Protection Act 1998 (DPA).

It would appear that Samaritans, as the legal person who determines the purposes for which, and the manner in which, the personal data are processed (i.e. they have produced an app which identifies a tweet on the basis of words, or sequences of words, and pushes it to another person), are acting as a data controller. As such, any processing has to be in accordance with their obligation to abide by the data protection principles in Schedule One of the DPA. The first principle says that personal data must be processed fairly and lawfully, and that a condition for processing contained in Schedule Two (and, for sensitive personal data, Schedule Three as well) must be met. Looking only at Schedule Three, I struggle to see the condition which permits the app to identify a tweet, decide that it is from a potentially suicidal person and send it as such to a third party. The one condition which might apply, the fifth (“The information contained in the personal data has been made public as a result of steps deliberately taken by the data subject”), is undercut by the fact that the data in question is not just the public tweet, but the “package” of that tweet with the fact that the app (not the tweeter) has identified it as a potential call for help.

The reliance on “all the data used in the app is public, so user privacy is not an issue” has carried through in messages sent on Twitter by Samaritans’ Director of Policy, Research and Development, Joe Ferns, in response to people raising concerns, such as

existing Twitter search means anyone can search tweets unless you have set to private. #SamaritansRadar is like an automated search

Again, this misses the point that it is not just “anyone” doing a search on Twitter: it is an app in Samaritans’ name which specifically identifies (in an automated way) certain tweets as of concern, and pushes them to third parties. Even more concerning was Mr Ferns’ response to someone asking if there was a way to opt out of having their tweets scanned by the app software:

if you use Twitter settings to mark your tweets private #SamaritansRadar will not see them

What he is actually suggesting there is that to avoid what some people clearly feel are intrusive actions they should lock their account and make it private. And, of course, going back to @elphiemcdork’s points, it is hard to avoid the conclusion that those who will do this might be some of the most vulnerable people.

A further concern is raised (one which confirms the data controller point above) about retention and reuse of data. The media FAQ states

Where will all the data be stored? Will it be secure? The data we will store is as follows:
• Twitter User ID – a unique ID that is associated with a Twitter account
• All Twitter User Friends ID’s – The same as above but for all the users friends that they follow
• Any flagged Tweets – This is the data associated with the Tweet, we will store the raw data for the Tweet as well
• Count of flags against a Twitter Users Friends ID – We store a count of flags against an individual User
• To prevent the Database growing exponentially we will remove flagged Tweets that are older than 30 days.

So it appears that Samaritans will be amassing data on unwitting Twitter users, and in effect profiling them. This sort of data is terrifically sensitive, yet no indication is given of where it will be stored, or of the security measures in place to protect it.
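To see why this amounts to profiling, here is a minimal Python sketch of the record structure the FAQ describes. The class and field names are my own reconstruction from the FAQ wording, not Samaritans’ actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class FlaggedTweet:
    tweet_id: int
    raw_text: str  # the FAQ: "we will store the raw data for the Tweet as well"

@dataclass
class MonitoredUser:
    twitter_user_id: int   # "a unique ID that is associated with a Twitter account"
    flag_count: int = 0    # "a count of flags against an individual User"
    flagged_tweets: list = field(default_factory=list)

    def record_flag(self, tweet: FlaggedTweet) -> None:
        # Each flag increments a persistent tally against the user:
        # in effect a crude, accumulating mental-health risk profile.
        self.flagged_tweets.append(tweet)
        self.flag_count += 1

user = MonitoredUser(twitter_user_id=12345)
user.record_flag(FlaggedTweet(tweet_id=1, raw_text="flagged tweet text"))
assert user.flag_count == 1
```

However the real schema is arranged, a running count of “concerning” tweets keyed to a named Twitter account is exactly the kind of record the DPA treats as sensitive personal data.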

The Information Commissioner’s Office recently produced some good guidance for app developers on Privacy in Mobile Apps. The guidance commends the use of Privacy Impact Assessments when developing apps. I would be interested to know if one was undertaken for Samaritans Radar, and, if so, how it dealt with the serious concerns that have been raised by many people since its launch.

This post was amended to take into account the observations in the comments by Susan Hall, to whom I give thanks. I have also since seen a number of excellent blog posts dealing with wider concerns. I commend, in particular, this by Adrian Short and this by @latentexistence


Filed under consent, Data Protection, Information Commissioner, Privacy, social media

The Crown Estate and behavioural advertising

A new app for Regent Street shoppers will deliver targeted behavioural advertising – is it processing personal data?

My interest was piqued by a story in the Telegraph that

Regent Street is set to become the first shopping street in Europe to pioneer a mobile phone app which delivers personalised content to shoppers during their visit

Although this sounds like my idea of hell, it will no doubt appeal to some people. It appears that a series of Bluetooth beacons will deliver mobile content (for which, read “targeted behavioural advertising”) to the devices of users who have installed the Regent Street app. Users will indicate their shopping preferences, and a profile of them will be built by the app.

Electronic direct marketing in the UK is ordinarily subject to compliance with The Privacy and Electronic Communications (EC Directive) Regulations 2003 (“PECR”). However, the definition of “electronic mail” in PECR is “any text, voice, sound or image message sent over a public electronic communications network or in the recipient’s terminal equipment until it is collected by the recipient and includes messages sent using a short message service”. In 2007 the Information Commissioner, upon receipt of advice, changed his previous stance that Bluetooth marketing would be caught by PECR, to one under which it would not be caught, because Bluetooth does not involve a “public electronic communications network”. Nonetheless, general data protection law relating to consent to direct marketing will still apply, and the Direct Marketing Association says

Although Bluetooth is not considered to fall within the definition of electronic mail under the current PECR, in practice you should consider it to fall within the definition and obtain positive consent before using it

This reference to “positive consent” reflects the definition of consent in the Data Protection Directive, which says that it is

any freely given specific and informed indication of his wishes by which the data subject signifies his agreement to personal data relating to him being processed

And that word “informed” is where I start to have a possible problem with this app. Ever one for thoroughness, I decided to download it, to see what sort of privacy information it provided. There wasn’t much, but in the Terms and Conditions (which don’t appear to be viewable until you download the app) it did say

The App will create a profile for you, known as an autoGraph™, based on information provided by you using the App. You will not be asked for any personal information (such as an email address or phone number) and your profile will not be shared with third parties

autograph (don’t forget the™) is software which, in its words “lets people realise their interests, helping marketers drive response rates”, and it does so by profiling its users

In under one minute without knowing your name, email address or any personally identifiable information, autograph can figure out 5500 dimensions about you – age, income, likes and dislikes – at over 90% accuracy, allowing businesses to serve what matters to you – offers, programs, music… almost anything

Privacy types might notice the jarring words in that blurb. Apparently the software can quickly “figure out” thousands of potential identifiers about a user, without knowing “any personally identifiable information”. To me, that’s effectively saying “we will create a personally identifiable profile of you, without using any personally identifiable information”. The fact of the matter is that people’s likes, dislikes, preferences, choices etc. (and does this app capture device information, such as the IMEI?) can all be used to build up a picture which renders them identifiable. It is trite law that “personal data” is data which relate to a living individual who can be identified from those data, or from those data and other information which is in the possession of, or is likely to come into the possession of, the data controller. The Article 29 Working Party (made up of representatives from the data protection authorities of each EU member state) delivered an Opinion in 2010 on online behavioural advertising which stated that

behavioural advertising is based on the use of identifiers that enable the creation of very detailed user profiles which, in most cases, will be deemed personal data
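The arithmetic of re-identification supports the Working Party’s view. A short sketch (the figures are illustrative, not autograph’s actual dimensions) shows how quickly combinations of apparently anonymous attributes out-number the population, at which point most profiles correspond to a single individual:

```python
# Illustrative only: with d attributes each taking k values there are
# k**d possible profiles. Once that far exceeds the population, most
# profiles are unique - "non-identifiable" data becomes identifying.

def possible_profiles(values_per_attribute: int, attributes: int) -> int:
    return values_per_attribute ** attributes

UK_POPULATION = 64_000_000  # rough 2014 figure

# Nine attributes with ten possible values each give a billion combinations:
assert possible_profiles(10, 9) > UK_POPULATION
# Even 27 yes/no attributes exceed the UK population (2**27 is about 134m) -
# and autograph claims "5500 dimensions":
assert possible_profiles(2, 27) > UK_POPULATION
```

The numbers are hypothetical, but the point stands: identifiability is a property of the combination of attributes, not of any single “personally identifiable” field.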

If this app is, indeed, processing personal data, then I would suggest that the limited Terms and Conditions (which users are not even pointed to when they download the app, let alone invited to agree to) are inadequate to mean that a user is freely giving specific and informed consent to the processing. And if the app is processing personal data to deliver electronic marketing, failure to comply with PECR might not matter, but failure to comply with the Data Protection Act 1998 brings potential liability to legal claims and enforcement action.

The Information Commissioner last year produced good guidance on Privacy in Mobile Apps which states that

Users of your app must be properly informed about what will happen to their personal data if they install and use the app. This is part of Principle 1 in the DPA which states that “Personal data shall be processed fairly and lawfully”. For processing to be fair, the user must have suitable information about the processing and they must be told about the purposes

The relevant data controller for Regent Street Online happens to be The Crown Estate. On the day that the Queen sent her first tweet, it is interesting to consider the extent to which her own property company are in compliance with their obligations under privacy laws.

This post has been edited as a result of comments on the original, which highlighted that PECR does not, in strict terms, apply to Bluetooth marketing


Filed under consent, Data Protection, Directive 95/46/EC, Information Commissioner, marketing, PECR, Privacy, tracking

Brooks Newmark, the press, and “the other woman”

UPDATE: 30.09.14 Sunday Mirror editor Lloyd Embley is reported by the BBC and other media outlets to have apologised for the use of the women’s photos (it transpires that two women’s images were appropriated), saying

We thought that pictures used by the investigation were posed by models, but we now know that some real pictures were used. At no point has the Sunday Mirror published any of these images, but we would like to apologise to the women involved for their use in the investigation

What I think is interesting here is the implicit admission that (consenting) models could have been used in the fake profiles. Does this mean, therefore, that the processing of the (non-consenting) women’s personal data was not done in the reasonable belief that it was in the public interest?

Finally, I think it’s pretty shoddy that former Culture Secretary Maria Miller resorts to victim-blaming, and missing the point, when she is reported to have said that the story “showed why people had to be very careful about the sorts of images they took of themselves and put on the internet”

END UPDATE.

With most sex scandals involving politicians, there is “the other person”. For every Profumo, a Keeler;  for every Mellor, a de Sancha; for every Clinton, a Lewinsky. More often than not the rights and dignity of these others are trampled in the rush to revel in outrage at the politicians’ behaviour. But in the latest, rather tedious, such scandal, the person whose rights have been trampled was not even “the other person”, because there was no other person. Rather, it was a Swedish woman* whose image was appropriated by a journalist without her permission or even her knowledge. This raises the question of whether such use, by the journalist, and the Sunday Mirror, which ran the exposé, was in accordance with their obligations under data protection and other privacy laws.

The story run by the Sunday Mirror told of how a freelance journalist set up a fake social media profile, purportedly of a young PR girl called Sophie with a rather implausible interest in middle-aged Tory MPs. He apparently managed to snare the Minister for Civil Society and married father of five, Brooks Newmark, and encourage him into sending explicit photographs of himself. The result was that the newspaper got a lurid scoop, and the Minister subsequently resigned. Questions are being asked about the ethics of the journalism involved, and there are suggestions that this could be the first difficult test for IPSO, the new Independent Press Standards Organisation.

But for me much the most unpleasant part of this unpleasant story was that the journalist appears to have decided to attach to the fake Twitter profile the image of a Swedish woman. It’s not clear where he got this from, but it is understood that the same image had already appeared on several fake Facebook accounts (it is not suggested, I think, that the same journalist was responsible for those accounts). The woman is reported to be distressed at the appropriation:

It feels really unpleasant…I have received lot of emails, text messages and phone calls from various countries on this today. It feels unreal…I do not want to be exploited in this way and someone has used my image like this feels really awful, both for me and the others involved in this. [Google translation of original Swedish]

Under European and domestic law the image of an identifiable individual is their personal data. Anyone “processing” such data as a data controller (“the person who (either alone or jointly or in common with other persons) determines the purposes for which and the manner in which any personal data are, or are to be, processed”) has to do so in accordance with the law. Such processing as happened here, both by the freelance journalist, when setting up and operating the social media account(s), and by the Sunday Mirror, in publishing the story, is covered by the UK Data Protection Act 1998 (DPA). This will be the case even though the person whose image was appropriated is in Sweden. The DPA requires, among other things, that processing of personal data be “fair and lawful”. It affords aggrieved individuals the right to bring civil claims for compensation for damage and distress arising from contraventions of data controllers’ obligations under the DPA. It also affords them the right to ask the Information Commissioner’s Office (ICO) for an assessment of the likelihood (or not) that processing was in compliance with the DPA.

However, section 32 of the DPA also gives journalism a very broad exemption from almost all of the Act, if the processing is undertaken with a view to publication, and the data controller reasonably believes that publication would be in the public interest and that compliance with the DPA would be incompatible with the purposes of journalism. As the ICO says

The scope of the exemption is very broad. It can disapply almost all of the DPA’s provisions, and gives the media a significant leeway to decide for themselves what is in the public interest

The two data controllers here (the freelancer and the paper) would presumably have little problem satisfying a court, or the ICO, that when it came to the processing of Brooks Newmark’s personal data, they acted in the reasonable belief that the public interest justified the processing. But one wonders to what extent they even considered the processing of (and associated intrusion into the private life of) the Swedish woman whose image was appropriated. Supposing they didn’t even consider this processing – could they really say that they reasonably believed it to have been in the public interest?

These are complex questions, and the breadth and ambit of the section 32 exemption are likely to be tested in litigation between the mining and minerals company BSG and the campaigning group Global Witness (currently stalled/being considered at the ICO). But even if a claim or complaint under the DPA would be a tricky one to make, there are other legal issues raised. Perhaps in part because of the breadth of the section 32 DPA exemption (and perhaps because of the low chance of significant damages under the DPA), claims of press intrusion into private lives are more commonly brought under the cause of action of “misuse of private information”, confirmed – it would seem – as a tort in the ruling of Mr Justice Tugendhat in Vidal Hall and Ors v Google Inc [2014] EWHC 13 (QB) earlier this year. Damage awards for successful claims in misuse of private information have been known to be in the tens of thousands of pounds – most notably recently an award of £10,000 for Paul Weller’s children, after photographs taken covertly and without consent had been published in the Mail Online.

IPSO expects journalists to abide by the Editors’ Code, Clause 3 of which says

i) Everyone is entitled to respect for his or her private and family life, home, health and correspondence, including digital communications.

ii) Editors will be expected to justify intrusions into any individual’s private life without consent. Account will be taken of the complainant’s own public disclosures of information

and the ICO will take this Code into account when considering complaints about journalistic processing of personal data. One notes that “account will be taken of the complainant’s own public disclosures of information”, but one hopes that this would not be seen to justify the unfair and unethical appropriation of images found elsewhere on the internet.

*I’ve deliberately, although rather pointlessly – given their proliferation in other media – avoided naming the woman in question, or posting her photograph


Filed under Confidentiality, consent, Data Protection, Information Commissioner, journalism, Privacy, social media