Category Archives: Privacy

Should victims of “revenge porn” be granted anonymity?

I got into an interesting twitter discussion a few days ago with a journalist who had run a story* about a woman convicted under the Malicious Communications Act 1988 (MCA) for uploading a sex tape involving a former friend of hers. The story named the offender, but also the victim, and I asked Luke Traynor, the Mirror journalist, whether he had considered not naming the latter, who was the victim of what I described as a “sexual crime”.  To his credit, Luke replied, saying that he’d “Checked the law, and she’s not a sexual crime victim, but a victim of malicious communication”.

I think Luke is partly correct – a victim of a section 1 MCA offence is not classed as a victim of a specified sexual offence pursuant to section 2 of the Sexual Offences (Amendment) Act 1992, and is not, therefore, automatically granted lifetime anonymity from the press under section 1. This is the case even where – as here – the crime was a targeted attempt to embarrass or damage the victim on the basis of their sexual behaviour. The Mirror even described this case as one of “Revenge Porn” and, indeed, moves are currently being made to create a specific offence of disclosing private sexual photographs and films with intent to cause distress (clause 33 of the Criminal Justice and Courts Bill refers). If that Bill is passed, I would argue that serious thought should be given to awarding anonymity to victims of this offence.

But the mere fact that statutory anonymity was not available to the victim of the offence reported by the Mirror does not mean that it was right to name her, and (as you might expect from me) I think that data protection law is in play. Information relating to an identifiable individual’s sexual life is her sensitive personal data, afforded particular protection under the Data Protection Directive 95/46 and the UK Data Protection Act 1998 (DPA) to which it gives domestic effect. Publication of sensitive personal data without one of the conditions in Schedule 3 of the DPA being met (and I cannot see which would be met in this instance) is as a general rule unlawful. There is, though, at section 32 of the DPA, as I have written about recently, an effective exemption from most of the Act for personal data processed only for the purposes of journalism. I suspect The Mirror, or any other media outlet naming the victim in this case, would claim this exemption, but it is important to note that, as broad as the exemption is, it can only be claimed if

the data controller reasonably believes that, having regard in particular to the special importance of the public interest in freedom of expression, publication would be in the public interest, and…the data controller reasonably believes that, in all the circumstances, compliance with that provision is incompatible with [journalism]

I invited Luke to explain whether he thought that publication of the victim’s name was in the public interest, but his reply

It was said in a public court, in accordance with the law, which takes into account ethics and public interest

did not really deal with the section 32 point – just because something was said in public court it does not mean that it is in the public interest to publish it. And unless Luke (or, rather, the Mirror, as data controller) reasonably believed that it was so, the exemption falls away.

Of course, in the absence of any complaint from the individual, all of this might seem otiose. But I think it raises further important issues about the extent of the section 32 exemption, as well as whether there should be some clearer right to privacy for victims of certain types of communications offences.

And, as Tim Turner pointed out, this sort of story shows why some might want to exercise a “right to be forgotten” – if unnecessary and unfair information is published about them on the internet, can some people be blamed for wanting it removed, or made less prominent?

*I have avoided linking directly to the article in question for reasons which should be obvious, given the content of this post. However, it is not difficult to find. That, of course, is the problem. 

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.

Filed under communications offence, Data Protection, Privacy

Watching the detective

The ICO might be mostly powerless to take action against the operators of the Russian web site streaming unsecured web cams, but the non-domestic users of the web cams could be vulnerable to enforcement action

The Information Commissioner’s Office (ICO) warned yesterday of the dangers of failing to secure web cams which are connected to the internet. This was on the back of stories about a Russian-based web site which aggregates feeds from thousands of compromised cameras worldwide.

This site was drawn to my attention a few weeks ago, and, although I tweeted obliquely about it, I thought it best not to identify it because of the harm it could potentially cause. However, although most news outlets didn’t identify the site, the cat is now, as they say, out of the bag. No doubt this is why the ICO chose to issue sensible guidance on network security in its blog post.

I also noticed that the Information Commissioner himself, Christopher Graham, rightly pointed to the difficulties in shutting down the site, and the fact that it is users’ responsibility to secure their web cams:

It is not within my jurisdiction, it is not within the European Union, it is Russia.

I will do what I can but don’t wait for me to have sorted this out.

This is, of course, true, and domestic users of web cams would do well to note the advice. Moreover, this is just the latest of these aggregator sites to appear. But news reports suggested that some of the 500-odd (or was it 2000-odd?) feeds on the site from the UK were from cameras of businesses or other non-domestic users (I saw a screenshot, for instance, of a feed from a pizza takeaway). Those users, if their web cams are capturing images of identifiable individuals, are processing personal data in the role of a data controller. And they can’t claim the exemption in the Data Protection Act 1998 (DPA) that applies to processing for purely domestic purposes. They must, therefore, comply with the seventh data protection principle, which requires them to take appropriate measures to safeguard against unauthorised or unlawful processing of personal data. Allowing one’s web cam to be compromised and its feed streamed on a Russian website is a pretty good indication that one is not complying with the seventh principle. Serious contraventions of the obligation to comply with the data protection principles can, of course, lead to ICO enforcement action, such as monetary penalty notices, to a maximum of £500,000.

The ICO is not, therefore, completely powerless here. Arguably it should be (maybe it is?) looking at the feeds on the site to determine which are from non-domestic premises, and looking to take appropriate enforcement action against them. So to that extent, one is rather watching Mr Graham, to see if he can sort this out.


Filed under Data Protection, Information Commissioner, Privacy

Samaritans cannot deny being data controller for #samaritansradar

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.

So, Samaritans continue to support the #samaritansradar app, about which I, and many others, have already written. A large number of people suffering from, or with experience of, mental health problems have pleaded with Samaritans to withdraw the app, which monitors the tweets of the people one follows on twitter, applies an algorithm to identify tweets from potentially vulnerable people, and emails that information to the app user, all without the knowledge of the person involved. As Paul Bernal has eloquently said, this is not really an issue about privacy, nor is it about data protection – it is about the threat many vulnerable people feel from the presence of the app. Nonetheless, privacy and data protection law are, in part, about the rights of the vulnerable; last night (4 November) Samaritans issued their latest sparse statement, part of which dealt with data protection:

We have taken the time to seek further legal advice on the issues raised. Our continuing view is that Samaritans Radar is compliant with the relevant data protection legislation for the following reasons:

o   We believe that Samaritans are neither the data controller or data processor of the information passing through the app

o   All information identified by the app is available on Twitter, in accordance with Twitter’s Ts&Cs (link here). The app does not process private tweets.

o   If Samaritans were deemed to be a data controller, given that vital interests are at stake, exemptions from data protection law are likely to apply

It is interesting that there is reference here to “further” legal advice: none of the previous statements from Samaritans had given any indication that legal or data protection advice had been sought prior to the launch of the app. It would be enormously helpful to discussion of the issue if Samaritans actually disclosed their advice, but I doubt very much that they will do so. Nonetheless, their position appears to be at odds with the legal authorities.

In May this year the Court of Justice of the European Union (CJEU) gave its ruling in the Google Spain case. The most widely covered aspect of that case was, of course, the extent of a right to be forgotten – a right to require Google to remove certain search results in specified cases. But the CJEU was also asked to rule on the question of whether a search engine, such as Google, was a data controller in circumstances in which it engages in the indexing of web pages. Before the court Google argued that

the operator of a search engine cannot be regarded as a ‘controller’ in respect of that processing since it has no knowledge of those data and does not exercise control over the data

and this would appear to be a similar position to that adopted by Samaritans in the first bullet point above. However, the CJEU dismissed Google’s argument, holding that

the operator of a search engine ‘collects’ such data which it subsequently ‘retrieves’, ‘records’ and ‘organises’ within the framework of its indexing programmes, ‘stores’ on its servers and, as the case may be, ‘discloses’ and ‘makes available’ to its users in the form of lists of search results…It is the search engine operator which determines the purposes and means of that activity and thus of the processing of personal data that it itself carries out within the framework of [the activity at issue] and which must, consequently, be regarded as the ‘controller’ in respect of that processing

Inasmuch as I understand how it works, I would submit that #samaritansradar, while not a search engine as such, collects data (personal data), records and organises it, stores it on servers and discloses it to its users in the form of a result. The app has been developed and launched by Samaritans, it carries their name and seeks to further their aims: it is clearly “their” app, and they are, as clearly, a data controller with attendant legal responsibilities and liabilities. In further proof of this, Samaritans introduced, after the app launch and in response to outcry, a “whitelist” of twitter users who have specifically informed Samaritans that they do not want their tweets to be monitored (update on 30 October). If Samaritans are effectively saying they have no role in the processing of the data, how on earth would such a whitelist be expected to work?
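The answer, surely, is that it can only work if the app checks every tweet’s author against an opt-out list compiled and held by Samaritans before the algorithm is applied – something like this minimal sketch (my own illustration of the mechanics, not Samaritans’ actual code, and the IDs are invented):

# Minimal sketch of how an opt-out "whitelist" must work -- my own
# illustration, not Samaritans' code; the user IDs are invented.
OPT_OUT = {"1234567890", "2345678901"}  # Twitter user IDs held by Samaritans

def should_process(author_id: str) -> bool:
    # Consulting a list compiled and held centrally, before applying the
    # algorithm, is itself a decision about the manner of processing.
    return author_id not in OPT_OUT

Whoever compiles that list, and decides that it is consulted, is determining the purposes and manner of the processing – which is precisely the statutory definition of a data controller.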

And it’s interesting to consider the apparent alternative view that they are implicitly putting forward. If they are not data controller, then who is? The answer must be the users who download and run the app, who would attract all the legal obligations that go with being a data controller. The Samaritans appear to want to back out of the room, leaving app users to answer all the awkward questions.1

Also very interesting is that Samaritans clearly accept that others might have a different view to theirs on the issue of controllership; they suggest that if they were held to be a data controller they would avail themselves of “exemptions” in data protection law relating to “vital interests” to legitimise their activities. One presumes this to be a reference to certain conditions in Schedules 2 and 3 of the Data Protection Act 1998 (DPA). Those schedules contain conditions which must be met in order for the processing of, respectively, personal data and sensitive personal data to be fair and lawful. As we are here clearly talking about sensitive personal data (personal data relating to someone’s physical or mental health is classed as sensitive), let us look at the relevant condition in Schedule 3:

The processing is necessary—
(a) in order to protect the vital interests of the data subject or another person, in a case where—
(i) consent cannot be given by or on behalf of the data subject, or
(ii) the data controller cannot reasonably be expected to obtain the consent of the data subject, or
(b) in order to protect the vital interests of another person, in a case where consent by or on behalf of the data subject has been unreasonably withheld

Samaritans’ alternative defence founders on the first four words: in what way can this processing be necessary to protect vital interests? The Information Commissioner’s Office explains that this condition only applies

in cases of life or death, such as where an individual’s medical history is disclosed to a hospital’s A&E department treating them after a serious road accident

The evidence suggests this app is actually delivering a very large number of false positives (as it’s based on what seems to be a crude keyword algorithm, this is only to be expected). Given that, and, indeed, given that Samaritans have – expressly – no control over what happens once the app notifies a user of a concerning tweet, it is absolutely preposterous to suggest that the processing is necessary to protect people’s vital interests. Moreover, the condition above also explains that it can only be relied on where consent cannot be given by the data subject or the controller cannot reasonably be expected to obtain consent. Nothing prevents Samaritans from operating an app which would do the same thing (flag a tweet of concern) but base it on a consent model, whereby someone agrees that their tweets will be monitored in that way. Indeed, such a model would fit better with Samaritans’ stated aim of allowing people to “lead the conversation at their own pace”. It is clear, nonetheless, that consent could be sought for this processing, but that Samaritans have failed to design an app which allows it to be sought.
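Samaritans have not published the algorithm, but the reported behaviour is consistent with context-free phrase matching. A minimal sketch – the phrases are invented, and this is emphatically not Samaritans’ code – shows why such an approach inevitably over-flags:

# Invented example phrases; a real list would differ, but the failure
# mode is the same: matching with no sense of context.
WORRYING_PHRASES = ["hate myself", "can't go on", "so alone"]

def flag(tweet: str) -> bool:
    """Flag a tweet if any 'worrying' phrase appears, context-free."""
    text = tweet.lower()
    return any(phrase in text for phrase in WORRYING_PHRASES)

for tweet in [
    "I feel so alone tonight",                      # possibly genuine
    "I hate myself for finishing the whole pizza",  # false positive
    "Can't go on holiday this year, boo",           # false positive
]:
    if flag(tweet):
        print("Alert emailed to followers:", tweet)

All three example tweets are flagged; two are plainly innocuous. Without context, keyword matching cannot distinguish a cry for help from a joke about pizza.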

The Information Commissioner’s Office is said to be looking into the issues raised by Samaritans’ app. It may be that the app will only actually be removed – as I think it should be – through legal enforcement action. But it would be extremely sad if it came to that. It should be removed voluntarily by Samaritans, so they can rethink, re-programme, take full legal advice, but – most importantly – listen to the voices of the most vulnerable, who feel so threatened and betrayed by the app.

1On a strict and nuanced analysis of data protection law users of the app probably are data controllers, acting jointly with Samaritans. However, given the regulatory approach of the Information Commissioner they would probably be able to avail themselves of the general exemption from all of the DPA for processing which is purely domestic (although even that is arguably wrong). These are matters for another blog post, however, and the fact that users might be held to be data controllers doesn’t alter the fact that Samaritans are, and in a much clearer way.


Filed under consent, Data Protection, Information Commissioner, Privacy, social media

Samaritans Radar – serious privacy concerns raised

UPDATE: 31 October

It appears Samaritans have silently tweaked their FAQs (so the text near the foot of this post no longer appears). They now say tweets will only be retained by the app for seven (as opposed to thirty) days, and have removed the words saying the app will retain a “Count of flags against a Twitter Users Friends ID”. Joe Ferns said on Twitter that the inclusion of this in the original FAQs was “a throw back to a stage of the development where that was being considered”. Samaritans also say “The only people who will be able to see the alerts, and the tweets flagged in them, are followers who would have received these Tweets in their current feed already”, but this does not absolve them of their data controller status: a controller does not need to access data in order to determine the purposes for which, and the manner in which, personal data are being processed, and they are still doing this. Moreover, this changing of the FAQs, with no apparent change to the position that those whose tweets are processed get no fair processing notice whatsoever, makes me more concerned that this app has been released without adequate assessment of its impact on people’s privacy.

END UPDATE

UPDATE: 30 October

Susan Hall has written a brilliant piece expanding on mine below, and she points out that section 12 of the Data Protection Act 1998 in terms allows a data subject to send a notice to a data controller requiring it to ensure no automated decisions are taken by processing their personal data for the purposes of evaluating matters such as their conduct. It seems to me that is precisely what “Samaritans Radar” does. So I’ve sent the following to Samaritans

Dear Samaritans

This is a notice pursuant to section 12 Data Protection Act 1998. Please ensure that no decision is taken by you or on your behalf (for instance by the “Samaritans Radar” app) based solely on the processing by automatic means of my personal data for the purpose of evaluating my conduct.

Thanks, Jon Baines @bainesy1969

I’ll post here about any developments.

END UPDATE

Samaritans have launched a Twitter App “to help identify vulnerable people”. I have only ever had words of praise and awe about Samaritans and their volunteers, but this time I think they may have misjudged the effect, and the potential legal implications of “Samaritans Radar”. Regarding the effect, this post from former volunteer @elphiemcdork is excellent:

How likely are you to tweet about your mental health problems if you know some of your followers would be alerted every time you did? Do you know all your followers? Personally? Are they all friends? What if your stalker was a follower? How would you feel knowing your every 3am mental health crisis tweet was being flagged to people who really don’t have your best interests at heart, to put it mildly? In this respect, this app is dangerous. It is terrifying to think that anyone can monitor your tweets, especially the ones that disclose you may be very vulnerable at that time

As for the legal implications, it appears that Samaritans may be processing sensitive personal data, in circumstances where there may not be a legal basis to do so. And some rather worrying misconceptions have accompanied the app launch. The first and most concerning of these is in the FAQs prepared for the media. In reply to the question “Isn’t there a data privacy issue here? Is Samaritans Radar spying on people?” the following answer is given

All the data used in the app is public, so user privacy is not an issue. Samaritans Radar analyses the Tweets of the people you follow, which are public Tweets. It does not look at private Tweets

The idea that, because something is in the public domain, it cannot engage privacy issues is a horribly simplistic one, and if that constitutes the impact assessment undertaken, then serious questions have to be asked. Moreover, it doesn’t begin to consider the data protection considerations: personal data is personal data, whether it’s in the public domain or not. A tweet from an identified tweeter is inescapably the personal data of that person, and, if it is, or appears to be, about the person’s physical or mental health, then it is sensitive personal data, afforded a higher level of protection under the Data Protection Act 1998 (DPA). It would appear that Samaritans, as the legal person who determines the purposes for which, and the manner in which, the personal data are processed (i.e. they have produced an app which identifies a tweet on the basis of words, or sequences of words, and pushes it to another person), are acting as a data controller. As such, any processing has to be in accordance with their obligation to abide by the data protection principles in Schedule One of the DPA. The first principle says that personal data must be processed fairly and lawfully, and that a condition for processing contained in Schedule Two (and, for sensitive personal data, Schedules Two and Three) must be met. Looking only at Schedule Three, I struggle to see the condition which permits the app to identify a tweet, decide that it is from a potentially suicidal person and send it as such to a third party. The one condition which might apply – the fifth: “The information contained in the personal data has been made public as a result of steps deliberately taken by the data subject” – is undercut by the fact that the data in question is not just the public tweet, but the “package” of that tweet with the fact that the app (not the tweeter) has identified it as a potential call for help.

The reliance on “all the data used in the app is public, so user privacy is not an issue” has carried through in messages sent on twitter by Samaritans Director of Policy, Research and Development, Joe Ferns, in response to people raising concerns, such as

existing Twitter search means anyone can search tweets unless you have set to private. #SamaritansRadar is like an automated search

Again, this misses the point that it is not just “anyone” doing a search on twitter, it is an app in Samaritans’ name which specifically identifies (in an automated way) certain tweets as of concern, and pushes them to third parties. Even more concerning was Mr Ferns’ response to someone asking if there was a way to opt out of having their tweets scanned by the app software:

if you use Twitter settings to mark your tweets private #SamaritansRadar will not see them

What he is actually suggesting there is that to avoid what some people clearly feel are intrusive actions they should lock their account and make it private. And, of course, going back to @elphiemcdork’s points, it is hard to avoid the conclusion that those who will do this might be some of the most vulnerable people.

A further concern is raised (one which confirms the data controller point above) about retention and reuse of data. The media FAQ states

Where will all the data be stored? Will it be secure? The data we will store is as follows:
• Twitter User ID – a unique ID that is associated with a Twitter account
• All Twitter User Friends ID’s – the same as above but for all the users friends that they follow
• Any flagged Tweets – this is the data associated with the Tweet, we will store the raw data for the Tweet as well
• Count of flags against a Twitter Users Friends ID – we store a count of flags against an individual User
• To prevent the Database growing exponentially we will remove flagged Tweets that are older than 30 days.

So it appears that Samaritans will be amassing data on unwitting twitter users, and in effect profiling them. This sort of data is terrifically sensitive, and no indication is given regarding the location of this data, or the security measures in place to protect it.
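Working only from the wording of that FAQ, the data model being described looks something like the sketch below (the type and field names are my own reconstruction, not Samaritans’ actual schema):

from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class FlaggedTweet:
    tweet_id: str
    author_id: str          # the friend whose tweet was flagged
    raw_text: str           # "we will store the raw data for the Tweet as well"
    flagged_at: datetime

@dataclass
class AppUser:
    user_id: str                                      # "Twitter User ID"
    friend_ids: list = field(default_factory=list)    # "All Twitter User Friends ID's"
    flag_counts: dict = field(default_factory=dict)   # "Count of flags" per friend

def prune_old_flags(flagged: list, now: datetime) -> list:
    # "we will remove flagged Tweets that are older than 30 days"
    cutoff = now - timedelta(days=30)
    return [t for t in flagged if t.flagged_at >= cutoff]

Note that, on this reading, the 30-day pruning applies only to the flagged tweets themselves: the running count of flags against an individual sits outside it, and nothing in the FAQ suggests it is ever deleted. That running tally is the profiling.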

The Information Commissioner’s Office recently produced some good guidance for app developers on Privacy in Mobile Apps. The guidance commends the use of Privacy Impact Assessments when developing apps. I would be interested to know if one was undertaken for Samaritans Radar, and, if so, how it dealt with the serious concerns that have been raised by many people since its launch.

This post was amended to take into account the observations in the comments by Susan Hall, to whom I give thanks. I have also since seen a number of excellent blog posts dealing with wider concerns. I commend, in particular, this by Adrian Short and this by @latentexistence

Filed under consent, Data Protection, Information Commissioner, Privacy, social media

The Crown Estate and behavioural advertising

A new app for Regent Street shoppers will deliver targeted behavioural advertising – is it processing personal data?

My interest was piqued by a story in the Telegraph that

Regent Street is set to become the first shopping street in Europe to pioneer a mobile phone app which delivers personalised content to shoppers during their visit

Although this sounds like my idea of hell, it will no doubt appeal to some people. It appears that a series of Bluetooth beacons will deliver mobile content (for which, read “targeted behavioural advertising”) to the devices of users who have installed the Regent Street app. Users will indicate their shopping preferences, and a profile of them will be built by the app.

Electronic direct marketing in the UK is ordinarily subject to compliance with The Privacy and Electronic Communications (EC Directive) Regulations 2003 (“PECR”). However, the definition of “electronic mail” in PECR is “any text, voice, sound or image message sent over a public electronic communications network which can be stored in the network or in the recipient’s terminal equipment until it is collected by the recipient and includes messages sent using a short message service”. In 2007 the Information Commissioner, upon receipt of advice, changed his previous stance that Bluetooth marketing would be caught by PECR, to one under which it would not be caught, because Bluetooth does not involve a “public electronic communications network”. Nonetheless, general data protection law relating to consent to direct marketing will still apply, and the Direct Marketing Association says

Although Bluetooth is not considered to fall within the definition of electronic mail under the current PECR, in practice you should consider it to fall within the definition and obtain positive consent before using it

This reference to “positive consent” reflects the definition of consent in the Data Protection Directive, which says that it is

any freely given specific and informed indication of his wishes by which the data subject signifies his agreement to personal data relating to him being processed

And that word “informed” is where I start to have a possible problem with this app. Ever one for thoroughness, I decided to download it, to see what sort of privacy information it provided. There wasn’t much, but in the Terms and Conditions (which don’t appear to be viewable until you download the app) it did say

The App will create a profile for you, known as an autoGraph™, based on information provided by you using the App. You will not be asked for any personal information (such as an email address or phone number) and your profile will not be shared with third parties

autograph (don’t forget the™) is software which, in its own words, “lets people realise their interests, helping marketers drive response rates”, and it does so by profiling its users

In under one minute without knowing your name, email address or any personally identifiable information, autograph can figure out 5500 dimensions about you – age, income, likes and dislikes – at over 90% accuracy, allowing businesses to serve what matters to you – offers, programs, music… almost anything

Privacy types might notice the jarring words in that blurb. Apparently the software can quickly “figure out” thousands of potential identifiers about a user, without knowing “any personally identifiable information”. To me, that’s effectively saying “we will create a personally identifiable profile of you, without using any personally identifiable information”. The fact of the matter is that people’s likes, dislikes, preferences, choices etc (and does this app capture device information, such as IMEI?) can all be used to build up a picture which renders them identifiable. It is trite law that “personal data” is data which relate to a living individual who can be identified from those data or from those data and other information which is in the possession of, or is likely to come into the possession of, the data controller. The Article 29 Working Party (made up of representatives from the data protection authorities of each EU member state) delivered an Opinion in 2010 on online behavioural advertising which stated that

behavioural advertising is based on the use of identifiers that enable the creation of very detailed user profiles which, in most cases, will be deemed personal data

If this app is, indeed, processing personal data, then I would suggest that the limited Terms and Conditions (which users are not even pointed to when they download the app, let alone invited to agree to) are inadequate to mean that a user is freely giving specific and informed consent to the processing. And if the app is processing personal data to deliver electronic marketing, failure to comply with PECR might not matter, but failure to comply with the Data Protection Act 1998 brings potential liability to legal claims and enforcement action.
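Going back to autograph’s “5500 dimensions” boast, a toy calculation (with invented numbers) shows why a rich preference profile functions as an identifier even if no single dimension is “personally identifiable”:

# Toy illustration: even a few preference "dimensions" make a profile
# effectively unique. Suppose each dimension has just four possible answers.
ANSWERS_PER_DIMENSION = 4

for dimensions in (5, 10, 20):
    profiles = ANSWERS_PER_DIMENSION ** dimensions
    print(f"{dimensions} dimensions -> {profiles:,} possible distinct profiles")

# 20 four-way dimensions already allow over a trillion distinct profiles --
# far more than there are people on Earth.

It is the combination of answers, not any single answer, that identifies: a 5,500-dimension profile is, in practice, a fingerprint.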

The Information Commissioner last year produced good guidance on Privacy in Mobile Apps which states that

Users of your app must be properly informed about what will happen to their personal data if they install and use the app. This is part of Principle 1 in the DPA which states that “Personal data shall be processed fairly and lawfully”. For processing to be fair, the user must have suitable information about the processing and they must be told about the purposes

The relevant data controller for Regent Street Online happens to be The Crown Estate. On the day that the Queen sent her first tweet, it is interesting to consider the extent to which her own property company are in compliance with their obligations under privacy laws.

This post has been edited as a result of comments on the original, which highlighted that PECR does not, in strict terms, apply to Bluetooth marketing


Filed under consent, Data Protection, Directive 95/46/EC, Information Commissioner, marketing, PECR, Privacy, tracking

Monitoring of blogs and lawful/unlawful surveillance

Tim Turner wrote recently about the data protection implications of the monitoring of Sara Ryan’s blog by Southern Health NHS Trust. Tim’s piece is an exemplary analysis of how the processing of personal data which is in the public domain is still subject to compliance with the Data Protection Act 1998 (DPA):

there is nothing in the Data Protection Act that says that the public domain is off-limits. Whatever else, fairness still applies, and organisations have to accept that if they want to monitor what people are saying, they have to be open about it

But it is not just data protection law which is potentially engaged by the Trust’s actions. Monitoring of social media and networks by public authorities for the purposes of gathering intelligence might well constitute directed surveillance, bringing us explicitly into the area of human rights law. Sir Christopher Rose, the Chief Surveillance Commissioner, said in his most recent annual report

my commissioners remain of the view that the repeat viewing of individual “open source” sites for the purpose of intelligence gathering and data collation should be considered within the context of the protection that RIPA affords to such activity

“RIPA” there of course refers to the complex Regulation of Investigatory Powers Act 2000 (RIPA) (parts of which were reputedly “intentionally drafted for maximum obscurity”)1. What is not complex, however, is to note which public authorities are covered by RIPA when they engage in surveillance activities. A 2006 statutory instrument2 removed NHS Trusts from the list (at Schedule One of RIPA) of relevant public authorities whose surveillance was authorised by RIPA. Non-inclusion on the Schedule One lists doesn’t as a matter of fact or law mean that a public authority cannot undertake surveillance. This is because of the rather odd provision at section 80 of RIPA, which effectively explains that surveillance is lawful if carried out in accordance with RIPA, but surveillance not carried out in accordance with RIPA is not ipso facto unlawful. As the Investigatory Powers Tribunal put it, in C v The Police and the Home Secretary IPT/03/32/H

Although RIPA provides a framework for obtaining internal authorisations of directed surveillance (and other forms of surveillance), there is no general prohibition in RIPA against conducting directed surveillance without RIPA authorisation. RIPA does not require prior authorisation to be obtained by a public authority in order to carry out surveillance. Lack of authorisation under RIPA does not necessarily mean that the carrying out of directed surveillance is unlawful.

But it does mean that where surveillance is not specifically authorised by RIPA questions would arise about its legality under Article 8 of the European Convention on Human Rights, as incorporated into domestic law by the Human Rights Act 1998. The Tribunal in the above case went on to say

the consequences of not obtaining an authorisation under this Part may be, where there is an interference with Article 8 rights and there is no other source of authority, that the action is unlawful by virtue of section 6 of the 1998 Act.3

So, when the Trust was monitoring Sara Ryan’s blog, was it conducting directed surveillance (in a manner not authorised by RIPA)? RIPA describes directed surveillance as covert (and remember, as Tim Turner pointed out – no notification had been given to Sara) surveillance which is “undertaken for the purposes of a specific investigation or a specific operation and in such a manner as is likely to result in the obtaining of private information about a person (whether or not one specifically identified for the purposes of the investigation or operation)” (there is a further third limb which is not relevant here). One’s immediate thought might be that no private information was obtained or intended to be obtained about Sara, but one must bear in mind that, by section 26(10) of RIPA “‘private information’, in relation to a person, includes any information relating to his private or family life” (emphasis added). This interpretation of “private information” of course is to be read alongside the protection afforded to the respect for one’s private and family life under Article 8. The monitoring of Sara’s blog, and the matching of entries in it against incidents in the ward on which her late son, LB, was placed, unavoidably resulted in the obtaining of information about her and LB’s family life. This, of course, is the sort of thing that Sir Christopher Rose warned about in his most recent report, in which he went on to say

In cash-strapped public authorities, it might be tempting to conduct on line investigations from a desktop, as this saves time and money, and often provides far more detail about someone’s personal lifestyle, employment, associates, etc. But just because one can, does not mean one should.

And one must remember that he was talking about cash-strapped public authorities whose surveillance could be authorised under RIPA. When one remembers that this NHS Trust was not authorised to conduct directed surveillance under RIPA, one struggles to avoid the conclusion that monitoring was potentially in breach of Sara’s and LB’s human rights.

1See footnote to Caspar Bowden’s submission to the Intelligence and Security Committee
2The Regulation of Investigatory Powers (Directed Surveillance and Covert Human Intelligence Sources) (Amendment) Order 2006
3This passage was apparently lifted directly from the explanatory notes to RIPA


Filed under Data Protection, human rights, NHS, Privacy, RIPA, social media, surveillance, surveillance commissioner

Brooks Newmark, the press, and “the other woman”

UPDATE: 30.09.14 Sunday Mirror editor Lloyd Embley is reported by the BBC and other media outlets to have apologised for the use of women’s photos (it transpires that two women’s images were appropriated), saying

We thought that pictures used by the investigation were posed by models, but we now know that some real pictures were used. At no point has the Sunday Mirror published any of these images, but we would like to apologise to the women involved for their use in the investigation

What I think is interesting here is the implicit admission that (consenting) models could have been used in the fake profiles. Does this mean, therefore, that the processing of the (non-consenting) women’s personal data was not done in the reasonable belief that it was in the public interest?

Finally, I think it’s pretty shoddy that former Culture Secretary Maria Miller resorts to victim-blaming, and missing the point, when she is reported to have said that the story “showed why people had to be very careful about the sorts of images they took of themselves and put on the internet”

END UPDATE.

With most sex scandals involving politicians, there is “the other person”. For every Profumo, a Keeler;  for every Mellor, a de Sancha; for every Clinton, a Lewinsky. More often than not the rights and dignity of these others are trampled in the rush to revel in outrage at the politicians’ behaviour. But in the latest, rather tedious, such scandal, the person whose rights have been trampled was not even “the other person”, because there was no other person. Rather, it was a Swedish woman* whose image was appropriated by a journalist without her permission or even her knowledge. This raises the question of whether such use, by the journalist, and the Sunday Mirror, which ran the exposé, was in accordance with their obligations under data protection and other privacy laws.

The story run by the Sunday Mirror told of how a freelance journalist set up a fake social media profile, purportedly of a young PR girl called Sophie with a rather implausible interest in middle-aged Tory MPs. He apparently managed to snare the Minister for Civil Society and married father of five, Brooks Newmark, and to encourage him to send explicit photographs of himself. The result was that the newspaper got a lurid scoop, and the Minister subsequently resigned. Questions are being asked about the ethics of the journalism involved, and there are suggestions that this could be the first difficult test for IPSO, the new Independent Press Standards Organisation.

But for me much the most unpleasant part of this unpleasant story was that the journalist appears to have decided to attach to the fake twitter profile the image of a Swedish woman. It’s not clear where he got this from, but it is understood that the same image had apparently already appeared on several fake Facebook accounts (it is not suggested, I think, that the same journalist was responsible for those accounts). The woman is reported to be distressed at the appropriation:

It feels really unpleasant…I have received lot of emails, text messages and phone calls from various countries on this today. It feels unreal…I do not want to be exploited in this way and someone has used my image like this feels really awful, both for me and the others involved in this. [Google translation of original Swedish]

Under European and domestic law the image of an identifiable individual is their personal data. Anyone “processing” such data as a data controller (“the person who (either alone or jointly or in common with other persons) determines the purposes for which and the manner in which any personal data are, or are to be, processed”) has to do so in accordance with the law. Such processing as happened here, both by the freelance journalist, when setting up and operating the social media account(s), and by the Sunday Mirror, in publishing the story, is covered by the UK Data Protection Act 1998 (DPA). This will be the case even though the person whose image was appropriated is in Sweden. The DPA requires, among other things, that processing of personal data be “fair and lawful”. It affords aggrieved individuals the right to bring civil claims for compensation for damage and distress arising from contraventions of data controllers’ obligations under the DPA. It also affords them the right to ask the Information Commissioner’s Office (ICO) for an assessment of the likelihood (or not) that processing was in compliance with the DPA.

However, section 32 of the DPA also gives journalism a very broad exemption from almost all of the Act, if the processing is undertaken with a view to publication, and the data controller reasonably believes that publication would be in the public interest and that compliance with the DPA would be incompatible with the purposes of journalism. As the ICO says

The scope of the exemption is very broad. It can disapply almost all of the DPA’s provisions, and gives the media a significant leeway to decide for themselves what is in the public interest

The two data controllers here (the freelancer and the paper) would presumably have little problem satisfying a court, or the ICO, that when it came to processing of Brooks Newmark’s personal data, they acted in the reasonable belief that the public interest justified the processing. But one wonders to what extent they even considered the processing of (and associated intrusion into the private life of) the Swedish woman whose image was appropriated. Supposing they didn’t even consider this processing – could they reasonably say that they reasonably believed it to have been in the public interest?

These are complex questions, and the breadth and ambit of the section 32 exemption are likely to be tested in litigation between the mining and minerals company BSG and the campaigning group Global Witness (currently stalled/being considered at the ICO). But even if a claim or complaint under the DPA would be a tricky one to make, there are other legal issues raised. Perhaps in part because of the breadth of the section 32 DPA exemption (and perhaps because of the low chance of significant damages under the DPA), claims of press intrusion into private lives are more commonly brought under the cause of action of “misuse of private information”, confirmed – it would seem – as a tort in the ruling of Mr Justice Tugendhat in Vidal Hall and Ors v Google Inc [2014] EWHC 13 (QB) earlier this year. Damage awards for successful claims in misuse of private information have been known to run to tens of thousands of pounds – most notably, recently, an award of £10,000 for Paul Weller’s children, after photographs taken covertly and without consent had been published in the Mail Online.

IPSO expects journalists to abide by the Editors’ Code, Clause 3 of which says

i) Everyone is entitled to respect for his or her private and family life, home, health and correspondence, including digital communications.

ii) Editors will be expected to justify intrusions into any individual’s private life without consent. Account will be taken of the complainant’s own public disclosures of information

and the ICO will take this Code into account when considering complaints about journalistic processing of personal data. One notes that “account will be taken of the complainant’s own public disclosures of information”, but one hopes that this would not be seen to justify the unfair and unethical appropriation of images found elsewhere on the internet.

*I’ve deliberately, although rather pointlessly – given their proliferation in other media – avoided naming the woman in question, or posting her photograph


Filed under Confidentiality, consent, Data Protection, Information Commissioner, journalism, Privacy, social media

Dancing to the beat of the Google drum

With rather wearying predictability, certain parts of the media are in uproar about the removal by Google of search results linking to a positive article about a young artist. Roy Greenslade, in the Guardian, writes

The Worcester News has been the victim of one of the more bizarre examples of the European court’s so-called “right to be forgotten” ruling.

The paper was told by Google that it was removing from its search archive an article in praise of a young artist.

Yes, you read that correctly. A positive story published five years ago about Dan Roach, who was then on the verge of gaining a degree in fine art, had to be taken down.

Although no one knows who made the request to Google, it is presumed to be the artist himself, as he had previously asked the paper itself to remove the piece,  on the basis that he felt it didn’t reflect the work he is producing now. But there is a bigger story here, and in my opinion it’s one of Google selling itself as an unwilling censor, and of media uncritically buying it.

Firstly, Google had no obligation to remove the results. The judgment of the Court of Justice of the European Union (CJEU) in the Google Spain case was controversial, and problematic, but its effect was certainly not to oblige a search engine to respond to a takedown request without considering whether it has a legal obligation to do so. What it did say was that, although as a rule data subjects’ rights to removal override the interest of the general public having access to the information delivered by a search query, there may be particular reasons why the balance might go the other way.

Furthermore, even if the artist here had a legitimate complaint that the results constituted his personal data, and that the continued processing by Google was inadequate, inaccurate, excessive or continuing for longer than was necessary (none of which, I would submit, would actually be likely to apply in this case), Google could simply refuse to comply with the takedown request. At that point, the requester would be left with two options: sue, or complain to the Information Commissioner’s Office (ICO). The former option is an interesting one (and I wonder if any such small claims cases will be brought in the County Court) but I think in the majority of cases people will be likely to take the latter. However, if the ICO receives a complaint, it appears that the first thing it is likely to do is refer the person to the publisher of the information in question. In a blog post in August the Deputy Commissioner David Smith said

We’re about to update our website* with advice on when an individual should complain to us, what they need to tell us and how, in some cases, they might be better off pursuing their complaint with the original publisher and not just the search engine [emphasis added]

This is in line with their new approach to handling complaints by data subjects – which is effectively telling them to go off and resolve it with the data controller in the first place.

Even if the complaint does make its way to an ICO case officer, what that officer will be doing is assessing – pursuant to section 42 of the Data Protection Act 1998 (DPA) – “whether it is likely or unlikely that the processing has been or is being carried out in compliance with the provisions of [the DPA]”. What the ICO is not doing is determining an appeal. An assessment of “compliance not likely” is no more than that – it does not oblige the data controller to take action (although it may be accompanied by recommendations). An assessment of “compliance likely”, moreover, leaves an aggrieved data subject with no other option but to attempt to sue the data controller. Contrary to what Information Commissioner Christopher Graham said at the recent Rewriting History debate, there is no right of appeal to the Information Tribunal in these circumstances.

Of course the ICO could, in addition to making a “compliance not likely” assessment, serve Google with an enforcement notice under section 40 DPA requiring them to remove the results. An enforcement notice does have proper legal force, and it is a criminal offence not to comply with one. But they are rare creatures. If the ICO does ever serve one on Google things will get interesting, but let’s not hold our breath.

So, simply refusing to take down the results would, certainly in the short term, cause Google no trouble, nor attract any sanction.

Secondly (sorry, that was a long “firstly”), Google appear to have notified the paper of the takedown, in the same way they notified various journalists of takedowns of their pieces back in June this year (with, again, the predictable result that the journalists were outraged, and republicised the apparently taken-down information). The ICO has identified that this practice by Google may in itself constitute unfair and unlawful processing: David Smith says

We can certainly see an argument for informing publishers that a link to their content has been taken down. However, in some cases, informing the publisher has led to the complained about information being republished, while in other cases results that are taken down will link to content that is far from legitimate – for example to hate sites of various sorts. In cases like that we can see why informing the content publisher could exacerbate an already difficult situation and could in itself have a very detrimental effect on the complainant’s privacy

Google is a huge and hugely rich organisation. It appears to be trying to chip away at the CJEU judgment by making it look ridiculous. And in doing so it is cleverly using the media to help portray it as a passive actor – victim, along with the media, of censorship. As I’ve written previously, Google is anything but passive – it has algorithms which prioritise certain results above others, for commercial reasons, and it will readily remove search results upon receipt of claims that the links are to copyright material. Those elements of the media who are expressing outrage at the spurious removal of links might take a moment to reflect whether Google is really as interested in freedom of expression as they are, and, if not, why it is acting as it is.

*At the time of writing this advice does not appear to have been made available on the ICO website.

Filed under Data Protection, Directive 95/46/EC, enforcement, Information Commissioner, Privacy

Big Political Data

I’ve written over the past few months about questionable compliance by the Conservative, Labour, Liberal Democrat and Scottish National parties with their obligations under the Data Protection Act 1998 and the Privacy and Electronic Communications (EC Directive) Regulations 2003. And, as I sat down to write this post, I thought I’d check a couple of other parties’ sites, and, sure enough, similar issues are raised by the UKIP and Plaid Cymru sites

[Screenshots: email sign-up forms on the UKIP and Plaid Cymru websites]

No one except a few enthusiasts in this area of law/compliance seems particularly concerned, and I will, no doubt, eventually get fed up with the dead horse I am flogging. However, a fascinating article in The Telegraph by James Kirkup casts a light on just why political parties might be so keen to harvest personal data, and not be transparent about their uses of it.

Kirkup points out how parties have begun an

extraordinarily extensive – and expensive – programme of opinion polls and focus groups generating huge volumes of data about voters’ views and preferences…Traditional polls and focus groups have changed little in the past two decades. They help parties discover what voters think, what they want to hear, and how best to say it to them. That is the first stage of campaigning. The second is to identify precisely which voters you need to speak to. With finite time and resources, parties cannot afford to waste effort either preaching to the converted or trying to win over diehard opponents who will never change sides. The party that finds the waverers in the middle gains a crucial advantage.

It seems clear to me that the tricks, and opacity, which are used to get people to give up their personal information are part of this drive to amass more and more data for political purposes. It’s unethical, it’s probably unlawful, but few seem to care, and no one, including the Information Commissioner’s Office (which has in the past taken robust action against dodgy marketing practices in party politics), has seemed prepared so far to do anything to prevent it. However, the ICO has good guidance for the parties on this, and in May this year issued a warning to play by the marketing rules in the run-up to local and European elections. Let’s hope this warning, and the threat of enforcement action, extends to the bigger stage of the national elections next year.

Filed under Confidentiality, consent, Data Protection, Information Commissioner, marketing, PECR, Privacy

Political attitudes to ePrivacy – this goes deep

With the rushing through of privacy-intrusive legislation under highly questionable procedures, it almost seems wrong to bang on about political parties and their approach to ePrivacy and marketing, but a) much better people have written on the #DRIP bill, and b) I think the two issues are not entirely unrelated.

Last week I was taking issue with Labour’s social media campaign which invited people to submit their email address to get a number relating to when they were born under the NHS.

Today, prompted by a twitter exchange with the excellent Lib Dem councillor James Baker, in which I observed that politicians and political parties seem to be exploiting people’s interest in discrete policy issues to harvest emails, I looked at the Liberal Democrats’ home page. It really couldn’t have illustrated my point any better. People are invited to “agree” that they’re against female genital mutilation, by submitting their email address.

[Screenshot: the Liberal Democrats’ FGM campaign sign-up page]

There’s no information whatsoever about what will happen to your email address once you submit it. So, just as Labour were, but even more clearly here, the Lib Dems are in breach of The Privacy and Electronic Communications (EC Directive) Regulations 2003 and the Data Protection Act 1998. James says he’ll contact HQ to make them aware. But how on earth are they not already aware? The specific laws have been in place for eleven years, but the principles are much older – be fair and transparent with people’s private information. And it is not fair (in fact it’s pretty damn reprehensible) to use such a bleakly emotive subject as FGM to harvest emails (which is unavoidably the conclusion I arrive at when wondering what the purpose of the page is).

So, in the space of a few months I’ve written about the Conservatives, Labour and the Lib Dems breaching eprivacy laws. If they’re unconcerned about or – to be overly charitable – ignorant of these laws, then is it any wonder that they railroad each other into passing “emergency” laws (which are anything but) with huge implications for our privacy?

UPDATE: 13.07.14

Alistair Sloan draws attention to the Scottish National Party’s website, which is similarly harvesting emails with no adequate notification of the purposes of future use. The practice is rife, and, as Tim Turner says in the comments below, the Information Commissioner’s Office needs to take action.

[Screenshot: email sign-up form on the Scottish National Party’s website]


Filed under consent, Data Protection, PECR, Privacy, transparency