Tag Archives: privacy

Who’s yer da? Language misunderstandings in the courts

The stereotype of the out-of-touch judge goes back centuries, and is epitomised by the (probably apocryphal) example in the 1960s of the judge asking plaintively “Who are the Beatles?” Often, one suspects, a judge will in fact be asking a question to which she knows the answer, but which she feels would benefit from explanation by counsel, or a witness.

But I noticed an interesting example of what might be a real misunderstanding in a recent judgment on an application to strike out claims arising from publication of a screenshot from Facebook, with associated statements. The claims have been brought in defamation, harassment, data protection and misuse of private information.

The screenshot was of a photograph of the claimant, said to have been taken outside a school, and in one case, posted on Twitter, it was accompanied by words, having the effect of a caption, saying “I see yer Da is doing ‘community watch’ again”.

In respect of the application to strike out the misuse of private information claim, the judge hearing the application had to consider whether the tweet constituted information in which the claimant had a reasonable expectation of privacy. One of the features he took into account was this:

The location was outside the school which the claimant’s daughter attended. The Facebook Post did not say this (because Ms K made clear that she did not know who the claimant was and there is no sign in the photograph of the claimant’s daughter). But that does not change the fact that the claimant was photographed outside his daughter’s school having just done the school run. The expression “yer Da” (part of the caption to the first tweet of the screenshot) suggested, correctly, that he was a parent. [emphasis added]

I do not think this is right. I do not think the expression did, nor was intended to, suggest the claimant was a parent. Those who spend some time on the internet become familiar with its particular idioms, and “yer Da” is one of those. It is not meant to be taken literally nor to suggest someone is a parent. The Urban Dictionary’s definition is on point:

A common meme of the mid-2010s, most popular in the UK, from the Scottish dialect of “your dad”, which involves someone making statements on a news story through the eyes of a stereotypically right-wing, conservative, reactionary middle aged British man, increasingly baffled and angry at the modern world.

It gives a number of example uses which it’s not necessary to quote here, but suffice to say that I suspect the use of “yer Da” was intended to be mockery, but not to suggest the claimant was a parent.

This is not to say that what I see as a misunderstanding by the judge has any real significance to the case (the phrase was by no means the only factor taken into account, in what is a multi-pronged claim arising from a clearly fractious background).

But it does show that language and idioms and the context in which they are used are complex things. The irony is that this is (partly) a libel case, an area of law where the subtleties of meaning can be profoundly relevant.

The views in this post (and indeed most posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.


Filed under defamation, misuse of private information

Samaritans cannot deny being data controller for #samaritansradar

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.

So, Samaritans continue to support the #samaritansradar app, about which I, and many others, have already written. A large number of people suffering from, or with experience of, mental health problems have pleaded with Samaritans to withdraw the app, which monitors the tweets of the people one follows on Twitter, applies an algorithm to identify tweets from potentially vulnerable people, and emails that information to the app user, all without the knowledge of the person involved. As Paul Bernal has eloquently said, this is not really an issue about privacy, nor about data protection – it is about the threat many vulnerable people feel from the presence of the app. Nonetheless, privacy and data protection law are, in part, about the rights of the vulnerable; last night (4 November) Samaritans issued their latest sparse statement, part of which dealt with data protection:

We have taken the time to seek further legal advice on the issues raised. Our continuing view is that Samaritans Radar is compliant with the relevant data protection legislation for the following reasons:

• We believe that Samaritans are neither the data controller or data processor of the information passing through the app

• All information identified by the app is available on Twitter, in accordance with Twitter’s Ts&Cs (link here). The app does not process private tweets.

• If Samaritans were deemed to be a data controller, given that vital interests are at stake, exemptions from data protection law are likely to apply

It is interesting that there is reference here to “further” legal advice: none of the previous statements from Samaritans had given any indication that legal or data protection advice had been sought prior to the launch of the app. It would be enormously helpful to discussion of the issue if Samaritans actually disclosed their advice, but I doubt very much that they will do so. Nonetheless, their position appears to be at odds with the legal authorities.

In May this year the Court of Justice of the European Union (CJEU) gave its ruling in the Google Spain case. The most widely covered aspect of that case was, of course, the extent of a right to be forgotten – a right to require Google to remove search terms in certain specified cases. But the CJEU also was asked to rule on the question of whether a search engine, such as Google, was a data controller in circumstances in which it engages in the indexing of web pages. Before the court Google argued that

the operator of a search engine cannot be regarded as a ‘controller’ in respect of that processing since it has no knowledge of those data and does not exercise control over the data

and this would appear to be a similar position to that adopted by Samaritans in the first bullet point above. However, the CJEU dismissed Google’s argument, holding that

the operator of a search engine ‘collects’ such data which it subsequently ‘retrieves’, ‘records’ and ‘organises’ within the framework of its indexing programmes, ‘stores’ on its servers and, as the case may be, ‘discloses’ and ‘makes available’ to its users in the form of lists of search results…It is the search engine operator which determines the purposes and means of that activity and thus of the processing of personal data that it itself carries out within the framework of [the activity at issue] and which must, consequently, be regarded as the ‘controller’ in respect of that processing

Inasmuch as I understand how it works, I would submit that #samaritansradar, while not a search engine as such, collects data (personal data), records and organises it, stores it on servers and discloses it to its users in the form of a result. The app has been developed by and launched by Samaritans, it carries their name and seeks to further their aims: it is clearly “their” app, and they are, as clearly, a data controller with attendant legal responsibilities and liabilities. As further proof of this, Samaritans introduced, after the app launch and in response to outcry, a “whitelist” of Twitter users who have specifically informed Samaritans that they do not want their tweets to be monitored (update on 30 October). If Samaritans are effectively saying they have no role in the processing of the data, how on earth would such a whitelist be expected to work?
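Samaritans have not published how the app works, but a purely illustrative sketch – every name, keyword and data structure below is my invention – shows the difficulty: honouring a whitelist means checking each incoming tweet against a list the operator holds, which is itself processing, sitting alongside the collecting, recording, storing and disclosing described in Google Spain:

    # Illustrative sketch only: Samaritans have not published the app's code,
    # and every name, keyword and structure here is my invention.

    KEYWORDS = {"help me", "hate myself", "want to die"}  # assumed trigger phrases
    WHITELIST = {42}    # IDs of users who have asked Samaritans not to monitor them
    flag_store = []     # the operator's own record of flagged tweets

    def looks_concerning(text: str) -> bool:
        """Stand-in for whatever matching the real app performs."""
        return any(k in text.lower() for k in KEYWORDS)

    def handle_tweet(tweet: dict, app_user_email: str, outbox: list) -> None:
        """Collect -> filter -> flag -> disclose: each step, including the
        whitelist check, is determined by the app's operator, not its users."""
        if tweet["author_id"] in WHITELIST:
            return              # even opting someone out means processing their data
        if looks_concerning(tweet["text"]):
            flag_store.append(tweet)                        # 'records' and 'stores'
            outbox.append((app_user_email, tweet["text"]))  # 'discloses' (emailing
                                                            # simulated by an outbox)

    outbox = []
    handle_tweet({"author_id": 7, "text": "I hate myself today"}, "user@example.com", outbox)
    print(outbox)  # [('user@example.com', 'I hate myself today')]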

And it’s interesting to consider the apparent alternative view that they are implicitly putting forward. If they are not the data controller, then who is? The answer must be the users who download and run the app, who would attract all the legal obligations that go with being a data controller. The Samaritans appear to want to back out of the room, leaving app users to answer all the awkward questions.[1]

Also very interesting is that Samaritans clearly accept that others might have a different view to theirs on the issue of controllership; they suggest that if they were held to be a data controller they would avail themselves of “exemptions” in data protection law relating to “vital interests” to legitimise their activities. One presumes this to be a reference to certain conditions in Schedules 2 and 3 of the Data Protection Act 1998 (DPA). Those schedules contain conditions which must be met in order for the processing of, respectively, personal data and sensitive personal data to be fair and lawful. As we are here clearly talking about sensitive personal data (personal data relating to someone’s physical or mental health is classed as sensitive), let us look at the relevant condition in Schedule 3:

The processing is necessary—
(a) in order to protect the vital interests of the data subject or another person, in a case where—
(i) consent cannot be given by or on behalf of the data subject, or
(ii) the data controller cannot reasonably be expected to obtain the consent of the data subject, or
(b) in order to protect the vital interests of another person, in a case where consent by or on behalf of the data subject has been unreasonably withheld

Samaritans’ alternative defence founders on the first four words: in what way can this processing be necessary to protect vital interests? The Information Commissioner’s Office explains that this condition only applies

in cases of life or death, such as where an individual’s medical history is disclosed to a hospital’s A&E department treating them after a serious road accident

The evidence suggests this app is actually delivering a very large number of false positives (as it appears to be based on a crude keyword algorithm, this is only to be expected). Given that, and, indeed, given that Samaritans have – expressly – no control over what happens once the app notifies a user of a concerning tweet, it is absolutely preposterous to suggest that the processing is necessary to protect people’s vital interests. Moreover, the condition above also explains that it can only be relied on where consent cannot be given by the data subject or the controller cannot reasonably be expected to obtain consent. Nothing prevents Samaritans from operating an app which would do the same thing (flag a tweet of concern) but basing it on a consent model, whereby someone agrees that their tweets will be monitored in that way. Indeed, such a model would fit better with Samaritans’ stated aim of allowing people to “lead the conversation at their own pace”. It is clear, nonetheless, that consent could be sought for this processing, but that Samaritans have failed to design an app which allows it to be sought.
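To see why a crude keyword approach over-flags (and under-flags), consider a toy matcher; the keywords and example tweets here are invented, since the real trigger list has not been published:

    # Invented keywords and tweets -- the real trigger list has not been published.
    KEYWORDS = {"kill myself", "can't go on"}

    def flags(text: str) -> bool:
        return any(k in text.lower() for k in KEYWORDS)

    print(flags("This marking load makes me want to kill myself!"))    # True: hyperbole flagged
    print(flags("Everything feels pointless; I can't see a way out"))  # False: distress missed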

The Information Commissioner’s Office is said to be looking into the issues raised by Samaritans’ app. It may be that only legal enforcement action will actually get it removed – as I think it should be. But it would be extremely sad if it came to that. It should be removed voluntarily by Samaritans, so they can rethink, re-programme, take full legal advice, but – most importantly – listen to the voices of the most vulnerable, who feel so threatened and betrayed by the app.

[1] On a strict and nuanced analysis of data protection law, users of the app probably are data controllers, acting jointly with Samaritans. However, given the regulatory approach of the Information Commissioner, they would probably be able to avail themselves of the general exemption from all of the DPA for processing which is purely domestic (although even that is arguably wrong). These are matters for another blog post, however; the fact that users might be held to be data controllers doesn’t alter the fact that Samaritans are, and in a much clearer way.


Filed under consent, Data Protection, Information Commissioner, Privacy, social media

Samaritans Radar – serious privacy concerns raised

UPDATE: 31 October

It appears Samaritans have silently tweaked their FAQs (so the text near the foot of this post no longer appears). They now say tweets will only be retained by the app for seven (as opposed to thirty) days, and have removed the words saying the app will retain a “Count of flags against a Twitter Users Friends ID”. Joe Ferns said on Twitter that the inclusion of this in the original FAQs was “a throw back to a stage of the development where that was being considered”. Samaritans also say “The only people who will be able to see the alerts, and the tweets flagged in them, are followers who would have received these Tweets in their current feed already”, but this does not absolve them of their data controller status: a controller does not need to access data in order to determine the purposes for which and the manner in which personal data are processed, and they are still doing this. Moreover, this changing of the FAQs, with no apparent change to the position that those whose tweets are processed get no fair processing notice whatsoever, makes me more concerned that this app has been released without adequate assessment of its impact on people’s privacy.

END UPDATE

UPDATE: 30 October

Susan Hall has written a brilliant piece expanding on mine below, and she points out that section 12 of the Data Protection Act 1998 in terms allows a data subject to send a notice to a data controller requiring it to ensure no automated decisions are taken by processing their personal data for the purposes of evaluating matters such as their conduct. It seems to me that is precisely what “Samaritans Radar” does. So I’ve sent the following to Samaritans

Dear Samaritans

This is a notice pursuant to section 12 Data Protection Act 1998. Please ensure that no decision is taken by you or on your behalf (for instance by the “Samaritans Radar” app) based solely on the processing by automatic means of my personal data for the purpose of evaluating my conduct.

Thanks, Jon Baines @bainesy1969

I’ll post here about any developments.

END UPDATE

Samaritans have launched a Twitter App “to help identify vulnerable people”. I have only ever had words of praise and awe about Samaritans and their volunteers, but this time I think they may have misjudged the effect, and the potential legal implications of “Samaritans Radar”. Regarding the effect, this post from former volunteer @elphiemcdork is excellent:

How likely are you to tweet about your mental health problems if you know some of your followers would be alerted every time you did? Do you know all your followers? Personally? Are they all friends? What if your stalker was a follower? How would you feel knowing your every 3am mental health crisis tweet was being flagged to people who really don’t have your best interests at heart, to put it mildly? In this respect, this app is dangerous. It is terrifying to think that anyone can monitor your tweets, especially the ones that disclose you may be very vulnerable at that time

As for the legal implications, it appears that Samaritans may be processing sensitive personal data in circumstances where there may not be a legal basis to do so. And some rather worrying misconceptions have accompanied the app launch. The first and most concerning of these is in the FAQs prepared for the media. In reply to the question “Isn’t there a data privacy issue here? Is Samaritans Radar spying on people?” the following answer is given

All the data used in the app is public, so user privacy is not an issue. Samaritans Radar analyses the Tweets of the people you follow, which are public Tweets. It does not look at private Tweets

The idea that, because something is in the public domain, it cannot engage privacy issues is a horribly simplistic one, and if that constitutes the impact assessment undertaken, then serious questions have to be asked. Moreover, it doesn’t begin to consider the data protection considerations: personal data is personal data, whether it’s in the public domain or not. A tweet from an identified tweeter is inescapably the personal data of that person, and, if it is, or appears to be, about the person’s physical or mental health, then it is sensitive personal data, afforded a higher level of protection under the Data Protection Act 1998 (DPA). It would appear that Samaritans, as the legal person who determines the purposes for which, and the manner in which, the personal data are processed (i.e. they have produced an app which identifies a tweet on the basis of words, or sequences of words, and pushes it to another person), are acting as a data controller. As such, any processing has to be in accordance with their obligation to abide by the data protection principles in Schedule One of the DPA.

The first principle says that personal data must be processed fairly and lawfully, and that a condition for processing contained in Schedule Two (and, for sensitive personal data, Schedule Three as well) must be met. Looking only at Schedule Three, I struggle to see the condition which permits the app to identify a tweet, decide that it is from a potentially suicidal person and send it as such to a third party. The one condition which might apply, the fifth (“The information contained in the personal data has been made public as a result of steps deliberately taken by the data subject”), is undercut by the fact that the data in question is not just the public tweet, but the “package” of that tweet with the fact that the app (not the tweeter) has identified it as a potential call for help.

The reliance on “all the data used in the app is public, so user privacy is not an issue” has carried through in messages sent on Twitter by Samaritans’ Director of Policy, Research and Development, Joe Ferns, in response to people raising concerns, such as

existing Twitter search means anyone can search tweets unless you have set to private. #SamaritansRadar is like an automated search

Again, this misses the point that it is not just “anyone” doing a search on Twitter, it is an app in Samaritans’ name which specifically identifies (in an automated way) certain tweets as of concern, and pushes them to third parties. Even more concerning was Mr Ferns’ response to someone asking if there was a way to opt out of having their tweets scanned by the app software:

if you use Twitter settings to mark your tweets private #SamaritansRadar will not see them

What he is actually suggesting there is that to avoid what some people clearly feel are intrusive actions they should lock their account and make it private. And, of course, going back to @elphiemcdork’s points, it is hard to avoid the conclusion that those who will do this might be some of the most vulnerable people.

A further concern is raised (one which confirms the data controller point above) about retention and reuse of data. The media FAQ states

Where will all the data be stored? Will it be secure? The data we will store is as follows:
• Twitter User ID – a unique ID that is associated with a Twitter account
• All Twitter User Friends ID’s – The same as above but for all the users friends that they follow
• Any flagged Tweets – This is the data associated with the Tweet, we will store the raw data for the Tweet as well
• Count of flags against a Twitter Users Friends ID – We store a count of flags against an individual User
• To prevent the Database growing exponentially we will remove flagged Tweets that are older than 30 days.

So it appears that Samaritans will be amassing data on unwitting Twitter users, and in effect profiling them. This sort of data is terrifically sensitive, yet no indication is given of where it will be stored, or of the security measures in place to protect it.
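Taking the FAQ’s wording at face value, the store it describes might be organised something like the following – a speculative reconstruction, with field names of my own devising:

    # Speculative reconstruction from the FAQ's wording alone; the field
    # names are mine and the real schema is unpublished.
    from dataclasses import dataclass, field

    @dataclass
    class RadarRecord:
        # "Twitter User ID": the app user who signed up
        app_user_id: int
        # "All Twitter User Friends ID's": everyone that user follows
        friend_ids: list[int] = field(default_factory=list)
        # "Any flagged Tweets": raw tweet data, keyed by the friend who wrote it
        # (per the FAQ, purged only after 30 days)
        flagged_tweets: dict[int, list[str]] = field(default_factory=dict)
        # "Count of flags against a Twitter Users Friends ID": a running tally
        # per followed account -- in effect a profile of people who never
        # installed, and may never have heard of, the app
        flag_counts: dict[int, int] = field(default_factory=dict)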

The Information Commissioner’s Office recently produced some good guidance for app developers on Privacy in Mobile Apps. The guidance commends the use of Privacy Impact Assessments when developing apps. I would be interested to know if one was undertaken for Samaritans Radar, and, if so, how it dealt with the serious concerns that have been raised by many people since its launch.

This post was amended to take into account the observations in the comments by Susan Hall, to whom I give thanks. I have also since seen a number of excellent blog posts dealing with wider concerns. I commend, in particular, this by Adrian Short and this by @latentexistence


Filed under consent, Data Protection, Information Commissioner, Privacy, social media

Brooks Newmark, the press, and “the other woman”

UPDATE: 30.09.14 Sunday Mirror editor Lloyd Embley is reported by the BBC and other media outlets to have apologised for the use of women’s photos (it transpires that two women’s images were appropriated), saying

We thought that pictures used by the investigation were posed by models, but we now know that some real pictures were used. At no point has the Sunday Mirror published any of these images, but we would like to apologise to the women involved for their use in the investigation

What I think is interesting here is the implicit admission that (consenting) models could have been used in the fake profiles. Does this mean, therefore, that the processing of the (non-consenting) women’s personal data was not done in the reasonable belief that it was in the public interest?

Finally, I think it’s pretty shoddy that former Culture Secretary Maria Miller resorts to victim-blaming, and misses the point, when she is reported to have said that the story “showed why people had to be very careful about the sorts of images they took of themselves and put on the internet”

END UPDATE.

With most sex scandals involving politicians, there is “the other person”. For every Profumo, a Keeler;  for every Mellor, a de Sancha; for every Clinton, a Lewinsky. More often than not the rights and dignity of these others are trampled in the rush to revel in outrage at the politicians’ behaviour. But in the latest, rather tedious, such scandal, the person whose rights have been trampled was not even “the other person”, because there was no other person. Rather, it was a Swedish woman* whose image was appropriated by a journalist without her permission or even her knowledge. This raises the question of whether such use, by the journalist, and the Sunday Mirror, which ran the exposé, was in accordance with their obligations under data protection and other privacy laws.

The story run by the Sunday Mirror told of how a freelance journalist set up a fake social media profile, purportedly of a young PR girl called Sophie with a rather implausible interest in middle-aged Tory MPs. He apparently managed to snare the Minister for Civil Society and married father of five, Brooks Newmark, and encourage him into sending explicit photographs of himself. The result was that the newspaper got a lurid scoop, and the Minister subsequently resigned. Questions are being asked about the ethics of the journalism involved, and there are suggestions that this could be the first difficult test for IPSO, the new Independent Press Standards Organisation.

But for me much the most unpleasant part of this unpleasant story was that the journalist appears to have decided to attach to the fake twitter profile the image of a Swedish woman. It’s not clear where he got this from, but it is understood that the same image had apparently already appeared on several fake Facebook accounts (it is not suggested, I think, that the same journalist was responsible for those accounts). The woman is reported to be distressed at the appropriation:

It feels really unpleasant…I have received lot of emails, text messages and phone calls from various countries on this today. It feels unreal…I do not want to be exploited in this way and someone has used my image like this feels really awful, both for me and the others involved in this. [Google translation of original Swedish]

Under European and domestic law the image of an identifiable individual is their personal data. Anyone “processing” such data as a data controller (“the person who (either alone or jointly or in common with other persons) determines the purposes for which and the manner in which any personal data are, or are to be, processed”) has to do so in accordance with the law. Such processing as happened here, both by the freelance journalist, when setting up and operating the social media account(s), and by the Sunday Mirror, in publishing the story, is covered by the UK Data Protection Act 1998 (DPA). This will be the case even though the person whose image was appropriated is in Sweden. The DPA requires, among other things, that processing of personal data be “fair and lawful”. It affords aggrieved individuals the right to bring civil claims for compensation for damage and distress arising from contraventions of data controllers’ obligations under the DPA. It also affords them the right to ask the Information Commissioner’s Office (ICO) for an assessment of the likelihood (or not) that processing was in compliance with the DPA.

However, section 32 of the DPA also gives journalism a very broad exemption from almost all of the Act, if the processing is undertaken with a view to publication, and the data controller reasonably believes that publication would be in the public interest and that compliance with the DPA would be incompatible with the purposes of journalism. As the ICO says

The scope of the exemption is very broad. It can disapply almost all of the DPA’s provisions, and gives the media a significant leeway to decide for themselves what is in the public interest

The two data controllers here (the freelancer and the paper) would presumably have little problem satisfying a court, or the ICO, that when it came to processing of Brooks Newmark’s personal data, they acted in the reasonable belief that the public interest justified the processing. But one wonders to what extent they even considered the processing of (and associated intrusion into the private life of) the Swedish woman whose image was appropriated. Supposing they didn’t even consider this processing – could they reasonably say that they reasonably believed it to have been in the public interest?

These are complex questions, and the breadth and ambit of the section 32 exemption are likely to be tested in litigation between the mining and minerals company BSG Resources and the campaigning group Global Witness (currently stalled/being considered at the ICO). But even if a claim or complaint under the DPA would be a tricky one to make, there are other legal issues raised. Perhaps in part because of the breadth of the section 32 DPA exemption (and perhaps because of the low chance of significant damages under the DPA), claims of press intrusion into private lives are more commonly brought under the cause of action of “misuse of private information”, confirmed – it would seem – as a tort in the ruling of Mr Justice Tugendhat in Vidal Hall and Ors v Google Inc [2014] EWHC 13 (QB) earlier this year. Damages awards for successful claims in misuse of private information have been known to be in the tens of thousands of pounds – most notably recently an award of £10,000 for Paul Weller’s children, after photographs taken covertly and without consent had been published in the Mail Online.

IPSO expects journalists to abide by the Editors’ Code, Clause 3 of which says

i) Everyone is entitled to respect for his or her private and family life, home, health and correspondence, including digital communications.

ii) Editors will be expected to justify intrusions into any individual’s private life without consent. Account will be taken of the complainant’s own public disclosures of information

and the ICO will take this Code into account when considering complaints about journalistic processing of personal data. One notes that “account will be taken of the complainant’s own public disclosures of information”, but one hopes that this would not be seen to justify the unfair and unethical appropriation of images found elsewhere on the internet.

*I’ve deliberately, although rather pointlessly – given their proliferation in other media – avoided naming the woman in question, or posting her photograph


Filed under Confidentiality, consent, Data Protection, Information Commissioner, journalism, Privacy, social media

Dancing to the beat of the Google drum

With rather wearying predictability, certain parts of the media are in uproar about the removal by Google of search results linking to a positive article about a young artist. Roy Greenslade, in the Guardian, writes

The Worcester News has been the victim of one of the more bizarre examples of the European court’s so-called “right to be forgotten” ruling.

The paper was told by Google that it was removing from its search archive an article in praise of a young artist.

Yes, you read that correctly. A positive story published five years ago about Dan Roach, who was then on the verge of gaining a degree in fine art, had to be taken down.

Although no one knows who made the request to Google, it is presumed to be the artist himself, as he had previously asked the paper itself to remove the piece, on the basis that he felt it didn’t reflect the work he is producing now. But there is a bigger story here, and in my opinion it’s one of Google selling itself as an unwilling censor, and of the media uncritically buying it.

Firstly, Google had no obligation to remove the results. The judgment of the Court of Justice of the European Union (CJEU) in the Google Spain case was controversial, and problematic, but its effect was certainly not to oblige a search engine to respond to a takedown request without considering whether it has a legal obligation to do so. What it did say was that, although as a rule data subjects’ rights to removal override the interest of the general public having access to the information delivered by a search query, there may be particular reasons why the balance might go the other way.

Furthermore, even if the artist here had a legitimate complaint that the results constituted his personal data, and that the continued processing by Google was inadequate, inaccurate, excessive or continuing for longer than was necessary (none of which, I would submit, would actually be likely to apply in this case), Google could simply refuse to comply with the takedown request. At that point, the requester would be left with two options: sue, or complain to the Information Commissioner’s Office (ICO). The former option is an interesting one (and I wonder if any such small claims cases will be brought in the County Court) but I think in the majority of cases people will be likely to take the latter. However, if the ICO receives a complaint, it appears that the first thing it is likely to do is refer the person to the publisher of the information in question. In a blog post in August the Deputy Commissioner David Smith said

We’re about to update our website* with advice on when an individual should complain to us, what they need to tell us and how, in some cases, they might be better off pursuing their complaint with the original publisher and not just the search engine [emphasis added]

This is in line with their new approach to handling complaints by data subjects – which is effectively telling them to go off and resolve it with the data controller in the first place.

Even if the complaint does make its way to an ICO case officer, what that officer will be doing is assessing – pursuant to section 42 of the Data Protection Act 1998 (DPA) – “whether it is likely or unlikely that the processing has been or is being carried out in compliance with the provisions of [the DPA]”. What the ICO is not doing is determining an appeal. An assessment of “compliance not likely” is no more than that – it does not oblige the data controller to take action (although it may be accompanied by recommendations). An assessment of “compliance likely”, moreover, leaves an aggrieved data subject with no other option but to attempt to sue the data controller. Contrary to what Information Commissioner Christopher Graham said at the recent Rewriting History debate, there is no right of appeal to the Information Tribunal in these circumstances.

Of course the ICO could, in addition to making a “compliance not likely” assessment, serve Google with an enforcement notice under section 40 DPA requiring them to remove the results. An enforcement notice does have proper legal force, and it is a criminal offence not to comply with one. But they are rare creatures. If the ICO does ever serve one on Google things will get interesting, but let’s not hold our breath.

So, simply refusing to take down the results would, certainly in the short term, cause Google no trouble, nor attract any sanction.

Secondly (sorry, that was a long “firstly”) Google appear to have notified the paper of the takedown, in the same way they notified various journalists of takedowns of their pieces back in June this year (with, again, the predictable result that the journalists were outraged, and republicised the apparently taken down information). The ICO has identified that this practice by Google may in itself constitute unfair and unlawful processing: David Smith says

We can certainly see an argument for informing publishers that a link to their content has been taken down. However, in some cases, informing the publisher has led to the complained about information being republished, while in other cases results that are taken down will link to content that is far from legitimate – for example to hate sites of various sorts. In cases like that we can see why informing the content publisher could exacerbate an already difficult situation and could in itself have a very detrimental effect on the complainant’s privacy

Google is a huge and hugely rich organisation. It appears to be trying to chip away at the CJEU judgment by making it look ridiculous. And in doing so it is cleverly using the media to help portray it as a passive actor – victim, along with the media, of censorship. As I’ve written previously, Google is anything but passive – it has algorithms which prioritise certain results above others, for commercial reasons, and it will readily remove search results upon receipt of claims that the links are to copyright material. Those elements of the media who are expressing outrage at the spurious removal of links might take a moment to reflect whether Google is really as interested in freedom of expression as they are, and, if not, why it is acting as it is.

*At the time of writing this advice does not appear to have been made available on the ICO website.


Filed under Data Protection, Directive 95/46/EC, enforcement, Information Commissioner, Privacy

Political attitudes to ePrivacy – this goes deep

With the rushing through of privacy-intrusive legislation under highly questionable procedures, it almost seems wrong to bang on about political parties and their approach to ePrivacy and marketing, but a) much better people have written on the #DRIP bill, and b) I think the two issues are not entirely unrelated.

Last week I was taking issue with Labour’s social media campaign which invited people to submit their email address to get a number relating to when they were born under the NHS.

Today, prompted by a twitter exchange with the excellent Lib Dem councillor James Baker, in which I observed that politicians and political parties seem to be exploiting people’s interest in discrete policy issues to harvest emails, I looked at the Liberal Democrats’ home page. It really couldn’t have illustrated my point any better. People are invited to “agree” that they’re against female genital mutilation, by submitting their email address.

[Screenshot: email sign-up invitation on the Liberal Democrats’ home page]

There’s no information whatsoever about what will happen to your email address once you submit it. So, just as Labour were, but even more clearly here, the Lib Dems are in breach of The Privacy and Electronic Communications (EC Directive) Regulations 2003 and the Data Protection Act 1998. James says he’ll contact HQ to make them aware. But how on earth are they not already aware? The specific laws have been in place for eleven years, but the principles are much older – be fair and transparent with people’s private information. And it is not fair (in fact it’s pretty damn reprehensible) to use such a bleakly emotive subject as FGM to harvest emails (which is unavoidably the conclusion I arrive at when wondering what the purpose of the page is).

So, in the space of a few months I’ve written about the Conservatives, Labour and the Lib Dems breaching eprivacy laws. If they’re unconcerned about or – to be overly charitable – ignorant of these laws, then is it any wonder that they railroad each other into passing “emergency” laws (which are anything but) with huge implications for our privacy?

UPDATE: 13.07.14

Alistair Sloan draws attention to the Scottish National Party’s website, which is similarly harvesting emails with no adequate notification of the purposes of future use. The practice is rife, and, as Tim Turner says in the comments below, the Information Commissioner’s Office needs to take action.

[Screenshot: email sign-up on the Scottish National Party’s website]


Filed under consent, Data Protection, PECR, Privacy, transparency

Privacy issues with Labour Party website

Two days ago I wrote about a page on the Labour Party website which was getting considerable social media coverage. It encourages people to submit their date of birth to find out approximately what number they were of all the births under the NHS.

I was concerned that it was grabbing email addresses without an opt-out option. Since then, I’ve been making a nuisance of myself asking, via Twitter, various Labour politicians and activists for their comments. I know I’m an unimportant blogger, and it was the weekend, but only one chose to reply: councillor for Lewisham Mike Harris, who, as campaign director for DontSpyOnUs, I would expect to be concerned, and, indeed, to his credit, he said “You make a fair point, there should be the ability to opt out”. Mike suggested I email Labour’s compliance team.

In the interim I’d noticed that elsewhere on the Labour website there were other examples of email addresses being grabbed in circumstances where people would not be sure about the collection. For instance: this “calculator”, which purports to calculate how much less people would pay for energy bills under Labour, gives no privacy notice whatsoever. Or even this, on the home page, which similarly gives no information about what will happen with your data

[Screenshot: sign-up form on the Labour Party home page]

Now, some might say that, if you’re giving your details to “get involved”, then you are consenting to further contact. This is probably true, but it doesn’t mean the practice is properly compliant with data collection laws. And this is not unimportant; as well as potentially contributing to the global spam problem, poor privacy notices/lack of opt-out facilities at the point of collection of email address contribute to the unnecessary amassing of private information, and when it is done by a political party, this can even be dangerous. It should not need pointing out that, historically, and elsewhere in the world, political party lists have often been used by opposition parties and repressive governments to target and oppress activists. Indeed, the presence of one’s email on a party marketing database might well constitute sensitive personal data – as it can be construed as information on one’s political opinions (per section 2 of the Data Protection Act 1998).

So, these are not unimportant issues, and I decided to follow Mike Harris’s suggestion to email Labour’s compliance unit. However, the contact details I found on the overarching privacy policy merely gave a postal address. I did notice though that that page said

If you have any questions about our privacy policy, the information we have collected from you online, the practices of this site or your interaction with this website, please contact us by clicking here

But if I follow the “clicking here” link, it takes me to – wait for it – a contact form which gives no information whatsoever about what will happen if I submit it, other than the rather stalinesque

The Labour Party may contact you using the information you supply

And returning to the overarching privacy policy didn’t assist here – none of the categories on that page fitted the circumstances of someone contacting the party to make a general enquiry.

I see that the mainstream media have been covering the NHS birth page which originally prompted me to look at this issue. Some, like the Metro, and unsurprisingly, the Mirror, are wholly uncritical. The Independent does note that it is a clever way of harvesting emails, but fails to note the questionable legality of the practice. Given that this means that more and more email addresses will be hoovered up, without people fully understanding why, and what will happen with them, I really think that senior party figures, and the Information Commissioner, should start looking at Labour’s online privacy activities.

(By the way, if anyone thinks this is a politically-motivated post by me, I would point out that, until 2010, when I voted tactically (never again), I had only ever voted for one party in my whole life, and that wasn’t the Conservatives or the Lib Dems.)


Filed under Data Protection, Information Commissioner, marketing, PECR, Privacy, privacy notice, social media, tracking

The Partridge Review reveals apparently huge data protection breaches

Does the Partridge Review of NHS transfers of hospital episode patient data point towards one of the biggest DPA breaches ever?

In February this year Tim Kelsey, NHS England’s National Director for Patients and Information, and vocal cheerleader for the care.data initiative, assured the public, in an interview on the Radio 4 Today programme, that in the twenty-five years that Hospital Episode Statistics (HES) have been shared with other organisations

the management of the hospital episode database…there has never been a single example of that data being compromised, the privacy of patients being compromised…

When pressed by medConfidential‘s Phil Booth about this, and about risks of reidentification from the datasets, Tim repeated that no patient’s privacy had been compromised.

Some of us doubted this, as news of specific incidents of data loss emerged, and even more so as further news emerged suggesting that there had been transfers (a.k.a. sale) of huge amounts of potentially identifiable patient data to, for instance, the Institute and Faculty of Actuaries. The latter news led me to ask the Information Commissioner’s Office (ICO) to assess the lawfulness of this processing, an assessment which has not been completed four months later.

However, with the publication on 17 June of Sir Nick Partridge’s Review of Data Releases by the NHS Information Centre one questions the basis for Tim’s assertions. Sir Nick commissioned PwC to analyse a total of 3,059 data releases between 2005 and 2013 (when the NHS Information Centre (NHSIC) ceased to exist and was replaced by the Health and Social Care Information Centre (HSCIC)). The summary report to the Review says that

It disappoints me to report that the review has discovered lapses in the strict arrangements that were supposed to be in place to ensure that people’s personal data would never be used improperly

and it reveals a series of concerning and serious failures of data governance, including

  • lack of detailed records between 1 April 2005 and 31 March 2009
  • two cases of data that was apparently released without a proper record remaining of which organisation received the data
  • [no] evidence that Northgate [the NHSIC contractor responsible for releases] got permission from the NHS IC before making releases as it was supposed to do
  • PwC could not find records to confirm full compliance in about 10% of the sample

 Sir Nick observes that

 the system did not have the checks and balances needed to ensure that the appropriate authority was always in place before data was released. In many cases the decision making process was unclear and the records of decisions are incomplete.

and crucially

It also seems clear that the responsibilities of becoming a data controller, something that happens as soon as an organisation receives data under a data sharing agreement, were not always clear to those who received data. The importance of data controllers understanding their responsibilities remains vital to the protection of people’s confidentiality

(This resonates with my concern, in my request to the ICO to assess the transfer of data from HES to the actuarial society, about what the legal basis was for the latter’s processing).

Notably, Sir Nick dispenses with the idea that data such as HES was anonymised:

The data provided to these other organisations under data sharing agreements is not anonymised. Although names and addresses are normally removed, it is possible that the identity of individuals may be deduced if the data is linked to other data

 And if it was not anonymised, then the Data Protection Act 1998 (DPA) is engaged.

All of this indicates a failure to ensure that “appropriate technical and organisational measures shall be taken against unauthorised or unlawful processing of personal data” – a requirement the perspicacious among you will identify as one of the key statutory obligations placed on data controllers by the seventh data protection principle in the DPA.

Sir Nick may say

 It is a matter of fact that no individual ever complained that their confidentiality had been breached as a result of data being shared or lost by the NHS IC

but simply because no complaint was made (at the time – complaints certainly have been made since concerns started to be raised) does not mean that the seventh principle was not contravened, in a serious way.  And a serious contravention of the DPA of a kind likely to cause substantial damage or substantial distress can potentially lead to the ICO serving a monetary penalty notice (MPN) to a maximum of £500,000 (at least for contraventions after April 2010, when the ICO’s powers commenced).

The NHSIC is no more (although, as Sir Nick says, HSCIC “inherited many of the NHS IC’s staff and procedures”). But that has not stopped the ICO serving MPNs on successor organisations in circumstances where their predecessors committed the contravention. One waits with interest to see whether the ICO will take any enforcement action, but I think it’s important that they consider doing so, because, even though Sir Nick makes nine very sensible recommendations to HSCIC, one could be forgiven – having been given clear assurances previously, by the likes of Tim Kelsey and others – for having reservations as to the future governance of our confidential medical data. I would suggest it is imperative that HSCIC know that their processing of personal data is now subject to close oversight by all relevant regulatory bodies.


Filed under care.data, Confidentiality, Data Protection, data sharing, Information Commissioner, monetary penalty notice, NHS, Privacy

Right now, you are being monitored

This morning, as I was leaving the house for work, I wanted to check the weather forecast so started tapping and swiping away at my newish iPhone to find the weather screen. I was startled to see some text appear which said

Right now, it would take you about 11 minutes to drive to [workplace address]

(It looked a bit like this – not my phone, I stress.)

It was correct: it would indeed take me about that long to drive to work at that time, but I was genuinely taken aback. After a bit of research I see that this was a new feature in iOS 7 (and, indeed, the weather widget was lost at the same time). Sure enough, I find that my new phone has been logging frequently visited locations, and must also have been logging the fact that I travel between A (home) and B (work) frequently. It is described by Apple as being a way to

Allow your iPhone to learn places you frequently visit in order to provide useful location-related information

I’m not going to argue whether this is a useful service or not, or even whether on general principles it is concerning or not. What I am going to say is that, because I’ve not had much time recently to sit down and learn about my new phone, to customise it in the most privacy-friendly way, I’ve been saddled with a default setting which has captured an extraordinarily accurate dataset about my travel habits without my knowledge. And yes, I know that tracking is a prerequisite of mobile phone functionality, but I would just rather it was, by default, limited to the bare minimum.
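Apple has not published how the feature works, but even a naive sketch – with invented coordinates – shows how little processing is needed to turn a log of location fixes into a commute profile:

    # Naive stand-in -- Apple has not published its algorithm, and these
    # coordinates are invented.
    from collections import Counter

    def frequent_places(fixes: list[tuple[float, float]], top: int = 2):
        """Round each fix to a ~100m grid cell and count visits; the busiest
        cells fall out as 'frequent locations' -- typically home and work."""
        cells = Counter((round(lat, 3), round(lon, 3)) for lat, lon in fixes)
        return cells.most_common(top)

    fixes = [(52.1951, 0.1313)] * 40 + [(52.2053, 0.1218)] * 35 + [(52.2100, 0.0900)]
    print(frequent_places(fixes))
    # [((52.195, 0.131), 40), ((52.205, 0.122), 35)] -- and with timestamps on
    # the same fixes, travel times between the two would follow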

p.s. to turn off this default setting, navigate to Settings/Privacy/Location Services [scroll to very bottom]/System Services/Frequent Locations and set to “off”


Filed under Data Protection, interception, Privacy, surveillance, tracking

Data Protection for Baddies

Should Chris Packham’s admirable attempts to expose the cruelties of hunting in Malta be restrained by data protection law? And who is protected by the data protection exemption for journalism?

I tend sometimes to lack conviction, but one thing I am pretty clear about is that I am not on the side of people who indiscriminately shoot millions of birds, and whose spokesman tries to attack someone by mocking their well-documented mental health problems. So, when I hear that the FKNK, the Maltese “Federation for Hunting and Conservation”, has

presented a judicial protest against the [Maltese] Commissioner of Police and the Commissioner for Data Protection, for allegedly not intervening in “contemplated” or possible breaches of privacy rules

with the claim being that they have failed to take action to prevent

BBC Springwatch presenter Chris Packham [from] violating hunters’ privacy by “planning to enter hunters’ private property” and by posting his video documentary on YouTube, which would involve filming them without their consent

My first thought is that this is an outrageous attempt to manipulate European privacy and data protection laws to try to prevent legitimate scrutiny of activities which sections of society find offensive and unacceptable. It’s my first thought, and my lasting one, but it does throw some interesting light on how such laws can potentially be used to advance or support causes which might not be morally or ethically attractive. (Thus it was that, in 2009, a former BNP member was prosecuted under section 55 of the UK Data Protection Act 1998 (DPA 1998) for publishing a list of party members on the internet. Those members, however reprehensible their views or actions, had had their sensitive personal data unlawfully processed, and attracted the protection of the DPA (although the derisory £200 fine the offender received barely served as a deterrent)).

I do not profess to be an expert in Maltese data protection law but, as a member state of the European Union, Malta was obliged to implement Directive 95/46/EC on the Protection of Individuals with regard to the Processing of Personal Data (which it did in its Data Protection Act of 2001). The Directive is the bedrock of all European data protection law, generally containing minimum standards which member states must implement in domestic law, but often allowing them to legislate beyond those minimum standards.

It may well be that the activities of Chris Packham et al do engage Maltese data protection law. In fact, if, for instance, film footage or other information which identifies individuals is recorded and broadcast in other countries in the European Union, it would be likely to constitute an act of “processing” under Article 2(b) of the Directive which would engage data protection law in whichever member state it was processed.

Data protection law at European level has a scope whose potential breadth has been described as “breath-taking”. “Personal data” is “any information relating to an identified or identifiable natural person” (that is “one who can be identified, directly or indirectly, in particular by reference to an identification number or to one or more factors specific to his physical, physiological, mental, economic, cultural or social identity”), and “processing” encompasses “any operation or set of operations which is performed upon personal data, whether or not by automatic means, such as collection, recording, organization, storage, adaptation or alteration, retrieval, consultation, use, disclosure by transmission, dissemination or otherwise making available, alignment or combination, blocking, erasure or destruction”.

However, the broad scope does not necessarily mean broad prohibitions on activities involving processing. Personal data must be processed “fairly and lawfully”, and can (broadly) be processed without the data subject’s consent in circumstances where there is a legal obligation to do so, where it is necessary in the public interest, or where the legitimate interests of the person processing it, or of a third party, outweigh the fundamental rights and freedoms of the data subject. These legitimising conditions are implemented in the Maltese Data Protection Act 2001 (at section 9), so it can be seen that the FKNK’s claim that Packham requires the hunters’ consent to film might not have legs.

Moreover, Article 9 of the Directive, transposed in part at section 6 of the 2001 Maltese Act, provides for an exemption from most of the general data protection obligations where the processing is for journalistic purposes, which would almost certainly be engaged by Packham’s activities. Whether any other Maltese laws might apply is, I’m afraid, well outside my area of knowledge.

But what about activists who might not normally operate under the banner of “journalism”? What if Packham were, rather than a BBC journalist/presenter, “only” a naturalist? Would he be able to claim the journalistic data protection exemption?

Some of these sorts of issues are currently edging towards trial in litigation brought in the UK, under the DPA 1998, by a mining corporation (or, in its own words, a “diversified natural resources business”), BSG Resources, against Global Witness, an NGO one of whose stated goals is to “expose the corrupt exploitation of natural resources and international trade systems”. BSGR’s claims are several, but are all made under the DPA 1998, and derive from the fact they have sought to make subject access requests to Global Witness to know what personal data of the BSGR claimants is being processed, for what purposes and to whom it is being or may be disclosed. Notably, BSGR have chosen to upload their grounds of claim for all to see. For more background on this see the ever-excellent Panopticon blog, and this article in The Economist.

This strikes me as a potentially hugely significant case, firstly because it illustrates how data protection is increasingly being used to litigate matters more traditionally seen as being in the area of defamation law, or the tort of misuse of private information, but secondly because it goes to the heart of questions about what journalism is, who journalists are and what legal protection (and obligations) those who don’t fit the traditional model/definition of journalism have or can claim.

I plan to blog in more detail on this case in due course, but for the time being I want to make an observation. Those who know me will not have too much trouble guessing on whose side my sympathies would tend to fall in the BSGR/Global Witness litigation, but I am not so sure how I would feel about extending journalism privileges to, say, an extremist group who were researching the activities of their opponents with a view to publishing those opponents’ (sensitive) personal data on the internet. If society wishes to extend the scope of protection traditionally afforded to journalists to political activists, or citizen bloggers, or tweeters, it needs to be very careful that it understands the implications of doing so. Freedom of expression and privacy rights coexist in a complex relationship, which ideally should be an evenly balanced one. Restricting the scope of data protection law, by extending the scope of the exemption for journalistic activities, could upset that balance.


Filed under Data Protection, Europe, human rights, journalism, Privacy, Uncategorized