Tag Archives: DPA

Why what Which did wears my patience thin

Pre-ticked consent boxes and unsolicited emails from the Consumers’ Association

Which?, the brand name of the Consumers’ Association, publishes a monthly magazine. In an era of social media and online reviews, its mix of consumer news and product ratings might seem rather old-fashioned, but it is still (according to its own figures [1]) Britain’s best-selling monthly magazine. Its rigidly paywalled website means that one must generally subscribe to get at the magazine’s contents. That’s fair enough (although after my grandmother died several years ago, we found piles of unread, even unopened, copies of Which? She had apparently signed up to a regular Direct Debit payment, probably to receive a “free gift”, and had never cancelled it: so one might draw one’s own conclusion about how many of Which?’s readers are regular subscribers for similar reasons).

In line with its general “locked-down” approach, Which?’s recent report into the sale of personal data was, except for snippets, not easy to access, but it got a fair bit of media coverage. Intrigued, I bit: I subscribed to the magazine. This post is not about the report, however, although the contents of the report drive the irony of what happened next.

As I went through the online sign-up process, I arrived at that familiar type of page where the subject of future marketing is broached. Which? had headlined their report “How your data could end up in the hands of scammers”, so it struck me as amusing, but also irritating, that the marketing options section of the sign-up process came with a pre-ticked box:

[Image: screenshot of the Which? sign-up form, showing the pre-ticked marketing consent box]

As guidance from the Information Commissioner’s Office makes clear, pre-ticked boxes are not a good way to get consent from someone to future marketing:

Some organisations provide pre-ticked opt-in boxes, and rely on the user to untick it if they don’t want to consent. In effect, this is more like an opt-out box, as it assumes consent unless the user clicks the box. A pre-ticked box will not automatically be enough to demonstrate consent, as it will be harder to show that the presence of the tick represents a positive, informed choice by the user.

The Article 29 Working Party goes further, saying in its opinion on unsolicited communications for marketing purposes that inferring consent to marketing from the use of pre-ticked boxes is not compatible with the data protection directive. By extension, therefore, any marketing subsequently sent on the basis of a pre-ticked box will be a contravention of the data protection directive (and, in the UK, the Data Protection Act 1998) and the ePrivacy directive (in the UK, the Privacy and Electronic Communications (EC Directive) Regulations 2003 (PECR)).
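To make the distinction concrete, here is a minimal sketch (in Python, with invented field and function names; an illustration of the principle, not Which?’s actual sign-up code) of a handler which treats marketing consent as valid only where the box was presented unticked and the user positively ticked it:

```python
from dataclasses import dataclass


@dataclass
class SignupForm:
    email: str
    marketing_box_default: bool    # True if the box was pre-ticked when rendered
    marketing_box_submitted: bool  # state of the box when the form came back


def has_valid_marketing_consent(form: SignupForm) -> bool:
    """Consent must be a positive, informed choice: a box that was
    pre-ticked proves nothing, whatever state it comes back in."""
    if form.marketing_box_default:
        return False  # effectively an opt-out box; treat as no consent
    return form.marketing_box_submitted


# A pre-ticked box left ticked does not demonstrate consent:
assert not has_valid_marketing_consent(
    SignupForm("a@example.com", marketing_box_default=True, marketing_box_submitted=True))
# An unticked box which the user positively ticked does:
assert has_valid_marketing_consent(
    SignupForm("a@example.com", marketing_box_default=False, marketing_box_submitted=True))
```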

Notwithstanding all this, I certainly did not want to consent to receive subsequent marketing, so, as well as making a smart-arse tweet, I unticked the box. However, to my consternation, if not my huge surprise, I have subsequently received several marketing emails from Which? They do not have my consent to send these, so they are manifestly in contravention of regulation 22 of PECR.

It’s not clear how this has happened. Could it be a deliberate tactic by Which? to ignore subscribers’ wishes? One presumes not: Which? says it “exists to make individuals as powerful as the organisations they deal with in their daily lives” – deliberately ignoring clear expressions regarding consent would hardly sit well with that mission statement. So is it a general website glitch – which means that those expressions are lost in the sign-up process? If so, how many individuals are affected? Or is it just a one-off glitch, affecting only me?

Let’s hope it’s the last. Because the ignoring or overriding of expressions of consent, and the use of pre-ticked boxes for gathering consent, are some of the key things which fuel trade in, and disrespect for, personal data. The fact that I’ve experienced this issue with a charity which exists to represent consumers, as a result of my wish to read their report into misuse of personal data, is shoddy, to say the least.

I approached Which? for a comment, and a spokesman said:

We have noted all of your comments relating to new Which? members signing up, including correspondence received after sign-up, and we are considering these in relation to our process.

I appreciate the response, although I’m not sure it really addresses my concerns.

[1] Which? Annual Report 2015/2016

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.


Filed under consent, Data Protection, Directive 95/46/EC, Information Commissioner, marketing, PECR, spam, subject access

Data Protection distress compensation for CCTV intrusion

The Information Commissioner’s Office (ICO) recently (2 February) successfully prosecuted a business owner for operating CCTV without an appropriate notification under section 18 of the Data Protection Act 1998 (DPA), announcing:

Businesses could face fines for ignoring CCTV data protection law

But a recent case in the Scottish Sheriff Court shows that CCTV and data protection can also have relevance in private law civil proceedings. In Woolley against Akbar [2017] ScotsSC 7 the husband and wife pursuers (equivalent to claimants in England and Wales) successfully brought a claim for compensation for distress caused by the defender’s (defendant in England and Wales) use of CCTV cameras which were continuously recording video and audio, and which were deliberately set to cover the pursuers’ private property (their garden area and the front of their home). Compensation was assessed at £8,634 for each of the pursuers (so £17,268 in total) with costs to be assessed at a later date.

Two things are of particular interest to data protection fans: firstly, the willingness of the court to rule unequivocally that CCTV operated in non-compliance with the DPA Schedule One principles was unlawful; and secondly, the award of compensation despite the absence of physical damage.

The facts were that Mr and Mrs Woolley own and occupy the upper storey of a dwelling, while Mrs Akbar owns and operates the lower storey as a guest house, managed by her husband Mr Akram. In 2013 the relationship between the parties broke down. Both parties installed CCTV systems, but while the pursuers’ system monitors only their own property, the sheriff found that the defender did not take:

any precautions to ensure that coverage of the pursuers’ property was minimised or avoided. The cameras to the front of the house record every person approaching the pursuers’ home. The cameras to the rear were set deliberately to record footage of the pursuers’ private garden area. There was no legitimate reason for the nature and extent of such video coverage. The nature and extent of the camera coverage were obvious to the pursuers, as they could see where the cameras were pointed. The coverage was highly intrusive…the defender also made audio recordings of the area around the pursuers’ property…they demonstrated an ability to pick up conversations well beyond the pursuers’ premises. There are four audio boxes. The rear audio boxes are capable of picking up private conversations in the pursuers’ rear garden. Mr Akram, on one occasion, taunted the pursuers about his ability to listen to them as the pursuers conversed in their garden. The defender and Mr Akram were aware of this at all times, and made no effort to minimise or avoid the said audio recording. The nature of the coverage was obvious to the pursuers. Two audio boxes were installed immediately below front bedroom windows. The pursuers feared that conversations inside their home could also be monitored. The said coverage was highly intrusive.

Although, after the intervention of the ICO, the defender realigned the camera at the rear of the property, Sheriff Ross held that the coverage “remains intrusive”. Fundamentally, the sheriff held that the CCTV use was: unfair (in breach of the first data protection principle); excessive in terms of the amount of data captured (in breach of the third data protection principle); and retained for too long (in breach of the fifth data protection principle).

The sheriff noted that, by section 13(2) of the DPA, compensation for distress can only be awarded if the pursuer has suffered “damage”, which was not the case here. However, the sheriff further correctly noted, and was no doubt taken to, the decision of the Court of Appeal in Vidal-Hall & Ors v Google [2015] EWCA Civ 311 in which the court struck down section 13(2) as being incompatible with the UK’s obligations under the European data protection directive and the Charter of Fundamental Rights (my take on Vidal Hall is here). Accordingly, “pure” distress compensation was available.

Although the facts here show a pretty egregious breach of the DPA, it is good to see a court understanding and assessing the issues so well, no doubt assisted in doing so by Paul Motion, of BTO Solicitors, who appeared for the pursuers.

One niggle I do have is about the role of the ICO in all this: they were clearly apprised of the situation, and could surely have taken enforcement action to require the stopping of the CCTV (although admittedly the ICO cannot make an award of compensation). It’s not clear to me why they didn’t.

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.


Filed under damages, Data Protection, Information Commissioner

Get rights right, gov.uk

Government page on subject access rights is not accurate

Right of access to data about oneself is recognised as a fundamental right (article 8(2) of the Charter of Fundamental Rights of the European Union). Section 7 of the UK’s Data Protection Act 1998 (DPA) gives expression to this, and provides that as a general right individuals are entitled to be told whether someone else is processing their data, and why, and furthermore (in terms) to be given a copy of that data. The European General Data Protection Regulation retains and bolsters this right, and recognises its importance by placing it in the category of provisions non-compliance with which could result in an administrative fine for a data controller of up to €20m or 4% of turnover (whichever is higher).

So subject access is important, and this is reflected in the fact that it is almost certainly the most litigated of provisions of the DPA (a surprisingly under-litigated piece of legislation). Many data controllers need to commit significant resources to comply with it, and the Information Commissioner’s Office (ICO) produced a statutory code of practice on the subject in 2014.

But it is not an absolute right. The DPA explains that there are exemptions to the right where, for instance, compliance would be likely to prejudice the course of criminal justice, or national security, or, in the case of health and social care records, would be likely to cause serious harm to the data subject or another person. Additionally the DPA recognises that, where complying with a subject access request would involve disclosing information about another individual, the data controller should not comply unless that other person consents, or unless it “is reasonable in all the circumstances to comply with the request without the consent of the other individual” (section 7(4) DPA).
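The shape of that test can be sketched in code. The following is a simplification (in Python, with invented names; it also reflects section 7(5)’s allowance for complying by omitting identifying particulars, and is emphatically not legal advice):

```python
from enum import Enum


class SarResponse(Enum):
    DISCLOSE_IN_FULL = "disclose in full"
    DISCLOSE_REDACTED = "disclose with third-party identities masked"


def handle_third_party_data(third_party_consents: bool,
                            reasonable_without_consent: bool) -> SarResponse:
    """Sketch of the section 7(4) DPA test for third-party data
    caught by a subject access request."""
    if third_party_consents or reasonable_without_consent:
        return SarResponse.DISCLOSE_IN_FULL
    # Section 7(5): comply so far as possible without disclosing the
    # identity of the other individual (e.g. by editing CCTV footage).
    return SarResponse.DISCLOSE_REDACTED


# No consent, and not reasonable to disclose without it: mask identities.
assert handle_third_party_data(False, False) is SarResponse.DISCLOSE_REDACTED
```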

But this important caveat to the right of subject access (the engagement of the parallel rights of third parties) is almost entirely omitted from the government’s own web guidance regarding access to CCTV footage of oneself. It says:

The CCTV owner must provide you with a copy of the footage that you can be seen in. They can edit the footage to protect the identities of other people.

The second sentence is true: especially where footage captures third parties, it is often appropriate to take measures to mask their identities. But the first sentence is simply not true. And I think it is concerning that “the best place to find government services and information” (as gov.uk describes itself) is wrong in its description of a fundamental right.

A data controller (let’s ignore the point that a “CCTV owner” might not necessarily be the data controller) does not have an unqualified obligation to provide information in response to a subject access request. As anyone working in data protection knows, the obligation is qualified by a number of exemptions. The page does allude to one of these (at section 29 of the DPA):

They can refuse your request if sharing the footage will put a criminal investigation at risk

But there are others – and the ICO has an excellent resource explaining them.

What I don’t understand is why the gov.uk page fails to provide better (accurate) information, and why it doesn’t provide a link to the ICO site. I appreciate that the terms and conditions of gov.uk make clear that there is no guarantee that information is accurate, but I think there’s a risk here that data subjects could form unreasonable expectations of their rights, and that this could lead to unnecessary grievances or disputes with data controllers.

Gov.uk invites comments about content, and I will be taking up this invitation. I hope the page will be amended.

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.

 


Filed under Data Protection, Information Commissioner, subject access

Any Safe Harbor in a storm…?

Update: the ICO has contacted me to say that it actually selected SnapSurveys because they offered clients the option of hosting survey responses on UK servers, and it has checked with SnapSurveys that this remains the case. I have been pointed to http://www.snapsurveys.com/survey-software/security-accessibility-and-professional-outline/ which confirms this point.

So the answer to my question

Is the ICO making unlawful transfers of personal data to the US?

I’m pleased to confirm, appears to be “no”.

Earlier this week the Information Commissioner’s Office (ICO) published a blogpost by Deputy Commissioner David Smith, entitled The US Safe Harbor – breached but perhaps not destroyed!

“Don’t panic” says David to those data controllers who are currently relying on Safe Harbor as a means of ensuring that personal data transferred by them to the United States has adequate protection (in line with the requirements of Article 25 of the European Data Protection Directive, and the eighth principle of schedule one of the UK’s Data Protection Act 1998 (DPA)). He is referring, of course, to the recent decision of the Court of Justice of the European Union in Schrems. Data controllers should, he says, “take stock” and “make their own minds up”:

businesses in the UK don’t have to rely on Commission decisions on adequacy. Although you won’t get the same degree of legal certainty, UK law allows you to rely on your own adequacy assessment. Our guidance tells you how to go about doing this.  Much depend [sic] here on the nature of the data that you are transferring and who you are transferring it to but the big question is can you reduce the risks to the personal data, or rather the individuals whose personal data it is, to a level where the data are adequately protected after transfer? The Safe Harbor can still play a role here.

Smith also refers to a recent statement by the Article 29 Working Party – the grouping of representatives of the various European data protection authorities, of which he is a member – and refers to “the substance of the statement being measured, albeit expressed strongly”. What he doesn’t say is how unequivocal it is in saying that

transfers that are still taking place under the Safe Harbour decision after the CJEU judgment are unlawful

And this is particularly interesting because, as I discovered today, the ICO itself appears (still) to be making transfers under Safe Harbor. I reported a nuisance call using its online tool (in doing so I included some sensitive personal data about a family member) and noticed that the tool was operated by SnapSurveys. The ICO’s own website privacy notice says

We collect information volunteered by members of the public about nuisance calls and texts using an online reporting tool hosted by Snap Surveys. This company is a data processor for the ICO and only processes personal information in line with our instructions.

while SnapSurveys’ privacy policy explains that

Snap Surveys NH, Inc. complies with the U.S. – E.U. Safe Harbor framework

This does not unambiguously say that SnapSurveys are transferring the personal data of those submitting reports to the ICO to the US under Safe Harbor – it is possible that the ICO has set up some bespoke arrangement with its processor, under which they process that specific ICO data within the European Economic Area – but it strongly suggests it.

It is understandable that a certain amount of regulatory leeway and leniency be offered to data controllers who have relied on Safe Harbor until now – to that extent I agree with the light-touch approach of the ICO. But if it is really the case that people’s personal data are actually being transferred by the regulator to the US, three weeks after the European Commission decision of 2000 that Safe Harbor provided adequate protection was struck down, serious issues arise. I will be asking the ICO for confirmation about this, and whether, if it is indeed making these transfers, it has undertaken its own adequacy assessment.

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.

 


Filed under 8th principle, Data Protection, Directive 95/46/EC, Information Commissioner, safe harbor

Blackpool Displeasure Breach, redux

Over a year ago I blogged about a tweet by a member of the Oyston family connected with Blackpool FC:

a fan replies to a news item about the club’s manager, and calls the Oyston family “wankers”. Sam Oyston responds by identifying the seat the fan – presumably a season-ticket holder – occupies, and implies that if he continues to be rude the ticket will be withdrawn

For the reasons in that post I thought this raised interesting, and potentially concerning, data protection issues, and I mentioned that the Information Commissioner’s Office (ICO) had powers to take action. It was one of the most read posts on this blog (perhaps the most read), showing, weirdly, that football is possibly of more interest to most people than data protection itself, and it seemed that some people did intend complaining to the ICO. So, recently, I made an FOI request to the ICO for any information held by them concerning Blackpool FC’s data protection compliance. This was the reply:

We have carried out thorough searches of the information we hold and have identified one instance where a member of the public raised concerns with the ICO in September 2014, about the alleged processing of personal data by Blackpool FC.

We concluded that there was insufficient evidence to consider the possibility of a s55 offence under the Data Protection Act 1998 (the DPA), and were unable to make an assessment as the individual had not yet raised their concerns with Blackpool FC direct.  We therefore advised the individual to contact the Club and to come back to us if they were still concerned, however we did not hear from them again.  As such, no investigation took place, nor was any assessment made of the issues raised.

This suggests the ICO wrongly considers itself unable to undertake section 42 assessments under the Data Protection Act 1998 unless the data subject has first complained to the data controller – a stance strongly criticised by Dr David Erdos on this blog, and one which has the potential to put the data subject further in dispute with the data controller (as I can imagine could have happened here, with a family some of whose members are ready to sue to protect their reputation). It also suggests, though, that maybe people weren’t quite as interested as the page views indicated. Nonetheless, I am posting this brief update, because a few people asked about it.

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.


Filed under Data Protection, Information Commissioner

Complaint about Google’s Innuendo, redux

Some time ago I complained to the Information Commissioner’s Office (ICO) about the innuendo carried in the message that Google serves with search results on most personal names: “Some results may have been removed under data protection law in Europe”. I had already complained to Google UK, and wrote about it here. Google UK denied any responsibility or liability, and referred me to their enormous, distant parent at 1600 Amphitheatre Parkway. I think they were wrong to do so, in light of the judgment of the Court of Justice of the European Union in the Google Spain case C‑131/12, but I will probably pursue that separately.

However, section 42 of the Data Protection Act 1998 (DPA) allows me to ask the ICO to assess whether it is likely or unlikely that a data controller has complied with its obligations under the DPA. So that’s what I did (pointing out that a search on “Jon Baines” or “Jonathan Baines” threw up the offending message).

In her response the ICO case officer did not address the jurisdiction point which Google had produced, and nor did she actually make a section 42 assessment (in fairness, I had not specifically cited section 42). What she did say was this

As you know, the Court of Justice of the European Union judgement in May 2014 established that Google was a data controller in respect of the processing of personal data to produce search results. It is not in dispute that some of the search results do relate to you. However, it is also clear that some of them will relate to other individuals with the same name. For example, the first result returned on a search on ‘Jonathan Baines’ is ‘LinkedIn’, which says in the snippet that there are 25 professionals named Jonathan Baines, who use LinkedIn.

It is not beyond the realms of possibility that one or more of the other individuals who share your name have had results about them removed. We cannot comment on this. However, we understand that this message appears in an overwhelming majority of cases when searching on any person’s name. This is likely to be regardless of whether any links have actually been removed.

True, I guess. Which is why I’ve reverted with this clarification of my complaint:

If it assists, and to extend my argument and counter your implied question “which Jon Baines are we talking about?”, if you search < “Jon Baines” Information Rights and Wrongs > (where the search term is actually what lies between the < >) you will get a series of results which undoubtedly relate to me, and from which I can be identified. Google is processing my personal data here (that is an unavoidable conclusion, given the ruling by the Court of Justice of the European Union in “Google Spain” (Case C‑131/12)). The message “Some results may have been removed under data protection law in Europe” appears as a result of the processing of my personal data, because it does not appear on every search (for instance < prime minister porcine rumours > or < “has the ICO issued the cabinet office an enforcement notice yet” >). As a product of the processing of my personal data, I argue that the message relates to me, and constitutes my personal data. As it carries an unfair innuendo (unfair because it implies I might have asked for removal of search results) I would ask that you assess whether Google have or have not likely complied with their obligation under section 4(4) to comply with the first and fourth data protection principles. (Should you doubt the innuendo point, please look at the list of results on a Twitter search for “Some results may have been removed”).

Let’s hope this allows the ICO to make the assessment, without my having to consider whether I need to litigate against one of the biggest companies in world history.

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.


Filed under Data Protection, Information Commissioner

Big Brother is misleading you

The best books… are those that tell you what you know already…

Big Brother Watch (BBW) is a campaigning organisation, a spin-off from the right-wing lobby group The Taxpayers’ Alliance, described as a “poorly disguised Conservative front”, a large part of whose funds come “from wealthy donors, many of whom are prominent supporters of the Conservative party”. To an extent, that doesn’t matter to me: BBW has done a lot to highlight privacy issues which chime with some of my own concerns – eg excessive use of CCTV, biometrics in schools – but they regularly rail against local authority “data breaches” in a way I think is both unhelpful and disingenuous.

The latest example is a report issued this week (on 11th August 2015) entitled “A Breach of Trust – how local authorities commit 4 data breaches every day”. Martin Hoskins has already done an excellent job in querying and critiquing the findings:

At first glance, it looks impressive. It’s almost 200 pages long. But, and this is a big but, there are only a few pages of analysis – once you get past page 12, a series of annexes contain the responses from each local authority, revealing how minor the vast majority of the reported incidents (occurring between April 2011 and April 2014) actually were.

BBW started work on this report by submitting FOI requests to each local authority in June 2014. Quite why it has taken so long to publish the results, bearing in mind that FOI requests should be answered within 20 working days, is beyond me. Although BBW claims to have received a 98% response rate, some 212 authorities either declined to provide information, or claimed that they had experienced no data breaches between 2011 and 2014.

But plenty of media outlets have already uncritically picked the report up and run stories such as the BBC’s “Council data security ‘shockingly lax'” and the Mail’s “Councils losing personal data four times a day”. Local news media also willingly ran stories about their local councils’ data.

However, my main criticism of this BBW report is a fundamental one: their methodology was so flawed that the results are effectively worthless. Helpfully, although at the end of the report, they outline that methodology:

A Freedom of Information request was sent to all local authorities beginning on the 9th June 2014.

We asked for the number of individuals that have been convicted for breaking the Data Protection Act, the number that had had their employment terminated as the result of a DPA breach, the number that were disciplined internally, the number that resigned during proceedings and the number of instances where no action was taken.

The FOI request itself asked for

a list of the offences committed by the individual in question

The flaw is this: individuals within an organisation cannot, in general terms, “break” or “breach” the Data Protection Act 1998 (DPA). An employee is a mere agent of his or her employer, and under the DPA the legal person with the general obligations and liabilities is the “data controller”: an employee of an organisation does not have any real status under the DPA – the employer will be the “person who determines the purposes for which and the manner in which personal data are processed”, that is, the data controller. An individual employee could, in specific terms, “break” or “breach” the DPA, but only if they committed an offence under section 55 of unlawfully obtaining etc. personal data without the consent of the data controller. There is a huge amount of confusion, and sloppy thinking, when it comes to what is meant by a data protection “breach”, but the vast majority of the incidents BBW report on are simply incidents in which personal data has been compromised by the council in question as data controller. No determination of whether the DPA was actually contravened will have been made (if only because the function of determining whether the Act has been contravened is one which falls to the Information Commissioner’s Office, or the police, or the courts). And if BBW wanted a list of offences committed, that list would be tiny.

To an extent, therefore, those councils who responded with inaccurate information are to blame. FOI practitioners are taught (when they are well taught) to read a request carefully, and where there is uncertainty or ambiguity, to seek clarification from the requester. In this instance, I did in fact advise one local authority to do so. Regrettably, rather than clarifying their request, BBW chose not to respond, and the council is listed in the report as “no response received”, which is both unfair and untrue.

I am not saying that data security and data protection in councils is not an area of concern. Indeed, I am sure that in some places it is lax. But councils deal with an enormous amount of sensitive personal data, and mistakes and near misses will sometimes happen. Councils are encouraged to keep (and should be applauded for keeping) registers of such incidents. But they shouldn’t disclose those registers in response to ill-informed and badly worded FOI requests, because the evidence here is that they, and the facts, will be misleadingly represented in order to fit a pre-planned agenda.

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.


Filed under Data Protection, Freedom of Information

FOI, data protection and rogue landlords 

On 23rd July the Chartered Institute of Environmental Health (CIEH), in conjunction with the Guardian, published a database of landlords who have been convicted of offences under the Housing Act 2004. This showed, for example, that one landlord has been prosecuted seven times for issues relating to disrepair and poor state of properties rented out. It also showed apparent regional discrepancies regarding prosecutions, with some councils carrying out only one prosecution since 2006.

This public interest investigative journalism was, however, not achieved without a fight: in September last year the Information Commissioner’s Office (ICO) issued a decision notice finding that the journalist’s request for this information had been correctly refused by the Ministry of Justice on the grounds that the information was sensitive personal data and disclosure under the Freedom of Information Act 2000 (FOIA) would contravene the MoJ’s obligations under the Data Protection Act 1998 (DPA). Section 40(2) of FOIA provides that information is exempt from disclosure under FOIA if disclosure would contravene any of the data protection principles in Schedule One of the DPA (it also provides that it would be exempt if disclosure would contravene section 10 of the DPA, but this is rarely invoked). The key data protection principle is the first, which says that personal data must be processed fairly and lawfully, and in particular that the processing must meet one of the conditions in Schedule Two, and also – for sensitive personal data – one of the conditions in Schedule Three.

The ICO, in its decision notice, after correctly determining that information about identifiable individuals (as opposed to companies) within the scope of the request was sensitive personal data (because it was about offences committed by those individuals) did not accept the requester’s submission that a Schedule Three condition existed which permitted disclosure. The only ones which could potentially apply – condition 1 (explicit consent) or condition 5 (information already made public by the individual) – were not engaged.

However, the ICO did not at the time consider the secondary legislation made under condition 10: the Data Protection (Processing of Sensitive Personal Data) Order 2000 provides further bases for processing of sensitive personal data, and, as the First-tier Tribunal (Information Rights) (FTT) accepted upon appeal by the applicant, part 3 of the Schedule to that Order permits processing where the processing is “in the substantial public interest”, is in connection with “the commission by any person of any unlawful act” and is for journalistic purposes and is done with a “view to the publication of those data by any person and the data controller reasonably believes that such publication would be in the public interest”. In fairness to the ICO, this further condition was identified by them in their response to the appeal.

In this case, the information was clearly sought with a view to future publication in the CIEH’s magazine, “Environmental Health News”, and the requester was the digital editor of the latter. This, the FTT decided, taken with the (objective) substantial public interest in the publication of the information, was sufficient to make disclosure under FOIA fair and lawful. In a passage (paras 28-30) worth quoting in full, the FTT said:

Unfit housing is a matter of major public concern and has a significant impact on the health of tenants. The Housing Act is a key mechanism for local authorities to improve housing standards and protect the health of vulnerable tenants. One mechanism for doing this is by means of prosecution, another is licensing schemes for landlords. Local authorities place vulnerable families in accommodation outside their areas … The publication of information about convictions under the Housing Act would be of considerable value to local authorities in discharge of their functions and assist prospective tenants and those assisting them in avoiding landlords with a history of breaches of the Housing Act.

The sanctions under the Housing Act are comparatively small and the  opprobrium of a conviction may well not rank with other forms of criminal misbehaviour, however the potential for harm to others from such activity is very great, the potential for financial benefit from the misbehaviour is also substantial.  Breaches of the Housing Act are economically motivated and what is proposed is a method of advancing the policy objective of the Housing Act by increasing the availability of relevant information to key actors in the rented housing market – the local authorities as regulator and purchaser and the tenants themselves.  Any impact on the data subjects will overwhelmingly be on their commercial reputations rather than more personal matters.

The Tribunal is therefore satisfied that not only is the disclosure of this information in the substantial public interest, but also any reasonably informed data controller with  knowledge of the social needs and the impact of such disclosure would so conclude.

It is relatively rare that sensitive personal data will be disclosed, or ordered to be disclosed, under FOIA, but it is well worth remembering the 2000 Order, particularly when it comes to publication or proposed publication of such data under public interest journalism.

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.


Filed under Data Protection, Freedom of Information, Information Commissioner, Information Tribunal, journalism, Open Justice

Dear Google…Dear ICO…

On 15 June this year I complained to Google UK. I have had no response, so I have now asked the Information Commissioner’s Office to assess the lawfulness of Google’s actions. This is my email to the ICO:

Hi

I would like to complain about Google UK. On 15 June 2015 I wrote to them at their registered address in the following terms

Complaint under Data Protection Act 1998

When a search is made on Google for my name “Jonathan Baines”, and, alternatively, “Jon Baines”, a series of results are returned, but at the foot of the page a message (“the message”) is displayed:

Some results may have been removed under data protection law in Europe. Learn more

To the best of my knowledge, no results have in fact been removed.

The first principle in Schedule One of the Data Protection Act 1998 (DPA) requires a data controller to process personal data fairly and lawfully. In the circumstances I describe, “Jonathan Baines”, “Jon Baines” and the message constitute my personal data, of which you are clearly data controller.

It is unfair to suggest that some results may have been removed under data protection law. This is because the message carries an innuendo that what may have been removed was content that was embarrassing, or that I did not wish to be returned by a Google search. This is not the case. I do not consider that the hyperlink “Learn more” nullifies the innuendo: for instance, a search on Twitter for the phrase “some results may have been removed” provides multiple examples of people assuming the message carries an innuendo meaning.

Accordingly, please remove the message from any page containing the results of a search on my name Jonathan Baines, or Jon Baines, and please confirm to me that you have done so. You are welcome to email me to this effect at [redacted]”

I have had no response to this letter, and furthermore I have twice contacted Google UK’s twitter account “@googleuk” to ask about a response, but have had none.

I am now asking, pursuant to my right to do so at section 42 of the Data Protection Act 1998, for you to conduct an assessment as to whether it is likely or unlikely that the processing by Google UK has been or is being carried out in compliance with the provisions of that Act.

I note that in Case C‑131/12 the Grand Chamber of the Court of Justice of the European Union held that “when the operator of a search engine sets up in a Member State a branch or subsidiary which is intended to promote and sell advertising space offered by that engine and which orientates its activity towards the inhabitants of that Member State” then “the processing is carried out in the context of the activities of an establishment of the controller on the territory of the Member State”. I also note that Google UK’s notification to your offices under section 18 of the Data Protection Act 1998 says “We process personal information to enable us to promote our goods and services”. On this basis alone I would submit that Google UK is carrying out processing as a data controller in the UK jurisdiction.

I hope I have provided sufficient information for you to begin to assess Google UK’s compliance with its obligations under the Data Protection Act 1998, but please contact me if you require any further information.

with best wishes,

Jon Baines


Filed under Data Protection, Information Commissioner

ICO: Samaritans Radar failed to comply with Data Protection Act

I’ve written so much previously on this blog about Samaritans Radar, the misguided Twitter app launched last year by Samaritans, that I’d thought I wouldn’t have to do so again. However, this is just a brief update on the outcome of the investigation by the Information Commissioner’s Office (ICO) into whether Samaritans were obliged to comply with data protection law when running the app, and, if so, the extent to which they did comply.

To recap, the app monitored the timelines of those the user followed on Twitter, and, if certain trigger words or phrases were tweeted, would send an email alert to the user. This was intended to be a “safety net” so that potential suicidal cries for help were not missed. But what was missed by Samaritans was the fact that those whose tweets were being monitored in this way would have no knowledge of it, and that this could lead to a feeling of uncertainty and unease in some of the very target groups they sought to protect. People with mental health issues raised concerns that the app could actually drive people off Twitter, where there were helpful and supportive networks of users, often tweeting the phrases and words the app was designed to pick up.
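Mechanically, the app’s logic was simple. Here is a sketch of it (my own reconstruction in Python from the public descriptions; the trigger phrases and function names are invented, and this is not Samaritans’ actual code):

```python
# A reconstruction, for illustration only, of the app's basic logic:
# scan the tweets of accounts a subscriber follows for trigger phrases,
# and email the subscriber when one matches.

TRIGGER_PHRASES = ["tired of being alone", "hate myself", "no one to talk to"]


def send_alert_email(subscriber_email: str, tweet: dict) -> None:
    # Stand-in for the email alert the real app sent to the subscriber.
    print(f"Alert to {subscriber_email}: a tweet by @{tweet['author']} "
          "may indicate someone going through a tough time")


def scan_timeline(subscriber_email: str, followed_tweets: list[dict]) -> None:
    for tweet in followed_tweets:
        if any(phrase in tweet["text"].lower() for phrase in TRIGGER_PHRASES):
            # Crucially, the author of the tweet is never told this has
            # happened: the alert goes only to the subscriber.
            send_alert_email(subscriber_email, tweet)


scan_timeline("subscriber@example.com",
              [{"author": "someone", "text": "So tired of being alone tonight"}])
```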

Furthermore, questions were raised, by me and many others, about the legality of the app under data protection law. So I made a request to the ICO under the Freedom of Information Act for

any information – such as an assessment of legality, correspondence etc. – which you hold about the “Samaritans Radar” app which Samaritans recently launched, then withdrew in light of serious legal and ethical concerns being raised

After an initial refusal because their investigation was ongoing, the ICO have now disclosed a considerable amount of information. Within it is the substantive assessment I sought, in the form of a letter from the Group Manager for Government and Society to Samaritans. I think it is important to post it in full, and I do so below. I don’t have much to add, other than that it vindicates the legal position put forward at the time by me and others (notably Susan Hall and Tim Turner).

19 December 2014

Samaritans Radar app

Many thanks for coming to our office and explaining the background to the development of the Radar application and describing how it worked.  We have now had an opportunity to consider the points made at the  meeting, as well as study the information provided in earlier  teleconferences and on the Samaritans’ website. I am writing to let you know our conclusions on how the Data Protection Act applies to the Radar  application.

We recognise that the Radar app was developed with the best of intentions and was withdrawn shortly after its launch but, as you know, during its operation we received a number of queries and concerns about the application. We have been asked for our view on whether personal data was processed in compliance with data protection principles and whether the Samaritans are data controllers. You continue to believe that you are not data controllers or that no personal data has been processed, so I am writing to explain in detail our conclusions on these points.

Personal data

Personal data is data that relates to an identifiable living individual. It is  our well-established position that data which identifies an individual, even without a name associated with it, may be personal data where it is processed to learn or record something about that individual, or where the processing of that information has an impact upon that individual. According to the information you have provided, the Radar app was a web-based application that used a specially designed algorithm that searched for specific keywords within the Twitter feeds of subscribers to the Radar app. When words indicating distress were detected within a Tweet, an email alert was automatically sent from the Samaritans to the subscriber saying Radar had detected someone they followed who may be going through a tough time and provided a link to that individual’s Tweet. The email asked the subscriber whether they were worried about the Tweet and if yes, they were re-directed to the Samaritans’ website for guidance on the best way of providing support to a follower who may be distressed. According to your FAQs, you also stored Twitter User IDs, Twitter User friends’ IDs, all tagged Tweets including the raw data associated with it and a count of flags against an individual Twitter user’s friends’ ID. These unique identifiers are personal data, in that they can easily be linked back to identifiable individuals.

Based on our understanding of how the application worked, we have reached the conclusion that the Radar service did involve processing of personal data. It used an algorithm to search for words that triggered an automated decision about an individual, at which point it sent an email alert to a Radar subscriber. It singled out an individual’s data with the purpose of differentiating them and treating them differently. In addition, you also stored information about all the Tweets that were tagged.

Data controller

We are aware of your view that you “are neither the data controller nor data processor of the information passing through the app”.

The concept of a data controller is defined in section 1 of the Data Protection Act 1998 (the DPA) as

“a person who (either alone or jointly or in common with other persons) determines the purposes for which and the manner in which any personal data are, or are to be, processed”

We have concluded that the Radar service has involved processing of personal data. We understand that you used the agency [redacted] to develop and host the application. We are not fully aware of the role of [redacted] but given your central role in setting up and promoting the Radar application, we consider that the Samaritans have determined the manner and purpose of the processing of this personal data and as such you are data controllers. If you wish to be reminded of the approach we take in this area you may find it helpful to consult our guidance on data controllers and data processors. Here’s the link: https://ico.org.uk/media/about-the-ico/documents/1042555/data-controllers-and-data-processors-dp-guidance.pdf

Sensitive personal data

We also discussed whether you had processed sensitive personal data. You explained that the charity did deal with people seeking help for many different reasons and the service was not aimed at people with possible mental health issues. However the mission of the Samaritans is to alleviate emotional distress and reduce the incidence of suicidal feelings and suicidal behaviours. In addition, the stated aims of the Radar project, the research behind it and the information provided in the FAQs all emphasise the aim of helping vulnerable people online and using the app to detect someone who is suicidal. For example, you say “research has shown there is a strong correlation between “suicidal tweets” and actual suicides and with Samaritans Radar we can turn a social net into a safety net”. Given the aims of the project, it is highly likely that some of the tweets identified to subscribers included information about an individual’s mental health or other medical information and therefore would have been sensitive personal data.

At our meetings you said that even if you were processing sensitive personal data then Schedule 3 condition 5 (“The information contained in the personal data has been made public as a result of steps deliberately taken by the data subject”) was sufficient to legitimise this processing. Our guidance in our Personal Information Online Code of Practice makes it clear that although people post personal information in a way that becomes publicly visible, organisations still have an overarching duty to handle it fairly and to comply with the rules of data protection. The Samaritans are well respected in this field and receiving an email from your organisation carries a lot of weight. Linking an individual’s tweet to an email alert from the Samaritans is unlikely to be perceived in the same light as the information received in the original Tweet — not least because of the risk that people’s tweets were flagged when they were not in any distress at all.

Fair processing

Any processing of personal data must be fair and organisations must consider the effect of the processing on the individuals concerned and whether the processing would be within their reasonable expectations. You indicated that although you had undertaken some elements of an impact assessment, you had not carried out a full privacy impact assessment. You appear to have reached the conclusion that since the Tweets were publicly visible, you did not need to fully consider the privacy risks. For example, on your website you say that “all the data is public, so user privacy is not an issue. Samaritans Radar analyses the Tweets of people you follow, which are public Tweets. It does not look at private Tweets.”

It is our view that if organisations collect information from the internet and use it in a way that’s unfair, they could still breach the data protection principles even though the information was obtained from a publicly available source. It is particularly important that organisations should consider the data protection implications if they are planning to use analytics to make automated decisions that could have a direct effect on individuals. Under section 12 of the Data Protection Act, individuals have certain rights to prevent decisions being taken about them that are solely based on automated processing of their personal data. The quality of the data being used as a basis for these decisions may also be an issue.

We note that the application was a year in development and that you used leading academics in linguistics to develop your word search algorithm. You also tested the application on a large number of people, although, as we discussed, most if not all of these were connected to the project in some way and many were enthusiastic to see the project succeed. As our recent paper on Big Data explains, it is not so much a question of whether the data accurately records what someone says but rather to what extent that information provides a reliable basis for drawing conclusions. Commentators expressed concern at the apparent high level of false positives involving the Radar App (figures in the media suggest only 4% of email alerts were genuine). This raises questions about whether a system operating with such a low success rate could represent fair processing and indicates that many Tweets were being flagged up unnecessarily.

Since you did not consider yourselves to be data controllers, you have not sought the consent of, or provided fair processing notices to, the individuals whose Tweets you flagged to subscribers. It seems unlikely that it would be within people’s reasonable expectations that certain words and phrases from their Tweets would trigger an automatic email alert from the Samaritans saying Radar had detected someone who may be going through a tough time. Our Personal Information Online Code of Practice says it is good practice to only use publicly available information in a way that is unlikely to cause embarrassment, distress or anxiety to the individual concerned. Organisations should only use their information in a way they are likely to expect and to be comfortable with. Our advice is that if in doubt about this, and you are unable to ask permission, you should not collect their information in the first place.

Conclusion

Based on our observations above, we have reached the conclusion that the Radar application did risk causing distress to individuals and was unlikely to be compliant with the Data Protection Act.

We acknowledge that the Samaritans did take responsibility for dealing with the many concerns raised about the application very quickly. The application was suspended on 7 November and we welcomed [redacted] assurances on 14 November that not only was the application suspended but it would not be coming back in anything like its previous form. We also understand that there have been no complaints that indicate that anyone had suffered damage and distress in the very short period the application was in operation.

We do not want to discourage innovation but it is important that organisations should consider privacy throughout the development and implementation of new projects. Failing to do so risks undermining people’s trust in an organisation. We strongly recommend that if you are considering further projects involving the use of online information and technologies you should carry out and publish a privacy impact assessment. This will help you to build trust and engage with the wider public. Guidance on this can be found in our PIA Code of Practice. We also recommend that you look at our paper on Big Data and Data Protection and our Personal Information Online Code of Practice. Building trust and adopting an ethical approach to such projects can also help to ensure you handle people’s information in a fair and transparent way. We would be very happy to advise the Samaritans on data protection compliance in relation to any future projects.
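One footnote from me on the ICO’s point about false positives: the reported figure of only 4% of alerts being genuine is exactly what one would expect when screening for a rare event. A rough back-of-envelope calculation (all numbers invented purely for illustration) shows the base-rate effect:

```python
# Back-of-envelope base-rate calculation; the numbers are invented,
# chosen only to show how a ~4% genuine-alert rate can arise.
tweets_scanned = 100_000
genuine_rate = 0.001        # 1 in 1,000 tweets is a genuine cry for help
sensitivity = 0.80          # the filter catches 80% of genuine cases
false_positive_rate = 0.02  # and misfires on 2% of ordinary tweets

genuine = tweets_scanned * genuine_rate
true_alerts = genuine * sensitivity
false_alerts = (tweets_scanned - genuine) * false_positive_rate

precision = true_alerts / (true_alerts + false_alerts)
print(f"{precision:.0%} of alerts are genuine")  # prints: 4% of alerts are genuine
```

However accurate the word-search algorithm, the rarity of genuine cries for help means the alerts will be overwhelmingly false alarms, which is precisely the fairness problem the ICO identifies.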

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.


Filed under Data Protection, Information Commissioner, social media