Watching the detective

The ICO might be mostly powerless to take action against the operators of the Russian web site streaming feeds from unsecured web cams, but the non-domestic users of those web cams could be vulnerable to enforcement action

The Information Commissioner’s Office (ICO) warned yesterday of the dangers of failing to secure web cams which are connected to the internet. This was on the back of stories about a Russian-based web site which aggregates feeds from thousands of compromised cameras worldwide.

This site was drawn to my attention a few weeks ago, and, although I tweeted obliquely about it, I thought it best not to identify it because of the harm it could potentially cause. However, although most news outlets didn’t identify the site, the cat is now, as they say, out of the bag. No doubt this is why the ICO chose to issue sensible guidance on network security in its blog post.

I also noticed that the Information Commissioner himself, Christopher Graham, rightly pointed to the difficulties in shutting down the site, and the fact that it is users’ responsibility to secure their web cams:

It is not within my jurisdiction, it is not within the European Union, it is Russia.

I will do what I can but don’t wait for me to have sorted this out.

This is, of course, true, and domestic users of web cams would do well to note the advice. Moreover, this is just the latest of these aggregator sites to appear. But news reports suggested that some of the 500-odd (or was it 2000-odd?) feeds on the site from the UK were from cameras of businesses or other non-domestic users (I saw a screenshot, for instance, of a feed from a pizza takeaway). Those users, if their web cams are capturing images of identifiable individuals, are processing personal data in the role of a data controller. And they can’t claim the exemption in the Data Protection Act 1998 (DPA) that applies to processing for purely domestic purposes. They must, therefore, comply with the seventh data protection principle, which requires them to take appropriate measures to safeguard against unauthorised or unlawful processing of personal data. Allowing one’s web cam to be compromised and its feed streamed on a Russian website is a pretty good indication that one is not complying with the seventh principle. Serious contraventions of the obligation to comply with the data protection principles can, of course, lead to ICO enforcement action, such as monetary penalty notices, to a maximum of £500,000.

The ICO is not, therefore, completely powerless here. Arguably it should be (maybe it is?) looking at the feeds on the site to determine which are from non-domestic premises, and looking to take appropriate enforcement action against those responsible. So to that extent, one is rather watching Mr Graham, to see if he can sort this out.


Filed under Data Protection, Information Commissioner, Privacy

The voluntary data controller

One last post on #samaritansradar. I hope.

I am given to understand that Samaritans, having pulled their benighted app, have begun responding to the various legal notices people served on them under the Data Protection Act 1998 (specifically, these were notices under section 7 (subject access), section 10 (right to prevent processing likely to cause damage or distress) and section 12 (rights in relation to automated processing)). I tweeted my section 12 notice, but I doubt I’ll get a response to that, because they’ve never engaged with me once on twitter or elsewhere.

However, I have been shown a response to a section 7 request (which I have permission to blog about) and it continues to raise questions about Samaritans’ handling of this matter (and indeed, their legal advice – which hasn’t been disclosed, or even really hinted at). The response, in relevant part, says:

We are writing to acknowledge the subject access request that you sent to Samaritans via DM on 6 November 2014.  Samaritans has taken advice on this matter and believe that we are not a data controller of information passing through the Samaritans Radar app. However, in response to concerns that have been raised, we have agreed to voluntarily take on the obligations of a data controller in an attempt to facilitate requests made as far as we can. To this end, whilst a Subject Access Request made under the Data Protection Act can attract a £10 fee, we do not intend to charge any amount to provide information on this occasion.

So, Samaritans continue to deny being data controller for #samaritansradar, although they continue to offer only assertions, not any legal analysis. But, notwithstanding their belief that they are not a controller, they are taking on the obligations of a data controller.

I think they need to be careful. A person who knowingly discloses personal data without the consent of the data controller potentially commits a criminal offence under section 55 DPA. One can’t just step in, grab personal data and start processing it, without acting in breach of the law. Unless one is a data controller.

And, in seriousness, this purported adoption of the role of “voluntary data controller” just bolsters the view that Samaritans have been data controllers from the start, for reasons laid out repeatedly on this blog and others. They may have acted as joint data controller with users of the app, but I simply cannot understand how they can claim not to have been determining the purposes for which and the manner in which personal data were processed. And if they were, they were a data controller.


Filed under Data Protection, social media

Do your research. Properly

Campaigning group Big Brother Watch have released a report entitled “NHS Data Breaches”. It purports to show the extent of such “breaches” within the NHS. However, it fails properly to define its terms, and uses very questionable methodology. I think, most worryingly, this sort of flawed research could lead to a reluctance on the part of public sector data controllers to monitor and record data security incidents.

As I checked my news alerts over a mug of contemplative coffee last Friday morning, the first thing I noticed was an odd story from a Bedfordshire news outlet:

Bedford Hospital gets clean bill of health in new data protection breach report, unlike neighbouring counties…From 2011 to 2014 the hospital did not breach the data protection act once, unlike neighbours Northampton where the mental health facility recorded 346 breaches, and Cambridge University Hospitals which registered 535 (the third worst in the country).

Elsewhere I saw that one NHS Trust had apparently breached data protection law 869 times in the same period, but many others, like Bedford Hospital, had not done so once. What was going on – are some NHS Trusts so much worse in terms of legal compliance than others? Are some staffed by people unaware of and unconcerned about patient confidentiality? No. What was going on was that campaigning group Big Brother Watch had released a report with flawed methodology, a misrepresentation of the law and flawed conclusions, which I fear could actually lead to poorer data protection compliance in the future.

I have written before about the need for clear terminology when discussing data protection compliance, and of the confusion which can be caused by sloppiness. The data protection world is very fond of the word “breach”, or “data breach”, and it can be a useful term to describe a data security incident involving compromise or potential compromise of personal data, but the confusion arises because it can also be used to describe, or assumed to apply to, a breach of the law, a breach of the Data Protection Act 1998 (DPA). But a data security incident is not necessarily a breach of a legal obligation in the DPA: the seventh data protection principle in Schedule One requires that

Appropriate technical and organisational measures shall be taken [by a data controller] against unauthorised or unlawful processing of personal data and against accidental loss or destruction of, or damage to, personal data

And section 4(4) of the DPA obliges a data controller to comply with the Schedule One data protection principles. This means that when appropriate technical and organisational measures are taken but unauthorised or unlawful processing, or accidental loss or destruction of, or damage to, personal data nonetheless occurs, the data controller is not in breach of its obligations (at least under the seventh principle). This distinction between a data security incident, and a breach, or contravention, of legal obligations, is one that the Information Commissioner’s Office (ICO) itself has sometimes failed to appreciate (as the First-tier Tribunal found in the Scottish Borders Council case EA/2012/0212). Confusion only increases when one takes into account that under The Privacy and Electronic Communications (EC Directive) Regulations 2003 (PECR), which are closely related to the DPA, and which deal with data security in – broadly – the telecoms arena, there is an actual legislative provision (regulation 2, as amended) which talks in terms of a “personal data breach”, which is

a breach of security leading to the accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access to, personal data transmitted, stored or otherwise processed in connection with the provision of a public electronic communications service

and regulation 5A obliges a relevant data controller to inform the ICO when there has been a “personal data breach”. It is important to note, however, that a “personal data breach” under PECR will not be a breach, or contravention, of the seventh DPA data protection principle, provided the data controller took appropriate technical and organisational measures to safeguard the data.

Things get even more complex when one bears in mind that the draft European General Data Protection Regulation proposes a similar approach to PECR, and defines a “personal data breach” in similar terms to those above (simply removing the words “in connection with the provision of a public electronic communications service“).

Notwithstanding this, the Big Brother Watch report is entitled “NHS Data Breaches”, so one would hope that it would have been clear about its own terms. It has led to a lot of coverage, with media outlets picking up on headline-grabbing claims of “7225 breaches” in the NHS between 2011 and 2014, the equivalent of “6 breaches a day”. But when one looks at the methodology used, serious questions are raised about the research. It used Freedom of Information requests to all NHS Trusts and Bodies, and the actual request was in the following terms:

1. The number of a) medical personnel and b) non-medical personnel that have been convicted for breaches of the Data Protection Act.

2. The number of a) medical personnel and b) non-medical personnel that have had their employment terminated for breaches of the Data Protection Act.

3. The number of a) medical personnel and b) non-medical personnel that have been disciplined internally but have not been prosecuted for breaches of the Data Protection Act.

4. The number of a) medical personnel and b) non-medical personnel that have resigned during disciplinary procedures.

5. The number of instances where a breach has not led to any disciplinary action.

The first thing to note is that, in broad terms, the only way that an individual NHS employee can “breach the Data Protection Act” is by committing a criminal offence under section 55 of unlawfully obtaining personal data without the consent of the (employer) data controller. All the other relevant legal obligations under the DPA are ones attaching to the NHS body itself, as data controller. Thus, by section 4(4) the NHS body has an obligation to comply with the data protection principles in Schedule One of the DPA, not individual employees. And so, except in the most serious of cases, where an employee acts without the consent of the employer to unlawfully obtain personal data, individual employees, whether medical or non-medical personnel, cannot as a matter of law “breach the Data Protection Act”.

One might argue that it is easy to infer that what Big Brother Watch meant to ask for was information about the number of times when actions of individual employees meant that their employer NHS body had breached its obligations under the DPA, and, yes, that is probably what was meant, but the incorrect terms and lack of clarity vitiated the purported research from the start. This is because NHS bodies have to comply with the NHS/Department of Health Information Governance Toolkit. This toolkit actually requires NHS bodies to record serious data security incidents even where those incidents did not, in fact, constitute a breach of the body’s obligations under the DPA (i.e. incidents might be recorded which were “near misses” or which did not constitute a failure of the obligation to comply with the seventh, data security, principle).

The results Big Brother Watch got in response to their ambiguous and inaccurately termed FOI request show that some NHS bodies clearly interpreted it expansively, to encompass all data security incidents, while others – those with zero returns in any of the fields, for instance – clearly interpreted it restrictively. In fact, in at least one case an NHS Trust highlighted that its return included “near misses”, but these were still categorised by Big Brother Watch as “breaches”.

And this is not unimportant: data security and data protection are of immense importance in the NHS, which has to handle huge amounts of highly sensitive personal data, often under challenging circumstances. Awful contraventions of the DPA do occur, but so too do individual and unavoidable instances of human error. The best data controllers will record and act on the latter, even though they don’t give rise to liability under the DPA, and they should be applauded for doing so. Naming and shaming NHS bodies on the basis of such flawed research methodology might well achieve Big Brother Watch’s aim of publicising its call for greater sanctions for criminal offences, but I worry that it might lead to some data controllers being wary of recording incidents, for fear that they will be disclosed and misinterpreted in the pursuit of questionable research.


Filed under Data Protection, Freedom of Information, Information Commissioner, NHS

A strict test for compliance with access to information laws

The High Court has quashed planning permission for a wind turbine because the Council involved failed to make information available beforehand, in breach of its legal obligations

The statutory rights to information held by public authorities which commenced in January 2005 – when the Freedom of Information Act 2000 and the Environmental Information Regulations 2004 came into effect – are not the only legal mechanism whereby people can or must have public information imparted to them. For instance, sections 100A-E of the Local Government Act 1972 (as inserted by the Local Government (Access to Information) Act 1985) deal with access to meetings of, and information relating to meetings of, specified local authorities (broadly, County, Borough, District, City or Unitary Councils). Section 100B deals with access to agendas and reports and section 100D with access to background papers. In both cases these must be “open to inspection by members of the public at the offices of the council” at least five clear days before the meeting (“clear days” refers to weekday working days and does not include the day of publication or the day of the meeting (R v Swansea City Council, ex p Elitestone Ltd (1993) 66 P. & C.R. 422)).

But what happens if these obligations are not complied with? What, for example, happens if background papers are not available for inspection for five clear days before a meeting? Often, nothing happens at all, but sometimes such a failure can be significant and costly. In a recent case (Joicey, R (on the Application of) v Northumberland County Council [2014] EWHC 3657) this is exactly what transpired. A planning application for a wind turbine was at issue,1 with a meeting scheduled for 5 November 2013 to consider it. The judgment informs us that “the officer’s report recommending approval…subject to conditions, was made available on 23 October” (it is not clear whether this means made available only for inspection, or whether it was also available on the Council’s website, although nothing turns on this). A Dr Ferguson, opposing the application (and a friend of the applicant Mr Joicey) noticed from the officer report that an external noise assessment report had been commissioned and produced. He emailed the Council on 30 October asking about the noise assessment report, getting no immediate reply, and attended the Council offices on 1 November to inspect the files, but no noise assessment report was included. On 4 November, the day before the committee meeting, he received a reply to his 30 October email, with a copy of the noise assessment report attached. The same day a copy of the report was uploaded to the Council website.
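Applying the five-clear-days arithmetic to those dates shows how stark the failure was. Here is a minimal sketch of the computation – my own illustration, not anything from the judgment, assuming “clear days” are simply weekdays and ignoring bank holidays (which a real check would also need to exclude):

```python
from datetime import date, timedelta

def clear_days_between(published: date, meeting: date) -> int:
    """Count the 'clear days' between publication and meeting: weekday
    working days only, excluding both the day of publication and the
    day of the meeting (per R v Swansea City Council, ex p Elitestone)."""
    count = 0
    d = published + timedelta(days=1)
    while d < meeting:
        if d.weekday() < 5:  # Monday=0 .. Friday=4; bank holidays ignored here
            count += 1
        d += timedelta(days=1)
    return count

meeting = date(2013, 11, 5)
# The officer's report, available on 23 October, was in ample time:
print(clear_days_between(date(2013, 10, 23), meeting))  # 8 clear days
# The noise assessment report, available on 4 November, was not:
print(clear_days_between(date(2013, 11, 4), meeting))   # 0 clear days
```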

The committee approved the application, despite Mr Joicey addressing the meeting in the following terms

Noise impact assessment has been carried out again, in full, for this application, but I don’t suppose any of you have seen it, because this highly relevant document (74 pages of it) appeared only yesterday, and that was after requests to see it. If you study it, and you are properly armed with the knowledge of previous planning history connected with this site, you will find that it is actually fundamentally flawed, again, and that it shows that this application must actually be refused on noise grounds.

Mr Joicey brought judicial review proceedings on six grounds, but the one which concerns us here is the first: the non-availability of the noise assessment. As the noise assessment report was not included in a list of the background papers for the report to the committee, and was not available for inspection five clear days before the meeting, there was, said Mr Justice Cranston

no doubt that there were a number of breaches of the public’s right to know under the Local Government Act 1972

Furthermore, the fact that the report was not available on the Council’s website was a breach of its undertakings in its Statement of Community Involvement (SCI) prepared pursuant to its obligations under section 18(1) of the Planning and Compulsory Purchase Act 2004. The Council’s SCI stated that “Once a valid planning application has been received we will…Publish details of the application with supporting documentation on the council website.” The Council even conceded that, although the report had been uploaded on 4 November, it had been described as published on 9 September, and the judge took a “dim view of any public authority backdating a document in a manner which could give a false impression to the public”. The undertaking in the SCI went further, said the judge, than the statutory obligations in the 1972 Act, and constituted a continuing promise giving rise to a legitimate expectation on the part of the public, and “otherwise the public’s right to know what is being proposed regarding a planning application would be frustrated”.

But what was the effect of these failings? The Council submitted that no prejudice had been caused to the claimant, because the planning committee’s decision had been inevitable and, adopting the test in Bolton MBC v Secretary of State for the Environment (1990) 61 P. & C.R. 343, if the court was uncertain whether, absent the failings, there was a real possibility of a different decision being reached, there was no basis for concluding that the decision was invalid. However, Mr Justice Cranston held that the correct test was different: drawing on the authorities of Simplex GE Holdings Ltd v Secretary of State for Environment (1988) 3 PLR 25 and R (on the application of Holder) v Gedling Borough Council [2014] EWCA Civ 599 he said that

the claimant will be entitled to relief unless the decision-maker can demonstrate that the decision it took would inevitably have been the same had it complied with its statutory obligation to disclose information in a timely fashion [emphasis not in original]

And in this case the Council failed to persuade him that the decision would inevitably have been the same if the noise assessment report had been made available earlier: the issue of noise had been a key one in earlier challenges to the developments and remained so now, and Mr Joicey could have made further representations and sought further expert opinion which might have persuaded the planning committee.

Some of Mr Joicey’s other grounds of challenge succeeded, and some failed, but the merits of the successful challenges led to the planning permission being quashed.

Local authorities would do well to note the strictness of the test here: breaches of the access to information provisions of the 1972 Local Government Act, and of the undertakings in a Statement of Community Involvement, will mean decisions taken are liable to be quashed upon challenge, unless the decision would inevitably have been the same without the breaches. Inevitability is a hard thing to prove.

1Northumberland County Council, despite its name, is a unitary authority, and, therefore, a local planning authority

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.


Filed under local government, transparency

So farewell then #samaritansradar…

…or should that be au revoir?

With an interestingly timed announcement (18:00 on a Friday evening) Samaritans conceded that they were pulling their much-heralded-then-much-criticised app “Samaritans Radar”, and, as if some of us didn’t feel conflicted enough criticising such a normally laudable charity, their Director of Policy Joe Ferns managed to get a dig in, hidden in what was purportedly an apology:

We are very aware that the range of information and opinion, which is circulating about Samaritans Radar, has created concern and worry for some people and would like to apologise to anyone who has inadvertently been caused any distress

So, you see, it wasn’t the app, and the creepy feeling of having all one’s tweets closely monitored for potentially suicidal expressions, which caused concern and worry and distress – it was all those nasty people expressing a range of information and opinion. Maybe if we’d all kept quiet the app could have continued on its unlawful and unethical merry way.

However, although the app has been pulled, it doesn’t appear to have gone away:

We will…be testing a number of potential changes and adaptations to the app to make it as safe and effective as possible for both subscribers and their followers

There is a survey at the foot of this page which seeks feedback and comment. I’ve completed it, and would urge others to do so. I’ve also given my name and contact details, because one of my main criticisms of the launch of the app was that there was no evidence that Samaritans had taken advice from anyone on its data protection implications – and I’m happy to give such advice for no fee. As Paul Bernal says, “[Samaritans] need to talk to the very people who brought down the app: the campaigners, the Twitter activists and so on”.

Data protection law’s place in our digital lives is of profound importance, and of profound interest to me. Let’s not forget that its genesis in the 1960s and 1970s was in the concerns raised by the extraordinary advances that computing brought to data analysis. For me some of the most irritating counter-criticism during the recent online debates about Samaritans Radar was from people who equated what the app did to mere searching of tweets, or searching for keywords. As I said before, the sting of this app lay in the overall picture – it was developed, launched and promoted by Samaritans – and in the overall processing of data which went on – it monitored tweets, identified potentially worrying ones and pushed this information to a third party, all without the knowledge of the data subject.
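To make concrete why this was more than “mere searching”, here is a deliberately crude sketch of the kind of pipeline involved – every name, phrase and structure below is my own illustrative assumption, not Samaritans’ actual code:

```python
# Illustrative sketch only: assumed names, phrases and structure,
# not the actual Samaritans Radar implementation.

TRIGGER_PHRASES = ["hate myself", "can't go on", "want to end it"]  # assumed examples

def looks_worrying(tweet_text: str) -> bool:
    """Crude keyword matching of the sort the app seems to have used."""
    text = tweet_text.lower()
    return any(phrase in text for phrase in TRIGGER_PHRASES)

def send_alert_email(to_address: str, tweet: dict) -> None:
    """Stand-in for the app's email notification to the subscriber."""
    print(f"Alert to {to_address}: @{tweet['author']} tweeted: {tweet['text']}")

def process_timeline(subscriber_email: str, followed_tweets: list) -> None:
    """Monitor the tweets of everyone the subscriber follows and push
    flagged ones to the subscriber - without the tweeter's knowledge."""
    for tweet in followed_tweets:
        if looks_worrying(tweet["text"]):
            send_alert_email(subscriber_email, tweet)

process_timeline("subscriber@example.com",
                 [{"author": "someone", "text": "I hate myself today"}])
```

Whoever writes and operates the flagging step is determining both the purpose of the processing (identifying potentially vulnerable tweeters) and its manner (which phrases trigger an alert, and who receives it) – which is precisely the statutory test for being a data controller.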

But also irritating were comments from people who told us that other organisations do similar analytics, for commercial reasons, so why, the implication went, shouldn’t Samaritans do it for virtuous ones? It is no secret that an enormous amount of analysis takes place of information on social media, and people should certainly be aware of this (see Adrian Short’s excellent piece here for some explanation), but the fact that it can and does take place a) doesn’t mean that it is necessarily lawful, nor that the law is impotent within the digital arena, and b) doesn’t mean that it is necessarily ethical. And for both those reasons Samaritans Radar was an ill-judged experiment that should never have taken place as it did. If any replacement is to be both ethical and lawful a lot of work, and a lot of listening, needs to be done.

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.


Filed under Data Protection, social media

Samaritans cannot deny being data controller for #samaritansradar

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.

So, Samaritans continue to support the #samaritansradar app, about which I, and many others, have already written. A large number of people suffering from, or with experience of, mental health problems have pleaded with Samaritans to withdraw the app, which monitors the tweets of the people one follows on twitter, applies an algorithm to identify tweets from potentially vulnerable people, and emails that information to the app user, all without the knowledge of the person involved. As Paul Bernal has eloquently said, this is not really an issue about privacy, and nor is it about data protection – it is about the threat many vulnerable people feel from the presence of the app. Nonetheless, privacy and data protection law, in part, are about the rights of the vulnerable; last night (4 November) Samaritans issued their latest sparse statement, part of which dealt with data protection:

We have taken the time to seek further legal advice on the issues raised. Our continuing view is that Samaritans Radar is compliant with the relevant data protection legislation for the following reasons:

- We believe that Samaritans are neither the data controller or data processor of the information passing through the app

- All information identified by the app is available on Twitter, in accordance with Twitter’s Ts&Cs (link here). The app does not process private tweets.

- If Samaritans were deemed to be a data controller, given that vital interests are at stake, exemptions from data protection law are likely to apply

It is interesting that there is reference here to “further” legal advice: none of the previous statements from Samaritans had given any indication that legal or data protection advice had been sought prior to the launch of the app. It would be enormously helpful to discussion of the issue if Samaritans actually disclosed their advice, but I doubt very much that they will do so. Nonetheless, their position appears to be at odds with the legal authorities.

In May this year the Court of Justice of the European Union (CJEU) gave its ruling in the Google Spain case. The most widely covered aspect of that case was, of course, the extent of a right to be forgotten – a right to require Google to remove certain search results in specified cases. But the CJEU was also asked to rule on the question of whether a search engine, such as Google, was a data controller in circumstances in which it engages in the indexing of web pages. Before the court Google argued that

the operator of a search engine cannot be regarded as a ‘controller’ in respect of that processing since it has no knowledge of those data and does not exercise control over the data

and this would appear to be a similar position to that adopted by Samaritans in the first bullet point above. However, the CJEU dismissed Google’s argument, holding that

the operator of a search engine ‘collects’ such data which it subsequently ‘retrieves’, ‘records’ and ‘organises’ within the framework of its indexing programmes, ‘stores’ on its servers and, as the case may be, ‘discloses’ and ‘makes available’ to its users in the form of lists of search results…It is the search engine operator which determines the purposes and means of that activity and thus of the processing of personal data that it itself carries out within the framework of [the activity at issue] and which must, consequently, be regarded as the ‘controller’ in respect of that processing

Inasmuch as I understand how it works, I would submit that #samaritansradar, while not a search engine as such, collects data (personal data), records and organises it, stores it on servers and discloses it to its users in the form of a result. The app has been developed by and launched by Samaritans, it carries their name and seeks to further their aims: it is clearly “their” app, and they are, as clearly, a data controller with attendant legal responsibilities and liabilities. In further proof of this, Samaritans introduced, after the app launch and in response to outcry, a “whitelist” of twitter users who have specifically informed Samaritans that they do not want their tweets to be monitored (update on 30 October). If Samaritans are effectively saying they have no role in the processing of the data, how on earth would such a whitelist be expected to work?
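The answer, of course, is that it couldn’t: an opt-out whitelist only works if it sits inside Samaritans’ own processing, as a hypothetical sketch makes plain (my illustration, not their code):

```python
# Hypothetical illustration: an opt-out whitelist necessarily sits inside
# the operator's own pipeline - the operator maintains it and consults it,
# and thereby determines the manner of the processing.

WHITELIST = {"@user_who_opted_out"}  # maintained by Samaritans, not by app users

def should_process(tweet_author: str) -> bool:
    return tweet_author not in WHITELIST
```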

And it’s interesting to consider the apparent alternative view that they are implicitly putting forward. If they are not data controller, then who is? The answer must be the users who download and run the app, who would attract all the legal obligations that go with being a data controller. The Samaritans appear to want to back out of the room, leaving app users to answer all the awkward questions.1

Also very interesting is that Samaritans clearly accept that others might have a different view to theirs on the issue of controllership; they suggest that if they were held to be a data controller they would avail themselves of “exemptions” in data protection law relating to “vital interest” to legitimise their activities. One presumes this to be a reference to certain conditions in Schedule 2 and 3 of the Data Protection Act 1998 (DPA). Those schedules contain conditions which must be met, in order for the processing of, respectively, personal data and sensitive personal data, to be fair and lawful. As we are here clearly talking about sensitive personal data (personal data relating to someone’s physical or mental health is classed as sensitive), let us look at the relevant condition in Schedule 3:

The processing is necessary—
(a) in order to protect the vital interests of the data subject or another person, in a case where—
(i) consent cannot be given by or on behalf of the data subject, or
(ii) the data controller cannot reasonably be expected to obtain the consent of the data subject, or
(b) in order to protect the vital interests of another person, in a case where consent by or on behalf of the data subject has been unreasonably withheld

Samaritans’ alternative defence founders on the first four words: in what way can this processing be necessary to protect vital interests? The Information Commissioner’s Office explains that this condition only applies

in cases of life or death, such as where an individual’s medical history is disclosed to a hospital’s A&E department treating them after a serious road accident

The evidence suggests this app is actually delivering a very large number of false positives (as it’s based on what seems to be a crude keyword algorithm, this is only to be expected). Given that, and, indeed, given that Samaritans have – expressly – no control over what happens once the app notifies a user of a concerning tweet, it is absolutely preposterous to suggest that the processing is necessary to protect people’s vital interests. Moreover, the condition above also explains that it can only be relied on where consent cannot be given by the data subject or the controller cannot reasonably be expected to obtain consent. Nothing prevents Samaritans from operating an app which does the same thing (flags a tweet of concern) but is based on a consent model, whereby someone agrees that their tweets will be monitored in that way. Indeed, such a model would fit better with Samaritans’ stated aim of allowing people to “lead the conversation at their own pace”. It is clear, nonetheless, that consent could be sought for this processing, but that Samaritans have failed to design an app which allows it to be sought.
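The false-positive point is easy to demonstrate. Assuming, purely for illustration, trigger phrases of the kind such an algorithm might use, everyday hyperbole gets flagged every time:

```python
# Assumed trigger phrases, for illustration only - not the app's actual list
triggers = ["want to die", "hate my life", "can't go on"]

benign_tweets = [
    "This queue is so long I want to die of boredom",
    "I hate my life every Monday morning, lol",
    "I can't go on watching this terrible film",
]

for tweet in benign_tweets:
    flagged = any(t in tweet.lower() for t in triggers)
    print(flagged, "-", tweet)  # all three print True: pure false positives
```

No human judgement or contextual check sits between the match and the email to the follower, so matches like these go straight out as “worrying” tweets.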

The Information Commissioner’s Office is said to be looking into the issues raised by Samaritans’ app. It may be that only legal enforcement action will actually see the app removed – as I think it should be. But it would be extremely sad if it came to that. It should be removed voluntarily by Samaritans, so they can rethink, re-programme, take full legal advice, but – most importantly – listen to the voices of the most vulnerable, who feel so threatened and betrayed by the app.

1On a strict and nuanced analysis of data protection law, users of the app probably are data controllers, acting as joint ones with Samaritans. However, given the regulatory approach of the Information Commissioner they would probably be able to avail themselves of the general exemption from all of the DPA for processing which is purely domestic (although even that is arguably wrong). These are matters for another blog post, however, and the fact that users might be held to be data controllers doesn’t alter the fact that Samaritans are, and in a much clearer way


Filed under consent, Data Protection, Information Commissioner, Privacy, social media

No harm done

Why does nobody listen to me?

Quite a few media outlets and commentators have picked up on the consultation by the Department for Culture, Media and Sport I blogged about recently. The consultation is about the possibility of legislative change to make it easier for the Information Commissioner’s Office (ICO) to “fine” (in reality, serve a civil monetary penalty notice on) people or organisations who commit serious contraventions of ePrivacy law by sending unsolicited electronic marketing messages (aka spam calls, texts, emails etc).

However, almost every report I have seen has missed a crucial point. So, we have The Register saying “ICO to fine UNBIDDEN MARKETEERS who cause ‘ANXIETY’…Inconvenience, annoyance also pass the watchdog’s stress test”, and Pinsent Masons, Out-Law.com saying “Unsolicited marketing causing ‘annoyance, inconvenience or anxiety’ could result in ICO fine”. We even have 11KBW’s formidable Christopher Knight saying

the DCMS has just launched a consultation exercise on amending PECR with a view to altering the test from “substantial damage or distress” to causing “annoyance, inconvenience or anxiety”

But none of these spot that the preferred option of DCMS and the ICO is actually to go further, and give the ICO the power to serve a monetary penalty notice even when no harm has been shown at all:

Remove the existing legal threshold of “substantial damage and distress” (this is the preferred option of both ICO and DCMS. There would be no need to prove “substantial damage and distress”, or any other threshold such as ‘annoyance, inconvenience or anxiety’…

So yes, this is a blog post purely to moan about the fact that people haven’t read my previous post. It’s my blog and I’ll cry if I want to.

UPDATE:

Chris Knight is so formidable that he’s both updated the Panopticon post and pointed out the oddness of option 3 being preferred when nearly all of the consultation paper is predicated on option 2 being victorious.


Filed under Information Commissioner, marketing, monetary penalty notice, PECR, spam texts