Around this time last year I wrote two blog posts about two separate police forces’ decisions to tweet the names of drivers charged with (but not – yet, at least – convicted of) drink-driving offences. In the latter example Staffordshire police were actually using a hashtag, #drinkdriversnamedontwitter, and I argued that
If someone has merely been charged with an offence, it is contrary to the ancient and fundamental presumption of innocence to shame them for that fact. Indeed, I struggle to understand how it doesn’t constitute contempt of court to do so, or to suggest that someone who has not been convicted of drink-driving is a drink driver. Being charged with an offence does not inevitably lead to conviction. I haven’t been able to find statistics relating to drink-driving acquittals, but in 2010 16% of all defendants dealt with by magistrates’ courts were either acquitted or not proceeded against
The Information Commissioner’s Office investigated whether there had been a breach of the first principle of Schedule One of the Data Protection Act 1998 (DPA), which requires that processing of personal data be “fair and lawful”, but decided to take no action after Staffs police agreed not to use the hashtag again, saying
Our concern was that naming people who have only been charged alongside the label ‘drink-driver’ strongly implies a presumption of guilt for the offence. We have received reassurances from Staffordshire Police the hashtag will no longer be used in this way and are happy with the procedures they have in place. As a result, we will be taking no further action.
But my first blog post had raised questions about whether the mere naming of those charged was in accordance with the same DPA principle. Newspaper articles talked of naming and “shaming”, but where is the shame in being charged with an offence? I wondered why Sussex police didn’t correct those newspapers who attributed the phrase to them.
And this year, Sussex police, as well as neighbouring Surrey, and Avon and Somerset, are doing the same thing: naming drivers charged with drink-driving offences on twitter or elsewhere online. The media happily describe this as a “naming and shaming” tactic, and I have not seen the police disabusing them of it, although Sussex police did at least enter into a dialogue with me and others on twitter, in which they assured us that their actions were in pursuit of open justice, and that they were not intending to shame people. However, this doesn’t appear to tally with the understanding of the Sussex Police and Crime Commissioner, who said earlier this year
I am keen to find out if the naming and shaming tactic that Sussex Police has adopted is actually working
But I also continue to question whether the practice is in accordance with police forces’ obligations under the DPA. Information relating to the commission or alleged commission by a person of an offence is that person’s sensitive personal data, and for processing to be fair and lawful a condition in both Schedule Two and, crucially, Schedule Three must be met. And I struggle to see which Schedule Three condition applies – the closest is probably
The processing is necessary…for the administration of justice
and the courts have held that “necessary” in this context

should reflect the meaning attributed to it by the European Court of Human Rights when justifying an interference with a recognised right, namely that there should be a pressing social need and that the interference was both proportionate as to means and fairly balanced as to ends

and that

while the adjective “necessary”, within the meaning of article 10(2) [of the European Convention on Human Rights] is not synonymous with “indispensable”, neither has it the flexibility of such expressions as “admissible”, “ordinary”, “useful”, “reasonable” or “desirable” and that it implies the existence of a “pressing social need.”
Avon and Somerset police’s Supt Corrigan says

This is another tool in our campaign to stop people driving while under the influence of drink or drugs. If just one person is persuaded not to take to the road as a result, then it is worthwhile as far as we are concerned.
and Sussex police’s Chief Inspector Natalie Moloney says
I hope identifying all those who are to appear in court because of drink or drug driving will act as a deterrent and make Sussex safer for all road users
which firstly fails to use the word “alleged” before “drink or drug driving”, and secondly – like Supt Corrigan’s comments – suggests that the purpose of naming is not to promote open justice, but rather to deter drink drivers.
Deterring drink driving is certainly a worthy public aim (and I stress that I have no sympathy whatsoever with those convicted of such offences) but should the sensitive personal data of people who have not been convicted of any offence be used to their detriment in pursuance of that aim?
I worry that unless such naming practices are scrutinised, and challenged when they are unlawful and unfair, the practice will spread, and social “shame” will increasingly be visited on the innocent. I hope the Information Commissioner investigates.
The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.
Filed under Data Protection, human rights, Information Commissioner, Open Justice, police, social media
The voluntary data controller
One last post on #samaritansradar. I hope.
I am given to understand that Samaritans, having pulled their benighted app, have begun responding to the various legal notices people served on them under the Data Protection Act 1998 (specifically, these were notices under section 7 (subject access) section 10 (right to prevent processing likely to cause damage or distress) and section 12 (rights in relation to automated processing)). I tweeted my section 12 notice, but I doubt I’ll get a response to that, because they’ve never engaged with me once on twitter or elsewhere.
However, I have been shown a response to a section 7 request (which I have permission to blog about) and it continues to raise questions about Samaritans’ handling of this matter (and indeed, their legal advice – which hasn’t been disclosed, or even really hinted at). The response, in relevant part, says
We are writing to acknowledge the subject access request that you sent to Samaritans via DM on 6 November 2014. Samaritans has taken advice on this matter and believe that we are not a data controller of information passing through the Samaritans Radar app. However, in response to concerns that have been raised, we have agreed to voluntarily take on the obligations of a data controller in an attempt to facilitate requests made as far as we can. To this end, whilst a Subject Access Request made under the Data Protection Act can attract a £10 fee, we do not intend to charge any amount to provide information on this occasion.
So, Samaritans continue to deny being data controller for #samaritansradar, although they continue to offer only assertions, not any legal analysis. But, notwithstanding their belief that they are not a controller, they are taking on the obligations of a data controller.
I think they need to be careful. A person who knowingly discloses personal data without the consent of the data controller potentially commits a criminal offence under section 55 DPA. One can’t just step in, grab personal data and start processing it, without acting in breach of the law. Unless one is a data controller.
And, in seriousness, this purported adoption of the role of “voluntary data controller” just bolsters the view that Samaritans have been data controllers from the start, for reasons laid out repeatedly on this blog and others. They may have acted as joint data controller with users of the app, but I simply cannot understand how they can claim not to have been determining the purposes for which and the manner in which personal data were processed. And if they were, they were a data controller.
Filed under Data Protection, social media
Do your research. Properly
Campaigning group Big Brother Watch have released a report entitled “NHS Data Breaches”. It purports to show the extent of such “breaches” within the NHS, but it fails properly to define its terms and uses very questionable methodology. Most worryingly, I think this sort of flawed research could lead to a reluctance on the part of public sector data controllers to monitor and record data security incidents.
As I checked my news alerts over a mug of contemplative coffee last Friday morning, the first thing I noticed was an odd story from a Bedfordshire news outlet:
Bedford Hospital gets clean bill of health in new data protection breach report, unlike neighbouring counties…From 2011 to 2014 the hospital did not breach the data protection act once, unlike neighbours Northampton where the mental health facility recorded 346 breaches, and Cambridge University Hospitals which registered 535 (the third worst in the country).
Elsewhere I saw that one NHS Trust had apparently breached data protection law 869 times in the same period, but many others, like Bedford Hospital, had not done so once. What was going on – are some NHS Trusts so much worse in terms of legal compliance than others? Are some staffed by people unaware of and unconcerned about patient confidentiality? No. What was going on was that campaigning group Big Brother Watch had released a report with flawed methodology, a misrepresentation of the law and flawed conclusions, which I fear could actually lead to poorer data protection compliance in the future.
I have written before about the need for clear terminology when discussing data protection compliance, and of the confusion which can be caused by sloppiness. The data protection world is very fond of the words “breach” and “data breach”. They can be useful terms to describe a data security incident involving compromise or potential compromise of personal data, but confusion arises because they can also be used to describe, or be assumed to apply to, a breach of the law – a breach of the Data Protection Act 1998 (DPA). But a data security incident is not necessarily a breach of a legal obligation in the DPA: the seventh data protection principle in Schedule One requires that
Appropriate technical and organisational measures shall be taken [by a data controller] against unauthorised or unlawful processing of personal data and against accidental loss or destruction of, or damage to, personal data
And section 4(4) of the DPA obliges a data controller to comply with the Schedule One data protection principles. This means that when appropriate technical and organisational measures are taken but unauthorised or unlawful processing, or accidental loss or destruction of, or damage to, personal data nonetheless occurs, the data controller is not in breach of its obligations (at least under the seventh principle). This distinction between a data security incident, and a breach, or contravention, of legal obligations, is one that the Information Commissioner’s Office (ICO) itself has sometimes failed to appreciate (as the First-tier Tribunal found in the Scottish Borders Council case EA/2012/0212). Confusion only increases when one takes into account that under The Privacy and Electronic Communications (EC Directive) Regulations 2003 (PECR) which are closely related to the DPA, and which deal with data security in – broadly – the telecoms arena, there is an actual legislative provision (regulation 2, as amended) which talks in terms of a “personal data breach”, which is
a breach of security leading to the accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access to, personal data transmitted, stored or otherwise processed in connection with the provision of a public electronic communications service
and regulation 5A obliges a relevant data controller to inform the ICO when there has been a “personal data breach”. It is important to note, however, that a “personal data breach” under PECR will not be a breach, or contravention, of the seventh DPA data protection principle, provided the data controller took appropriate technical and organisational measures to safeguard the data.
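The distinction drawn over the last few paragraphs can be put very crudely in code. This is a toy model only – the function names and boolean framing are my own simplification, not anything found in the legislation – but it captures the point that the seventh principle turns on the measures taken, while the PECR definition turns on the incident itself:

```python
def seventh_principle_contravened(incident_occurred: bool,
                                  appropriate_measures_taken: bool) -> bool:
    # DPA 1998, seventh principle: the obligation is to take appropriate
    # technical and organisational measures. An incident that happens
    # despite such measures is not, of itself, a contravention.
    return incident_occurred and not appropriate_measures_taken


def pecr_personal_data_breach(incident_occurred: bool) -> bool:
    # PECR reg. 2 (as amended): any qualifying security incident is a
    # "personal data breach", whatever measures were in place.
    return incident_occurred


# An incident despite appropriate measures: a "personal data breach" for
# PECR purposes, but not a seventh principle contravention under the DPA.
print(seventh_principle_contravened(True, True))   # False
print(pecr_personal_data_breach(True))             # True
```

Crude as it is, the model makes plain why counting every recorded incident as a “breach of the Data Protection Act” is simply wrong.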
Things get even more complex when one bears in mind that the draft European General Data Protection Regulation proposes a similar approach as PECR, and defines a “personal data breach” in similar terms as above (simply removing the words “in connection with the provision of a public electronic communications service“).
Notwithstanding this, the Big Brother Watch report is entitled “NHS Data Breaches”, so one would hope that it would be clear about its own terms. It has led to a lot of coverage, with media outlets picking up on headline-grabbing claims of “7225 breaches” in the NHS between 2011 and 2014, equivalent to “6 breaches a day”. But when one looks at the methodology used, serious questions arise about the research. It used Freedom of Information requests to all NHS Trusts and Bodies, and the actual request was in the following terms
1. The number of a) medical personnel and b) non-medical personnel that have been convicted for breaches of the Data Protection Act.
2. The number of a) medical personnel and b) non-medical personnel that have had their employment terminated for breaches of the Data Protection Act.
3. The number of a) medical personnel and b) non-medical personnel that have been disciplined internally but have not been prosecuted for breaches of the Data Protection Act.
4. The number of a) medical personnel and b) non-medical personnel that have resigned during disciplinary procedures.
5. The number of instances where a breach has not led to any disciplinary action.
The first thing to note is that, in broad terms, the only way that an individual NHS employee can “breach the Data Protection Act” is by committing a criminal offence under section 55 of unlawfully obtaining personal data without the consent of the (employer) data controller. All the other relevant legal obligations under the DPA are ones attaching to the NHS body itself, as data controller. Thus, by section 4(4) the NHS body has an obligation to comply with the data protection principles in Schedule One of the DPA, not individual employees. And so, except in the most serious of cases, where an employee acts without the consent of the employer to unlawfully obtain personal data, individual employees, whether medical or non-medical personnel, cannot as a matter of law “breach the Data Protection Act”.
One might argue that it is easy to infer that what Big Brother Watch meant to ask for was information about the number of times when the actions of individual employees meant that their employer NHS body had breached its obligations under the DPA – and, yes, that is probably what was meant – but the incorrect terms and lack of clarity vitiated the purported research from the start. This is because NHS bodies have to comply with the NHS/Department of Health Information Governance Toolkit. This toolkit actually requires NHS bodies to record serious data security incidents even where those incidents did not, in fact, constitute a breach of the body’s obligations under the DPA (i.e. incidents might be recorded which were “near misses” or which did not constitute a failure of the obligation to comply with the seventh, data security, principle).
The results Big Brother Watch got in response to their ambiguous and inaccurately termed FOI request show that some NHS bodies clearly interpreted it expansively, to encompass all data security incidents, while others – those with zero returns in any of the fields, for instance – clearly interpreted it restrictively. In fact, in at least one case an NHS Trust highlighted that its return included “near misses”, but these were still categorised by Big Brother Watch as “breaches”.
And this is not unimportant: data security and data protection are of immense importance in the NHS, which has to handle huge amounts of highly sensitive personal data, often under challenging circumstances. Awful contraventions of the DPA do occur, but so too do individual and unavoidable instances of human error. The best data controllers will record and act on the latter, even though they don’t give rise to liability under the DPA, and they should be applauded for doing so. Naming and shaming NHS bodies on the basis of such flawed research methodology might well achieve Big Brother Watch’s aim of publicising its call for greater sanctions for criminal offences, but I worry that it might lead to some data controllers being wary of recording incidents, for fear that they will be disclosed and misinterpreted in the pursuit of questionable research.
Filed under Data Protection, Freedom of Information, Information Commissioner, NHS
So farewell then #samaritansradar…
…or should that be au revoir?
With an interestingly timed announcement (18:00 on a Friday evening) Samaritans conceded that they were pulling their much-heralded-then-much-criticised app “Samaritans Radar”, and, as if some of us didn’t feel conflicted enough criticising such a normally laudable charity, their Director of Policy Joe Ferns managed to get a dig in, hidden in what was purportedly an apology:
We are very aware that the range of information and opinion, which is circulating about Samaritans Radar, has created concern and worry for some people and would like to apologise to anyone who has inadvertently been caused any distress
So, you see, it wasn’t the app, and the creepy feeling of having all one’s tweets closely monitored for potentially suicidal expressions, which caused concern and worry and distress – it was all those nasty people expressing a range of information and opinion. Maybe if we’d all kept quiet the app could have continued on its unlawful and unethical merry way.
However, although the app has been pulled, it doesn’t appear to have gone away
We will…be testing a number of potential changes and adaptations to the app to make it as safe and effective as possible for both subscribers and their followers
There is a survey at the foot of this page which seeks feedback and comment. I’ve completed it, and would urge others to do so. I’ve also given my name and contact details, because one of my main criticisms of the launch of the app was that there was no evidence that Samaritans had taken advice from anyone on its data protection implications – and I’m happy to offer such advice for no fee. As Paul Bernal says, “[Samaritans] need to talk to the very people who brought down the app: the campaigners, the Twitter activists and so on”.
Data protection law’s place in our digital lives is of profound importance, and of profound interest to me. Let’s not forget that its genesis in the 1960s and 1970s was in the concerns raised by the extraordinary advances that computing brought to data analysis. For me some of the most irritating counter-criticism during the recent online debates about Samaritans Radar was from people who equated what the app did to mere searching of tweets, or searching for keywords. As I said before, the sting of this app lay in the overall picture – it was developed, launched and promoted by Samaritans – and in the overall processing of data which went on – it monitored tweets, identified potentially worrying ones and pushed this information to a third party, all without the knowledge of the data subject.
But also irritating were comments from people who told us that other organisations do similar analytics, for commercial reasons, so why, the implication went, shouldn’t Samaritans do it for virtuous ones? It is no secret that an enormous amount of analysis takes place of information on social media, and people should certainly be aware of this (see Adrian Short’s excellent piece here for some explanation), but the fact that it can and does take place a) doesn’t mean that it is necessarily lawful, nor that the law is impotent within the digital arena, and b) doesn’t mean that it is necessarily ethical. And for both those reasons Samaritans Radar was an ill-judged experiment that should never have taken place as it did. If any replacement is to be both ethical and lawful a lot of work, and a lot of listening, needs to be done.
The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.
Filed under Data Protection, social media
Dancing to the beat of the Google drum
With rather wearying predictability, certain parts of the media are in uproar about the removal by Google of search results linking to a positive article about a young artist. Roy Greenslade, in the Guardian, writes
The Worcester News has been the victim of one of the more bizarre examples of the European court’s so-called “right to be forgotten” ruling.
The paper was told by Google that it was removing from its search archive an article in praise of a young artist.
Yes, you read that correctly. A positive story published five years ago about Dan Roach, who was then on the verge of gaining a degree in fine art, had to be taken down.
Although no one knows who made the request to Google, it is presumed to be the artist himself, as he had previously asked the paper itself to remove the piece, on the basis that he felt it didn’t reflect the work he is producing now. But there is a bigger story here, and in my opinion it’s one of Google selling itself as an unwilling censor, and of media uncritically buying it.
Firstly, Google had no obligation to remove the results. The judgment of the Court of Justice of the European Union (CJEU) in the Google Spain case was controversial, and problematic, but its effect was certainly not to oblige a search engine to respond to a takedown request without considering whether it has a legal obligation to do so. What it did say was that, although as a rule data subjects’ rights to removal override the interest of the general public having access to the information delivered by a search query, there may be particular reasons why the balance might go the other way.
Furthermore, even if the artist here had a legitimate complaint that the results constituted his personal data, and that the data Google continued to process were inadequate, inaccurate, excessive or being kept for longer than was necessary (none of which, I would submit, would actually be likely to apply in this case), Google could simply refuse to comply with the takedown request. At that point, the requester would be left with two options: sue, or complain to the Information Commissioner’s Office (ICO). The former option is an interesting one (and I wonder if any such small claims cases will be brought in the County Court) but I think in the majority of cases people will be likely to take the latter. However, if the ICO receives a complaint, it appears that the first thing it is likely to do is refer the person to the publisher of the information in question. In a blog post in August the Deputy Commissioner David Smith said
We’re about to update our website* with advice on when an individual should complain to us, what they need to tell us and how, in some cases, they might be better off pursuing their complaint with the original publisher and not just the search engine [emphasis added]
This is in line with their new approach to handling complaints by data subjects – which is effectively telling them to go off and resolve it with the data controller in the first place.
Even if the complaint does make its way to an ICO case officer, what that officer will be doing is assessing – pursuant to section 42 of the Data Protection Act 1998 (DPA) – “whether it is likely or unlikely that the processing has been or is being carried out in compliance with the provisions of [the DPA]”. What the ICO is not doing is determining an appeal. An assessment of “compliance not likely” is no more than that – it does not oblige the data controller to take action (although it may be accompanied by recommendations). An assessment of “compliance likely”, moreover, leaves an aggrieved data subject with no other option but to attempt to sue the data controller. Contrary to what Information Commissioner Christopher Graham said at the recent Rewriting History debate, there is no right of appeal to the Information Tribunal in these circumstances.
Of course the ICO could, in addition to making a “compliance not likely” assessment, serve Google with an enforcement notice under section 40 DPA requiring them to remove the results. An enforcement notice does have proper legal force, and it is a criminal offence not to comply with one. But they are rare creatures. If the ICO does ever serve one on Google things will get interesting, but let’s not hold our breath.
So, simply refusing to take down the results would, certainly in the short term, cause Google no trouble, nor attract any sanction.
Secondly (sorry, that was a long “firstly”) Google appear to have notified the paper of the takedown, in the same way they notified various journalists of takedowns of their pieces back in June this year (with, again, the predictable result that the journalists were outraged, and republicised the apparently taken down information). The ICO has identified that this practice by Google may in itself constitute unfair and unlawful processing: David Smith says
We can certainly see an argument for informing publishers that a link to their content has been taken down. However, in some cases, informing the publisher has led to the complained about information being republished, while in other cases results that are taken down will link to content that is far from legitimate – for example to hate sites of various sorts. In cases like that we can see why informing the content publisher could exacerbate an already difficult situation and could in itself have a very detrimental effect on the complainant’s privacy
Google is a huge and hugely rich organisation. It appears to be trying to chip away at the CJEU judgment by making it look ridiculous. And in doing so it is cleverly using the media to help portray it as a passive actor – victim, along with the media, of censorship. As I’ve written previously, Google is anything but passive – it has algorithms which prioritise certain results above others, for commercial reasons, and it will readily remove search results upon receipt of claims that the links are to copyright material. Those elements of the media who are expressing outrage at the spurious removal of links might take a moment to reflect whether Google is really as interested in freedom of expression as they are, and, if not, why it is acting as it is.
*At the time of writing this advice does not appear to have been made available on the ICO website.
You can’t take it with you
A paralegal has been convicted of taking client data with him when he left his role. Douglas Carswell MP denies taking Tory Party data, but what of his civil obligations in respect of the data he has retained?
I blogged recently about the data protection implications of the news that Douglas Carswell MP was resigning his seat and seeking re-election as a UKIP MP. I mused on the fact that UKIP were reported to be “purring” over the data he was bringing with him, and I questioned whether, if this was personal data of constituents, his processing was compliant with his obligations under the Data Protection Act 1998. Paul Bernal blogged as well, and Paul was quoted in a subsequent article in The Times (which now seems to have been moved, or removed), in which Carswell defended himself against allegations of illegality
“Any data that the Conservative Party gathered while I was a member of the Conservative party is, was and must remain the property of the Conservative party.” He said that the suggestion that he had taken such information was “desperate briefing from within the Tory machine” and was extremely regrettable. The former MP did say, however, that he planned to use his own private data gathered during nine years as a Conservative MP. He insisted that he would not be sharing this with UKIP
With respect to Mr Carswell, this still doesn’t convince me that no data protection concerns exist. If by his “own private data”, he means information about constituents which is their personal data, then I would still argue that such use could potentially be in contravention of his civil obligations under the first and second principles in Schedule One to the Data Protection Act 1998. As I said previously
If constituents have given Carswell their details on the basis that it would be processed as part of his constituency work as a Conservative MP they might rightly be aggrieved if that personal data were then used by him in pursuit of his campaign as a UKIP candidate
Even if he didn’t share such data with UKIP, data protection obligations would clearly be engaged.
It seems to me that his quote to The Times was perhaps to refute any possible allegations that his use of data was criminal. A recent prosecution by the Information Commissioner’s Office (ICO) illustrates how taking personal data from one job, or one role, to another, without the consent of the data controller, can be a criminal offence. The offender was a paralegal at a Yorkshire solicitor’s practice who, before he left the firm, emailed himself (presumably to a private address) information, in the form of workload lists, file notes and template documents. However, the information also contained the personal data of over 100 clients of the firm. Accordingly, he was convicted of the offence at section 55 of the DPA, of (in terms) unlawfully obtaining personal data without the consent of the data controller. The fine was, as they tend to be for section 55 offences, small – £300, plus a £30 victim surcharge and £438.63 prosecution costs – but the offender’s future job prospects in the legal sector might be adversely affected.
The ICO’s Head of Enforcement Steve Eckersley is quoted, and though he talks in terms of “employees”, his words might well be equally applicable to people leaving elected posts
Employees may think work related documents that they have produced or worked on belong to them and so they are entitled to take them when they leave. But if they include people’s details, then taking them without permission is breaking the law
Mr Carswell was wise not to retain data for which the Conservative Party was data controller. But I’m still not sure about the (non-criminal) implications of his use of data for which he is data controller.
Filed under crime, Data Protection, Information Commissioner
Helping the ICO (but will the ICO accept the help?)
I think the ICO should consider operating a priority alert system when well-informed third-parties alert them to exposures of personal data. They certainly shouldn’t leave those third parties to do in-depth investigation.
My attention was recently drawn to the existence of sensitive personal data being made available online. Google’s bots are brute things, and will effectively cache anything they can, such as data exposed by an unsecured ftp server, and that is what appears to have happened in this case. I looked at the names of the files and folders exposed, and I felt very uncomfortable. I don’t want to see this information, and the people involved certainly wouldn’t want me to. Furthermore, neither would the data controller – a voluntary service organisation. And section 55 of the Data Protection Act 1998 (DPA) creates, in terms, an offence of knowingly obtaining personal data without the consent of the data controller. Admittedly, if doing so is justified as being in the public interest, then the elements of the offence are not made out, but my feeling was very much that, having seen very briefly the extent of the inadvertent exposure, I should go so far, and no further.
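It is worth seeing just how little “brute” effort is involved. The sketch below (the listing, filenames and class name are all invented for illustration) does essentially what a crawler does when it encounters an unsecured server’s directory listing: extract every link, ready to fetch and cache each file in turn, with no judgement at all about what the files contain:

```python
from html.parser import HTMLParser

# A hypothetical directory listing of the kind an unsecured web or ftp
# server might expose. A search engine's crawler will happily fetch a
# page like this and follow every link on it.
LISTING = """
<html><body><h1>Index of /case-files</h1>
<a href="../">../</a>
<a href="client-notes-2014.doc">client-notes-2014.doc</a>
<a href="referrals.xls">referrals.xls</a>
</body></html>
"""


class LinkExtractor(HTMLParser):
    """Collects every href on the page, as a crawler would."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                # Skip the parent-directory link; keep everything else.
                if name == "href" and value != "../":
                    self.links.append(value)


parser = LinkExtractor()
parser.feed(LISTING)
print(parser.links)  # ['client-notes-2014.doc', 'referrals.xls']
```

A dozen lines of standard-library code enumerates every exposed file; an indexing bot then fetches and caches each one, which is why inadvertent exposure so quickly becomes public availability.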
But what to do then? The short answer is to alert the data controller and refer the matter to the Information Commissioner’s Office (ICO). The ICO’s duties are to regulate and enforce the DPA, and to promote the following of good practice by data controllers. Although their website is predicated on the basis that a person reporting a concern will have a direct interest in the situation, it is still possible to report a third-party concern. However, when I recently reported the fact that a local authority was exposing huge amounts of personal data as open data, the case officer firstly could not understand how the data in question allowed individuals to be identified, and secondly asked me to demonstrate it by providing screenshots. (I should add that I never received a reply from the local authority.) And I know of two other people who have been asked by the ICO to provide specific and detailed examples, such as screenshots, of exposed personal data. The problem with this is that it drags concerned third parties directly into potential illegality: taking and emailing screenshots of personal data is processing without the consent of the data controller, and will (or should) involve encryption (although the ICO doesn’t appear to offer this to third parties), and raises issues about retention. I’m not suggesting that people will be prosecuted for doing a beneficial civic act, but it is far from ideal.
As always, I understand and accept that the ICO is woefully underfunded. They can only afford to pay new case officers about £4.5k above the annual minimum wage. But I do think they should have a system in place for people to report serious exposures of personal data, and for those reports to be treated and investigated with some urgency. In my recent “open data” case, I received no acknowledgment of my concerns (other than an automated confirmation that my email had arrived), and the case officer, when I did get a reply, rather impatiently explained that their service standards mean “that if you have reported a concern to us you can expect to receive a response within 30 days”. But I noted that the Word document sent to me was called “ICO to DS raising concerns”. I presume “DS” means “data subject”, but, of course, that is not what I was in this case. A data subject raising concerns will, in the vast majority of cases, not be reporting the public exposure of large amounts of sensitive personal data; most often they will be complaining about a discrete incident involving their own data.
I have spoken to people who have reported what were quite clearly horrendous exposures of personal data, but by the time the ICO looked at the case the problem had either been rectified by the data controller or, for instance, the Google cache links had expired. On one view that is a good thing, but when it comes to the ICO’s regulatory role it effectively means that delays in considering these reports allow evidence of serious contraventions by data controllers to be erased.
Almost a year ago I was alerted to a horrendous exposure of highly sensitive personal data (I understand that, again, an unsecured FTP server was to blame). And I remember the frustration and consternation that I and others felt at the apparent delay by Newcastle Citizens Advice Bureau in getting the data removed from the web. I’m rather amazed we never heard anything from the ICO about that incident: did they complete their investigation? Did they take action? If not, how on earth did the CAB manage to persuade them there wasn’t a serious DPA contravention warranting enforcement action? And, as far as I know, the CAB branch never acknowledged what had happened, nor apologised for it, nor thanked those who had alerted them to the situation.
There are many expert and well-informed people who are prepared to alert data controllers and the ICO to potentially harmful exposures of personal data. Could there not be some sort of priority alert system? (If necessary, it could be through some sort of “trusted third-party” list.) If data controllers, but particularly if the ICO, are not willing to embrace the sort of public-spiritedness which identifies and alerts them to exposures of personal data, then it’s a poor lookout for data subjects.
