Around this time last year I wrote two blog posts about two separate police forces’ decisions to tweet the names of drivers charged with (but not – yet, at least – convicted of) drink driving offences. In the latter example Staffordshire police were actually using the hashtag #drinkdriversnamedontwitter, and I argued that
If someone has merely been charged with an offence, it is contrary to the ancient and fundamental presumption of innocence to shame them for that fact. Indeed, I struggle to understand how it doesn’t constitute contempt of court to do so, or to suggest that someone who has not been convicted of drink-driving is a drink driver. Being charged with an offence does not inevitably lead to conviction. I haven’t been able to find statistics relating to drink-driving acquittals, but in 2010 16% of all defendants dealt with by magistrates’ courts were either acquitted or not proceeded against
The Information Commissioner’s Office investigated whether there had been a breach of the first principle of Schedule One of the Data Protection Act 1998 (DPA), which requires that processing of personal data be “fair and lawful”, but decided to take no action after Staffs police agreed not to use the hashtag again, saying
Our concern was that naming people who have only been charged alongside the label ‘drink-driver’ strongly implies a presumption of guilt for the offence. We have received reassurances from Staffordshire Police the hashtag will no longer be used in this way and are happy with the procedures they have in place. As a result, we will be taking no further action.
But my first blog post had raised questions about whether the mere naming of those charged was in accordance with the same DPA principle. Newspaper articles talked of naming and “shaming”, but where is the shame in being charged with an offence? I wondered why Sussex police didn’t correct those newspapers who attributed the phrase to them.
And this year, Sussex police, as well as neighbouring Surrey, and Avon and Somerset, are doing the same thing: naming drivers charged with drink driving offences on twitter or elsewhere online. The media happily describe this as a “naming and shaming” tactic, and I have not seen the police disabusing them, although Sussex police did at least enter into a dialogue with me and others on twitter, in which they assured us that their actions were in pursuit of open justice, and that they were not intending to shame people. However, this doesn’t appear to tally with the understanding of the Sussex Police and Crime Commissioner, who said earlier this year
I am keen to find out if the naming and shaming tactic that Sussex Police has adopted is actually working
But I also continue to question whether the practice is in accordance with police forces’ obligations under the DPA. Information relating to the commission or alleged commission by a person of an offence is that person’s sensitive personal data, and for processing to be fair and lawful a condition in both Schedule Two and, particularly, Schedule Three must be met. And I struggle to see which Schedule Three condition applies – the closest is probably
The processing is necessary…for the administration of justice
But “necessary” in this context is a strong word. The European Court of Human Rights has held that

while the adjective “necessary”, within the meaning of article 10(2) [of the European Convention on Human Rights] is not synonymous with “indispensable”, neither has it the flexibility of such expressions as “admissible”, “ordinary”, “useful”, “reasonable” or “desirable” and that it implies the existence of a “pressing social need.”
and it has been held domestically that “necessary” in this context should reflect the meaning attributed to it by the European Court of Human Rights when justifying an interference with a recognised right, namely that there should be a pressing social need and that the interference was both proportionate as to means and fairly balanced as to ends.

It is hard to identify a pressing social need to name people who have merely been charged, and the forces’ own statements point to a different purpose. Supt Corrigan says
This is another tool in our campaign to stop people driving while under the influence of drink or drugs. If just one person is persuaded not to take to the road as a result, then it is worthwhile as far as we are concerned.
and Sussex police’s Chief Inspector Natalie Moloney says
I hope identifying all those who are to appear in court because of drink or drug driving will act as a deterrent and make Sussex safer for all road users
which firstly fails to use the word “alleged” before “drink or drug driving”, and secondly – like Supt Corrigan’s statement – suggests that the purpose of naming is not to promote open justice, but rather to deter drink drivers.
Deterring drink driving is certainly a worthy public aim (and I stress that I have no sympathy whatsoever with those convicted of such offences) but should the sensitive personal data of those who have not been convicted of any offence be used to their detriment in pursuance of that aim?
I worry that unless such naming practices are scrutinised, and challenged when they are unlawful and unfair, the practice will spread, and social “shame” will increasingly be visited on the innocent. I hope the Information Commissioner investigates.
The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.
Filed under Data Protection, human rights, Information Commissioner, Open Justice, police, social media
ICO confirm they are considering enforcement action over #samaritansradar app
FOI response from ICO refuses disclosure of correspondence with Samaritans because it could prejudice ongoing investigations
On 12 November I asked the Information Commissioner’s Office to disclose to me, under the Freedom of Information Act (FOIA), information relating to their assessment of the legality of the “Samaritans Radar” app (see blog posts passim).
The ICO have now responded to me, refusing to disclose because of the FOIA exemption for “law enforcement”. As the ICO say
The exemption at section 31(1)(g) of the FOIA refers to circumstances where the disclosure of information “would, or would be likely to, prejudice – … the exercise by any public authority of its functions for any of the purposes specified in subsection (2).”

The purposes referred to in sections 31(2)(a) and (c) are –

“(a) the purpose of ascertaining whether any person has failed to comply with the law” and

“(c) the purpose of ascertaining whether circumstances which would justify regulatory action in pursuance of any enactment exist or may arise…”

Clearly, these purposes apply when the Information Commissioner is considering whether or not an organisation has breached the Data Protection Act
But the exemption is subject to a public interest test, and the ICO acknowledge that there is public interest in the matter, particularly in how Samaritans have responded to their enquiries. Nonetheless, as the investigation is ongoing, and as no decision has apparently been made about whether enforcement action should be taken, the balance in the public interest test falls on the side of non-disclosure.
The question of potential enforcement action is an interesting one. Although the ICO have the power to serve monetary penalty notices (to a maximum of £500,000), they can also issue enforcement notices, requiring organisations (which are data controllers, as I maintain Samaritans were for the app) to cease, or not begin, processing personal data for specific purposes. They can also ask data controllers to sign undertakings to take or not take specific action. This is of interest because Samaritans have indicated that they might want to launch a reworked version of the app.
It is by no means certain that enforcement action will result – the ICO are likely to be reluctant to enforce against a generally admirable charity – but the fact that it is being considered is in itself of interest.
The ICO acknowledge that the public interest in maintaining this particular exemption wanes once the specific investigation has been completed. Consequently I have asked them, outwith FOIA, to commit to disclosing this information proactively once the investigation has finished. They have no obligation to do so, but it would be to the benefit of public transparency, which their office promotes, if they did.
The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.
Watching the detective
The ICO might be mostly powerless to take action against the operators of the Russian web site streaming unsecured web cams, but the non-domestic users of the web cams could be vulnerable to enforcement action
The Information Commissioner’s Office (ICO) warned yesterday of the dangers of failing to secure web cams which are connected to the internet. This was on the back of stories about a Russian-based web site which aggregates feeds from thousands of compromised cameras worldwide.
This site was drawn to my attention a few weeks ago, and, although I tweeted obliquely about it, I thought it best not to identify it because of the harm it could potentially cause. However, although most news outlets didn’t identify the site, the cat is now, as they say, out of the bag. No doubt this is why the ICO chose to issue sensible guidance on network security in its blog post.
I also noticed that the Information Commissioner himself, Christopher Graham, rightly pointed to the difficulties in shutting down the site, and the fact that it is users’ responsibility to secure their web cams:
It is not within my jurisdiction, it is not within the European Union, it is Russia.
I will do what I can but don’t wait for me to have sorted this out.
This is, of course, true, and domestic users of web cams would do well to note the advice. Moreover, this is just the latest of these aggregator sites to appear. But news reports suggested that some of the 500-odd (or was it 2000-odd?) feeds on the site from the UK were from cameras of businesses or other non-domestic users (I saw a screenshot, for instance, of a feed from a pizza takeaway). Those users, if their web cams are capturing images of identifiable individuals, are processing personal data in the role of a data controller. And they can’t claim the exemption in the Data Protection Act 1998 (DPA) that applies to processing for purely domestic purposes. They must, therefore, comply with the seventh data protection principle, which requires them to take appropriate measures to safeguard against unauthorised or unlawful processing of personal data. Allowing one’s web cam to be compromised and its feed streamed on a Russian website is a pretty good indication that one is not complying with the seventh principle. Serious contraventions of the obligation to comply with the data protection principles can, of course, lead to ICO enforcement action, such as monetary penalty notices, to a maximum of £500,000.
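What do “appropriate technical and organisational measures” look like in practice? At the very least, not leaving a camera’s factory-default login in place, which is reportedly how most of these feeds came to be aggregated. By way of a minimal sketch (the address and credential pairs are invented, and it assumes a camera exposing an HTTP admin page protected by basic authentication), a business could audit its own cameras along these lines:

```python
# Illustrative sketch only: check YOUR OWN camera for a factory-default
# login. The address and credential pairs below are invented examples,
# and the check assumes an HTTP admin page using basic authentication.
import requests

CAMERA_URL = "http://192.168.1.50/"  # hypothetical local camera address
DEFAULT_CREDENTIALS = [("admin", "admin"), ("admin", "1234"), ("admin", "")]

def accepts_default_login(url: str) -> bool:
    """Return True if the camera accepts any of the default credential pairs."""
    for username, password in DEFAULT_CREDENTIALS:
        try:
            response = requests.get(url, auth=(username, password), timeout=5)
        except requests.RequestException:
            continue  # unreachable or timed out; try the next pair
        if response.status_code == 200:
            return True  # a default login was accepted
    return False

if __name__ == "__main__":
    if accepts_default_login(CAMERA_URL):
        print("Default credentials accepted: change the password now.")
    else:
        print("No default credentials accepted.")
```

A camera that fails even this crude check is, in seventh principle terms, very unlikely to count as adequately secured.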
The ICO is not, therefore, completely powerless here. Arguably it should be (maybe it is?) looking at the feeds on the site to determine which are from non-domestic premises, and looking to take appropriate enforcement action against them. So to that extent, one is rather watching Mr Graham, to see if he can sort this out.
Filed under Data Protection, Information Commissioner, Privacy
The voluntary data controller
One last post on #samaritansradar. I hope.
I am given to understand that Samaritans, having pulled their benighted app, have begun responding to the various legal notices people served on them under the Data Protection Act 1998 (specifically, these were notices under section 7 (subject access), section 10 (right to prevent processing likely to cause damage or distress) and section 12 (rights in relation to automated processing)). I tweeted my section 12 notice, but I doubt I’ll get a response to that, because they’ve never engaged with me once on twitter or elsewhere.
However, I have been shown a response to a section 7 request (which I have permission to blog about) and it continues to raise questions about Samaritans’ handling of this matter (and indeed, their legal advice – which hasn’t been disclosed, or even really hinted at). The response, in relevant part, says
We are writing to acknowledge the subject access request that you sent to Samaritans via DM on 6 November 2014. Samaritans has taken advice on this matter and believe that we are not a data controller of information passing through the Samaritans Radar app. However, in response to concerns that have been raised, we have agreed to voluntarily take on the obligations of a data controller in an attempt to facilitate requests made as far as we can. To this end, whilst a Subject Access Request made under the Data Protection Act can attract a £10 fee, we do not intend to charge any amount to provide information on this occasion.
So, Samaritans continue to deny being data controller for #samaritansradar, although they continue merely to assert this, without offering any legal analysis. But, notwithstanding their belief that they are not a controller, they are taking on the obligations of a data controller.
I think they need to be careful. A person who knowingly discloses personal data without the consent of the data controller potentially commits a criminal offence under section 55 DPA. One can’t just step in, grab personal data and start processing it, without acting in breach of the law. Unless one is a data controller.
And, in seriousness, this purported adoption of the role of “voluntary data controller” just bolsters the view that Samaritans have been data controllers from the start, for reasons laid out repeatedly on this blog and others. They may have acted as joint data controller with users of the app, but I simply cannot understand how they can claim not to have been determining the purposes for which and the manner in which personal data were processed. And if they were, they were a data controller.
Filed under Data Protection, social media
Do your research. Properly
Campaigning group Big Brother Watch have released a report entitled “NHS Data Breaches”. It purports to show the extent of such “breaches” within the NHS, but it fails properly to define its terms, and uses very questionable methodology. Most worryingly, I think this sort of flawed research could lead to a reluctance on the part of public sector data controllers to monitor and record data security incidents.
As I checked my news alerts over a mug of contemplative coffee last Friday morning, the first thing I noticed was an odd story from a Bedfordshire news outlet:
Bedford Hospital gets clean bill of health in new data protection breach report, unlike neighbouring counties…From 2011 to 2014 the hospital did not breach the data protection act once, unlike neighbours Northampton where the mental health facility recorded 346 breaches, and Cambridge University Hospitals which registered 535 (the third worst in the country).
Elsewhere I saw that one NHS Trust had apparently breached data protection law 869 times in the same period, but many others, like Bedford Hospital, had not done so once. What was going on – are some NHS Trusts so much worse in terms of legal compliance than others? Are some staffed by people unaware of, and unconcerned about, patient confidentiality? No. What was going on was that campaigning group Big Brother Watch had released a report with flawed methodology, a misrepresentation of the law and flawed conclusions, which I fear could actually lead to poorer data protection compliance in the future.
I have written before about the need for clear terminology when discussing data protection compliance, and of the confusion which can be caused by sloppiness. The data protection world is very fond of the word “breach”, or “data breach”, and it can be a useful term to describe a data security incident involving compromise or potential compromise of personal data, but confusion arises because it can also be used to describe, or be assumed to refer to, a breach of the law, a breach of the Data Protection Act 1998 (DPA). But a data security incident is not necessarily a breach of a legal obligation in the DPA: the seventh data protection principle in Schedule One requires that
Appropriate technical and organisational measures shall be taken [by a data controller] against unauthorised or unlawful processing of personal data and against accidental loss or destruction of, or damage to, personal data
And section 4(4) of the DPA obliges a data controller to comply with the Schedule One data protection principles. This means that when appropriate technical and organisational measures are taken but unauthorised or unlawful processing, or accidental loss or destruction of, or damage to, personal data nonetheless occurs, the data controller is not in breach of its obligations (at least under the seventh principle). This distinction between a data security incident and a breach, or contravention, of legal obligations is one that the Information Commissioner’s Office (ICO) itself has sometimes failed to appreciate (as the First-tier Tribunal found in the Scottish Borders Council case EA/2012/0212). Confusion only increases when one takes into account that under the Privacy and Electronic Communications (EC Directive) Regulations 2003 (PECR), which are closely related to the DPA and which deal with data security in – broadly – the telecoms arena, there is an actual legislative provision (regulation 2, as amended) which talks in terms of a “personal data breach”, which is
a breach of security leading to the accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access to, personal data transmitted, stored or otherwise processed in connection with the provision of a public electronic communications service
and regulation 5A obliges a relevant data controller to inform the ICO when there has been a “personal data breach”. It is important to note, however, that a “personal data breach” under PECR will not be a breach, or contravention, of the seventh DPA data protection principle, provided the data controller took appropriate technical and organisational measures to safeguard the data.
Things get even more complex when one bears in mind that the draft European General Data Protection Regulation proposes a similar approach to PECR’s, and defines a “personal data breach” in similar terms to the above (simply removing the words “in connection with the provision of a public electronic communications service“).
Notwithstanding this, the Big Brother Watch report is entitled “NHS Data Breaches”, so one would hope that it would be clear about its own terms. It has led to a lot of coverage, with media outlets picking up on headline-grabbing claims of “7225 breaches” in the NHS between 2011 and 2014, equivalent to “6 breaches a day”. But when one looks at the methodology used, serious questions arise about the research. It used Freedom of Information requests to all NHS Trusts and Bodies, and the actual request was in the following terms
1. The number of a) medical personnel and b) non-medical personnel that have been convicted for breaches of the Data Protection Act.
2. The number of a) medical personnel and b) non-medical personnel that have had their employment terminated for breaches of the Data Protection Act.
3. The number of a) medical personnel and b) non-medical personnel that have been disciplined internally but have not been prosecuted for breaches of the Data Protection Act.
4. The number of a) medical personnel and b) non-medical personnel that have resigned during disciplinary procedures.
5. The number of instances where a breach has not led to any disciplinary action.
The first thing to note is that, in broad terms, the only way that an individual NHS employee can “breach the Data Protection Act” is by committing a criminal offence under section 55 of unlawfully obtaining personal data without the consent of the (employer) data controller. All the other relevant legal obligations under the DPA are ones attaching to the NHS body itself, as data controller. Thus, by section 4(4) the NHS body has an obligation to comply with the data protection principles in Schedule One of the DPA, not individual employees. And so, except in the most serious of cases, where an employee acts without the consent of the employer to unlawfully obtain personal data, individual employees, whether medical or non-medical personnel, cannot as a matter of law “breach the Data Protection Act”.
One might argue that it is easy to infer that what Big Brother Watch meant to ask for was information about the number of times when actions of individual employees meant that their employer NHS body had breached its obligations under the DPA, and, yes, that is probably what was meant, but the incorrect terms and lack of clarity vitiated the purported research from the start. This is because NHS bodies have to comply with the NHS/Department of Health Information Governance Toolkit. This toolkit actually requires NHS bodies to record serious data security incidents even where those incidents did not, in fact, constitute a breach of the body’s obligations under the DPA (i.e. incidents might be recorded which were “near misses” or which did not constitute a failure of the obligation to comply with the seventh, data security, principle).
The results Big Brother Watch got in response to their ambiguous and inaccurately termed FOI request show that some NHS bodies clearly interpreted it expansively, to encompass all data security incidents, while others – those with zero returns in any of the fields, for instance – clearly interpreted it restrictively. In fact, in at least one case an NHS Trust highlighted that its return included “near misses”, but these were still categorised by Big Brother Watch as “breaches”.
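Those divergent returns are exactly what one would expect when the counting rule is left undefined. A toy illustration (all data invented) of how one and the same incident log yields very different “breach” counts depending on the definition applied:

```python
# Toy illustration: one invented incident log, two "breach" counts,
# depending on whether near misses and no-fault incidents are included.

incident_log = [
    {"incident": "letter posted to wrong address", "dpa_contravention": True},
    {"incident": "file left on desk, retrieved within minutes",      # near miss
     "dpa_contravention": False},
    {"incident": "encrypted laptop stolen",                          # appropriate measures taken
     "dpa_contravention": False},
]

# Expansive reading: every recorded incident counts as a "breach"
expansive = len(incident_log)

# Restrictive reading: only actual contraventions of the DPA count
restrictive = sum(1 for entry in incident_log if entry["dpa_contravention"])

print(expansive, restrictive)  # prints: 3 1
```

On these invented figures, a body answering expansively reports three times as many “breaches” as an identical body answering restrictively, which is precisely the comparison the report’s league table invites.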
And this is not unimportant: data security and data protection are of immense importance in the NHS, which has to handle huge amounts of highly sensitive personal data, often under challenging circumstances. Awful contraventions of the DPA do occur, but so too do individual and unavoidable instances of human error. The best data controllers will record and act on the latter, even though they don’t give rise to liability under the DPA, and they should be applauded for doing so. Naming and shaming NHS bodies on the basis of such flawed research methodology might well achieve Big Brother Watch’s aim of publicising its call for greater sanctions for criminal offences, but I worry that it might lead to some data controllers being wary of recording incidents, for fear that they will be disclosed and misinterpreted in the pursuit of questionable research.
Filed under Data Protection, Freedom of Information, Information Commissioner, NHS
So farewell then #samaritansradar…
…or should that be au revoir?
With an interestingly timed announcement (18:00 on a Friday evening) Samaritans conceded that they were pulling their much-heralded-then-much-criticised app “Samaritans Radar”, and, as if some of us didn’t feel conflicted enough criticising such a normally laudable charity, their Director of Policy Joe Ferns managed to get a dig in, hidden in what was purportedly an apology:
We are very aware that the range of information and opinion, which is circulating about Samaritans Radar, has created concern and worry for some people and would like to apologise to anyone who has inadvertently been caused any distress
So, you see, it wasn’t the app, and the creepy feeling of having all one’s tweets closely monitored for potentially suicidal expressions, which caused concern and worry and distress – it was all those nasty people expressing a range of information and opinion. Maybe if we’d all kept quiet the app could have continued on its unlawful and unethical merry way.
However, although the app has been pulled, it doesn’t appear to have gone away
We will…be testing a number of potential changes and adaptations to the app to make it as safe and effective as possible for both subscribers and their followers
There is a survey at the foot of this page which seeks feedback and comment. I’ve completed it, and would urge others to do so. I’ve also given my name and contact details, because one of my main criticisms of the launch of the app was that there was no evidence that Samaritans had taken advice from anyone on its data protection implications – and I’m happy to provide such advice for no fee. As Paul Bernal says, “[Samaritans] need to talk to the very people who brought down the app: the campaigners, the Twitter activists and so on”.
Data protection law’s place in our digital lives is of profound importance, and of profound interest to me. Let’s not forget that its genesis in the 1960s and 1970s was in the concerns raised by the extraordinary advances that computing brought to data analysis. For me, some of the most irritating counter-criticism during the recent online debates about Samaritans Radar came from people who equated what the app did with mere searching of tweets, or searching for keywords. As I said before, the sting of this app lay in the overall picture – it was developed, launched and promoted by Samaritans – and in the overall processing of data which went on – it monitored tweets, identified potentially worrying ones and pushed this information to a third party, all without the knowledge of the data subject.
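To see why the “it’s just searching tweets” argument fails, it may help to consider what even a crude version of such a pipeline involves. This is a purely hypothetical sketch, not Samaritans’ actual code: the keyword list and the helper functions standing in for the Twitter API and the app’s notifications are all invented:

```python
# Hypothetical sketch of the kind of processing at issue; NOT the app's
# actual code. The keyword list and helper functions are invented.

WORRYING_PHRASES = ["hate myself", "can't go on", "want to end it all"]

def looks_worrying(tweet_text: str) -> bool:
    """Naive keyword classifier standing in for whatever heuristics the app used."""
    text = tweet_text.lower()
    return any(phrase in text for phrase in WORRYING_PHRASES)

def monitor(followed_accounts, fetch_recent_tweets, alert_subscriber):
    """Scan every followed account's tweets and push flagged ones to the subscriber.

    fetch_recent_tweets(account) is a placeholder yielding tweet texts, and
    alert_subscriber stands in for the app's notification mechanism.
    """
    for account in followed_accounts:
        for tweet_text in fetch_recent_tweets(account):
            if looks_worrying(tweet_text):
                # The tweet's author is never asked or told: the inference
                # about their state of mind goes to a third party.
                alert_subscriber(account, tweet_text)
```

However crude, the point stands: the output is not a search result returned to the searcher, but an inference about an identifiable person’s state of mind, generated and pushed to a third party without that person’s knowledge. That overall processing, not the searching, is what the DPA bites on.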
But also irritating were comments from people who told us that other organisations do similar analytics, for commercial reasons, so why, the implication went, shouldn’t Samaritans do it for virtuous ones? It is no secret that an enormous amount of analysis takes place of information on social media, and people should certainly be aware of this (see Adrian Short’s excellent piece here for some explanation), but the fact that it can and does take place a) doesn’t mean that it is necessarily lawful, nor that the law is impotent within the digital arena, and b) doesn’t mean that it is necessarily ethical. And for both those reasons Samaritans Radar was an ill-judged experiment that should never have taken place as it did. If any replacement is to be both ethical and lawful a lot of work, and a lot of listening, needs to be done.
The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.
Filed under Data Protection, social media
DCMS consulting on lower threshold for “fining” spammers
UPDATE: 08.11.14
Rich Greenhill has spotted another odd feature of this consultation. Options one and two both use the formulation “the contravention was deliberate or the person knew or ought to have known that there was a risk that the contravention would occur”; however, option three omits the words “…or ought to have known”. This is surely a typo, because if it were a deliberate omission it would effectively mean that penalties could not be imposed for negligent contraventions (only deliberate contraventions, or those where the person actually knew of the risk, would qualify). I understand Rich has asked DCMS to clarify this, and I will update as and when he hears anything.
END UPDATE
UPDATE: 04.11.14
An interesting development of this story was how many media outlets and commentators reported that the consultation was about lowering the threshold to “likely to cause annoyance, inconvenience or anxiety”, ignoring in the process that the preferred option of DCMS and the ICO was for no harm threshold at all. Christopher Knight, on 11KBW’s Panopticon blog, kindly amended his piece when I drew this point to his attention. He did, however, observe that most of the consultation paper, and DCMS’s website, appeared predicated on the assumption that the lower-harm threshold was at issue. Today, Rich Greenhill informs us all that he has spoken to DCMS, and that their preference is indeed for a “no harm” approach: “Just spoke to DCMS: govt prefers PECR Option 3 (zero harm), its PR is *wrong*”. How very odd.
END UPDATE
The Department of Culture, Media and Sport (DCMS) has announced a consultation on lowering the threshold for the imposition of financial sanctions on those who unlawfully send electronic direct marketing. They’ve called it a “Nuisance calls consultation”, which, although they explain that it applies equally to nuisance text messages, emails etc., doesn’t adequately describe what could be an important development in electronic privacy regulation.
When, a year ago, the First-tier Tribunal (FTT) upheld the appeal by spam texter Christopher Niebel against the £300,000 monetary penalty notice (MPN) served on him by the Information Commissioner’s Office (ICO), it put the latter in an awkward position. And when the Upper Tribunal dismissed the ICO’s subsequent appeal, there was binding authority on the limits to the ICO’s power to serve MPNs for serious breaches of the Privacy and Electronic Communications (EC Directive) Regulations 2003 (PECR). There was no dispute that, per the mechanism at section 55A of the Data Protection Act 1998 (DPA), adopted by PECR by virtue of regulation 31, Niebel’s contraventions were serious and deliberate, but what was at issue was whether they were “of a kind likely to cause substantial damage or substantial distress”. The FTT held that they were not – no substantial damage would be likely to arise and when it came to distress
the effect of the contravention is likely to be widespread irritation but not widespread distress…we cannot construct a logical likelihood of substantial distress as a result of the contravention.
When the Upper Tribunal agreed with the FTT, and the ICO’s Head of Enforcement said it had “largely [rendered] our power to issue fines for breaches of PECR involving spam texts redundant”, it seemed clear that, for the time being at least, there was in effect a green light for spam texters and, by extension, other spam electronic marketers. The DCMS consultation is in response to calls for a change in the law from the ICO and others, such as the All Party Parliamentary Group (APPG) on Nuisance Calls, the Direct Marketing Association and Which?.
The consultation proposes three options – 1) do nothing, 2) lower the threshold from “likely to cause substantial damage or substantial distress” to “likely to cause annoyance, inconvenience or anxiety”, or 3) remove the threshold altogether, so any serious and deliberate (or reckless) contravention of the PECR provisions would attract the possibility of a monetary penalty. The third option is the one favoured by DCMS and the ICO.
If either of the second or third options is ultimately enacted, this could, I feel, lead to a significant reduction in the prevalence of spam marketing. The consultation document notes that (despite the fact that the MPN was overturned on appeal) the number of unsolicited spam SMS text messages sent fell significantly after the Niebel MPN was served. A robust and prominent campaign of enforcement, under a legislative scheme which makes it much easier to impose penalties to a maximum of £500,000 and much more difficult to appeal them, could put many spammers out of business, and discourage others. This will be subject, of course, both to the willingness and the resources of the ICO. The consultation document notes that there might be “an expectation that [MPNs] would be issued by the ICO in many more cases than its resources permit” but the ICO has said (according to the document) that it is “ready and equipped to investigate and progress a significant number of additional cases with a view to taking greater enforcement action including issuing more CMPs”.
There appears to be little resistance (as yet, at least) to the idea of lowering or removing the penalty threshold. Given that, and given the ICO’s apparent willingness to take on the spammers, we may well see a real and significant attack on the scourge. Of course, this only applies to identifiable spammers in the domestic jurisdiction – let’s hope it doesn’t just drive an increase in non-traceable, overseas spam.
If at first you don’t succeed…
The Information Commissioner’s Office (ICO) has uploaded to its website (24 October) two undertakings for breaches of data controllers’ obligations under the Data Protection Act 1998 (DPA). Undertakings are part of the ICO’s suite of possible enforcement actions against controllers.
One undertaking was signed by Gwynedd Council, after incidents in which social care information was posted to the wrong address, and a social care file went missing in transit between two sites. The other, more notably, was signed by the Disclosure and Barring Service (DBS), which signed a previous undertaking in March this year, after failing to amend a question (“e55”) on its application form which had been rendered obsolete by legislative changes. The March undertaking noted that
Question e55 of the application form asked the individuals ‘Have you ever been convicted of a criminal offence or received a caution, reprimand or warning?’ [Some applicants] responded positively to this question even though it was old and minor caution/conviction information that would have been filtered under the legislation. The individual’s positive response to question e55 was then seen by prospective employers who withdrew their job offers
This unnecessary disclosure was, said the ICO, unfair processing of sensitive personal data, and the undertaking committed DBS to amend the question on the form by the end of March.
However, the latest undertaking reveals that
application forms which do not contain the necessary amendments remain in circulation. This is because a large number of third party organisations are continuing to rely on legacy forms issued prior to the amendment of question e55. In the Commissioner’s view, the failure to address these legacy forms could be considered to create circumstances under which the unfair processing of personal data arises
The March undertaking had also committed DBS to ensure that supporting information provided to those bodies with access to the form be
kept under review to ensure that they continue to receive up to date, accurate and relevant guidance in relation to filtered matters
One might cogently argue that part of that provision of up-to-date guidance should have involved ensuring that those bodies destroyed old, unamended forms. And if one did argue that successfully, one would arrive at the conclusion that DBS could be in breach of the March undertaking for failing to do so. Breach of an undertaking does not automatically result in more serious sanctions, but they are available to the ICO, in the form of monetary penalties and enforcement notices. DBS might consider themselves lucky to have been given a second (or third?) chance, under which they must, by the end of the year at the latest, ensure that unamended legacy application forms containing the old question are either rejected or removed from circulation.
One final point I would make is that no press release appears to have been put out about yesterday’s undertakings, nothing is on the ICO’s home page, and there wasn’t even a tweet from their twitter account. A large part of a successful enforcement regime is publicising when action has been taken. The ICO’s own policy on this says
Publicising our enforcement and regulatory activities is an important part of our role as strategic regulator, and a deterrent for potential offenders
Letting “offenders” off the publicising hook runs the risk of diminishing that deterrent effect.
Filed under Data Protection, enforcement, Information Commissioner, undertaking
