SWEET F.A.
Go and learn some economics. Something’s value is determined by what people are prepared to pay for it, and no one wants to buy your Twitter account. Don’t be so greedy.
Filed under nonsense, social media
The Sunday Times reports that a billion patient records have been sold to a marketing consultancy. Is it time for an independent review of these highly questionable data sharing practices?
In 2012, at the behest of the then Secretary of State for Health, Andrew Lansley (driver of the Health and Social Care Act 2012), Dame Fiona Caldicott chaired a review of information governance in the NHS. Her report, which focused on the issue of sharing of information, was published in April 2013. At the time a statement in it, referring to the Information Commissioner’s Office (ICO) stood out to me, and it stands out even more now, but for different reasons. It says
The ICO told the Review Panel that no civil monetary penalties have been served for a breach of the Data Protection Act due to formal data sharing between data controllers in any organisation for any purpose
At the time, I thought “Well duh” – of course the ICO is not going to take enforcement action where there has been a formal data sharing agreement, because, clearly, the parties entering into such an agreement are going to make sure they do so lawfully, and with regard to the ICO guidance on data sharing – lawful and proportionate data sharing is, er, lawful, so the ICO wouldn’t be able to take action.
But now, with the frequent and worrying stories emerging of apparent data sharing arrangements between the NHS Information Centre (NHSIC), and its successor, the Health and Social Care Information Centre (HSCIC), I am starting to think the ICO’s comments are remarkable for what they might reveal: that the ICO was looking in the wrong direction, when it should have been paying more attention to the lawfulness of huge-scale data sharing arrangements between the NHS and private bodies. And now, The Sunday Times reports that
A BILLION NHS records containing details of patients’ hospital admissions and operations have been sold to a marketing consultancy working for some of the world’s biggest drug companies
I think it is time for a wholesale, properly funded review, by the ICO as independent regulator, of these “formal data sharing” arrangements. They appear to have a questionable legal basis, resting to a large extent on questionable assumptions and assurances that pseudonymisation equates to anonymisation (which anyone who looks into the subject will realise is nonsense).
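The point that pseudonymisation does not equate to anonymisation can be illustrated with a small sketch. The data, field names and “marketing list” below are entirely hypothetical, and this is not a description of any actual NHS extract; it simply shows the well-known linkage problem: stripping the direct identifier (an NHS number) still leaves quasi-identifiers (postcode, date of birth) through which a record can be matched against an outside dataset.

```python
# Illustrative sketch (hypothetical data): why pseudonymised data is not anonymous.
import hashlib

def pseudonymise(record):
    # Replace the direct identifier with a salted hash ("pseudonym").
    out = dict(record)
    out["id"] = hashlib.sha256(
        ("secret-salt" + record["nhs_number"]).encode()).hexdigest()[:12]
    del out["nhs_number"]
    return out

# A "pseudonymised" hospital episode still carries quasi-identifiers.
episode = pseudonymise({
    "nhs_number": "943 476 5919",
    "postcode": "LS1 4AP",          # quasi-identifier
    "date_of_birth": "1957-03-02",  # quasi-identifier
    "procedure": "hip replacement",
})

# Someone holding an outside dataset (say, a marketing list) can link on
# the quasi-identifiers alone -- no NHS number needed.
marketing_list = [
    {"name": "A. Patient", "postcode": "LS1 4AP", "date_of_birth": "1957-03-02"},
    {"name": "B. Other", "postcode": "M1 1AE", "date_of_birth": "1980-07-19"},
]

matches = [p for p in marketing_list
           if (p["postcode"], p["date_of_birth"]) ==
              (episode["postcode"], episode["date_of_birth"])]
print(matches[0]["name"])  # the "anonymous" patient is re-identified
```

With rich hospital episode data and widely available reference datasets, the quasi-identifiers need not even be this obvious for linkage to work.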
And I think the review should also consider how and why these arrangements appear to have deliberately been taking place behind the backs of the patients whose data has been “shared”.
Yesterday’s data breach involving Morrisons supermarket and its staff payroll illustrates how difficult it is to handle such incidents properly, and perhaps provides some learning points for the future. But it also raises issues about what constitutes a “data breach”.
What do we mean by “data breach”, “personal data breach”, “data security breach” etc?
The draft European General Data Protection Regulation (GDPR), which continues to slouch its way towards implementation, says in its current form that
In the case of a personal data breach, the controller shall without undue delay notify the personal data breach to the supervisory authority [and]
When the personal data breach is likely to adversely affect the protection of the personal data, the privacy, the rights or the legitimate interests of the data subject, the controller shall, after the notification referred to in Article 31, communicate the personal data breach to the data subject without undue delay
“without undue delay” is, by virtue of (current) recital 67, said to be “not later than 72 hours” (in the original draft it was “where feasible, within 24 hours”). However, “personal data breach” is not defined – it is suggested rather that the proposed European Data Protection Board will set guidelines etc. for determining what a “breach” is.

What is not clear to me is whether a “breach” is to be construed as “a breach of the data controller’s legal obligations under this Regulation”, or, more generally, “a breach of data security”. Certainly under the current domestic scheme there is, I would argue, confusion about this. A “breach of data security” is not necessarily equivalent to a breach of the Data Protection Act 1998 (DPA). To give a ludicrous example: if a gunman holds a person hostage, and demands that they decrypt swathes of personal data from a computer system and give it to them, then it is hard to see that the data controller has breached the DPA, which requires only that “appropriate technical and organisational measures shall be taken against unauthorised or unlawful processing of personal data and against accidental loss or destruction of, or damage to, personal data” (which clearly cannot be construed as an unlimited obligation) – but there has most certainly been a breach of data security.
It is unclear whether Morrisons chose to inform the Information Commissioner (ICO) about their incident, but the wording they’ve used to describe it suggests they are seeing this not as a breach of their obligations under the DPA, but as a potentially criminal act of which they were the victim: on their Facebook page they describe it as an “illegal theft of data” and that they are liaising with “the police and highest level of cyber crime authorities” (a doughnut to anyone who can explain to me what the latter is, by the way). If an offence has been committed under section 55 of the DPA (or possibly under the Computer Misuse Act 1990) there is a possible argument that the data controller is not at fault (although sometimes the two can go together – as I discuss in a recent post). Morrisons make no mention of the ICO, although I have no doubt that they (ICO) will now be aware and making enquiries. And, if Morrisons’ initial assessment was that they hadn’t breached the DPA (i.e. that they had taken the appropriate technical and organisational measures to mean they were not in breach of the seventh DPA principle), they might quite understandably argue that there was no need to inform the ICO, who, after all, regulates only compliance with the DPA and not broader issues around security breaches. There was certainly no legal obligation under current law for Morrisons to self-notify. Plenty of data controllers do, often ones in the public sector (the NHS Information Governance toolkit even automatically delivers a message to the ICO if an NHS data controller records a qualifying incident) but even the ICO’s guidance is unclear as to the circumstances which would trigger the need to self-notify. Their guidance is called “Notification of data security breaches to the ICO” but in the overview at the very start of that guidance it says
Report serious breaches of the seventh principle
a breach of security leading to the accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access to, personal data transmitted, stored or otherwise processed in connection with the provision of a public electronic communications service
The people whose data was apparently compromised in the Morrisons “breach” were its staff – it was payroll information which was allegedly stolen and misused. It appears that Morrisons emailed those staff with internal email addresses (how many checkout staff and shelf-stackers have one of those?) and then, as any modern, forward-thinking organisation might, it posted a message on its Facebook page.

However, I really wonder about that as a strategy. The comments on that Facebook page seem to be threatening to turn the incident into a personnel and public communications disaster, with many people saying they had heard nothing until they read the message. Moreover, one wonders to what extent some staff might have been misled, or have misled themselves, into assuming that the comments they were posting were on some closed forum or network. As was suggested to me on Twitter yesterday, some of the comments look to be career-limiting ones, but by engaging on its social media platform, might Morrisons be seen to have encouraged that sort of robust response from employees?
Much of this still has to play out – notably whether there was any contravention of the DPA by Morrisons – but, in a week when their financial performance came under close scrutiny, their PR handling of this “data breach” will also be looked at very closely by other data controllers for lessons in case they are ever faced with a similar situation.
The Guardian reports that
A police force faces a fine from the information commissioner and compensation claims from thousands of motorists after an officer stole accident victims’ details from a police computer and sold them on to personal injury solicitors
The crime here was shocking: the ex-officer, with a co-conspirator, accessed accident victims’ records on police systems, and then rang them, posing as a car repairs company, urging them to claim compensation. She would then pass the information to solicitors for a referral fee. Because there is currently no custodial sentence available for offences under the Data Protection Act 1998 (DPA), and because she was a public officer, she was prosecuted for the offence of misconduct in a public office, and sentenced to three and a half years’ imprisonment (her co-conspirator received three years).
But what interests me is the Guardian’s suggestion, prompted it seems by comments made in court, that the employing police force (Thames Valley Police), as data controller, is potentially to face civil claims from aggrieved individuals and civil enforcement action from the Information Commissioner’s Office (ICO). For the force to be liable to either of these, it must be shown to have contravened its obligations under the DPA. And, contrary to what many people think, the mere fact that a data controller has lost, or had stolen, personal data, does not mean ineluctably that it has contravened the DPA.
The seventh principle of the DPA provides
Appropriate technical and organisational measures shall be taken against unauthorised or unlawful processing of personal data and against accidental loss or destruction of, or damage to, personal data
and an allegation of a failure to do so (and hence of a contravention of the obligation, at section 4(4), to comply with the eight DPA principles) is likely to be the basis of any civil action.
Moreover, for civil enforcement, in the form of a monetary penalty notice (MPN), under section 55A, to be taken by the ICO, the contravention must be a “serious” one, “of a kind likely to cause significant damage or significant distress”, and the data controller has to have known there was a risk of such a contravention happening, but to have failed to take reasonable steps to prevent it. This presents a series of boxes for the ICO to tick before enforcement action, and his experience in having an MPN recently overturned by the First-tier Tribunal (Information Rights) (FTT) will have shown how potentially onerous it is to serve one successfully. In that instance, the FTT found that, although Scottish Borders Council had committed a serious contravention of the seventh principle, in allowing its contractor to dispose of pensions records insecurely, it was not of a kind likely to cause significant damage or significant distress (the FTT was unimpressed by the ICO’s claim that data subjects were put at risk of identity fraud).
The test for successful civil claims for compensation (under section 13 DPA) to be brought by data subjects against a data controller is not so onerous, however. All that a claimant needs to show is that there has been “any contravention of any requirements of the Act” by a data controller which has caused the claimant to suffer damage (note that it doesn’t have to have been a “serious” contravention, and the damage doesn’t have to have been serious, but it must have been real damage, not merely the likelihood of such). If the claimant can prove she has suffered damage, she may also be able to claim for consequent distress (the law as it stands does not permit compensation for distress alone).
But, if the personal data in question has been compromised, or lost, through no attributable fault of the data controller, then no liability can attach to the controller. This may often be the case with a “rogue employee”, and is the reason that, often, criminal prosecution of an individual will not run in parallel with civil claims or enforcement action against a data controller. I blogged recently on the contrary position, arguing that, where an individual escapes criminal liability for data loss, civil liability might well attach to the data controller instead. And, of course, the two can run in parallel – Tim Turner blogged last week on the civil MPN served on the British Pregnancy Advisory Service, after it was subject to a criminal act not by a rogue employee, but by a hacker. As Tim suggests, being the victim of a criminal act does not give you a shield against enforcement action, when you are shown to have allowed the criminal act to happen through contravening your obligations under the DPA.
In the case of Thames Valley Police, it may well be that there are details which were available to the court but not made public, and I do not intend to speculate on the chances of successful civil claims or enforcement action, but it will be an interesting case to watch develop.
David Evans is Senior Policy Officer at the Information Commissioner’s Office (ICO). In an interview with “The Information Daily.com” uploaded on 12 March, he spoke about data sharing in general, and specifically about care.data (elsewhere on this blog passim). There’s a video of his interview, which has a backdrop with adverts for “Boilerhouse Health” and “HCI Daily”, both of which appear to be communications companies offering services to the health sector. David says
care.data…the overall project is very good because it’s all about making better use of information in the health service…what care.data appear to have done is failed to get that message across
Oddly, this view, that if only the people behind care.data had communicated its benefits better it would have sailed through, is very similar to that expressed by Tim Kelsey, NHS National Director for Patients and Information and head cheerleader for care.data. Tim said, for instance, after the announcement of a (further) six-month delay in implementation
We have been told very clearly that patients need more time to learn about the benefits of sharing information and their right to object to their information being shared
Both David and Tim are right that there has been a failure of communication, but I think it is completely wrong to see it merely as a failure to communicate the benefits. Any project involving the wholesale upload of confidential medical records, to be processed and disclosed, at various levels of deidentification, to third parties, is going to involve risk, and will necessitate explanation and mitigation of that risk. What the public have so far had communicated to them is plenty about the benefits, but very little about the risks, and the organisational and technical measures being taken by the various bodies involved to mitigate or contain that risk. Tim Gough has argued eloquently for a comprehensive and independent Privacy Impact Assessment to be undertaken, while criticising the one that was published in January:
To be fair, NHS England did publish a PIA in January 2014, which does appear a little late in the day for a project of this kind. It also glosses over information which is extremely important to address in full detail. Leaving it out makes it look like something is being hidden
As far as I am aware there has been no official response to this (other than a tweet from Geraint Lewis referring us to our well-thumbed copies of the ICO’s nearly-superseded PIA Handbook).
To an extent I can understand Tim Kelsey feeling he and his colleagues need to do more to communicate the benefits of care.data – after all, it’s their job to deliver it. But I do have real concerns that a senior officer at the ICO thinks that public concerns can be allayed through yet more plugging of the benefits, with none of the detailed reassurances and legal and technical justifications whose absence has been so strongly noted.
In passing, I note that, other than a message from their very pleasant Senior Press Officer for my blog, I have had no acknowledgement from the ICO of my request for them to assess the lawfulness of previous health data upload and linking.
UPDATE: 14.03.14
The ICO has kindly acknowledged receipt of my request for assessment, saying it has been passed to their health sector team for “further detailed consideration”.
Filed under care.data, Data Protection, data sharing, Information Commissioner, NHS
The Court of Appeal has ordered disclosure of private correspondence between Prince Charles and the government. The judgment is potentially a triumph for transparency, but I have my doubts whether it reflects Parliament’s intentions when passing the FOI Act. And there will be a further appeal…
In September 2012 the Administrative Appeals Chamber of the Upper Tribunal (UT) handed down a judgment which struck me then, as it does now, as a remarkable work of research and scholarship. It was ruling on requests by the Guardian journalist Rob Evans – made as far back as April 2005 – under the Freedom of Information Act 2000 (FOIA) and the Environmental Information Regulations 2004 (EIR) for disclosure of information in private letters sent by the Prince of Wales to government ministers on matters of official policy. The UT’s judgment ran to 65 pages with three annexes, went into detailed analysis of constitutional conventions regarding the heir to the throne, and its decision was that the correspondence should be disclosed (overturning the prior decisions of the Information Commissioner (IC)). Subsequently, the Attorney General issued a certificate under section 53 FOIA – a “ministerial veto” – whose effect was to disapply the UT’s decision. The Attorney General’s certificate, in rather wider-spaced text, ran to ten pages.
Section 53 requires only that the accountable person (a minister)
gives the [Information] Commissioner a certificate signed by him stating that he has on reasonable grounds formed the opinion [that there had not been a failure to comply with the FOIA]
It is, as I’ve argued before, a bludgeon of an executive weapon, but it is, as are all acts of public authorities, potentially amenable to judicial review. So it was that, in the absence of any statutory right of appeal, the Guardian made such an application. However, in July 2013, the High Court effectively decided that, although the ministerial power to override a superior court of record (let alone the statutory decision-maker, in the form of the IC) appeared to be a “constitutional aberration”, the proposition that “the accountable person is not entitled simply to prefer his own view to that of the tribunal” must be rejected. As Davis LJ said (para 111)
why not? It is inherent in the whole operation of s.53 that the accountable person will have formed his own opinion which departs from the previous decision (be it of Information Commissioner, tribunal or court) and may certify without recourse to an appeal. As it seems to me, therefore, disagreement with the prior decision (be it of Information Commissioner, tribunal or court) is precisely what s.53 contemplates, without any explicit or implicit requirement for the existence of fresh evidence or of irrationality etc. in the original decision which the certificate is designed to override
However, Davis LJ refused to accept that the wording of section 53 (“…stating that he has on reasonable grounds formed the opinion…”) permitted of an interpretation that:
the accountable person can, as it were, self-certify as to the availability of reasonable grounds
rather,
In my view, the language chosen clearly is sufficient to connote that an objective test is to be applied
But how to conduct that objective test? For Davis LJ, it must be that the reasonable grounds are “cogent”:
if an accountable person is to interfere, by way of exercise of the power of executive override, with the decision of an independent judicial body then that accountable person must be prepared and able to justify doing so. I am reluctant to talk in terms of burden of proof. But in terms of burden of argument the burden is in practice on the accountable person to show that the grounds for certifying are reasonable
Lord Dyson in the Court of Appeal has taken issue with this, saying (para 38) that
I do not consider that it is reasonable for an accountable person to issue a section 53(2) certificate merely because he disagrees with the decision of the tribunal. Something more is required […]
a material change of circumstances since the tribunal decision or that the decision of the tribunal was demonstrably flawed in fact or in law
As Dr Mark Elliot argues, Lord Dyson here “adopted a significantly more exacting conception of reasonableness” than had the High Court, and I would commend Dr Elliot’s piece to you as an expert analysis I am not competent to give.
However – and it pains me to say it, because I really don’t like section 53 – wasn’t it precisely Parliament’s intention that the accountable person did “merely” have to state that he had formed – on reasonable grounds – a different opinion to the preceding tribunal? If he cannot arrive at a different opinion, in the absence of “something else”, isn’t section 53 fundamentally weakened, even sidestepped? Indeed, Lord Dyson in my view arrives at this point, when he says
On the approach of the Divisional Court to section 53(2), the accountable person can override the decision of an independent and impartial tribunal which (i) is reasonable, (ii) is the product of a detailed examination (fairly conducted) of the issues after an adversarial hearing at which all parties have been represented and (iii) is not challenged on appeal. All that is required is that the accountable person gives sensible and rational reasons for disagreeing with the tribunal’s conclusion. If section 53(2) has that effect, it is a remarkable provision not only because of its constitutional significance (the point emphasised by the Divisional Court), but also because it seriously undermines the efficacy of the rights of appeal accorded by sections 57 and 58 of the FOIA
No doubt we shall see this explored more – the Attorney General is reported to have sought, and been given, leave to appeal to the Supreme Court.
Recent news reports show that 24 English local authorities are using, to varying degrees, a form of “lie detector” to try to assess levels of stress in potential benefit claimants’ voices on the telephone. I question whether its use is compliant with data protection obligations.
Responding to freedom of information (FOI) requests, 24 local authorities confirmed they had employed or were considering the use of “voice risk analysis” (VRA) software, which its makers say can pick out fraudulent claimants by listening in on calls and identifying signs of stress.
The efficacy and reliability of VRA have frequently been called into question, and, in 2010, after expensive trials, the Department for Work and Pensions (DWP) scrapped plans to introduce it nationwide. The DWP said at the time that they
conducted the research to investigate whether VRA worked when applied to the benefit system. From our findings we cannot conclude that VRA works effectively and consistently in the benefits environment. The evidence is not compelling enough to recommend the use of VRA within DWP
The DWP did, however, avoid pronouncing on the effectiveness of “the technological aspects”. And, notably, the then Minister of State (the benighted Chris Grayling) left the field open for councils to continue to use it:
Local authorities can continue to use voice risk analysis at their own discretion and at their own expense
I had rather assumed though that they wouldn’t bother, given the controversy the subject causes, but I am shown to have been a bit naive.
However, if the evidential basis for the efficacy of VRA is weak, why are councils using it? The information divulged during telephone calls is going to constitute personal data (information which is being processed by means of equipment operating automatically in response to instructions given for that purpose which relates to a living individual who can be identified from those data (Section 1(1) Data Protection Act 1998 (DPA))) and it will be being “processed” by the councils, who, as “data controllers” must comply (per section 4(4) DPA) with the data protection principles in Schedule One to the DPA.
The first data protection principle says that personal data must be processed “fairly and lawfully”. Councils would have to satisfy themselves that it is “fair” to use questionable technology to assess potential claimants, which might wrongly categorise them as potentially fraudulent (I would say they would also have to satisfy themselves that it is financially sensible, given that people who are fraudulent might be wrongly categorised as low-risk). Equally, the fourth data protection principle requires that “personal data shall be accurate…” – but if Francisco Lacerda, head of linguistics at Stockholm University, was correct when he told the Guardian
There’s no scientific basis for this method. From the output it generates this analysis is closer to astrology than science
I struggle to see how processing data in this way can meet the first and fourth principle obligations.
But, ironically, my main concern about the use of this controversial technology relates to those people whose data potentially won’t be processed: the vulnerable, the uncertain, the otherwise-oppressed, who might feel intimidated into not applying for benefits for fear of being wrongly categorised. On a broader level, beyond first data protection principle “fairness”, it doesn’t seem fair, in a general sense, to operate systems in this way.
Filed under Data Protection
On 28 February the Information Commissioner’s Office (ICO) served a Monetary Penalty Notice (MPN), pursuant to powers under section 55A of the Data Protection Act 1998 (DPA), on the British Pregnancy Advisory Service (BPAS), in the sum of £200,000 (which would be reduced to £160,000 if promptly paid). The ICO’s news release explains
An ICO investigation found the charity didn’t realise its own website was storing the names, address, date of birth and telephone number of people who asked for a call back for advice on pregnancy issues. The personal data wasn’t stored securely and a vulnerability in the website’s code allowed [a] hacker to access the system and locate the information.
The hacker threatened to publish the names of the individuals whose details he had accessed, though that was prevented after the information was recovered by the police following an injunction obtained by the BPAS
The back story to this is that the hacker in question was subsequently jailed for 32 months for offences under the Computer Misuse Act 1990 (no doubt the prosecutors recognised that the criminal sanctions under the DPA were too weedy to bother with).
The section 55A DPA powers are triggered where there has been a qualifying serious contravention by a data controller of its obligations under section 4(4) to comply with the data protection principles in Schedule One. The most pertinent of these in the instant case (and in the large majority of ICO MPNs) was the seventh
Appropriate technical and organisational measures shall be taken against unauthorised or unlawful processing of personal data and against accidental loss or destruction of, or damage to, personal data
which extends to the need to, when contracting with someone to process data on your behalf, require them to take equivalent security measures and evidence this contractual provision in writing. As the ICO’s MPN says
BPAS failed to take appropriate technical and organisational measures against the unauthorised processing of personal data stored on the BPAS website such as having a detailed specification about the parameters of the CMS to ensure that either the website did not store any personal data or alternatively, that effective and appropriate security measures were applied such as storing administrative passwords securely; ensuring stated standards of communication confidentiality were met; carrying out appropriate security testing on the website which would have alerted them to the vulnerabilities that were present or ensuring that the underlying software supporting the website was kept up to date
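The MPN does not spell out the precise coding flaw that the hacker exploited, so the following is only a hedged illustration of one very common class of website vulnerability that exposes stored personal data: SQL injection. The table, field names and payload are hypothetical, not a reconstruction of the BPAS site; the sketch shows why the kind of security testing the MPN refers to matters, and why parameterised queries are the standard defence.

```python
# Hypothetical illustration of an injectable lookup vs a parameterised one.
# This is NOT the actual BPAS code or schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE callbacks (name TEXT, phone TEXT)")
conn.execute("INSERT INTO callbacks VALUES ('Jane Doe', '0113 496 0000')")

def lookup_unsafe(name):
    # Vulnerable: user input is spliced into the SQL string, so input like
    # "x' OR '1'='1" changes the query's logic and returns every row.
    return conn.execute(
        "SELECT * FROM callbacks WHERE name = '%s'" % name).fetchall()

def lookup_safe(name):
    # Parameterised: the driver treats the input strictly as data, never SQL.
    return conn.execute(
        "SELECT * FROM callbacks WHERE name = ?", (name,)).fetchall()

payload = "x' OR '1'='1"
print(len(lookup_unsafe(payload)))  # 1 -- the stored row leaks
print(len(lookup_safe(payload)))    # 0 -- no match, nothing leaks
```

Routine penetration or security testing of the kind the MPN describes would be expected to flag exactly this sort of flaw before an attacker does.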
(Interestingly, the MPN also makes clear that there was a contravention of the fifth principle – which provides that “personal data processed for any purpose or purposes shall not be kept for longer than is necessary for that purpose or those purposes”. This was because “the call back details were kept for five years longer than was necessary for [BPAS’s] purposes”).
The original crime was a particularly nasty one – the offender appears to have had an ideological, or at least personal, opposition to abortion in general, and the apparently very real threat to publish people’s details, given to BPAS in highly sensitive circumstances, is probably what elevated the BPAS contravention to a level which justifies such a high sum being served on a charity. However, BPAS have announced that they intend to appeal, and their press release about this is interesting. It suggests that the appeal will be not about the issuing of the MPN, but about its amount (section 55B(5) DPA permits appeals on either basis):
We accept that no hacker should have been able to steal our data but we are horrified by the scale of the fine
but it goes on to make the valid point that, by serving an MPN of this large amount, the ICO potentially gives the offender something that he wanted – to harm the charity:
It is appalling that a hacker who acted on the basis of his opposition to abortion should see his actions rewarded in this way
This, though, seems to be a matter of ethics, rather than law, but it will be interesting to note if the argument makes it in some form into the grounds of appeal. More likely, if the challenge is to be made solely on the amount (under section 55B(5)(b)), focus will fall on to the suggestion that
This fine seems out of proportion when compared with those levelled against other organisations who were not themselves the victims of a crime
Of course, by a circular argument, the “fine” would not have been served, if the data controller had not, by its omissions, permitted itself to be a victim of the crime.
An extra frisson is caused when one considers the compelling argument by the solicitor-advocates for Scottish Borders Council, who successfully helped the latter win an appeal of an MPN last year. Although their argument – that MPNs were more correctly to be considered criminal, as opposed to civil, penalties – did not fall to be decided by the First-tier Tribunal, it did observe that
One general question hovering over this appeal is whether proceedings in respect of monetary penalties are “criminal” in nature. There are certainly enough indications, not least in the title of the amending statute, [the Criminal Justice and Immigration Act 2008] to make an arguable case for them being so…We have concluded that there is no need for us to make any decision or pronouncement in the abstract; but there is a need for us to be vigilant to ensure that the proceedings are fair
If this line of argument continues to be developed – that recipients of MPNs are entitled to the equivalent right to a fair hearing under Article 6 of the European Convention on Human Rights afforded to those accused of crimes – then MPNs, and the circumstances and manner in which they are served, may be subject to a much greater level of scrutiny, and the cash-strapped ICO may find itself under even more pressure from legal challenges.
These issues may be aired, and possibly determined, in the forthcoming appeal before the Upper Tribunal concerning the MPN served on Christopher Niebel and subsequently overturned by the First-tier Tribunal.
Failings in records management hampered the Ellison Review. In the absence of legal enforcement mechanisms, we should recognise the importance of records managers
It is a truism that good records management is essential to good information rights practice. Section 46 of the Freedom of Information Act 2000 requires the Lord Chancellor to issue a records management code of practice, and the code itself says
Freedom of information legislation is only as good as the quality of the records and other information to which it provides access. Access rights are of limited value if information cannot be found when requested or, when found, cannot be relied upon as authoritative
Similarly, records management is embedded in the principles of Schedule One to the Data Protection Act 1998, particularly those relating to adequacy, accuracy and retention of personal data.
But Mark Ellison QC’s report following The Stephen Lawrence Independent Review throws even sharper focus on how important records management can be in the service of justice, and the rule of law. Ellison’s Review was not a statutory inquiry, and thus did not have the legal powers to search records, or compel production of information (although its terms of reference did say that it should be given access to all necessary files). However, it appears to have been hampered by what look like failings in records management. The report notes that
a number of potentially important areas of documentation…have not been provided to us. The explanation for this absence varies between:
a) a suspicion (or sometimes hard evidence) that they have been destroyed;
b) a belief that they must exist but cannot be found; or
c) that there simply is no record available and no way of knowing if one was ever made
Note that none of these explanations gives an indication that information has been deliberately withheld, so the subsequent announcement by the Home Secretary that there will now be a public inquiry (with full legal powers to gather information) into the infiltration methods of undercover police does not necessarily mean that the information gap will be filled.
The revelations of the disgraceful “spying” on the Lawrence family during the initial Macpherson inquiry into Stephen’s death are, of course, the most important outcome of the Ellison Review. However, what unnerves me about the Ellison Review’s difficulties in getting information is that they starkly show that a failure to follow good records management practice potentially enables corruption and illegality to be covered up, and that there is a lack of enforcement and regulatory mechanisms to prevent or punish this. The criminal sanctions regarding wilful destruction or withholding of information under FOIA apply only if the actions occur following the submission of a FOIA request, and, under the DPA, criminal sanctions only apply to unlawful obtaining or disclosure of personal data: destruction or hiding of information is unlikely to be a criminal act, in the absence of other factors.
I think this shows that Records Managers hold an exceptionally important role, one which is vital for organisational governance and compliance, and one which is sadly not recognised by some organisations. Records Managers should sit on information governance boards, should have a hotline to the Chief Information Officer, Head of Legal, Senior Information Risk Officer etc., and should be properly resourced and supported by those senior officers.
Stephen Lawrence would have been forty this year. The Stephen Lawrence Charitable Trust helps transform the lives of the young people it supports.
Filed under Data Protection, Freedom of Information, police, records management
Breaches of the DPA are not always about data security. I’m not sure NHS England have grasped this. Worse, I’m not sure the ICO understands public concern about what is happening with confidential medical information. They both need to listen.
Proponents of the care.data initiative have been keen to reassure us of the safeguards in place for any GP records uploaded to the Health and Social Care Information Centre (HSCIC) by saying that similar data from hospitals (Hospital Episode Statistics, or HES) has been uploaded safely for about two decades. Thus, Tim Kelsey, National Director for Patients and Information in the National Health Service, said on twitter recently that there had been
No data breach in SUS*/HES ever
I’ve been tempted to point out that this is a bit like a thief arguing that he’s been stealing from your pockets for twenty years, so why complain when you catch him stealing from your wallet? However, whether Tim’s claim is true or not partly depends on how you define a “breach”, and I suspect he is thinking of some sort of inadvertent serious loss of data, in breach of the seventh (data security) principle of the Data Protection Act 1998 (DPA). Whether there have been any of those is one issue, and, in the absence of transparency about how HES processing has been audited, I don’t know how he is so sure (an FOI request for audit information is currently stalled, while HSCIC consider whether commercial interests are, or are likely to be, prejudiced by disclosure). But data protection is not all about data security, and the DPA can be “breached” in other ways. As I mentioned last week, I have asked the Information Commissioner’s Office to assess the lawfulness of the processing surrounding the apparent disclosure of a huge HES dataset to the Institute and Faculty of Actuaries, whose Society prepared a report based on it (with HSCIC’s logo on it, which rather tends to undermine their blaming the incident on their NHSIC predecessors). My feeling is that this has nothing, or very little, to do with data security – I am sure the systems used were robust and secure – but a lot to do with some of the other DPA principles, primarily the first (processing must be fair and lawful and have an appropriate Schedule 2 and Schedule 3 condition) and the second (“Personal data shall be obtained only for one or more specified and lawful purposes”).
Since the story about the actuarial report, at least three other possible “breaches” have come to light. They are listed in this Register article, but it is the first that has probably caused the most concern. It appears that the entire HES dataset, pseudonymised (not, note, anonymised) of around one terabyte, was uploaded to Google storage, and processed using Big Query. An apparently rather unconcerned statement from HSCIC (maybe they’ll blame their predecessors again, if necessary) said
The NHS Information Centre (NHS IC) signed an agreement to share pseudonymised Hospital Episodes Statistics data with PA Consulting in November 2011…PA Consulting used a product called Google BigQuery to manipulate the datasets provided and the NHS IC was aware of this. The NHS IC had written confirmation from PA Consulting prior to the agreement being signed that no Google staff would be able to access the data; access continued to be restricted to the individuals named in the data sharing agreement
So that’s OK then? Well, not necessarily. Google’s servers (and, remember, “cloud” really means “someone else’s computer”) are dotted around the world, although mostly in the US, and when you upload data to the cloud, one of the problems (or benefits) is you don’t have, or don’t tend to think you have, a real say in where it is hosted. By a certain argument, this even makes the cloud provider, in DPA terms, a data controller, because it is partly determining “the manner in which any personal data are, or are to be, processed”. If the hosting is outside the European Economic Area the eighth DPA principle comes into play:
Personal data shall not be transferred to a country or territory outside the European Economic Area unless that country or territory ensures an adequate level of protection for the rights and freedoms of data subjects in relation to the processing of personal data
The rather excellent Tim Gough, who is producing some incredibly helpful stuff on his site, has a specific page on DPA and the cloud, and I commend it to you. Now, it may be that, because Google has conferred on itself “Safe Harbor” status, the eighth principle is deemed to have been complied with, but I’m not sure it’s that straightforward, because, in any case, Safe Harbor itself is currently of questionable status and assurance.
I don’t know if PA Consulting’s upload of HES data to the cloud was in compliance with their and NHSIC’s/HSCIC’s DPA obligations, but, then again, I’m not the regulator of the DPA. So, in addition to last week’s request for assessment, I’ve asked the ICO to assess this processing as well:
Hi again
I don’t yet have any reference number, but please note my previous email for reference. News has now emerged that the entire HES database may have been uploaded to some form of Google cloud storage. Would you also please assess this for compliance with the DPA? I am particularly concerned to know whether it was in compliance with the first, seventh and eighth data protection principles. This piece refers to the alleged upload to Google servers http://t.co/zWF2QprsTN
best wishes,
Jon
However, I’m now genuinely concerned by a statement from the ICO, in response to the news that they are to be given compulsory powers of audit of NHS bodies. They say (in the context of the GP data proposed to be uploaded under the care.data initiative)
The concerns around care.data come from this idea that the health service isn’t particularly good at looking after personal information
I’m not sure if they’re alluding to their own concerns, or the public’s, but I think the statement really misunderstands the public’s worries about care.data, and the use of medical data in general. From many, many discussions with people, and from reading more about this subject than is healthy, it seems to me that people have a general worry about, and objection to, their confidential medical information possibly being made available to commercial organisations, for the potential profit of the latter, and this concern stems from the possibility that such processing will lead to them being identified, and adversely affected. If the ICO doesn’t understand this, then I really think they need to start listening. And that, of course, also goes for NHS England.
*“SUS” refers to HSCIC’s, and its predecessor NHSIC’s, Secondary Uses Service
Filed under care.data, Data Protection, data sharing, Information Commissioner, NHS