Tag Archives: health

Hospital episode data – confidential data uploaded by mistake

Rather hidden away in the new IIGOP annual report is a worrying and revealing account of a serious data breach involving hospital episode data.

In February last year Tim Kelsey, NHS England’s National Director for Patients and Information, and vocal cheerleader for the care.data initiative, assured the public, in an interview on the Radio 4 Today programme, that in the twenty-five years that Hospital Episode Statistics (HES) have been shared with other organisations

the management of the hospital episode database…there has never been a single example of that data being compromised, the privacy of patients being compromised…

However, as Sir Nick Partridge’s Review of Data Releases by the NHS Information Centre in June of last year revealed, there had been

lapses in the strict arrangements that were supposed to be in place to ensure that people’s personal data would never be used improperly

As I said at the time

One waits with interest to see whether the [Information Commissioner’s Office (ICO)] will take any enforcement action, but I think it’s important that they consider doing so, because, even though Sir Nick makes nine very sensible recommendations to HSCIC, one could be forgiven – having been given clear assurances previously, by the likes of Tim Kelsey and others – for having reservations as to future governance of our confidential medical data

Now, with the launch of the first annual report of the Independent Information Governance Oversight Panel (IIGOP), chaired by Dame Fiona Caldicott and established at the request of the Secretary of State to “advise, challenge and report on the state of information governance across the health and care system in England”, we see further evidence of HES data “being compromised, the privacy of patients being compromised”. The report informs us of an incident whereby

New inspection procedures introduced by the HSCIC had uncovered a number of organisations which were sending HES data and failing to follow data dictionary standards. This meant they were inadvertently enabling personal confidential data to enter the data base. Following an alert to the Information Commissioners’ Office this was understood as a large scale problem, although having a low level potential impact, as the affected data fields were unknown to either senders or receivers of HES data. The relevant organisations were contacted to gain their cooperation in closing the breach, without alerting any unfriendly observer to the location of the confidential details. This was important to preserve the general ignorance of the detail of the breach and continue to protect individuals’ privacy. Trusts and others were encouraged to provide named contacts who would then start cleaning up their data flows to the HSCIC. In order to manage any untoward reporting in the media, trade titles were informed and briefed about the importance of restricting their reporting to avoid any risk of leading people towards this confidential data.

Now this seems to me pretty serious: the failure to “follow data dictionary standards” by data controller organisations sending HES data sounds very likely to be a contravention of those controllers’ obligation, under section 4(4) of the Data Protection Act 1998 (DPA), to comply with the seventh data protection principle, which requires that they take

Appropriate technical and organisational measures…against unauthorised or unlawful processing of personal data

Serious contraventions, of a kind likely to cause substantial damage or substantial distress, can result in the ICO serving a monetary penalty notice, under section 55A of the DPA, to a maximum of £500,000.

So, what does one make of these incidents? It’s hard to avoid the conclusion that they would be held to be “serious”, and, had the data in question been misused, there would have been the potential for substantial damage and substantial distress – public disclosure of hospital record data could have a multitude of pernicious effects – and this much is evidenced by the fact that (successful) attempts had to be made to stop the details of the errors coming to light, including asking journalists to restrict their reporting. But were they contraventions likely to cause these things? IIGOP suggests that they had a “low level potential impact” because the data was hidden within large amounts of non-offensive data, and I think it is probably the case that the incidents would not be held to have been likely to cause substantial damage or substantial distress (in Niebel, the leading case on monetary penalty notices, Wikeley J in the Upper Tribunal accepted that “likely” in s55A DPA took the same meaning attributed to it by Munby J in R (Lord) v Secretary of State for the Home Department [2003] EWHC 2073 (Admin), namely that “likely” meant something more than “a real risk”, i.e. a significant risk, “even if the risk falls short of being more probable than not”).

But a monetary penalty notice is not the only action open to the ICO. He has the power to serve enforcement notices, under s40 DPA, to require data controllers to do, or refrain from doing, specified actions, or to take informal action such as requiring the signing of undertakings (to similar effect). Given that we have heard about these incidents from IIGOP, and in an annual report, it seems unlikely that any ICO enforcement action will be forthcoming. Perhaps that’s correct as a matter of law and as a matter of the exercise of discretion, but in my view the ICO has not been vocal enough about the profound issues raised by the amalgamation and sharing of health data, and the concerns raised by incidents of potentially inappropriate or excessive processing. Care.data of course remains on the agenda, and the IIGOP report is both revealing and encouragingly critical of what has taken place so far, but one would not want a situation to emerge where the ICO took a back seat and allowed IIGOP (which lacks regulatory and enforcement powers) to deal with the issue.

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.


Filed under care.data, Data Protection, data sharing, Information Commissioner, NHS

The wrong test for anonymisation?

UPDATE: 23.01.15 The ICO has responded [.doc file] to my request for a review of their decision. I drew their attention to the arguments on this page but they don’t even mention them, let alone provide a counter-analysis, in dismissing my complaints (“Having reviewed the matter, I agree with the explanations provided”). I am invited by the ICO to consider taking my own legal action. I understand that the ICO and I might have differing views on a DPA matter, but what I find difficult to accept is the refusal even to enter into a discussion with me about the detailed arguments I’ve made. END UPDATE

In February this year I asked the Information Commissioner’s Office (ICO) to investigate reports that Hospital Episode Statistics (HES) data had apparently been sold to an actuarial society by the NHS Information Centre (NHSIC), the predecessor to the Health and Social Care Information Centre (HSCIC). Specifically I requested, as a data subject can under s42 of the Data Protection Act 1998 (DPA), that the ICO assess whether it was likely or not that the processing of my personal data by NHSIC and others had been in compliance with the DPA.

Nine months later, I was still awaiting the outcome. But a clue to how the assessment would turn out was contained in the text of Sir Nick Partridge’s six month review of various data releases by NHSIC (his original report in June seemed to me to point to multiple potential DPA contraventions). In the review document he says

Six investigations have been separately instigated by the HSCIC or Information Commissioner’s Office (ICO) and shared with both parties as these focussed on whether individuals were at risk of being identified. In the cases it has investigated, the ICO has upheld the HSCIC approach and informed us that it has “seen no evidence to suggest that re-identification has occurred or is reasonably likely to occur.”

And sure enough, after chasing the ICO for the outcome of my nine-month wait, I received this (in oddly formatted text, which rather whiffed of a lot of cutting-and-pasting)

Following the recent issue regarding HSCIC, PA Consulting, and Google we investigated the issue of whether HES data could be considered personal data. This detailed work involved contacting HSCIC, PA Consulting, and Google and included the analysis of the processes for the extraction and disclosure of HES data both generally and in that case in particular. We concluded that we did not consider that the HES dataset constitutes personal data.Furthermore we also investigated whether this information had been linked to other data to produce “personal data” which was subject to the provisions of the Act. We have no evidence that there has been any re-identification either on the part of PA Consulting or Google. We also noted that HSCIC have stated that the HES dataset does not include individual level patient data even at a pseudonymised level. Our view is that the data extracted and provided to PA Consulting did not identify any individuals and there was no reasonable likelihood that re-identification would be possible.
I have added the emphasis to the words “reasonable likelihood” above. They appear in similar terms in the Partridge Review, and they struck me as rather odd. An awful lot of analysis has taken place, and continues to take place, on the question of when personal data can be “rendered fully anonymous in the sense that it is information from which the data subject is no longer identifiable” (Lord Hope’s dicta in Common Services Agency v Scottish Information Commissioner [2008] UKHL 47). Some of that analysis has been academic; some takes the form of “soft law” guidance, for instance Opinion 05/2014 of the Article 29 Working Party, and the ICO Anonymisation Code of Practice. The former draws on the Data Protection Directive 95/46/EC, and notes that

Recital 26 signifies that to anonymise any data, the data must be stripped of sufficient elements such that the data subject can no longer be identified. More precisely, that data must be processed in such a way that it can no longer be used to identify a natural person by using “all the means likely reasonably to be used”

Anonymisation has also been subject to judicial analysis, notably in the Common Services Agency case, but, more significantly, in the judgment of Mr Justice Cranston in Department of Health v Information Commissioner ([2011] EWHC 1430). The latter case, involving the question of disclosure of late-term abortion statistics, is by no means an easy judgment to parse (ironically so, given that it makes roughly the same observation of the Common Services Agency case). The judge held that the First-tier Tribunal had been wrong to say that the statistics in question were personal data, but that it had on the evidence been entitled to say that “the possibility of identification by a third party from these statistics was extremely remote”. The fact that the possibility of identification by a third party was extremely remote meant that “the requested statistics were fully anonymised” (¶55). I draw from this that, for personal data to be anonymised in statistical form, the possibility of identification of individuals by a third party must be extremely remote. The ICO’s Anonymisation Code, however, says of the case:

The High Court in the Department of Health case above stated that the risk of identification must be greater than remote and reasonably likely for information to be classed as personal data under the DPA [emphasis added]

But this seems to me an impermissible description of the case – the High Court did not state what the ICO says it stated: the phrases “greater than remote” and “reasonably likely” do not appear in the judgment. And the phrase “reasonably likely” is one that, as I say, makes its way into the Partridge Review, and into the ICO’s assessment of the lawfulness of the HES data “sale”.

I begin to wonder if the ICO has taken the phrase from recital 26 of the Directive, which talks about the need to consider “all the means likely reasonably to be used” to identify an individual, and transformed it into a position from which, if identification is not reasonably likely, it will accept that data are anonymised. This cannot be right: there is a world of difference between a test which asks whether the possibility of identification is “extremely remote” and one which asks whether it is “reasonably likely”.
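Why the retained fields matter so much to that question can be seen in how a linkage attack works. The following is a deliberately toy sketch in Python – the field names and data are invented, and bear no relation to the real HES schema – showing that a record stripped of direct identifiers can still be re-identified where its remaining quasi-identifiers match exactly one person in some other, publicly available dataset:

```python
# Illustrative only: a toy linkage attack on a "de-identified" extract.
# All data and field names here are invented, not the real HES schema.

# A "de-identified" extract: direct identifiers removed, quasi-identifiers kept.
hospital_extract = [
    {"postcode_district": "OX1", "birth_year": 1954, "sex": "F", "diagnosis": "asthma"},
    {"postcode_district": "OX1", "birth_year": 1954, "sex": "M", "diagnosis": "diabetes"},
    {"postcode_district": "SW9", "birth_year": 1987, "sex": "F", "diagnosis": "fracture"},
]

# A public dataset (say, an electoral register) linking names to the
# same quasi-identifiers.
public_register = [
    {"name": "A. Example", "postcode_district": "SW9", "birth_year": 1987, "sex": "F"},
]

QUASI = ("postcode_district", "birth_year", "sex")

def link(extract, register, keys=QUASI):
    """Re-identify any record whose quasi-identifiers match exactly one person."""
    matches = []
    for person in register:
        hits = [r for r in extract if all(r[k] == person[k] for k in keys)]
        if len(hits) == 1:  # a unique match re-identifies the record
            matches.append((person["name"], hits[0]["diagnosis"]))
    return matches

print(link(hospital_extract, public_register))  # → [('A. Example', 'fracture')]
```

Whether such a unique match is “extremely remote” or “reasonably likely” in any given release depends on the richness of the retained fields and of the auxiliary data available, which is precisely why the choice between the two tests is not a quibble.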

I do not have a specific right to a review of the section 42 assessment decision that the processing of my personal data was likely to have been in compliance with NHSIC’s obligations under the DPA, but I have asked for one. I am aware, of course, that others complained later (après moi, le déluge) – notably, in March, FIPR, MedConfidential and Big Brother Watch. I suspect they will also be pursuing this.

In October this year I attended an event at which the ICO’s Iain Bourne spoke. Iain was a key figure in the drawing up of the ICO’s Anonymisation Code, and I took the rather cheeky opportunity to ask about the HES investigations. He said that his initial view was that NHSIC had been performing good anonymisation practice. This reassured me at the time, but now, after considering this question of whether the Anonymisation Code (and the ICO) adopts the wrong test on the risks of identification, I am less reassured. Maybe “reasonably likely that an individual can be identified” is an appropriate test for determining when data is no longer anonymised, and becomes personal data, but it does not seem to me that the authorities support it.

Postscript Back in August of this year I alerted the ICO to the fact that a local authority had published open data sets which enabled individuals to be identified (for instance, social care and housing clients). More than four months later the data is still up (despite the ICO saying they would raise the issue with the council): is this perhaps because the council has argued that the risk of identification is not “reasonably likely”?

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.


Filed under anonymisation, care.data, Data Protection, Directive 95/46/EC, Information Commissioner, NHS

Monitoring of blogs and lawful/unlawful surveillance

Tim Turner wrote recently about the data protection implications of the monitoring of Sara Ryan’s blog by Southern Health NHS Trust. Tim’s piece is an exemplary analysis of how the processing of personal data which is in the public domain is still subject to compliance with the Data Protection Act 1998 (DPA):

there is nothing in the Data Protection Act that says that the public domain is off-limits. Whatever else, fairness still applies, and organisations have to accept that if they want to monitor what people are saying, they have to be open about it

But it is not just data protection law which is potentially engaged by the Trust’s actions. Monitoring of social media and networks by public authorities for the purposes of gathering intelligence might well constitute directed surveillance, bringing us explicitly into the area of human rights law. Sir Christopher Rose, the Chief Surveillance Commissioner said, in his most recent annual report

my commissioners remain of the view that the repeat viewing of individual “open source” sites for the purpose of intelligence gathering and data collation should be considered within the context of the protection that RIPA affords to such activity

“RIPA” there of course refers to the complex Regulation of Investigatory Powers Act 2000 (RIPA) (parts of which were reputedly “intentionally drafted for maximum obscurity”)1. What is not complex, however, is to note which public authorities are covered by RIPA when they engage in surveillance activities. A 2006 statutory instrument2 removed NHS Trusts from the list (at Schedule One of RIPA) of relevant public authorities whose surveillance was authorised by RIPA. Non-inclusion on the Schedule One lists doesn’t as a matter of fact or law mean that a public authority cannot undertake surveillance. This is because of the rather odd provision at section 80 of RIPA, which effectively explains that surveillance is lawful if carried out in accordance with RIPA, but surveillance not carried out in accordance with RIPA is not ipso facto unlawful. As the Investigatory Powers Tribunal put it, in C v The Police and the Home Secretary IPT/03/32/H

Although RIPA provides a framework for obtaining internal authorisations of directed surveillance (and other forms of surveillance), there is no general prohibition in RIPA against conducting directed surveillance without RIPA authorisation. RIPA does not require prior authorisation to be obtained by a public authority in order to carry out surveillance. Lack of authorisation under RIPA does not necessarily mean that the carrying out of directed surveillance is unlawful.

But it does mean that where surveillance is not specifically authorised by RIPA questions would arise about its legality under Article 8 of the European Convention on Human Rights, as incorporated into domestic law by the Human Rights Act 1998. The Tribunal in the above case went on to say

the consequences of not obtaining an authorisation under this Part may be, where there is an interference with Article 8 rights and there is no other source of authority, that the action is unlawful by virtue of section 6 of the 1998 Act.3

So, when the Trust was monitoring Sara Ryan’s blog, was it conducting directed surveillance (in a manner not authorised by RIPA)? RIPA describes directed surveillance as covert (and remember, as Tim Turner pointed out – no notification had been given to Sara) surveillance which is “undertaken for the purposes of a specific investigation or a specific operation and in such a manner as is likely to result in the obtaining of private information about a person (whether or not one specifically identified for the purposes of the investigation or operation)” (there is a further third limb which is not relevant here). One’s immediate thought might be that no private information was obtained or intended to be obtained about Sara, but one must bear in mind that, by section 26(10) of RIPA “‘private information’, in relation to a person, includes any information relating to his private or family life” (emphasis added). This interpretation of “private information” of course is to be read alongside the protection afforded to the respect for one’s private and family life under Article 8. The monitoring of Sara’s blog, and the matching of entries in it against incidents in the ward on which her late son, LB, was placed, unavoidably resulted in the obtaining of information about her and LB’s family life. This, of course, is the sort of thing that Sir Christopher Rose warned about in his most recent report, in which he went on to say

In cash-strapped public authorities, it might be tempting to conduct on line investigations from a desktop, as this saves time and money, and often provides far more detail about someone’s personal lifestyle, employment, associates, etc. But just because one can, does not mean one should.

And one must remember that he was talking about cash-strapped public authorities whose surveillance could be authorised under RIPA. When one remembers that this NHS Trust was not authorised to conduct directed surveillance under RIPA, one struggles to avoid the conclusion that monitoring was potentially in breach of Sara’s and LB’s human rights.

1See footnote to Caspar Bowden’s submission to the Intelligence and Security Committee
2The Regulation of Investigatory Powers (Directed Surveillance and Covert Human Intelligence Sources) (Amendment) Order 2006
3This passage was apparently lifted directly from the explanatory notes to RIPA


Filed under Data Protection, human rights, NHS, Privacy, RIPA, social media, surveillance, surveillance commissioner

Articles on care.data

I thought I was rather flogging the care.data horse on this blog, so, in the spirit of persistence, I thought why not go and do it somewhere else? The Society of Computers and Law kindly asked me to write a broadly “anti” piece, while asking Martin Hoskins to do a broadly “pro” one. They are here:

Care.data the Cons
Care.data the Pros

I am pleased to announce that Martin and I are still on speaking terms.


Filed under care.data, Data Protection, data sharing, NHS

Opting patients out of care.data – in breach of data protection law?

The ICO appear to think that GPs who opt patients out of care.data without informing them would be breaching the Data Protection Act. They say it would be unfair processing.

In February of this year GP Dr Gordon Gancz was threatened with termination of his contract, because he had indicated he would not allow his patients’ records to be uploaded to the national health database which was planned to be created under the care.data initiative. He was informed that if he didn’t remove information from his website, and if he went on to add “opt-out codes” to patients’ electronic records, he would be in breach of the NHS (GMS contract) Regulations 2004. Although this threatened action was later withdrawn, and care.data put on hold for six months, Dr Gancz might have been further concerned to hear that in the opinion of the Information Commissioner’s Office (ICO) he would also have been in breach of the Data Protection Act 1998 (DPA).

A few weeks ago fellow information rights blogger Tim Turner (who has given me permission to use the material) asked NHS England about the basis for Health Services Minister Dan Poulter’s statement in Parliament that

NHS England and the Health and Social Care Information Centre will work with the British Medical Association, the Royal College of General Practitioners, the Information Commissioner’s Office and with the Care Quality Commission to review and work with GP practices that have a high proportion of objections [to care.data] on a case-by-case basis

Tim wanted to know what role the ICO would play. NHS England replied saying, effectively, that they didn’t know, but they did disclose some minutes of a meeting held with the ICO in December 2013. Those minutes indicate that

The ICO had received a number of enquiries regarding bulk objections from practices. Their view was that adding objection codes would constitute processing of data in terms of the Data Protection Act.  If objection codes had been added without writing to inform their patients then the ICO’s view was that this would be unfair processing and technically a breach of the Act so action could be taken by the ICO

One must stress that this is not necessarily a complete or accurate representation of the ICO’s views. However, what appears to be being said here is that, if GPs took the decision to “opt out” their patients from care.data without writing to inform them, this would be an act of “processing” according to the definition at section 1(1) of the DPA, and would not be compliant with the GPs’ obligations under the first DPA principle to process personal data fairly.

On a very strict reading of the DPA this may be technically correct – for processing of personal data to be fair, data subjects must be informed of the purposes for which the data are being processed, and, strictly, adding a code which would prevent an upload (which would otherwise happen automatically) would be processing of personal data. And, of course, the “fairness” requirement is absent from the proposed care.data upload itself, because Parliament, in its wisdom, decided to give the NHS the legal power to override it. But “fairness” requires a broad brush, and the ICO’s interpretation here would have the distinctly odd effect of rendering unlawful a decision to maintain the status quo, whereby patients’ GP data does not leave the confidential confines of their surgery. It would also have the effect of supporting NHS England’s apparent view that GPs who took such action would be liable to sanctions.

In fairness (geddit???!!) to the ICO, if a patient who wanted to be included in the care.data upload was opted out, then I agree that this would be in breach of the first principle, but it would be very easily rectified because, as we know, it will be simple to opt in to care.data from a previous position of “opt-out”. The converse doesn’t apply – once your data is uploaded it is uploaded in perpetuity (see my last bullet point here).

A number of GPs (and of course, others) have expressed great concern at what care.data means for the confidential relationship between doctor and patient, which is fundamental for the delivery of health care. In light of those concerns, and in the absence of clarity about the secondary uses of patient data under care.data, would it really be “unfair” to patients if GPs didn’t allow the data to be collected? Is that (outwith DPA) fair to GPs?


Filed under care.data, Confidentiality, Data Protection, data sharing, Information Commissioner, NHS

We thought you cared(ata)

David Evans is Senior Policy Officer at the Information Commissioner’s Office (ICO). In an interview with “The Information Daily.com” uploaded on 12 March, he spoke about data sharing in general, and specifically about care.data (elsewhere on this blog passim). There’s a video of his interview, which has a backdrop with adverts for “Boilerhouse Health” and “HCI Daily“, both of which appear to be communications companies offering services to the health sector. David says

care.data…the overall project is very good because it’s all about making better use of information in the health service…what care.data appear to have done is failed to get that message across

Oddly, this view, that if only the people behind care.data had communicated its benefits better it would have sailed through, is very similar to that expressed by Tim Kelsey, NHS National Director for Patients and Information and head cheerleader for care.data. Tim said, for instance, after the announcement of a (further) six-month delay in implementation

We have been told very clearly that patients need more time to learn about the benefits of sharing information and their right to object to their information being shared

Both David and Tim are right that there has been a failure of communication, but I think it is completely wrong to see it merely as a failure to communicate the benefits. Any project involving the wholesale upload of confidential medical records, to be processed and disclosed, at various levels of de-identification, to third parties, is going to involve risk, and will necessitate explanation and mitigation of that risk. What the public have so far had communicated to them is plenty about the benefits, but very little about the risks, and about the organisational and technical measures being taken by the various bodies involved to mitigate or contain those risks. Tim Gough has argued eloquently for a comprehensive and independent Privacy Impact Assessment to be undertaken, while criticising the one that was published in January:

To be fair, NHS England did publish a PIA in January 2014, which does appear a little late in the day for a project of this kind.  It also glosses over information which is extremely important to address in full detail. Leaving it out makes it look like something is being hidden

As far as I am aware there has been no official response to this (other than a tweet from Geraint Lewis referring us to our well-thumbed copies of the ICO’s nearly-superseded PIA Handbook).

To an extent I can understand Tim Kelsey feeling he and his colleagues need to do more to communicate the benefits of care.data – after all, it’s their job to deliver it. But I do have real concerns that a senior officer at the ICO thinks that public concerns can be allayed through yet more plugging of the benefits, with none of the detailed reassurances and legal and technical justifications whose absence has been so strongly noted.

In passing, I note that, other than a message from their very pleasant Senior Press Officer for my blog, I have had no acknowledgement from the ICO of my request for them to assess the lawfulness of previous health data upload and linking.

UPDATE: 14.03.14

The ICO has kindly acknowledged receipt of my request for assessment, saying it has been passed to their health sector team for “further detailed consideration”.


Filed under care.data, Data Protection, data sharing, Information Commissioner, NHS

Health data breaches – missing the point?

Breaches of the DPA are not always about data security. I’m not sure NHS England have grasped this. Worse, I’m not sure the ICO understands public concern about what is happening with confidential medical information. They both need to listen.

Proponents of the care.data initiative have been keen to reassure us of the safeguards in place for any GP records uploaded to the Health and Social Care Information Centre (HSCIC) by saying that similar data from hospitals (Hospital Episode Statistics, or HES) has been uploaded safely for about two decades. Thus, Tim Kelsey, National Director for Patients and Information in the National Health Service, said on twitter recently that there had been

No data breach in SUS*/HES ever

I’ve been tempted to point out that this is a bit like a thief arguing that he’s been stealing from your pockets for twenty years, so why complain when you catch him stealing from your wallet? However, whether Tim’s claim is true or not partly depends on how you define a “breach”, and I suspect he is thinking of some sort of inadvertent serious loss of data, in breach of the seventh (data security) principle of the Data Protection Act 1998 (DPA). Whether there have been any of those is one issue, and, in the absence of transparency about how HES processing has been audited, I don’t know how he is so sure (an FOI request for audit information is currently stalled, while HSCIC consider whether commercial interests are, or are likely to be, prejudiced by disclosure). But data protection is not all about data security, and the DPA can be “breached” in other ways. As I mentioned last week, I have asked the Information Commissioner’s Office to assess the lawfulness of the processing surrounding the apparent disclosure of a huge HES dataset to the Institute and Faculty of Actuaries, whose Society prepared a report based on it (with HSCIC’s logo on it, which rather tends to undermine their blaming the incident on their NHSIC predecessors). My feeling is that this has nothing, or very little, to do with data security – I am sure the systems used were robust and secure – but a lot to do with some of the other DPA principles: primarily the first (processing must be fair and lawful and have an appropriate Schedule 2 and Schedule 3 condition), and the second (“Personal data shall be obtained only for one or more specified and lawful purposes”).

Since the story about the actuarial report, at least three other possible “breaches” have come to light. They are listed in this Register article, but it is the first that has probably caused the most concern. It appears that the entire HES dataset, pseudonymised (not, note, anonymised) and around one terabyte in size, was uploaded to Google storage and processed using BigQuery. An apparently rather unconcerned statement from HSCIC (maybe they’ll blame their predecessors again, if necessary) said

The NHS Information Centre (NHS IC) signed an agreement to share pseudonymised Hospital Episodes Statistics data with PA Consulting  in November 2011…PA Consulting used a product called Google BigQuery to manipulate the datasets provided and the NHS IC  was aware of this.  The NHS IC  had written confirmation from PA Consulting prior to the agreement being signed that no Google staff would be able to access the data; access continued to be restricted to the individuals named in the data sharing agreement

So that’s OK then? Well, not necessarily. Google’s servers (and, remember, “cloud” really means “someone else’s computer”) are dotted around the world, although mostly in the US, and when you upload data to the cloud, one of the problems (or benefits) is that you don’t have, or don’t tend to think you have, a real say in where it is hosted. By a certain argument, this even makes the cloud provider, in DPA terms, a data controller, because it is partly determining “the manner in which any personal data are, or are to be, processed”. If the hosting is outside the European Economic Area the eighth DPA principle comes into play:

Personal data shall not be transferred to a country or territory outside the European Economic Area unless that country or territory ensures an adequate level of protection for the rights and freedoms of data subjects in relation to the processing of personal data

The rather excellent Tim Gough, who is producing some incredibly helpful stuff on his site, has a specific page on DPA and the cloud, and I commend it to you. Now, it may be that, because Google has conferred on itself “Safe Harbor” status, the eighth principle is deemed to have been complied with, but I’m not sure it’s that straightforward because, in any case, Safe Harbor itself is currently of questionable status and assurance.
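It is worth dwelling on why “pseudonymised (not, note, anonymised)” matters. A pseudonym is applied consistently so that a patient’s episodes can be linked together, and that very consistency is what keeps re-identification on the table. The following toy sketch (the NHS number and the unsalted-hash scheme are my own invented illustration, not a description of how HSCIC actually pseudonymises HES) shows both properties:

```python
import hashlib

def pseudonymise(nhs_number: str) -> str:
    # A consistent (unsalted) hash: the same patient always gets the
    # same pseudonym, so their hospital episodes remain linkable.
    return hashlib.sha256(nhs_number.encode()).hexdigest()[:16]

# Two hospital episodes for the same (entirely invented) patient
records = [
    {"id": pseudonymise("9434765919"), "episode": "admission"},
    {"id": pseudonymise("9434765919"), "episode": "discharge"},
]

# Linkage: both episodes share one pseudonym
assert records[0]["id"] == records[1]["id"]

# Re-identification: anyone who holds (or can guess) the real
# identifier can recompute the pseudonym and pull out every record
# belonging to that patient.
guess = pseudonymise("9434765919")
matches = [r for r in records if r["id"] == guess]
print(len(matches))  # prints 2
```

The point of the sketch is simply that pseudonymised data retains a stable link back to the individual; truly anonymised data, by contrast, would not allow that recomputation.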

I don’t know if PA Consulting’s upload of HES data to the cloud was in compliance with their and NHSIC’s/HSCIC’s DPA obligations, but, then again, I’m not the regulator of the DPA. So, in addition to last week’s request for assessment, I’ve asked the ICO to assess this processing as well:

Hi again

I don’t yet have any reference number, but please note my previous email for reference. News has now emerged that the entire HES database may have been uploaded to some form of Google cloud storage. Would you also please assess this for compliance with the DPA? I am particularly concerned to know whether it was in compliance with the first, seventh and eighth data protection principles. This piece refers to the alleged upload to Google servers http://t.co/zWF2QprsTN

best wishes,
Jon

However, I’m now genuinely concerned by a statement from the ICO, in response to the news that they are to be given compulsory powers of audit of NHS bodies. They say (in the context of the GP data proposed to be uploaded under the care.data initiative)

The concerns around care.data come from this idea that the health service isn’t particularly good at looking after personal information

I’m not sure if they’re alluding to their own concerns, or the public’s, but I think the statement really misunderstands the public’s worries about care.data, and about the use of medical data in general. From many, many discussions with people, and from reading more about this subject than is healthy, it seems to me that people have a general worry about, and objection to, their confidential medical information being made available to commercial organisations for the potential profit of the latter, and this concern stems from the possibility that the processing will lead to them being identified, and adversely affected. If the ICO doesn’t understand this, then I really think they need to start listening. And that, of course, also goes for NHS England.

*“SUS” refers to HSCIC’s, and its predecessor, NHSIC’s Secondary Uses Service


Filed under care.data, Data Protection, data sharing, Information Commissioner, NHS

Why no prison sentences for misuse of medical data?

So, the government, roused from its torpor by the public outrage at the care.data proposals, and the apparent sale of 47 million patient records to actuaries, is said to be proposing, as a form of reassurance, amendments to the Care Bill. The Telegraph reports that

Jeremy Hunt will unveil new laws to ensure that medical records can only be released when there is a “clear health benefit” rather than for “purely commercial” use by insurers and other companies.

Ministers will also bolster criminal sanctions for organisations which breach data protection laws by disclosing people’s personal data. Under a “one strike and you’re out” approach, they will be permanently banned from accessing NHS data

One needs to be aware that this is just a newspaper report, and as far as I know it hasn’t been confirmed by the minister or anyone else in the government, but if it is accurate, I fear it shows further contempt for public concerns about the risks to the confidentiality of their medical records.

The first of the reported amendments sounds like statutory backing for the current assurances that patient data will only be made available to third parties for purposes that will benefit the health and social care system (see FAQ 39 on the Guide for GP Practices). It also sounds like a very difficult piece of legislation to draft, and it will be very interesting to see what the proposed amendment actually says: will it allow secondary use for commercial purposes, as long as the primary use is for a “clear health benefit”? And, crucially, how on earth will it be regulated and enforced? (Will properly resourced regulators be allowed to audit third parties’ use of data? I certainly hope so.)

The second amendment implies that the Data Protection Act 1998 (DPA) will also be amended. This also sounds like a difficult provision to draft: the Telegraph says

Those that have committed even one prior offence involving patient data will be barred from accessing NHS medical records indefinitely as part of a “one strike and you’re out” approach

But what do we mean by “offence”? The Telegraph falls into the common error of thinking that the Information Commissioner’s Office’s (ICO’s) powers to serve monetary penalty notices (MPNs) of up to £500,000 are criminal justice powers; they are not – MPNs are civil notices, and the money paid is not a “fine” but a penalty. The only relevant current criminal offence in the DPA is that of (in terms) deliberately or recklessly obtaining or disclosing personal data without the authority of the data controller. This is an either-way offence, which means it currently carries a maximum sanction of a £5,000 fine in a magistrates’ court, or an unlimited fine in the Crown Court (though it is very rare for cases to be tried in the latter). Prosecutions under this section (55) are generally brought against individuals, because the offence involves obtaining or disclosing the data without the authority of the data controller. It is unlikely that a company would commit a section 55 offence. More likely is that a company would seriously contravene the DPA in a manner which would lead to a (civil) MPN, or more informal ICO enforcement action. More likely still is simply that the ICO would make a finding of “unlikely to have complied” with the DPA, under section 42 – a finding which carries little weight. Will prior civil or informal action, or a section 42 “unlikely to have complied” assessment, count for the “one strike and you’re out” approach? And even if they do, what is to stop miscreant individuals or companies functioning through proxies or agents, or even simply lying to get access to the data?

Noteworthy by its absence in the Telegraph reports of the proposed amendments was any reference to the one change to data protection law which actually might have a deterrent effect on those who illegally obtain or disclose personal data – the possibility of being sent to prison. As I and others have written before, all that is needed to achieve this is for the government to commence section 77 of the Criminal Justice and Immigration Act 2008, which would create the power to alter the penalty (including a custodial sentence) for a section 55 DPA offence. However, the government has long been lobbied by certain sections of the press industry not to do so, because of apparent fears that it would give the state the power to imprison investigative journalists (despite the fact that section 78 of the Criminal Justice and Immigration Act 2008 – also uncommenced – would create a new defence for journalistic, literary or artistic purposes). The Information Commissioner has repeatedly called for the law to be changed so that there is a real sanction for serious criminal data protection offences, but to no avail.

Chris Pounder has argued that the custodial sentence provisions (discussion of which was kicked into the long grass which grew up in the aftermath of the Leveson inquiry) might never be introduced. Despite the calls for such strong penalties for misuse of medical data, from influential voices such as Ben Goldacre, the proposals for change outlined by the Telegraph seem to support Dr Pounder’s view.

One of the main criticisms of the disastrous public relations and communications regarding the care.data initiative is that people’s acute concerns about the security of their medical records have been dismissed with vague or misleading reassurances. With the announcement of these vague and probably ineffectual proposed legal sanctions, what a damned shame that that looks to be continuing.


Filed under care.data, Data Protection, data sharing, Information Commissioner, Leveson, monetary penalty notice, NHS

Big Pharma and care.data

Patients’ identifiable medical data will end up in the hands of large pharmaceutical companies, under the care.data initiative. With “Big Pharma” beholden to shareholders, and its abysmal record on transparency, is this another reason to consider opting out?

We are often told by those publicly defending the care.data programme (I’m thinking particularly of NHS Chief Data Officer Geraint Lewis, and NHS National Director for Patients and Information Tim Kelsey, who at least are prepared to engage with critics – although the latter has a habit of resorting to personal attacks at times) that patients’ identifiable/amber/pseudonymised data will not be made available to commercial organisations to use for their own purposes. So, we are told, it cannot be used for the purposes of selling or administering any kind of insurance, or for marketing purposes. As the pdf of FAQs, to which we are often referred (by Geraint in particular), says

Potentially identifiable data – these data do not include identifiers but may be considered identifiable (e.g. due to a patient in an area having a rare disease or a rare combination of characteristics). There are strict controls around the limited release of such data. For example, there must be a contract in place, the data are only released to approved organisations, and restricted to a specific purposes that will benefit the health and social care system
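The FAQ’s caveat about “a rare disease or a rare combination of characteristics” is easy to demonstrate with a toy example (every record, area code and diagnosis below is invented for illustration): even with direct identifiers stripped, counting how many records share each combination of quasi-identifiers shows which patients stand out.

```python
from collections import Counter

# Invented records: direct identifiers removed, but quasi-identifiers
# (postcode area, year of birth, diagnosis) remain.
records = [
    {"area": "LS1", "birth_year": 1970, "diagnosis": "asthma"},
    {"area": "LS1", "birth_year": 1970, "diagnosis": "asthma"},
    {"area": "LS1", "birth_year": 1970, "diagnosis": "asthma"},
    {"area": "LS2", "birth_year": 1955, "diagnosis": "porphyria"},  # rare
]

# Count how many records share each combination of quasi-identifiers.
combos = Counter(tuple(sorted(r.items())) for r in records)

# A record whose combination is unique is potentially identifiable:
# anyone who knows one LS2 resident born in 1955 with porphyria
# has found that person's record.
unique = [r for r in records if combos[tuple(sorted(r.items()))] == 1]
print(len(unique))  # prints 1
```

This is essentially the idea behind k-anonymity: data without names can still single people out whenever a combination of attributes is rare enough.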

Let’s ignore for now the awkward question of how these restrictions can effectively be enforced. Let’s also ignore the fact that this data will not simply be “released” – organisations will pay for it, and a commercial organisation, with fiduciary obligations to its owners or shareholders, is not going to pay for something unless there is potential financial benefit.

What I wanted to highlight is that purposes that will benefit the health and social care system will generally boil down to two things: commissioning of services, and research. Regarding the latter, as the NHS Health Research Authority says, this can take many forms, and be undertaken by many different bodies, but it will be no big revelation if I point out that vast amounts of research are conducted by, or under the control of, huge pharmaceutical companies – Big Pharma. Doctor and journalist Ben Goldacre has been campaigning for a number of years, following the lead of others such as Iain Chalmers, to expose the fact that an enormous amount of data and results from research – specifically, admittedly, of clinical trials – is withheld by Big Pharma. This led to the setting up of the AllTrials campaign. As Ben said, on the publication of a damning report by the Public Accounts Committee into the withholding of trial results for Tamiflu

[the] report is a complete vindication of AllTrials’ call for all the results, of all the trials, on all the uses of all currently prescribed treatments. None of the proposed new legislation or codes of conduct come anywhere close to this simple, vital ask. Industry has claimed it is on the verge of delivering transparency for over two decades. While obfuscating and delaying, ever more results have been withheld. Some in industry now claim that results from even a decade ago may be lost and inaccessible. This is both implausible and unacceptable…We cannot make informed decisions about which treatment is best when vitally important information is routinely and legally kept secret. Future generations will look back at this absurd situation in the same way that we look back on mediaeval bloodletting

This is the same industry which will be able to purchase patients’ identifiable medical data, uploaded from their GP records for research purposes. Will the NHS ever see the results of this research if, for instance, those results could have a potentially adverse effect on the companies’ share prices? Will there be any legal or contractual mechanisms in place to ensure that we don’t see similar obfuscating and delaying, and withholding of results?

Is it really the insurance and marketing companies we need to worry about?


Filed under care.data, Confidentiality, data sharing, NHS, Privacy