Tag Archives: ICO

My small business advice…let’s be blunt.

In recent months I’ve seen plenty of articles and comments, on regular and social media, to the effect that either the government, or the Information Commissioner’s Office (ICO), or both, must do more to educate businesses about the General Data Protection Regulation (GDPR) and to help them comply with its requirements.

My response to this is blunt: when setting up and when running a business, it is for the owner/directors/board to exercise appropriate diligence to understand and comply with the laws relating to the business. Furthermore, the costs of this diligence and compliance have to be factored into any new or ongoing business plan. Even more bluntly – if you can’t afford to find out what the applicable law is, and you can’t afford to comply, then you haven’t got a viable business.

(Less bluntly, there is of course a wealth of information, mostly from the ICO, about what GDPR means and how to comply. Ultimately, however, data protection law is principles-based and risk-based, and no one but those responsible for running a particular business can reasonably say what compliance means in its context).

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.

Filed under Data Protection, GDPR, Information Commissioner

This old world will never change

Complacency about data protection in the NHS won’t change unless ICO takes firm action

Back in September 2016 I spoke to Vice’s Motherboard about reports that various NHS bodies were still running Windows XP, and I said

If hospitals are knowingly using insecure XP machines and devices to hold and otherwise process patient data they may well be in serious contravention of their [data protection] obligations

Subsequently, in May this year, the WannaCry exploit showed that those bodies were indeed vulnerable, with multiple NHS Trusts and GP practices subject to ransomware demands and major system disruption.

That this had enormous impact on patients is evidenced by a new report on the incident from the National Audit Office (NAO), which shows that

6,912 appointments had been cancelled, and [it is] estimated [that] over 19,000 appointments would have been cancelled in total. Neither the Department nor NHS England know how many GP appointments were cancelled, or how many ambulances and patients were diverted from the five accident and emergency departments that were unable to treat some patients

The NAO investigation found that the Department of Health and the Cabinet Office had written to Trusts

saying it was essential they had “robust plans” to migrate away from old software, such as Windows XP, by April 2015. [And in] March and April 2017, NHS Digital had issued critical alerts warning organisations to patch their systems to prevent WannaCry

Although the NAO report is critical of the government departments themselves for failing to do more, it does correctly note that individual healthcare organisations are themselves responsible for the protection of patient information: under the Data Protection Act 1998 (DPA) each organisation is a data controller, responsible, among other things, for ensuring that appropriate technical and organisational measures are taken against unauthorised or unlawful processing of personal data.

Yet, despite these failings, and despite the clear evidence of huge disruption for patients and the unavoidable implication that delays in treatment across all NHS services occurred, the report was greeted by the following statement by Keith McNeil, Chief Clinical Information Officer for NHS England

As the NAO report makes clear, no harm was caused to patients and there were no incidents of patient data being compromised or stolen

In fairness to McNeil, he is citing the report itself, which says that “NHS organisations did not report any cases of harm to patients or of data being compromised or stolen” (although that is not quite the same thing). But the report continues

If the WannaCry ransomware attack had led to any patient harm or loss of data then NHS England told us that it would expect trusts to report cases through existing reporting channels, such as reporting data loss direct to the Information Commissioner’s Office (ICO) in line with existing policy and guidance on information governance

So it appears that the evidence that no harm arose is simply that there were no reports of “data loss” to the ICO. This emphasis on “data loss” is frustrating: firstly, because personal data does not have to be lost for harm to arise, and it is difficult to understand how delays and emergency diversions would not have led to some harm; but secondly, because it is legally mistaken: the DPA makes clear that data security should prevent all sorts of unauthorised processing, and removal or restriction of access is clearly covered by the definition of “processing”.

It is also illustrative of a level of complacency which is deleterious to patient health and safety, and a possible indicator of how the WannaCry incidents happened in the first place. Just because data could not be accessed as a result of the malware does not mean that this was not a very serious situation.

It’s not clear whether the ICO will be investigating further, or taking action as a result of the NAO report (their response to my tweeted question – “We will be considering the contents of the report in more detail. We continue to liaise with the health sector on this issue” – was particularly unenlightening). I know countless dedicated, highly skilled professionals working in data protection and information governance in the NHS, and they have often told me of their frustrations with senior staff complacency. Unless the ICO does take action (and this doesn’t necessarily have to be by way of fines), these professionals – but also, more importantly, patients – will continue to be let down, and in the case of the latter, put at risk of harm.

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.

Filed under 7th principle, Data Protection, data security, enforcement, Information Commissioner, NHS

DCMS Statement of Intent on the Data Protection Bill

Not so much a Statement of Intent, as a Statement of the Bleeding Obvious

The wait is not quite over. We don’t yet have a Data Protection Bill, but we do have a Statement of Intent from DCMS, explaining what the proposed legislation will contain. I thought it would be helpful to do a short briefing note based on my very quick assessment of the Statement. So here it is:

IT’S JUST AN ANNOUNCEMENT OF ALL THE THINGS THE UK WOULD HAVE TO IMPLEMENT ANYWAY UNDER EUROPEAN LAW

By which I mean, it proposes law changes which will be happening in May next year, when the General Data Protection Regulation becomes directly applicable, or changes made under our obligation to implement the Police and Crime Directive. In a little more detail, here are some things of passing interest, none of which is hugely unexpected.

As predicted by many, at page 8 it is announced that the UK will legislate to require parents to give consent to children’s access to information society services (i.e. online services) where the child is under 13 (rather than GDPR’s default 16). As the UK lobbied to give member states discretion on this, it is no surprise.

Exemptions from compliance with the majority of data protection law when the processing is for the purposes of journalism will remain (page 19). The Statement says that the government

believe the existing exemptions set out in section 32 strike the right balance between privacy and freedom of expression

But of potential note is the suggestion that

The main difference will be to amend provisions relating to the ICO’s enforcement powers to strengthen the ICO’s ability to enforce the re-enacted section 32 exemptions effectively

Without further details it is impossible to know what will be proposed here, but any changes to the existing regime which might have the effect of decreasing the size of the media’s huge carve-out will no doubt be vigorously lobbied against.

There is confirmation (at pp17 and 18) that third parties (i.e. not just criminal justice bodies) will be able to access criminal conviction information. Again, this is not unexpected – the regime for criminal records checks for employers etc was unlikely to be removed.

The Statement proposes a new criminal offence of intentionally or recklessly re-identifying individuals from anonymised or pseudonymised data, something the Commons Science and Technology Committee has called for. Those who subsequently process such data will also be guilty of an offence. The details here will be interesting to see – as with most privacy-enhancing technology, anonymisation needs to be stress-tested in order to be robust, and such testing will not be effective if those undertaking it do so at risk of committing an offence, so presumably the forthcoming Bill will provide for this.

The Bill will also introduce an offence of altering records with intent to prevent disclosure following a subject access request. This will use the current mechanism at section 77 of the Freedom of Information Act 2000. Whether that section itself will be amended (time limits for prosecutions militate against its effectiveness) remains unknown.

I also note that the existing offence of unlawfully obtaining personal data will be widened to those who retain personal data against the wishes of the data controller, even where it was initially obtained lawfully. This will probably cover those situations where people gather or are sent personal data in error, and then refuse to return it.

There is one particular howler at page 21, which suggests the government doesn’t understand what privacy by design and privacy by default mean:

The Bill will also set out to reassure citizens by promoting the concept of “privacy by default and design”. This is achieved by giving citizens the right to know when their personal data has been released in contravention of the data protection safeguards, and also by offering them a clearer right of redress

Privacy by design/default is about embedding privacy protection throughout the lifecycle of a project or process; it has nothing at all to do with notifying data subjects of breaches. Whether this is a drafting error in the Statement or a fundamental misunderstanding, it is rather concerning that the government, which makes much of “innovation” (an area in which privacy by design should be emphasised), fails to get this right.

So that’s a whistle-stop tour of the Statement, ignoring all the fluff about implementing things which are required under GDPR and the Directive. I’ll update this piece in due course if anything else emerges from a closer reading.

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.

Filed under Data Protection, GDPR, Information Commissioner, journalism

On some sandy beach

[EDITED 25.07.17 to include references to “sandpits” in the report of the Deepmind Health Independent Review Panel]

What lies behind the Information Commissioner’s recent reference to “sandbox regulation”?

The government minister with responsibility for data protection, Matt Hancock, recently spoke to the Leverhulme Centre. He touched on data protection:

a new Data Protection Bill in this Parliamentary Session…will bring the laws up to date for the modern age, introduce new safeguards for citizens, stronger penalties for infringement, and important new features like the right to be forgotten. It will bring the EU’s GDPR and Law Enforcement Directive into UK law, ensuring we are prepared for Brexit.

All pretty standard stuff (let’s ignore the point that the “right to be forgotten”, such as it is, already exists under existing law – a big clue to this being that the landmark case was heard by the CJEU in 2014). But Hancock went on to cite with approval some recent words of the Information Commissioner, Elizabeth Denham:

I think the ICO’s proposal of a data regulatory “sandbox” approach is very impressive and forward looking. It works in financial regulation and I look forward to seeing it in action here.

This refers to Denham’s recent speech on “Promoting privacy with innovation within the law”, in which she said

We are…looking at how we might be able to engage more deeply with companies as they seek to implement privacy by design…How we can contribute to a “safe space” by building a sandbox where companies can test their ideas, services and business models. How we can better recognise the circular rather than linear nature of the design process.

I thought this was interesting – “sandbox regulation” in the financial services sector involves an application to the Financial Conduct Authority (FCA) for the testing of “innovative” products that don’t necessarily fit into existing regulatory frameworks – the FCA will even, where necessary, waive rules and undertake not to take enforcement action.

That this model works for financial services does not, though, necessarily mean it would work when it comes to regulation of laws, such as data protection laws, which give effect to fundamental rights. When I made enquiries to the Information Commissioner’s Office (ICO) for further guidance on what Denham intends, I was told that they “don’t have anything to add to what [she’s] already said about engaging with companies to help implement privacy by design”.

The recent lack of enforcement action by the ICO against the Royal Free NHS Trust regarding its deal with Google Deepmind raised eyebrows in some circles: if the unlawful processing of 1.6 million health records (by their nature sensitive personal data) doesn’t merit formal enforcement, then does anything?

Was that a form of “sandbox regulation”? Presumably not, as it doesn’t appear that the ICO was aware of the arrangement prior to it taking place, but if, as it seems to me, such regulation may involve a light-touch approach where innovation is involved, I really hope that the views and wishes of data subjects are not ignored. If organisations are going to play in the sand with our personal data, we should at the very least know about it.

**EDIT: I have had my attention drawn to references to “sandpits” in the Annual Report of the Deepmind Health Independent Review Panel:

We think it would be helpful if there was a space, similar to the ‘sandpits’ established by the Research Councils, which would allow regulators, the Department of Health and tech providers to discuss these issues at an early stage of product development. The protection of data during testing is an issue that should be discussed in a similar collaborative forum. We believe that there must be a mechanism that allows effective testing without compromising confidential patient information.

It would seem a bit of a coincidence that this report should be published around the same time Denham and Hancock were making their speeches – and I would argue that this only bolsters the case for more transparency from the ICO about how this type of collaborative regulation will take place.

And I notice that the Review Panel say nothing about involving data subjects in “product development”. Until “innovators” understand that data subjects are the key stakeholder in this, I don’t hold out much hope for the proper protection of rights.**

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.

Filed under Data Protection, enforcement, human rights, Information Commissioner

FOI enforcement – if not now, when?

Recent ICO decision notices show the Home Office and MoJ repeatedly simply failing to respond to FOI requests. Surely the time has come for ICO action?

The Information Commissioner’s Office (ICO) recently stated to me that they were not monitoring the Home Office’s and Ministry of Justice’s (MoJ) compliance with the statutory timescales required by section 10 of the Freedom of Information Act 2000 (FOIA).

This was despite the fact that they’d published decision notices about delays by those two government bodies which reported that “The delay in responding to this request will be logged as part of ongoing monitoring of the MoJ’s compliance with the FOIA”. This was not formal monitoring, I was told; rather, it was informal monitoring. Ah. Gotcha.

So what does trigger formal monitoring? Interestingly, the ICO’s own position on this has recently changed, and got a bit stricter. It’s generally meant to be initiated in the following circumstances:

our analysis of complaints received by the ICO suggests that we have received in the region of 4 to 8 or more complaints citing delays within a specific authority within a six month period

(for those authorities which publish data on timeliness) – it appears that less than 90% of requests are receiving a response within the appropriate timescales. [this used to be 85%]

Evidence of a possible problem in the media, other external sources or internal business intelligence.

Despite the apparent increase in robustness of approach, the ICO do not appear to be monitoring any public authorities at the moment. The last monitoring took place between May and July 2016 when Trafford Council were in their sights. Although they are not mentioned in the relevant report, an ICO news item from July last year says that the Metropolitan Police, who have been monitored off and on for a period of years without any real outward signs of improvement, were also still being monitored.

But if they aren’t monitoring the compliance of any authorities at the moment – and particularly not the Home Office and the MoJ – one is led to wonder why, given the pattern in recent ICO decision notices involving those two authorities: in 16 of the last 25 decision notices involving the Home Office, and 6 of the last 25 involving the MoJ, the ICO formally found that the authority had still failed to comply with the FOI request in question by the time the decision notice was issued.

At this point, it might be helpful to explain the kind of chronology and process that would lead up to the issuing of such decision notices. First, a request must be made, and there will have been a failure by the authority to reply within twenty working days. Then, the requester will normally (before the ICO will consider the case) have had to ask for an internal review by the authority of its handling of the request. Then, the requester will have complained to the ICO. Then, the ICO will have normally made informal enquiries of the authority, effectively “geeing” them up to provide a response. Then, as still no response will have been sent, the ICO will have moved to issuing a formal decision notice. At any point in this process the authority could (and should) still respond to the original request, but no – in all of these cases (again – 16 of the last 25 Home Office decisions, 6 of the last 25 MoJ ones) the authorities have still not responded many months after the original request. Not only does this show apparent contempt for the law, but also for the regulator.

So why does the ICO not do more? I know many FOI officers (and their public authority employers) who work their socks off to make sure they respond to requests in a timely manner. In the absence of formal monitoring of (let alone enforcement action against) those authorities who seem to ignore their legal duties much of the time, those FOI officers would be forgiven for asking why they bother: it is to their credit that bother they still do.

Elizabeth Denham became Information Commissioner in July last year, bringing with her an impressive track record and making strong statements about enforcing better FOI compliance. Her first few months, with GDPR and Brexit to deal with, will not have been easy, and she could be forgiven for not having had the time to focus on FOI, but the pressing question now surely is “if not now, when?”

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.

Filed under Freedom of Information, Home Office, Information Commissioner

Why what Which did wears my patience thin

Pre-ticked consent boxes and unsolicited emails from the Consumers’ Association

Which?, the brand name of the Consumers’ Association, publishes a monthly magazine. In an era of social media, and online reviews, its mix of consumer news and product ratings might seem rather old-fashioned, but it is still (according to its own figures [1]) Britain’s best-selling monthly magazine. Its rigidly paywalled website means that one must generally subscribe to get at the magazine’s contents. That’s fair enough (although after my grandmother died several years ago, we found piles of unread, unopened even, copies of Which? She had apparently signed up to a regular Direct Debit payment, probably to receive a “free gift”, and had never cancelled it: so one might draw one’s own conclusion about how many of Which?’s readers are regular subscribers for similar reasons).

In line with its general “locked-down” approach, Which?’s recent report into the sale of personal data was, except for snippets, not easy to access, but it got a fair bit of media coverage. Intrigued, I bit: I subscribed to the magazine. This post is not about the report, however, although the contents of the report drive the irony of what happened next.

As I went through the online sign-up process, I arrived at that familiar type of page where the subject of future marketing is broached. Which? had headlined their report “How your data could end up in the hands of scammers”, so it struck me as amusing, but also irritating, that the marketing options section of the sign-up process came with a pre-ticked box:

[Image: screenshot of the Which? sign-up page showing a pre-ticked marketing opt-in box]

As guidance from the Information Commissioner’s Office makes clear, pre-ticked boxes are not a good way to get consent from someone to future marketing:

Some organisations provide pre-ticked opt-in boxes, and rely on the user to untick it if they don’t want to consent. In effect, this is more like an opt-out box, as it assumes consent unless the user clicks the box. A pre-ticked box will not automatically be enough to demonstrate consent, as it will be harder to show that the presence of the tick represents a positive, informed choice by the user.

The Article 29 Working Party goes further, saying in its opinion on unsolicited communications for marketing purposes that inferring consent to marketing from the use of pre-ticked boxes is not compatible with the data protection directive. By extension, therefore, any marketing subsequently sent on the basis of a pre-ticked box will be a contravention of the data protection directive (and, in the UK, the Data Protection Act 1998) and the ePrivacy directive (in the UK, the Privacy and Electronic Communications (EC Directive) Regulations 2003 (PECR)).
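To make the point concrete, here is a minimal, hypothetical sketch (not Which?’s actual sign-up code, and not legal advice) of how a sign-up handler might distinguish a tick that was merely left in place on a pre-ticked box from a positive, informed opt-in; the field names and structure are my own assumptions:

```typescript
// Hypothetical sketch only: illustrates why a pre-ticked box is a poor basis
// for marketing consent. Field names and structure are assumptions, not any
// real sign-up implementation.

interface SignupSubmission {
  email: string;
  marketingBoxTicked: boolean; // state of the box when the form was submitted
  boxWasPreTicked: boolean;    // was the box already ticked when the page loaded?
}

// Following the ICO guidance quoted above, a tick that was simply left in
// place on a pre-ticked box is treated here as insufficient to demonstrate
// a positive, informed choice.
function hasValidMarketingConsent(s: SignupSubmission): boolean {
  if (!s.marketingBoxTicked) {
    return false; // box unticked: plainly no consent, so no marketing
  }
  if (s.boxWasPreTicked) {
    return false; // tick was the default, not an affirmative act by the user
  }
  return true; // the user actively ticked an unticked box
}

// The scenario described in this post: the subscriber unticked the pre-ticked
// box, so no marketing emails should follow.
const submission: SignupSubmission = {
  email: "subscriber@example.com",
  marketingBoxTicked: false,
  boxWasPreTicked: true,
};

console.log(hasValidMarketingConsent(submission)); // false
```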

Notwithstanding this, I certainly did not want to consent to receive subsequent marketing, so, as well as making a smart-arse tweet, I unticked the box. However, to my consternation, if not my huge surprise, I have subsequently received several marketing emails from Which? They do not have my consent to send these, so they are manifestly in contravention of regulation 22 of PECR.

It’s not clear how this has happened. Could it be a deliberate tactic by Which? to ignore subscribers’ wishes? One presumes not: Which? says it “exists to make individuals as powerful as the organisations they deal with in their daily lives” – deliberately ignoring clear expressions regarding consent would hardly sit well with that mission statement. So is it a general website glitch – which means that those expressions are lost in the sign-up process? If so, how many individuals are affected? Or is it just a one-off glitch, affecting only me?

Let’s hope it’s the last. Because the ignoring or overriding of expressions of consent, and the use of pre-ticked boxes for gathering consent, are some of the key things which fuel the trade in, and disrespect for, personal data. The fact that I’ve experienced this issue with a charity which exists to represent consumers, as a result of my wish to read their report into misuse of personal data, is shoddy, to say the least.

I approached Which? for a comment, and a spokesman said:

We have noted all of your comments relating to new Which? members signing up, including correspondence received after sign-up, and we are considering these in relation to our process.

I appreciate the response, although I’m not sure it really addresses my concerns.

[1] Which? Annual Report 2015/2016

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.

Filed under consent, Data Protection, Directive 95/46/EC, Information Commissioner, marketing, PECR, spam, subject access

Data Protection distress compensation for CCTV intrusion

The Information Commissioner’s Office (ICO) recently (2 February) successfully prosecuted a business owner for operating CCTV without an appropriate notification under section 18 of the Data Protection Act 1998 (DPA), announcing:

Businesses could face fines for ignoring CCTV data protection law

But a recent case in the Scottish Sheriff Court shows that CCTV and data protection can also have relevance in private law civil proceedings. In Woolley against Akbar [2017] ScotsSC 7 the husband and wife pursuers (equivalent to claimants in England and Wales) successfully brought a claim for compensation for distress caused by the defender’s (defendant in England and Wales) use of CCTV cameras which were continuously recording video and audio, and which were deliberately set to cover the pursuers’ private property (their garden area and the front of their home). Compensation was assessed at £8634 for each of the pursuers (so £17268 in total) with costs to be assessed at a later date.

Two things are of particular interest to data protection fans: firstly, the willingness of the court to rule unequivocally that CCTV operated in non-compliance with the DPA Schedule One principles was unlawful; and secondly, the award of compensation despite the absence of physical damage.

The facts were that Mr and Mrs Woolley own and occupy the upper storey of a dwelling, while Mrs Akbar owns and operates the lower storey as a guest house, managed by her husband Mr Akram. In 2013 the relationship between the parties broke down. Both parties have installed CCTV systems, but while the pursuers’ system monitors only their own property, this was not the case with the defender’s:

any precautions to ensure that coverage of the pursuers’ property was minimised or avoided. The cameras to the front of the house record every person approaching the pursuers’ home. The cameras to the rear were set deliberately to record footage of the pursuers’ private garden area. There was no legitimate reason for the nature and extent of such video coverage. The nature and extent of the camera coverage were obvious to the pursuers, as they could see where the cameras were pointed. The coverage was highly intrusive…the defender also made audio recordings of the area around the pursuers’ property…they demonstrated an ability to pick up conversations well beyond the pursuers’ premises. There are four audio boxes. The rear audio boxes are capable of picking up private conversations in the pursuers’ rear garden. Mr Akram, on one occasion, taunted the pursuers about his ability to listen to them as the pursuers conversed in their garden. The defender and Mr Akram were aware of this at all times, and made no effort to minimise or avoid the said audio recording. The nature of the coverage was obvious to the pursuers. Two audio boxes were installed immediately below front bedroom windows. The pursuers feared that conversations inside their home could also be monitored. The said coverage was highly intrusive.

Although, after the intervention of the ICO, the defender realigned the camera at the rear of the property, Sheriff Ross held that the coverage “remains intrusive”. Fundamentally, the sheriff held that the CCTV use was: unfair (in breach of the first data protection principle); excessive in terms of the amount of data captured (in breach of the third data protection principle); and retained for too long (in breach of the fifth data protection principle).

The sheriff noted that, by section 13(2) of the DPA, compensation for distress can only be awarded if the pursuer has suffered “damage”, which was not the case here. However, the sheriff further correctly noted, and was no doubt taken to, the decision of the Court of Appeal in Vidal-Hall & Ors v Google [2015] EWCA Civ 311 in which the court struck down section 13(2) as being incompatible with the UK’s obligations under the European data protection directive and the Charter of Fundamental Rights (my take on Vidal Hall is here). Accordingly, “pure” distress compensation was available.

Although the facts here show a pretty egregious breach of the DPA, it is good to see a court understanding and assessing the issues so well, no doubt assisted in doing so by Paul Motion, of BTO Solicitors, who appeared for the pursuers.

One niggle I do have is about the role of the ICO in all this: they were clearly apprised of the situation, and could surely have taken enforcement action to require the stopping of the CCTV (although admittedly the ICO cannot make an award of compensation). It’s not clear to me why they didn’t.

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.

Filed under damages, Data Protection, Information Commissioner

ICO FOI Decision Notices – insufficient attention to detail?

Anyone used to reading Freedom of Information Act 2000 (FOIA) decision notices from the Information Commissioner’s Office (ICO) will be familiar with this sort of wording:

The Commissioner has concluded that the public interest favours maintaining the exemption contained at section x(y) of FOIA. In light of this decision, the Commissioner has not gone on to consider the public authority’s reliance on section z(a) of FOIA.

In fact, a search on the ICO website for the words “has not gone on” throws up countless examples.

What lies behind this approach is this: a public authority, in refusing to disclose recorded information, is entitled to rely on more than one of the FOIA exemptions, because information might be exempt under more than one. An obvious example would be where information exempted from disclosure for the purposes of safeguarding national security (section 24 FOIA) would also be likely to be exempt under section 31 (law enforcement).

One assumes that the ICO does this for pragmatic reasons – if information is exempt it’s exempt, and application of a further exemption in some ways adds nothing. Indeed, the ICO guidance for public authorities advises

you [do not] have to identify all the exemptions that may apply to the same information, if you are content that one applies

Now, this is correct as a matter of law (section 78 of FOIA makes clear that, as a general principle, reliance by public authorities upon the Act’s exemptions is discretionary), and the ICO’s approach when making decisions is understandable, but it is also problematic, and a recent case heard by the Information Tribunal illustrates why.

In Morland v IC & Cabinet Office (EA/2016/0078) the Tribunal was asked to determine an appeal from Morland, after the Cabinet Office had refused to disclose to him minutes of the Honours and Decorations Committee, and after the ICO had upheld the refusal. As the Tribunal noted

The Cabinet Office refused the Appellant’s information request in reliance upon s. 37 (1) (b) and s. 35 (1) (a) of the Freedom of Information Act 2000 (“FOIA”) [and the ICO] Decision Notice found (at paragraph 13) that the exemption under s. 37 (1) (b) was engaged by the request and (at paragraph 25) that the public interest favoured maintaining the exemption “by a narrow margin”. The Decision Notice expressly did not consider the Cabinet Office’s reliance on s. 35 (1) (b). [emphasis added]

The problem arose because the Tribunal found that, pace the ICO’s decision, the exemption at section 37(1)(b) was not engaged (because that section creates an exemption to disclosure if the information relates to the conferring by the Crown of an honour or dignity, and the information request related to whether an entirely new honour should be created). But what of the exemption at s35(1)(a)? Well, although it would not always be the case in similar circumstances, here the Tribunal and the parties were in a bind, because, as the Tribunal said

We are left with a situation where, as the Decision Notice did not reach a conclusion on that issue, none of the parties appear to have regarded s. 35 (1) (a) as being seriously in play in this appeal, with the effect that we have received limited argument on that issue

There is no power to remit a decision to the ICO (see IC v Bell [2014] UKUT 0106 (AAC), considered in a Panopticon blog post here), so the Tribunal had to make findings in relation to s35, despite a “concern whether it is right to do so”. On the expressly limited evidence before it, it found that the exemption was not engaged at the time of the request, and, accordingly, upheld Morland’s appeal, saying that it

[regarded] the failure of the Decision Notice to determine a key issue between the parties as rather unsatisfactory

Whether this will lead the ICO to revisit its apparent policy of, at least at times, focusing on only one of multiple claimed exemptions remains to be seen. It’s not often that I have sympathy with the Cabinet Office when it comes to matters of FOIA, but there is a modicum here.

Nonetheless, I think what this case does suggest is that a public authority should, when faced with an appeal of an ICO Decision Notice upholding a FOIA refusal, give strong consideration to whether it needs to be joined to the appeal (as, admittedly, the Cabinet Office was here) and to make sure that its response to the appeal (under part 27 of the Tribunal Rules) fully deals with all applicable exemptions, notwithstanding the contents of the Decision Notice. In this way, the Tribunal can, where necessary, take as fully-apprised a decision as possible on all of those exemptions.

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.

Filed under Freedom of Information, Information Commissioner, Information Tribunal

A Massive Impact for the ICO?

[Edited to add: it is well worth reading the comments to this piece, especially the ones from Chris Pounder and Reuben Binns]

I needed a way to break a blogging drought, and something that was flagged up to me by a data protection colleague (thanks Simon!) provides a good opportunity to do so. It suggests that the drafting of the GDPR could lead to an enormous workload for the ICO.

The General Data Protection Regulation (GDPR), which entered into force on 24 May this year and which will apply across the European Union from 25 May 2018, mandates the completion of Data Protection Impact Assessments (DPIAs) where indicated. Article 35 of the GDPR explains that

Where a type of processing in particular using new technologies, and taking into account the nature, scope, context and purposes of the processing, is likely to result in a high risk to the rights and freedoms of natural persons, the controller shall, prior to the processing, carry out an assessment of the impact of the envisaged processing operations on the protection of personal data

In the UK (and indeed elsewhere) we already have the concept of “Privacy Impact Assessments“, and in many ways all that the GDPR does is embed this area of good practice as a legal obligation. However, it also contains some ancillary obligations, one of which is to consult the supervisory authority, in certain circumstances, prior to processing. And here is where I get a bit confused.

Article 36 provides that

The controller shall consult the supervisory authority prior to processing where a data protection impact assessment under Article 35 indicates that the processing would result in a high risk in the absence of measures taken by the controller to mitigate the risk
[emphasis added]

A close reading of Article 36 results in this: if the data controller conducts a DPIA, and is of the view that, if mitigating factors were not in place, the processing would be high risk, it will have to consult the supervisory authority (in the UK, the Information Commissioner’s Office (ICO)). This is odd: it effectively renders any mitigating measures irrelevant. And it appears directly to contradict what recital 84 says

Where a data-protection impact assessment indicates that processing operations involve a high risk which the controller cannot mitigate by appropriate measures in terms of available technology and costs of implementation, a consultation of the supervisory authority should take place prior to the processing [emphasis added]

So, the recital says the obligation to consult will arise where high risk is involved which the controller can’t mitigate, while the Article says the obligation will arise where high risk is involved notwithstanding any mitigation in place.

Clearly, the Article contains the specific legal obligation (the recital purports to set out the reason for the contents of the enacting terms), so the law will require data controllers in the UK to consult the ICO every time a DPIA identifies an inherently high risk processing activity, even if the data controller has measures in place fully to mitigate and contain the risk.

For example, let us imagine the following processing activity – the collection and storage of customer financial data for the purposes of fulfilling a web transaction. The controller might have robust data security measures in place, but Article 36 requires it to consider “what if those robust measures were not in place? Would the processing be high risk?” To which the answer would have to be “yes” – because the customer data would be completely unprotected.
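To illustrate the difference between the two readings in this worked example, here is a minimal, hypothetical sketch (my own assumptions, not a compliance tool): on the Article 36 wording the duty to consult turns only on whether the processing would be high risk without mitigation, whereas on the recital 84 wording it also turns on whether the controller can in fact mitigate that risk:

```typescript
// Hypothetical sketch contrasting the two readings discussed above.
// The risk values below are illustrative assumptions for the worked example.

// Article 36 reading: consult where the DPIA indicates the processing would
// result in a high risk *in the absence of* the controller's mitigating measures.
function mustConsultUnderArticle36(highRiskWithoutMitigation: boolean): boolean {
  return highRiskWithoutMitigation;
}

// Recital 84 reading: consult only where the high risk *cannot* be mitigated
// by appropriate measures.
function mustConsultUnderRecital84(
  highRiskWithoutMitigation: boolean,
  riskFullyMitigated: boolean
): boolean {
  return highRiskWithoutMitigation && !riskFullyMitigated;
}

// Worked example: customer financial data collected for a web transaction,
// held by a controller with robust security measures in place.
const highRiskWithoutMitigation = true; // unprotected financial data would plainly be high risk
const riskFullyMitigated = true;        // robust measures are assumed to contain that risk

console.log(mustConsultUnderArticle36(highRiskWithoutMitigation));                      // true  – consult the ICO
console.log(mustConsultUnderRecital84(highRiskWithoutMitigation, riskFullyMitigated));  // false – no consultation required
```

On the Article 36 reading the consultation duty bites even though the risk has been fully addressed, which is exactly the oddity identified above.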

In fact, I would submit that, if Article 36 is given its plain meaning, virtually any processing activity involving personal data would, in the absence of mitigating measures, be high risk, and so create a duty to consult the ICO.

What this will mean in practice remains to be seen, but unless I am missing something (and I’d be delighted to be corrected if so), the GDPR is setting the ICO and other supervisory authorities up for a massive influx of work. With questions already raised about the ICO’s funding going forward, that is the last thing they are likely to need.

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.

Filed under Data Protection, GDPR, Information Commissioner

Any Safe Harbor in a storm…?

The ICO has contacted me to say that it actually selected SnapSurveys because they offered clients the option of hosting survey responses on UK servers, and it has checked with SnapSurveys that this remains the case. I’ve been pointed to http://www.snapsurveys.com/survey-software/security-accessibility-and-professional-outline/ which confirms this point.

So the answer to my question

Is the ICO making unlawful transfers of personal data to the US?

appears, I’m pleased to confirm, to be “no”.

Earlier this week the Information Commissioner’s Office (ICO) published a blogpost by Deputy Commissioner David Smith, entitled The US Safe Harbor – breached but perhaps not destroyed!

“Don’t panic” says David to those data controllers who are currently relying on Safe Harbor as a means of ensuring that personal data transferred by them to the United States has adequate protection (in line with the requirements of Article 25 of the European Data Protection Directive, and the eighth principle of schedule one of the UK’s Data Protection Act 1998 (DPA)). He is referring, of course, to the recent decision of the Court of Justice of the European Union in Schrems. Data controllers should, he says, “take stock” and “make their own minds up”:

businesses in the UK don’t have to rely on Commission decisions on adequacy. Although you won’t get the same degree of legal certainty, UK law allows you to rely on your own adequacy assessment. Our guidance tells you how to go about doing this.  Much depend [sic] here on the nature of the data that you are transferring and who you are transferring it to but the big question is can you reduce the risks to the personal data, or rather the individuals whose personal data it is, to a level where the data are adequately protected after transfer? The Safe Harbor can still play a role here.

Smith also refers to a recent statement by the Article 29 Working Party – the grouping of representatives of the various European data protection authorities, of which he is a member – and refers to “the substance of the statement being measured, albeit expressed strongly”. What he doesn’t say is how unequivocal it is in saying that

transfers that are still taking place under the Safe Harbour decision after the CJEU judgment are unlawful

And this is particularly interesting because, as I discovered today, the ICO itself appears (still) to be making transfers under Safe Harbor. I reported a nuisance call using its online tool (in doing so I included some sensitive personal data about a family member) and noticed that the tool was operated by SnapSurveys. The ICO’s own website privacy notice says

We collect information volunteered by members of the public about nuisance calls and texts using an online reporting tool hosted by Snap Surveys. This company is a data processor for the ICO and only processes personal information in line with our instructions.

while SnapSurveys’ privacy policy explains that

Snap Surveys NH, Inc. complies with the U.S. – E.U. Safe Harbor framework

This does not unambiguously say that SnapSurveys are transferring the personal data of those submitting reports to the ICO to the US under Safe Harbor – it is possible that the ICO has set up some bespoke arrangement with its processor, under which they process that specific ICO data within the European Economic Area – but it strongly suggests it.

It is understandable that a certain amount of regulatory leeway and leniency be offered to data controllers who have relied on Safe Harbor until now – to that extent I agree with the light-touch approach of the ICO. But if it is really the case that people’s personal data are actually being transferred by the regulator to the US, three weeks after the striking down of the European Commission’s 2000 decision that Safe Harbor provided adequate protection, serious issues arise. I will be asking the ICO for confirmation about this, and whether, if it is indeed making these transfers, it has undertaken its own adequacy assessment.

The views in this post (and indeed all posts on this blog) are my personal ones, and do not represent the views of any organisation I am involved with.

Filed under 8th principle, Data Protection, Directive 95/46/EC, Information Commissioner, safe harbor