No harm done

Why does nobody listen to me?

Quite a few media outlets and commentators have picked up on the consultation by the Department for Culture, Media and Sport I blogged about recently. The consultation is about the possibility of legislative change to make it easier for the Information Commissioner’s Office (ICO) to “fine” (in reality, serve a civil monetary penalty notice on) people or organisations who commit serious contraventions of ePrivacy law by sending unsolicited electronic marketing messages (aka spam calls, texts, emails etc).

However, almost every report I have seen has missed a crucial point. So, we have The Register saying “ICO to fine UNBIDDEN MARKETEERS who cause ‘ANXIETY’…Inconvenience, annoyance also pass the watchdog’s stress test”, and Pinsent Masons, Out-Law.com saying “Unsolicited marketing causing ‘annoyance, inconvenience or anxiety’ could result in ICO fine”. We even have 11KBW’s formidable Christopher Knight saying

the DCMS has just launched a consultation exercise on amending PECR with a view to altering the test from “substantial damage or distress” to causing “annoyance, inconvenience or anxiety”

But none of these spot that the preferred option of DCMS and the ICO is actually to go further, and give the ICO the power to serve a monetary penalty notice even when no harm has been shown at all:

Remove the existing legal threshold of “substantial damage and distress” (this is the preferred option of both ICO and DCMS). There would be no need to prove “substantial damage and distress”, or any other threshold such as ‘annoyance, inconvenience or anxiety’…

So yes, this is a blog post purely to moan about the fact that people haven’t read my previous post. It’s my blog and I’ll cry if I want to.

UPDATE:

Chris Knight is so formidable that he’s both updated the Panopticon post and pointed out the oddness of option 3 being preferred when nearly all of the consultation paper is predicated on option 2 being victorious.


Samaritans Radar – serious privacy concerns raised

UPDATE: 31 October

It appears Samaritans have silently tweaked their FAQs (so the text near the foot of this post no longer appears). They now say tweets will only be retained by the app for seven (as opposed to thirty) days, and have removed the words saying the app will retain a “Count of flags against a Twitter Users Friends ID”. Joe Ferns said on Twitter that the inclusion of this in the original FAQs was “a throw back to a stage of the development where that was being considered”. Samaritans also say “The only people who will be able to see the alerts, and the tweets flagged in them, are followers who would have received these Tweets in their current feed already”, but this does not absolve them of their data controller status: a controller does not need to access data in order to determine the purposes for which, and the manner in which, personal data are processed, and Samaritans are still doing this. Moreover, this changing of the FAQs, with no apparent change to the position that those whose tweets are processed get no fair processing notice whatsoever, makes me more concerned that this app has been released without adequate assessment of its impact on people’s privacy.

END UPDATE

UPDATE: 30 October

Susan Hall has written a brilliant piece expanding on mine below, and she points out that section 12 of the Data Protection Act 1998 in terms allows a data subject to send a notice to a data controller requiring it to ensure that no automated decisions are taken by processing their personal data for the purposes of evaluating matters such as their conduct. It seems to me that is precisely what “Samaritans Radar” does. So I’ve sent the following to Samaritans:

Dear Samaritans

This is a notice pursuant to section 12 Data Protection Act 1998. Please ensure that no decision is taken by you or on your behalf (for instance by the “Samaritans Radar” app) based solely on the processing by automatic means of my personal data for the purpose of evaluating my conduct.

Thanks, Jon Baines @bainesy1969

I’ll post here about any developments.

END UPDATE

Samaritans have launched a Twitter App “to help identify vulnerable people”. I have only ever had words of praise and awe for Samaritans and their volunteers, but this time I think they may have misjudged the effect, and the potential legal implications, of “Samaritans Radar”. Regarding the effect, this post from former volunteer @elphiemcdork is excellent:

How likely are you to tweet about your mental health problems if you know some of your followers would be alerted every time you did? Do you know all your followers? Personally? Are they all friends? What if your stalker was a follower? How would you feel knowing your every 3am mental health crisis tweet was being flagged to people who really don’t have your best interests at heart, to put it mildly? In this respect, this app is dangerous. It is terrifying to think that anyone can monitor your tweets, especially the ones that disclose you may be very vulnerable at that time

As for the legal implications, it may well be that Samaritans are processing sensitive personal data in circumstances where there is no legal basis to do so. And some rather worrying misconceptions have accompanied the app’s launch. The first and most concerning of these is in the FAQs prepared for the media. In reply to the question “Isn’t there a data privacy issue here? Is Samaritans Radar spying on people?” the following answer is given:

All the data used in the app is public, so user privacy is not an issue. Samaritans Radar analyses the Tweets of the people you follow, which are public Tweets. It does not look at private Tweets

The idea that, because something is in the public domain, it cannot engage privacy issues is a horribly simplistic one, and if that constitutes the impact assessment undertaken, then serious questions have to be asked. Moreover, it doesn’t begin to consider the data protection implications: personal data is personal data, whether it’s in the public domain or not. A tweet from an identified tweeter is inescapably the personal data of that person, and, if it is, or appears to be, about the person’s physical or mental health, then it is sensitive personal data, afforded a higher level of protection under the Data Protection Act 1998 (DPA).

It would appear that Samaritans, as the legal person who determines the purposes for which, and the manner in which, the personal data are processed (i.e. they have produced an app which identifies a tweet on the basis of words, or sequences of words, and pushes it to another person), are acting as a data controller. As such, any processing has to be in accordance with their obligation to abide by the data protection principles in Schedule One of the DPA. The first principle says that personal data must be processed fairly and lawfully, and that a condition for processing contained in Schedule Two (and, for sensitive personal data, in both Schedule Two and Schedule Three) must be met. Looking only at Schedule Three, I struggle to see the condition which permits the app to identify a tweet, decide that it is from a potentially suicidal person and send it as such to a third party. The one condition which might apply, the fifth (“The information contained in the personal data has been made public as a result of steps deliberately taken by the data subject”), is undercut by the fact that the data in question is not just the public tweet, but the “package” of that tweet with the fact that the app (not the tweeter) has identified it as a potential call for help.
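To see concretely why the “package” is more than the bare public tweet, here is a minimal sketch of how such an app might work. This is purely illustrative: Samaritans have not published the app’s matching logic, and the phrases, names and output fields below are all invented.

```python
# A minimal, hypothetical sketch -- NOT Samaritans' actual code. The point is
# that the output record couples the public tweet with a new inference created
# by the app, and it is that combined "package" which gets pushed to followers.
from typing import Optional

WORRYING_PHRASES = ["want to die", "can't go on", "hate myself"]  # invented examples

def flag_tweet(author: str, tweet_text: str) -> Optional[dict]:
    """Return an alert 'package' if the tweet matches a phrase, else None."""
    lowered = tweet_text.lower()
    for phrase in WORRYING_PHRASES:
        if phrase in lowered:
            return {
                "author": author,                          # public data
                "tweet": tweet_text,                       # public data
                "matched_phrase": phrase,                  # created by the app
                "app_inference": "possible cry for help",  # created by the app
            }
    return None

alert = flag_tweet("@example_user", "I can't go on like this")
if alert:
    print(alert)  # the combined record, not the bare tweet, is what is sent on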

The reliance on “all the data used in the app is public, so user privacy is not an issue” has carried through into messages sent on Twitter by Samaritans’ Director of Policy, Research and Development, Joe Ferns, in response to people raising concerns, such as:

existing Twitter search means anyone can search tweets unless you have set to private. #SamaritansRadar is like an automated search

Again, this misses the point that it is not just “anyone” doing a search on Twitter: it is an app in Samaritans’ name which specifically identifies (in an automated way) certain tweets as being of concern, and pushes them to third parties. Even more concerning was Mr Ferns’ response to someone asking if there was a way to opt out of having their tweets scanned by the app software:

if you use Twitter settings to mark your tweets private #SamaritansRadar will not see them

What he is actually suggesting there is that to avoid what some people clearly feel are intrusive actions they should lock their account and make it private. And, of course, going back to @elphiemcdork’s points, it is hard to avoid the conclusion that those who will do this might be some of the most vulnerable people.

A further concern (and one which confirms the data controller point above) arises regarding the retention and reuse of data. The media FAQ states:

Where will all the data be stored? Will it be secure? The data we will store is as follows:
• Twitter User ID – a unique ID that is associated with a Twitter account
• All Twitter User Friends ID’s – The same as above but for all the users friends that they follow
• Any flagged Tweets – This is the data associated with the Tweet, we will store the raw data for the Tweet as well
• Count of flags against a Twitter Users Friends ID – We store a count of flags against an individual User
• To prevent the Database growing exponentially we will remove flagged Tweets that are older than 30 days.

So it appears that Samaritans will be amassing data on unwitting Twitter users, and in effect profiling them. This sort of data is terrifically sensitive, yet no indication is given regarding where this data will be held, or the security measures in place to protect it.
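Mapped onto a data model, the FAQ’s description implies something like the following sketch. The field names are mine and hypothetical; the categories of stored data, the per-user flag count and the 30-day purge are taken from the FAQ above.

```python
# Sketch of the data model the FAQ describes; field names are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List

@dataclass
class FlaggedTweet:
    tweet_id: str
    raw_tweet: str        # "we will store the raw data for the Tweet as well"
    flagged_at: datetime

@dataclass
class MonitoredUser:
    twitter_user_id: str                                  # "Twitter User ID"
    friend_ids: List[str] = field(default_factory=list)   # "All Twitter User Friends ID's"
    flagged_tweets: List[FlaggedTweet] = field(default_factory=list)
    flag_count: int = 0   # "Count of flags against an individual User"

def purge_old_flags(user: MonitoredUser, days: int = 30) -> None:
    """Remove flagged tweets older than the stated retention period. Note
    that, as described, the purge removes old tweets but says nothing about
    resetting the running flag count -- the profile itself persists."""
    cutoff = datetime.utcnow() - timedelta(days=days)
    user.flagged_tweets = [t for t in user.flagged_tweets if t.flagged_at >= cutoff]
```

If that reading is right, the 30-day purge does little to diminish the profiling concern: the running tally of flags against an individual would outlive the tweets which generated it.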

The Information Commissioner’s Office recently produced some good guidance for app developers on Privacy in Mobile Apps. The guidance commends the use of Privacy Impact Assessments when developing apps. I would be interested to know if one was undertaken for Samaritans Radar, and, if so, how it dealt with the serious concerns that have been raised by many people since its launch.

This post was amended to take into account the observations in the comments by Susan Hall, to whom I give thanks. I have also since seen a number of excellent blog posts dealing with wider concerns. I commend, in particular, this by Adrian Short and this by @latentexistence.


DCMS consulting on lower threshold for “fining” spammers

The Department for Culture, Media and Sport (DCMS) has announced a consultation on lowering the threshold for the imposition of financial sanctions on those who unlawfully send electronic direct marketing. They’ve called it a “Nuisance calls consultation”, which, although they explain that it applies equally to nuisance text messages, emails etc., doesn’t adequately describe what could be an important development in electronic privacy regulation.

When, a year ago, the First-tier Tribunal (FTT) upheld the appeal by spam texter Christopher Niebel against the £300,000 monetary penalty notice (MPN) served on him by the Information Commissioner’s Office (ICO), it put the latter in an awkward position. And when the Upper Tribunal dismissed the ICO’s subsequent appeal, there was binding authority on the limits to the ICO’s power to serve MPNs for serious breaches of the Privacy and Electronic Communications (EC Directive) Regulations 2003 (PECR). There was no dispute that, per the mechanism at section 55A of the Data Protection Act 1998 (DPA), adopted by PECR by virtue of regulation 31, Niebel’s contraventions were serious and deliberate, but what was at issue was whether they were “of a kind likely to cause substantial damage or substantial distress”. The FTT held that they were not – no substantial damage would be likely to arise and when it came to distress

the effect of the contravention is likely to be widespread irritation but not widespread distress…we cannot construct a logical likelihood of substantial distress as a result of the contravention.

When the Upper Tribunal agreed with the FTT, and the ICO’s Head of Enforcement said it had “largely [rendered] our power to issue fines for breaches of PECR involving spam texts redundant”, it seemed clear that, for the time being at least, there was in effect a green light for spam texters and, by extension, other spam electronic marketers. The DCMS consultation is in response to calls for a change in the law from the ICO and others, such as the All Party Parliamentary Group (APPG) on Nuisance Calls, the Direct Marketing Association and Which?.

The consultation proposes three options: 1) do nothing; 2) lower the threshold from “likely to cause substantial damage or substantial distress” to “likely to cause annoyance, inconvenience or anxiety”; or 3) remove the threshold altogether, so any serious and deliberate (or reckless) contravention of the PECR provisions would attract the possibility of a monetary penalty. The third option is the one favoured by DCMS and the ICO.

If either the second or the third option is ultimately enacted, this could, I feel, lead to a significant reduction in the prevalence of spam marketing. The consultation document notes that (despite the fact that the MPN was overturned on appeal) the number of unsolicited spam SMS text messages sent fell significantly after the Niebel MPN was served. A robust and prominent campaign of enforcement under a legislative scheme which makes it much easier to impose penalties of up to £500,000, and much more difficult to appeal them, could put many spammers out of business, and discourage others. This will be subject, of course, to both the willingness and the resources of the ICO. The consultation document notes that there might be “an expectation that [MPNs] would be issued by the ICO in many more cases than its resources permit” but the ICO has said (according to the document) that it is “ready and equipped to investigate and progress a significant number of additional cases with a view to taking greater enforcement action including issuing more CMPs”.

There appears to be little resistance (as yet, at least) to the idea of lowering or removing the penalty threshold. Given that, and given the ICO’s apparent willingness to take on the spammers, we may well see a real and significant attack on the scourge. Of course, this only applies to identifiable spammers in the domestic jurisdiction – let’s hope it doesn’t just drive an increase in non-traceable, overseas spam.


If at first you don’t succeed…

The Information Commissioner’s Office (ICO) has uploaded to its website (24 October) two undertakings for breaches of data controllers’ obligations under the Data Protection Act 1998 (DPA). Undertakings are part of the ICO’s suite of possible enforcement actions against controllers.

One undertaking was signed by Gwynedd Council, after incidents in which social care information was posted to the wrong address, and a social care file went missing in transit between two sites. The other, more notably, was signed by the Disclosure and Barring Service (DBS), who signed a previous undertaking in March this year, after failing to amend a question (“e55”) on its application form which had been rendered obsolete by legislative changes. The March undertaking noted that

Question e55 of the application form asked the individuals ‘Have you ever been convicted of a criminal offence or received a caution, reprimand or warning?’ [Some applicants] responded positively to this question even though it was old and minor caution/conviction information that would have been filtered under the legislation. The individual’s positive response to question e55 was then seen by prospective employers who withdrew their job offers

This unnecessary disclosure was, said the ICO, unfair processing of sensitive personal data, and the undertaking committed DBS to amend the question on the form by the end of March.

However, the latest undertaking reveals that

application forms which do not contain the necessary amendments remain in circulation. This is because a large number of third party organisations are continuing to rely on legacy forms issued prior to the amendment of question e55. In the Commissioner’s view, the failure to address these legacy forms could be considered to create circumstances under which the unfair processing of personal data arises

The March undertaking had also committed DBS to ensure that supporting information provided to those bodies with access to the form be

kept under review to ensure that they continue to receive up to date, accurate and relevant guidance in relation to filtered matters

One might cogently argue that part of that provision of up-to-date guidance should have involved ensuring that those bodies destroyed old, unamended forms. And if one did argue that successfully, one would arrive at the conclusion that DBS could be in breach of the March undertaking for failing to do so. Breach of an undertaking does not automatically result in more serious sanctions, but those are available to the ICO, in the form of monetary penalties and enforcement notices. DBS might consider themselves lucky to have been given a second (or third?) chance, under which they must, by the end of the year at the latest, ensure that unamended legacy application forms are either rejected or removed from circulation.

One final point I would make is that no press release appears to have been put out about yesterday’s undertakings, nothing is on the ICO’s home page, and there wasn’t even a tweet from their Twitter account. A large part of a successful enforcement regime is publicising when action has been taken. The ICO’s own policy on this says:

Publicising our enforcement and regulatory activities is an important part of our role as strategic regulator, and a deterrent for potential offenders

Letting “offenders” off the publicising hook runs the risk of diminishing that deterrent effect.


The Crown Estate and behavioural advertising

A new app for Regent Street shoppers will deliver targeted behavioural advertising – is it processing personal data?

My interest was piqued by a story in the Telegraph that

Regent Street is set to become the first shopping street in Europe to pioneer a mobile phone app which delivers personalised content to shoppers during their visit

Although this sounds like my idea of hell, it will no doubt appeal to some people. It appears that a series of Bluetooth beacons will deliver mobile content (for which, read “targeted behavioural advertising”) to the devices of users who have installed the Regent Street app. Users will indicate their shopping preferences, and a profile of them will be built by the app.

Electronic direct marketing in the UK is ordinarily subject to compliance with the Privacy and Electronic Communications (EC Directive) Regulations 2003 (“PECR”). However, the definition of “electronic mail” in PECR is “any text, voice, sound or image message sent over a public electronic communications network which can be stored in the network or in the recipient’s terminal equipment until it is collected by the recipient and includes messages sent using a short message service”. In 2007 the Information Commissioner, upon receipt of advice, changed his previous stance that Bluetooth marketing would be caught by PECR, to one under which it would not be caught, because Bluetooth does not involve a “public electronic communications network”. Nonetheless, general data protection law relating to consent to direct marketing will still apply, and the Direct Marketing Association says:

Although Bluetooth is not considered to fall within the definition of electronic mail under the current PECR, in practice you should consider it to fall within the definition and obtain positive consent before using it

This reference to “positive consent” reflects the definition of consent in the Data Protection Directive, which says that it is:

any freely given specific and informed indication of his wishes by which the data subject signifies his agreement to personal data relating to him being processed

And that word “informed” is where I start to have a possible problem with this app. Ever one for thoroughness, I decided to download it, to see what sort of privacy information it provided. There wasn’t much, but in the Terms and Conditions (which don’t appear to be viewable until you download the app) it did say

The App will create a profile for you, known as an autoGraph™, based on information provided by you using the App. You will not be asked for any personal information (such as an email address or phone number) and your profile will not be shared with third parties

autoGraph (don’t forget the ™) is software which, in its own words, “lets people realise their interests, helping marketers drive response rates”, and it does so by profiling its users:

In under one minute without knowing your name, email address or any personally identifiable information, autograph can figure out 5500 dimensions about you – age, income, likes and dislikes – at over 90% accuracy, allowing businesses to serve what matters to you – offers, programs, music… almost anything

Privacy types might notice the jarring words in that blurb. Apparently the software can quickly “figure out” thousands of potential identifiers about a user, without knowing “any personally identifiable information”. To me, that’s effectively saying “we will create a personally identifiable profile of you, without using any personally identifiable information”. The fact of the matter is that people’s likes, dislikes, preferences, choices etc (and does this app capture device information, such as IMEI?) can all be used to build up a picture which renders them identifiable. It is trite law that “personal data” is data which relate to a living individual who can be identified from those data or from those data and other information which is in the possession of, or is likely to come into the possession of, the data controller. The Article 29 Working Party (made up of representatives from the data protection authorities of each EU member state) delivered an Opinion in 2010 on online behavioural advertising which stated that
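The arithmetic behind that point is worth spelling out. A back-of-the-envelope sketch (with invented and deliberately modest numbers, nothing from autoGraph itself) shows how quickly combinations of individually innocuous attributes become a unique fingerprint:

```python
# Toy illustration of identification by combination. None of these numbers
# comes from autoGraph, and real attributes are neither independent nor
# uniformly distributed -- the shape of the arithmetic is the point.
values_per_dimension = 4   # suppose each preference takes only four values
dimensions = 20            # a tiny fraction of the claimed 5,500 dimensions

distinct_profiles = values_per_dimension ** dimensions
world_population = 8_000_000_000

print(f"{distinct_profiles:,} possible profiles")                       # 1,099,511,627,776
print(f"{distinct_profiles // world_population:,} profiles per person")  # 137
```

Once the space of possible profiles dwarfs the population, a complete profile functions in practice as a label for one individual, whether or not a name or email address is ever attached to it.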

behavioural advertising is based on the use of identifiers that enable the creation of very detailed user profiles which, in most cases, will be deemed personal data

If this app is, indeed, processing personal data, then I would suggest that the limited Terms and Conditions (which users are not even pointed to when they download the app, let alone invited to agree to) are inadequate to mean that a user is freely giving specific and informed consent to the processing. And if the app is processing personal data to deliver electronic marketing, failure to comply with PECR might not matter, but failure to comply with the Data Protection Act 1998 brings potential liability to legal claims and enforcement action.

The Information Commissioner last year produced good guidance on Privacy in Mobile Apps which states that

Users of your app must be properly informed about what will happen to their personal data if they install and use the app. This is part of Principle 1 in the DPA which states that “Personal data shall be processed fairly and lawfully”. For processing to be fair, the user must have suitable information about the processing and they must be told about the purposes

The relevant data controller for Regent Street Online happens to be The Crown Estate. On the day that the Queen sent her first tweet, it is interesting to consider the extent to which her own property company are in compliance with their obligations under privacy laws.

This post has been edited as a result of comments on the original, which highlighted that PECR does not, in strict terms, apply to Bluetooth marketing


Upper Tribunal rules on complying “promptly” with an FOI request

The Upper Tribunal has ruled on what “promptly” means in the FOI Act. The answer’s no surprise, but it’s helpful to have binding authority

The Freedom of Information Act 2000 (FOIA) demands that a public authority must (subject to the application of exemptions) provide information to someone who requests it within twenty working days. But it goes a bit further than that: it says (at section 10(1))

a public authority must comply…promptly and in any event not later than the twentieth working day following the date of receipt

But what does “promptly” mean in this context? This issue has recently been considered by the Upper Tribunal, in John v ICO & Ofsted [2014] UKUT 444 (AAC). Matters before the Information Commissioner (IC) and the First-tier Tribunal (FTT) had turned on when the initial request for information had been made and responded to. The IC held that Ofsted had failed to respond within twenty working days, and Ofsted appealed this. Mr John argued before the FTT that although the IC had found in his favour to the extent that it held that Ofsted had failed to respond within twenty working days, it had failed to deal with the issue of whether Ofsted had responded promptly. The FTT found in Ofsted’s favour, but did not, Upper Tribunal Judge Jacobs observed, deal with Mr John’s argument on promptness. That was an error of law, which Judge Jacobs was able to remedy by considering the issue himself.

“Promptly”, he observed, has a range of dictionary meanings, some of which relate more to attitude (“willingly”, or “unhesitatingly”) and others more to time (“immediate”, or “without delay”). The context of section 10(1) of FOIA “is concerned with time rather than attitude, although the latter can have an impact on the former”. It is clear, though, that “promptly” does not mean, in the FOIA context, “immediately” (that, said Judge Jacobs, would be “unattainable”) but is more akin to “without delay”:

There are three factors that control the time that a public authority needs to respond. First, there are the resources available to deal with requests. This requires a balance between FOIA applications and the core business of the authority. Second, it may take time to discover whether the authority holds the information requested and, if it does, to extract it and present it in the appropriate form. Third, it may take time to be sure that the information gathered is complete. Time spent doing so, is not time wasted.

What is particularly interesting is that Judge Jacobs shows a good understanding of what the process for dealing with FOIA requests might be within Ofsted, and, by extension, other public authorities:

A FOIA request would have to be registered and passed to the appropriate team. That team would then have to undertake the necessary research to discover whether Ofsted held the information requested or was able to extract it from information held. The answer then had to be composed and approved before it was issued.

In the instant case all this had been done within twenty working days:

I regard that as prompt within the meaning and intendment of the legislation. Mr John has used too demanding a definition of prompt and holds an unrealistic expectation of what a public authority can achieve and is required to achieve in order to comply with section 10(1).

This does not mean, however, that it might not be appropriate in some cases to enquire into how long an authority took to comply.

The Upper Tribunal’s opinion accords with the approach taken in 2010 by the FTT, when it held that

The plain meaning of the language of the statute is that requests should be responded to sooner than the 20 working days deadline, if it is reasonably practicable to do so. (Gradwick v IC & Cabinet Office EA/2010/0030)

It also accords with the IC’s approach in guidance and decision notices under FOIA, and its approach under the Environmental Information Regulations 2004 (where the requirement is that “information shall be made available…as soon as possible and no later than 20 working days”).

Most FOI officers will greet this judgment as a sensible and not unexpected one, which acknowledges the administrative procedures that are involved in dealing with FOIA requests. Nonetheless, as a binding judgment of an appellate court, it will be helpful for them to refer to it when faced with a requester demanding a response quicker than is practicable.
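Incidentally, the section 10(1) longstop itself is mechanical enough to compute. Here is a sketch which counts weekends only; a real calculator would also have to exclude the public holidays that FOIA’s definition of “working day” leaves out.

```python
# Sketch: the twentieth working day following the date of receipt (s.10(1)
# FOIA). Counts Monday-Friday only; public holidays, which the Act also
# excludes from "working day", would need a lookup table and are omitted.
from datetime import date, timedelta

def foia_longstop(received: date, working_days: int = 20) -> date:
    day, counted = received, 0
    while counted < working_days:
        day += timedelta(days=1)   # "following the date of receipt" --
        if day.weekday() < 5:      # the day of receipt itself doesn't count
            counted += 1
    return day

print(foia_longstop(date(2014, 10, 27)))  # -> 2014-11-24
```

“Promptly”, of course, means the answer may well be due before that date; the longstop is only the latest lawful moment.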

Appeals and Cross Appeals

A further issue determined by the Upper Tribunal concerned what should happen if both parties to a decision notice disagree with some or all of its findings and want to appeal, or at least raise grounds of appeal: must there be an appeal and cross-appeal, or can the respondent party raise issues in an appeal by the other party? Judge Jacobs ruled, in a comprehensive and complex analysis that merits a separate blog post (maybe on Panopticon?), that “although cross-appeals are permissible, they are not necessary”.


Clegg calls for a data protection public interest defence (where there already is one)

UPDATE: 22.10.14

It appears that Clegg’s comments were made in the context of proposed amendments to the Criminal Justice and Courts Bill, and the Guardian reports that

The amendments propose a new defence for journalists who unlawfully obtain personal data (section 55 of the Data Protection Act) where they do so as part of a story that is in the public interest

But I’m not sure how this could add anything to the existing section 55 provisions which I discuss below, which mean that an offence is not committed if “the obtaining, disclosing or procuring [of personal data without the consent of the data controller] was justified as being in the public interest” – it will be interesting to see the wording of the amendments.

Interestingly it seems that another proposed amendment would be to introduce custodial sentences for section 55 offences. One wonders if the elevated public interest protections for journalists are a sop to the press, who have long lobbied against custodial sentences for this offence.

END UPDATE.

In an interesting development of the tendency of politicians to call for laws which aren’t really necessary, Nick Clegg has apparently called for data protection law to be changed to say what it already says.

The Telegraph reports that Nick Clegg has called for changes to data protection, bribery and other laws to “give journalists more protection when carrying out their job”. The more informed of you will have spotted the error here: data protection law at least already carries a strong exemption for journalistic activities. Clegg is quoted as saying

There should be a public interest defence put in law – you would probably need to put it in the Data Protection Act, the Bribery Act, maybe one or two other laws as well – where you enshrine a public interest defence for the press so that where you are going after information and you are being challenged, you can set out a public interest defence to do so

Section 32 of the Data Protection Act 1998 provides an exemption from almost all of a data controller’s obligations under the Act regarding the processing of personal data if

(a) the processing is undertaken with a view to the publication by any person of any journalistic…material,

(b) the data controller reasonably believes that, having regard in particular to the special importance of the public interest in freedom of expression, publication would be in the public interest, and

(c) the data controller reasonably believes that, in all the circumstances, compliance with that provision is incompatible with [the publication by any person of any journalistic...material]

This provision (described as “extremely wide” at Bill stage [1]) was considered at length in Part H of the report of the Leveson Inquiry into the Culture, Practices and Ethics of the Press, which looked at the press and data protection. Indeed, Leveson recommended section 32 be amended and narrowed in scope. Notably, he recommended that the current subjective test (“the data controller reasonably believes”) should be changed so that section 32 could only be relied on if inter alia “objectively the likely interference with privacy resulting from the processing of the data is outweighed by the public interest in publication” (emphasis added). I know we’ve all forgotten about Leveson now, and the Press look on the report as though it emerged, without context, from some infernal pit, but even so, I’m surprised Mr Clegg is calling for the introduction of a provision that’s already there.

Perhaps, one might pipe up, he was talking about the section 55 DPA offence provisions (indeed, the sub-heading to the Telegraph article does talk in terms of journalists being protected “when being prosecuted”). So let’s look at that: section 55(2)(d) provides in terms that the elements of the offence of unlawful obtaining etc of personal data are not made out if

 in the particular circumstances the obtaining, disclosing or procuring was justified as being in the public interest

So, we have not just a public interest defence to a prosecution, but, even stronger, a public interest provision which means an offence is not even committed if the acts were justified as being in the public interest.

Maybe Mr Clegg thinks that public interest provision should be made even stronger when journalists are involved. But I’m not sure it realistically could be. Nonetheless, I await further announcements with interest.

[1] Hansard, HC, vol 315, col 602, 2 July 1998 (as cited in Philip Coppel QC’s evidence to the Leveson Inquiry).
