Santa and the GDPR

There’s a meme that’s been doing the rounds for a while, and with Christmas approaching has become especially popular.

Firstly, Article 4 is made up of definitions (https://gdpr-info.eu/art-4-gdpr/), so it’s difficult to be in breach of, but let’s ignore that and run with the joke.

Lawful Basis

The first thing to do is to establish the lawful basis under which the processing takes place. I’d argue that contract rather than legitimate interests applies here, since there is a standing agreement between the parties (those who believe in Santa) and Claus Corp. that belief and good behaviour will bring gifts (bad behaviour, obviously, bringing coal or similar). While the parties in question are (in the vast majority of cases) under the age of capacity and cannot enter contracts themselves, their guardians can consent to enter into and perform the gift/behaviour exchange contract on their behalf (https://gdpr-info.eu/art-8-gdpr/).

Now this does raise the question of children or other parties who do not believe. In this instance there is no mechanism for Claus Corp. to collect or process any data about these parties and so again I see no problems here. Data is only collected on those who believe in Santa, and therefore also agree to the gift/behaviour contract (or rather will have guardians willing to do so on their behalf).

There is an argument that legitimate interests could apply as a lawful basis, but the additional balancing-test responsibility, the unrestricted right to object, and other requirements make it less suitable in this instance.

Right to Object

Under the contract lawful basis data subjects do have the right to object (https://gdpr-info.eu/art-21-gdpr/) to the processing of their data, meaning the processing must cease unless the controller can demonstrate compelling legitimate grounds which outweigh the interests of the data subject. Of course, objecting would raise questions about gifts later being received, so if an objection is received processing would cease in any case. It is important that in any correspondence with the data subject, or their guardian, Claus Corp. notifies them of their right to object and the mechanism for doing so, but this is a relatively simple requirement.

Notification Obligation and Restriction of Processing

Claus Corp. does have a responsibility to communicate any rectification or erasure of personal data, or restriction of processing (https://gdpr-info.eu/art-19-gdpr/). Of course, the contract basis does not provide an automatic right to erasure; we have already established that the processing is lawful, and Claus Corp. famously keeps their data worryingly up to date. The only instance that could apply is one in which the controller, Claus Corp., no longer needs the data for the purposes of processing but the subject wishes for the data to be retained for legal purposes. Given the unlikelihood of any legal case depending on Claus Corp. data, and their extraterritoriality with respect to any treaties allowing legal authorities access to their databases, this is not a scenario that needs to be examined in depth.

Automated decision-making and profiling

This is an area that could raise issues for Claus Corp. While the details of their decision-making process are obviously proprietary, it is safe to assume at least some level of automation is involved due to the sheer volume of data gathered (https://gdpr-info.eu/art-22-gdpr/). However, in this instance not only are no legal effects produced, but the processing is necessary to enter into and perform the gift/behaviour exchange contract. Whether the data subject (or their guardian(s)) gives explicit consent is a more challenging question, though by believing, and communicating that belief to their guardians, they are making an active decision to enter into the contract. I suspect this would be seen as explicit consent, but without test cases there is still some uncertainty here.

As such the best decision for Claus Corp. would be to provide a means of appeal to obtain human intervention (elf intervention is not counted as sufficient under the GDPR, and indeed elf processing of data in the first place may be considered automated decision-making rather than manual – another instance where a test case would help to establish precedent).

So long as Claus Corp. makes some minimal changes to their working practices, and applies appropriate security for the data being processed (there is no indication or suggestion that they are not doing so), I see nothing to suggest they are in breach of any article of the General Data Protection Regulation.

Having said that, I do have some concerns around the Tooth Fairy agency, who are clearly processing sensitive medical data and provide no clear channels for communication, unlike Claus Corp. – but that will wait for another article.

Initiative Q and Social Proof

Initiative Q is tomorrow’s payment network, an innovative combination of cryptocurrency and payment processor which will take over the world and make all of its early investors rich beyond their wildest dreams. Even better, those early investors don’t need to invest any of their money, just give an e-mail address and register. Then they can even invite friends and family (from a limited pool of invites) to take part too.

Since they’re not asking for anything (there have been suggestions it’s an e-mail harvesting scheme, but there are much easier, cheaper and quicker ways to simply grab e-mails and social data), there’s nothing for those initial investors to lose – so why not get involved?

Well, that’s the pitch anyway. I’m not going to comment on whether or not I think Initiative Q is malicious (I don’t) or a scam (that’s a longer discussion), just on what that initial investment involves and what social engineering they’re using here to expand their initial user base. It’s also important to note that the initial investors are definitely putting a stake into the game when they sign up, especially when they start inviting friends and family, and this is something that people should be aware of before jumping in.

I can’t imagine anyone not seeing the cynical manipulation of the front page, with its ticking estimate of the future value of the next spot, so I won’t dwell on it other than to say it’s a rather clever way to apply a time limit to decision-making (and it does at least make it clear that only 10% of the future-value tokens are available after registration, the rest depending partly on inviting five friends and on unspecified future tasks). I’ll be honest, this type of manipulation does tend to set my alarm bells ringing.

A false sense of urgency and/or scarcity (and this has both – urgency with the value ticking down, scarcity on the small number of invites per person) is one of the classic pillars of social engineering. It tends to help convince by short-circuiting the decision-making process, since there’s no time to consider carefully and instead you must buy NOW!

The more insidious part to me is the idea that sign-ups are not committing anything, and therefore are safe. I blame social media for this one. What’s being invested isn’t financial worth, but individuals’ social capital, as they are used to convince friends and family. This is social proof in action – large numbers of people, and smaller numbers of close friends and family, are another way to let our lazy brains off the hook on examining a proposition, so that we act on autopilot and sign up.

Worse, once someone sees themselves as an investor with a substantial future stake, they are much more likely to aggressively defend the system if anyone tries to cast doubt on it (there have been a few examples of this recently), and to refuse to accept any evidence that it might not be everything that was promised. By encouraging people to invite five friends, the scheme also promotes consistency – no one wants to go back on what they said to five of their friends (or strangers, in a lot of cases) in public, and so any actual evidence against the system will be ignored or discredited.

So the weights that need to be considered aren’t financial – instead, by signing up you are committing your individual integrity and social capital to a system which has explicitly stated that all it’s after at this point is growth. It’s fair to say there is nothing particularly innovative about the Initiative Q technology, despite their claims, and this particular model has been tried before with less-polished marketing. Before signing up, please think carefully: remember that they aren’t promising anything (the initiative is very careful to keep its language conditional) while you’re putting a piece of your own identity into the scheme, committing to defend it against sceptics, to carry out unspecified future tasks, and to drag your friends in using your own credibility.

Why Work in Cyber Security? (Again)

Last year I gave a couple of presentations to some college students about why they really, really should look to work in cyber security, and how to get into it. That time’s rolled around again, and I realised I’d lost the original source document for the slides, so had to do a quick update from scratch. This is the result.

Vulnerability Scanning and Pen Testing

Recently I came across an article which nicely put forward a point I’ve been arguing for a while, often without much success. Unlike Babar, I’m not amazed that there’s confusion between vulnerability scanning and pen testing. The aim of both is to find vulnerabilities, and if the report is being read by someone who isn’t a security specialist and doesn’t know exactly what they’re looking for, then a vulnerability scan looks awfully similar.

The difference is vital. A good penetration test will often start with a vulnerability scan; a bad one stops there (aside from replacing the logo on the output). I’ve seen plenty of reports from budget pen tests which I could have spun out of Nessus or Qualys myself. This isn’t always the fault of the pen test company (although a good one should always advise customers of the requirements for effective engagement) – if you’re given only a couple of days’ engagement for a black box test (one where you have minimal information about the target) then you aren’t going to get much more than a vulnerability scan out of it.

The problem comes when those short engagements, with pen testers not provided proper information, are counted as pen tests for security purposes. They aren’t – they will not discover anything an automated scan wouldn’t, and so the company is simply wasting money on security theatre. Either don’t bother pretending that a pen test has occurred, or engage a company fully and effectively to assess a system over a longer period of time, with all the information and documentation available internally. This is one of those cases where a budget solution is worse than none at all.

This isn’t intended to diminish vulnerability scanning – it’s a vital activity that everyone should carry out on a regular basis (in my perfect world near-realtime; in reality, often monthly) to assess and deal with vulnerabilities. These results can then simply be handed over to a penetration testing company to save you a little money – pen testers are not the enemy. They are there to help you find holes; giving them additional information means time isn’t wasted looking for non-existent holes, and everyone gets better intelligence out at the end.
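To make the “regular basis” point concrete, the useful part of a recurring scan is tracking findings by severity between runs. Here’s a minimal sketch that summarises a scanner’s CSV export – the column names (“Host”, “Severity”, “Plugin Name”) are assumptions modelled loosely on a Nessus-style export, not a guaranteed format:

```python
# Sketch: triage a vulnerability scanner's CSV export by severity.
# Column names are assumptions (Nessus-style), not a guaranteed format.
import csv
import io
from collections import Counter

def summarise_findings(csv_text: str) -> Counter:
    """Count findings per severity so recurring scans can be compared."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return Counter(row["Severity"] for row in reader)

sample = """Host,Severity,Plugin Name
10.0.0.5,Critical,Outdated TLS library
10.0.0.5,Medium,Directory listing enabled
10.0.0.9,Medium,Weak cipher suite
"""

summary = summarise_findings(sample)
print(summary)  # Counter({'Medium': 2, 'Critical': 1})
```

Comparing these counts month on month is also exactly the kind of pre-digested information a pen test company can use to skip the non-existent holes.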

Thinking Inside the Box – Insider Threat Modelling

Earlier this week I was presenting at Forum Events’ Security IT Summit on insider threat modelling (get the presentation title joke? I was rather proud of it), which seemed to go well. A few people asked for the slides, and I’m always happy to share something that’s found useful – I just wanted to add some context as well.

Presenting at the IT Security Summit

For those who are not aware, threat modelling is simply an approach to making sure a system is secure, as part of the design phase (and ideally through operational life and decommissioning), by anticipating possible threats or attacks. There’s a variety of methodologies, of varying complexity and suited to different purposes, but they are all very focused on external attackers (this is not a bad thing – insider and external are different classes of threat, and a model attempting to address both would likely be spread thin). In the presentation I wanted to suggest the misty outlines of a way to start applying the same modelling to insider threats, and to make the problem we’re trying to solve clear. Given the level of engagement and discussion that followed, and some of the feedback I received, I think it went well. On a related note, I now know how I’m going to be using my dissertation, given the opportunity.

As always, commentary, feedback, or contacts more than welcome. Without further ado, onto the slides.

Thinking-Inside-the-Box

GDPR for Sole Traders – Controllers, Processors and ICO Registration

This morning my other half mentioned a concern that was popping up about ICO registration fees for sole traders working with personal data under GDPR. Since there seems to be a lot of confusion (including from the ICO phone staff themselves, who have reportedly given varying answers to the same people) I thought I’d try to help. All the usual provisos apply, and I am working only from the information published by the ICO – simply summarising it and cutting out some of the parts I don’t think are relevant.

First of all, the way the ICO is funded and the requirements for notification/registration have changed. I won’t go into the old rules, but the new ones are quite simple – there is now a legal requirement for data controllers to pay a data protection fee to the ICO. These fees are not unreasonable, running from £40 to £2,900 per year depending on size and turnover, and you can get a £5 discount for paying by direct debit. Data processors do not need to pay this fee. There are certain other exemptions, but frankly they only complicate the issue for most sole traders; I will talk about them shortly. The core issue is the difference between controllers and processors – which is fundamentally quite simple, but poorly explained.
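The fee structure itself is just a tier lookup plus the direct debit discount. A quick illustrative sketch – the tier names and the £60 middle tier are assumptions based on the ICO’s published three-tier structure (only the £40 and £2,900 endpoints and the £5 discount appear above), so check ico.org.uk before relying on the numbers:

```python
# Illustrative sketch of the ICO data protection fee as a tier lookup.
# Tier names and the middle-tier amount are assumptions; verify against
# the ICO's current published fee structure.
FEES = {"tier1": 40, "tier2": 60, "tier3": 2900}  # pounds per year
DIRECT_DEBIT_DISCOUNT = 5  # pounds

def data_protection_fee(tier: str, direct_debit: bool = False) -> int:
    """Annual fee for a given tier, with the direct debit discount applied."""
    fee = FEES[tier]
    return fee - DIRECT_DEBIT_DISCOUNT if direct_debit else fee

print(data_protection_fee("tier1", direct_debit=True))  # 35
```

For most sole traders the practical question isn’t the amount, though – it’s whether they owe the fee at all, which is the controller/processor distinction below.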

The definition is this – a data controller determines the purposes for which data is processed. For most sole traders and small businesses who work with personal data, this will not be the case – in the vast majority of cases you will be fulfilling a commission or contract for a client, which means that you are not determining the purpose of processing any data they have passed you. You will still have the same duty to protect that data under the GDPR of course, but are not the data controller and so do not have to pay the data protection fee to the ICO. If, however, you are gathering the data, determining the purposes of processing and analysis, or anything similar then you will either need to pay the fee or rely on one of the other exemptions.

A data processor meanwhile may process the data, but does not decide the purposes of processing. As an example – a translator asked to translate a CV by an individual or a company is the data processor, while the client would be the data controller. A data processor does not have to pay the ICO data protection fee.

What if I am a data controller?

If you are a data controller there are certain exemptions which might apply, for example if your only purpose in processing data is staff administration you are exempt from the fee. In general though you will need to pay it – the exemptions are clear and narrow in terms of the data you can process and the purposes of processing it. Unless everything you process (where you are defining the purposes of processing it, not a client) falls under the exemptions then you will need to pay the fee.

The exemptions are listed, clearly, in the ICO’s own literature:

  • Staff administration
  • Advertising, marketing and public relations
  • Accounts and records
  • Not-for-profit purposes
  • Personal, family or household affairs
  • Maintaining a public register
  • Judicial functions
  • Processing personal information without an automated system such as a computer

Take special note of that last one – the definition the ICO is using here of an automated system is different to the GDPR’s definition of automated processing. In essence, if you are using any system with any level of automation for any point in the processing of data, then you do not fall under this exemption. Hopefully the others and the reasons for them are clear.

As always, feel free to send me any questions either here or on Twitter, and hopefully the picture is a little clearer for some of you now.

Security and Compliance

A common error is to think that because a system, solution, process, company, person, or other entity is not compliant with a particular standard, it’s insecure. It’s not a grievous error, and it’s an understandable one to make which doesn’t cause much harm (except occasionally driving more investment in security, which is usually a good thing anyway).

The flip side of this is where the problem comes in – when the assumption is made that because an entity is compliant with a particular standard or standards it is secure.

The reason people often fall into this error is that compliance is straightforward (not necessarily easy): there’s a very binary set of conditions to meet in order to be compliant, and if you fail them you’re not compliant. It’s a very easy idea to understand, and quite comforting to conflate with the idea of being secure – because compliance is achievable. Security is not.

To borrow an analogy from health, compliance is the regular checkup with the doctor who confirms you’re in a fine state of health, because you don’t have any of the symptoms on the checklist. A year later the heart attack hits – not only were the predictive symptoms not on the list, but the doctor’s reassuring words convinced the patient they could be a little lazier and put less work into their fitness.

It isn’t that compliance and standards aren’t important – they are, very much so. It’s that they simply should not be the single measuring point used to determine whether or not something is secure; in fact, they shouldn’t be used to determine whether something is secure at all. Within days of a standard being written, it will have sections that are out of date. Security moves fast, and standards bodies have no real way to keep up without moving to a living-standards model – which would be a completely different discussion.

Security is an aspiration, not a goal that can be reached through checklists. The checklists help, but the important aspects of genuinely continuously improving security are a company culture which takes security seriously, security subject matter experts always looking to make justified improvements to reduce risk, a high value placed on using actionable threat intelligence to anticipate and prevent incidents, and instilling a culture of security knowledge, engagement and awareness throughout the entirety of an organisation.

Or, to sum it all up with a nice little aphorism: we must be careful not to mistake the map for the territory.

Why Work in Cyber?

Occasionally I give presentations to students (and the general public) about different aspects of cyber security, including why to work in cyber. I thought sharing the presentations I build for these might be helpful. Source files and presentation notes can also be made available if requested.

This one’s targeted at 6th form students for an employability event, and is very much a personal view of why I think people are likely to be interested in a cyber security career (and why I would have been interested in it at the same age, had I known it was an option rather than looking at a Physics degree). After all, we need everyone we can get if anything’s going to improve.

Working-in-Cybersecurity

GDPR for Sole Traders – Contract Lawful Basis

Under GDPR a lawful basis is the justification to process personal data. While I’ve focused mainly on consent, as the easiest to understand, the others are worth looking at – and contract may be a better fit for much of the work that sole traders carry out. First, a quick examination of what the lawful bases mean and how you can apply them.

Essentially a lawful basis defines the purpose for which you’re holding the data, and each provides a different set of rights for data subjects to allow for that usage. Consent, for example, allows subjects to request erasure, to object to the storage of their data or its processing for a specific purpose, and to demand that any data held on them be exported in a portable format. Obviously these rights are not suitable for all purposes.

The contract lawful basis allows data subjects to object, which means they can ask you to stop processing their data for particular purposes. They cannot ask that you erase the data, nor that it be exported in a portable format (or rather, they can ask for these things, but you have a legal right to refuse). If they do ask you to stop processing their data for a specific purpose then you must comply – unless you have compelling legitimate reasons to process the data, or are doing so to establish, exercise or defend a legal claim (e.g. if a client has refused to pay, you would not have to stop processing their data in order to pursue payment).

In order to establish contract as a lawful basis you need to:

  • assess the data you are holding (most likely not much more than contact details);
  • ensure you are only using it for that purpose (to negotiate or perform a contract);
  • ensure you do not hold it longer than necessary (once the contract is complete, if there are no legal reasons to hold onto it for a period of time, such as auditing or chasing payment, you should delete it);
  • record that you are holding it under that basis.

In most cases all you’d need is a footer on e-mails explaining that you will hold and use contact information in order to negotiate and complete a contract, and will retain it for compliance with any applicable laws after the contract is complete.
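The retention step is the one most easily automated. A minimal sketch of flagging contact records held past their retention date – the six-year period here is a hypothetical placeholder, not legal guidance; the right period depends on the laws that apply to you (tax, audit, and so on):

```python
# Sketch: flag contact records held past a retention period after the
# contract completes. The six-year period is a hypothetical placeholder;
# the correct period depends on the applicable legal requirements.
from datetime import date, timedelta

RETENTION = timedelta(days=6 * 365)  # hypothetical six-year hold

def due_for_deletion(contract_completed: date, today: date) -> bool:
    """True once a record has been held longer than the retention period."""
    return today - contract_completed > RETENTION

print(due_for_deletion(date(2010, 1, 1), date(2018, 11, 1)))  # True
print(due_for_deletion(date(2017, 1, 1), date(2018, 11, 1)))  # False
```

Running something like this against your contacts file once a month keeps the “no longer than necessary” promise honest without any real effort.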

If you want to use those contact details for any marketing, either during or after the contract, that is a separate purpose (arguably you might be able to rely on legitimate interests, but honestly consent is far more suitable here) and you would need to inform the client and get their explicit consent. They would also need a simple method to object (e.g. “sign up to my mailing list for further details”, followed on each mailing by an e-mail address to contact or a link to click if they want to be removed).

As always, any questions please ask here, or via Twitter (I don’t mind dealing with private queries, but it’s easier to go through those two than answer the same questions several times). Also please remember if you have particular legal concerns then you should speak to an expert with insurance rather than leaning on this article for any legal opinion – I’m purely looking to clarify the available guidance and target it more appropriately at those who seem to have slipped under the ICO’s radar in terms of advice.