GDPR for Sole Traders – Contacts and Communications


I’ve had variations of this question about contacts a few times since the first post, and I can understand the confusion around it. A few people have asked whether lists of friends, family, or similar fall under GDPR (technically, yes, which is where the magic word ‘reasonable’ keeps coming into play), and a lot of the questions seem to centre on the use of personal details for everyday business purposes.

Now, there is plenty of advice out there about legitimate interests as a basis for processing, notifications to send out to your address book, and so on. But if you are not collecting or processing personal data beyond contact details for clients, what do you actually need to do to be compliant?

The bad news, from one point of view, is that this is still personal data: it still falls under the GDPR, and therefore you need to think about what you are doing with it.

The good news is that it is extremely unlikely you need to do anything particularly special about it. The reason is that GDPR recognises two categories of data – ordinary personal data, and special category (often called sensitive) personal data. Sensitive personal data is personal data that relates to a protected characteristic such as health, ethnicity, religion, or sexual orientation. Ordinary personal data is personally identifying data which does not relate to a protected characteristic – an e-mail address or phone number would be personal data, but would not fall under sensitive personal data.

Unambiguous consent

The obvious question then is why it matters – and the answer is simple. Sensitive personal data requires explicit consent for any form of collection or processing – in other words, to collect it or do anything with it, the data subject must be asked ‘do you agree to us collecting and holding this data for these purposes?’, and a very definite yes is required from them before you can use it.

Non-sensitive personal data (for want of a better term) requires only unambiguous consent. This means the data subject must have taken an affirmative action indicating their consent, but some aspects can be implied rather than having to be spelled out. So, in the case of contact details, let’s say someone gives you a business card and says to get in touch. In that moment they have provided you with their personal data and given you unambiguous consent to use it for the purpose of contacting them. The same would apply if they sent you an e-mail asking to start a business relationship, or phoned you.

However, here’s where the whole purpose thing comes in again. While it is reasonable to say that someone who has given you their contact details and asked you to get in touch has given unambiguous consent to be contacted (or, if you’re a larger company – and if you are, we can talk consultancy fees, or you can talk to your own GDPR experts – for one of your sales or relationship team to get in touch), they have not given consent to be signed up to your automated mailing list. If you think they might be interested in it, and want them to sign up, then get in touch and ask for their consent to use their data that way – and then make sure you keep a record of that consent (saving an e-mail would count).
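As an illustration of what ‘keeping a record’ might look like in practice, here is a minimal sketch of a consent log kept as one JSON line per consent event. The field names and file names are my own invention rather than anything prescribed by the GDPR, and the saved e-mail itself remains the real evidence – this just makes it easy to find.

    import json
    from datetime import datetime, timezone

    # Hypothetical consent log: one JSON line per consent event.
    # Field names are illustrative, not prescribed by the GDPR.
    def record_consent(path, contact, purpose, evidence):
        entry = {
            "contact": contact,        # who gave consent
            "purpose": purpose,        # what they consented to
            "evidence": evidence,      # where the proof lives (e.g. a saved e-mail)
            "recorded_at": datetime.now(timezone.utc).isoformat(),
        }
        with open(path, "a", encoding="utf-8") as log:
            log.write(json.dumps(entry) + "\n")

    # Example: a client replied by e-mail agreeing to join the mailing list.
    record_consent(
        "consent-log.jsonl",
        "client@example.com",
        "automated mailing list",
        "reply e-mail saved as 2018-05-20-client-consent.eml",
    )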

When someone makes personal data public it’s a slightly different matter. By having some of my contact details publicly available I am providing unambiguous consent to being contacted using them – although signing me up to an automated mailing list with them would, again, be another matter.

Please note that in this particular article I am looking only at personal data used for everyday business purposes, and what is reasonable to do with it. In terms of contacts and communications, a sole trader is likely already compliant, unless they are carrying out some sort of automated marketing rather than handling things directly.

To sum this all up in some simple answers (and I still believe the GDPR can be summed up in a few sentences for the vast majority of cases, but that’ll come later):

  • someone giving you their contact details is providing all the consent you need to contact them, but nothing more – it does not, for example, cover subscribing them to an automated list
  • if you want to do anything other than contact them, then to be compliant you need to very clearly inform them of what you want to do and confirm that they consent to it
  • you need to be very clear with people you contact about how they can ask for their data to be removed, and remove it promptly if they ask
  • take reasonable security precautions with these contact details (I would hope that most sole traders would consider them valuable enough to protect from disclosure in any case) – see the sketch just below for one simple approach
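As one example of a reasonable precaution, here is a rough sketch of encrypting an exported contacts file at rest. It assumes the third-party Python ‘cryptography’ package, and the file names are purely illustrative – this is one possible approach, not the only sensible one.

    from cryptography.fernet import Fernet

    # Generate a key once and keep it somewhere safer than the data itself
    # (a password manager, for instance).
    key = Fernet.generate_key()
    cipher = Fernet(key)

    # Encrypt the exported contacts file.
    with open("contacts.csv", "rb") as f:
        encrypted = cipher.encrypt(f.read())

    with open("contacts.csv.enc", "wb") as f:
        f.write(encrypted)

    # To read the contacts back later:
    # plaintext = cipher.decrypt(open("contacts.csv.enc", "rb").read())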

I’ve put together a quick and dirty tutorial on how to use one of the better encryption tools out there. I’ll go into more depth once I’m back at a real computer and have a little more time, maybe with a full video exploration and explanation of the different options. I’ll also be looking to do an evaluation of cloud storage options and how they play into the situation. As always, if there are particular questions or particularly urgent areas then drop me a line either here or on twitter (@coffee_fueled).

Security by Obscurity in Policies and Procedures

In policy management, I’ve noticed a problem of paranoia and an attachment to security by obscurity. There is a common idea that keeping internal business policies and procedures secret makes them more secure. I should note that I’m a big fan of properly applied paranoia, but I think that here the policy management people are behind their more technical counterparts in terms of security methods.

Security by Obscurity

Security by obscurity is an old and much-ridiculed concept, which has given us some of the most damaging security failures we will ever see. It is the idea that keeping your security measures secret from the enemy will somehow make them safer. There is some truth to this if you can absolutely guarantee that the secret will be kept.

Of course, if you can guarantee that a secret will be kept from all potential attackers then you’ve already got a perfect security system, and I’d really like to buy it from you.

To put it simply, if your non-technical security relies on keeping secrets (not secrets such as passwords and the like, which are a different thing), then it is incredibly fragile – not only because it is untested outside your own organisation (and therefore scrutinised only by people who have a vested interest in not discovering holes), but because it is simply not possible to keep such a secret effectively against a determined attacker. Since policies are published internally, it’s almost impossible to ensure their secrecy, and you will have no idea when they leak.

Opening up policy

Unfortunately, this attitude seems widespread enough that there won’t be any change soon – but there could be. If companies were simply to open up and discuss their policies – even without identifying themselves – then it would help. The idea is to look at the policies and procedures of an entity and spot the gaps they don’t cover. Which areas are left unmonitored and unaudited due to historical oversights? Which people don’t have the right procedural guidance to protect them against leaking information to an anonymous caller or customer?

These are things that people might not notice within a company, or might notice, grumble about, and point at someone else to fix. Assuring ourselves that our policies are secret and so no one will exploit these vulnerabilities (and yes, they are just as much vulnerabilities as software and hardware ones – just because wetware is squishier doesn’t mean that each node, or the network as a whole, can’t be treated in a similar way) is exactly the same as keeping a cryptographic algorithm proprietary and secret: all it assures is that no one will know when it’s broken. And when (inevitably) the security holes are discovered, people will lose faith – not to mention the damage from exploitation of the vulnerability itself.

At the very least we should look at ways to collaborate on policies. I can see meetings where subject matter experts from different, but related, companies get together to discuss policies – anonymously or not (the Chatham House Rule, anyone?) – and find the mutual holes.

Essentially we need to start thinking about security policies and procedures the same way we think about software – and they really aren’t that different. The main difference is that wetware is a much more variable and error-prone computational engine, particularly when networked in a corporation, but then again I’ve seen plenty of unpredictable behaviour from both hardware and software.

Basically, if we’re ever going to beat the bad guys, particularly the ones who can socially engineer, we need to be more open with each other about what we’re doing. Not simply saying ‘here’s everything’, but genuinely engaging and discussing strategies. In this scenario there are definitely two sides, and those of us on the good guys’ team need to start working together better on security, despite any rivalries, mistrust or competition. Working together, openly, on security benefits us all, and if we do it in good faith, building trust as we go, then it will only hurt the bad guys.