Initiative Q and Social Proof

Initiative Q is tomorrow’s payment network: an innovative combination of cryptocurrency and payment processor which will take over the world and make all of its early investors rich beyond their wildest dreams. Even better, those early investors don’t need to invest any of their money – just give an e-mail address and register. Then they can even invite friends and family (from a limited pool of invites) to take part too.

Since they’re not asking for anything (there have been suggestions it’s an e-mail harvesting scheme, but there are much easier, cheaper and quicker ways to simply grab e-mail addresses and social data), there’s nothing for those initial investors to lose – so why not get involved?

Well, that’s the pitch anyway. I’m not going to comment on whether I think Initiative Q is malicious (I don’t) or a scam (that’s a longer discussion) – just on what that initial investment actually involves, and what social engineering is being used here to expand the initial user base. It’s important to note that the initial investors are definitely putting a stake into the game when they sign up, especially once they start inviting friends and family, and people should be aware of this before jumping in.

I can’t imagine anyone not seeing the cynical manipulation of the front page, with its estimated future value of the next spot, so I won’t dwell on it other than to say it’s a rather clever way to apply a time limit to decision-making as the value ticks down (it does at least make clear that only 10% of the future-value tokens are available on registration, with the rest depending partly on inviting five friends and on unspecified future tasks). I’ll be honest: this type of manipulation does tend to set my alarm bells ringing.

A false sense of urgency and/or scarcity (and this has both – urgency from the value ticking down, scarcity from the small number of invites per person) is one of the classic pillars of social engineering. It convinces by short-circuiting the decision-making process: there’s no time to consider carefully, so you must buy NOW!

The more insidious part, to me, is the idea that signing up commits nothing and is therefore safe. I blame social media for this one. What’s being invested isn’t financial worth but individuals’ social capital, as they are used to convince friends and family. This is social proof in action – large numbers of people overall, and smaller numbers of close friends and family, let our lazy brains off the hook when examining a proposition, so that we act on autopilot and sign up.

Worse, once someone sees themselves as an investor with a substantial future stake, they are much more likely to aggressively defend the system if anyone tries to cast doubt on it (there have been a few examples of this recently), and to refuse to accept any evidence that it might not be everything that was promised. By encouraging people to invite five friends you’re also promoting consistency – no one wants to go back on what they said in public to five friends (or, in a lot of cases, strangers), and so any actual evidence against the system will be ignored or discredited.

So the weights that need to be considered aren’t financial – by signing up you are committing your personal integrity and social capital to a system which has explicitly stated that all it’s after at this point is growth. It’s fair to say there is nothing particularly innovative about the Initiative Q technology, despite their claims, and this particular model has been tried before with less polished marketing. Before signing up, please think carefully. Remember that they aren’t promising anything (the initiative is very careful to keep its language conditional), while you’re putting a piece of your own identity into the scheme – committing to defend it against sceptics, to carry out unspecified future tasks, and to drag your friends in using your own credibility.

Why Work in Cyber Security? (Again)

Last year I gave a couple of presentations to some college students about why they really, really should look at working in cyber security, and how to get into it. That time has rolled around again, and I realised I’d lost the original source document for the slides, so had to do a quick update from scratch. This is the result.

Vulnerability Scanning and Pen Testing

Recently I came across an article which nicely put forward a point I’ve been arguing for a while, often without much success (Mahmood, 2018). Unlike Babar, I’m not amazed that there’s confusion between vulnerability scanning and pen testing. The aim of both is to find vulnerabilities, and if the report is being read by someone who is not a security specialist and doesn’t know what they’re looking for, then a vulnerability scan looks awfully similar.

The difference is vital. A good penetration test will often start with a vulnerability scan; a bad one stops there (aside from replacing the logo on the output). I’ve seen plenty of reports from budget pen tests which I could have spun out of Nessus or Qualys myself. This isn’t always the fault of the pen test company (although a good one should always advise customers of the requirements for an effective engagement) – if you’re given only a couple of days’ engagement for a black box test (one where you have minimal information about the target) then you aren’t going to get much more than a vulnerability scan out of it.

The problem comes when those short engagements, with pen testers not provided proper information, are counted as penetration tests for assurance purposes. They aren’t – they will not discover anything an automated scan doesn’t, and so the company is simply wasting money on security theatre. Either don’t pretend that a pen test has occurred, or engage a company fully and effectively to assess a system over a longer period, with all the information and documentation available internally. This is one of those cases where a budget solution is worse than none at all.

This isn’t intended to diminish vulnerability scanning – it’s a vital activity that everyone should carry out on a regular basis (near-realtime in my perfect world; in reality, often monthly) to find and deal with vulnerabilities. Ideally, these results can simply be handed over to a penetration testing company to save you a little money – pen testers are not the enemy, they are there to help you find holes. Giving them additional information means that time isn’t wasted looking for non-existent holes, and everyone gets better intelligence out at the end.
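To make the regular-scanning point concrete, here’s a minimal sketch of the kind of triage you might run over a scanner’s CSV export between scans. The column names and sample findings are illustrative assumptions – real exports from Nessus, Qualys and the like each have their own schemas.

```python
import csv
import io

# Hypothetical extract of a scanner's CSV export. The headers below are
# assumptions for illustration; real products use their own column names.
SCAN_EXPORT = """host,plugin,cvss,finding
10.0.0.5,12345,9.8,Remote code execution in example service
10.0.0.5,23456,5.3,Information disclosure
10.0.0.9,34567,7.5,Outdated TLS configuration
"""

def triage(report_csv, threshold=7.0):
    """Return findings at or above a CVSS threshold, highest score first."""
    rows = csv.DictReader(io.StringIO(report_csv))
    urgent = [r for r in rows if float(r["cvss"]) >= threshold]
    return sorted(urgent, key=lambda r: float(r["cvss"]), reverse=True)

for finding in triage(SCAN_EXPORT):
    print(f'{finding["host"]}: {finding["cvss"]} {finding["finding"]}')
```

The same filtered output is exactly the sort of thing worth handing to a pen test company up front, so their time goes on what the scanner can’t find.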

Mahmood, B. (2018) Vulnerability Scanning vs. Penetration Testing: What’s the Difference?, The State of Security. Available at: https://www.tripwire.com/state-of-security/vulnerability-management/difference-vulnerability-scanning-penetration-testing/ (Accessed: 8 October 2018).

Thinking Inside the Box – Insider Threat Modelling

Earlier this week I was presenting at Forum Events’ Security IT Summit on insider threat modelling (get the presentation title joke? I was rather proud of it), which seemed to go well. A few people asked for the slides, and I’m always happy to share something that’s been found useful – I just wanted to add some context as well.

Presenting at the IT Security Summit

For those who are not aware, threat modelling is simply an approach to making sure a system is secure as part of the design phase (and ideally through operational life and decommissioning) by anticipating possible threats or attacks. There’s a variety of methodologies, of varying complexity and suitable for different purposes, but they are all very focused on external attackers (this is not a bad thing – insider and external are different classes of threat, and a model attempting to address both would likely be spread thin). In the presentation I wanted to suggest the misty outlines of a way to start applying the same kind of modelling to insider threats, and to make the problem we’re trying to solve clear. Given the level of engagement and discussion that followed, and some of the feedback I received, I think it went well. On a related note, I now know how I’m going to be using my dissertation, given the opportunity.
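As a rough illustration of what methodology-driven enumeration looks like, here’s a toy sketch in the style of STRIDE (one of the better-known externally-focused methodologies). The component names and the naive components-times-categories checklist are illustrative assumptions, not any particular methodology’s prescribed process.

```python
# The six STRIDE threat categories (Microsoft's methodology).
STRIDE = [
    "Spoofing", "Tampering", "Repudiation",
    "Information disclosure", "Denial of service", "Elevation of privilege",
]

def enumerate_threats(components):
    """Pair every component with every category as a starting checklist.

    Real methodologies refine this (e.g. only some categories apply to
    some element types); the cross-product is just the naive first pass.
    """
    return [(component, threat) for component in components for threat in STRIDE]

# Hypothetical system decomposition for illustration.
model = enumerate_threats(["web front end", "API", "database"])
print(len(model))  # 3 components x 6 categories = 18 candidate threats
```

The interesting question from the presentation is what the equivalent category list and decomposition would look like when the attacker already sits inside the trust boundary.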

As always, commentary, feedback, or contacts more than welcome. Without further ado, onto the slides.

Thinking-Inside-the-Box

GDPR for Sole Traders – Controllers, Processors and ICO Registration

This entry is part 4 of 5 in the series GDPR for Sole Traders

This morning my other half mentioned a concern that’s been popping up about ICO registration fees for sole traders working with personal data under GDPR. Since there seems to be a lot of confusion (including from the ICO phone staff themselves, who have reportedly given varying answers to the same people) I thought I’d try to help. All the usual provisos apply, and I am working only from the information published by the ICO – simply summarising it and cutting out some of the parts I don’t think are relevant.

First of all, the way the ICO is funded and the requirements for notification/registration have changed. I won’t go into the old rules, but the new ones are quite simple – there is now a legal requirement for data controllers to pay a data protection fee to the ICO. These fees are not unreasonable, running from £40 to £2,900 per year depending on size and turnover, and you can get a £5 discount for paying by direct debit. Data processors do not need to pay this fee. There are certain other exemptions, but frankly they only complicate the issue for most sole traders; I will come back to them shortly. The core issue is the difference between controllers and processors – which is fundamentally quite simple, but poorly explained.

The definition is this – a data controller determines the purposes for which data is processed. Most sole traders and small businesses who work with personal data will not be controllers – in the vast majority of cases you will be fulfilling a commission or contract for a client, which means you are not determining the purpose of processing any data they have passed you. You still have the same duty to protect that data under the GDPR, of course, but you are not the data controller and so do not have to pay the data protection fee to the ICO. If, however, you are gathering the data, determining the purposes of processing and analysis, or anything similar, then you will either need to pay the fee or rely on one of the other exemptions.

A data processor meanwhile may process the data, but does not decide the purposes of processing. As an example – a translator asked to translate a CV by an individual or a company is the data processor, while the client would be the data controller. A data processor does not have to pay the ICO data protection fee.

What if I am a data controller?

If you are a data controller there are certain exemptions which might apply – for example, if your only purpose in processing data is staff administration, you are exempt from the fee. In general, though, you will need to pay it – the exemptions are clear and narrow in terms of the data you can process and the purposes of processing it. Unless everything you process (where you, not a client, are defining the purposes of processing) falls under the exemptions, you will need to pay the fee.

The exemptions are listed, clearly, in the ICO’s own literature:

  • Staff administration
  • Advertising, marketing and public relations
  • Accounts and records
  • Not-for-profit purposes
  • Personal, family or household affairs
  • Maintaining a public register
  • Judicial functions
  • Processing personal information without an automated system such as a computer

Take special note of that last one – the definition of an automated system the ICO is using here is different to the GDPR’s definition of automated processing. In essence, if you use a system with any level of automation at any point in the processing of data, then you do not fall under this exemption. Hopefully the others, and the reasons for them, are clear.

As always, feel free to send me any questions either here or on Twitter, and hopefully the picture is a little clearer for some of you now.

Security and Compliance

A common error is to think that because a system, solution, process, company, person, or other entity is not compliant with a particular standard, it’s insecure. This isn’t a grievous error, and it’s an understandable one which rarely causes harm (except occasionally prompting more investment in security, which is usually a good thing anyway).

The flip side of this is where the problem comes in – when the assumption is made that because an entity is compliant with a particular standard or standards it is secure.

The reason people often fall into this error is that compliance is straightforward (not necessarily easy): there’s a binary set of conditions – meet them and you’re compliant, fail any and you’re not. It’s a very easy idea to understand, and quite comforting to conflate with the idea of being secure – because compliance is achievable. Security is not.

Compliance is like the regular checkup with a doctor who confirms you’re in a fine state of health because you don’t have any symptoms on the checklist. A year later the heart attack hits – not only were the predictive symptoms missing from the list, but the doctor’s reassuring words convinced the patient they could be a little lazier and put less work into their fitness.

It isn’t that compliance and standards aren’t important – they are, very much so. It’s that they shouldn’t be the sole measure used to determine whether something is secure; in fact they shouldn’t be used to determine whether something is secure at all. Within days of a standard being written, it will have sections that are out of date. Security moves fast, and standards bodies have no real way to keep up without moving to a living-standards model – which is a completely different discussion.

Security is an aspiration, not a goal that can be reached through checklists. The checklists help, but the important aspects of genuinely continuously improving security are a company culture which takes security seriously, security subject matter experts always looking to make justified improvements to reduce risk, a high value placed on using actionable threat intelligence to anticipate and prevent incidents, and instilling a culture of security knowledge, engagement and awareness throughout the entirety of an organisation.

Or, to sum it all up with a nice little aphorism: we must be careful not to mistake the map for the territory.

Why Work in Cyber?

Occasionally I give presentations to students (and the general public) about different aspects of cyber security, including why to work in cyber. I thought sharing the presentations I build for these might be helpful. Source files and presentation notes can also be made available if requested.

This one’s targeted at sixth form students for an employability event, and is very much a personal view of why I think people are likely to be interested in a cyber security career (and why I would have been interested at the same age, had I known it was an option rather than looking at a Physics degree). After all, we need everyone we can get if anything’s going to improve.

Working-in-Cybersecurity

GDPR for Sole Traders – Contract Lawful Basis

This entry is part 3 of 5 in the series GDPR for Sole Traders

Under GDPR a lawful basis is the justification to process personal data. While I’ve focused mainly on consent, as the easiest to understand, the others are worth looking at – and contract may be a better fit for much of the work that sole traders carry out. First, a quick examination of what the lawful bases mean and how you can apply them.

Essentially a lawful basis defines the purpose for which you’re holding the data, and each basis provides a different set of rights for data subjects, to allow for the different usage. Consent, for example, allows subjects to request erasure, to object to the storage of their data or its processing for a specific purpose, and to demand that any data held on them be exported and provided in a portable format. Obviously these rights are not suitable for all purposes.

The contract lawful basis allows for data subjects to object, which means that they can ask for you to stop processing their data for particular purposes. They cannot ask that you erase the data, nor that it be exported in a portable format (or rather they can ask for these things but you have a legal right to refuse). If they do ask that you stop processing their data for a specific purpose then you must comply – unless you have compelling legitimate reasons to process the data, or are doing so to establish, exercise or defend a legal claim (i.e. if a client has refused to pay, you would not have to stop processing their data in order to pursue payment).

In order to establish contract as a lawful basis you need to:

  • assess the data you are holding (most likely not much more than contact details);
  • ensure you are only using it for that purpose (to negotiate or perform a contract);
  • ensure you do not hold it longer than necessary (once the contract is complete, if there are no legal reasons to hold onto it for a period of time, such as auditing or chasing payment, then you should delete it); and
  • record that you are holding it under that basis.

In most cases all you’d need is a footer on e-mails explaining that you will hold and use contact information in order to negotiate and complete a contract, and will retain it after the contract is complete for compliance with any applicable laws.

If you want to use those contact details for any marketing, either during or after the contract, then that is a separate purpose (arguably you might be able to rely on legitimate interest, but to be honest consent is far more suitable here) and you would need to inform the client and get their explicit consent. They would also need a simple method to object (e.g. “sign up to my mailing list for further details”, followed on each mailing by an e-mail address to contact or a link to click if they want to be removed).

As always, any questions please ask here, or via Twitter (I don’t mind dealing with private queries, but it’s easier to go through those two than answer the same questions several times). Also please remember if you have particular legal concerns then you should speak to an expert with insurance rather than leaning on this article for any legal opinion – I’m purely looking to clarify the available guidance and target it more appropriately at those who seem to have slipped under the ICO’s radar in terms of advice.

Password Vaults

I am very much not a fan of software or cloud-based password vaults. While I agree with the intention – allowing people to use long, complex, unique passwords for each site – it always feels like there’s a flaw in the implementation, or in the concept itself. Locking all of your highly secure passwords up behind a single (memorable) password just seems fundamentally flawed as an idea – particularly if you’re doing so online, where you have to trust others’ security. Even with a local vault on your machine, a lot of convenience is lost when you move to a different machine, and syncing copies to each one using OneDrive, Dropbox or similar raises the same issues as sticking the vault in the cloud.

One suggestion I’ve seen, and recommended in the past, goes against some of the fundamental advice: where the password isn’t that important, write it down and stick it in your wallet, glasses case, etc. You’ll know if it goes missing and can reset it or lock the account as soon as possible – and it means that long, complex passwords are usable.

Others include generating passwords using an algorithm that combines a core string with something about the site or system the password is for, so that each site’s password can be re-derived rather than remembered. The problem is that such algorithms are easily figured out from just a few data points, and there are enough site compromises that those data points will be out there.
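To see how few data points it takes, here’s a toy version of such a scheme. The derivation rule and the example passwords are entirely made up for illustration – the point is that any fixed rule leaks.

```python
# A toy "core string + site" derivation scheme, purely illustrative.
# This is a demonstration of the weakness, not a recommendation.
def derive(core, site):
    # Hypothetical rule: core secret, then the first three letters of
    # the domain capitalised, then a fixed symbol.
    return f"{core}-{site[:3].capitalize()}!"

# Two breached sites each leak one derived password:
leak_a = derive("hunter2", "example.com")   # "hunter2-Exa!"
leak_b = derive("hunter2", "mailhost.net")  # "hunter2-Mai!"

# An attacker comparing the two leaks sees the fixed core and the
# per-site rule, and can now derive the password for any other site:
guess = "hunter2-Ban!"  # their guess for bank.com
print(guess == derive("hunter2", "bank.com"))  # True
```

A real personal algorithm would be more elaborate, but the structure still falls out the same way once two or three outputs are side by side.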

So I’m using a different solution myself. To be very clear, there is no affiliate marketing here – I get no benefit whatsoever from recommending or reviewing this option; I simply think it is one of the better options out there.

I use a hardware password vault called the Mooltipass Mini (yes, the name was part of the reason I originally looked into it, for those who get the reference) – a little USB device plus a smart card. Companion software integrates smoothly into browsers for most logons, and the device itself functions largely as a USB keyboard for entering passwords. If I want to store a password, I click a little icon that appears in password fields, the domain is detected and saved, and the password goes onto the device. Retrieving a password is fairly automatic with websites (the software detects the domain, selects the needed account and passes it to the device, which then waits for approval before entering the password), and less clunky than the average software vault for anything else (select Login on the device, scroll to the appropriate account, click the scrollwheel and the password is entered).

Yes, it does work on phones with an appropriate cable – again fairly seamlessly. Without the smartcard you can’t retrieve the passwords, and there’s also a PIN for any manual credential management (editing passwords, adding credentials manually, etc). It’s also a rather fetching little thing. There are others out there, but this is the one I have, it works smoothly, the development team are serious about their security, and I highly recommend it to anyone who suffers from professional paranoia, or just doesn’t feel comfortable sticking the keys to their house in a key safe outside.

You can also back it up to your hard drive if absolutely necessary, or to a second device if you have one, so that dropping it down a drain doesn’t mean losing everything. Each comes with a spare smartcard, which can be used as a backup of the encryption key, to decrypt a backup, or for a second user of the device (with a different encryption key, obviously). It’s also relatively cheap at $79.

It will also work nicely with VeraCrypt for entering passphrases, so is a good way to maintain solidly encrypted partitions with different passphrases for different projects.