Thinking Inside the Box – Insider Threat Modelling

Earlier this week I was presenting at Forum Events’ Security IT Summit on insider threat modelling (get the presentation title joke? I was rather proud of it), which seemed to go well. A few people asked for the slides, and I’m always happy to share something that people have found useful – I just wanted to add some context as well.

Presenting at the Security IT Summit

For those who are not aware, threat modelling is simply an approach to making sure a system is secure as part of the design phase (and ideally through operational life and decommissioning) by anticipating possible threats or attacks. There are a variety of methodologies of varying complexity, suited to different purposes, but they are all very focused on external attackers (this is not a bad thing – insider and external are different classes of threat, and a model attempting to address both would likely be spread thin). In the presentation I wanted to suggest the misty outlines of a way to start applying the same modelling to insider threats, and to make the problem we’re trying to solve clear. Given the level of engagement and discussion that followed, and some of the feedback I received, I think it went well. On a related note, I know how I’m going to be using my dissertation, given the opportunity.

As always, commentary, feedback, or contacts more than welcome. Without further ado, onto the slides.

Thinking-Inside-the-Box

Security and Compliance

A common error is to think that because a system, solution, process, company, person, or other entity is not compliant with a particular standard, it’s insecure. It’s not a grievous error, and it’s an understandable one to make which doesn’t cause much harm (except occasionally prompting more investment in security, which is usually a good thing anyway).

The flip side of this is where the problem comes in – when the assumption is made that because an entity is compliant with a particular standard or standards it is secure.

The reason people often fall into this error is that compliance is straightforward (not necessarily easy): there is a binary set of conditions to meet in order to be compliant, and if you fail any of them you’re not compliant. It’s a very easy idea to understand, and quite comforting to conflate with the idea of being secure – because compliance is achievable. Security is not.

To borrow an analogy from medicine, compliance is the regular checkup with the doctor who confirms that you’re in a fine state of health, because you don’t have any symptoms on the checklist. A year later the heart attack hits – not only were the predictive symptoms missing from the list, but the doctor’s reassuring words convinced the patient they could be a little lazier and put less work into their fitness.

It isn’t that compliance and standards aren’t important – they are, very much so. It’s that they should not be the sole measure used to determine whether or not something is secure. In fact they shouldn’t be used to determine whether something is secure at all. Within days of a standard being written, it will have sections that are out of date. Security moves fast, and standards bodies have no real way to keep up without moving to a living standards model – which would be a completely different discussion.

Security is an aspiration, not a goal that can be reached through checklists. The checklists help, but the important aspects of genuinely continuously improving security are a company culture which takes security seriously, security subject matter experts always looking to make justified improvements to reduce risk, a high value placed on using actionable threat intelligence to anticipate and prevent incidents, and instilling a culture of security knowledge, engagement and awareness throughout the entirety of an organisation.

Or to sum it all up with a nice little aphorism, we must be careful not to confuse the map for the territory.

GDPR for Sole Traders – Contract Lawful Basis

This entry is part 3 of 5 in the series GDPR for Sole Traders

Under GDPR a lawful basis is the justification to process personal data. While I’ve focused mainly on consent, as the easiest to understand, the others are worth looking at – and contract may be a better fit for much of the work that sole traders carry out. First, a quick examination of what the lawful bases mean and how you can apply them.

Essentially a lawful basis defines the purpose for which you’re holding the data, and each basis provides a different set of rights for data subjects to allow for the different usage. Consent, for example, allows subjects to request erasure, to object to the storage of their data or its processing for a specific purpose, and to demand that any data held on them be exported and provided in a portable format. Obviously these rights are not suitable for all purposes.

The contract lawful basis allows data subjects to object, which means they can ask you to stop processing their data for particular purposes. They cannot ask that you erase the data, nor that it be exported in a portable format (or rather, they can ask for these things, but you have a legal right to refuse). If they do ask that you stop processing their data for a specific purpose then you must comply – unless you have compelling legitimate reasons to process the data, or are doing so to establish, exercise or defend a legal claim (e.g. if a client has refused to pay, you would not have to stop processing their data in order to pursue payment).

In order to establish contract as a lawful basis you need to:

  • assess the data you are holding (most likely little more than contact details)
  • ensure you are only using it for that purpose (to negotiate or perform a contract)
  • ensure you do not hold it longer than necessary (once the contract is complete, if there are no legal reasons to hold onto it for a period of time, such as auditing or chasing payment, then you should delete it)
  • record that you are holding it under that basis

In most cases all you’d need would be a footer on e-mails explaining that you will hold and use contact information in order to negotiate and complete a contract, and will retain it for compliance with any applicable laws after the contract is complete.

If you want to use those contact details for any marketing, either during or after the contract, that would be a separate purpose (arguably you might be able to go for legitimate interest, but to be honest consent is far more suitable here) and you would need to inform the client and get their explicit consent. They would also need to be provided with a simple method to object (e.g. ‘sign up to my mailing list for further details’, followed on each mailing by an e-mail address to contact or a link to click if they want to be removed).

As always, any questions please ask here, or via Twitter (I don’t mind dealing with private queries, but it’s easier to go through those two than answer the same questions several times). Also please remember if you have particular legal concerns then you should speak to an expert with insurance rather than leaning on this article for any legal opinion – I’m purely looking to clarify the available guidance and target it more appropriately at those who seem to have slipped under the ICO’s radar in terms of advice.

Password Vaults

I am very much not a fan of software or cloud-based password vaults. While I agree with the intention – allowing people to use long, complex, unique passwords for each site – it always feels like there’s a flaw in the implementation, or in the concept itself. Locking all of your highly secure passwords up behind a single (memorable) password just seems fundamentally flawed as an idea – particularly if you’re doing so online, where you have to trust others’ security. Even where it’s a local vault on your machine, a lot of convenience is lost when you move to a different machine, and having copies synced to each one using OneDrive, Dropbox or similar raises the same issue as sticking the vault in the cloud.

One suggestion I’ve seen, and recommended in the past, goes against some of the fundamental advice: where the password isn’t that important, write it down and stick it in your wallet, glasses case, etc. You’ll know if it goes missing and can make sure you reset it or lock the account as soon as possible – and it means that long and complex passwords are usable.

Others include generating passwords based on an algorithm involving a core string and something about the site or system the password is for. That way a unique password can be re-derived for each site rather than memorised. The problem here is that such algorithms are easily figured out with just a few data points, and there are enough site compromises that those data points will be out there.
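To illustrate why such schemes are fragile, here is a sketch of the kind of derivation described above. The algorithm itself is entirely hypothetical – the point is that any scheme of this shape leaks its pattern as soon as a couple of its outputs appear in breach dumps:

```python
# Hypothetical example of an algorithmic password scheme: a memorised
# core string combined with characters derived from the site name.
# Any such scheme is only as secret as its pattern, and the pattern is
# obvious once two or three generated passwords leak in site breaches.

def derive_password(core: str, site: str) -> str:
    """Combine a memorised core string with site-derived characters."""
    tag = site.split(".")[0]  # e.g. "example" from "example.com"
    return f"{core}-{tag[:3].capitalize()}{len(tag)}"

# Two breached passwords are enough to reveal the whole pattern:
print(derive_password("Tr0ub4dor", "example.com"))  # Tr0ub4dor-Exa7
print(derive_password("Tr0ub4dor", "github.com"))   # Tr0ub4dor-Git6
```

An attacker holding both leaked values can read off the core string and the site-derived suffix at a glance, and then generate the password for any other site.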

So I’m using a different solution myself. To be very clear: there is no affiliate marketing here, and I get no benefit whatsoever from recommending or reviewing this option. I simply think it is one of the better options out there.

I use a hardware password vault called the Mooltipass Mini (yes, the name was part of the reason I originally looked into it, for those who get the reference) – a little USB device plus a smart card. Companion software integrates smoothly into browsers for most logons, and the device itself functions largely as a USB keyboard for entering passwords. Essentially, if I want to store a password I click a little icon that appears in password fields, the domain is detected and saved, and the password goes into the device. Retrieving a password is fairly automatic with websites (the software detects the domain, selects the needed account and passes it to the device, which then waits for approval before entering the password), and less clunky than the average software vault for anything else (select Login on the device, scroll to the appropriate account, click the scrollwheel and the password is entered).

Yes, it does work on phones with an appropriate cable – again fairly seamlessly. Without the smartcard you can’t retrieve the passwords, and there’s also a PIN for any manual credential management (editing passwords, adding credentials manually, etc). It’s also a rather fetching little thing. There are others out there, but this is the one I have, it works smoothly, the development team are serious about their security, and I highly recommend it to anyone who suffers from professional paranoia, or just doesn’t feel comfortable sticking the keys to their house in a key safe outside.

You can also back it up to your hard drive if absolutely necessary, or to a second device if you have one, so that dropping it down a drain does not mean losing everything. Each device comes with a spare smartcard, which can be used as a backup of the encryption key, to decrypt a backup, or for a second user of the device (with a different encryption key, obviously). It’s also relatively cheap at $79.

It will also work nicely with VeraCrypt for entering passphrases, so is a good way to maintain solidly encrypted partitions with different passphrases for different projects.

GDPR for Sole Traders – Brief Summary

This entry is part 1 of 5 in the series GDPR for Sole Traders

There is a lot of advice floating around about the GDPR – for everyone from large enterprises to small businesses. What I haven’t seen much of (though I have seen some) is an attempt to relate it to self-employed or freelance individuals, sole traders, and small partnerships. Recently my wife came across some questions about it in one of her social media translator groups, where there were concerns about how it will impact these groups.

The advice for SMEs and larger is generally very official, designed for consumption by the legal and technical teams of a company. That’s great for a company which has those, but doesn’t apply so much to a sole trader who doesn’t specialise in law and IT. So I offered to summarise as best I could, and translate things a little. Before I do that I need to make a couple of things clear:

  1. This post aims to make the GDPR understandable for sole traders and other small businesses. I do not aim to present or suggest solutions here, simply to help ensure that the responsibilities and duties are understood appropriately.
  2. This is not official advice from a lawyer or similar and you should not lean on it if you have a particularly complex situation, instead consult a professional who is willing to put official advice in writing and has liability insurance.
  3. This first post is a brief summary – I will be skimming over the legislation, as I don’t yet know which questions are most pertinent to go into in depth. If you have any questions please get in touch, or post a comment, and I’ll happily elaborate on any areas of confusion or concern.

Principles of the GDPR

The GDPR is based around a number of principles, largely building and elaborating on existing principles in data protection legislation.

  • Lawfulness, fairness and transparency: treat personal data according to the law, fairly, and be open and transparent about your usage of it with the subjects of the data
  • Purpose limitation: only collect and use data for clearly defined purposes, do not use it for anything outside of those purposes (transparency applies here, as the data subjects must be informed of the purposes)
  • Data minimisation: you should only keep the most limited data you can which is adequate and relevant to your purposes in processing
  • Accuracy: personal data you hold must be accurate, and where necessary kept up to date
  • Storage limitation: this is more an extension from data minimisation – you must not hold data in a form that allows identification of subjects longer than necessary for your stated purposes, note that this permits anonymisation of data for further storage and processing
  • Integrity and confidentiality: make sure that the data is secure, using (and this is the key word) appropriate technical or organisational measures
  • Accountability: the data controller (different from the data processor, we’ll get to that) is responsible for and must be able to demonstrate compliance with the GDPR legislation

GDPR Data Subjects, Processors and Controllers

The other three key terms that come up in the GDPR are about who it applies to. Here we have data subjects, processors, and controllers. Subjects is fairly clear: it refers to the subjects of the data being gathered, who have rights to control that data – specifically the right to be informed of any data held on request, and the right to have it shown to them or removed on request. Processors and controllers are where some confusion comes in, but the distinction is quite simple. The controller is the one who decides what the data is to be used for, while the processor is the one who carries out those uses. For sole traders and other small businesses these are almost certain to be the same person – although there will be cases where they are not (where your business is about providing analytical resource and expertise to third parties, for example).

Contractual Basis for Processing

I’m going to end with this section, for now, as it is the area which seems to raise the most concern. As mentioned if anyone has questions around other areas, please get in touch or comment and I’ll happily expand this series to address them, but this I really wanted to cover.

One of the most common questions I have heard is whether the GDPR will interfere with normal business, in particular starting a new business relationship with someone. Do you need to e-mail them to get informed consent, keep them notified that you’ll be storing their contact details in your address book/CRM, and so on? Marketing is a bigger, separate subject, so if it’s asked about I’ll go into it in a separate post.

Essentially if you are holding and processing someone’s personal data for the purposes of fulfilling an agreed contract and/or agreeing a contract then you have a lawful basis for processing. Contract here refers to any agreement that meets the terms of contract law (not necessarily written down, but means that terms have been offered and accepted, you both intend for the terms to be binding, and there is an element of exchange).

The good news is that this will apply if you are establishing a relationship and contract with a new client. It will also apply if, as part of the contract, they are asking you to process their personal data (e.g. getting a personal document translated into another language). What is important is that when you obtain the data you ensure the client/customer/third party is informed of how you will be using it, why you will be using it, and how long you intend to keep hold of it.

Basically a lot of the GDPR boils down to being transparent with people about what you are doing with their data, taking responsibility and accountability for data you hold, and acting in a reasonable manner. If you are clear and open with customers about what you will do with their data (and remember, GDPR is opt-in, not opt-out, so consent must be informed and documented), and keep track of the data assets you hold, then you should be fine. There’s a lot of use in the legislation of ‘reasonable’ measures, views, expectations and so on. Essentially, if someone contacts you in order to set up a contract and do business with you – particularly if the only personal data you obtain are contact details – you will need to take reasonable security measures to secure those contact details, and not misuse them to send out marketing e-mails without the explicit consent of the data subject.

Treat others’ data as you’d expect a responsible business to treat yours, and you should be fine. If you have questions around other aspects, or around more practical security matters for a sole trader or small business (encryption, systems to use, what counts as ‘secure’), then please do leave a comment or get in touch directly.

I’ve been advised to add a small note around some specific scenarios:

  • if you are working with a document with a signature, even if there is no other personal data, then once you are finished with the document you should destroy it and/or ensure that the signature is unreadable (there may be reasons to keep the document, in which case it should be fully anonymised if you have no need of the data)
  • if you keep contact details (you do) then you should ensure that people are informed you are keeping them, and make sure they have a way to ask for the data you hold on them, ask for changes, ask for it to be erased, ask you to restrict the purposes you use it for (with contact details this essentially maps to asking for it to be erased I imagine), ask for an electronic copy, and object to you holding it – this is a large area, and I suspect will involve a separate post to cover fully
  • having a password on your computer is not a way to protect your data from theft unless you have also set up encryption, or are storing all data on the cloud rather than the local machine, or other circumstances which may apply – if I get some questions on this then I’ll go into more detail, but it’s fair to say that whole-disk encryption along with a password is a reasonable option, while an unencrypted disk is not


Micro Wireless Network Performance Lab

I’m currently studying for a postgrad in Cyber Security, and have had a practical assignment come up. Obviously I won’t be going into the details of the assignment here, but it’s not unreasonable to say that it involves measuring wireless network performance and throughput under a selection of different conditions.

The original labs for this were done during the lecture, so I have a set of results which I could use. However, there were too many variables for me to feel comfortable with those results (multiple students connecting at different times during traffic analysis, above anything else), and we only took a limited number of readings for each environment, so I’ve decided that I can do better on my own, and to build my own lab for the purpose. More importantly, I can automate it, so that the extent of my effort in collecting the results is reduced to kicking it off overnight and starting analysis in the morning.

So firstly there’s the shopping list, which is mercifully short and low-budget:

  1. A desktop or laptop PC with enough grunt to run a couple of VMs, a couple of network ports, and as many USB ports as possible (that last one’s important).
  2. Two USB wifi dongles capable of running in monitor mode – the standard in this area seems to be the TP-LINK TL-WN722N (currently around £12.09 from Amazon), I keep a few of these lying around because they keep coming in handy.
  3. A router which can support different network and security protocols, and can run an SSH server for configuration. For routers I pretty much always go with the GL.iNet range, again because I have a few lying around for different projects.
  4. Some network and USB cables.

The aim is to minimise the amount of running around required, and the requirement to have multiple physical systems involved. The steps to create the lab were also pretty easy:

  1. On the physical machine, make sure the hypervisor will let you assign a wireless dongle to each of the two VMs – this one took me some headbanging to sort out, dealing with issues around how Hyper-V handles network cards, and others around VirtualBox’s handling of USB dongles. The final solution was to handle the networking side in the host OS, and just assign the dongles as network connections to the guest Ubuntu machines via VirtualBox
  2. Create two VMs (I’m planning to add a third for management purposes, so a whole suite of tests can be scripted to run overnight, but this is a start) and drop an OS onto them (after fun trying to get Atheros drivers working on Linux I ended up with an older Ubuntu distribution, which worked just fine)
  3. Connect the router with a network cable, and the two USB dongles with extension cables, to your host machine, for now the two VMs can share the host machine’s network connection (for software downloading purposes)
  4. Install iperf on each of the guest machines, and set up the wireless network as desired for testing
  5. Switch the network connections for the two guests over to one wifi dongle each, connect them to the access point via the host system, and check that they can talk to each other happily
  6. Run iperf in server mode on your server guest VM, and in client mode on the client, be stunned at the variance of wifi signals with very little change in environmental conditions

So now it’s running through the first set of tests, followed by a reconfiguration of the access point, and repeat. Next step once that’s all done is to play around with scripting until I can just give it a specification to test out for a while, and leave it running (I’m curious about things like different key lengths, window sizes, and so on and whether they’ll have an impact). I’ll put the script together and run through it next time, and after that will take a look at various methods for intercepting, cracking and spoofing wireless networks, by extending the lab capabilities.
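The scripting step above could be sketched roughly as follows. This is an illustrative outline, not the script I ended up with: the server address, run counts and durations are placeholder values, and it assumes the classic iperf2 client with its `-y C` flag, which emits one CSV line per test with bits-per-second as the final field:

```python
# Rough sketch of automating the iperf test runs: invoke the client
# repeatedly against the server VM and collect throughput figures for
# later analysis. Placeholder values throughout -- adjust for your lab.
import subprocess


def parse_iperf_csv(line: str) -> float:
    """Extract bits-per-second from one line of iperf2 '-y C' CSV output.

    The CSV report ends with transferred bytes and bits/second, so the
    throughput is the last comma-separated field.
    """
    fields = line.strip().split(",")
    return float(fields[-1])


def run_tests(server_ip: str, runs: int = 10, secs: int = 30) -> list:
    """Run the iperf client `runs` times and return the measured bps values."""
    results = []
    for _ in range(runs):
        out = subprocess.run(
            ["iperf", "-c", server_ip, "-t", str(secs), "-y", "C"],
            capture_output=True, text=True, check=True,
        ).stdout
        results.append(parse_iperf_csv(out.splitlines()[-1]))
    return results


if __name__ == "__main__":
    # Hypothetical server VM address on the test wifi network.
    for bps in run_tests("192.168.1.10"):
        print(f"{bps / 1e6:.1f} Mbit/s")
```

From there it is a small step to loop over access point configurations (reconfigured over SSH) and dump each batch of results to a file, which is the overnight automation I’m aiming for.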

Security by Obscurity in Policies and Procedures

In policy management, I’ve noticed a problem of paranoia and an attachment to security by obscurity. There is a common idea that keeping internal business policies and procedures secret makes them more secure. I should note that I’m a big fan of properly applied paranoia, but I think that here the policy management people are behind their more technical counterparts in terms of security methods.

Security by Obscurity

Security by obscurity is an old and much-ridiculed concept, which has given us some of the most damaging security failures we will ever see. It is the idea that keeping your security measures secret from the enemy will somehow make them safer. There is some truth to this if you can absolutely guarantee that the secret will be kept.

Of course, if you can guarantee that a secret will be kept from all potential attackers then you’ve already got a perfect security system, and I’d really like to buy it from you.

To put it simply, if your non-technical security relies on keeping secrets (not secrets such as passwords and similar, which are a different thing) then it is incredibly fragile. Not only is it untested outside your own community (and therefore only examined by people who have a vested interest in not discovering holes), but it is simply not possible to effectively keep such a secret against a determined attacker. Since policies are published internally it’s almost impossible to assure their secrecy, and you will have no idea when they leak.

Opening up policy

Unfortunately, this attitude seems widespread enough that there won’t be any change soon – but there could be. If companies were simply to open up and discuss their policies – even without identifying themselves – then it would help. The idea is to look at an entity’s policies and procedures and spot the gaps they don’t cover. Which areas are left unmonitored and unaudited due to historical oversights? Which people don’t have the right procedural guidance to protect them against leaking information to an anonymous caller or customer?

These are things that people might not notice within a company, or might notice, grumble about, and point at someone else to fix. Assuring ourselves that our policies are secret and so no one will exploit these vulnerabilities (and yes, they are just as much vulnerabilities as software and hardware ones – just because wetware is squishier doesn’t mean that each node or the network as a whole can’t be treated in a similar way) is exactly the same as keeping our cryptographic algorithm proprietary and secret – all it assures is that no one will know if it’s broken, and when (inevitably) the security holes are discovered people will lose faith, not to mention the issue of damage from exploitation of the vulnerability itself.

At the very least we should look at ways to collaborate on policies. I can see meetings where subject matter experts from different, but related, companies get together to discuss policies – anonymously or not (Chatham House Rule, anyone?) – and find the mutual holes.

Essentially we need to start thinking about security policies and procedures the same way we think about software – and they really aren’t that different. The main difference is that wetware is a much more variable and error-prone computational engine, particularly when networked in a corporation, but then again I’ve seen plenty of unpredictable behaviour from both hardware and software.

Basically, if we’re ever going to beat the bad guys, particularly the ones who can socially engineer, we need to be more open with each other about what we’re doing. Not simply saying ‘here’s everything’, but genuinely engaging and discussing strategies. In this scenario there are definitely two sides, and those of us on the good guys’ team need to start working together better on security, despite any rivalries, mistrust or competition. Working together, openly, on security benefits us all, and if we do it in good faith, building trust, then it will only hurt the bad guys.

Security Training and Ebbinghaus’ Forgetting Curve

There is a problem with training users outside of the security field. People forget things, so no matter how much we might try to teach them about warning signs there is always a limit. For those who work in security, it is sometimes hard to relate to the problems people outside our domain might have with retaining training. Not only do we tend to take various points as common sense, but we also forget how difficult it is to understand deliberate breaches of the social contract.

In addition, if we’re completely honest with ourselves, we forget to check things as well – it’s just that the regular exposure to incidents that increase awareness means that these points get thoroughly burned in.

Teaching Cryptography with Lockpicking

I want to look at ways to relate physical models to core security concepts such as cryptography, to raise awareness and understanding of security among those who are less technical. As part of this I’ll be looking at mapping lock picking to power analysis, a technique for discovering cryptographic secrets by monitoring power consumption in hardware.

Mechanical lock picking is an obvious starting point for this form of teaching, since the terminology carries across reasonably well (or at least everyone understands keys) to digital lock picking. The first problem is that while it is simple enough to come up with a lockbox analogy for symmetric and asymmetric encryption, the single pin picking of locks doesn’t map across quite so well.