Why Work in Cyber Security? (Again)

Last year I gave a couple of presentations to some college students about why they really, really should look to work in cyber security, and how to get into it. That time’s rolled around again, and I realised I’d lost the original source document for the slides, so I had to do a quick update from scratch. This is the result.

Thinking Inside the Box – Insider Threat Modelling

Earlier this week I was presenting at Forum Events’ Security IT Summit on insider threat modelling (get the presentation title joke? I was rather proud of it), which seemed to go well. A few people asked for the slides, and I’m always happy to share something people have found useful; I just wanted to add some context as well.

Presenting at the Security IT Summit

For those who are not aware, threat modelling is simply an approach to making sure a system is secure as part of the design phase (and ideally through operational life and decommissioning) by anticipating possible threats or attacks. There’s a variety of methodologies of varying complexity, suited to different purposes, but they are all very focused on external attackers (this is not a bad thing – insider and external are different classes of threat, and a model attempting to address both would likely be spread thin). In the presentation I wanted to suggest the misty outlines of a way to start approaching the same kind of modelling against insider threats, and to make the problem we’re trying to solve clear. Given the level of engagement and discussion that followed, and some of the feedback I received, I think it went well. On a related note, I know how I’m going to be using my dissertation, given the opportunity.
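
To give a flavour of what I mean by starting to model insider threats – this is just an illustrative sketch, not any established methodology and not the approach from the slides – the starting point is pairing insider roles with the legitimate access they hold and the ways that access could be abused:

```python
# A rough sketch of enumerating insider threats against a system.
# The roles, abuses and mitigations below are illustrative examples only.

insider_threats = [
    {
        "role": "database administrator",
        "legitimate_access": "production customer database",
        "potential_abuse": "bulk export of customer records",
        "mitigations": ["query logging and review", "export volume alerts"],
    },
    {
        "role": "payroll clerk",
        "legitimate_access": "employee payment and banking details",
        "potential_abuse": "redirecting salary payments",
        "mitigations": ["dual authorisation for account changes"],
    },
]

for threat in insider_threats:
    print(f"{threat['role']}: {threat['potential_abuse']} "
          f"(mitigate with {', '.join(threat['mitigations'])})")
```

The point is not the format – a spreadsheet does the job – but that the abuse cases come from access people already legitimately have, which is exactly what external-attacker-focused models tend to skip over.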

As always, commentary, feedback, or contacts more than welcome. Without further ado, onto the slides.

Thinking-Inside-the-Box

Why Work in Cyber?

Occasionally I give presentations to students (and the general public) about different aspects of cyber security, including why to work in cyber. I thought sharing the presentations I build for these might be helpful. Source files and presentation notes can also be made available if requested.

This one’s targeted at 6th form students for an employability event, and is very much a personal view of why I think people are likely to be interested in a cyber security career (and why I would have been interested in it at the same age, had I known it was an option rather than looking at a Physics degree). After all, we need everyone we can get if anything’s going to improve.

Working-in-Cybersecurity

Password Vaults

I am very much not a fan of software or cloud-based password vaults. While I agree with the intention of allowing people to use long, complex, unique passwords for each site, it always feels like there’s a flaw in the implementation, or in the concept itself. Locking all of your highly secure passwords up behind a single (memorable) password just seems fundamentally flawed as an idea – particularly if you’re doing so online, where you have to trust someone else’s security. Even where it’s a local vault on your machine, a lot of convenience is lost when you move to a different machine, and having copies synced to each one using OneDrive, Dropbox or similar raises the same issue as sticking the vault in the cloud.

One suggestion I’ve seen, and recommended in the past, goes against some of the fundamental advice. Certainly where the password isn’t that important, write it down and stick it in your wallet, glasses case, etc. You’ll know if it goes missing and can make sure you reset it or lock the account as soon as possible – and it means that long and complex passwords are usable.

Others include generating passwords based on an algorithm involving a core string plus something about the site or system the password is for, so that each site’s password can be re-derived rather than memorised. The problem is that such algorithms are easily figured out with just a few data points, and there are enough site compromises that those data points will be out there.
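
To illustrate the weakness (this is a made-up scheme, not one I’ve seen anyone use, but it has the shape these recipes tend to have), two passwords leaked in separate breaches are enough to reverse the whole thing:

```python
# A hypothetical "core string plus site" password scheme, and why it fails.

def derive_password(core: str, site: str) -> str:
    """Build a per-site password from a memorable core and the site name."""
    prefix = site.split(".")[0][:3].upper()   # first three letters of the domain
    return f"{core}{prefix}{len(site)}"       # core + prefix + domain length

# Two passwords exposed in separate site compromises:
leaked = {
    "example.com": derive_password("Tr0ub4dor!", "example.com"),
    "github.com":  derive_password("Tr0ub4dor!", "github.com"),
}
print(leaked)
# {'example.com': 'Tr0ub4dor!EXA11', 'github.com': 'Tr0ub4dor!GIT10'}
#
# Comparing the two values, an attacker can read off the shared core and the
# per-site rule directly, and can now predict the password for any other site.
```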

So I’m using a different solution myself. To be very clear, there is no affiliate marketing here and I get no benefit whatsoever from recommending or reviewing this option; I simply think it is one of the better options out there.

I use a hardware password vault called the Mooltipass Mini (yes, the name was part of the reason I originally looked into it, for those who get the reference), which is a little USB device paired with a smart card. Companion software integrates smoothly into browsers for most logins, and the device itself functions largely as a USB keyboard for entering passwords. Essentially, if I want to store a password I click a little icon that appears in password fields, the domain is detected and saved, and the password goes into the device. If I want to retrieve a password it’s fairly automatic with websites (the software detects the domain, selects the needed account and passes the request to the device, which then waits for approval before entering the password), and less clunky than the average software vault for anything else (select Login on the device, scroll to the appropriate account, click the scrollwheel and the password is entered).
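
The part that matters is the approval step. This is not the real Mooltipass firmware or browser-extension API – just a toy sketch, with hypothetical names throughout, of the flow described above:

```python
# Toy model of a hardware vault that only releases a password after the user
# physically approves the request on the device. Illustrative only.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class StoredCredential:
    domain: str
    username: str
    password: str

class HardwareVault:
    def __init__(self, credentials: List[StoredCredential]):
        self._credentials = credentials  # held on the device, not the host

    def request_password(self, domain: str, username: str,
                         user_approves: bool) -> Optional[str]:
        match = next((c for c in self._credentials
                      if c.domain == domain and c.username == username), None)
        if match is None:
            return None
        # The crucial step: nothing leaves the device without a button press.
        if not user_approves:
            return None
        return match.password  # typed out as if from a USB keyboard

vault = HardwareVault([StoredCredential("example.com", "alice", "s3cret")])
print(vault.request_password("example.com", "alice", user_approves=True))
```

Malware on the host can ask for a credential, but it can’t press the button, which is the property a purely software vault can’t give you.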

Yes, it does work on phones with an appropriate cable – again fairly seamlessly. Without the smartcard you can’t retrieve the passwords, and there’s also a PIN for any manual credential management (editing passwords, adding credentials manually, etc.). It’s also a rather fetching little thing. There are others out there, but this is the one I have; it works smoothly, the development team are serious about their security, and I highly recommend it to anyone who suffers from professional paranoia, or just doesn’t feel comfortable sticking the keys to their house in a key safe outside.

You can also back it up to your hard drive if absolutely necessary, or to a second device if you have one, so that dropping it down the drain does not mean losing everything. Each device comes with a spare smartcard, which can be used as a backup of the encryption key, to decrypt a backup, or for a second user of the device (with a different encryption key, obviously). It’s also relatively cheap at $79.

It will also work nicely with VeraCrypt for entering passphrases, making it a good way to maintain solidly encrypted partitions with different passphrases for different projects.

Security by Obscurity in Policies and Procedures

In policy management, I’ve noticed a problem of paranoia and an attachment to security by obscurity. There is a common idea that keeping internal business policies and procedures secret makes them more secure. I should note that I’m a big fan of properly applied paranoia, but I think that here the policy management people are behind their more technical counterparts in terms of security methods.

Security by Obscurity

Security by obscurity is an old and much-ridiculed concept, which has given us some of the most damaging security failures we will ever see. It is the idea that keeping your security measures secret from the enemy will somehow make them safer. There is some truth to this if you can absolutely guarantee that the secret will be kept.

Of course, if you can guarantee that a secret will be kept from all potential attackers then you’ve already got a perfect security system, and I’d really like to buy it from you.

To put it simply, if your non-technical security relies on keeping secrets (not secrets such as passwords and similar, which are a different thing) then it is incredibly fragile – not only because it is untested outside your own community (and therefore reviewed only by people who have a vested interest in not discovering holes), but because it is simply not possible to keep such a secret effectively against a determined attacker. Since policies are published internally it’s almost impossible to assure their secrecy, and you will have no idea when they leak.

Opening up policy

Unfortunately, this attitude seems widespread enough that there won’t be any change soon – but there could be. If companies were simply to open up and discuss their policies – even without identifying themselves – then it would help. The idea is to look at an organisation’s policies and procedures and spot the gaps – the areas they don’t cover. Which areas are left unmonitored and unaudited due to historical oversights? Which people don’t have the right procedural guidance to protect them against leaking information to an anonymous caller or customer?

These are things that people might not notice within a company, or might notice, grumble about, and point at someone else to fix. Assuring ourselves that our policies are secret and so no one will exploit these vulnerabilities (and yes, they are just as much vulnerabilities as software and hardware ones – just because wetware is squishier doesn’t mean that each node or the network as a whole can’t be treated in a similar way) is exactly the same as keeping our cryptographic algorithm proprietary and secret – all it assures is that no one will know if it’s broken, and when (inevitably) the security holes are discovered people will lose faith, not to mention the damage from exploitation of the vulnerability itself.

At the very least we should look at ways to collaborate on policies. I can see meetings where subject matter experts from different, but related, companies get together to discuss policies – anonymously or not (the Chatham House Rule, anyone?) – and find the mutual holes.

Essentially we need to start thinking about security policies and procedures the same way we think about software – and they really aren’t that different. The main difference is that wetware is a much more variable and error-prone computational engine, particularly when networked in a corporation, but then again I’ve seen plenty of unpredictable behaviour from both hardware and software.

Basically, if we’re ever going to beat the bad guys, particularly the ones who can socially engineer, we need to be more open with each other about what we’re doing – not simply saying ‘here’s everything’, but genuinely engaging and discussing strategies. In this scenario there are definitely two sides, and those of us on the good guys’ team need to start working together better on security, despite any rivalries, mistrust or competition. Working together, openly, on security benefits us all, and if we do it in good faith, building trust as we go, it will only hurt the bad guys.

Security Training and Ebbinghaus’ Forgetting Curve

There is a problem with training users outside the security field. People forget things, so no matter how much we might try to teach them about warning signs there is always a limit. For those who work in security, it is sometimes hard to relate to the problems people outside our domain might have with retaining training. Not only do we tend to take various points as common sense, but we also forget how difficult it is for others to understand deliberate breaches of the social contract.

In addition, if we’re completely honest with ourselves, we forget to check things as well – it’s just that regular exposure to incidents keeps our awareness up and means these points get thoroughly burned in.
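
For context, the forgetting curve in the title is usually approximated as an exponential decay of retention over time; the exact parameterisation varies between sources, but a common form is:

```latex
% A common approximation of the Ebbinghaus forgetting curve:
% R is retention, t is time since learning, and S is the relative
% strength of the memory (larger S means slower forgetting).
R(t) = e^{-t/S}
```

The practical implication for security training is that without reinforcement, retention drops off quickly, which is why a one-off annual awareness session tends to fade long before the next one arrives.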