In policy management, I’ve noticed a problem of paranoia and an attachment to security by obscurity. There is a common belief that keeping internal business policies and procedures secret makes them more secure. I should note that I’m a big fan of properly applied paranoia, but I think that here the policy management people are behind their more technical counterparts in terms of security methods.
Security by Obscurity
Security by obscurity is an old and much-ridiculed concept, which has given us some of the most damaging security failures we will ever see. It is the idea that keeping your security measures secret from the enemy will somehow make them safer. There is some truth to this if you can absolutely guarantee that the secret will be kept.
Of course, if you can guarantee that a secret will be kept from all potential attackers then you’ve already got a perfect security system, and I’d really like to buy it from you.
To put it simply, if your non-technical security relies on keeping secrets (not secrets such as passwords, which are a different thing entirely), then it is incredibly fragile. Not only is it untested outside your own community (and therefore tested only by people who have a vested interest in not discovering holes), but it is simply not possible to keep such a secret from a determined attacker. Since policies are published internally, it’s almost impossible to assure their secrecy, and you will have no idea when they leak.
Opening up policy
Unfortunately, this attitude seems widespread enough that there won’t be any change soon – but there could be. If companies were simply to open up and discuss their policies – even without identifying themselves – then it would help. The idea is to look at an organisation’s policies and procedures and spot the gaps they don’t cover. Which areas are left unmonitored and unaudited due to historical oversights? Which people don’t have the right procedural guidance to protect them against leaking information to an anonymous caller or customer?
These are things that people might not notice within a company, or might notice, grumble about, and point at someone else to fix. Assuring ourselves that our policies are secret, and that therefore no one will exploit these vulnerabilities (and yes, they are just as much vulnerabilities as software and hardware ones – just because wetware is squishier doesn’t mean that each node, or the network as a whole, can’t be treated in a similar way), is exactly the same as keeping a cryptographic algorithm proprietary and secret. All it assures is that no one will know when it’s broken, and when (inevitably) the security holes are discovered, people will lose faith – not to mention the damage from exploitation of the vulnerability itself.
At the very least we should look at ways to collaborate on policies. I can see meetings where subject matter experts from different, but related, companies get together to discuss policies – anonymously or not (the Chatham House Rule, anyone?) – and find the mutual holes.
Essentially we need to start thinking about security policies and procedures the same way we think about software – and they really aren’t that different. The main difference is that wetware is a much more variable and error-prone computational engine, particularly when networked in a corporation, but then again I’ve seen plenty of unpredictable behaviour from both hardware and software.
Basically, if we’re ever going to beat the bad guys, particularly the ones who can socially engineer, we need to be more open with each other about what we’re doing. Not simply saying ‘here’s everything’, but genuinely engaging and discussing strategies. In this scenario, there are definitely two sides, and those of us on the good guys’ team need to start working together better on security, despite any rivalries, mistrust or competition. Working together, openly, on security benefits us all, and if we do it in good faith, building trust as we go, it will only hurt the bad guys.