Micro Wireless Network Performance Lab

I’m currently studying for a postgrad in Cyber Security, and have had a practical assignment come up. Obviously I won’t be going into the details of the assignment here, but it’s not unreasonable to say that it involves measuring wireless network performance and throughput under a selection of different conditions.

The original labs for this were done during the lecture, so I have a set of results I could use. However, those results involved too many variables for me to feel comfortable (multiple students connecting at different times during traffic analysis, above anything else), and we only took a limited number of readings for each environment, so I've decided I can do better on my own and build my own lab for the purpose. More importantly, I can automate it, so that the extent of my effort in collecting the results is reduced to kicking it off overnight and starting analysis in the morning.

So firstly there’s the shopping list, which is mercifully short and low-budget:

  1. A desktop or laptop PC with enough grunt to run a couple of VMs, a couple of network ports, and as many USB ports as possible (that one's important).
  2. Two USB wifi dongles capable of running in monitor mode – the standard in this area seems to be the TP-LINK TL-WN722N (currently around £14.54 from Amazon). I keep a few of these lying around because they keep coming in handy.
  3. A router which can support different network and security protocols, and can run an SSH server for configuration. For routers I pretty much always go with the GL.iNet range, again because I have a few lying around for different projects (out of stock from Amazon at the time of writing).
  4. Some network and USB cables.

The aim is to minimise the amount of running around required, and the requirement to have multiple physical systems involved. The steps to create the lab were also pretty easy:

  1. On the physical machine, make sure the hypervisor will let you assign a wireless dongle to each of the two VMs. This one took me some headbanging to sort out, dealing with issues around how Hyper-V handles network cards, and other issues around VirtualBox's handling of USB dongles. The final solution was to handle the networking side in the host OS, and just assign the dongles as network connections to the guest Ubuntu machines via VirtualBox.
  2. Create two VMs (I'm planning to add a third for management purposes, so that a whole suite of tests can be scripted to run overnight, but this is a start) and drop an OS onto them. After some fun trying to get Atheros drivers working on Linux, I ended up with an older Ubuntu distribution, which worked just fine.
  3. Connect the router with a network cable, and the two USB dongles with extension cables, to your host machine. For now the two VMs can share the host machine's network connection (for software-downloading purposes).
  4. Install iperf on each of the guest machines, and set up the wireless network as desired for testing
  5. Switch the network connections for the two guests over to one wifi dongle each, connect them to the access point via the host system, and check that they can talk to each other happily
  6. Run iperf in server mode on your server guest VM and in client mode on the client, and be stunned at the variance of wifi signals with very little change in environmental conditions.
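Once iperf is producing numbers, the output is easiest to collect in machine-readable form: iperf (v2) can emit CSV with the `-y C` flag. Here's a minimal parsing sketch; the field layout follows iperf2's CSV reporting order as I understand it, and the sample line is illustrative rather than a real measurement, so check it against your own output before relying on it:

```python
# Minimal sketch for parsing iperf (v2) CSV output produced with `-y C`.
# The sample line below is illustrative, not real measured data.

def parse_iperf_csv(line: str) -> dict:
    """Parse one line of `iperf -y C` output into a dict."""
    fields = line.strip().split(",")
    return {
        "timestamp": fields[0],
        "source": f"{fields[1]}:{fields[2]}",
        "dest": f"{fields[3]}:{fields[4]}",
        "interval": fields[6],          # e.g. "0.0-10.0" seconds
        "bytes": int(fields[7]),        # bytes transferred in the interval
        "bits_per_second": float(fields[8]),
    }

sample = "20240101120000,192.168.8.100,5001,192.168.8.101,54321,3,0.0-10.0,117440512,93952409"
result = parse_iperf_csv(sample)
print(f"{result['bits_per_second'] / 1e6:.1f} Mbit/s over {result['interval']}s")
```

With something like this on the client side, each run (`iperf -s` on the server guest, `iperf -c <server> -y C` on the client guest) can append one clean row per test to a results file.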

So now it's running through the first set of tests, followed by a reconfiguration of the access point, and repeat. The next step once that's all done is to play around with scripting until I can just give it a specification to test, and leave it running for a while (I'm curious about things like different key lengths, window sizes, and so on, and whether they'll have an impact). I'll put the script together and run through it next time, and after that will take a look at various methods for intercepting, cracking and spoofing wireless networks by extending the lab's capabilities.
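The test specification I have in mind could be sketched as a simple matrix sweep. Nothing below talks to real hardware; the access-point modes, server address and window sizes are placeholder values for illustration, and the actual AP reconfiguration would happen over SSH to the router between batches:

```python
# Sketch of an overnight test matrix: for each access-point configuration,
# sweep a few client-side iperf parameters and queue up the command lines.
# All the values here are placeholders for illustration.
from itertools import product

AP_MODES = ["open", "wep-64", "wpa2-psk"]   # reconfigured on the router via SSH
WINDOW_SIZES = ["64K", "128K", "256K"]      # iperf -w TCP window sizes
SERVER = "192.168.8.101"                    # the server guest VM (placeholder)
RUNS_PER_CONFIG = 5                         # repeat to average out wifi variance

def build_test_plan():
    """Expand the parameter matrix into a list of runs to execute in order."""
    plan = []
    for mode, window in product(AP_MODES, WINDOW_SIZES):
        for run in range(RUNS_PER_CONFIG):
            plan.append({
                "ap_mode": mode,
                "run": run,
                "command": ["iperf", "-c", SERVER, "-w", window, "-y", "C"],
            })
    return plan

plan = build_test_plan()
print(f"{len(plan)} test runs queued")  # 3 modes x 3 windows x 5 runs = 45
```

Grouping the plan by `ap_mode` keeps the router reconfigurations (the slow, manual-ish part) to a minimum, while everything inside one mode runs back-to-back.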

Security by Obscurity in Policies and Procedures

In policy management, I've noticed a problem of paranoia and an attachment to security by obscurity. There is a common idea that keeping internal business policies and procedures secret makes them more secure. I should note that I'm a big fan of properly applied paranoia, but I think that here the policy management people are behind their more technical counterparts in terms of security methods.

Security by Obscurity

Security by obscurity is an old and much-ridiculed concept, which has given us some of the most damaging security failures we will ever see. It is the idea that keeping your security measures secret from the enemy will somehow make them safer. There is some truth to this if you can absolutely guarantee that the secret will be kept.

Of course, if you can guarantee that a secret will be kept from all potential attackers then you’ve already got a perfect security system, and I’d really like to buy it from you.

To put it simply, if your non-technical security relies on keeping secrets (not secrets such as passwords and similar, which are a different thing) then it is incredibly fragile. Not only because it is untested outside our own community (and therefore examined only by people who have a vested interest in not discovering holes), but because it is simply not possible to effectively keep such a secret against a determined attacker. Since policies are published internally, it's almost impossible to assure their secrecy, and you will have no idea when they leak.

Opening up policy

Unfortunately, this attitude seems widespread enough that there won't be any change soon – but there could be. If companies were simply to open up and discuss their policies – even without identifying themselves – then it would help. The idea is to look at an entity's policies and procedures and spot the gaps they don't cover. Which areas are left unmonitored and unaudited due to historical oversights? Which people don't have the right procedural guidance to protect them against leaking information to an anonymous caller or customer?

These are things that people might not notice within a company, or might notice, grumble about, and point at someone else to fix. Assuring ourselves that our policies are secret, and so no one will exploit these vulnerabilities, is exactly the same as keeping a cryptographic algorithm proprietary and secret. (And yes, they are just as much vulnerabilities as software and hardware ones; just because wetware is squishier doesn't mean that each node, or the network as a whole, can't be treated in a similar way.) All it assures is that no one will know when it's broken, and when (inevitably) the security holes are discovered people will lose faith, not to mention the damage from exploitation of the vulnerability itself.

At the very least we should look at ways to collaborate on policies. I can see meetings where subject matter experts from different, but related, companies get together to discuss policies, anonymously or not (Chatham House Rule, anyone?), and find the mutual holes.

Essentially we need to start thinking about security policies and procedures the same way we think about software – and they really aren’t that different. The main difference is that wetware is a much more variable and error-prone computational engine, particularly when networked in a corporation, but then again I’ve seen plenty of unpredictable behaviour from both hardware and software.

Basically, if we're ever going to beat the bad guys, particularly the ones who can socially engineer, we need to be more open with each other about what we're doing. Not simply saying 'here's everything', but genuinely engaging and discussing strategies. In this scenario, there are definitely two sides, and those of us on the good guys' team need to start working together better on security, despite any rivalries, mistrust or competition. Working together, openly, on security benefits us all, and if we do it in good faith, building trust, then it will only hurt the bad guys.

Security Training and Ebbinghaus’ Forgetting Curve

There is a problem with training users outside of the security field. People forget things, so no matter how much we might try to teach them about warning signs there is always a limit. For those who work in security, it is sometimes hard to relate to the problems people outside our domain might have with retaining training. Not only do we tend to take various points as common sense, but we also forget how difficult it is to understand deliberate breaches of the social contract.

In addition, if we're completely honest with ourselves, we forget to check things as well – it's just that regular exposure to incidents that increase awareness means these points get thoroughly burned in.
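For reference, Ebbinghaus' forgetting curve is commonly modelled as R = e^(-t/S), where R is the fraction retained, t is time since training, and S is memory strength. The strength value below is illustrative rather than measured; the point of the sketch is only to show why a one-off training session decays so quickly without reinforcement:

```python
# Toy model of Ebbinghaus' forgetting curve, R = e^(-t/S).
# The strength value is illustrative, not empirically fitted.
import math

def retention(t_days: float, strength: float) -> float:
    """Fraction of trained material retained after t_days."""
    return math.exp(-t_days / strength)

for day in (1, 7, 30, 90):
    print(f"day {day:3d}: {retention(day, strength=20.0):.0%} retained")
```

Regular exposure is, in this model, equivalent to resetting t and raising S each time, which is exactly what incident-driven awareness does for us and what a once-a-year training slide deck does not.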

Teaching Cryptography with Lockpicking

I want to look at ways to relate physical models to core security concepts such as cryptography, to raise awareness and understanding of security for those who are less technical. As part of this I'll be looking at mapping lock picking to power analysis, a technique for monitoring power consumption in hardware to discover cryptographic secrets.

Mechanical lock picking is an obvious starting point for this form of teaching, since the terminology carries across reasonably well (or at least everyone understands keys) to digital lock picking. The first problem is that while it is simple enough to come up with a lockbox analogy for symmetric and asymmetric encryption, the single-pin picking of locks doesn't map across quite so well.
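To make the power-analysis side of the mapping concrete, here's a toy simulation of the idea: the "device" leaks power samples proportional to the Hamming weight of (plaintext XOR key), and correlating each key guess against the traces recovers the secret byte. Everything here is synthetic; real power analysis works on measured hardware traces and real cipher operations:

```python
# Toy correlation power analysis: synthetic Hamming-weight leakage lets an
# attacker recover a key byte by trying all 256 guesses. Purely illustrative.
import random

SECRET_KEY = 0x5A  # the byte the "attacker" does not know

def hamming_weight(x: int) -> int:
    return bin(x).count("1")

def leak(plaintext: int) -> float:
    """Simulated power sample: Hamming weight of the toy XOR op, plus noise."""
    return hamming_weight(plaintext ^ SECRET_KEY) + random.gauss(0, 0.5)

random.seed(1)
plaintexts = [random.randrange(256) for _ in range(500)]
traces = [leak(p) for p in plaintexts]

def score(guess: int) -> float:
    """Pearson correlation between predicted Hamming weights and traces."""
    predicted = [hamming_weight(p ^ guess) for p in plaintexts]
    mp = sum(predicted) / len(predicted)
    mt = sum(traces) / len(traces)
    cov = sum((a - mp) * (b - mt) for a, b in zip(predicted, traces))
    sd_p = sum((a - mp) ** 2 for a in predicted) ** 0.5
    sd_t = sum((b - mt) ** 2 for b in traces) ** 0.5
    return cov / (sd_p * sd_t)

best_guess = max(range(256), key=score)
print(f"recovered key byte: {best_guess:#04x}")
```

The lock-picking analogy I'm after is roughly this: each key guess is a pin, and the correlation score is the feedback the pick gives you, telling you when one part of the secret has "set" correctly.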