A New Path

The 7th of September, 2020 is an important day for me, as it will mark the re-launch of the family company with me fully taking the helm as the new company director. This is an exciting step for me and for the future of the company.

Bores has a wonderful heritage, being set up in October 1988 by Chris and Sarah Bore with an impassioned focus on training in NXP/TriMedia hardware and software, as well as image and digital signal processing (DSP). However, these industry pioneers retired earlier this year and handed over the business name and values to me, their son (along with the business trip shotglass collection).

I am not new to the family business: I have been working on an ad hoc basis for the company since 2005 (earlier, if you include hooking up BNC connectors, because small hands are better under desks). However, from the 7th this will be my full-time role, and I have to say that I am excited for what the future holds.

Whilst wanting to maintain the essence of the business as set up by Chris and Sarah, I do have some exciting plans for the future. Bores will no longer be offering DSP or NXP training as I am steering the focus of the company onto digital, data and cyber security, and am offering a bespoke consultancy and training service along these lines.

So, where have I come from?

I have grown up with the business and technology since its foundation in 1988 and have a deep-rooted passion and fascination for computers, networking and IT security. I had once considered a different path and started a physics degree in 2000, but decided I would much rather work in IT. And I am so glad I did.

After a few years working across the IT industry, including IT support, security, development, architecture, and engineering, I found my focus had moved completely to security, an ever-growing and therefore constantly evolving element of the industry.

At the end of 2016, following a debilitating six months of viral arthritis which gave me time to reassess my life, I enrolled at Northumbria University to do an MSc in Cyber Security, from which I graduated in 2019.

This gave me the know-how and confidence to consider working independently, although it was also at this time that I discussed the future of Bores with my parents, who were considering retiring. I had intended to start my own independent consultancy, but I decided I would rather keep the family business going for another 30+ years. Bores is a piece of history, not only for my family but for the wider technology industry, and it would almost be criminal to let it go.

Although Bores Consultancy Ltd is a well-established family business, I do want to put my own stamp on it with a complete redesign and relaunch, while still delivering top-quality training and consultancy services with a rigorous security perspective and upholding the company values set by my parents.

I hope you will join me in celebrating this new era of Bores Consultancy Ltd and wish me luck for the next stage in my career and the career of this family business. I would also like to reassure Sarah and Chris that their creation is safe in the hands of their son, so they can enjoy a relaxed retirement.

I am working on the mechanics of a safe, virtual launch party of some kind for the evening of the 4th of September. Sorry to say that it will be a bring-your-own-bottle event, but at least the travel costs will be low.

Thanks to everyone for all your support and help over the years; I would never have gotten to this point without you all.

#OSS2020 Playing with Security

A few weeks ago I ended up getting involved with the Open Security Summit 2020, initially with the idea of just running a threat modelling session. Things progressed a bit, and the idea evolved as time went on. We ended up with an interesting approach to an incident response scenario, which I want to document and carry further.

You may have experienced incident response tabletops, or scenarios, before, where injects such as newspaper headlines and the like help set the scene and prompt effective responses. If not, the idea is to run through some sort of incident scenario to test internal response processes, usually with senior members of an organisation sitting around a table working through a scripted scenario.

With #OSS2020 we put a slightly different spin on it. Rather than just having one team, we had several, covering different perspectives of an incident – including the attacker team. Red team exercises are familiar to many, and really all we did was bring that concept to a tabletop exercise so that the scenario could shift and alter as events progressed.

The ‘script’ was a basic description of a company and their product, plus initial briefing cards for each of the teams; the rest was done by moderators talking across back channels (a private Slack channel) to coordinate between breakout rooms. The experience was entirely different from any similar scenario I’ve run through, with a sense of pressure that’s often missing, and decisions having more dramatic consequences than they might (revoking root certificates to customer environments did stymie the attackers for a while, but also held back the blue team from remediation actions – whether it was the right call or not is something we’ll be discussing in the post-mortem).

It’s an exercise I’d definitely repeat, and I encourage others to try it. The one big conclusion we came to after the session was that a central point of coordination would be extremely useful to adjudicate on actions taken by teams, rather than relying on discussion by the team moderators. If anyone would like help performing such an exercise, tailored to their organisation, get in touch – the prep work needed is minimal, and the output in terms of not only testing responses but increasing awareness and understanding is hard to overstate.

In the spirit of #OSS2020, the videos will be uploaded to YouTube and made available – including the conversations in the different breakout rooms as they occurred – and while they’ll be a long watch I would recommend taking a look when they go up.

Running a Coffeehouse

Yesterday I ran the first, to my knowledge, security coffeehouse. Based on ideas I’ve taken from Cafe Scientifique, where years ago I had one of my first public speaking experiences (on the science of cyber security), the security coffeehouse is about bringing people into a conversation on a security-related topic.

I have to admit, the positivity of the response has been a touch overwhelming. I can’t help but feel this has tapped into something that’s needed as much as Cafe Scientifique was. To explain I guess a little background is needed.

The Cafe Scientifique movement was born from the idea that science should be brought into the open and discussed in the public sphere. Somewhat based on the ideas of the Parisian Salons of the 17th and 18th centuries, Cafe Scientifique is about letting people talk on scientific topics that they have researched, and letting an audience engage in the discussion.

The Security Coffeehouse idea is somewhat based on that, with a touch of the London coffeehouses of the same period (if you’re wondering whether they went anywhere, Lloyd’s came out of one of those coffeehouses). It’s about open discussion of security between people who know, or are interested in, something about it, and about making that available. It’s not a peer-to-peer forum, or a vendor session, or a conference: it’s just giving anyone who’s interested the opportunity to talk about it.

The feedback suggests that there’s a definite appetite for these sorts of events, so I plan to keep running them. Unlike the original coffeehouses (or penny universities, if you lived in Cambridge at the time), a relaxing drink in the hands of the panellists is strongly encouraged, to help set the right relaxed atmosphere. I hope I’ll see some of you at one of these in future, whether on the panel or in the audience; going by the inaugural event, future ones are going to be good.

Last mention – a huge thank you to the panellists from the first event. You made it possible, and you helped me realise it’s a viable idea.

Pen Testing – Why, When, and Where is it going?

Between various speaking gigs, running online events, writing for the Circuit, work, and some side projects, it’s been a while since I posted. Given that, I want to reactivate this with something that might be controversial and talk about pen testing.

Pen testing, or penetration testing, is the (usually) manual assessment of a system by experts in technical security. It can encompass, or be part of, red teaming exercises where physical vectors are also used, phishing testing, and similar activities, but the USP is usually that it’s done manually by experts (for a given definition of manually – any commercial pen tester who doesn’t script 90% of what they do is doing it wrong and wasting their time and your money).

Why?

There’s no denying the value of a well-scoped pen test performed at the right time in a product lifecycle. In the perfect scenario this pen test is comprehensive, with enough time to thoroughly delve into the system and provide proof of exploitation for all discoveries. Mostly, there isn’t enough time given for a fully comprehensive and thorough test, but there’s still value in this kind of deep delve.

Even if the test is short, and poorly scoped, it’s better than nothing so long as it isn’t taken to give a false sense of security around a product.

The findings of a good pen test can provide an organisation with a final check before going live to decide whether to accept, mitigate, or transfer any risks which are made visible. What a pen test cannot provide is assurance that the outcome of a project is secure, any more than any other testing. A pen test with no findings is like a house that hasn’t burned down – it’s no reason to skip having a fire alarm.

When?

The pen test should be the last step of your security testing stack. It’s the most effortful, and most expensive, form of testing you can undergo and so wasting it on false positives and easy-to-fix vulnerabilities is not a good way to spend your money. Ideally you should pen test when you have a production-ready candidate, in parallel with your last round of release testing.

On top of that, findings picked up by a pen test will be far, far more expensive to fix (in terms of effort, delay, and financial outlay), so you want to minimise the chances of things being picked up this late. Earlier development lifecycle stages should progress through concept testing (threat modelling is great for this, along with making sure requirements incorporate security), code analysis tools and secure coding practices, vulnerability scanning, fuzz testing, and any other testing that fits your model.
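The ordering above can be sketched as a toy pipeline. Everything here is hypothetical – the stage names, relative costs, and findings are made up purely to illustrate the point that the cheap checks should run first, so the expensive pen test meets a clean release candidate.

```python
# Illustrative only: a toy "security testing stack" that runs the cheapest
# checks first and stops escalating once a stage reports findings, so the
# expensive manual pen test is reserved for a clean candidate.

def run_stages(stages):
    """Run (name, cost, check) stages in ascending order of cost.

    Returns the name of the first stage that reports findings, or None
    if every stage passes.
    """
    for name, _cost, check in sorted(stages, key=lambda s: s[1]):
        findings = check()
        if findings:
            return name  # fix these before paying for the next stage
    return None

# Hypothetical results: static analysis flags an issue long before
# anyone pays for a pen test.
stages = [
    ("threat modelling", 1, lambda: []),
    ("static analysis", 2, lambda: ["hard-coded credential"]),
    ("vulnerability scan", 3, lambda: []),
    ("pen test", 10, lambda: []),
]

print(run_stages(stages))
```

The design point is simply that cost ordering is explicit: a finding at any cheaper stage stops the escalation, which is the whole argument for not booking the pen test until the rest of the stack comes back clean.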

Where is it going?

This is the bit that might be controversial, and I want to be absolutely clear that I am in no way denigrating the value of pen testing, or the expertise of pen testers. This is speculation based on what I’ve seen in the security industry, and what I’m expecting to come up.

Pen testing benefits hugely from being the sexy, hyped-up side of security. Red teaming and pen testing tracks get far, far more attention and excitement than the more reactive incident response side, and both get more hype than the fully defensive planning and design stages.

I can understand this. I’m a big fan of heist movies myself. When I want to relax and do something professional, I’d rather do a bit of CTF or play around with something from VulnHub than review architecture diagrams and project requirements, and I know from experience that I can get far more attendees and interest at a workshop or presentation on lockpicking than at one on threat modelling (caveat: based on personal experience and anecdotal evidence rather than comprehensive data collection).

This is a problem. The attacking side may be the sexy side, but it only has limited effectiveness when you’re trying to defend a system. Pen testing done well is a simulated attack, and a simulated attack is not a comprehensive, long-term, defensive strategy. A pen test is, at its basic level, running through a series of tactics and seeing if they work. The reason a lot of it can be automated is because the vast majority of pen test findings are known vulnerabilities already, and the pen test is just discovering them and showing they can be exploited.
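That point about automation can be made concrete with a sketch. This is not any real tool or vulnerability database – the product names, versions, and identifiers are invented – but it shows why matching a service banner against a list of known-vulnerable versions covers so much of what a routine pen test finds.

```python
# Illustrative sketch of why so much pen testing automates well: most
# findings are already-known vulnerabilities, so a banner-to-database
# lookup goes a long way. All products, versions, and identifiers below
# are hypothetical.

KNOWN_VULNERABLE = {
    ("ExampleHTTPd", "2.4.1"): ["HYPOTHETICAL-CVE-0001"],
    ("ExampleSSH", "7.2"): ["HYPOTHETICAL-CVE-0002"],
}

def check_banner(banner):
    """Parse a 'Product/Version' banner and look up known issues."""
    product, _, version = banner.partition("/")
    return KNOWN_VULNERABLE.get((product, version), [])

print(check_banner("ExampleHTTPd/2.4.1"))  # known-vulnerable version
print(check_banner("ExampleHTTPd/2.4.2"))  # patched version, no findings
```

The human expertise in a good test sits in everything this sketch leaves out: chaining findings, proving exploitation, and spotting the issues that aren’t in anyone’s database yet.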

So I genuinely believe that as (or if) we mature the blue side more, pen testing is going to become both less important, and much, much more frustrating as a career. In fact security overall is (hopefully) going to become less of a thrill-a-minute experience if we do it right. In my ideal world, pen tests will become more and more a case of continuous security research into discovering new attack vectors and vulnerabilities, and pen testing for existing vulnerabilities will disappear.

That’s a utopian world and will never be fully achieved, but the automation of pen testing as we mostly know it today is here, and that poses a threat to pen testing companies. While it’s an uncomfortable truth for both attack and defence sides, penetration tests are mostly compliance driven rather than arranged out of a genuine awareness of security needs. With the growth of automated testing, I am reasonably certain that it won’t be long before automated testing platforms and frameworks are accepted for compliance purposes.

Forward-thinking pen testing companies should be doing two things if they want to survive and thrive. Preparing to shift into a different way of working (and honestly, a more interesting one than running through a list of the OWASP Top 10 for a few days – this is hyperbole but does have a grain of truth) is the first. Educating customers that they can add value beyond achieving ISO27k, PCI-DSS, SOC2, CBEST, etc. is the second.

Doorstep Dispensaree

This entry is part 1 of 1 in the series Lessons Learned

Since it’s a New Year, and an opportunity presented itself, I’m trying something new. In this series, if it continues, I will be looking at various incidents and pulling out the lessons we can, should, or must learn in security. This first article looks at the first penalty levied by the ICO under GDPR, against Doorstep Dispensaree.

There are a few lessons to pull out of this case, but for those who want to look at the details themselves the full ICO penalty notice is available from their website.

The ICO identified a number of failings after they were called in by the MHRA, who had found some curious containers while executing a search warrant for their own investigation. In a courtyard behind the premises they found:

  • 47 unlocked crates
  • 2 disposal bags
  • 1 cardboard box

These unsecured containers, stored outside, contained around half a million documents – some water damaged – with personal information, including medical and prescription information, dating from 2016-2018. Shortly after being informed, the Commissioner sent a request for additional information, a list which will be familiar to anyone who has banged their head against GDPR enough: data retention and disposal policies, privacy notice, an explanation of why some data had been retained since 2016 and was stored in this way, and various other standard bits of evidence.

In response, Doorstep Dispensaree did not cover themselves in glory, denying knowledge of the matter. Things escalated when they then refused to answer questions following a second request, apparently confusing the ICO and MHRA investigations. After an Information Notice was issued requiring the information, then appealed and upheld, they provided about half of the information, claiming protection under the DPA 2018 on the grounds that providing the rest would expose them to prosecution by the MHRA.

As part of their documentation, they did kindly provide the National Pharmacy Association template Data Protection Officer Guidance and Checklist, and Definitions and Quick Reference Guide. Other documents did not mention the GDPR, and the templates were still the original, unmodified templates from the National Pharmacy Association.

Lessons

In all, Doorstep Dispensaree were found to have contravened Articles 5, 13, 14, 24 and 32 to some degree or another. It’s a good thing for them that no information was stolen, as the ICO would have been unlikely to look on them kindly if anything had happened. Given the serious compliance failures, it is unlikely any notification of a breach would have been forthcoming until the MHRA investigation flagged it up, and certainly not within the required 72 hours.

You cannot delegate responsibility for data you control

Doorstep Dispensaree fairly clearly failed to dispose of the data securely, as some of it dated from 2016. An attempt was made to have the penalty assigned to a waste disposal company, blaming them for not having picked up the waste, which the ICO dismissed. No evidence that the company was contracted was provided, Doorstep Dispensaree also failed to implement their own claimed shredding procedures, and in any case, as the data controller, they are ultimately accountable for the breach.

Information security is more than just confidentiality

Apparently storing sensitive personal information in unlocked boxes in an accessible yard is not considered secure storage or processing. As this is fairly obvious, the main point of interest here is that the ICO picked up on water damage to the documents being another failing. In information security terminology, there was a failure to ensure integrity and availability along with confidentiality. There is more to information security than simply locking data away.

Policies do not work retroactively

The ICO commented that eventually Doorstep Dispensaree did provide a more comprehensive set of policies. Unfortunately many were still in template form and had been acquired as a response to the investigation. The lesson here is that once an investigation has started, it’s probably too late to start downloading policy templates – best to put a framework in place beforehand.

Only keep data that needs to be kept

While the ICO were generous enough to only consider ongoing infringement from May 2018, they remarked that the age of the data caused some concern about retention.

Be clear on your privacy notice

Your privacy notice must state who the controller is and how to contact them; the nature and lawful basis of the processing (and, for special category data, the additional basis for processing); the categories of personal data you collect and work with; all parties involved in the processing of data; how long it is retained; the rights of the data subjects; and where personal data is collected. It must also be written in clear, unambiguous language and be freely available to data subjects.
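For anyone auditing their own notice against that list, a completeness check can be sketched in a few lines. The field names below are my own shorthand, not regulatory terms of art, and this obviously checks only presence, not quality or clarity.

```python
# Illustrative only: check a draft privacy notice (modelled as a dict)
# for the required content areas. Field names are informal shorthand.

REQUIRED_FIELDS = {
    "controller_identity",
    "controller_contact",
    "processing_nature_and_basis",
    "data_categories",
    "processing_parties",
    "retention_period",
    "subject_rights",
    "collection_points",
}

def missing_fields(notice):
    """Return, sorted, the required fields absent or empty in a notice."""
    return sorted(f for f in REQUIRED_FIELDS if not notice.get(f))

# A hypothetical half-finished draft: most fields still to do.
draft = {"controller_identity": "Example Ltd", "retention_period": "6 years"}
print(missing_fields(draft))
```

A real review would of course also need a human to judge whether each section is written in the clear, unambiguous language the regulation requires.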

A GDPR breach does not require that data has been lost or stolen

What’s particularly interesting about this case, compared to many of the headlines about cyber-related data protection incidents, is that no data was stolen. The fact that it could have led to distress and damage to individuals is sufficient.

Trying to improve counts

The improvements Doorstep Dispensaree claims to be making have been taken into account, even though some of the policy documents presented are still in template format.

If you are found in breach, co-operate

It is very clear that the lack of early co-operation did more to hurt than help Doorstep Dispensaree’s case. Later co-operation and attempts to improve were taken into account, and the original proposed penalty of £400,000 was finalised as £275,000.

‘Insider’ Threat with Deepfakes

I have been neglecting this blog recently as finishing off my master’s thesis (now done, passed and everything), working on various tasks in a new role, and finishing off a contributed book chapter have been priorities. With two of those three taken care of I am hoping to have time to write a bit more on here again.

In that spirit, and as a nice piece of synergy with the book chapter, here is a quick word about an incident reported today. I predict it will be the first of many.

Impersonation of senior executives via e-mail has become a well-established, and profitable, practice among both organised and disorganised cyber criminals. Some of the most advanced attempts go to extreme lengths of surveillance and co-ordination. Even the most basic mass attempts have seen some success. Whether to include this as insider threat is more a philosophical debate than one where we can draw a clear line, but I choose to include it.

With the advent of Deepfake algorithms it was inevitable that impersonation would go beyond email. Deepfake pornography created for blackmail or revenge purposes appeared shortly after the technology became available. Using it for impersonation has mainly been for entertainment purposes, until recently. As far as I know, this is the first published case outside of fiction of an attacker going this far in impersonation. That they did not think to spoof the source phone number, leading to early detection, is surprising.

There are possible solutions to this, but since we have yet to effectively solve the problem of impersonation by changing only the display name on e-mails, I suspect they are a long way from widespread adoption.

Security Myths: Trading Functionality for Security

This entry is part 1 of 1 in the series Security Myths

One of the more common myths in security is that it will always compete with the functionality of a system. There is some element of truth in the idea, but it’s often touted as a reason for not putting security in place, which is something we need to move past as an industry.

The problem is that it is true, to a degree. Once a system is built, layering security protections on top will always have some impact on functionality and usability, whether that’s by requiring additional authentication by users, restricting connectivity, or simply by impacting performance. And there lies the problem – it’s true when layering security on top of an existing system.

If a system is designed with security considerations from the ground up, placed on an equal footing at the beginning of requirements gathering as other functional and non-functional requirements, then there is no need to have a significant impact on the functionality of a system. In fact, given that one of the best ways to apply security at design is to simplify and automate, designing for both security and function will usually result in a more secure, more robust, and more functional system.

Yes, it does involve more up-front work and some additional expense, but the effort saved on bolting web application firewalls, incompatible additional authentication mechanisms, external monitoring systems, and many other point solutions onto a system quickly pays back that initial investment.

So, more accurately, security will always be a trade-off with functionality when it is applied too late to a system. When it is incorporated into requirements and built in as a core part of the design, this idea no longer needs to apply.

GDPR and Consent

Despite what the non-stop wave of e-mails, text messages, and notices popping up everywhere would have you believe, GDPR is not about consent.

In fact as a rule of thumb what the legislation suggests is that consent should only be relied on to collect, store, or process personal data if you do not have any other legitimate reason to do so.

A quick look at the lawful bases provided for under GDPR, and what each covers, shows this quite well.

  • Consent: the subject has given clear consent to process their data for a specific purpose
  • Contract: processing is necessary to fulfil a contract with the subject, or to prepare a contract that they have requested
  • Legal obligation: processing is necessary to comply with the law (exception being contractual obligations)
  • Vital interests: processing is necessary to protect someone’s life (not necessarily the subject)
  • Public task: processing is necessary to perform a task in the public interest, or for your official functions
  • Legitimate interests: processing is necessary for legitimate interests of yourself or a third party (this does not apply to public authorities carrying out official tasks)

The only reason all of these consent notices are popping up everywhere is that none of the other bases apply. Mostly it’s about getting consent to profile you in order to better target advertising. So, when ticking those consent boxes, it’s worth remembering that it isn’t down to GDPR that you’re now seeing them everywhere; it’s down to companies being desperate to use your personal information to more effectively persuade you to buy things or think a certain way, without having any reason in your interest for doing so.