Running a Coffeehouse

Yesterday I ran what was, to my knowledge, the first security coffeehouse. It’s based on ideas I’ve taken from Cafe Scientifique, where years ago I had one of my first public speaking experiences (on the science of cyber security); like that event, the security coffeehouse is about bringing people into a conversation on a security-related topic.

I have to admit, the positivity of the response has been a touch overwhelming. I can’t help but feel this has tapped into something that’s needed as much as Cafe Scientifique was. To explain, I guess a little background is needed.

The Cafe Scientifique movement was born from the idea that science should be brought into the open and discussed in the public sphere. Somewhat based on the ideas of the Parisian Salons of the 17th and 18th centuries, Cafe Scientifique is about letting people talk on scientific topics that they have researched, and letting an audience engage in the discussion.

The Security Coffeehouse idea is somewhat based on that, with a touch of the London coffeehouses of the same period (if you’re wondering whether they went anywhere, Lloyd’s came out of one of those coffeehouses). It’s about open discussion of security between people who know, or are interested in, something about it, and about making that discussion available. It’s not a peer-to-peer forum, a vendor session, or a conference; it’s just about giving anyone who’s interested the opportunity to talk about security.

The feedback suggests that there’s a definite appetite for these sorts of events, so I plan to keep running them. Unlike the original coffeehouses (or penny universities, if you lived in Cambridge at the time), a relaxing drink in the hands of the panellists is strongly encouraged to help set the right atmosphere. I hope I’ll see some of you at one of these in future, whether on the panel or in the audience; going by the inaugural event, future ones are going to be good.

Last mention – a huge thank you to the panellists from the first event: you made it possible, and you helped me realise it’s a viable idea.

Pen Testing – Why, When, and Where is it going?

Between various speaking gigs, running online events, writing for the Circuit, work, and some side projects, it’s been a while since I posted. Given that, I want to reactivate this with something that might be controversial and talk about pen testing.

Pen testing, or penetration testing, is the (usually) manual assessment of a system by experts in technical security. It can encompass, or be part of, red teaming exercises where physical vectors are also used, phishing testing, and similar activities, but the USP is usually that it’s done manually by experts (for a given definition of manually – any commercial pen tester who doesn’t script 90% of what they do is doing it wrong and wasting their time and your money).

Why?

There’s no denying the value of a well-scoped pen test performed at the right time in a product lifecycle. In the perfect scenario this pen test is comprehensive, with enough time to thoroughly delve into the system and provide proof of exploitation for all discoveries. Mostly, there isn’t enough time given for a fully comprehensive and thorough test, but there’s still value in this kind of deep delve.

Even if the test is short, and poorly scoped, it’s better than nothing so long as it isn’t taken to give a false sense of security around a product.

The findings of a good pen test can provide an organisation with a final check before going live to decide whether to accept, mitigate, or transfer any risks which are made visible. What a pen test cannot provide is assurance that the outcome of a project is secure, any more than any other testing. A pen test with no findings is like a house that hasn’t burned down – it’s no reason to skip having a fire alarm.

When?

The pen test should be the last step of your security testing stack. It’s the most effortful, and most expensive, form of testing you can undergo and so wasting it on false positives and easy-to-fix vulnerabilities is not a good way to spend your money. Ideally you should pen test when you have a production-ready candidate, in parallel with your last round of release testing.

On top of that, findings picked up by a pen test will be far, far more expensive to fix (in terms of effort, delay, and financial outlay) than those caught earlier, so you want to minimise the chances of things being picked up this late. Earlier development lifecycle stages should progress through concept testing (threat modelling is great for this, as is making sure requirements incorporate security), code analysis tools and secure coding practices, vulnerability scanning, fuzz testing, and any other testing that fits your model.
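To make the “code analysis” stage of that progression concrete, here’s a deliberately minimal sketch of the idea behind static-analysis checks that run long before a pen tester ever sees the system. The pattern list is entirely illustrative (real tools such as linters and security scanners use much richer rule sets and parse the code properly rather than using regexes); the point is that cheap, automated checks early in the lifecycle catch the easy findings so the pen test doesn’t have to.

```python
import re

# Illustrative only: a handful of well-known risky Python patterns.
# A real static-analysis tool would use AST inspection, not regexes.
RISKY_PATTERNS = {
    r"\beval\(": "use of eval() on dynamic input",
    r"\bpickle\.loads\(": "deserialising untrusted data with pickle",
    r"password\s*=\s*['\"]": "possible hard-coded credential",
}

def scan_source(source: str) -> list[tuple[int, str]]:
    """Return (line_number, finding) pairs for each risky pattern hit."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in RISKY_PATTERNS.items():
            if re.search(pattern, line):
                findings.append((lineno, message))
    return findings

# Two findings a pen test should never need to be the one to discover.
sample = 'password = "hunter2"\nresult = eval(user_input)\n'
for lineno, message in scan_source(sample):
    print(f"line {lineno}: {message}")
```

Wiring something like this into a commit hook or CI pipeline costs almost nothing compared to a day of a pen tester’s time spent confirming the same issues.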

Where is it going?

This is the bit that might be controversial, and I want to be absolutely clear that I am in no way denigrating the value of pen testing, or the expertise of pen testers. This is speculation based on what I’ve seen in the security industry, and what I’m expecting to come up.

Pen testing benefits hugely from being the sexy, hyped-up side of security. Red teaming and pen testing tracks get far, far more attention and excitement than the more reactive incident response side, and both get more hype than the fully defensive planning and design stages.

I can understand this. I’m a big fan of heist movies myself. When I want to relax and do something professional, I’d rather do a bit of CTF or play around with something from VulnHub than review architecture diagrams and project requirements. And I know from experience that I can get far more attendees and interest at a workshop or presentation on lockpicking than at one on threat modelling (caveat: based on personal experience and anecdotal evidence rather than comprehensive data collection).

This is a problem. The attacking side may be the sexy side, but it only has limited effectiveness when you’re trying to defend a system. Pen testing done well is a simulated attack, and a simulated attack is not a comprehensive, long-term, defensive strategy. A pen test is, at its basic level, running through a series of tactics and seeing if they work. The reason a lot of it can be automated is because the vast majority of pen test findings are known vulnerabilities already, and the pen test is just discovering them and showing they can be exploited.
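That automatability is easy to illustrate. A large share of findings boil down to “this service reports a version with a published vulnerability”, which is a lookup, not an art form. The service name, versions, and advisory identifier below are made up for the example (they are not real CVE data), but the shape of the check is the shape of a great deal of routine pen-test work:

```python
# Hypothetical advisory data for illustration -- not a real feed.
# service: list of (first_affected, last_affected, advisory)
KNOWN_VULNERABLE = {
    "exampled": [("2.0.0", "2.4.1", "EXAMPLE-2019-001: remote code execution")],
}

def parse_version(version: str) -> tuple[int, ...]:
    """Turn '2.3.0' into (2, 3, 0) so versions compare numerically."""
    return tuple(int(part) for part in version.split("."))

def check_banner(service: str, version: str) -> list[str]:
    """Return advisories whose affected range covers the reported version."""
    hits = []
    for low, high, advisory in KNOWN_VULNERABLE.get(service, []):
        if parse_version(low) <= parse_version(version) <= parse_version(high):
            hits.append(advisory)
    return hits

# A banner grab reporting exampled 2.3.0 flags the advisory;
# a patched 2.5.0 does not.
print(check_banner("exampled", "2.3.0"))
print(check_banner("exampled", "2.5.0"))
```

The genuinely expert part of a pen test is what happens after a hit: confirming exploitability and demonstrating impact. The discovery step, as sketched here, is exactly the part the scanners have already eaten.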

So I genuinely believe that as (or if) we mature the blue side more, pen testing is going to become both less important, and much, much more frustrating as a career. In fact security overall is (hopefully) going to become less of a thrill-a-minute experience if we do it right. In my ideal world, pen tests will become more and more a case of continuous security research into discovering new attack vectors and vulnerabilities, and pen testing for existing vulnerabilities will disappear.

That’s a utopian world and will never be fully achieved, but the automation of pen testing as we mostly know it today is here, and that poses a threat to pen testing companies. While it’s an uncomfortable truth for both the attack and defence sides, penetration tests are mostly compliance-driven rather than arranged out of a genuine awareness of security needs. With the growth of automated testing, I am reasonably certain that it won’t be long before automated testing platforms and frameworks are accepted for compliance purposes.

Forward-thinking pen testing companies should be doing two things if they want to survive and thrive. The first is preparing to shift into a different way of working (and honestly, a more interesting one than running through a list of the OWASP Top 10 for a few days – hyperbole, but with a grain of truth). The second is educating customers that they can add value beyond achieving ISO27k, PCI-DSS, SOC2, CBEST, etc.

Doorstep Dispensaree

This entry is part 1 of 1 in the series Lessons Learned

Since it’s a New Year, and an opportunity presented itself, I’m trying something new. In this series, if it continues, I will be looking at various incidents and pulling out the lessons we can, should, or must learn in security. This first article looks at the first penalty levied by the ICO under GDPR, against Doorstep Dispensaree.

There are a few lessons to pull out of this case, but for those who want to look at the details themselves the full ICO penalty notice is available from their website.

The ICO identified a number of failings after they were called in by the MHRA, who had found some curious containers while executing a search warrant for their own investigation. In a courtyard behind the premises they found:

  • 47 unlocked crates
  • 2 disposal bags
  • 1 cardboard box

These unsecured containers, stored outside, contained around half a million documents – some water damaged – with personal information including medical and prescription information dating from 2016-2018. Shortly after they were informed, the Commissioner sent a request for additional information, a list which will be familiar to anyone who has banged their head against GDPR enough: data retention and disposal policies, privacy notice, explanation of why some data had been retained since 2016 and was stored in this way, and various other standard bits of evidence.

In response, Doorstep Dispensaree did not cover themselves in glory, denying knowledge of the matter. Things escalated when they then refused to answer questions following a second request, apparently confusing the ICO and MHRA investigations. After an Information Notice requiring the information was issued, appealed, and upheld, they provided about half of the information, claiming protection under the DPA 2018 on the grounds that providing the rest would expose them to prosecution by the MHRA.

As part of their documentation, they did kindly provide the National Pharmacy Association template Data Protection Officer Guidance and Checklist, and Definitions and Quick Reference Guide. Other documents did not mention the GDPR, and the templates provided were still the unmodified originals from the National Pharmacy Association.

Lessons

In all, Doorstep Dispensaree were found to have contravened Articles 5, 13, 14, 24 and 32 to some degree or another. It’s a good thing for them that no information was stolen, as the ICO would have been unlikely to look on them kindly if anything had happened – especially as, given the serious compliance failures, it is unlikely any notification of the breach would have been forthcoming until the MHRA investigation flagged it up, and certainly not within the required 72 hours.

You cannot delegate responsibility for data you control

Doorstep Dispensaree fairly clearly failed to dispose of the data securely, as some of it dated from 2016. An attempt was made to shift the penalty onto a waste disposal company, blaming them for not having picked up the waste, which the ICO dismissed. No evidence was provided that the company had been contracted, Doorstep Dispensaree had also failed to implement their own claimed shredding procedures, and in any case, as the data controller, they are ultimately accountable for the breach.

Information security is more than just confidentiality

Apparently storing sensitive personal information in unlocked boxes in an accessible yard is not considered secure storage or processing. As this is fairly obvious, the main point of interest here is that the ICO picked up on water damage to the documents being another failing. In information security terminology, there was a failure to ensure integrity and availability along with confidentiality. There is more to information security than simply locking data away.

Policies do not work retroactively

The ICO commented that eventually Doorstep Dispensaree did provide a more comprehensive set of policies. Unfortunately many were still in template form and had been acquired as a response to the investigation. The lesson here is that once an investigation has started, it’s probably too late to start downloading policy templates – best to put a framework in place beforehand.

Only keep data that needs to be kept

While the ICO were generous enough to only consider ongoing infringement from May 2018, they remarked that the age of the data caused some concern about retention.

Be clear on your privacy notice

Your privacy notice must state who the controller is and how to contact them; the nature and basis of the processing (and, for special category data, the additional basis for processing); the categories of personal data you collect and work with; all parties involved in the processing of the data; how long it is retained; the rights of the data subjects; and where personal data is collected. It must also be written in clear, unambiguous language and be freely available to data subjects.

A GDPR breach does not require that data has been lost or stolen

What’s particularly interesting about this case, compared to many of the headlines about cyber-related data protection incidents, is that no data was stolen. The fact that the breach could have led to distress and damage to individuals is sufficient.

Trying to improve counts

The improvements Doorstep Dispensaree claims to be making have been taken into account, even though some of the policy documents presented are still in template format.

If you are found in breach, co-operate

It is very clear that the lack of early co-operation did more to hurt than help Doorstep Dispensaree’s case. Later co-operation and attempts to improve were taken into account, and the original proposed penalty of £400 000 was finalised as £275 000.

‘Insider’ Threat with Deepfakes

I have been neglecting this blog recently as finishing off my master’s thesis (now done, passed and everything), working on various tasks in a new role, and finishing off a contributed book chapter have been priorities. With two of those three taken care of I am hoping to have time to write a bit more on here again.

In that spirit, and as a nice piece of synergy with the book chapter, a quick word about an incident reported today. I predict it will be the first of many.

Impersonation of senior executives via e-mail has become a well-established, and profitable, practice among both organised and disorganised cyber criminals. Some of the most advanced attempts go to extreme lengths of surveillance and co-ordination. Even the most basic mass attempts have seen some success. Whether to include this as insider threat is more a philosophical debate than one where we can draw a clear line, but I choose to include it.

With the advent of Deepfake algorithms it was inevitable that impersonation would go beyond email. Deepfake pornography created for blackmail or revenge purposes appeared shortly after the technology became available. Using it for impersonation has mainly been for entertainment purposes, until recently. As far as I know, this is the first published case outside of fiction of an attacker going this far in impersonation. That they did not think to spoof the source phone number, leading to early detection, is surprising.

There are possible solutions to this. Since we have yet to effectively solve the problem of impersonation by changing only the display name on e-mails I suspect they are a long way from widespread adoption.

Security Myths: Trading Functionality for Security

This entry is part 1 of 1 in the series Security Myths

One of the more common myths in security is that it will always compete with the functionality of a system. There is some element of truth in the idea, but it’s often touted as a reason for not putting security in place, which is something we need to move past as an industry.

The problem is that it is true, to a degree. Once a system is built, layering security protections on top will always have some impact on functionality and usability, whether that’s by requiring additional authentication from users, restricting connectivity, or simply by impacting performance. And therein lies the problem – it’s only true when layering security on top of an existing system.

If a system is designed with security considerations from the ground up, placed on an equal footing at the beginning of requirements gathering as other functional and non-functional requirements, then there is no need to have a significant impact on the functionality of a system. In fact, given that one of the best ways to apply security at design is to simplify and automate, designing for both security and function will usually result in a more secure, more robust, and more functional system.

Yes, it does involve more up-front work and some additional expense, but the effort saved on bolting web application firewalls, additional incompatible authentication mechanisms, and external monitoring systems (among many other point solutions) onto the front of a system quickly pays back that initial investment.

So more accurately, security will always be a trade off with functionality when it is applied too late to a system. When it is incorporated into requirements and built in as a core part of the design, this idea no longer needs to apply.

Going After the Big Phish

A while ago I did a presentation for one of the CTG events on Executive Security and Close Protection Technology. It was good to be able to go in with a slightly different take on things, as I approached it as basic awareness of how people carry out whaling and phishing attacks targeting senior executives, and what awareness an exec security team might want or need in order to protect their principal from this sort of decidedly non-physical attack.

For various reasons I hadn’t got around to making this one available until recently, after having heard James Linton, the man who phished the White House as a prank, speak. It prodded my memory and so, in case it’s useful to anyone, here’s the presentation.

GDPR and Consent

Despite what the non-stop wave of e-mails, text messages, and notices popping up everywhere would have you believe, GDPR is not about consent.

In fact as a rule of thumb what the legislation suggests is that consent should only be relied on to collect, store, or process personal data if you do not have any other legitimate reason to do so.

A quick look at the lawful bases provided for under GDPR, and the rights that each provides, show this quite well.

  • Consent: the subject has given clear consent to process their data for a specific purpose
  • Contract: processing is necessary to fulfil a contract with the subject, or to prepare a contract that they have requested
  • Legal obligation: processing is necessary to comply with the law (exception being contractual obligations)
  • Vital interests: processing is necessary to protect someone’s life (not necessarily the subject)
  • Public task: processing is necessary to perform a task in the public interest, or for your official functions
  • Legitimate interests: processing is necessary for legitimate interests of yourself or a third party (this does not apply to public authorities carrying out official tasks)

The only reason all of these consent notices are popping up everywhere is because none of the other bases apply. Mostly it’s about getting consent to profile you in order to better target advertising. So, when ticking those consent forms, or giving consent, it’s worth remembering that it isn’t down to GDPR that you’re now seeing them everywhere – it’s down to companies being desperate to use your personal information to more effectively persuade you to buy things or think a certain way, and not having any reason in your interest for doing so.

Santa and the GDPR

There’s a meme that’s been doing the rounds for a while, and with Christmas approaching has become especially popular.

Firstly, Article 4 is made up of definitions (https://gdpr-info.eu/art-4-gdpr/), so it is difficult to be in breach of, but let’s ignore that and run with the joke.

Lawful Basis

The first thing to do is to establish the lawful basis under which the processing is taking place. I’d argue that contract rather than legitimate interests applies here, since there is a standing agreement between parties (who believe in Santa) and Claus Corp. that belief and good behaviour will bring gifts (bad behaviour, obviously, bringing coal or similar). While the parties in question are under the age of capacity (in the vast majority) and cannot enter contracts themselves, in this instance their guardians can give consent to enter into and perform the gift/behaviour exchange contract on their behalf (https://gdpr-info.eu/art-8-gdpr/).

Now this does raise the question of children or other parties who do not believe. In this instance there is no mechanism for Claus Corp. to collect or process any data about these parties and so again I see no problems here. Data is only collected on those who believe in Santa, and therefore also agree to the gift/behaviour contract (or rather will have guardians willing to do so on their behalf).

There is an argument that legitimate interests could apply as a legal basis, but the additional responsibility, right to portability, and other requirements make it less suitable in this instance.

Right to Object

Under the contract lawful basis, data subjects do have the right to object to the processing of their data, meaning the processing must cease unless the controller can demonstrate legitimate grounds for the processing, weighed against the interests of the data subject. Of course, objecting would raise questions about gifts later being received, meaning that if an objection is received processing would cease in any case. It is important that in any correspondence with the data subject, or their guardian, Claus Corp. notify them of their right to object and the mechanism to do so, but this is a relatively simple requirement.

Notification Obligation and Restriction of Processing

Claus Corp. does have a responsibility to communicate any rectification or erasure of personal data, or restriction of processing (https://gdpr-info.eu/art-19-gdpr/). Of course, the contract basis does not provide an automatic right to erasure, we have already established that the processing is lawful, and Claus Corp. famously keeps their data worryingly up to date. The only instance that could apply is one in which the controller, Claus Corp., no longer needs the data for the purposes of processing but the subject wishes the data to be retained for legal purposes. Given the unlikelihood of any legal case depending on Claus Corp. data, and their extraterritoriality with respect to any treaties allowing legal authorities access to their databases, this is not a scenario that needs to be examined in depth.

Automated decision-making and profiling

This is an area that could raise issues for Claus Corp. While the details of their decision-making process are obviously proprietary, it is safe to assume at least some level of automation is involved due to the sheer volume of data gathered (https://gdpr-info.eu/art-22-gdpr/). However, in this instance not only are no legal effects produced, but the processing is necessary to enter into and perform the gift/behaviour exchange contract. Whether the data subject (or their guardian(s)) give explicit consent is a more challenging question, as it is clear that by believing and communicating that belief to guardians they are making a clear and active decision to enter into the contract. I suspect this would be seen as explicit consent, but without test cases there is still some uncertainty here.

As such the best decision for Claus Corp. would be to provide a means for appeal to obtain human intervention (elf intervention is not counted as sufficient under the GDPR, and indeed elf processing of data in the first place may be considered automated decision making rather than manual, another instance where a test case would help to establish precedent).

So long as Claus Corp. makes some minimal changes to their working practices, and applies appropriate security for the data being processed (there is no indication or suggestion that they are not doing so), I see nothing to suggest they are in breach of any article of the General Data Protection Regulation.

Having said that, I do have some concerns about the Tooth Fairy agency, who are clearly processing sensitive medical data and provide no such clear channels for communication as Claus Corp., but that will wait for another article.