‘Insider’ Threat with Deepfakes

I have been neglecting this blog recently as finishing off my master’s thesis (now done, passed and everything), working on various tasks in a new role, and finishing off a contributed book chapter have been priorities. With two of those three taken care of I am hoping to have time to write a bit more on here again.

In that spirit, and as a nice piece of synergy with the book chapter, a quick word about an incident reported today. I predict it will be the first of many.

Impersonation of senior executives via e-mail has become a well-established, and profitable, practice among both organised and disorganised cyber criminals. Some of the most advanced attempts go to extreme lengths of surveillance and co-ordination. Even the most basic mass attempts have seen some success. Whether to include this as insider threat is more a philosophical debate than one where we can draw a clear line, but I choose to include it.

With the advent of Deepfake algorithms it was inevitable that impersonation would go beyond email. Deepfake pornography created for blackmail or revenge purposes appeared shortly after the technology became available. Using it for impersonation has mainly been for entertainment purposes, until recently. As far as I know, this is the first published case outside of fiction of an attacker going this far in impersonation. That they did not think to spoof the source phone number, leading to early detection, is surprising.

There are possible technical solutions to this. But since we have yet to effectively solve the far simpler problem of impersonation by changing only the display name on an e-mail, I suspect they are a long way from widespread adoption.
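To illustrate how weak a signal the display name is, here is a minimal sketch using Python’s standard library. The names, addresses, and domains are invented for illustration:

```python
from email.utils import parseaddr

# A spoofed From header: the display name claims to be the CEO,
# but the actual address is an attacker-controlled mailbox.
# (All names and domains here are made up.)
header = '"Jane Smith, CEO" <ceo.janesmith@attacker-mail.example>'

display_name, address = parseaddr(header)
print(display_name)  # Jane Smith, CEO
print(address)       # ceo.janesmith@attacker-mail.example

# Any check that trusts the display name alone is trivially fooled;
# at minimum the domain of the real address needs to be verified.
domain = address.rsplit("@", 1)[-1]
print(domain == "ourcompany.example")  # False
```

Most mail clients show only the display name by default, which is exactly why this class of impersonation keeps working.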

Security Myths: Trading Functionality for Security

This entry is part 1 of 1 in the series Security Myths

One of the more common myths in security is that it will always compete with the functionality of a system. There is an element of truth in the idea, but it’s often touted as a reason for not putting security in place, and that is something we need to move past as an industry.

The problem is that it is true, to a degree. Once a system is built, layering security protections on top will always have some impact on functionality and usability, whether by requiring additional authentication from users, restricting connectivity, or simply degrading performance. And there lies the problem – it’s true when layering security on top of an existing system.

If a system is designed with security considerations from the ground up, placed on an equal footing at the beginning of requirements gathering as other functional and non-functional requirements, then there is no need to have a significant impact on the functionality of a system. In fact, given that one of the best ways to apply security at design is to simplify and automate, designing for both security and function will usually result in a more secure, more robust, and more functional system.

Yes, it does involve more up-front work and some additional expense, but the effort saved by not having to bolt web application firewalls, incompatible additional authentication mechanisms, external monitoring systems, and many other point solutions onto the front of a system quickly pays back that initial investment.

So, more accurately: security will always be a trade-off with functionality when it is applied too late to a system. When it is incorporated into requirements and built in as a core part of the design, this idea no longer needs to apply.

Going After the Big Phish

A while ago I did a presentation for one of the CTG events on Executive Security and Close Protection Technology. It was good to be able to go in with a slightly different take on things, as I was approaching it as a basic awareness briefing: how people carry out whaling and phishing attacks targeting senior executives, and what awareness an exec security team might want or need in order to protect their principal from this sort of decidedly non-physical attack.

For various reasons I hadn’t got around to making this one available until recently, after having heard James Linton, the man who phished the White House as a prank, speak. It prodded my memory and so, in case it’s useful to anyone, here’s the presentation.

GDPR and Consent

Despite what the non-stop wave of e-mails, text messages, and notices popping up everywhere would have you believe, GDPR is not about consent.

In fact, as a rule of thumb, what the legislation suggests is that consent should only be relied on to collect, store, or process personal data if you do not have any other legitimate reason to do so.

A quick look at the lawful bases provided for under GDPR, and the rights that each provides, shows this quite well.

  • Consent: the subject has given clear consent to process their data for a specific purpose
  • Contract: processing is necessary to fulfil a contract with the subject, or to prepare a contract that they have requested
  • Legal obligation: processing is necessary to comply with the law (exception being contractual obligations)
  • Vital interests: processing is necessary to protect someone’s life (not necessarily the subject)
  • Public task: processing is necessary to perform a task in the public interest, or for your official functions
  • Legitimate interests: processing is necessary for legitimate interests of yourself or a third party (this does not apply to public authorities carrying out official tasks)

The only reason all of these consent notices are popping up everywhere is that none of the other bases apply. Mostly it’s about getting consent to profile you in order to better target advertising. So, when ticking those consent boxes, it’s worth remembering that it isn’t down to GDPR that you’re now seeing them everywhere – it’s down to companies being desperate to use your personal information to more effectively persuade you to buy things or think a certain way, without having any reason in your interest for doing so.

Santa and the GDPR

There’s a meme that’s been doing the rounds for a while, and with Christmas approaching has become especially popular.

Firstly, Article 4 is made up of definitions (https://gdpr-info.eu/art-4-gdpr/), so it is difficult to be in breach of, but let’s ignore that and run with the joke.

Lawful Basis

The first thing to do is to establish the lawful basis under which the processing is taking place. I’d argue that contract, rather than legitimate interests, applies here, since there is a standing agreement between parties (who believe in Santa) and Claus Corp. that belief and good behaviour will bring gifts (bad behaviour, obviously, bringing coal or similar). While the parties in question are, in the vast majority, under the age of capacity and cannot enter contracts themselves, in this instance their guardians can give consent to enter into and perform the gift/behaviour exchange contract on their behalf (https://gdpr-info.eu/art-8-gdpr/).

Now this does raise the question of children or other parties who do not believe. In this instance there is no mechanism for Claus Corp. to collect or process any data about these parties and so again I see no problems here. Data is only collected on those who believe in Santa, and therefore also agree to the gift/behaviour contract (or rather will have guardians willing to do so on their behalf).

There is an argument that legitimate interests could apply as a lawful basis, but the additional responsibilities, right to portability, and other requirements make it less suitable in this instance.

Right to Object

Under the contract lawful basis data subjects do have the right to object to the processing of their data, meaning the processing must cease unless the controller can demonstrate legitimate grounds for the processing that outweigh the interests of the data subject. Of course, objecting would raise questions about any gifts later being received, meaning that if an objection is received processing would cease in any case. It is important that in any correspondence with the data subject, or their guardian, Claus Corp. notifies them of their right to object and the mechanism for doing so, but this is a relatively simple requirement.

Notification Obligation and Restriction of Processing

Claus Corp. does have a responsibility to communicate any rectification or erasure of personal data, or restriction of processing (https://gdpr-info.eu/art-19-gdpr/). Of course, the contract basis does not provide an automatic right to erasure, we have already established that the processing is lawful, and Claus Corp. famously keeps their data worryingly up to date. The only instance that could apply is one in which the controller, Claus Corp., no longer needs the data for the purposes of processing but the subject wishes the data to be retained for legal purposes. Given the unlikelihood of any legal case depending on Claus Corp. data, and their extraterritoriality with regard to any treaties allowing legal authorities access to their databases, this is not a scenario that needs to be examined in depth.

Automated decision-making and profiling

This is an area that could raise issues for Claus Corp. While the details of their decision-making process are obviously proprietary, it is safe to assume at least some level of automation is involved due to the sheer volume of data gathered (https://gdpr-info.eu/art-22-gdpr/). However, in this instance not only are no legal effects produced, but the processing is necessary to enter into and perform the gift/behaviour exchange contract. Whether the data subject (or their guardian(s)) gives explicit consent is a more challenging question, though by believing, and communicating that belief to their guardians, they are making a clear and active decision to enter into the contract. I suspect this would be seen as explicit consent, but without test cases there is still some uncertainty here.

As such the best decision for Claus Corp. would be to provide a means for appeal to obtain human intervention (elf intervention is not counted as sufficient under the GDPR, and indeed elf processing of data in the first place may be considered automated decision making rather than manual, another instance where a test case would help to establish precedent).

So long as Claus Corp. makes some minimal changes to their working practices, and applies appropriate security for the data being processed (there is no indication or suggestion that they are not doing so), I see nothing to suggest they are in breach of any article of the General Data Protection Regulation.

Having said that, I do have some concerns about the Tooth Fairy agency, who are clearly processing sensitive medical data and provide no channels for communication as clear as those of Claus Corp., but that will wait for another article.

Initiative Q and Social Proof

Initiative Q is tomorrow’s payment network, an innovative combination of cryptocurrency and payment processor which will take over the world and make all of its early investors rich beyond their wildest dreams. Even better, those early investors don’t need to invest any of their money, just give an e-mail address and register. Then they can even invite friends and family (from a limited pool of invites) to take part too.

Since they’re not asking for anything (there have been suggestions it’s an e-mail harvesting scheme, but there are much easier, cheaper, and quicker ways to simply grab e-mails and social data), there’s nothing for those initial investors to lose, so why not get involved?

Well, that’s the pitch anyway. I’m not going to comment on whether or not I think Initiative Q is malicious (I don’t) or a scam (that’s a longer discussion), just on what that initial investment involves and what social engineering they’re using here to expand their initial user base. It’s also important to note that the initial investors are definitely putting a stake into the game when they sign up, especially when they start inviting friends and family, and this is something that people should be aware of before jumping in.

I can’t imagine anyone not seeing the cynical manipulation of the front page, with its ticking-down estimate of the future value of the next sign-up spot, so I won’t dwell on it other than to say it’s a rather clever way to apply a time limit to decision-making (and it does at least make clear that only 10% of the future-value tokens are available on registration, the rest depending partly on inviting five friends and on unspecified future tasks). I’ll be honest, this type of manipulation does tend to set my alarm bells ringing.

A false sense of urgency and/or scarcity (and this has both – urgency with the value ticking down, scarcity on the small number of invites per person) is one of the classic pillars of social engineering. It tends to help convince by short-circuiting the decision-making process, since there’s no time to consider carefully and instead you must buy NOW!

The more insidious part, to me, is the idea that signing up commits nothing, and is therefore safe. I blame social media for this one. What’s being invested isn’t financial worth but individuals’ social capital, as they are used to convince friends and family. This is social proof in action – large numbers of people, and smaller numbers of close friends and family, are again a way to let our lazy brains off the hook when examining a proposition, so that we act on autopilot and sign up.

Worse, once someone sees themselves as an investor with a substantial future stake, they are much more likely to aggressively defend the system if anyone casts doubt on it (there have been a few examples of this recently), and to refuse to accept any evidence that it might not be everything that was promised. By encouraging people to invite five friends the scheme is also exploiting consistency – no one wants to go back on what they said to five of their friends (or strangers, in a lot of cases) in public, and so any actual evidence against the system will be ignored or discredited.

So the stakes that need to be considered aren’t financial. By signing up you are committing your individual integrity and social capital to a system which has explicitly stated that all it’s after at this point is growth. It’s fair to say there is nothing particularly innovative about the Initiative Q technology, despite their claims, and this particular model has been tried before with less polished marketing. Before signing up, please think carefully, and remember that they aren’t promising anything (the initiative is very careful to keep its language conditional) while you’re putting a piece of your own identity into the scheme, committing to defend it against sceptics, to carry out unspecified future tasks, and to drag your friends in using your own credibility.

Why Work in Cyber Security? (Again)

Last year I gave a couple of presentations to some college students about why they really, really should look to work in cyber security, and how to get into it. That time’s rolled around again, and I realised I’d lost the original source document for the slides, so had to do a quick update from scratch. This is the result.

Vulnerability Scanning and Pen Testing

Recently I came across an article which nicely put forward a point I’ve been arguing for a while, often without much success (Mahmood, 2018). Unlike Babar, I’m not amazed that there’s confusion between vulnerability scanning and pen testing. The aim of both is to find vulnerabilities, and if the report is being read by someone who is not a security specialist and doesn’t know what they’re looking for, then a vulnerability scan looks awfully similar.

The difference is vital. A good penetration test will often start with a vulnerability scan; a bad one stops there (aside from replacing the logo on the output). I’ve seen plenty of reports from budget pen tests which I could have spun out of Nessus or Qualys myself. This isn’t always the fault of the pen test company (although a good one should always advise customers of the requirements for an effective engagement) – if you’re given only a couple of days’ engagement for a black box test (one where you have minimal information about the target) then you aren’t going to get much more than a vulnerability scan out of it.

The problem comes when those short engagements, with pen testers not given proper information, are counted as pen tests for assurance purposes. They aren’t – they will not discover anything an automated scan doesn’t, and so the company is simply wasting money on security theatre. Either don’t pretend that a pen test has occurred, or engage a company fully and effectively to assess a system over a longer period, with all the information and documentation available internally. This is one of those cases where a budget solution is worse than none at all.

This isn’t intended to diminish vulnerability scanning – it’s a vital activity that everyone should carry out on a regular basis (in my perfect world near-realtime; in reality, often monthly) to find and deal with vulnerabilities. Those results can also simply be handed over to a penetration testing company to save you a little money. Pen testers are not the enemy; they are there to help you find holes, and giving them additional information means that time isn’t wasted looking for non-existent holes and everyone gets better intelligence out at the end.
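For a sense of what the automated side of this looks like, here is a minimal sketch of the matching step at the heart of any vulnerability scanner: take a fingerprinted service and version, and compare it against a database of known-vulnerable version ranges. The service names, version ranges, and advisory identifiers below are all invented for illustration; real scanners such as Nessus or Qualys match against large, continuously maintained vulnerability feeds.

```python
# Minimal sketch of a scanner's version-matching step.
# The "database" below is invented for illustration only.

def parse_version(v: str) -> tuple:
    """Turn a dotted version string like '2.4.49' into (2, 4, 49)."""
    return tuple(int(part) for part in v.split("."))

# (service, first vulnerable version, first fixed version, advisory id)
VULN_DB = [
    ("exampled", "2.4.0", "2.4.51", "EXAMPLE-2021-0001"),
    ("demosshd", "8.0", "8.4", "EXAMPLE-2020-0042"),
]

def check_banner(service: str, version: str) -> list:
    """Return advisory ids matching this service/version fingerprint."""
    v = parse_version(version)
    hits = []
    for name, first_bad, first_fixed, advisory in VULN_DB:
        if name == service and parse_version(first_bad) <= v < parse_version(first_fixed):
            hits.append(advisory)
    return hits

print(check_banner("exampled", "2.4.49"))  # ['EXAMPLE-2021-0001']
print(check_banner("exampled", "2.4.51"))  # []
```

This is all a scan can ever tell you: that a version string matches a known-bad range. What it cannot do is chain findings together, abuse business logic, or judge what an attacker could actually reach, which is exactly the gap a real pen test is meant to fill.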

Mahmood, B. (2018) Vulnerability Scanning vs. Penetration Testing: What’s the Difference?, The State of Security. Available at: https://www.tripwire.com/state-of-security/vulnerability-management/difference-vulnerability-scanning-penetration-testing/ (Accessed: 8 October 2018).

Thinking Inside the Box – Insider Threat Modelling

Earlier this week I was presenting at Forum Events’ Security IT Summit on insider threat modelling (get the presentation title joke? I was rather proud of it), which seemed to go well. A few people asked for the slides, and I’m always happy to share something that’s found useful; I just wanted to add some context as well.

Presenting at the IT Security Summit

For those who are not aware, Threat Modelling is simply an approach to making sure a system is secure, as part of the design phase (and ideally through operational life and decommissioning), by anticipating possible threats and attacks. There’s a variety of methodologies, of varying complexity and suitable for different purposes, but they are all heavily focused on external attackers (this is not a bad thing – insider and external are different classes of threat, and a model attempting to address both would likely be spread thin). In the presentation I wanted to suggest the misty outlines of a way to start approaching the same modelling against insider threats, and to make the problem we’re trying to solve clear. Given the level of engagement and discussion that followed, and some of the feedback I received, I think it went well. On a related note, I know how I’m going to be using my dissertation, given the opportunity.

As always, commentary, feedback, or contacts more than welcome. Without further ado, onto the slides.

Thinking-Inside-the-Box