I'm just back from AD&D: 2nd Edition. I mean, from the second occurrence of the Active Defense & Deception workshop, co-hosted with EuroS&P (what did you expect, you roleplaying geeks?).
This year the workshop took place in Delft, in the Netherlands. We did the round trip from France in a single day, so no chance this time for sightseeing, or simply for spending quality time with the very interesting and friendly community I see growing. I hope you all had a blast.
Takeaways from AD&D 2
If you missed it, here's a summary of this year's talks, with some personal observations here and there. You'll find some of the presentation slides on the http://adnd.work website.
For the record, my (self-imposed) mission is to make The World adopt deception as a defense mechanism; it is through this prism that I look at research progress in the field.
The first talk was about the ability of ChatGPT (version 3) to phish people. In her study, Megha Sharma had both humans and ChatGPT write phishing emails. Through a questionnaire, these emails were shown to people who had to classify them as genuine or phishing. The TL;DR is that human phishers are still stronger than ChatGPT today. But considering the potential for automation, it looks to me like the future of phishing will be a bleak one.
An interesting observation is that phishing works by activating cognitive biases. Such biases keep working even if you are security-aware, meaning that no one is immune. The positive conclusion I want to keep is the corollary: cognitive biases will thus also always work against cyber-attackers. I'm looking forward to seeing more research on that aspect.
Being a tinkerer, I LOVED what Daniel Reti showed us next: a 'Honey Infiltrator'. This can be a Raspberry Pi with two Ethernet connections (one in, one out) that is physically plugged into a device. The infiltrator can then work as (and technically is) a man-in-the-middle proxy. The idea is to program the device to inject deceptive elements on the fly by manipulating TCP packets. This can be used, for example, to inject files into an FTP server, or web pages into a web server. Of course, this is not trivial to configure and comes with its own problems and limitations, but as a geek tool it is extremely promising.
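To give a flavour of the core trick, here is my own minimal sketch (not Reti's code): an inline proxy rewriting a payload as it transits, in this case appending a decoy entry to an FTP-style directory listing. The decoy name and the formatting rule are made up for illustration.

```python
# Minimal sketch of on-the-fly payload rewriting, in the spirit of the
# 'Honey Infiltrator'. Hypothetical rule: append a decoy file entry to an
# FTP-style directory listing as it passes through an inline device.

def inject_decoy(listing: str, decoy: str = "passwords_backup.txt") -> str:
    """Return the listing with one extra (fake) file entry appended."""
    lines = listing.rstrip("\r\n").split("\r\n")
    # Mimic the shape of a real entry so the decoy blends in.
    lines.append(f"-rw-r--r-- 1 ftp ftp      512 Jan 01 00:00 {decoy}")
    return "\r\n".join(lines) + "\r\n"

original = "-rw-r--r-- 1 ftp ftp     1024 Jan 01 00:00 report.pdf\r\n"
mutated = inject_decoy(original)
print("passwords_backup.txt" in mutated)  # → True
```

In a real interceptor this is only half the work: changing the payload length means TCP sequence numbers and checksums must be fixed up too, which is part of why the setup "is not trivial to configure".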
This led to interesting discussions, continuing right into the coffee break, about what is worth simulating versus planting actual files: the very type of discussion that made me think about where I want to push my own research project (which is basically a reverse proxy for HTTP, built to be as user-friendly as possible to accelerate adoption).
The next talk, from Jacob Quibell, 'attacks' an important problem: the profiling of attackers. The way Quibell addresses it is, in my opinion, simple and elegant: present the attacker with a set of files, each linked to one of six topics, from HR data to financial data. Each topic is mapped to an attacker category: 'HR' to 'insider', 'financial' to 'criminal', and so on.
Then it's a matter of checking which types of files the attacker opens. After a few rounds where different content is presented, a confidence score is established. It's still an early concept, and I'm looking forward to seeing what comes out of it.
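A back-of-the-envelope version of the idea, to make it concrete (my sketch, not Quibell's scoring rule: only the HR/insider and financial/criminal mappings were named in the talk, the other topics and categories below are invented, and the frequency ratio is a stand-in for the real confidence computation):

```python
from collections import Counter

# Hypothetical topic -> attacker-category mapping; only the first two pairs
# come from the talk, the rest are placeholders.
CATEGORY_OF_TOPIC = {
    "hr": "insider",
    "financial": "criminal",
    "intellectual_property": "competitor",
    "infrastructure": "state",
    "customer_data": "fraudster",
    "credentials": "opportunist",
}

def profile(opened_topics: list) -> dict:
    """Turn the topics of the files an attacker opened into a per-category
    confidence score (simple frequency ratio over all observed opens)."""
    counts = Counter(CATEGORY_OF_TOPIC[t] for t in opened_topics)
    total = sum(counts.values())
    return {category: n / total for category, n in counts.items()}

# After a few rounds, the opens concentrated on financial files:
print(profile(["financial", "financial", "hr", "financial"]))
# → {'criminal': 0.75, 'insider': 0.25}
```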
Hello TU Delft, thanks for the nice location and for the excellent food!
After the break, we resumed for our last 4 talks.
Chelsea Johnson talked about the sunk cost fallacy. You know, the bias that makes you insist on completing project A even when it is clear that abandoning it and doing project B instead would be a better course of action.
Johnson had a panel of people solve puzzles on two sets of projects, A and B. She used two angles to trigger and measure the impact of the fallacy. Unsurprisingly, people were more eager to keep working on their current project (rather than switch) when they were closer to completion, or when the benefit of switching was unclear.
This establishes, in my opinion, a nice baseline of how people react in general. As for how this can be applied to cyber-attackers (and extended to other cognitive biases), there is a just-started project sponsored by IARPA. It's called ReSCIND, and its goal can only inspire awe: to systematically assess the triggering and impact of cognitive biases for cyber-defense purposes. First results are expected in one year!
Next, Tyler Malloy talked about transfer learning, in the context of a machine-learning agent becoming better at defending in a (simple) game as it gained experience as an attacker, and vice versa. This reminded me of how AlphaZero learnt to play chess by itself. The main idea is that by training on simpler games, the agent can then be exposed to more complex (security) games. Will Malloy create the best AI hacker AND defender this way?
The next talk was also about agents, this one using the CyberBattleSim playground, which simulates a hacker taking control of systems. Ryan Gabrys and his colleagues extended the playground with deceptive notions such as fake credentials and honeypots. They then compared how the hacking agent reacts in three situations: when the CyberBattleSim game is played as-is, when honeypots are added statically (meaning they are always the same, always in the same place), and when honeypots are dynamically mutated (based on a set of rules) between 'rounds'. The results are... intriguing. In essence, the agent was only slightly annoyed by static deception: it quickly adapted and performed almost as well as without it. But when the honeypots were mutated, the agent completely lost its ability to attack.
I'm not sure what conclusion should be drawn. I doubt humans would react in the same way, but maybe Gabrys has found the perfect way to fight the AI hackers which, I'm sure of it, will soon scour the web for fun and profit (nah, only for profit, really).
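The static-versus-mutated distinction boils down to whether the honeypot placement is drawn once or re-drawn every round. A toy sketch of that difference (my own, with a uniform re-sampling rule standing in for the actual rule set of Gabrys et al.):

```python
import random

def place_honeypots(nodes, k, rng):
    """Pick k of the network's nodes to act as honeypots (toy rule:
    uniform sampling; the real setup uses its own rule set)."""
    return set(rng.sample(nodes, k))

nodes = [f"host-{i}" for i in range(10)]

# Static deception: the placement is drawn once and never changes,
# so a learning agent can simply memorise and route around it.
static = place_honeypots(nodes, 3, random.Random(0))

# Dynamic deception: the placement mutates between rounds, so what the
# agent learned last round may no longer hold.
rng = random.Random(42)
for round_no in range(3):
    dynamic = place_honeypots(nodes, 3, rng)
    print(round_no, sorted(dynamic))
```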
The last talk, from Pei-Yu Huang, was a great way to close the event. After an actual cyber-incident, in which attackers compromised a company's Active Directory and dropped there a job that would install ransomware on all machines, Huang and her colleagues did a postmortem analysis using MITRE ATT&CK. From these TTPs, she used MITRE ENGAGE to devise a new architecture for the company, using the active defense techniques that would make the attack, should it happen again, extremely impractical. This may be the way to show the value of active defense and deception to the world. Let's hope we won't need hundreds more companies getting hacked before they start deploying simple honeytokens and honey accounts on their premises!
Natural 20, Critical Hit!
Aaand that's a wrap for this year. We all know that the best Advanced Dungeons & Dragons edition is version 2.5 (fight me in the comments!), so all that's left to do is convince the organizing committee to hold the next workshop in 6 months instead of 12. How hard could that be?