Security frameworks for misinformation

[Cross-posted from Medium https://medium.com/misinfosec/security-frameworks-for-misinformation-d4b58e4047ec]

Someone over in the AI Village (one of the MLsec communities – check them out) asked about frameworks for testing misinformation attacks.  Whilst the original question was perhaps about how to simulate and test attacks – and more on how we did that later – one thing I’ve thought about a lot over the past year is how misinformation could fit into a ‘standard’ infosec response framework (this comes from a different thought, namely who the heck is going to do all the work of defending against the misinformation waves that are now with us forever).

I digress. I’m using some of my new free time to read up on security frameworks, and I’ve been enjoying Travis Smith’s Leveraging MITRE ATT&CK video. Let’s see how some of that maps to misinfo.

First, the Gartner adaptive security architecture.

[Figure: The Four Stages of an Adaptive Security Architecture]

It’s a cycle (OODA! intelligence cycle! etc!) but the main point of the Gartner article is that security is now continuous rather than incident-based. That matches well with what I’m seeing in the mlsec community (that attackers are automating, and those automations are adaptive, e.g. capable of change in real time) and with what I believe will happen next in misinformation: a shift from crude human-created, templated incidents to machine-generated, adaptive continuous attacks.

The four stages in this cycle seem sound for misinformation attacks,  but we would need to change the objects under consideration (e.g. systems might need to change to communities) and the details of actions under headings like “contain incidents”.  9/10 I’d post-it this one.
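To make that concrete, here’s a minimal sketch of how the four stages could be recorded against misinformation objects, with a community standing in where an infosec playbook would name a system. All the class names, field names and example actions here are mine, invented for illustration:

```python
from dataclasses import dataclass
from enum import Enum


class Stage(Enum):
    """The four stages of the Gartner adaptive security architecture."""
    PREDICT = "predict"
    PREVENT = "prevent"
    DETECT = "detect"
    RESPOND = "respond"


@dataclass
class MisinfoAction:
    """One defensive action, indexed by stage and by the community it protects."""
    stage: Stage
    community: str    # the 'system' under protection, e.g. a local news group
    description: str


# A toy playbook: the same cycle, but the objects are communities, not servers.
playbook = [
    MisinfoAction(Stage.PREDICT, "regional news groups", "baseline normal posting patterns"),
    MisinfoAction(Stage.PREVENT, "regional news groups", "inoculation / media-literacy messaging"),
    MisinfoAction(Stage.DETECT, "regional news groups", "flag coordinated bursts of identical text"),
    MisinfoAction(Stage.RESPOND, "regional news groups", "contain: downrank, label, notify moderators"),
]

for action in playbook:
    print(f"{action.stage.value:8} | {action.community} | {action.description}")
```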

Then the SANS sliding scale of cyber security

[Figure: the SANS Sliding Scale of Cyber Security]

AKA “Yes, you want to hit them, but you’ll get a better return from sorting out your basics”. This feels related to the Gartner cycle in that the left side is very much about prevention, and the right about response and prediction. As with most infosec, a lot of this is about what we’re protecting from what, when, why and how.

With my misinformation hat on, architecture seems to be key here. I know that we have to do the hard work of protecting our base systems: educating and inoculating populations (basically patching the humans), and designing platforms to be harder to infect with bots and/or botlike behaviours. SANS talks about compliance, but for misinformation there’s nothing (yet) to be compliant with. I think we need to fix that. Non-human traffic pushing misinformation is an easy win here: nobody loves an undeclared bot, especially if it’s trying to scalp your grandmother.

For passive defence, we need to keep up with non-human traffic evading our detectors, and we still have work to do on semi-automated misinformation classifiers and propagation detection. Misinformation firewalls are drastic but interesting: I could argue that the Great Firewall (of China) is a misinformation firewall, and perhaps that becomes an architectural decision for some subsystems too.
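As a rough illustration of that passive-defence layer, here’s a hedged sketch of one propagation heuristic: flag an account whose posting rate or text duplication looks more machine than human. The thresholds and data shapes are assumptions for the example, not calibrated values:

```python
from collections import Counter
from datetime import timedelta


def looks_non_human(posts, max_per_hour=30, max_duplicate_share=0.5):
    """Crude heuristic: very high posting rate, or mostly duplicated text.

    `posts` is a list of (timestamp, text) tuples for one account.
    The thresholds are illustrative assumptions, not calibrated values.
    """
    if len(posts) < 5:
        return False  # not enough signal to say anything

    timestamps = sorted(t for t, _ in posts)
    hours = max((timestamps[-1] - timestamps[0]) / timedelta(hours=1), 1e-6)
    posts_per_hour = len(posts) / hours

    texts = [text.strip().lower() for _, text in posts]
    duplicate_share = Counter(texts).most_common(1)[0][1] / len(texts)

    return posts_per_hour > max_per_hour or duplicate_share > max_duplicate_share
```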

Active defence is where the humans come in, working on the edge cases that the classifiers can’t determine (there will most likely always be a human in the system somewhere, on each side), and hunting for subtly crafted attacks. I also like the idea of misinformation canaries (we had some of these from the attacker side, to show which types of unit were being shut down, but they could be useful on the defence side too).
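That split between the classifiers and the humans could look something like this sketch, where low-confidence classifier output goes to an analyst queue instead of being auto-actioned; the confidence bands are illustrative assumptions:

```python
def triage(item, classifier):
    """Route one piece of content based on classifier confidence.

    `classifier` is assumed to return a probability that the item is part
    of a misinformation campaign; the 0.9 / 0.3 bands are illustrative.
    """
    score = classifier(item)
    if score >= 0.9:
        return "auto-flag"           # confident enough to act automatically
    if score <= 0.3:
        return "ignore"              # confident enough that it's benign
    return "human-review-queue"      # the edge cases the humans hunt through
```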

Intelligence is where most misinformation researchers (on the blue team side) have been this past year: reverse-engineering attacks, looking at tactics, techniques etc.  Here’s where we need repositories, sharing and exchanges of information like misinfocon and credco.

And offense is the push back – everything from taking down fake news sites and removing related advertisers to the types of creative manoeuvres exemplified by the Macron election team.

To me, Gartner is tactical, SANS is more strategic.  Which is another random thought I’d been kicking around recently: that if we look at actors and intent, looking at the strategic, tactical and execution levels of misinformation attacks can also give us a useful way to bucket them together.  But I digress: let’s get back to following Travis, who looks next at what needs to be secured.

[Figure: the CIS Controls (V7) matrix]

For infosec, there’s the SANS Top 20, aka CIS controls (above). This interests me because it’s designed at an execution level for systems with boundaries that can be defended, and part of our misinformation problem is that we haven’t really thought hard about what our boundaries are – and if we have thought about them, have trouble deciding where they should be and how they could be implemented. It’s a useful exercise to find misinformation equivalents though, because you can’t defend what you can’t define (“you know it when you see it” isn’t a useful trust&safety guideline).  More soon on this.
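As a first pass at that exercise, the mapping might start something like the sketch below. The left-hand names are a few of the CIS controls; the right-hand misinformation analogues are my own guesses, not an agreed control set:

```python
# Left: a few CIS controls; right: a possible misinformation analogue.
# The analogues are illustrative guesses, not an agreed control set.
control_equivalents = {
    "Inventory and Control of Hardware Assets":
        "inventory of the communities and channels we're defending",
    "Inventory and Control of Software Assets":
        "inventory of official accounts, pages and amplifiers",
    "Continuous Vulnerability Management":
        "ongoing review of which audiences and topics are most exposed",
    "Controlled Use of Administrative Privileges":
        "control over who can post, promote or moderate at scale",
    "Maintenance, Monitoring and Analysis of Audit Logs":
        "retention and review of moderation and takedown logs",
}

for control, equivalent in control_equivalents.items():
    print(f"{control}\n  -> {equivalent}")
```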

Compliance frameworks aren’t likely to help us here: whilst HIPAA is useful to infosec, it’s not really forcing anyone to tell the truth online. Although I like the idea of writing hardening guides for adtech exchanges, search, cloud providers and other (unwitting) enablers of “fake news” sites and large-scale misinformation attacks, this isn’t a hardware problem (unless someone gets really creative repurposing unused bitcoin miners).

So there are still a couple of frameworks to go (and that’s not including the 3-dimensional framework I saw and liked at Hack Manhattan), which I’ll skim through for completeness, more to see if they spark any “ah, we would have missed that” thoughts.

[Figure: the Lockheed-Martin cyber kill chain]

Lockheed-Martin cyber kill chain: yeah, this basically just says “you’ve got a problem”. Nothing in here for us; moving on.

[Figure: the MITRE ATT&CK matrix]

And finally, MITRE’s ATT&CK framework, or Adversarial Tactics, Techniques & Common Knowledge. To quote Travis, it’s a list of the 11 most common tactics used against systems, and hundreds of techniques used at each phase of their kill chains. Misinformation science is moving quickly, but not that quickly, and I’ve watched the same types of attacks (indeed the same text, even) be reused against different communities at different times for different ends. We have lists from watching since 2010, and pre-internet related work from before that: either extending ATT&CK (I started on that a while ago, but had issues making it fit) or a separate repository of attack types is starting to make sense.

A final note on how we characterise attacks. I think the infosec lists for characterising attacks make sense for misinformation: persistence, privilege escalation, defence evasion, credential access, discovery, lateral movement, execution, collection, exfiltration, and command and control are all just as valid if we’re talking about communities instead of hardware and systems. Travis’s notes on characterising response make sense for us too: we also need to gather information each time on what is affected, what to gather, how the technique was used by an adversary, how to prevent a technique from being exploited, and how to detect the attack.
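Pulling those two lists together, a shared repository entry for a misinformation technique could carry both the attack-side tactic and the response-side fields. The structure below is a sketch of what such a record might hold; the example values are invented for illustration:

```python
from dataclasses import dataclass

# Tactic categories borrowed directly from the ATT&CK-style list above.
TACTICS = [
    "persistence", "privilege escalation", "defence evasion", "credential access",
    "discovery", "lateral movement", "execution", "collection",
    "exfiltration", "command and control",
]


@dataclass
class TechniqueRecord:
    """One entry in a shared repository of misinformation attack techniques."""
    name: str
    tactic: str            # one of TACTICS
    what_is_affected: str  # communities, platforms, narratives
    what_to_gather: str    # evidence to collect when the technique is seen
    how_it_was_used: str   # observed adversary usage
    how_to_prevent: str
    how_to_detect: str


# An invented example, purely for illustration.
example = TechniqueRecord(
    name="recycled template text across communities",
    tactic="lateral movement",
    what_is_affected="adjacent communities that share members",
    what_to_gather="account lists, timestamps, shared text fingerprints",
    how_it_was_used="the same copy reposted into new groups after a takedown",
    how_to_prevent="cross-community duplicate-content checks",
    how_to_detect="near-duplicate text detection across group boundaries",
)
```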

We’re not at war — we’re just doing infosec

[Cross-posted from Medium https://medium.com/misinfosec/were-not-at-war-we-re-just-doing-infosec-fff1d25fbcd1]

I read three connected things today: the digital Maginot line (about how we’ve been in a digital “warm war” for the past few years), why influence matters in the spread of misinformation, and are ads really that bad?

The first one tells a story that I’ve told myself over the past few years — that the age-old drive to gain territory and power has moved from turning up with weapons and fighting (‘kinetic warfare’) to get other countries to hand them over, to using social media channels to persuade those countries’ people to either welcome the new overlords, or persuade their own politicians to not interfere when their neighbouring countries get overrun.

I’ve sounded warnings because I believe I’ve lived in a uniquely fragile country in uniquely fragile conditions at a uniquely fragile time. I don’t however believe that that’s always going to be the case, and I think the answers to that lie in both past and recent history, and across articles like these three. Bear with me because I’m still gathering my own thoughts (into, amongst other things, a practical book) and hadn’t meant to write so much background yet.

Every country in the world has been trying to influence every other country that it interacts with since before there have been countries: it’s geopolitics. Each country also tries to influence populations: its own through politics, others through propaganda and other more benign forms of outreach and image manipulation.

What social media buys an aggressor country is reach and scale. The same thing that lets me chat with friends in Kenya, or lets you see that not everyone in Peru is living on a farm with 10 llamas, also allows anyone anywhere to insert themselves into an influence network in any language on any topic anywhere else on the planet. And there’s the power, right there.

One way to stop playing a game is to step outside of the game. Widespread user-generated content is relatively new: so what’s to stop defending countries from putting up their own digital firewalls, China-style, and collapsing the troublesome parts of the internet on themselves? Perhaps it’s that the entities that are massively powerful have changed, and countries are small in power now, compared to large global companies. And that gets me to a small glimmer of hope. Because those companies’ power is rooted in ad money. And ads are shown to humans. And despite all our globalisation, all our progress, all our ability to buy our Christmas decorations from Norway, I’ve seen the data, and people are still very geographically rooted in their ad clicks. It will be interesting to see what countries do.

It’s also interesting to see what people do. Because without users, a social media company becomes — is Myspace still around? And without people who can be manipulated, an influence campaign is just more conspiracy-laden shouting (how *is* qanon doing these days btw?). I said when I started that I’ve lived in a uniquely fragile country in uniquely fragile conditions at a uniquely fragile time. I think we’re still there, but I’m starting to see signs of resilience that give me hope.

Yes, we have work to do, but FWIW I don’t think we’re at war. I do think we need a new layer of information security and, amongst others, I’m working on tactics, techniques and procedures for that. We’re perhaps at the Cuckoo’s Egg part of the journey, and this is forever work, not a quick set of fixes, but it’s worth it.

Not just America

I’ve been reading lately – my current book is Benkler et al’s Network Propaganda, and in its Origins chapter, I was reading about 19th century vs 20th century American political styles and how they apply today, and caught myself thinking “am I studying America too much? Misinformation is hitting most countries around the world now – am I biased because I’m here?”.

I think that’s a valid question to ask. Despite there being good work in many places around the world (looking at you, Lithuania and France!), much has been made of the 2016 US elections and beyond; many of the studies I’ve seen, and much of the work I’ve been involved in, have US bias, US funding or US teams. And are we making our responses vulnerable because of that?

I think not. And I think not because we’ve all been touched by American culture. When we talk about globalisation, we’re usually talking about American companies, American movies and styles and brands, and much Internet culture has followed that too (which country doesn’t bother with a country suffix?). And just as we see Snickers and Disney all over the world, so too have our cultures been changed by things like game shows.