Practical influence operations


[Cross-posted from Medium https://medium.com/misinfosec/practical-influence-operations-760b03f8a493]

Sofwerx is the US Special Operations Command’s open-source innovation unit – it’s where scarily fit (but nice) military people, contractors, academics and people with green hair, piercings, dodgy pasts and fiercely bright creative minds come together to help solve wicked problems that need extremely unconventional but still disciplined thinking (don’t ever make the mistake of thinking hackers are just chaotic; given a good problem, they are very, very focussed and determined). This week, Sofwerx gathered some experts together to tell other parts of the US government about Weaponized Information.

I was one of the experts. I’m low on purple hair and piercings, but I do have about 30 years’ practice in the defense innovation community, am lucky enough to speak at Black Hat and Defcon about things like how algorithms control humans and the ethics of adversarial machine learning, and I really get interested in things. Like how autonomous vehicles and people team together. Or how agencies like FEMA and the UN use data in disasters. Or, for the past couple of years, how to counter attacks on human belief systems. Here are some notes from my talk whilst I wait for the amazing media team to make me look elegant in the video (please please!).

I spoke at the end of the day, so my talk was really one half of a pair, with David Perlman and myself bookending the day. After a military introduction (waves to Pablo), David opened by describing the landscape of misinformation and other large-scale, internet-enabled influence operations. The expert talks that followed built out from that: lessons we can learn from earlier operations against jihadis (Scot Terban), and deep dives into specific technologies of interest (Matthew Sorrell and Irene Amerini on countering deepfakes and other multimedia forensics). I then pulled those threads back together with a talk setting out a framework from which we (and by “we” I meant the people in front of us in the room, plus a discipline created from the skills of a very specific set of experts) could start to respond to the problems, before passing it back to the military in the form of Keith Dear from the RAF.

So. Lots of people talk about the problem of misinformation, election hacking, influence operations, Russia, the Internet Research Agency blah blah blah. Very few of them talk about potential solutions, including difficult or unconventional solutions. The analogy I used at the start was from my days playing rugby union for my university. Sometimes we would play another side, and the two sides would be completely matched in power and ball skill and speed, but one of them just didn’t understand the tactics of the game. The other side would score again and again against them, to the point of embarrassment, because they knew the field and had gameplay and their opponents didn’t. And that is what recent history has felt like to me. If you’re going to talk about solutions, and about handling misinformation as a “someone has to do this, and someone is going to have to do this forever, because this is just like spam and isn’t going to go away” thing, you’re going to need processes, you’re going to need your own gameplay, and you’re going to need to understand the other side’s gameplay so you can get inside it and disrupt it. Play the game. Don’t just stand on the field.

Offense

So first, offense. Usually I talk a lot about this, because I’ve spent a lot of time in the past two years raising the alarm with different communities, and asking them to frame the problem in terms of actors with intents, targets, artefacts and potential vulnerabilities. This time I skimmed over it: I mentioned the Internet Research Agency in Russia as the obvious biggest player in this game, but noted that despite their size they were playing a relatively unsubtle, unsophisticated game, and that more interesting to me were the subtler tests and attacks that might also be happening whilst we were watching them. I defined misinformation as deliberately false information created with an objective, often money or geopolitical gain, ranging from the strange (“Putin with aliens!”) to the individually dangerous (“Muslim rape gangs in Toronto”). I also pushed back on the idea that influence operations aren’t the same as social engineering: to me, influence operations are social engineering at scale, and if we use the SE definition of “psychological manipulation of people into performing actions or divulging confidential information”, we are still talking about action, but those actions are often either in aggregate or at a remove from the original target (e.g. a population is targeted to force a politician to take action), with the scale sometimes being millions of people (Russian-owned Facebook groups named in the Congressional investigation into the 2016 election had shares and interactions in the tens of millions, although we do have to allow for botnet activity there).

Scale is important when we talk about impacts. These range from the individual (people caught up in opposing-group demonstrations deliberately created at the same place and time), to communities (disaster responses and resources diverted around imaginary roadblocks, e.g. fake “bridge out” messaging), to the nation-state (the “meme war” organizing pages we saw with QAnon and related groups’ branding for the US, Canada and other nations in the past year).

Targeting is scaled too: every speaker mentioned human cognitive biases. I have my favorite biases, like familiarity backfire (if you repeat a message with a negative in it, humans remember the message but not the negative), but there are hundreds of other biases that can be used as a human attack surface online (the cognitive bias codex lists about 180 of them). There’s sideways scale as well: many efforts focus on single platforms, but misinformation is now everywhere there’s user-generated content – social media sites like Facebook, Twitter, Reddit and Eventbrite, but also comment streams, payment sites, event sites, anywhere you can leave a message, comment, image, video, or other content that another human can sense. Influence operations aren’t new, but social media buys reach and scale: you can buy 1,000 easy-to-find bots for a few dollars, or 100 very hard to detect Twitter or Facebook ‘aged users’ for $150, less if you know where to look. There are plenty of botnet setup guides online; a large cheap set can do a lot of damage very quickly, and you can play a longer, more subtle online game by adding a little pattern matching or AI to a smaller aged set.

Actors and motivations pretty much divide into: state and non-state actors doing this for geopolitical gain (creating discord or swaying opinion on a specific topic), entrepreneurs doing it for money (usually driving people to view their websites and making money from the advertising on them), grassroots groups doing it for fun (e.g. to create chaos as a new form of vandalism), and private influencers doing it for attention (the sharks on the subways) or, sometimes, money. This isn’t always a clean-cut landscape: American individual influencers have been known to create content that is cut-and-pasted onto entrepreneurs’ websites (most, but increasingly not all, entrepreneurs don’t have English as their first language, and the US is a large market); that messaging is often also useful to the state actors (especially if their goal is in-country division) and attractive to grassroots groups. This is a huge snurfball, and people like Ben Nimmo do great work unravelling some of the linkages in it.

Defence

One of the most insightful comments I got at a talk was “isn’t this just like spam? Won’t they just make it go away the same way?”. I didn’t appreciate it at the time, and my first thought was “but we’re the ‘they’, dammit”, but IMHO there are some good correlates here, and that one question got me thinking about whether we could treat misinformation the same way we treat other unwanted internet content like spam and DDoS attacks.

I’ve looked at a lot of disciplines, architectures and frameworks (“lean/agile misinformation”, anyone?) and the ones that look closest to what we need come from information security. One of these is the Gartner cycle: deceptively simple with its prevent-detect-respond-predict loop. The good news is that we can take these existing frameworks and fit our problem to them, to see if there are areas that we’ve missed or need to strengthen in some way. The other good news is that this approach works well. The bad news is that if you fit existing misinformation defense work to the Gartner cycle, we’ve got quite a lot of detect work going on, a small amount of prevent, almost no effective respond, and nothing of note on predict except some special exceptions (chapeau again to Macron’s election team for the wonderful con job you pulled on your attackers).
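
To make that gap analysis concrete, here’s a minimal sketch (in Python, with an illustrative and far-from-complete list of activities invented for the example) of fitting defensive work onto the four Gartner phases and counting what lands where.

```python
# Minimal sketch: map misinformation-defence activities onto the Gartner
# prevent-detect-respond-predict cycle to make the coverage gaps visible.
# The activity list below is illustrative only, not a survey of the field.
from collections import defaultdict

PHASES = ["prevent", "detect", "respond", "predict"]

activities = [
    ("media literacy / inoculation campaigns", "prevent"),
    ("botnet and troll-account tracking", "detect"),
    ("fake-story listings (Snopes, FEMA rumour pages)", "detect"),
    ("platform removal of non-human traffic", "respond"),
    ("Macron-style pre-seeded decoy documents", "predict"),
]

# Group activities by phase, then print how thin each phase looks.
coverage = defaultdict(list)
for activity, phase in activities:
    coverage[phase].append(activity)

for phase in PHASES:
    entries = coverage[phase]
    print(f"{phase:>8}: {len(entries)} activities")
    for entry in entries:
        print(f"          - {entry}")
```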

Looking at “detect”: one big weakness of an influence operation is that the influencer has to be visible in some way (although the smart ones find ways to pop up and remove messages quickly, and target groups small enough to be difficult to detect), so they leave “artefacts”, traces of their activity. There are groups and sites dedicated to detecting and tracking online botnets, which are a useful place to look up any ‘user’ behaving suspiciously. The artefacts they use tend to split into content and context artefacts. Content artefacts are things within a message or profile: known hashtags (e.g. #qanon), text that correlates with known bots, image artefacts in deepfake videos, known fake news URLs, known fake stories. Stories are interesting because sites like Snopes already exist to track at the story level, and groups like Buzzfeed and FEMA have started listing known fake stories during special events like natural disasters. But determining whether something is misinformation from content alone can be difficult, which is why the Credibility Coalition and W3C credibility standards I’ve been helping with also include context-based artefacts: whether users are connected to known botnets, trolls or previous rumors (akin to the intelligence practice of rating both the content and the carrier), their follower and retweet/like patterns, and metadata like advertising tags and DNS. One promising avenue, as always, is to follow the money, in this case advertising dollars; this is promising both in tracking misinformation and in its potential to disrupt it.
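
As a very rough illustration of how content and context artefacts might combine, here’s a minimal sketch; the signal names, curated lists and weights are all placeholders I’ve made up, not any real detector.

```python
# Minimal sketch of artefact-based detection: content artefacts (what is in
# the message) and context artefacts (who posted it and how it spreads)
# combined into a rough suspicion score. All lists and weights are invented.

KNOWN_BAD_HASHTAGS = {"#examplebadtag"}          # assumption: curated list
KNOWN_FAKE_URLS = {"fake-news-example.invalid"}  # assumption: curated list
KNOWN_BOTNET_ACCOUNTS = {"bot_account_123"}      # assumption: shared botnet feed

def score_message(message: dict) -> float:
    """Return a rough 0..1 suspicion score for one message.

    `message` is a hypothetical dict with keys: text, urls, author,
    author_age_days, followers, retweets.
    """
    score = 0.0

    # Content artefacts
    if any(tag in message["text"].lower() for tag in KNOWN_BAD_HASHTAGS):
        score += 0.3
    if any(url in KNOWN_FAKE_URLS for url in message["urls"]):
        score += 0.4

    # Context artefacts
    if message["author"] in KNOWN_BOTNET_ACCOUNTS:
        score += 0.4
    if message["author_age_days"] < 30 and message["followers"] < 10:
        score += 0.2  # young, near-empty accounts are weak evidence on their own
    if message["followers"] > 0 and message["retweets"] / message["followers"] > 5:
        score += 0.1  # amplification far out of proportion to audience

    return min(score, 1.0)

example = {
    "text": "Breaking!!! #examplebadtag",
    "urls": ["fake-news-example.invalid"],
    "author": "bot_account_123",
    "author_age_days": 3,
    "followers": 2,
    "retweets": 40,
}
print(score_message(example))  # several signals fire, so the score caps at 1.0
```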

There are different levels of “respond”, ranging from individual actions to communities, platforms and nation-states. Individuals can report user behaviors to social media platforms; this has been problematic so far, for reasons discussed in earlier talks (basically platform hesitation about accidentally removing genuine user accounts). Individuals can also report brands advertising on “fake news” sites to advertisers through pressure groups like Sleeping Giants, who have been effective in communicating the risk this poses to the brands. Individuals also have tools they can use to collaboratively block specific account types (e.g. new accounts, accounts with few followers); all of these individual behaviors could be scaled, as sketched below. Platforms have options too: they do remove non-human traffic (the polite term for “botnets and other creepy online things”) and make trolls less visible to other users, and ad exchanges remove non-human traffic (because of a related problem, click fraud: bots don’t buy from brands) and problematic pages from their listings.
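
Here’s a minimal sketch of that kind of collaborative, rule-based blocking, assuming a hypothetical account record and a shared blocklist subscribed to from a trusted community source; the thresholds are placeholders.

```python
# Minimal sketch of collaborative account blocking: a shared blocklist plus
# simple block-by-account-type rules (very new accounts, very few followers).
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    age_days: int
    followers: int

# Assumption: a blocklist shared by a trusted community source
SHARED_BLOCKLIST = {"known_troll_account"}

def should_block(account: Account,
                 min_age_days: int = 7,
                 min_followers: int = 5) -> bool:
    """Rule-based filter an individual might apply and share with others."""
    if account.handle in SHARED_BLOCKLIST:
        return True
    # Block-by-account-type rule: very new accounts with almost no followers
    return account.age_days < min_age_days and account.followers < min_followers

# Example: a day-old account with two followers would be filtered out
print(should_block(Account("suspicious_new_user", age_days=1, followers=2)))
```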

Some communities actively respond. One of my favorites is the Lithuanian ‘Elves’: an anonymous online group who fight Russian misinformation online, apparently successfully, with a combination of humor and facts. That combination has also been promising in small-scale trials in Panama and the US during disasters (full disclosure: I ran one of those tests). One of the geopolitical aims of influence operations mentioned by several other speakers is to widen political divides in a country. A community that’s been very active in countering that is the peace technology community, and specifically the Commons Project, which used techniques developed across divides including Israel-Palestine and Cyprus, with a combination of bots and humans, to rebuild human connections across damaged political divides.

On a smaller scale, things that have been tried in the past few years include parody-based counter-campaigns, SEO hacks to place disambiguation sites above misinformation sites in search results, overwhelming (“dogpiling onto”) misinformation hashtags with unrelated content, diverting misinformation hashtag followers with spoof messages, misspelt addresses and user names (‘typosquatting’), and identifying and engaging with affected individuals. I remain awed by my co-conspirator Tim, who is a master at this.
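
For a flavour of the typosquatting tactic, here’s a minimal sketch that generates misspelt variants of a handle or hashtag (single-character deletions, transpositions and doublings); it’s my own illustration, not anyone’s actual tooling, and real counter-campaigns curate these by hand.

```python
# Minimal sketch of typosquat candidate generation for a handle or hashtag.
# Purely illustrative; candidates would be hand-curated before any use.

def typosquat_candidates(name: str) -> set[str]:
    candidates = set()
    for i in range(len(name)):
        # drop one character
        candidates.add(name[:i] + name[i + 1:])
        # swap two adjacent characters
        if i < len(name) - 1:
            candidates.add(name[:i] + name[i + 1] + name[i] + name[i + 2:])
        # double one character
        candidates.add(name[:i] + name[i] + name[i:])
    candidates.discard(name)
    return candidates

print(sorted(typosquat_candidates("examplehashtag"))[:5])
```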

All of the above is tactical, because that’s where we are right now, but there are also strategic things going on. Initiatives to inoculate and educate people about misinformation exist, and the long work of bringing the problem into the light continues in many places.

Adaptations

I covered offense and defence, but that’s never the whole of a game: for instance, in yet another of my interests, MLsec (the application of machine learning to information security),  the community divides its work into using machine learning to attack, using it to defend, and attacking the machine learning itself. 

Right now the game is changing, and this is why I’m emphasizing frameworks. This time also feels to me like the moment Cliff Stoll writes about in The Cuckoo’s Egg, when one man investigating an information security incursion, a “hack”, happening through his computers slowly finds other people across the government who recognize the problem too, before that small group grows into the huge industry we see today.

We need frameworks because the attacks are adapting quickly, and it’s going to get worse because of advances in areas like MLsec: we’re creating adaptive, machine-learning-driven attacks that learn to evade machine-learning-driven detectors, and rapidly heading from artefact-based to behavior-based to intent-based discussions. Things already happening, or likely to happen next, include: hybrid attacks where attackers combine algorithms and humans to evade and attack a combination of algorithms (e.g. detectors, popularity measures) and humans; a shift from obvious trolls and botnets to infiltrating and weaponizing existing human communities (mass-scale “useful idiots”); and attacks across multiple channels at the same time, masked with techniques like pop-up and low-and-slow messaging. This is where we are: this is becoming an established part of hybrid warfare, and needs to be considered not as war, but certainly on a similar level to, say, turning up in part of Colombia with some money and a gunboat pointed at the railway station and accidentally creating a new country from a territory you’d quite like to build a canal in (Panama). Also of note is what happens if the countries currently attacking the US make the geopolitical and personal gains they require, stop their current campaigns, and leave several hundred highly trained influence operators without a salary. Generally what happens in those situations is that an industry forms around commercial targets: some of this has already happened, but those numbers could be interesting, and not in a good way.

One framework isn’t enough to cover this. The SANS sliding scale of security describes, from left to right, the work needed to secure a system: from architecting that system to be secure, through passively defending it against threats, actively responding to attacks, and producing intelligence, all the way to “legal countermeasures and self-defense against an adversary”. We have some of the architecture work done. Some of the passive defence. Lots of intelligence. There’s potential for defense here. There’s going to need to be strategic and tactical collaboration, and by that I mean practical things. Nobody quite knows what to call the state we’re in: it’s not war, but it is a form of landgrab (later in the day I whispered “are we the Indians?” to a co-speaker, meaning this must have been what it felt like to be a powerful leader watching the settlers say “nice country, we’ll take it”), possibly politics with the addition of other means, and without that definition it’s really hard to regulate what is and isn’t allowed to happen (also perhaps important: it seems that only the military place limits on themselves in this space). With cross-platform, subtle attacks, collaboration and information sharing will be crucial, so trusted third-party exchanges matter. Sharing of offensive techniques, tactics and processes matters too, so we need a misinformation version of the ATT&CK framework for now (I tried fitting misinformation onto the end of the existing framework and it just doesn’t fit: the shape is good but adjustments are needed), with a SANS top 20 later (because we’re already seeing the same attack vectors repeating, misinformation versions of script kiddies, etc.). There’s a defense correlate to the algorithms-plus-humans comment on offense above: we will most likely need a hybrid response of algorithms plus humans countering attacks by algorithms plus humans. We will need to think the unthinkable, even if we immediately reject it (“Great Wall of America”, nah). And we really need to talk about what offense would look like: I don’t mean that in a kinetic sense, I mean what are valid self-defense actions.
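
As a rough illustration of what a misinformation ATT&CK-style entry might need to hold, here’s a minimal sketch; the tactic name, technique ID and fields are all placeholders of mine, not the framework adjustments I mention working on below.

```python
# Minimal sketch of a misinformation ATT&CK-style technique entry.
# Tactic names, IDs and fields are invented placeholders for illustration.
from dataclasses import dataclass, field

@dataclass
class Technique:
    technique_id: str                 # invented numbering scheme
    name: str
    tactic: str                       # stage of the influence operation
    description: str
    observed_artefacts: list[str] = field(default_factory=list)
    candidate_counters: list[str] = field(default_factory=list)

example = Technique(
    technique_id="TA-NARRATIVE.T-HASHTAG-HIJACK",   # hypothetical ID
    name="Hashtag hijacking",
    tactic="amplification",
    description="Flood or ride an existing hashtag to push a planted narrative.",
    observed_artefacts=["burst of near-identical posts", "young amplifier accounts"],
    candidate_counters=["dogpiling with unrelated content", "platform rate limits"],
)

print(example.name, "->", ", ".join(example.candidate_counters))
```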

I ended my presentation with a brief glimpse at what I’m working on right now, and a plea to the audience. I’m spending half my time helping to build the Global Disinformation Index, an independent disinformation rating system, and the rest researching areas that interest me, which right now means that misinformation equivalent of the ATT&CK tactics, techniques and procedures framework. My plea to the audience was: please don’t fight the last war here.