Responses to misinformation

[Cross-post from Medium https://medium.com/misinfosec/responses-to-misinformation-885b9d82947e]

There is no single magic response to misinformation. Misinformation mitigation, like disease control, is a whole-system response.

MisinfosecWG has been working on infosec responses to misinformation. Part of this work has been creating the AMITT framework, to provide a way for people from different fields to talk about misinformation incidents without confusion. We’re now starting to map out misinformation responses against the framework’s stages; a rough sketch of that kind of mapping is below.
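
To make "mapping responses onto the framework" concrete, here's a minimal sketch, in Python, of the kind of index this produces: candidate counters keyed by the AMITT tactic stage they respond to. Only TA09 Exposure is taken from this post; the other stage labels and groupings are illustrative placeholders, not the official AMITT list.

```python
# Sketch of mapping counters onto AMITT tactic stages. TA09 "Exposure" is
# from this post; the other tactic entries are illustrative placeholders,
# not the official AMITT stage list.
COUNTERS_BY_TACTIC = {
    "TA09 Exposure": [
        "education / media-literacy challenges (e.g. R00tz)",
        "require editorials to be labelled as such",
    ],
    "TA06 (content creation, placeholder)": [
        "require AI/ML-altered products to identify themselves",
        "digitally sign media and metadata (provenance tracking)",
    ],
    "TA05 (targeting, placeholder)": [
        "apply TV political-ad rules to paid online political ads",
    ],
}

def counters_for(tactic_id: str) -> list[str]:
    """Return all candidate counters recorded against a tactic stage."""
    return [counter
            for stage, counters in COUNTERS_BY_TACTIC.items()
            if stage.startswith(tactic_id)
            for counter in counters]

print(counters_for("TA09"))
```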

Today I’m sat in Las Vegas, watching the Rootzbook misinformation challenge take shape. I’m impressed at what the team has done in a short period of time (and has planned for later). It also has a place on the framework — specifically at the far-right of it, in TA09 Exposure. Other education responses we’ve seen so far include:

Education is an important counter, but won’t be enough on its own. Other counters that are likely to be trialled with it include:

  • Tracking data provenance to protect against context attacks (digitally sign media and metadata so that the media carries the original URL where it was published, with the signing key belonging to the original author or publisher; a sketch follows this list)
  • Forcing products altered by AI/ML to notify their users (e.g. there was an effort to force Google Duplex, Google’s very believable AI voice assistant, to announce it was an AI before it could talk to customers)
  • Requiring legitimate news media to label editorials as such
  • Participating in the Cognitive Security Information Sharing and Analysis Organization (ISAO)
  • Forcing paid political ads on the Internet to follow the same rules as paid political advertisements on television
  • Baltic community models, e.g. the Baltic “Elves”, volunteer debunkers who team up with local media
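
Here’s a minimal sketch of the provenance-signing idea in the first bullet, using Python’s cryptography package and an Ed25519 key. The record format (field names, hex-encoded signature) is my assumption for illustration, not an existing standard.

```python
# Minimal sketch of provenance signing: the publisher signs the media bytes
# together with their original context (URL, author), so later copies can be
# checked against that context. Field names and packaging are assumptions,
# not any real standard.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_media(media: bytes, url: str, author: str, key: Ed25519PrivateKey) -> dict:
    metadata = {"original_url": url, "author": author}
    payload = media + json.dumps(metadata, sort_keys=True).encode()
    return {"metadata": metadata, "signature": key.sign(payload).hex()}

def verify_media(media: bytes, record: dict, public_key) -> bool:
    payload = media + json.dumps(record["metadata"], sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(record["signature"]), payload)
        return True
    except InvalidSignature:
        return False

key = Ed25519PrivateKey.generate()
record = sign_media(b"<image bytes>", "https://example.com/story", "A. Reporter", key)
assert verify_media(b"<image bytes>", record, key.public_key())
```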

Jonathan Stray’s paper “Institutional Counter-disinformation Strategies in a Networked Democracy” is a good primer on the counters available at a national level.

Why talk about disinformation at a hacking event?

I’m one of the DEFCON AI Village core team, and there’s quite a bit of disinformation activity in the Village this year, including:

Why talk about disinformation* at a hacking event?  I mean, shouldn’t it be in the fluffy social science candle-waving events instead? What’s it doing in the AI Village?  Isn’t it all a bit, kinda, off-topic? 

Nope. It’s in exactly the right place. Misinformation, or more correctly its uglier cousin, disinformation, is a hack. Disinformation takes an existing system (communication between very large numbers of people) apart and adapts it to fit new intentions – whether that’s temporarily destroying the system’s ability to function (the “Division” attacks we see on people’s trust in each other and in the democratic systems they live within), changing system outputs (influence operations to dissuade opposition voters or swing marginal election results), or making the system easy to access and weaken from outside (antivax and other convenient conspiracy theories). And a lot of this is done at scale, at speed, and across many different platforms and media – which, if you remember your data science history, is the three Vs: volume, variety and velocity. (There was a fourth V, veracity, but, erm, misinformation, guys!)

And the AI part? Disinformation is also called Computational Propaganda for a reason. So far, we’ve been relatively lucky: the algorithms used by disinformation’s current ruling masters, Russia, Iran et al, have been fairly dumb (but still useful). We had bots (scripts pretending to be social media users, usually used to amplify a message, theme or hashtag until algorithms fed it to real users) so simple you could probably spot them from space – like, seriously, sending the same message hundreds of times a day at a rate even Win (who’s running the R00tz bots exercise at AI Village) can’t type at. They were backed up by trolls – humans (the most famous of which were in the Russian Internet Research Agency) spreading more targeted messages and chaos – with online advertising (and its ever-so-handy demographic targeting) for more personalised message delivery. That luck isn’t going to last. Isn’t lasting. Bots are changing. The way they’re used is changing. The way we find disinformation is changing (once, sigh, it was easy enough to search for #qanon on twitter to find a whole treasure trove of crazy).
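
Those spot-them-from-space bots really were crude enough for a toy heuristic to catch: flag accounts that post at inhuman rates, or whose output is mostly one repeated message. A sketch, with thresholds that are illustrative guesses rather than tuned values:

```python
# Toy version of the "spot them from space" heuristic: flag accounts that
# post at rates no human types at, or whose output is mostly one repeated
# message. Thresholds are illustrative guesses, not tuned values.
from collections import Counter

MAX_POSTS_PER_DAY = 72      # assumption: ~3 posts/hour, sustained all day
MAX_DUPLICATE_SHARE = 0.5   # assumption: >50% identical messages is suspicious

def looks_like_amplifier_bot(posts_today: list[str]) -> bool:
    if len(posts_today) > MAX_POSTS_PER_DAY:
        return True
    if not posts_today:
        return False
    # Share of the day's posts taken up by the single most repeated message.
    top_count = Counter(posts_today).most_common(1)[0][1]
    return top_count / len(posts_today) > MAX_DUPLICATE_SHARE

print(looks_like_amplifier_bot(["Vote X! #hashtag"] * 300))  # True
```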

The disinformation itself is starting to change too: goodbye straight-up “fake news” and hunting for high-frequency messages, hello more nuanced incidents that mean anomaly detection and pattern-finding across large volumes of disparate data and its connections. And as a person who’s been part of both MLsec (the intersection of machine learning/AI and information security) and misinfosec (the intersection of misinformation and information security), I *know* that looks just like a ‘standard’ (because hell, there are no standards, but we’ll pretend for a second there are) MLsec problem. And that’s why there’s disinformation in the AI Village.
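
Concretely, the pattern-finding half of that looks less like grepping for one hashtag and more like grouping near-duplicate messages (lightly reworded copies spread across platforms) out of a large message stream. A minimal sketch using Jaccard similarity over word shingles; the 0.5 threshold is an illustrative guess:

```python
# Minimal sketch of near-duplicate clustering: group messages whose word
# shingles overlap heavily, so lightly reworded copies of one narrative
# land in the same cluster. The 0.5 threshold is illustrative.
def shingles(text: str, n: int = 3) -> set:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(max(1, len(words) - n + 1))}

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_near_duplicates(messages: list[str], threshold: float = 0.5) -> list[list[str]]:
    # Each cluster keeps the shingle set of its first member as representative.
    clusters: list[tuple[set, list[str]]] = []
    for msg in messages:
        sh = shingles(msg)
        for rep, members in clusters:
            if jaccard(sh, rep) >= threshold:
                members.append(msg)
                break
        else:
            clusters.append((sh, [msg]))
    return [members for _, members in clusters]

msgs = [
    "They are hiding the truth about the election",
    "they are HIDING the truth about the election!!",
    "Great weather in Vegas today",
]
print(cluster_near_duplicates(msgs))  # first two messages cluster together
```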

If you get more curious about this, there’s a whole separate community, Misinfosec http://misinfosec.org, working on the application of information security principles to misinformation.  Come check us out too.

* “Is there a widely accepted definition of mis- vs disinformation?” Well, not really, not yet. There’s lots of discussion about it in places like the Credibility Coalition’s Terminology group, and papers like Fallis’ “What Is Disinformation?”; Claire Wardle’s definitions of dis-, mis- and malinformation are used a lot. But most active groups pick a definition and get on with the work – for instance, this is MisinfosecWG’s working definition: “We use misinformation attack (and misinformation campaign) to refer to the deliberate promotion of false, misleading or mis-attributed information. Whilst these attacks occur in many venues (print, radio, etc), we focus on the creation, propagation and consumption of misinformation online. We are especially interested in misinformation designed to change beliefs in a large number of people.” And my personal one is that we’re heading towards disinformation as the mass manipulation of beliefs: not necessarily with fake content (text, images, videos etc), but usually with fake context (misattribution of source, location, date, context etc) and the use of real content to manipulate emotion in specific directions. Honestly, it’s like trying to define pornography – finding the right definitions is important, but it can get in the way of the work of keeping it out of the mainstream, and if it’s obvious, it’s obvious. We’ll get there, but in the meantime, there’s work to do.