Disinformation as a security problem: why now, and how might it play out?

[Cross-post from Medium https://medium.com/misinfosec/disinformation-as-a-security-problem-why-now-and-how-might-it-play-out-3f44ea6cda95]

When I talk about security going back to thinking about the combination of physical, cyber and cognitive, people sometimes ask me: why now? Why, apart from the obvious weekly flurries of misinformation incidents, are we talking about cognitive security now?

Big, Fast, Weird

I usually answer with the three Vs of big data: volume, velocity, variety (the fourth V, veracity, is kinda the point of disinformation, so we’re leaving it out of this discussion).

  • The internet has a lot of text data floating around it, but its variety isn’t just in all the different platforms and data formats needed to scrape or inject into it — it’s also in the types of information being carried. We’re way past the Internet 1.0 days of someone posting the sports scores online and a bunch of hackers lurking on bulletin boards: now everyone and their grandmother is here, and the (sniffable, actionable and adjustable) data flows include emotions, relationships, group sentiment (anyone thinking about market sentiment should be at least a little worried by now) and group cohesion markers.
  • There’s a lot of it — volumes are high enough that brands and data scientists can spend their days doing social media analysis, looking at cliques, message spread, adaptation and reach.
  • And it’s coming in fast: so fast that an incident manager can do A/B testing on humans in real time, adapting messages and other parts of each incident to fit the environment and head towards incident goals faster and more efficiently. Ideally that adaptation is much faster than any response, which fits the classic definition of “getting inside the other guy’s OODA loop”.

NB The internet isn’t the only system carrying these things: we still have traditional media like radio, television and newspapers, but they’re each increasingly part of these larger connected systems.
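That real-time adaptation loop can be reduced to “explore, then exploit”. Here’s a minimal, purely illustrative sketch (all names and numbers are hypothetical, and real audience engagement is replaced by a deterministic stand-in): show each message variant to a small slice of the audience, measure engagement, then route the rest of the traffic to whichever variant performed best. A real incident toolchain would use a bandit algorithm over live signals; the mechanic is the same.

```python
# Hypothetical sketch of real-time A/B testing of messages.
# "engagement" is a deterministic stand-in for audience response.

def engagement(variant: str, impression: int) -> bool:
    """Stand-in audience: message_b engages 2-in-5, message_a 1-in-5."""
    hits = {"message_a": {0}, "message_b": {0, 2}}
    return impression % 5 in hits[variant]

def run_campaign(variants, total=2000, warmup=100):
    """Explore: show each variant `warmup` times.
    Exploit: send all remaining traffic to the best performer."""
    stats = {v: {"shown": 0, "hits": 0} for v in variants}

    def show(v):
        stats[v]["hits"] += engagement(v, stats[v]["shown"])
        stats[v]["shown"] += 1

    for v in variants:                              # explore phase
        for _ in range(warmup):
            show(v)

    rate = lambda v: stats[v]["hits"] / stats[v]["shown"]
    best = max(variants, key=rate)
    for _ in range(total - warmup * len(variants)):  # exploit phase
        show(best)
    return best, stats

best, stats = run_campaign(["message_a", "message_b"])
print(best, stats[best]["shown"])  # message_b 1900
```

The point of the sketch is the asymmetry: the adapting side only needs a feedback signal and a routing rule, while any responder has to first notice that the message mix is shifting.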

So what next?

Another question I get a lot is “so what happens next?”. Usually I answer that one by pointing people at two books: The Cuckoo’s Egg and Walking Wounded — both excellent books about the evolution of the cybersecurity industry (and not just because great friends feature in them), and say we’re at the start of The Cuckoo’s Egg, where Stoll starts noticing there’s a problem in the systems and tracking the hackers through them.

I think we’re getting a bit further through that book now. I live in America. Someone sees a threat here, someone else makes a market out of it. Cuddle-an-alligator — tick. Scorpion lollipops in the supermarket — yep. Disinformation as a service / disinformation response as a service — also in the works, as predicted for a few years now. Disinformation response is a market, but it’s one with several layers to it, just as the existing cybersecurity market has specialists and sizes and layers.

Markets: sometimes botany, sometimes agile

Frank is a very wise, very experienced friend (see the books above), who calls our work on AMITT “botany”: building catalogs of techniques and counters more slowly than the bad guys can maraud across our networks, when we really should be out there chasing them. He’s right. Kinda.

I read Adam Shostack’s slides on threat modelling in 2019 today. He talks about the difference between “waterfall” (STRIDE, kill chain etc.) and “agile” threat modelling. I’ve worked on both: on big critical systems that used waterfall/“V” methods, because you don’t really get to continuously rebuild an aircraft or ship design, and on agile systems that we trialled with and adapted to end-user needs. (I’ve also worked on lean production, where, classically speaking, agile is where you know the problem space and are iterating over solutions, and lean is iteration over both the problem and solution spaces. This will become important later.)

This is one of the splits: we’ll still need the slower, deliberative work that gives labels and lists defences and counters for common threats (the cognitive-security equivalents of “phishing” and friends), but we also need the rapid response to things previously unseen that keeps white-hat hackers glued to their screens for hours, and there’s a growing market in tools to support them. (As an aside, I’m part of a new company, and this agile/waterfall split finally gives me a word to describe “that stuff over there that we do on the fly”.)

Also, because I’m old, I can remember when universities had no clue where to put their computer science group — it was sometimes in the physics department, sometimes engineering, or maths, or somewhere weirder still; later on, nobody quite knew where to put data scientists as they cut across disciplines and used techniques from wherever made sense, from art to hardcore stats. This market will shake out that way too. Some of the tools, uses and companies will end up as part of day-to-day infosec. Others will be market-specific (media and adtech are already heading that way); others again will meet specific needs on the “influence chain”, like educational tools and narrative trackers. Perhaps a good next post would be an emerging-market analysis?

Overlaps between misinformation research and IoT security

[Cross-post from Medium https://medium.com/misinfosec/at-truth-trust-online-someone-asked-me-about-the-overlaps-between-misinformation-research-and-iot-a69772aba963]

At Truth&Trust Online, someone asked me about the overlaps between misinformation research and IoT security. There’s more than you’d think, and not just in the overlaps between people like Chris Blask who are working on both problem sets.

I stopped for a second, then went “Oh. I recognize this problem.” It’s exactly what we did with data and information fusion (and knowledge fusion too, but you know a lot of that now as just normal data science and AI). Basically it’s about what happens when you’re building situation pictures (mental models of what is happening in the world) based on data that’s come from people (the misinformation, or information fusion, part) and things (the IoT, or data fusion, part). And what we basically did last time was run both disciplines separately — the text analysis and reasoning in a different silo from the sensor-based analysis and reasoning — until it made sense to start combining them (which is basically what became information fusion). That’s how we got the *last* pyramid diagram: the DIKW model of data under information under knowledge under wisdom (sorry: it’s been changed to insight now), and similar ideas of transformations and transitions in information (in the Shannon information theory sense of the word) between layers.

We’ll probably do a similar thing now. Both disciplines feed into situation pictures; both can be used to support (or refute) each other. Both contain all the classics like protecting the information CIA triad: confidentiality, integrity, availability. I tasked two people at the conference to start delving into this area further (and connect to Chris) — will see where this goes.
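To make the “support or refute” idea concrete, here’s a minimal sketch (all class names, topics and the tolerance threshold are hypothetical): claims from the people channel are checked against readings from the things channel, and the situation picture records, per topic, whether the two channels agree.

```python
# Hypothetical sketch: fusing human-sourced claims (information fusion)
# with thing-sourced readings (data fusion) into one situation picture.
from dataclasses import dataclass


@dataclass
class Claim:            # human-sourced report, e.g. a social media post
    topic: str
    asserted_value: float


@dataclass
class SensorReading:    # thing-sourced measurement, e.g. an IoT gauge
    topic: str
    value: float


def fuse(claims, readings, tolerance=0.1):
    """Build a tiny situation picture: topic -> supported/refuted/unverified."""
    by_topic = {}
    for r in readings:
        by_topic.setdefault(r.topic, []).append(r.value)

    picture = {}
    for c in claims:
        vals = by_topic.get(c.topic)
        if not vals:
            picture[c.topic] = "unverified"   # no sensor data to check against
            continue
        mean = sum(vals) / len(vals)
        agree = abs(mean - c.asserted_value) <= tolerance * max(abs(mean), 1)
        picture[c.topic] = "supported" if agree else "refuted"
    return picture


picture = fuse(
    [Claim("river_level_m", 9.0), Claim("power_outage_pct", 5.0)],
    [SensorReading("river_level_m", 3.1), SensorReading("river_level_m", 3.0)],
)
print(picture)  # {'river_level_m': 'refuted', 'power_outage_pct': 'unverified'}
```

The same loop runs in reverse, too: an anomalous sensor reading with no corroborating human reports is just as much a flag as a viral claim with no sensor support, which is why the two silos eventually want to merge.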