Writing about Countermeasures

[Cross-post from Medium https://medium.com/misinfosec/writing-about-countermeasures-1671d231e8a2]

The AMITT framework so far is a beautiful thing — we’ve used it to decompose different misinformation incidents into stages and techniques, so we can start looking for weak points in the ways that incidents are run, and in the ways that their component parts are created, used and put together. But right now, it’s still part of the “admiring the problem” collection of misinformation tools. To be truly useful, AMITT needs to contain not just the breakdown of what the blue team thinks the red team is doing, but also what the blue team might be able to do about it. Colloquially speaking, we’re talking about countermeasures here.

Go get some counters

Now there are several ways to go about finding countermeasures to any action:

  • Go look at ones that already exist. We’ve logged a few already in the AMITT repo, against specific techniques — for example, we listed a set of counters from the Macron election team as part of incident I00022.
  • Pick a specific tactic, technique or procedure and brainstorm how to counter it — the MisinfosecWG did this as part of their Atlanta retreat, describing potential new counters for two of the techniques on the AMITT framework.
  • Wargame red v blue in a ‘safe’ environment, and capture the counters that people start using. The Rootzbook exercise that Win and Aaron ran at Defcon AI Village was a good start on this, and holds promise as a training and learning environment.
  • Run a machine learning algorithm to generate random countermeasures until one starts looking more sensible/effective than the others. Well, perhaps not, but there’s likely to be some measure of automation in counters eventually…

Learn from the experts

So right. We get some counters. But hasn’t this all been done before? If we’re following the infosec playbook to do all this faster this time around (we really don’t have 20 years to get decent defences in place — we barely have 2…), then shouldn’t we look at things like courses of action matrices? Yes. Yes we should…

Courses of Action Matrix [1]

So this thing goes with the Cyber Kill Chain — the thing that we matched AMITT to. Down the left side we have the stages (7 in the kill chain; 12 in AMITT). Along the top we have six things we can do to disrupt each stage: detect, deny, disrupt, degrade, deceive, destroy. And in each grid square, we have a suggestion of an action (I suspect there’s more than one of these for each square) that we could take to cause that type of disruption at that stage. That’s cool. We can do this.
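
To make that concrete, here’s a minimal sketch of how such a matrix could be held as a lookup table for AMITT. The stage labels and example counters below are placeholders for illustration, not an agreed set:

```python
# Sketch: a courses-of-action matrix as a lookup table keyed by
# (stage, action). Stage labels and example counters are placeholders.
COURSES_OF_ACTION = {
    ("TA06 Develop Content", "detect"): ["Monitor known disinformation outlets for new narratives"],
    ("TA07 Channel Selection", "deny"): ["Work with platforms to limit bulk inauthentic account creation"],
    ("TA09 Exposure", "degrade"):       ["Label or downrank content from repeat-offender accounts"],
}

def counters_for(stage, action):
    """Return the suggested counters for one cell of the matrix."""
    return COURSES_OF_ACTION.get((stage, action), [])
```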

The other place we can look is at our other parent models, like the ATT&CK framework, the psyops model, marketing models etc, and see how they modelled and described counters too — for example, the mitigations for ATT&CK technique T1193 Spearphishing Attachment.

Make it easy to share

Checking parent models is also useful because this gives us formats for our counter objects. ATT&CK mitigations are basically objects of type “mitigation” that contain a title, id, brief description and a list of the techniques they address. Looking at the STIX format for course-of-action gives us a similarly simple format for each counter against tactics — a name, a description and a list of things it mitigates against.
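
As a rough illustration (not an official schema), a single AMITT counter reusing the STIX 2.x course-of-action shape might look something like this, with the “mitigates” link expressed as a separate relationship object and the required created/modified bookkeeping fields omitted for brevity; the id, name and description here are invented:

```python
# Sketch of one counter in a STIX-like course-of-action shape.
# The id, name and description are invented for illustration;
# required STIX bookkeeping fields (created, modified) are omitted.
course_of_action = {
    "type": "course-of-action",
    "id": "course-of-action--00000000-0000-0000-0000-000000000001",
    "name": "Pre-planned rapid response to document dumps",
    "description": "Holding statements and verification workflow of the "
                   "kind used by the Macron election team (incident I00022).",
}

# In STIX, "this counter mitigates that technique" is a separate
# relationship object rather than a field on the course of action.
relationship = {
    "type": "relationship",
    "relationship_type": "mitigates",
    "source_ref": course_of_action["id"],
    "target_ref": "attack-pattern--...",  # id of the technique being countered
}
```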

We want to be more descriptive whilst we find and refine our list of counters, so we can trace our decisions and where they came from. A more thorough list of features for a counter entry would probably include (there’s a rough sketch in code after this list):

  • id
  • name
  • brief description
  • list of tactics it can be used on
  • list of techniques it can be used on
  • expected action (detect, deny etc)
  • who could take this action (this isn’t in the infosec lists, but we have many actors on the defence side with different types of power, so this might need to be a thing)
  • anticipated effects (both positive and negative — also not in the infosec lists)
  • anticipated effort (not sure how to quantify this — people? money? hours? but part of the overarching issue is that attacks are much cheaper than defences, so defence cost needs to be taken into account)
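
Here’s what one record with those fields might look like in code. This is a sketch for discussion; the field names, types and action vocabulary are assumptions, not an agreed schema:

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Sketch of a counter entry with the fields listed above.
# Field names and types are placeholders, not an agreed schema.
@dataclass
class Counter:
    id: str                                               # e.g. "C00001"
    name: str
    description: str
    tactics: List[str] = field(default_factory=list)      # tactics it can be used on
    techniques: List[str] = field(default_factory=list)   # techniques it can be used on
    action: str = ""                                       # detect, deny, disrupt, degrade, deceive, destroy
    actors: List[str] = field(default_factory=list)        # who could take this action
    effects: List[str] = field(default_factory=list)       # anticipated effects, positive and negative
    effort: Optional[str] = None                           # people? money? hours? still an open question
```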

These counter entries would be generated from a cross-table of counters seen within incidents, which looks similar to the above but also contains the who/where/when etc (again, there’s a sketch in code after this list):

  • id
  • brief description
  • list of tactics it was used on
  • list of techniques it was used on
  • action (detect, deny etc)
  • who took this action
  • effects seen (positive and negative)
  • resources used
  • incident id (if known)
  • date (if known)
  • counters-to-the-counter seen
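
A matching sketch for one row of that cross-table, again with placeholder names, might be:

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Sketch of one observed use of a counter within an incident.
# Field names are placeholders, not an agreed schema.
@dataclass
class CounterUse:
    id: str
    description: str
    tactics: List[str] = field(default_factory=list)       # tactics it was used on
    techniques: List[str] = field(default_factory=list)    # techniques it was used on
    action: str = ""                                        # detect, deny etc
    actor: str = ""                                         # who took this action
    effects_seen: List[str] = field(default_factory=list)   # positive and negative
    resources_used: Optional[str] = None
    incident_id: Optional[str] = None                        # e.g. "I00022", if known
    date: Optional[str] = None                               # if known
    counters_to_the_counter: List[str] = field(default_factory=list)
```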

“Boundaries for Fools, Guidelines for the Wise…”

At this stage, older infosec people are probably shaking their heads and muttering something about stamp collecting and bingo cards. We get that. We know that defending against a truly agile adversary isn’t a game of lookup, and as fast as we design and build counters, our counterparts will build counters to the counters, new techniques, new adaptations of existing techniques etc.

But that’s only part of the game. Most of the time people get lazy, or get into a rut — they reuse techniques and tools, or it’s too expensive to keep moving. It makes sense to build descriptions like this that we can adapt over time. It also helps us spot when we’re outside the frame.

Right. Time to get back to those counters.

References:

[1] Hutchins, E. M., Cloppert, M. J., and Amin, R. M., “Intelligence-Driven Computer Network Defense Informed by Analysis of Adversary Campaigns and Intrusion Kill Chains”, 2011