Who handles misinformation outbreaks?

[Cross-post from Medium https://medium.com/misinfosec/who-handles-misinformation-outbreaks-e635442972df]

Misinformation attacks — the deliberate and sustained creation and amplification of false information at scale — are a problem. Some of them start as jokes (the ever-present street sharks in disasters) or attempts to push an agenda (e.g. right-wing brigading); some are there to make money (the “Macedonian teens”), or part of ongoing attempts to destabilise countries including the US, UK and Canada (e.g. Russia’s Internet Research Agency using troll and bot amplification of divisive messages).

Enough people are writing about why misinformation attacks happen, what they look like and what motivates attackers. Fewer people are actively countering attacks. Here are some of them, roughly categorised as:

  • Journalists and data scientists: Make misinformation visible
  • Platforms and governments: Reduce misinformation spread
  • Communities: Directly engage misinformation
  • Adtech: Remove or reduce misinformation rewards

Throughout this note, I want to ask questions about these entities like “why are they doing this”, “how could we do this better” and “is this sustainable”, and think about what a joined-up system to counter larger-scale misinformation attacks might look like (I’m deliberately focussing on sustained attacks, and ignoring one-off “sharks in the streets” type misinformation here).

Make misinformation visible

If you can see misinformation attacks in real time, you can start to do something about them. Right now, that can be as simple as checking patterns across a set of known hashtags, accounts or urls on a single social media platform. That’s what the Alliance for Securing Democracy does, with its dashboards of artefacts (hashtags, urls etc) from suspected misinformation trolls in the US (Hamilton68) and Germany (Artikel38), what groups like botsentinel do with artefacts from suspected botnets, and sites like newstracker do for artefacts from Facebook.

In almost real-time, there are groups tracking and debunking rumours: Snopes and similar fact-checking organisations, small-scale community groups like no melon and Georgia’s Myth Detector, and state-level work like the European External Action Service East Stratcom Task Force’s EU vs Disinfo campaign and US State Department’s polygraph.info.

For some events, like US elections, there are many groups collecting and storing message-level data, but that work is often academic, aimed at learning and strategic outputs (e.g. advocacy, policy, news articles). Some work worth watching includes Jonathan Albright’s event-specific datasets; Kate Starbird’s work on crisis misinformation; Oxford Internet Institute’s Computational Propaganda project (e.g. their 2016 USA elections work) and DFR Lab’s #electionwatch and #digitalsherlocks work.

Real-time misinformation monitoring is costly to set up accurately, and is usually tied to a geography, event, group of accounts or known artefacts, and a small number of platforms. That leaves gaps (were there any dashboards for the recent Ontario elections?) and overlaps, and there’s a lot of scope here for real-time information-sharing (oh god did I just volunteer for yet another data standards body?) across platforms, countries and topics, because that’s where the attackers are starting to work.

Simple things like checking a new platform for known misinformation artefacts, e.g. the US House Intelligence Committee’s datasets [“Exhibit A”] of Russia-funded Facebook advertisements, and checking for links to known fake news sites have already unearthed new misinformation sources. We’re already seeing multi-platform attacks and evolution in tactics to make detection and responses harder, and the really powerful techniques (the evolution of early persona management software into large-scale natural language posts) haven’t been used in earnest yet. When they are, our tracking will need to get broader and more sophisticated, and will need all the artefacts it can get.
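
Checks like this are easy to automate. Here's a minimal sketch (the artefact lists and posts below are invented purely for illustration; real lists would come from published datasets like the ones above) that scans posts pulled from a new platform for known domains and hashtags:

import re

# Hypothetical artefact lists; in practice these come from published datasets
KNOWN_DOMAINS = {"example-fakenews.com", "another-misinfo.site"}
KNOWN_HASHTAGS = {"#examplehashtag"}

URL_PATTERN = re.compile(r"https?://(?:www\.)?([^/\s]+)")

def flag_post(text):
    """Return any known artefacts (domains or hashtags) found in a post's text."""
    hits = set()
    for domain in URL_PATTERN.findall(text):
        if domain.lower() in KNOWN_DOMAINS:
            hits.add(domain.lower())
    for word in text.split():
        if word.lower() in KNOWN_HASHTAGS:
            hits.add(word.lower())
    return hits

posts = [
    "Check this out https://example-fakenews.com/story #ExampleHashtag",
    "Nothing to see here, just a cat picture",
]
for post in posts:
    print(flag_post(post) or "no known artefacts")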

More tracking projects are listed in Shane Greenup’s article, and it’s worth watching Disinfo Portal, which reports on other anti-misinformation campaigns.

Reduce misinformation spread

Large-scale misinformation is a pipeline. It starts in different places and for different reasons, but it generally needs a source, content (text, images, video etc), online presence (websites, social media accounts, advertisements, ability to write comments or other user-generated content in sites and fora etc), reach (e.g. through botnet or viral amplification), and end targets (community beliefs, people viewing advertising etc) to succeed.

The source is a good place to start. I’ve had success in the past asking someone to remove or edit their misinforming posts, but that’s not our use-case here: we’re looking at persistent and often adversarial content. Attacking a creator’s intent, or using other less-savory methods to dissuade them, sounds like something from a bad spy novel, but both have been known to happen in real life. The creators of fake news sites are often there to make money, and either removing the promise of cash (see below) or making it riskier to obtain that money (e.g. by penalising the site creators) might work. The creators of social media-based misinformation attacks often have other incentives, getting quickly into the realm of politics and diplomacy (also see below).

The next place is the platforms that host misinformation. These are typically websites (e.g. “Macedonian fake news sites”) and social media platforms, but we’ve also seen misinformation coordinated across other user-generated content hosts, including comment sections and payment sites.

Websites typically use internet service providers, domain name providers and advertising exchanges (for monetisation). Internet service providers could and have removed sites that violate their terms of service (e.g. GoDaddy removed the Daily Stormer, and other sites have been removed for stealing content), but we haven’t seen them remove reported “fake news” sites yet (we’d love to be corrected on that). Sites can be removed for domain squatting or names similar to trademarks, but most fake news sites steer clear of this. Advertising exchanges and their customers do keep blacklists of sites that include fake news sites — that’s covered in more detail below. It’s also possible to disrupt views to fake news sites, by changing search engine results using sites with similar text, squatting on domains with similar names etc.

Misinformation on social media can be original content (text, images etc), pointers to fake news sites, or amplification of either of these through repetition, use of popular hashtags and usernames, or other marketing techniques to get sources and messages more attention. One area of much interest recently has been amplification of misinformation by trolls, bots and brigades (e.g. coordinated 8chan users); we’re starting to see these adapt to current detection techniques and are looking at ways they’re likely to evolve as artificial intelligence techniques improve (that’s for a different post).

Social media platforms have tried a variety of ways to stop misinformation spreading.

  • Early work focussed on the reader of misinformation, e.g. adding credibility markers to messages, adding fact-check buttons and notifying people who engaged with known bots/trolls (this is included in Facebook’s list of possible actions). Whilst reader education is important in combatting misinformation, user actions are difficult to obtain (which is why advertisers set a high monetary value on clicks and actions) and some actions (e.g. Twitter check marks) added to reader confusion.
  • The most-discussed platform actions are identifying and removing fake accounts. In an adversarial environment, this needs to be done quickly, e.g. remove botnets whilst misinformation attacks are happening, not after, and that speed increases the potential for collateral damage including misclassification and platform friction for users. Removing accounts is also not without risk to the company: Twitter suspended 70 million accounts recently, which had very little effect on the active botnets being tracked (most of the accounts removed were inactive), but did damage Twitter’s share price: Twitter, like many social media platforms, makes most of its revenue from advertising, and also has to manage perception of the company (a good example of this is the drop in transactions after AppNexus’s 2015 bot cleanout).
  • People are good pattern and nuance detectors: users can and do flag fake accounts and hate speech in messages to platforms, but this both creates a queue to be managed, and a potential for abuse itself (several of my female friends have had their social media accounts reported by abusers online). Abuse reports appear to go into ‘black holes’ (many of the well-documented botnet accounts are still active), and misinformation messages are often carefully crafted to create division without triggering platform hate speech rules, but there may still be some merit in this.
  • A softer way to reduce the spread of misinformation is to limit the visibility of suspected misinformation and its promoters, and increase the operating cost (in terms of time and attention) for accounts thought to be parts of botnets. We’ve seen shadowbans (making content invisible to everyone except its creator) and requests for account verification (e.g. by SMS) on bot accounts with specific characteristics recently: whilst reverifying every account in a 1000-bot network can be done, it takes time and adds friction to the running of a misinformation botnet.
  • The end point of misinformation is its target demographic(s). Since reach (the number of people who see each idea or message) is important, anything that limits either reach or the propagation speed of misinformation is useful. There are spam filter-style tools at this end point (e.g. BlockTogether) that highlight suspicious content in a user’s social media feeds, but platforms don’t yet have coordinated blocking at the user end point.
  • Large-scale misinformation attacks happen across multiple platforms, any one of which might not have a strong enough signal for removing messages on its own. There are meetings but not much visible response coordination across platforms yet, and the idea of an EFF-style watchdog for the online providers that misinformation flows through is a good one if it’s backed up with coordinated real-time response. This is where standards bodies like the Credibility Coalition are valuable, in helping to improve the ways that information is shared.

Stopping misinformation at the platform level can be improved by borrowing frameworks and techniques from the information security community (which yes, is also a post in its own right), including techniques for adversarial environments like creating ‘red teams’ to attack our own systems. Facebook has, encouragingly, set up a red team for misinformation; it will be interesting to see where that goes.

Stopping misinformation at the platform level also needs new policies, to provide a legal framework for removing offenders, and to align misinformation with the business interests of the platforms. One solution is for platforms to extend their existing policies and actions for hate speech, pornography and platform abuses. Removing misinformation comes with financial, legal and social risks to a platform, and if it’s to get into policies and development plans, it needs strong support. This is where governments can play a large part, in the same way that GDPR data privacy regulations forced change. There are already regulations in Germany, and similar activity in other countries that look very similar to the pre-GDPR discussions on data privacy and consent; unfortunately, misinformation regulations are also being discussed by countries with a history of censorship, making it even more important to get these right. A sensible move for platforms now is to create and test their own policies and feed into government policy, before policy is forced on them from outside.

Reducing misinformation rewards

Misinformation attacks usually have goals, ranging from financial profit to creating favorable political conditions (approval or confusion: both can work) for a specific nation-state and its actions.

  • Online advertising is a main source of funding for many misinformation sites. Reducing misinformation advertising revenues reduces the profits and economic incentives of “fake news” sites, and the operating revenue available to misinformation campaigns. Adtech companies keep blacklists of fraudulent websites: although very few of them (e.g. online advertisers in Slovakia) explicitly blacklist misinformation sites, misinformation sites and bots are often blacklisted already because they’re correlated with activities like click fraud, bitcoin mining and gaming market sentiment (e.g. in cryptocurrency groups). Community campaigns like Sleeping Giants have been effective in creating adtech boycotts of political misinformation and hate speech sites. Other work supporting this includes collection and analysis of fake news sites.
  • Political misinformation is part of a larger information warfare / propaganda game: few politicians have played it better than Macron’s election team, which created false information honeypots and prepared responses to their disclosure before the election blackout started in France. Social media is now a part of cyberwarfare, so we’re likely to see (or not see) more skillful responses like this online.

Ultimately when we respond to misinformation attacks by reducing rewards, we’re not trying to completely eradicate the misinformation — as with all good forms of conflict reduction, we’re trying to make it prohibitively costly for any but the most determined attackers to have the resources and reach to affect us.

Engaging misinformation online

Even if platforms work to remove misinformation botnets and limit the reach of trolls, misinformation will still get through. It’s not clear who’s responsible for countering misinformation attacks at this point: it’s made it past the platforms’ controls, but still has the potential to damage belief systems and communities (increasingly in real life, as evidenced by trolls setting up opposing protests at the same locations and times).

At this point, it’s appropriate to engage with the material and its creators, limiting the reach of bots and trolls whilst being mindful of personal and community safeties. This engagement can happen at most points in the misinformation pipeline, and this is another area where the infosec mindset of creatively adapting systems can be useful. Some recent notable examples include:

  • Lithuanian and Latvian ‘Elves’: roughly 100 people who respond to Russian-backed misinformation with humour and verified content, in “Elves vs. Trolls skirmishes”.
  • VOST work (in Spanish) on tagging disaster misinformation with links to verified updates, and retweeting misinformation with “Falso” stamps.
  • Overwhelming misinformation hashtags with other content (beautiful recent examples: a right-wing hashtag suddenly filled with news about a swimming contest, and an Indian guru using the Qanon hashtag).
  • Using search engine optimisation and brigading to move other stories above misinformation pages and related search terms in search and news results.

Engaging with bots and trolls has risks, including doxxing, which should be carefully considered if you’re planning to do this yourself.

The ecosystem, in general

Many of the players countering misinformation are working on a small scale. This can be effective, but we’re looking at an issue that will probably be part of the internet for the foreseeable future, and from my experience running crisismapping deployments, I know that without more support, these efforts might not be sustainable in the long term.

The social media platforms have a great deal of leverage on this problem — they could shut down misinformation almost overnight, but that would come at great potential cost to them, both in audience and financially (e.g. the cost to share price of ad-supported platforms removing ad viewings). Reducing money supplies (e.g. adtech) and other misinformation incentives is another good avenue of approach, and we’ve seen some working community responses, but they might be difficult to scale over time.

Misinformation attacks aren’t going away. Today they’re nation-state backed and personal attacks; tomorrow those skills could be scaled with AI and applied to companies, groups and other organisations. We need to think about this as an ecosystem, using similar tools and mindsets to information security. We also need to create more joined-up, cross-platform responses.

Thanks to connie moon sehat.

“Max evil” MLsec: why should you care?

[cross-posted from Medium https://medium.com/@sarajayneterp/max-evil-mlsec-why-should-you-care-ae3a42bfea52]

[Shoutout: we’re looking for papers — if you’re working on MLsec, please consider submitting to AI Village by June 15th, CAMLIS by June 30th.]

MLsec is the intersection of machine learning, artificial intelligence, deep learning and information security. It has an active community (see the MLsec project, Defcon’s AI Village and the CAMLIS conference) and a host of applications including the offensive ones outlined in “The Malicious Use of AI”.

One of the things we’ve been talking about is what it means to be ethical in this space. MLsec work divides into using AI to attack systems and ecosystems, using AI to defend them, and attacking the AI itself, to change its models or behavior in some way (e.g. the groups that poisoned the Tay chatbot’s inputs were directly attacking its models to make it racist). Ethics applies to each of these.

Talking about Ethics

Talking about algorithm ethics is trendy, and outside MLsec, there’s been a lot of recent discussion of ethics and AI. But many of the people talking aren’t practitioners: they don’t build AI systems, or train models or make design decisions on their algorithms. There’s also talk about ethics in infosec because it’s a powerful field that affects many people, and when we twin it with another powerful field (AI), and know how much chaos we could unleash with MLsec, we really need to get its ethics right.

This discussion needs to come from practitioners: the MLsec equivalents of Cathy O’Neil (who I will love forever for seamlessly weaving analysis of penis sizes with flawed recidivism algorithms and other abuses of people). It still needs to be part of the wider conversations about hype cycles (people not trusting bots, then overtrusting them, then reaching a social compromise with them), data risk, and what happens to people and their societies when they start sharing jobs, space, homes, societal decisions and online society with algorithms and bots (sexbots, slackbots, AI-backed bots etc), but we also need to think about the risks that are unique to our domain.

A simple definition

There are many philosophy courses on ethics. In my data work, I’ve used a simple definition: ethics is about risk, which has 3 main parts:

  • how bad the consequences of something are (e.g. death is a risk, but so is having your flight delayed),
  • how likely that thing is to happen (e.g. death in a driverless train is relatively rare) and
  • who it affects (in this case, the system designer, owner, user and other stakeholders, bystanders etc).

Risk also has a perception component: for example, people believe that risks from terrorism are greater than those from train travel, and people’s attitudes to different risks can vary from risk-seeking through risk-neutral to risk-averse.

Ethics is about reducing risk in the sense of reducing the combination of the severity of adverse effects, the likelihood of them happening and the number of people or entities they affect. It’s about not being “that guy” with the technology, and being aware of the potential effects and side-effects of what we do.
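
To make that combination concrete, here's a toy scoring sketch (my own framing of the definition above, not a standard formula); the numbers are invented purely for illustration:

def risk_score(severity, likelihood, people_affected):
    """Toy expected-harm score: severity (0-10) x likelihood (0-1) x number of people affected."""
    return severity * likelihood * people_affected

# A common, mild harm vs a rare, severe one
print(risk_score(severity=2, likelihood=0.3, people_affected=200))     # frequent flight delays: 120.0
print(risk_score(severity=10, likelihood=0.001, people_affected=500))  # rare severe failure: 5.0

Real risk work is messier than this (perceptions, distributions, unknowns), but even a crude score forces all three parts of the definition into the open.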

Ethics in MLsec

One of my hacker heroes is @straithe. Her work on how to hack human behaviors with robots is creative, incredible, and opening up a whole new area of incursion, information exfiltration and potential destruction. She thinks the hard thoughts about MLsec risks, and some of the things she’s talked about recently include:

  • Using a kid’s or dead relative’s voice in phishing phone calls. Yes, we can do that: anyone who podcasts or posts videos of themselves online is leaving data about their voice, it’s relatively easy to record people talking, and Baidu’s voice-mimicking programs are already good enough to fool someone.
  • Using bots (both online and offline) to build emotional bonds with kids, then ‘killing’ or threatening to ‘kill’ those bots.
  • Using passive-aggressive bots to make people do things against their own self-interests.

The bad things here are generally personal (distress etc), but that’s not “max evil” yet. What about these?

  • Changing people’s access to resources by triggering a series of actions that reduce their social credit scores and adversely change their life (this is possible using small actions and a model of their human network).
  • Microtargeting groups of people with emotive content, to force or change their behavior (e.g. start riots and larger conflicts: when does political advertising stop and warfare start?).
  • Taking control of a set of autonomous vehicles (what responsibility do you have if one crashes and kills people?)
  • Mass unintended consequences from your machine learning system (e.g. unintentional racism in its actions).

Now we’re on a bigger scale, both in numbers and effect. When we talk about chaos, we’re talking about letting loose adaptive algorithms and mobile entities that could potentially do anything from making people cry through to destroying their lives, or ending them. Welcome to my life, where we talk about just how far we could go with a technology on both attack and defense, because often we’re up against adversaries who won’t hesitate to have the evil thoughts and act on them, and someone has to think them in order to counter them. This is normal in infosec, and we need to have the same discussions about things like the limits of red team testing, blue team actions, deception and responsible disclosure.

Why we should care

People can get hurt in infosec operations. Some of those hurts are small (e.g. the loss of face from being successfully phished); some of them are larger. Some of the damage is to targets, sometimes it’s to your own team, sometimes it’s to people you don’t even know (like the non-trolls who found themselves on the big list of Russian trolls).

MLsec is infosec on steroids: we have this incredibly powerful, exciting research area. I’ve giggled too at the thought of cute robots getting people to open secure doors for them, and it’s fun to think the evil thoughts, to go to places where most people (or at least people who like to sleep at night) shouldn’t go. But with that power comes responsibility, and our responsibility here is to think about ethics, about societal and moral lines before we unknowingly cross them.

Some basic actions for us:

  • When we build or use models, think about whether they’re “fair” to different demographics. Human-created data is often biased against different groups — we shouldn’t just blindly replicate that.
  • If our models, bots, robots etc can affect humans, think about what the worst effects could be on them, and whether we’re prepared to accept that risk for them.
  • Make our design choices wisely.

Further reading:

Cleaning a borked Jupyter notebook

It’s a small simple thing, but this might save me (or you) an hour some day. One of my Jupyter notebooks got corrupted – I have some less-than-friendly tweet data in it that not only stopped my notebook from loading, but also crashed my Jupyter instance when I tried to open it.  I would normally just fix this in the terminal window, but thought it might be nice to share.  If you’re already familiar with the backend of Jupyter and JSON file formats, feel free to skip to the next post.  And if you’ve borked a Jupyter file but can still open it, then “clear all” might be a better solution.  Otherwise…

Jupyter notebooks are pretty things, but behind the scenes, they’re just a JSON data object.  Here’s how to look at that, using Python (in another Jupyter notebook: yay, recursion!):

import json
raw = open('examine_cutdown_tweets.ipynb', 'r').read()
raw[:200]

That just read the Jupyter notebook file in as text.  See those curly brackets: it’s formatted as JSON.  If we want to clean the notebook up, we need to parse that text as JSON. Here’s how (and a quick look at its metadata):

jin = json.loads(raw)
print(jin.keys())
(jin['metadata'], jin['nbformat'], jin['nbformat_minor'])

Each section of the notebook is in one of the “cells”. Let’s have a look at what’s in the first one:

print(len(jin['cells']))
jin['cells'][0].keys()

The output from each cell is in “outputs”. This is where the pesky killing-my-notebook output is hiding. The contents of the first of these look like:

print(type(jin['cells'][0]['outputs']))
jin['cells'][0]['outputs']

At which point, if I delete all the outputs (or the one that I *know* is causing problems), I should be able to read in my notebook and look at the code and comments in it again.

for i in range(len(jin['cells'])):
    jin['cells'][i]['outputs'] = []
jin['cells']

And write out the cleaned-up notebook:

jout = json.dumps(jin)
open('tmp_notebook.ipynb', 'w').write(jout)

Finally, the code that does the cleaning (and just that code) is:

import json

# Read the notebook file in as raw text
raw = open('mynotebook.ipynb', 'r').read()
# Parse that text as JSON
jin = json.loads(raw)
# Blank out every cell's outputs (the code and comments in the cells are untouched)
for i in range(len(jin['cells'])):
    jin['cells'][i]['outputs'] = []
# Write the cleaned-up notebook out to a new file
open('tmp_mynotebook.ipynb', 'w').write(json.dumps(jin))

Good luck!

An introvert’s guide to presentations

[Cross-post from Medium https://techblog.appnexus.com/an-introverts-guide-to-presentations-2e9de660b2c6]

I’m Sara, and I’m an introvert. I’m also a regular speaker who used to throw up before every presentation I gave (see under “introvert”). Here are some notes I made before the last talk I gave, explaining how I went from terrified to “Hey, let’s talk”:

  • This is not about you. What you’re doing up there on stage is starting a conversation — not being interviewed for your role of “acceptable human being”. Start that conversation (which will happen after your talk).
  • Stories sell. Give your presentation a story arc, so people can follow the ‘big story’ you’re telling, while you drill down into details.
  • There is no one ‘right’ way to do slides. Do what is comfortable to you. If that means bullet points, then use bullet points. (Don’t try to just read off your slides. The audience will forgive most things if you have a good story, but it does need to be a story).
  • Get comfortable with the stage. If you have a chance to stand on the stage before your talk, do it. (I sometimes arrive early to do this). That way, the only thing that’s new to you is the audience.
  • Make friends with some of your audience. If you’re at a new event, pick out a few people who seem to be “friendly faces”. If you get scared, talk to them. If you’re scared of questions, tell some friendly people what you would like to be asked.
  • Wear something comfortable to you. ‘Comfortable’ might mean clothes (and shoes) that you feel physically comfortable in, or clothes that you feel powerful in. You don’t need the extra stress of painful shoes and/or worrying about how you look. And if you need to take your favorite teddy bear in with you, do it.
  • Make friends with the technology (another good reason to go in early). You don’t need the extra stress of your video not playing on their machine, hunting for the right cable, or not knowing how the clicker works.
  • Do this more than once. Speaking can be terrifying for introverts, but after a while it gets easier. It can help to practice by talking about ‘known’ topics instead of new work: that takes some of the pressure off you, letting you concentrate on talking. It’s also okay to say “No” to giving talks on topics you’re not very familiar with, until it gets easier (h/t Andre Bickford).

That’s my list of tips so far. What advice do other speaking introverts have for their peers?

Auction Theory

Auction theory is becoming important globally, as we discuss the effects of programmatic advertising on politics and beliefs around the world.  It’s also becoming important personally: I’m starting work on incredibly rapid, large-scale auctions as part of my new job on programmatic advertising. I’m interested in the human parts of this, and especially in the effects of bots and automation on human needs, desires and interactions. I was starting to wade into the theory behind that, when I decided to stop for a moment and apply my new rule to first think about what in my background could be helpful to the job at hand.
I grew up in the German and English countryside. Saturdays and Wednesdays were market days, and sometimes instead of going to the local town’s market, we’d go over to the farmers market in a nearby town whose bridge still bore the plate offering to ship anyone who damaged it to Australia.
By “farmers’ market”, I don’t mean the sort of thing found in Union Square, where farmers dress up stalls to sell veg, meats, honey, bread etc to city dwellers.  I mean a good old-fashioned market where farmers were both the vendors and the buyers.  Central to this market was the livestock auction: a whirl of animals being herded into rings where a barely comprehensible (edit: totally incomprehensible despite years of trying to understand what the heck they were saying) auctioneer rattled off, seemingly without breathing, prices until each animal or group of animals was sold.  I loved standing there, watching the intensity of everyone involved, seeing small signals, small tells, gruff men (and a few women) holding game faces as they got the best sheep for a good price, as they sold those sheep, or watched who was buying and how.  Most of the day, these men would hide out in the pub, coming out only when their animals were involved: there was a signpost, of the old-fashioned, lots-of-arrows-stacked-on-each-other kind, that flipped up the name of the animals currently at auction, angled squarely at the drinkers.  I remember that farmers’ wives were busy too, but across the road from the livestock: selling cakes and produce, competing with a similar amount of understated fervor. As a small girl who’d learnt to count playing poker (thanks Dad!), I was wonderfully invisible in the market, and could observe anyone that I wanted until my parents’ patience ran out (or they finished shopping for produce).
We also went to furniture auctions in the town: places of great wonder for a small child, with boxes full of potential treasures being auctioned off amongst antiques. I started to learn about techniques like sniping (bidding at the last moment), forcing another participant’s hand (I still use that sometimes to push up the prices in a charity auction – sorry!), reading a room for who’s likely to bid, understanding how people bid not just on single items but also how they bid on sequences of items, how to be gracious in my bidding (sometimes there are people who just deserve that item more) and, most importantly, knowing the difference between the value of something and the most that I’m prepared to pay for it (which may or may not be above its monetary value, depending on its other values to me). The poker lessons also came in handy here too (so many tells… sooooo many tells).
And then eBay, and online auctions. No physical tells to read, but still people showed their intentions through their auction behaviors, and many of the old behaviors (like sniping and forcing up bids) still held. eBay fed some of my collection obsessions (which reminds me, there are some Russian teacups I need to go look at), and honed my ability to search for things competitor bidders might have missed (misspellings, alternative descriptions, getting into the heads of potential sellers, searching and working in different languages etc etc). It’s where I first saw bidding with bots, and realized just how much of human nature you could start to anticipate and automate for.
Between these, I also worked on adversarial systems, as part of the great games of the Cold War and beyond, moving from counter to counter-counter and so on up the chain. That got me into systems of value and competition within those systems, and eventually into relative calculations of risk. These weren’t auctions per se, but many of the same game mechanics applied.
And now programmatic advertising: tuning algorithms to bid, sell, counterbid for the eyeballs and actions of humans. That’s a lot to think about, but it turns out I’ve been thinking about auctions in one form or another for a lifetime, and algorithms for almost as long. Will be interesting to see how much of my own art and experience I can insert into this.
PS this isn’t all old-fart memories: I’m reading through Krishna’s “Auction Theory” and other classics whilst building some much newer tools. More notes on the tech side of things soon.

Humility and hacking

This is my year of change.  One of the things on my list is to become a better hacker.  Another is to tell more of my stories, lest I finally go insane or die in a bizarre hang-gliding accident involving a sheep and lose them all.  This post kinda covers both.

There’s a saying: “everyone knows who the good hackers are; nobody knows who the great ones are”.  Well, I know some of the great ones: the ones who broke things way ahead of everyone else, who pulled off unbelievable stunts but never ever went public, and they taught me a lot when I was younger (they didn’t always realise they were teaching me, but hey, it’s in the blood).  And yes, I broke into a lot of things; I hated the privilege of secrets, enjoyed the finesse of entering without being seen and yes, on occasion enjoyed the privilege of being a young woman in a world that saw young women as a non-threat (I loved that prejudice, but only in that particular setting). (side note: people and organisations are really very mundane, with the occasional interesting highlight. This might also be why I’m very very difficult to surprise).

And then I grew up, went straight and worked for the government (kinda sorta: there was a bit of overlap there).  I don’t think I ever lost my hacker spirit, in the original sense of hacker: that of drilling down into a system to really understand how it works, how it could work, and all the other possibilities within its build but outside its original design. There’s an argument that I may have hacked a few large organisations, and not in the breaking in sense, and at one time made a living out of doing it; I might also have a bad habit of fixing things just because they’re there.

But I have missed the breaking-in part, and although I’m now at a stage in life when most people have settled down happily with their families, with steady jobs and responsibilities they’ll stay with for probably a long while, I find myself alone again with the freedom to do almost anything.  And yes, I fill my spare time with social-good things like journalism credibility standards, crisis support, teaching and belief hacking research, but I still feel the pull of trying to understand something physical, to have the language to talk to my tribe.

All of which was an over-grandiose preamble to “I’m getting old and rusty and I should get humble about my current lack of skills, get my ass in gear and learn to break stuff”. This note is really for anyone thinking about doing the same thing: a breadcrumb trail of places to start. I’m using my background in AI as a guide for what I’m aiming at, and I’ve started by:

  • going to local and not-so-local infosec groups (nysecsec, defcon, bsides, mlsec)
  • asking hacker friends for pointers on how to get started (turns out I have a *lot* of friends who are hackers…)
  • following people who put out interesting hacker news, and reading the things they point at
  • reading books, sites and posts on theory
  • most importantly, starting with the practicals: course exercises and capture the flag to tune my brain up, examining code from attacks, doing some light lockpicking, thinking round the systems I have.

There’s no malice in this: I’m not about to break into military servers, or dump out sensitive site contents; I’ll be content to break my own stuff, to tune my brain back up again (with the plus side of choosing a better gym lock for myself) and work on the intersection of hacking and DS/AI.  It’s been a long year already, the conflicts around us on the internet aren’t likely to let up anytime soon, and we’ll need more people who can do this, so it seems sensible to me to be prepared…

Some of those places to start:

It’s a long time since I manipulated assembler and read logs and intent as easily as reading English, but I’ll get back there. Or at least to the point where I can build things that do that themselves.

The ‘citizens’ have power. They can help.

[cross-post from medium]

This is a confusing time in a confusing place. I’ve struggled with concepts like allyship from within, of whether I sit in a country on the verge of collapse or renewal, on how I might make some small positive difference in a place that could take the world down with it. And, y’know, also getting on with life, because even in the midst of chaos, we still eat and sleep and continue to do all the human things. But I’m starting to understand things again.

I often say that the units of this country are companies, not people: that democratic power here rests a lot in the hands of organizations (and probably will until Citizens United is overturned, people find their collective strength and politics comes back from the pay-to-play that it’s become in the last decade or two). But since we’re here, I’ve started thinking about how that might itself become an advantage. We’ve already seen a hint of it with StormFront being turned away by the big hosts. What else could be done here? Where else are the levers?

One thing is that a relatively small number of people are in a position to define what’s socially acceptable: either by removing the unacceptable (e.g. StormFront) or making it harder to find or easier to counter. And for me, this is not about ensuring a small group of nazis don’t coordinate and share stories: that’s going to happen anyway, whether we like it or not. It’s more about reducing their access and effect on our grandmothers, on people who might see a well-produced article or presidential statement and not realize that it doesn’t reflect reality (we can talk all we want about ‘truth’, and I have been known to do that, but some subjective assessments are just way past the bounds of uncertainty).

Removing the unacceptable is a pretty nuclear option, and one that is hard to do cleanly (although chapeau to the folks who say ‘no’). Making things harder to find and easier to counter — that should be doable. Like, tell me again how search and ranking works? Or rather, don’t (yes, I get eigenvector centrality and support, and enjoy Query Understanding) — it’s more important that people who are working on search and ranking are connected to the people working on things like algorithm ethics and classifying misinformation; people who are already wrestling with how algorithm design and page content adversely affect human populations, and the ethics of countering those effects. There’s already a lot of good work starting on countering the unacceptable (e.g. lots of interesting emergent work like counter-memes, annotation, credibility standards, removing advertising money and open source information warfare tools).

Defining and countering “unacceptable” is a subject for another note. IMHO, there’s a rot spreading in our second (online) world: it’s a system built originally on trust and a belief in its ability to connect, inform, entertain, and these good features are increasingly being used to bully, target, coerce and reflect the worst of our humanities. One solution would be to let it collapse on itself, to be an un-countered carrier for the machine and belief hacks that I wrote about earlier. Another is to draw our lines, to work out what we as technologists can and should do now. Some of those things (like adapt search) are thematically simple but potentially technically complex, but that’s never scared us before (and techies like a good chewy challenge). And personally, if things do start going Himmler, I’d rather be with Bletchley than IBM.

Boosting the data startup

[cross-post from Medium]

Some thoughts from working in data-driven startups.

Data scientists, like consultants, aren’t needed all the time. You’re very useful as the data starts to flow and the business questions start to gel, but before that there are other skills more needed (like coding); and afterwards there’s a lull before your specialist skills are a value-add again.

There is no data fiefdom. Everyone in the organization handles data: your job is to help them do that more efficiently and effectively, supplementing with your specialist skills when needed, and getting out of the way once people have the skills, experience and knowledge to fly on their own. That knowledge should include knowing when to call in expert help, but often the call will come in the form of happening to walk past their desk at the right time.

Knowing when to get involved is a delicate dance. Right at the start, a startup is (or should be) still working out what exactly it’s going to do, feeling its way around the market for the place where its skills fit; even under Lean, this can resemble a snurfball of oscillating ideas, out of which a direction will eventually emerge: it’s too easy to constrain that oscillation with the assumptions and concrete representations that you need to make to do data science. You can (and should) do data science in an early startup, but it’s more a delicate series of nudges and estimates than hardcore data work. The point when the startup starts to gel around its core ideas and write code is a good time to join as a data scientist, because you can start setting good behavior patterns for both data management and the business questions that rely on it, but beware that it isn’t an easy path: you’ll have a lot of rough data to clean and work with, and spend a lot of your life justifying your existence on the team (it helps a lot if you have a second role that you can fall back on at this stage). Later on, there will be plenty of data and people will know that you’re needed, but you’ll have lost those early chances to influence how and what data is stored, represented, moved, valued and appraised, and how that work links back (as it always should) to the startup’s core business.

There are typically 5 main areas that the first data nerd in a company will affect and be affected by (h/t Carl Anderson from Warby Parker): data engineering, supporting analysts, supporting metrics, data science and data strategy. These are all big, 50-person-company areas, and they grow out of smaller initial seeds: data engineering, analyst support, metric support, data science and data governance, each covered below.

  • Data engineering is the business of making data available and accessible. That starts with the dev team doing their thing with the datastores, pipelines, APIs etc needed to make the core system run. It’ll probably be some time before you can start the conversation about a second storage system for analysis use (because nobody wants the analysts slowing down their production system) so chances are you’ll start by coding in whatever language they’re using, grabbing snapshots of data to work on ’til that happens.
  • To start with, there will also be a bunch of data outside the core system, that parts of the company will run on ’til the system matures and includes them; much of it will be in spreadsheets, lists and secondary systems (like CRMs). You’ll need patience, charm, and a lot of applied cleaning skills to extract these from busy people and make the data in them more useful to them. Depending on the industry you’re in, you may already have analysts (or people, often system designers, doing analysis) doing this work. Your job isn’t to replace them at this; it’s to find ways to make their jobs less burdensome, usually through a combination of applied skills (e.g. writing code snippets to find and clean dirty datapoints), training (e.g. basic coding skills, data science project design etc), algorithm advice and help with more data access and scaling.
  • Every company has performance metrics. Part of your job is to help them be more than window-dressing, by helping them link back to company goals, actions and each other (understanding the data parts of lean enterprise/ lean startup helps a lot here, even if you don’t call it that); it’s also to help find ways to measure non-obvious metrics (mixtures, proxies etc; just because you can measure it easily doesn’t make it a good metric; just because it’s a good metric doesn’t make it easy to measure).
  • Data science is what your boss probably thought you were there to do, and one of the few things that you’ll ‘own’ at first. If you’re a data scientist, you’ll do this as naturally as breathing: as you talk to people in the company, you’ll start getting a sense of the questions that are important to them; as you find and clean data, you’ll have questions and curiosities of your own to satisfy, often finding interesting things whilst you do. Running an experiment log will help here: for each experiment, note the business need, the dataset, what you did, and most importantly, what you learned from it (there’s a minimal log-entry sketch after this list). That not only frames and leaves a trail for someone understanding your early decisions; it will also leave a blueprint for other people you’re training in the company on how data scientists think and work (because really you want everyone in a data-driven company to have a good feel for their data, and there will be a *lot* of data). Some experiments will be for quick insights (e.g. into how and why a dataset is dirty); others will be longer and create prototypes that will eventually become either internal tools or part of the company’s main systems; being able to reproduce and explain your code will help here (e.g. Jupyter notebooks FTW).
  • With data comes data problems. One of these is security; others are privacy, regulatory compliance, data access and user responsibilities. These all fall generally under ‘data governance’. You’re going to be part of this conversation, and will definitely have opinions on things like data anonymisation, but early on, much of it will be covered by the dev team / infosec person, and you’ll have to wait for the right moment to start conversations about things like potential privacy violations from combining datasets, data ecosystem control and analysis service guarantees. You’ll probably do less of this part at first than you thought you would.
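
Here’s the experiment-log sketch promised above: a minimal, hypothetical format (the field names are just a suggestion) that can live in a JSON file or a markdown cell at the top of each notebook:

import json
from datetime import date

# One entry per experiment; everything here is an invented example
experiment_log = [
    {
        "date": str(date.today()),
        "business_need": "Why are signups dropping off at the payment step?",
        "dataset": "signups snapshot, ~12k rows, pulled from the production replica",
        "what_we_did": "Funnel breakdown by browser and country, plus a quick logistic regression",
        "what_we_learned": "Drop-off concentrates in one browser version; two countries have missing data",
        "next_steps": "Check the tracking code for that browser; backfill the missing countries",
    }
]

# Keep appending entries and writing the log back out as you go
with open("experiment_log.json", "w") as f:
    json.dump(experiment_log, f, indent=2)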

Depending on where in the lifecycle you’ve joined, you’re either trying to make yourself redundant, or working out how you need to build a team to handle all the data tasks as the company grows. Making yourself redundant means giving the company the best data training, tools, and structural and scientific head start you can; leaving enough people, process and tech behind to know that you’ve made a positive difference there. Building the team means starting by covering all the roles (it helps if you’re either a ‘unicorn’ — a data scientist who can fill all the roles above — or pretty close to being one; a ‘pretty white pony’ perhaps) and gradually moving them either fully or partially over to other people you’ve either brought in or trained (or both). Neither of these things is easy; both are tremendously rewarding and will grow your skills a lot.

What could help fix belief problems?

[cross-post from Medium]

Last part (of 4) from a talk I gave about hacking belief systems.

Who can help?

My favourite quote is “ignore everything they say, watch everything they do”. It was originally advice to girls about men, but it works equally well with both humanity and machines.

If you want to understand what is happening behind a belief-based system, you’re going to need a contact point with reality. Be aware of what people are saying, but also watch their actions; follow the money, and follow the data, because everything leaves a trace somewhere if you know how to look for it (looking may be best done as a group, not just to split the load, but also because diversity of viewpoint and opinion will be helpful).

Verification and validation (V&V) become important. Verification means going there. For most of us, verification is something we might do up front, but rarely do as a continuing practice. Which, apart from making people easy to phish, also makes us vulnerable to deliberate misinformation. Want to believe stuff? You need to do the leg-work of cross-checking that the source is real, finding alternate sources, or getting someone to physically go look at something and send photos (groups like findyr still do this). Validation is asking whether this is a true representation of what’s happening. Both are important; both can be hard.

One way to start V&V is to find people who are already doing this, and learn from them. Crisismappers used a bunch of V&V techniques, including checking how someone’s social media profile had grown, looking at activity patterns, tracking and contacting sources, and not releasing a datapoint until at least 3 messages had come in about it. We also used data proxies and crowdsourcing from things like satellite images (spending one painful Christmas counting and marking all the buildings in Somalia’s Afgooye region), automating that crowdsourcing where we could.

Another group that’s used to V&V is the intelligence community, e.g. intelligence analysts (IAs). As Heuer puts it in the Psychology of Intelligence Analysis (see the CIA’s online library), “Conflicting information of uncertain reliability is endemic to intelligence analysis, as is the need to make rapid judgments on current events even before all the evidence is in”. Both of these groups (IAs and crisismappers) have been making rapid judgements about what to believe, how to handle deliberate misinformation, what to share and how to present uncertainty in it for years now. And they both have toolsets.

Is there a toolset?

[Figure: graphs from a session on statistical significance, A/B testing and friends. We can ‘clearly’ see when two distributions are separate or nearly the same, but need tools to work out where the boundary between those two states lies.]

The granddaddy of belief tools is statistics. Statistics helps you deal with and code for uncertainty; it also makes me uncomfortable, and makes a lot of other people uncomfortable. My personal theory on that is that it’s because statistics tries to describe uncertainty, and there’s always that niggling feeling that there’s something else we haven’t quite considered when we do this.
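
As a small illustration of the boundary problem in the figure above (a sketch assuming numpy and scipy are installed; the samples are simulated), a two-sample t-test puts a number on how separable two sets of observations are:

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Two clearly separated samples, and two that heavily overlap
far_apart = (rng.normal(0.0, 1.0, 500), rng.normal(1.0, 1.0, 500))
nearly_same = (rng.normal(0.0, 1.0, 500), rng.normal(0.05, 1.0, 500))

for label, (a, b) in [("far apart", far_apart), ("nearly the same", nearly_same)]:
    t_stat, p_value = stats.ttest_ind(a, b, equal_var=False)
    print(f"{label}: t = {t_stat:.2f}, p = {p_value:.3g}")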

Statistics is nice once you’ve pinned down your problem and got it into a nicely quantified form. But most data on belief isn’t like that. And that’s where some of the more interesting qualitative tools come in, including structured analytics (many of which are described in Heuer’s book “Structured Analytic Techniques”). I’ll talk more about these in a later post, but for now I’ll share my favourite technique: Analysis of Competing Hypotheses (ACH).

ACH is about comparing multiple hypotheses; these might be problem hypotheses or solution hypotheses at any level of the data science ladder (e.g. from explanation to prediction and learning). The genius of ACH isn’t that it encourages people to come up with multiple competing explanations for something; it’s in the way that a ‘winning’ explanation is selected, i.e. by collecting evidence *against* each hypothesis and choosing the one with the least disproof, rather than trying to find evidence to support the “strongest” one. The basic steps (Heuer) are:

  • Hypothesis. Create a set of potential hypotheses. This is similar to the hypothesis generation used in hypothesis-driven development (HDD), but generally has graver potential consequences than how a system’s users are likely to respond to a new website, and is generally encouraged to be wilder and more creative.
  • Evidence. List evidence and arguments for each hypothesis.
  • Diagnostics. List evidence against each hypothesis; use that evidence to compare the relative likelihood of different hypotheses.
  • Refinement. Review findings so far, find gaps in knowledge, collect more evidence (especially evidence that can remove hypotheses).
  • Inconsistency. Estimate the relative likelihood of each hypothesis; remove the weakest hypotheses.
  • Sensitivity. Run sensitivity analysis on evidence and arguments.
  • Conclusions and evidence. Present most likely explanation, and reasons why other explanations were rejected.
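
Here’s a toy sketch of the diagnostics and inconsistency steps (the hypotheses, evidence and ratings are invented; real ACH uses a carefully argued matrix rather than one-letter codes). It ranks hypotheses by how much evidence is inconsistent with them, not by how much supports them:

# Ratings per hypothesis: "C" consistent, "I" inconsistent, "N" neutral
hypotheses = ["H1: organic rumour", "H2: coordinated campaign", "H3: journalist error"]
evidence = {
    "accounts created in the same week": ["N", "C", "N"],
    "identical phrasing across platforms": ["I", "C", "I"],
    "story traced to a single mistaken article": ["C", "I", "C"],
    "amplification spikes at regular intervals": ["I", "C", "N"],
}

# Score by *least disproof*: count the evidence that is inconsistent with each hypothesis
inconsistency = [0] * len(hypotheses)
for ratings in evidence.values():
    for i, rating in enumerate(ratings):
        if rating == "I":
            inconsistency[i] += 1

# The hypothesis with the least inconsistent evidence comes out on top
for hypothesis, score in sorted(zip(hypotheses, inconsistency), key=lambda pair: pair[1]):
    print(f"{score} inconsistent item(s): {hypothesis}")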

This is a beautiful thing, and one that I suspect several of my uncertainty in AI colleagues were busy working on back in the 80s. It could be extended (and has been); for example, DHCP creativity theory might come in useful in hypothesis generation.

What can help us change beliefs?

[Figure: Boyd’s original OODA loop diagram]

Which brings us back to one of the original questions. What could help with changing beliefs? My short answer is “other people”, and that the beliefs of both humans and human networks are adjustable. The long answer is that we’re all individuals (it helps if you say this in a Monty Python voice), but if we view humans as systems, then we have a number of pressure points, places where we can be disrupted, some of which are nicely illustrated in Boyd’s original OODA loop diagram (but do this gently: go too quickly with belief change, and you get shut down by cognitive dissonance). We can also be disrupted as a network, borrowing ideas from biostatistics, and creating idea ‘infections’ across groups. Some of the meta-level things that we could do are:

  • Teaching people about disinformation and how to question (this works best on the young; we old ‘uns can get a bit set in our ways)
  • Making belief differences visible
  • Breaking down belief ‘silos’
  • Building credibility standards
  • Creating new belief carriers (e.g. meme wars)

What goes wrong in belief-based models?

[cross-post from Medium]

This is part 3 of a talk I gave recently on hacking belief systems using old AI techniques. Parts 1 and 2 looked at human and computational belief, and the internet and computer systems development as belief-based systems. Now we look at the things that can (and do) go wrong with belief-based systems.

Machine Belief

Machines have a belief system either because we hard-wire that system to give a known output for a known input (all those if-then statements), or because they learn it from data. We touched on machine learning in part 1 of this series (remember the racist chatbot?), and on how it’s missing the things like culture, error-correction and questioning that we install in small humans. But before we get to how we might build those in, let’s look at the basic mechanics of machine learning.

At the heart of machine learning are algorithms: series of steps that, repeated, create the ability to cluster similar things together, classify objects, and build inference chains. Algorithms are hackable too, and three big things tend to go wrong with them: inputs, models and willful abuse. Bias and missing data are issues on the input side: for instance, if an algorithm is fed racially-biased cause-to-effect inputs, it’s likely to produce a similarly-biased system, and although we can interpolate between data points, if there’s a demographic missing from our training data, we’re likely to misrepresent them.

And although we may appear to have algorithms that automatically turn training data into models, there are almost always human design decisions being used to tune them, from the interpretations that we make of inputs (e.g. treating mouseclicks as equivalent to interest in a website, or assuming that one phone per person in a country means that everyone has a phone, when, as is common in West Africa, it could mean that some people have multiple phones and others have none) to the underlying mathematical models that they use.

Back in the day, we used to try to build models of the world by hand, with assertions like "all birds can fly", "birds have wings", "fred is a bird so fred can fly", and many arguments over what to do about penguins. That took time and created some very small and fragile systems, so now we automate a lot of it: we give algorithms examples of input and output pairs, and leave them to learn enough of a model that we get sensible results when we feed in an input they haven't seen before. But sometimes we build systems whose models don't fit the underlying mechanics of the world they're trying to capture, because we're using the wrong type of model (e.g. trying to fit straight lines to a curve), because the input data doesn't cover the things we need it to learn, because the world has changed since we built the model, or because any of a dozen other things have gone wrong.
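
Here's a small sketch of the "wrong type of model" failure: fitting a straight line to data that was really generated by a curve. The quadratic "true world" and the noise level are assumptions chosen purely for illustration.

```python
import numpy as np

# Sketch of a model-mismatch failure: fitting a straight line to data
# that was actually generated by a curve (a quadratic here, as an assumption).
rng = np.random.default_rng(1)
x = np.linspace(-3, 3, 200)
y = x**2 + rng.normal(scale=0.3, size=x.shape)   # the true world is curved

line = np.polyfit(x, y, deg=1)       # the wrong model family
curve = np.polyfit(x, y, deg=2)      # a model family that fits the mechanics

for name, coeffs in [("line", line), ("quadratic", curve)]:
    residuals = y - np.polyval(coeffs, x)
    print(f"{name:>9}: mean squared error = {np.mean(residuals**2):.2f}")
```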

Machines can also be lied to. When we build models from data, we usually assume that the data, although inexact, isn't deliberately wrong. That's no longer true: systems can be 'gamed' with bad data (misinformation if it's accidental, disinformation if it's deliberate), and machine learning from data on the internet is especially susceptible to this.
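
As a toy illustration of gaming a learned system with bad data (made-up data and a deliberately simple nearest-neighbour classifier, not a claim about any real platform), planting confidently mislabelled examples inside one class's territory is enough to make the retrained model start misclassifying genuine examples.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(2)

# Clean, easily separable training data (two made-up classes).
x_clean = np.vstack([rng.normal(-2, 1, (200, 2)), rng.normal(+2, 1, (200, 2))])
y_clean = np.array([0] * 200 + [1] * 200)

# 'Disinformation': points planted in class-1 territory but confidently labelled 0.
x_poison = rng.normal(+2, 0.5, (300, 2))
y_poison = np.zeros(300, dtype=int)

x_test = np.vstack([rng.normal(-2, 1, (200, 2)), rng.normal(+2, 1, (200, 2))])
y_test = np.array([0] * 200 + [1] * 200)

clean_model = KNeighborsClassifier(n_neighbors=5).fit(x_clean, y_clean)
poisoned_model = KNeighborsClassifier(n_neighbors=5).fit(
    np.vstack([x_clean, x_poison]), np.concatenate([y_clean, y_poison]))

print("accuracy, trained on clean data:   ", clean_model.score(x_test, y_test))
print("accuracy, trained on poisoned data:", poisoned_model.score(x_test, y_test))
```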

And issues in machine belief aren't abstract: as more systems are used to make decisions about people, Lazar's catalogue of algorithm risks becomes important to consider when building any algorithm. Those risks include manipulation, bias, censorship, privacy violations, social discrimination, property rights violations, market power abuses, cognitive effects and heteronomy (e.g. individuals no longer having agency over the algorithms influencing their lives).

Human Belief

Humans and machines can go wrong in similar ways (missing inputs, incorrect assumptions, models that don’t explain the input data etc), but the way that humans are wired adds some more interesting errors.

When we’re shown a new fact (e.g. I crashed your car), our brains initially load in that fact as true, before we consciously decide whether it’s true or not. That is itself a belief, but it’s a belief that’s consistent with human behaviour. And it has some big implications: if we believe, even momentarily, that something false is true, then by the time we reject it, it’s a) left a trace of that belief in our brain, and b) our brain has built a set of linked beliefs (e.g. I’m an unsafe driver) that it doesn’t remove with it. Algorithms can have similar memory trace problems; for instance, removing an input-output pair from a support vector machine is unlikely to remove its effect from the model, and unpicking bad inputs from deep learning systems is hard.

Humans have imperfect recall, especially when something fits their unconscious bias (try asking a US conservative about inauguration numbers). Humans also suffer from confirmation bias: they're more likely to believe something if it fits their existing belief framework. And humans have mental immune systems, where new facts that are far enough away from their existing belief systems are immediately rejected. I saw this a lot when I worked as an innovations manager (we called it the corporate immune system): if we didn't introduce ideas very carefully and/or gradually, the cognitive dissonance in decision-makers' heads was too great, and we watched them shut those ideas out.

We’re complicated. We also suffer from the familiarity backfire effect, as demonstrated by advertising: the more someone tries to persuade us that a myth is wrong, the more familiar we become with that myth, and more likely we are to believe it (familiarity and belief are linked). At the same time, we can also suffer from the overkill backfire effect, as demonstrated by every teenager on the planet ever, where the more evidence we are given against a myth, and the more complex that evidence is compared to a simple myth, the more resistant we are to believing it (don’t despair: the answer is to avoid focussing on the myth whilst focussing on more simple alternative explanations).

And we’re social: even when presented with evidence that could make us change our minds, our beliefs are often normative (the beliefs that we think other people expect us to believe, often held so we can continue to be part of a group), and resistance to new beliefs is part of holding onto our in-group identity. Counters to this are likely to include looking at group dynamics and trust, and working out how to incrementally change beliefs not just at the individual but also at the group level.

Combined belief

[Image: 'has-a' semantic network diagram, from http://users.cs.cf.ac.uk/Dave.Marshall/AI2/node145.html]

Human and machine beliefs have some common issues and characteristics. First, they have to deal with the frame problem: you can't measure the whole world, so there are always parts of your system potentially affected by things you haven't measured. This can be a problem; we spent a lot of the early 80s reasoning about children's building blocks (Winston's blocks world) and worrying about things like the "naughty baby problem", where sometime during our careful reasoning about the world, an outside force (the "naughty baby") could come into the frame and move all the blocks around.

We also have input issues, e.g. unknown data and uncertainty about the world. Some of that uncertainty is in the data itself; some of it is in the sources (e.g. other people) and channels (e.g. how facts are communicated); a useful trick from intelligence system design is to assess and score those three sources of uncertainty separately. Some uncertainty can be reduced by averaging: for instance, if you ask several farmers for the weight of a bull, each individual guess is likely to be wrong, but the average of the guesses is usually close to the real weight (several cool tricks in statistics work this way).
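
The bull-weighing example is easy to simulate; the true weight and the spread of the guesses below are made-up numbers.

```python
import numpy as np

# Quick sketch of the bull-weighing example: individual guesses are noisy,
# but their average lands close to the true weight (numbers are made up).
rng = np.random.default_rng(4)
true_weight = 540                                     # kg, an assumption
guesses = true_weight + rng.normal(0, 60, size=200)   # each farmer off by ~60 kg

print("worst individual guess off by:", round(np.max(np.abs(guesses - true_weight))), "kg")
print("average of all guesses off by:", round(abs(np.mean(guesses) - true_weight), 1), "kg")
```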

And then there’s human bias. A long time ago, if you wanted to study AI, you inevitably ended up reading up on cognitive psychology. Big data didn’t exist (it did, but not applied to everything: it was mainly in things like real-time sonar processing systems), the Cyc system was a Sisyphean attempt to gather enough ‘facts’ to make a large expert system, and most of our efforts revolved around how to make sense of the world given a limited number of ‘facts’ about it (and also how to make sense of the world, given a limited number of uncertain data points and connections between them). And then the internet, and wikipedia, and internet 2.0 where we all became content providers, and suddenly we had so much data that we could just point machine learning algorithms at it, and start finding useful patterns.

We relied on two things: having a lot of data, and not needing to be specific: all we cared about was that, on average, the patterns worked. So we could sell more goods, because the patterns let us target the types of goods at the types of people most likely to buy them, with the types of prompts they were most likely to respond to. But we forgot the third thing: that almost all data collection has human bias in it, either in the collection, the models used, or the assumptions made when using them. And that is starting to be a real problem for both humans and machines trying to make sense of the world from internet inputs.

And sometimes, beliefs are wrong

[Image: the Kuhn Cycle]

Science is cool — not just because blowing things up is acceptable, but also because good science knows that it’s based on models, and actively encourages the possibility that those models are wrong. This leads to something called the Kuhn Cycle: most of the time we’re building out a model of the world that we’ve collectively agreed is the best one available (e.g. Newtonian mechanics), but then at some point the evidence that we’re collecting begins to break that collective agreement, and a new stronger model (e.g. relativity theory) takes over, with a strong change in collective belief. It usually takes an intelligent group with much humility (and not a little arguing) to do this, so it might not apply to all groups.

[Image: Red Cross t-shirt printed after the 2010 Chile earthquake]

And sometimes people lie to you. The t-shirt above was printed to raise funds for the Red Cross after the 2010 Chile earthquake. It was part of a set of tweets pretending to be from a trapped mother and child, and was believed by the local fire brigade, who instead found an unaffected elderly couple at that address. Most crisismapping deployments have very strong verification teams to catch this sort of thing, and have been dealing with deliberate misinformation like this during crises since at least 2010. There's also been a lot of work on veracity in data science (which added Veracity as a fourth "V" to "Volume, Velocity, Variety").

[Image: Fox News visualisation: not exactly untrue, but not emphasising the truth either]

And sometimes the person lying is you: if you're doing data science, be careful not to deceive people by mistake. By not starting your axes at zero (see above), for example, you can give readers a false impression of how significant the differences between numbers are.
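
If you want to see (or avoid) the effect, here's a small matplotlib sketch with made-up numbers: the same two values plotted once with a truncated y-axis and once with a zero-based one.

```python
import matplotlib.pyplot as plt

# Same made-up numbers, two presentations: a y-axis starting near the data
# makes a small difference look dramatic; starting at zero keeps it honest.
categories, values = ["A", "B"], [102, 106]

fig, (misleading, honest) = plt.subplots(1, 2, figsize=(8, 3))
misleading.bar(categories, values)
misleading.set_ylim(100, 107)                 # truncated axis exaggerates the gap
misleading.set_title("y-axis starts at 100")

honest.bar(categories, values)
honest.set_ylim(0, 120)                       # zero-based axis keeps proportions
honest.set_title("y-axis starts at 0")

plt.tight_layout()
plt.savefig("axis_truncation.png")
```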