The Ethics of Algorithms

I opened a discussion on the ethics of algorithms recently, with a small thing about what algorithms are, what can be unethical about them and how we might start mitigating that. I kinda sorta promised a blogpost off that, so here it is.

Algorithm? Wassat?


Al-Khwarizmi (wikipedia image)

Let’s start by demystifying this ‘algorithm’ thing. An algorithm (from Al-Khwārizmī, the 9th-century Persian mathematician above) is a sequence of steps to solve a problem. The algorithm for drinking coffee might be: get a mug, add coffee to the mug, put the mug to your mouth, and repeat. An algorithm doesn’t have to run on a computer: it might be the processes you use to run a business, or the set of steps you follow to catch a train.

But the algorithms that the discussion organizers were concerned about aren’t the ones I use to avoid spilling coffee all over my face. They were worried about the algorithms used in computer-assisted decision making, in things like criminal sentencing, humanitarian aid, search results (e.g. which information is shown to which people) and the decisions made by autonomous vehicles; the algorithms used by, or instead of, human decision-makers to affect the lives of other human beings. Many of these algorithms get grouped into the bucket labelled “machine learning”. There are many variants of machine learning algorithm, but generally what they do is find and generalize patterns in data: either in a supervised way (“if you see these inputs, expect these outputs; now tell me what you expect for an input you’ve never seen before, but that is similar to something you have”), a reinforcement-learning way (“for these inputs, your response is/isn’t good”) or an unsupervised way (“here’s some data; tell me about the structures you see in it”). Which is great if you’re classifying cats vs dogs or flower types from petal and leaf measurements, but potentially disastrous if you’re deciding whom to sentence and for how long.
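
To make the “supervised” case concrete, here’s a minimal sketch (not any of the systems discussed here, just an illustration using scikit-learn’s bundled iris flower data) of the pattern: show a model labelled examples, then ask it about inputs it hasn’t seen.

    # A hedged illustration of supervised learning: classify iris flowers from
    # petal and sepal measurements, then check how well the learned pattern
    # generalizes to flowers the model was never shown.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)
    print("accuracy on unseen flowers:", model.score(X_test, y_test))

The same mechanics apply whether the labels are flower species or “reoffended / didn’t reoffend”, which is exactly why the choice of training data matters so much.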


Simple neural network (wikipedia image)

Let’s anthropomorphise that. A child absorbs what’s around them: algorithms do the same. One use is to train the child/machine to reason or react, by connecting what they see in the world (inputs) with what they believe the state of the world to be (outputs) and the actions they take using those beliefs. And just like children, the types of input/output pairs (or reinforcements) we feed to a machine-learning based system affect the connections and decisions that it makes. Also like children, different algorithms have different abilities to explain why they made specific connections or responded in specific ways, ranging from clear explanations of reasoning (e.g. decision trees, which make a series of decisions based on each input) to something that can be expressed mathematically but not cogently (e.g. neural networks and other ‘deep’ learning algorithms, which adjust ‘weights’ between inputs, outputs and ‘hidden’ representations, mimicking the way that neurons connect to each other in human brains).
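
That difference in explainability is easy to see in code. A hedged sketch, again using scikit-learn’s iris data purely as a stand-in: the decision tree can print its reasoning as readable rules, while the neural network can only offer you its weight matrices.

    # Decision tree: reasoning we can read. Neural network: weights we can't.
    from sklearn.datasets import load_iris
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    data = load_iris()
    tree = DecisionTreeClassifier(max_depth=2).fit(data.data, data.target)
    print(export_text(tree, feature_names=list(data.feature_names)))  # readable if/else rules

    net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000).fit(data.data, data.target)
    print([w.shape for w in net.coefs_])  # just arrays of numbers, no narrative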

Algorithms can be Assholes

Algorithms are behind a lot of our world now: Google deciding which search results you’re shown, Facebook deciding which posts appear in your feed, medical systems flagging whether you might have cancer. And sometimes those algorithms can be assholes.


Headlines (screenshots)

Here are two examples. The first is a Chinese program that maps facial images of ‘criminals’ to a set of ‘criminal’ facial features, which its designers claim can determine with nearly 90% accuracy whether someone is a criminal from just their photo. Their discussion of “the normality of faces of non-criminals” aside, this has echoes of phrenology, and should raise all sorts of alarms about imitating human bias. The second is a chatbot that was trained on Twitter data; its headline should not be too surprising to anyone who’s recently read any unfiltered social media.

We make lots of design decisions when we create an algorithm. One of them is which dataset to use. We train algorithms on data, and that data is often generated by humans and by human decisions (e.g. “do we jail this person”), many of which are imperfect and biased (e.g. believing that people whose eyes are close together are untrustworthy). This is a problem if we use those results blindly, and we should always be asking about the biases that we might consciously or unconsciously be including in our data. But that’s not the only thing we can do: instead of just dismissing algorithm results as biased, we can also use them constructively, to hold a mirror up to ourselves and our societies, to show us things that we otherwise conveniently ignore, and perhaps should be thinking about addressing in ourselves.
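
Before trusting a dataset, it’s worth a quick audit of who is in it and how they’re labelled. A hedged sketch below; the column names (“group”, “jailed”) are hypothetical placeholders, not a real dataset.

    # Two questions to ask of any human-generated training data:
    # how well is each group represented, and how differently is it labelled?
    import pandas as pd

    df = pd.DataFrame({
        "group":  ["A", "A", "A", "B", "B", "B", "B", "B"],
        "jailed": [1, 1, 0, 0, 0, 0, 1, 0],  # a past human decision, not ground truth
    })

    print(df["group"].value_counts(normalize=True))  # coverage per group
    print(df.groupby("group")["jailed"].mean())      # positive-label rate per group

Big gaps in either number don’t prove bias on their own, but they’re exactly the kind of thing to notice before a model quietly learns them.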

In short, it’s easy to build biased algorithms from biased data. We should strive to teach algorithms using ‘fair’ data; when we can’t, we need other strategies for our models of the world, and we can either talk about the terror of biased algorithms being used to judge us, or think about what they’re showing us about ourselves and our society’s decision-making, and where we might improve both.

What goes wrong?

If we want to fix our ‘asshole’ algorithms and algorithm-generated models, we need to think about the things that go wrong.  There are many of these:

  • On the input side, we have things like biased inputs, or biased connections between cause and effect, creating biased classifications (see the note on input data bias above); bad design decisions about unclean data (e.g. keeping in those 200-year-old people); and missing whole demographics because we didn’t think hard about who the input data covered (e.g. women are often missing from developing-world datasets, mobile phone totals are often interpreted as one phone per person, etc.). A short data-checking sketch follows this list.
  • On the algorithm design side, we can have bad models: lazy assumptions about what input variables actually mean (think for a moment about the last survey you filled out, the interpretations you made of its questions, and how you as a researcher might think differently about those values); lazy interpretations of connections between variables and proxies (e.g. clicks == interest); algorithms that don’t explain or model the data they’re given well; algorithms fed junk inputs (there’s always junk in data); and models that are trained once on one dataset but used in an ever-changing world.
  • On the output side, there’s also overtrust and overinterpretation of outputs. And overlaid on that are the willful abuses, like gaming an algorithm with ‘wrong’ or biased data (e.g. propaganda, but also why I always use “shark” as my first search of the day), and inappropriate reuse of data without the ethics, caveats and metadata that came with the original (e.g. using school registration data to target ‘foreigners’).
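
As promised above, here’s a hedged sketch of two of those input-side and design-side checks: refusing to silently keep impossible values, and noticing when the world a model now sees has drifted away from the data it was trained on. Column names, thresholds and numbers are all illustrative.

    import pandas as pd

    def sanity_check(df: pd.DataFrame) -> pd.DataFrame:
        """Flag and drop rows with impossible ages rather than silently keeping them."""
        bad = df[(df["age"] < 0) | (df["age"] > 120)]
        if not bad.empty:
            print(f"dropping {len(bad)} rows with impossible ages")
        return df.drop(bad.index)

    def drifted(train: pd.Series, live: pd.Series, tolerance: float = 0.25) -> bool:
        """Crude drift alarm: has the live mean moved far from the training mean?"""
        shift = abs(live.mean() - train.mean()) / (abs(train.mean()) + 1e-9)
        return shift > tolerance

    train = pd.DataFrame({"age": [23, 45, 31, 52, 200, 38]})
    clean = sanity_check(train)                                # drops the 200-year-old
    print(drifted(clean["age"], pd.Series([61, 67, 72, 65])))  # True: the population changed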

But that’s not quite all. As with the outputs of humans, the outputs of algorithms can be very context-dependent, and we often make different design choices depending on that context. Last week, for instance, I found myself dealing with a spammer trying to abuse our site at the same time as helping our business team stop their emails going into customers’ spam filters. The same algorithms, different viewpoints, different needs, different experiences: algorithm designers have a lot to weigh up every time.

Things to fight

Accidentally creating a deviant algorithm is one thing; deliberately using algorithms (including well-meant algorithms) for harm is another, and of particular interest in the current US context. There are good detailed texts about this, including Cathy O’Neil’s work, and Latzer, who categorised abuses as:

  • Manipulation
  • Bias
  • Censorship
  • Privacy violations
  • Social discrimination
  • Property right violations
  • Market power abuses
  • Cognitive effects (e.g. loss of human skills)
  • Heteronomy (individuals no longer have agency over the algorithms influencing them)

I’ll just note that these things need to be resisted, especially by those of us in a position to influence their propagation and use.

How did we get here?

Part of the issue above is in how we humans interface with and trust algorithm results (and there are many of these, e.g. search, news feed generators, recommendations, recidivism predictions etc), so let’s step back and look at how we got to this point.

And we’ve got here over a very long time: at least a century or two, back to when humans started using machines that they couldn’t easily explain because those machines could do tasks that had become too big for humans. We automate because humans can’t handle the data loads coming in (e.g. in legal discovery, where a team often has a few days to sift through millions of emails and other organizational data); we also automate because we hope that machines will be smarter than us at spotting subtle patterns. We can’t not automate discovery, but we also have to be aware of the ethical risks in doing it. And humans working with algorithms (or any other automation) tend to go through cycles: we’re cynical and undertrust a system until it’s “proved”, then tend to overtrust its results (both are part of automation trust). In human terms, we’re balancing these things:

More human:

  • Overload
  • Incomplete coverage
  • Missed patterns and overlooked details
  • Stress

More automation:

  • Overtrust
  • Situation awareness loss (losing awareness because algorithms are doing processing for us, creating e.g. echo chambers)
  • Passive decision making
  • Discrimination, power dynamics etc

And there’s an easy reframing here: instead of replacing human decisions with automated ones, let’s concentrate more on sharing, and frame this as humans plus algorithms (not humans or algorithms), sharing responsibility, control and communication, between the extremes of under- and over-trust (NB that’s not a new idea: it’s a common one in robotics).
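
One concrete (and hedged) version of that sharing is to let the model decide only when it’s confident, and route everything else to a human. The classifier, dataset and threshold below are illustrative assumptions, not a recommendation.

    # Humans plus algorithms: automate the confident calls, escalate the rest.
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_train, y_train)
    confidence = model.predict_proba(X_test).max(axis=1)

    THRESHOLD = 0.9  # below this confidence, a human makes the call
    auto = confidence >= THRESHOLD
    print(f"{auto.mean():.0%} decided automatically, {(~auto).mean():.0%} sent to a human")

Where to set that threshold is itself an ethical design decision: it encodes how much you trust the model and how much work you’re willing to hand back to the humans.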

Things I try to do

Having talked about what algorithms are, what we do as algorithm designers, and the things that can and do go wrong with that, I’ll end with some of the things I try to do myself. Which basically comes down to two things: consider ecosystems, and measure properly.

Considering ecosystems means looking beyond the data, at the human and technical context an algorithm is being designed in. It means verifying sources, challenging both the datasets we obtain and the algorithm designers we work with, keeping a healthy sense of potential risks and their management (e.g. practicing both data and algorithm governance), and reducing bias risk by having as diverse a design team (in thought and experience) as we can, plus access to people who know the domain we’re working in.

Measuring properly means using metrics that aren’t just about how accurately the models we create fit the datasets we have (too often the only goal of a novice algorithm designer, expressed as precision: how many of the things you labelled as X are actually X? and recall: how many of the things that are really X did you label as X?), but also metrics like “can we explain this?” and “is this fair?”. It’s not easy, but the alternative is a long way from pretty.
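
For the curious, here’s a hedged sketch of what “measure properly” might look like in practice: the two accuracy-flavoured metrics above, plus one deliberately simple fairness check (the gap in positive-prediction rates between two groups). Real fairness auditing needs far more than one number, and the data here is invented.

    import numpy as np
    from sklearn.metrics import precision_score, recall_score

    y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
    y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
    group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])  # hypothetical demographic

    print("precision:", precision_score(y_true, y_pred))  # of the things labelled X, how many were X?
    print("recall:   ", recall_score(y_true, y_pred))     # of the real Xs, how many did we label?
    gap = abs(y_pred[group == "A"].mean() - y_pred[group == "B"].mean())
    print("demographic parity gap:", gap)                 # 0 would mean equal positive rates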

I once headed an organisation whose only rule was “Don’t be a Jerk”.  We need to think about how to apply “Don’t be a jerk” to algorithms too.

Infosec, meet data science

I know you’ve been friends for a while, but I hear you’re starting to get closer, and maybe there are some things you need to know about each other. And since part of my job is using my data skills to help secure information assets, it’s time that I put some thoughts down on paper… er… pixels.

Infosec and data science have a lot in common: they’re both about really, really understanding systems, and about really understanding people and their behaviors, then acting on that information to protect or exploit those systems. It’s no secret that military infosec and counterintelligence people have been working with machine learning and other AI algorithms for years (I think I have a couple of old papers on that myself), or that data scientists and engineers are including practical security and risk in their data governance measures, but I’m starting to see more profound crossovers between the two.

Take data risk, for instance. I’ve spent the past few years as part of the conversation on the new risks from doing data science and its components like data visualization (the responsible data forum is a good place to look for this): that there is risk to everyone involved in the data chain, from subjects and collectors through to processors and end-product users, and that what we need to secure goes way beyond atomic information like EINs and SSNs, to the products and actions that could be generated by combining data points. That in itself is going to make infosec harder: there will be incursions (or, when data leaks out from inside, excursions), and tracing what was collected, when and why is becoming a lot more subtle. Data scientists are also good at subtle: some of the Bayes-based pattern-of-life tools and time-series anomaly algorithms are well-bounded things of beauty. But it’s not all about DS; also in those conversations have been infosec people who understand how to model threats and risks, and help secure those data chains from harm (I think I have some old talks on that too somewhere, from back in my crisismapping days).
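
To give a flavour of the kind of time-series anomaly spotting mentioned above, here’s a hedged sketch: a rolling z-score against recent history flags the point that doesn’t fit. The series, window size and threshold are all illustrative, not a recipe.

    import pandas as pd

    traffic = pd.Series([100, 103, 98, 102, 101, 99, 104, 350, 100, 102])  # e.g. requests/min

    baseline = traffic.shift(1).rolling(window=5, min_periods=3)  # recent history only
    zscore = (traffic - baseline.mean()) / baseline.std()
    print(traffic[zscore.abs() > 3])  # the 350 spike stands out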

There are also differences.  As a data scientist, I often have the luxury of time: I can think about a system, find datasets, make and test hypotheses and consider the veracity and inherent risks in what I’m doing over days, weeks or sometimes months.  As someone responding to incursion attempts (and yes, it’s already happening, it’s always already happening), it’s often in the moment or shortly after, and the days, weeks or months are taken in preparation and precautions.  Data scientists often play 3d postal chess; infosec can be more like Union-rules rugby, including the part where you’re all muddy and not totally sure who’s on your side any more.

Which isn’t to say that data science doesn’t get real-time and reactive: we’re often the first people to spot that something’s wrong in big streaming data, and the pattern skills we have can both search for and trace unusual events, but much of our craft to date has been more one-shot and deliberate (“help us understand and optimise this system”). Sometimes we realize a long time later that we were reactive (like realizing recently that mappers have been tracking and rejecting information injection attempts back to at least 2010 – yay for decent verification processes!). But even in real-time we have strengths: a lot of data engineering is about scaling data science processes in both volume and time, and work on finding patterns and reducing reaction times in areas ranging from legal discovery (large-scale text analysis) to manufacturing and infrastructure (e.g. not-easy-to-predict power flows) can also be applied to security.

Both infosec people and data scientists have to think dangerously: what’s wrong with this data, these algorithms, this system (how is it biased, what is it missing, how is it wrong); how do I attack this system; how can I game these people; how do I make this bad thing least-worst given the information and resources I have available? That can get us both into difficult ethical territory. A combination of modern data science and infosec skills means I could gather data on and profile all the people I work with, and know things like their patterns of life and potential vulnerabilities to e.g. phishing attempts, but the ethics of that is very murky: there’s a very, very fine line between protection and being seriously creepy (yep, another thing I work on sometimes). Equally, knowing those patterns of life could help a lot in spotting non-normal behaviours on the inside of our systems (because infosec has gone way beyond just securing the boundaries now), and some of our data summary and anonymisation techniques could be helpful here too.

Luckily much of what I deal with is less ethically murky: system and data access logs, with known vulnerabilities, known data and motivations, and I work with a most wonderfully evil and detailed security nerd. But we still have a lot to learn from each other. Back in the Cold War days (the original Cold War, not the one that seems to be restarting now), every time we designed a system, we also designed countermeasures to it, often drawing on disciplines far outside the original system’s scope. That seems to be core to the infosec art, and data science would seem to be one of those disciplines that could help.
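
As an example of the less murky end of that spectrum, here’s a hedged sketch of spotting non-normal behaviour in access logs: flag logins at hours a user has rarely been seen before. The log format and threshold are hypothetical, and a real system would look at far more than the hour of day.

    from collections import Counter

    login_hours = {"alice": [9, 9, 10, 8, 9, 11, 10, 9]}  # hour-of-day of past logins

    def is_unusual(user: str, hour: int, min_seen: int = 1) -> bool:
        """Flag a login at an hour this user has been seen at fewer than min_seen times."""
        return Counter(login_hours.get(user, []))[hour] < min_seen

    print(is_unusual("alice", 10))  # False: normal working pattern
    print(is_unusual("alice", 3))   # True: a 3am login is worth a second look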

Notes from John Sarapata’s talk on online responses to organised adversaries

John Sarapata (@JohnSarapata) = head of engineering at Jigsaw  (= new name for Google Ideas).  Jigsaw = “the group at Google that tries to help users facing organized violence and oppression”.  A common thread in their work is that they’re dealing with the outputs from organized adversaries, e.g. governments, online mobs, extremist groups like ISIS.
One example project is redirectmethod.org, which looks for people who are searching for extremist connections (e.g. ISIS) and shows them content from a different point of view, e.g. a user searching for travel to Aleppo might be shown realistic video of conditions there. [IMHO this is a useful application of social engineering in a clear-cut situation; threats and responses in other situations may be more subtle than this (e.g. what does ‘realistic’ mean in a political context?).]
The Jigsaw team is looking at threats and counters at 3 levels of the tech stack:
  • device/user: activities are consume and create content; threats include attacks by governments, phishing, surveillance, brigading, intimidation
  • wire: activities are find then transfer; threats include DNS hijacking, TOR bridge probes
  • server: activities are hosting; threats include DDOS
[They also appear to be looking at threats and counters on a meta level (e.g. the social hack above).]
Examples of emergent countermeasures outside the team include people bypassing censorship in Turkey by using Google’s public DNS (8.8.8.8), and people in China after the 2008 Sichuan earthquake posting images of school collapses and investigating links between them (ultimately finding links between the collapses, school contractors using substandard concrete, and officials being bribed to ignore this) despite government denial of issues.  These are about both reading and generating content, both of which need to be protected.
There are still unsolved problems, for example communications inside a government firewall.  Firewalls (e.g. China’s Great Firewall) generally have slow external pipes with internal alternatives (e.g. Sina Weibo), so people tend to consume information from inside. Communication of external information inside a firewall isn’t solved yet, e.g. mesh networks aren’t great; the use of thumb drives to share information in Cuba was one way around this, but there’s still more to do.  [This comment interested me because that’s exactly the situation we’ve been dealing with in crises over the past few years: using sneakernet/mopeds, point-to-point, meshes etc., and there may be things to learn in both directions.]
Example Jigsaw projects and apps include:
  • Unfiltered.news, still in beta: creates a knowledge graph when Google scans news stories (this is language independent). One of the cooler uses of this is being able to find things that are reported on in every country except yours (e.g. Russia, China not showing articles on the Panama Papers).
  • Anti-phishing: team used stuff from Google’s security team for this, e.g. using Password Alert (alerts when user e.g. puts their company password into a non-company site) on Google accounts.
  • Government Attack Warning. Google can see attacks on gmail, google drive etc accounts: when a user logs in, Google displays a message to them about a detected attack, including what they could do.
  • Conversation AI. Internet discussions aren’t always civil: 20-25 governments, including China and Russia, now have troll armies, amplified by bots (brigading). Conversation AI is machine classification/detection of abuse/harassment in text; the Jigsaw team is working on machine learning approaches together with the YouTube comment cleanup team.  The team has considered the tension between free speech and reducing threats: their response is that detection apps must lay out their values, and Jigsaw’s values include that conversation algorithms are community-specific, e.g. each community decides its own limits on swearing etc.; a good example of this is Riot Games. [This mirrors a lot of the community-specific work by community-of-communities groups like the Community Leadership Forum.]  Three examples of communities using Conversation AI: a YouTube feature that flags potential abuse to channel owners (launching Nov 2016); Wikipedia, flagging personal attacks (e.g. “you are full of shit”) in talk pages (Wikipedia has a problem with falling numbers of editors, partly because of this); and the New York Times, scaling its existing human moderation of website comments (the NYT currently turns off comments on 90% of pages because it doesn’t have enough human moderators). “NYT has lots of data, good results”.  The team got interesting data on how abuse spreads after releasing a photo of women talking to their team about #gamergate, then watching attackers discuss online (4chan etc.) which of those women to attack and how, and the subsequent attacks. (A minimal sketch of this kind of text classification follows this list.)
  • Firehook: censorship circumvention. Jigsaw has the Uproxy plugin for peer-to-peer information sharing across censorship boundaries (article), but needs to do more, e.g. look at the whole ecosystem.  Most people use proxy servers (e.g. VPNs), but a government could disallow VPNs: we need many different proxies and ways to hide them.  Currently using WebRTC for peer-to-peer proxies (e.g. Germany to Turkey, using e.g. NAT hole punching), collateral freedom and domain fronting, e.g. GreatFire routing New York Times articles through Amazon and GitHub.  Domain fronting (David Fifield article) uses the fact that e.g. CloudFlare hosts many sites: the user connects to an allowed host, HTTPS encrypts the request, then the encrypted header is used to reach a blocked site on the same host.  There are still counters to this; China first switched off GitHub access (then had to restore it), and used the Great Cannon to counter GreatFire, e.g. every 100th load of Baidu Analytics injects malware into external machines and creates a DDOS botnet. NB the firewall here was on path, not in path: a machine off to one side listens for banned words and breaks connections, but the Great Cannon is inside the connection; and with current access across the Great Firewall, people see different pages based on who’s browsing.
  • DDOS: http://www.digitalattackmap.com/ (with Arbor Networks), shows DDOS in real time. Digital attacks are now mirroring physical ones, and during recent attacks, e.g. Hong Kong’s Umbrella Revolution, Jigsaw protected sites on both sides (using Project Shield, below) because Google thinks some things, like DDOS, are unfair.  Interesting point: trolling as the human equivalent of DDOS [how far can this comparison go in designing potential counters?].
  • Project Shield: reused Google’s PSS (Page Speed Service) to protect news sites, human rights organization etc from DDOS attacks. Sites are on Google cloud: can scale up number of VMs used and nginx allows clever uses with reverse proxies, cookie challenges etc.  Example site: Krebs on Security was being DDOSed (nb a DDOS attack on a site costs about $50 online), moved from host Akamai to Google with Project Shield. Team is tracking Twitter user bragging about this and other attacks (TL;DR: IoT attack, e.g. baby monitors; big botnet, Mirai botnet source code now released, brought down Twitter, Snapchat).  Krebs currently getting about 5 attacks a day, e.g. brute-force, Slowloris, Hulk (bandwidth, syn flood, post flood, cache busting, WordPress pingback etc), and Jigsaw gets the world’s best DDOSers hacking and testing their services.
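
This is emphatically not Jigsaw’s Conversation AI, just a hedged, minimal sketch of the general idea it builds on: learn a text classifier from examples a community has already labelled as abusive or acceptable. The tiny dataset here is invented, and a real system needs vastly more (and more representative) labelled data.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    comments = ["thanks, that was a helpful edit", "you are an idiot",
                "could you add a citation here?", "nobody wants you here, get lost"]
    labels = [0, 1, 0, 1]  # 1 = abusive, as judged by the community in question

    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(comments, labels)
    print(clf.predict_proba(["get lost, idiot"])[0][1])  # estimated probability of abuse
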
Audience questions:
  • Q: protecting democracy in the US, e.g. botnets, online harassment etc.? A: we don’t serve specific countries, but worldwide.
  • Q: google autofill encouraging hatespeech? A: google reflects the world as it is; google search reflects what people do, holds a mirror back up to you. Researching machine learning bias on google suggest results and bias in training data.  Don’t want to censor, but don’t want to propagate bad things.
  • Q: not censor but inform; can you e.g. tell a user “your baby monitor is hacked”? A: privacy issue, e.g. connecting kit and emails to IP addresses.
  • Q: people visiting google site to attack… spamming google auto complete, algorithms? A: if google detects people gaming them, will come down hard.
  • Q: what’s the standard for “abusive”? how does it compare with human judgment? A: it’s in the training data; Wikipedia is saying what’s abusive. It’s all people in the end.
  • Q: how deal with e.g. misinformation? A: unsolved problem, politically sensitive, e.g. who gets to decide what’s fake and true? censorship and harassment work will take time.
  • Q: why Twitter not on list of content? A: Twitter might not want this, team is resource constrained, e.g. NYT models are useless on youtube because NYT folks use proper grammar and spelling.
  • Q: how to decide what to intervene in? A: e.g. Google takes sides against ISIS, who are off the charts on e.g. genocide.
  • Q: AWS Shield; does Google want to commercialise their stuff? A: no. NB CloudFlare Galileo is also response to google work.
  • Q: biggest emerging tech threat online? A: brigading, e.g. groups of people and bots. This breaches physical and online, includes e.g. physical threats and violence, and is hard to detect and attribute.
Other references: