Return on Humanity and other metrics

I’ve been drafting project documents. It’s one of those jobs that has to be done if we’re to have a better work specification than “we want to build something to help analysts fill the humanitarian information gap”. I’ve also been thinking for a while about how best to compare hackathon projects, to determine which to support further after each hackathon (if the team’s willing to be supported, of course).

Part of project management is asking whether something should be done at all, how it fits into existing ecosystems, and how much gain it brings over alternative methods. In business, this is covered by return on investment (ROI): the annual percentage cash gain from the project relative to the money invested in it. It’s more complicated than that, of course – project accounting has to think about cash flows over time via things like net present value and IFO (income from operations) curves – but the bottom line is usually ‘how big is the financial return’, before questions like ‘how does this fit our company values’.
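For concreteness, the two bottom-line calculations look roughly like this (a sketch; the function names are mine):

```python
def roi(gain, cost):
    """Simple return on investment: net gain as a fraction of cost."""
    return (gain - cost) / cost

def npv(cashflows, rate):
    """Net present value: discount each year's cash flow back to today.
    cashflows[0] is the initial outlay (usually negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))
```

So a project costing 100 that returns 60 in each of the next two years has a positive NPV at a 10% discount rate (about 4.1), even though 120 of undiscounted returns looks like a 20% gain.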

How does this apply to humanitarian systems? There are two parts to this question: 1) how do we compare the social gains of one decision over another, and 2) how can we monetize these social gains to explain why finance-sensitive agencies (governments, big business etc) should invest in humanitarian work.

I’m starting with three sources for this (I have Patrick McNamara to thank for pointing me at the first two):

  • Benetech’s Return on Humanity (ROH) metrics.
  • REDF’s Social Return on Investment (SROI), OASIS and RISE metrics.
  • DfID’s metrics in the 2011 Multilateral Aid Review.

DfID compared aid agencies on two main value-for-money scales: 1) how much each agency contributed to UK development objectives (it also listed results in terms of the numbers of people helped, items delivered (food, mosquito nets etc) and deaths prevented), and 2) organisational behaviour and values (including transparency, value consciousness, drive and partnering). DfID looked intelligently (with help from Alison Evans and Lawrence Haddad) at the value chain from costs and inputs through outputs to qualitative and quantitative outcomes. In their own words: “The further we go up the chain, from costs, to inputs, to outputs and then to outcomes, the more difficult it becomes to measure the things that we are interested in. But it is still important to try to do this, because this is what matters: not the number of teachers trained, but the difference this makes to poor children’s life chances. Our approach has therefore been to measure what we can, and look at proxy measures for the rest.”

Most of DfID’s metrics apply to an organisation rather than to the results it outputs, but the thinking used to generate them, and the methods used to obtain and weigh evidence, could be useful at the results level. One thought is about alignment with goals: for the UN, the high-level question is “how well do these results align with the Millennium Development Goals”, but there are interesting lower-level questions about goals here too. There’s also a metaquestion. DfID used two main metrics because they were difficult to combine, and our organisation’s outputs are likely to be comparably complex. Perhaps in development the answer isn’t a single bottom-line score (e.g. ROI) but a spider graph of three or more scores (perhaps including ROI to businesses, who are also part of the development ecosystem).
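If we do go for a spider graph, the only mechanical step is normalising each metric onto a common 0–1 scale so disparate axes can share one chart. A minimal sketch, with the axis names and bounds invented for illustration:

```python
def spider_profile(raw, bounds):
    """Normalise each metric to [0, 1] so disparate scales share one spider graph.
    raw: {axis: value}; bounds: {axis: (worst, best)} for each axis."""
    profile = {}
    for axis, value in raw.items():
        worst, best = bounds[axis]
        profile[axis] = (value - worst) / (best - worst)
    return profile

# Illustrative axes only -- not a real assessment of anything:
example = spider_profile(
    {"MDG alignment": 3.0, "transparency": 2.0, "ROI to business": 0.1},
    {"MDG alignment": (0.0, 4.0), "transparency": (0.0, 4.0), "ROI to business": (0.0, 0.5)},
)
```

The resulting profile is what gets drawn on the spider graph’s spokes; the hard part, of course, is agreeing on the bounds.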

REDF “…blends social and financial returns, includes performance measurement, and creates an ‘efforts-to-outputs’ ratio that can be looked at across enterprises.” Again, this looks like an institutional metric.

ROH is described as “analyzing four factors: social return on investment (financial and social benefits to society), benchmarking (how a program or product compares with other options to solve the same problem), financial sustainability (sufficient customer support to result in financial break even), and sales (dollar volume and number of customers served).” Now this could be promising.
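Those four factors are easy to hold in one record. The sketch below is my own framing of Benetech’s published description, not their actual scoring method; the break-even check reads financial sustainability straight off revenue versus cost:

```python
from dataclasses import dataclass

@dataclass
class ROHAssessment:
    """One program's Return on Humanity factors (illustrative structure only)."""
    sroi: float              # social return on investment
    benchmark_rank: int      # how the program compares with alternatives (1 = best)
    annual_revenue: float    # customer support, in currency units
    annual_cost: float
    customers_served: int    # sales: number of customers served

    def breaks_even(self):
        """Financial sustainability: does customer revenue cover costs?"""
        return self.annual_revenue >= self.annual_cost
```

Comparing two candidate projects would then mean comparing four numbers each, rather than one bottom line.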


  • Multilateral Aid Review, DfID, 2011
  • Paris Declaration on Aid Effectiveness, 2005
  • Multilateral Organisation Performance Assessment Network

Humanitarian ontologies

I’ve been thinking a lot about humanitarian and development data lately. Helping, even, with the UNGIWG catalogs of what the UN is holding and how to obtain it. And thinking about how best to make it available to an analyst: given what will hopefully become quite large datastores, how to traverse between datasets, crisis indicators and proxies.

Which actually leads to three ideas.

A lot of the data isn’t useful in itself.  What it’s useful for is the effects that it shows on more direct indicators and variables.  For example, the UN data includes divorce rates, marriages, deaths etc. These don’t tell us as much about vulnerability to crisis as the variable that connects them: household size. And wouldn’t it be nice to have an ontology that makes these variables explicit so we can link them to the proxies (the terms that we search for, like ‘household bills’), indicators (the things that humanitarian surveys are looking for, like ‘switched to cheaper foods’) and metadata (the ‘headings’ in each datastore).
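That chain from raw datasets through connecting variables to proxies, indicators and metadata could be sketched as a simple lookup structure. Every term below comes from the examples above except the metadata entry, which I’ve made up:

```python
# Toy fragment of the ontology described above; one connecting variable
# linked out to the four kinds of term.
ontology = {
    "household size": {
        "datasets":   ["divorce rates", "marriages", "deaths"],  # raw data that reveals it
        "proxies":    ["household bills"],                       # terms we search for
        "indicators": ["switched to cheaper foods"],             # what surveys look for
        "metadata":   ["UNSD demographic tables"],               # headings in datastores (invented example)
    },
}

def linked_terms(variable, kind):
    """All terms of one kind linked to a variable, or [] if unknown."""
    return ontology.get(variable, {}).get(kind, [])
```

A real version would want this as a proper triple store or OWL ontology, but the shape of the links is the same.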

To make this useful, we’d need to display these connections. A natural representation for this is a graph (or an indented list if you’re in a low-bandwidth environment). This post could be useful here: I’m visualising an expanded box showing the main features of a node, with links out to connected nodes that can be clicked to bring them into detail focus too, perhaps with a simple clickable breadcrumb trail (a wiki-like heading of a>b>c>d) to show the user how they got there. It’s a nice, neat, self-contained piece of work for someone.
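The breadcrumb trail itself is a tiny piece of state. A sketch, assuming that clicking an existing crumb truncates the trail back to it:

```python
class Breadcrumbs:
    """Track the nodes a user has focused, rendering a wiki-like a>b>c trail."""

    def __init__(self):
        self.trail = []

    def focus(self, node):
        if node in self.trail:
            # Clicking a crumb jumps back: truncate the trail to that node.
            self.trail = self.trail[: self.trail.index(node) + 1]
        else:
            self.trail.append(node)

    def render(self):
        return ">".join(self.trail)
```

The same trail doubles as the “how did I get here” record when an analyst wants to retrace a chain of connections.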

As is this: using the connection-learning algorithms from Mwebase et al to surface some of the connections between indicators and data.
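I won’t try to reproduce Mwebase et al’s algorithm here; as a naive stand-in, the simplest version of “surfacing connections” is flagging pairs of indicators whose time series correlate strongly. Real connection-learning would need to do much better than pairwise Pearson correlation, but this shows the shape of the task:

```python
from itertools import combinations
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def surface_connections(series, threshold=0.8):
    """Pairs of indicators whose histories correlate strongly (|r| >= threshold)."""
    return [(a, b) for a, b in combinations(series, 2)
            if abs(pearson(series[a], series[b])) >= threshold]
```

Anything this surfaces is a candidate connection for a human to confirm, not a causal claim.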


  • There’s a lot of work going on in crisis ontologies at the moment. These are tangential to the needs of slow-burn crises, but might overlap, and might provide some useful existing pointers.
  • @medic have developed an ontology for disease systems; what we’re talking about above could be seen as a widening of this to all humanitarian slow-burn crises. InSTEDD’s Evolve project is also interesting here, but note that diseases are a special case: you know what you’re looking for, you can track from disease to symptom, and you don’t have the causal complexity that slow-burn crises are likely to contain.
  • The AKTiveSA (OWL) ontology looks like a promising, if somewhat militarily-focused, starting point. The University of Southampton group has been working on humanitarian situation awareness from a military perspective for a while now.
  • And of course this discussion wouldn’t be complete without pointing to FAO’s geopolitical ontology – useful for situating that knowledge once you’ve got it.

Choosing a web framework for analysis

The current project is an open source data analysis platform. It’s aimed at humanitarian and development data, but could be used to track other data too. It needs to work with existing big data and open data tools. And we have no budget to get it built.

The idea is pretty simple. Give each analyst access to data, tools and a place they can store “hunches” – ideas about what’s going on in data that they haven’t fully formed yet, and would appreciate some help from other analysts on. Make each platform capable of operating off-tether (for those unfortunate moments when the local internet is down). And connect the platforms together so that people with similar problems can share ideas, best practices, interesting leads etc.

Oh, and we have 6 months to build a pilot system, and are planning to use agile and test-driven development techniques in the build.

Potentially suitable web application frameworks to carry this include:

  • Ruby/Rails
  • Python/Django
  • Scala / Lift
  • Java / Struts
  • PHP / Drupal
  • C# / ASP
  • The results of the ‘building a distributed decentralised internet’ group’s work.

The desiderata for choosing a programming framework for this are:

  • Help. How big and how healthy is its active community? i.e. how much help is available if we hit a snag or need to know about a feature?
  • People. How easy is it to attract people with these skills onto the project?
  • Privacy. How secure is the framework? Some data will need to be protected, i.e. not visible outside a specific platform – how much can we guarantee this protection?
  • Learning. How fast can we learn enough about this framework to use it? Alternatively: how fast can we get people who want to help up to speed in it?
  • Speed. How fast can we build something in this framework (we have 6 months to build a pilot system).
  • Libraries. How many useful libraries are available (so we don’t have to spend time building unnecessary stuff).
  • Tools. How good are its documentation, IDEs, debuggers, test tools and database access?
  • Adapters. How easy is it to call analysis tools and APIs from the framework? Most of the tools found so far are written in Python, and occasionally in R.
  • Bandwidth. We need to be able to use this in a low-bandwidth environment.
  • Pretty. We’ve been specifically asked to make the platform pretty. I’m sure there’s a metric for that out there somewhere.
| Feature  | Rails                       | Django                   | Lift                                 | Struts     | Drupal                                | ASP        |
|----------|-----------------------------|--------------------------|--------------------------------------|------------|---------------------------------------|------------|
| Help     | Active community            | Active community         |                                      |            |                                       |            |
| People   | Existing UN volunteer teams |                          | Sought after and difficult to obtain |            |                                       |            |
| Privacy  | Plug-in security            |                          |                                      |            |                                       |            |
| Speed    | Fast                        |                          |                                      | Takes time |                                       |            |
| Tools    | Multiple test tools         | Unittest, test client    | Unit tests                           |            | SimpleTest                            | Unit tests |
| Adapters | Has wrappers for Python     | Natural for Python tools |                                      |            | Callout to Python scripts (and Drupy!) |            |
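For when the matrix does get filled in, scoring it is just a weighted sum per framework. Everything below – the weights, the 0–5 scores, the axes chosen – is invented for illustration, not our actual assessment:

```python
# Weights reflect this project's constraints (speed and people first).
# All numbers here are illustrative, not real scores.
weights = {"speed": 3, "people": 3, "adapters": 2, "tools": 1, "pretty": 1}

scores = {
    "Rails":  {"speed": 5, "people": 5, "adapters": 3, "tools": 5, "pretty": 5},
    "Django": {"speed": 5, "people": 3, "adapters": 5, "tools": 4, "pretty": 3},
}

def rank(scores, weights):
    """Frameworks ordered by weighted total, best first."""
    totals = {fw: sum(weights[k] * s.get(k, 0) for k in weights)
              for fw, s in scores.items()}
    return sorted(totals.items(), key=lambda kv: -kv[1])
```

The useful part isn’t the arithmetic; it’s being forced to write the weights down, so the “speed and people first” judgement is explicit rather than implicit.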

Well, that’s the matrix for later, when we have a longer development cycle and resources. For now, given the speed and people constraints, it comes down to a choice of two: Rails or Django. Each wins on different scores: Django on its Python background (and hence really simple adapters), Rails on its toolset, prettiness and available people. Which is what it comes down to in the end: we’ve been asked for something pretty and fast, and there isn’t the luxury now to choose Scala’s simplicity and security, or Drupal’s community tools and Python adapters. Perhaps when the open data Drupal project is up and running, we can share what we’ve learnt about needs and tools from this project, and start to build something *really* special for the world.