The re-rise of the question

And so another year ends. Another decade starts, and an accident of history and human nature – name one religion that doesn’t have some form of midwinter feast – has given us all some time to stop and reflect. Except the people working the busiest time of year in the shops and hospitals, or doing shifts keeping everything running. But for all those people in the offices and factories, there’s a break in the normal routine, and a time to catch our breath a little, even if it is between another round of games or drinks, or another run down the piste.

So since we’re all in end-noughties reflective mood, I thought I’d join in. The did-I-meet-my-new-year’s-resolutions post is later. I’ve been thinking about what marked the end of this decade, and for me it’s that people are starting to question again. Not “how do I find Waitrose”, but the bigger questions, and a lot of what I’m seeing is “are we doing the right thing?”

One of the things I work on is the points where strategy, management and technology meet. And over the years, I’ve been watching each of these strata – I’d say ‘disciplines’ but most mid-level tech jobs now are or should be a combination of all three – evolve. And with the short attention spans and ever-faster development cycles of the dot-com boom and its followers came techniques for business, management and engineering that each had at their heart the simple question “are we doing the right thing”, and in the case of systems engineering, its correlate “are we doing this thing right”: see under validation and verification respectively. One of my favourite grumpy engineers has convinced me to always start with “what is the problem that we’re trying to solve here”, but that’s a point for a different day.

So most of us computer scientists started the decade with a toolbox of tried-and-tested strategy and management techniques that worked fine on very stable requirements and timespans that were never quite long enough for each project without a hockey-stick panic, but were at least long enough to be managed in recognisable and sizeable phases (find stakeholders, gather requirements, create an architecture, etc etc). We’ve ended it with agile methodologies, morning standups and scrum masters, where we pick features off a list, build them and check that we’re doing the right thing and doing the thing right by regularly reviewing the system with our customers and users. Except there’s a mismatch… the agile methods don’t give us the larger framework to build complex systems in, and the old requirements-based methods don’t build in enough user feedback as these systems are being built. So the big question for now is how to be both agile and situated, when the system is more than just a slightly-clever website.


I live in two worlds – the real and the virtual. To some extent we all do now – moving our personae from physical to virtual connections with barely a thought about what this means. But thinking a little more, it struck me that this might just be a good example of autonomy in action.

Autonomy also lives in two worlds. For many people, it means the ability to act independently of controls, to think and act for oneself – for others, especially those used to autonomous systems, it means robots and the subtle grades of control, responsibility and trust that their human operators (or, increasingly, team-mates) share with them.

I could get boring about PACT (Pilot Authorisation and Control of Tasks) levels and other grading scales that range from the human having full control of a system’s actions to the system having full control. I could talk about Asimov’s laws, and the moral and legal minefield that starts to come into play when a robot – designed by humans, but with its reasoning maybe optimized by machines – climbs up the PACT scale and starts to report back rather than wait for a human decision. I could even talk about variable autonomy levels and what will happen when the human-machine responsibility balance shifts automatically to take account of their relative task loadings.
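To make that less abstract: the grading-scale idea above can be sketched in a few lines of code. This is a minimal illustration, not the published PACT framework – the real scale has finer intermediate gradations, and the five levels, the 0–1 workload figure and the thresholds here are all my own illustrative assumptions.

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """A simplified autonomy scale, loosely inspired by PACT-style gradings.
    These five points are illustrative, not the published PACT levels."""
    HUMAN_ONLY = 0    # human commands, machine merely executes
    ADVISE = 1        # machine suggests options, human decides
    CONSENT = 2       # machine acts only with explicit human approval
    VETO = 3          # machine acts unless the human objects in time
    MACHINE_ONLY = 4  # machine acts and reports back afterwards

def needs_human_decision(level: AutonomyLevel) -> bool:
    """At or below CONSENT, nothing happens without a human in the loop."""
    return level <= AutonomyLevel.CONSENT

def adjust_for_workload(level: AutonomyLevel, operator_load: float) -> AutonomyLevel:
    """Variable autonomy: shift responsibility toward the machine when the
    operator is overloaded, and back toward the human when they have spare
    capacity. Load is assumed to be a 0..1 figure; thresholds are arbitrary."""
    if operator_load > 0.8 and level < AutonomyLevel.MACHINE_ONLY:
        return AutonomyLevel(level + 1)  # machine takes on more
    if operator_load < 0.2 and level > AutonomyLevel.HUMAN_ONLY:
        return AutonomyLevel(level - 1)  # human takes back control
    return level

# A busy operator pushes the balance up the scale – past the point
# where the machine waits for a decision:
print(adjust_for_workload(AutonomyLevel.CONSENT, 0.9).name)  # VETO
print(needs_human_decision(AutonomyLevel.VETO))              # False
```

The interesting (and morally fraught) moment is exactly that transition: the step from CONSENT to VETO is where the machine stops waiting for a human decision and starts merely reporting back.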

But I won’t. Because I think, to some extent, we’re already there. To me, autonomy isn’t really about putting smarter machines into the field, it’s about understanding that, once you accept systems as part of the team, you can get past the oh-Gad-it’s-Terminator moment and start thinking like a mixed human-machine team player instead of someone working with a load of humans who happen to have some whizzy toys at their disposal. And you know what falls out if you do this? Simplicity. Elegance. Because instead of going “ooh, wow, technology to replace the humans”, you play to all the team members’ strengths. Build systems and teams that use machines’ ability to sift and present data and do the mindless fusion tasks, but also use the humans’ strengths in pattern-finding, group reasoning and making sense of complex, uncertain situations.

And y’know what? This is happening every time my virtual self crawls around Twitter or Facebook. I’m using tools, I’m allowing my focus to be guided by filters and tags and ever-more-complex reasoning without entirely stopping to think about what this means (well, okay, I looked up the trending algorithms but hey), but I’m also adding my own human reasoning and focus on top, and using those machine steers to find and work with dozens of intersecting communities to report on and make sense of the world using yet more semi-autonomous tools. Which is a good thing. But in this landscape, with early AI techniques turning up in tools like RIF and the semantic web, and algorithm-based controls in our virtual lives being increasingly taken on trust, it’s maybe time to reflect on the subtleties and meaning of autonomy and how it really already applies to us all.

I meant to write a post about remembering to include the human in systems, and not to be blinded by technology. I’m not sure I did that, but I enjoyed the journey to a different point all the same.

Postscript. I wondered this morning if a truer test of autonomy would be arguments – not gentle reasoning, but the type of discussion usually brought on when two people are each sure of completely opposite conclusions. Like, say, when map-reading together. And then an ‘aha’ moment. PACT et al aren’t really about the philosophical and moral nuances of sharing responsibility with machines, oh no. It’s simpler: they’re about machine courtesy – embedded politesse if you will. Or at least I would have reached that conclusion, but I think it’s your turn, gentle reader. No, no, after you…