Oh, the questions, the questions. The more meetings I go to, the more I see the deep, fundamental importance of asking, and attempting to answer, the right questions. And there are many wrong questions out there, especially (but definitely not only) in AI. Having said that there are wrong questions, I probably need to clarify: there are very few invalid questions. Most questions provoke a response, or at least a thought, and therefore have a motive or reason behind them. Some questions are just plain immoral (such as "what's the best way to kill a million people?"), others are just plain insensible ("Hey you! The troll with the big bike! How come you're so ugly?") and some are just there for the fun of it. So I also need to qualify that it's a subjective thing: there are good questions to ask here and now, and there are bad questions to ask here and now. I'm just hoping that I have enough good taste to tell the difference.
So: questions to think about sometime on this blog, with the proviso that some of these questions are just for fun, with no promise of anything even approaching a bounded sensible answer:
* How can we organise machine thought?
* How is machine thought similar to and different from human thought?
* How can we make machines more creative?
* How do humans make sense of information?
* What do we do when we’re not sure?
* What makes us human?
* How are some people geniuses, and can we replicate that with machines?
Sometime, I may try to arrange these into "possible" and "not here, not now" categories. But that may have to wait until after I've attempted to start answering them.
2 thoughts on “What’s the questions again?”
We know a lot about inference. There are many logics, many proof theories, lots of ideas of what ‘deduction’ means. But we know little about context – frames are a reasonable idea, but they have not really moved the art of real world inference forward. I even wonder if the separation between facts and inference is sensible. In any case one suspects that before something non-trivial can be said about machine thinking, we need at least to be able to say something non-trivial about machine inference.
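To make the relationship between inference and context a little more concrete, here is a minimal sketch (the function and fact names are my own inventions, not anything from the comment above) in which a frame is treated as nothing more than the set of facts that inference is allowed to touch:

```python
def forward_chain(facts, rules, frame):
    """Toy forward-chaining inference bounded by a frame.

    facts: set of known atoms
    rules: list of (premises, conclusion) pairs
    frame: the subset of atoms inference is allowed to touch
    """
    # Facts outside the frame are simply invisible to the reasoner.
    known = {f for f in facts if f in frame}
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if (conclusion in frame and conclusion not in known
                    and all(p in known for p in premises)):
                known.add(conclusion)
                changed = True
    return known

facts = {"rain", "outside", "eclipse"}
rules = [({"rain", "outside"}, "wet"),
         ({"wet"}, "cold")]
frame = {"rain", "outside", "wet", "cold"}  # "eclipse" lies outside the frame
result = forward_chain(facts, rules, frame)
```

The deduction machinery here is utterly standard; all the interesting questions (which the comment is pointing at) are about how the `frame` set gets chosen in the first place.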
A frame merely bounds the area in which inferences are being made; it's a convenient way of removing the effects of the rest of the world from an area that is not usually affected by them. Many of the network-based representations (e.g. bayesnets, semantic nets etc.) use spreading propagation and bin states (node states that code for rare events without explicitly stating what they are) to allow for these side-swipe events, if they're used in large enough representations, with the caveat as ever that no matter how large a representation is, it is still bounded by a larger frame. The problem becomes when to read off an answer: does one spend time propagating effects across an entire large network, or run a faster update-read cycle that assumes facts are stable but might change, possibly rapidly, as more propagation time and information become available? Using metanodes (network nodes containing either a single summary node or a smaller, more detailed network, depending on need) allows some control over the level of representation, and hence over the propagation effort required; these do seem to mirror some of the mental shorthands used by people, again with the caveat as ever that human reasoning may be a useful pointer to potentially useful machine reasoning styles, but may not be the most efficient or effective form of reasoning to implement in a machine.
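The bin-state idea can be sketched very simply (a toy, with invented names; real bayesnet implementations do far more): a node carries an explicit catch-all state, and any evidence the named states explain badly pushes probability mass into it.

```python
class Node:
    """A toy belief node with a '_bin' state for rare, unmodelled events."""

    def __init__(self, states):
        # Start with a uniform belief over the named states plus the bin.
        n = len(states) + 1
        self.belief = {s: 1.0 / n for s in states + ["_bin"]}

    def update(self, likelihood):
        """One step of an update-read cycle: multiply in evidence, renormalise.

        Any state missing from 'likelihood' (notably '_bin') keeps weight 1.0,
        so mass drifts into the bin when the named states fit the evidence badly.
        """
        for s in self.belief:
            self.belief[s] *= likelihood.get(s, 1.0)
        total = sum(self.belief.values())
        for s in self.belief:
            self.belief[s] /= total

node = Node(["sunny", "rainy"])
# Evidence that fits neither named state well: the bin state soaks up the mass.
node.update({"sunny": 0.1, "rainy": 0.2})
```

The update-read trade-off in the comment corresponds here to how many `update` calls one is willing to run before reading `node.belief` off as an answer.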
Where frames fail is when unmodelled influences or events occur. How one handles that depends on many factors, including how important an inference failure is. To put it in financial terms: if it's a £1000 loss in £10m, it might be treated as a niggle that costs more to fix than to accept; if it's £1000 from your own bank account, that's a little more serious, and you might be prepared to put in the systems (human or machine) to prevent it. One way of dealing with this is to model or learn the rare states explicitly and take the hit on the cost (in coding, propagation time etc.) of doing so; another is to set trigger events (e.g. those bin states) that flag when more detailed processing is required, which is another quite human thing to do (some call it intuition, others call it spotting clues, i.e. the mismatches between the usual mental model and reality). Sometimes that detailed processing isn't coded in but an action is still required; in that state, humans usually do the best they can with the resources (both mental and physical) that they have, and into this category falls much of creativity (e.g. analogy etc.) and its automation.
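The trigger-event idea reduces to something like the following sketch (the threshold value and names are assumptions for illustration, not anything specified above): cheap monitoring watches the mass sitting in the unmodelled bin state, and only flags for expensive detailed processing when the mismatch between model and reality grows too large.

```python
# Assumed tuning parameter: how much unexplained probability mass we tolerate
# before escalating. Not from the original comment.
BIN_THRESHOLD = 0.3

def needs_detail(belief, threshold=BIN_THRESHOLD):
    """belief: dict of state -> probability, including a '_bin' catch-all.

    Returns True when enough mass sits in the bin state that the usual
    mental model is evidently not matching reality.
    """
    return belief.get("_bin", 0.0) > threshold

ordinary = {"sunny": 0.70, "rainy": 0.25, "_bin": 0.05}   # model fits: no flag
surprising = {"sunny": 0.20, "rainy": 0.10, "_bin": 0.70}  # model broken: flag
```

The financial example above is then just a question of where `threshold` gets set: close to 1.0 for the £1000-in-£10m case, close to 0.0 for your own bank account.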
I could argue that network representations also allow for the mixture of facts and inference. A link set in a bayesnet (and I apologise for using bayesnets a lot, but it's a paradigm that I already know and love) could validly be construed as a set of individual facts about the connections between the nodes involved; Friedman et al's work on reducing bayesnet link complexity by discovering the network structure hidden in each link matrix could reasonably be adapted for this use. There will be arguments about extending first-order paradigms into nth-order reasoning, but I personally think this is a bit of a red herring: nth-order reasoning is already around, prompted mainly by the need for empathy and meta-reasoning in game and agent theory, and it would be strange now to ignore an entire dimension of thought. I mean, right now I'm thinking about thought, and thinking about modifying the way that I think about those thoughts; it's not exactly an unnatural process. Strangely, some support also comes from the more staid corners of computer science: self-modifying vector code has been quietly with us for a long time now (it's a beggar to debug but very beautiful when it does work), and some of the tracing techniques from that world could be adapted to stop us scaring ourselves about 'no longer knowing what the computer is thinking' (to steal a recent phrase from the aviation community).
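The "link set as individual facts" reading can be made concrete with a tiny sketch (node names and the rendering are invented for illustration): each entry of a conditional probability table is read off as one standalone fact about one connection.

```python
# Toy link matrix P(ground | rain): keys are (parent_state, child_state).
cpt = {("yes", "wet"): 0.9, ("yes", "dry"): 0.1,
       ("no", "wet"): 0.2, ("no", "dry"): 0.8}

def link_as_facts(parent, child, cpt):
    """Construe a bayesnet link matrix as a set of individual facts,
    one per entry, about the connection between parent and child."""
    return [f"P({child}={c} | {parent}={p}) = {prob}"
            for (p, c), prob in sorted(cpt.items())]

facts = link_as_facts("rain", "ground", cpt)
for fact in facts:
    print(fact)
```

Nothing deep happens in the code itself; the point is only that the boundary between "a fact" and "a parameter of an inference procedure" is a choice of reading, which is the blurring the comment is arguing for.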
In short, context is important; it bounds most possibilities and allows us to make definite inferences in a very large, very messy and uncertain world. But equally, the ability to recognise when that context shorthand is not working and to reason outside it may also be important, and is certainly important in many human areas including survival. Facts and inference about facts are separated in first-order reasoning systems but can be treated together, and human processing can give us some pointers to how to do that. I still have to work out some details about e.g. whether a fact can be treated as an inference in its own right, but none of this seems entirely impossible.