Friday, September 14, 2012

Artificial Intelligence, Human Intelligence

Filming Jeopardy!

 The comments on a recent post about science fiction, A Literature of the Possible, included a considerable subthread about HAL, the ship's computer in 2001: A Space Odyssey. The discussion centered on HAL's literary role, and whether its function in the story could have been played by either a human mutineer, or something - such as a shark - not claimed or expected to have anything like 'intelligence' in the human sense of the term.

Here I will take the discussion in another direction, going boldly where many have gone before, including my own blog post on What Do AIs Want? - along with several earlier comment threads, which I am too lazy to dig out.

In that post, I discussed SF tropes regarding AI and its response to humans, starting with the ever-popular insane malevolence and ending with:
It will ignore us, but neither serenely nor with any other emotion. It is after all a machine, with all the emotion and motivation of a lawn mower.

Of all the possibilities regarding AI, this may be the one we least expected beforehand. So, inevitably, it seems to be the one that is coming true - thus far, at least.

This was not expected because the only high-level cognitive intelligence we had to go by was our own. So we imagined artificial intelligence to be, at the least, similarly mysterious. This was the case long before computers or robotics were even imagined.

Roger Bacon was said* to have once created a brazen head, capable of speech and, implicitly, of thought. 'Time is,' said the head. A little later, 'Time was.' Still ignored (by Bacon's stupid servant, Miles), the head finally said 'Time is past!' - and shattered into pieces.

AI as a concept has been making heads explode ever since. Bacon's brazen head, as described, was not out to kill or enslave anyone. But it was enigmatic. You didn't quite know what it was thinking, or feeling, any more than you knew what HAL was thinking and feeling behind that unblinking red eye.

What we took for granted is that the AI would indeed be thinking, in some way at least broadly comparable to our thinking. And it would likewise be feeling ... something.

The ability to play chess well was long taken as a touchstone of 'real' AI. At one point, HAL mentions playing chess with Dave. Even in the early 80s, the supercomputer in WarGames, after learning from its sim runs that nuclear war is unwinnable, suggests playing 'a nice game of chess.'

I even remember an explanation of why chess made such a good benchmark: because the brute-force solution - playing out all permutations to find which ones worked - would require a million trillion zillion moves. Even an IBM computer could never run through them all, certainly not fast enough for a chess match. So for a computer to play grandmaster-level chess it would have to do whatever human grandmasters do. QED!

It did not work out that way, and QE wasn't D'd. As it turned out, of course, crunching through permutations by brute force is exactly how IBM's Deep Blue beat Garry Kasparov in 1997.**
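The brute-force idea itself is simple enough to sketch. Below is a minimal minimax search - the same exhaustive try-every-move principle, though Deep Blue added heuristics, pruning, and enormous custom hardware on top of it. Chess is far too big for a blog-sized example, so this toy uses a simple Nim variant (take 1 or 2 stones per turn; whoever takes the last stone wins); the game and function names are my own illustration, not anything from Deep Blue.

```python
# Brute-force game-tree search (minimax) on a toy Nim game:
# players alternately take 1 or 2 stones from a pile, and whoever
# takes the last stone wins. The search simply plays out every
# possible line of moves - no insight, no strategy, just crunching.

def can_win(stones):
    """True if the player to move can force a win with `stones` left."""
    if stones == 0:
        # The previous player took the last stone; the player to move has lost.
        return False
    # Try every legal move; if any leaves the opponent in a losing
    # position, the current player wins by making that move.
    return any(not can_win(stones - take)
               for take in (1, 2) if take <= stones)

if __name__ == "__main__":
    for n in range(1, 7):
        print(n, "stones:", "win" if can_win(n) else "lose")
```

For this toy game the search rediscovers the known result (positions that are multiples of 3 are losses for the player to move) purely by enumeration - which is the point: the machine "plays well" without doing anything a human would recognize as thinking.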

We still have only a hazy notion of how human grandmasters play chess. But by way of intriguing insights, I have read (possibly in a blog comment here) that while powerful chess computers can beat any 'unassisted' human player, chess masters using simple commercial chess programs can beat any computer playing on its own, without a human partner.


In any case, most AI work today is not at all about trying to emulate the still-mysterious workings of the human mind. It is about amplifying human intelligence - that of skilled programmers and subject-matter experts - by harnessing it to the brute-force power of computers.

This does not rule out the possibility that work on brain structure and psychology - perhaps amplified by expert systems - may yet unlock the mystery of human thought and lead to machines that really do think. But there is no particular reason to anticipate this in the plausible midfuture. Particularly since such work does not have the clear practical applications that expert systems do.

All of which casts doubt on some familiar SF tropes. Not all of them! An expert-system political machine (in the literal sense!) might amplify and implement its designers' scheme to conquer and enslave humanity. And if they were careless in algorithm design, it might fail to exclude them from enslavement.

But it will do so with no ambition or passion of its own - only by implementing its designers' schemes. And human freedom fighters won't defeat it by persuading it to adopt the ideas of John Locke or the Buddha. If they defeat it, they will do so by identifying limits or errors in its implementation (perhaps using their own expert systems to reverse-engineer these), and posing problems it cannot efficiently solve.

Faust lives. Frankenstein's monster, not so much.


Discuss.

* In a play by Elizabethan playwright Robert Greene. Greene also made a snide remark about William Shakespeare, which has the distinction of being the first notice of Will's presence in the London theatrical scene.

** Disclosure: I have an ongoing work gig on behalf of IBM. But in this case I don't have to cop to a shameless pander - Big Blue's work on AI truly is impressive.

The image of Jeopardy! comes from this Flickr page.
