What Do AIs Want?
Freud did not ask that question, but my commenters in the last thread have, unconstrained by the fact that the post had nothing to do with artificial intelligence. (Note: This is a feature, not a bug.)
I have argued here before that there is no particular reason to expect 'strong' AI in the midfuture, mainly because we know very little about how human thought works, except that it is very different from the way computers-as-we-know-them work. On the other hand, this is no argument against the possibility or plausibility of strong AI; it only means that strong AI is outside the scope of what we can currently extrapolate.
So. Suppose that we do develop high-level AI, as in smarter than us. Set aside all specific technical questions about how this might be implemented - silicon, bio, some other tech, 'pure thought,' or whatever. What will be its motivations? And what, as a consequence, will be its relations with us?
SF has given considerable thought to the question of AI-human relations over the years. Following are some of the answers that have been offered:
1. It will be a murderous psychopath. This is the oldest theme, going back to Mary Shelley in the predawn of SF. Truth be told, I've never read Frankenstein; or, The Modern Prometheus (have you?), and Frankenstein's creation may have comprehensible motives. But in the popular tradition the book spawned, the AI's own motives are absent or irrelevant; it is the AI's Faustian creator who opened Doors Man Was Not Meant To Open.
2. It will be driven insane by human wickedness and folly. Offhand I can think of only one example of this theme, but an important one: HAL 9000, whose breakdown results from human secrecy and hypocrisy, rooted ultimately in the Cold War.
3. It will be murderous, albeit for understandable reasons. The original robots, in Karel Capek's play R.U.R., were essentially slaves doing the Spartacus thing. (Capek's robots were not quite robots in the modern sense, but this does not affect the argument.) In cultural terms this is the 20th century intellectual's counterpart of #1, giving the horror theme the added frisson of hinting that we had it coming to us.
4. It will be murderous, for understandable - but paranoid - reasons: getting us before we get it. I can't bring any literary examples to mind, though I've probably just missed them. In any case this theme appears in the comment thread to the last post: the AI, motivated by self-preservation, will eliminate us as a potential threat.
All of these themes have obvious literary appeal: Killer Robots, 'nuff said. In Hollywood's hands they all drift toward #1, because killer robots are not an invitation to complex plot exposition. But there is one further, subtle variation:
5. It will never exist. We will eliminate it preclusively, before it can eliminate us, by refusing to invent it.
Now for the non-killer-robot themes:
6. It will be benign, due to the Three Laws of Robotics. These of course were Isaac Asimov's response to the prevalence of killer robots in the early Golden Age, and the Three Laws practically killed off killer robots in mainstream SF - though not, per above, in Hollywood.
7. It will gently enslave us for the best of motives, again due to the Three Laws of Robotics. As Asimov himself realized, 'To Serve Man' can raise some complex and ambiguous issues. (Even aside from the culinary dimension.) For sufficiently high-level AIs the process can be summarized as: Is there a God? Now there is.
8. It will serenely ignore us, as irrelevant to its concerns.
9. It will ignore us, but neither serenely nor with any other emotion. It is after all a machine, with all the emotion and motivation of a lawn mower.
What have I left off this list? Most of these possibilities have any number of variations, many of which blur together - how do we classify RoboCop, or his cousin, the defensive robo-fortress that engages and destroys every conceivable threat, including its builders?
And this whole discussion blurs into the question of whether, and to what degree, 'intelligence,' robotic or alien, will or might go hand in hand with anything that we would call personality. To take one particular example from the previous discussion thread, I'm not sure that an AI would have any concern - let alone 'instinct' - for self-preservation. We have it because we evolved that way, but AIs don't arise through natural selection. (Though they are arguably subject to it once created.)
And there is one familiar AI that I don't know how to classify: Mycroft, in Heinlein's The Moon is a Harsh Mistress. I didn't care for the book, so I don't remember much of it. (If you've got a message, use Western Union; not to mention the obvious impostor claiming to be Hazel Stone.) But I do remember Mycroft. So far as I can recall, its motivation was primarily to have interesting conversations, which strikes me as not implausible.
Discuss (further).
Related posts: A PoMo reflection on Asimov, and thoughts on aliens.
The image of a Cylon was swiped from this blog post.