Friday, January 21, 2011

What Do AIs Want?

A Cylon
Freud did not ask that question, but my commenters have, in the last thread, undeterred by the fact that the post had nothing to do with artificial intelligence. (Note: This is a feature, not a bug.)

I have argued here before that there is no particular reason to expect 'strong' AI in the midfuture, mainly because we know very little about how human thought works, except that it is very different from the way computers-as-we-know-them work. On the other hand, this is no argument against the possibility or plausibility of strong AI; it only means that strong AI lies outside the scope of what we can currently extrapolate.

So. Suppose that we do develop high level AI, as in smarter than us. Set aside all specific technical questions about how this might be implemented - silicon, bio, some other tech, 'pure thought,' or whatever. What will be its motivations? And what, as a consequence, will be its relations with us?

SF has given considerable thought to the question of AI-human relations over the years. Following are some of the answers that have been offered:


1. It will be a murderous psychopath. This is the oldest theme, going back to Mary Shelley in the predawn of SF. Truth be told, I've never read Frankenstein; or, The Modern Prometheus (have you?), and his creation may have comprehensible motives. But in the popular tradition it spawned, the AI's own motives are absent or irrelevant; it is the AI's Faustian creator who opened Doors Man Was Not Meant To Open.

2. It will be driven insane by human wickedness and folly. Offhand I can only think of one example of this theme, but an important one: HAL 9000, whose breakdown results from human secrecy and hypocrisy, rooted ultimately in the Cold War.

3. It will be murderous, albeit for understandable reasons. The original robots, in Karel Capek's play R.U.R., were essentially slaves doing the Spartacus thing. (Capek's robots were not quite robots in the modern sense, but this does not affect the argument.) In cultural terms this is the 20th century intellectual's counterpart of #1, giving the horror theme the added frisson of hinting that we had it coming to us.

4. It will be murderous, for understandable - but paranoid - reasons: getting us before we get it. I can't think of literary examples offhand, though I have probably just missed them. In any case this theme appears in the comment thread to the last post: the AI, motivated by self-preservation, will eliminate us as a potential threat.


All of these themes have obvious literary appeal: Killer Robots, 'nuff said. In Hollywood's hands they all drift toward #1, because killer robots are not an invitation to complex plot exposition. But there is one further, subtle variation:

5. It will never exist. We will eliminate it preclusively, before it can eliminate us, by refusing to invent it.


Now for the non-killer-robot themes:

6. It will be benign, due to the Three Laws of Robotics. These of course were Isaac Asimov's response to the prevalence of killer robots in the early Golden Age, and the Three Laws practically killed off killer robots in mainstream SF - though not, per above, in Hollywood.

7. It will gently enslave us for the best of motives, again due to the Three Laws of Robotics. As Asimov himself realized, 'To Serve Man' can raise some complex and ambiguous issues. (Even aside from the culinary dimension.) For sufficiently high level AIs this process can be summarized as 'Is there a God?' 'Now there is.'

8. It will serenely ignore us, as irrelevant to its concerns.

9. It will ignore us, but neither serenely nor with any other emotion. It is after all a machine, with all the emotion and motivation of a lawn mower.


What have I left off this list? Most of these possibilities have any number of variations, many of which blur together - how do we classify RoboCop, or his cousin, the defensive robo-fortress that engages and destroys every conceivable threat, including its builders?

And this whole discussion blurs into the question of whether, and to what degree, 'intelligence,' robotic or alien, will or might go hand in hand with anything that we would call personality. To take one particular example from the previous discussion thread, I'm not sure that an AI would have any concern - let alone 'instinct' - for self-preservation. We have it because we evolved that way, but AIs don't arise through natural selection. (Though they are arguably subject to it once created.)

And there is one familiar AI that I don't know how to classify, Mycroft in Heinlein's The Moon is a Harsh Mistress. I didn't care for the book, so I don't remember much of it. (If you've got a message, use Western Union; not to mention the obvious impostor claiming to be Hazel Stone.) But I do remember Mycroft. So far as I can recall, its motivation was primarily to have interesting conversations, which strikes me as not implausible.


Discuss (further).




Related posts: A PoMo reflection on Asimov, and thoughts on aliens.



The image of a Cylon was swiped from this blog post.

Thursday, January 13, 2011

The Ferry to Hogwarts

Pier 29 1/2, San Francisco
Reality 1, whimsy 0 ...

Alas there seems to be no ferry service to anywhere from this pier, though a block away is the ferry to one of San Francisco's most famously dubious tourist attractions, Alcatraz. (Possibly relevant note: The gap in my posting here was due to a stubborn head cold, not incarceration.)

Nor is there any profound justification for posting this image, beyond the general Rule of Cool. But it provides a handy segue to an ongoing topic of this blog, the somewhat uneasy boundary line between Realism [TM] and Romance.

Such border disputes are by no means confined to outer space, but space is a particularly productive environment for them, because the whole idea of going into space for any reason is essentially and profoundly Romantic. Yes, comsats, weathersats, and various other things we have sent into space have their practical uses, but it seems awfully unlikely that strictly practical people would ever have come up with them, given how absurdly difficult and costly space travel is.

Yes, before space travel there was military rocketry. But - setting aside the question of in what sense our blowing each other up qualifies as practical - the established or foreseeable roles of military rocketry in the early 20th century did not point toward space boosters. Practical military rockets like the Katyusha were essentially self-propelled shells, more expensive and less accurate than standard shells, but fired from cheap, lightweight launchers instead of heavy, expensive artillery pieces.

The V-2 was, in the pre-atomic age, a supremely impractical weapon: an expensive and inaccurate way to lob a shell not all that much farther than the longest-ranged guns of the time could reach. No one would have come up with such an idea on purely military grounds. I'll guess that Versailles restrictions played a role in making the German army interested in alternatives to conventional artillery, but it was the first generation of space geeks, not military specialists, who put long-range rocketry into play.

Yes, nuclear warheads made ICBMs all too practical, but it is no accident that the first generation of ICBMs, both US and Soviet, turned out to be much more suitable as space boosters than as weapons.


Space travel is, like the image above, ultimately all about the Rule of Cool, AKA Romance. This has significant implications. Strong as the practical reasons for not spending zillions on it may be, they have not, so far, succeeded in making the whole silly thing go away. Unless post-industrial civilization removes itself from the social-selection pool entirely, space travel will probably not go away in the midfuture, either.

A comparison can be made here to other Zeerust-era future techs, such as the SST. Supersonic aircraft are also inherently cool, but not that cool. So not only do SSTs fail to offer enough practical benefit to pay for their development cost, they also fail to offer enough coolness to overcome that limitation.

I suspect that people will walk on Mars before airline passengers (again) travel at supersonic speeds.

It may be somewhere between paradoxical and hypocritical for me to turn around and argue this point, considering how much time I spend here beating up on popular space tropes. But I beat up on the PSTs so you won't have to. Romance, in and of itself, need not apologize to realism for anything, but the minor sub-branch of Romance that decks itself out as hard SF has a certain obligation to fake it convincingly, including space futures that sustain at least the illusion that they were invented in this century.




The image was snagged from Google Maps. And here is a genuine example of mysterious British transportation signage. Can anyone here elucidate the meaning?

Saturday, January 1, 2011

Try Try Again ...

Delta IV Heavy launch
Blogger ate most of my last post (possibly my error, not Blogger's), so an updated reconstruction of the point is due.

In fact my original post was a bit unfair to the illustration, as I learned while writing it. In 1899, when that image was presumably drawn, the skyscraper era was only just beginning. The Eiffel Tower had demonstrated that much taller structures could be built, and the Otis elevator ('lift' to some of you) made such structures practical for mundane uses, but no office building in 1899 had yet overtopped the Great Pyramid, something that several medieval church spires had achieved, albeit by narrow margins.

So the artist was not so much projecting a contemporary trend as anticipating one that had not yet taken hold. And - given that the scale of the image is rather loose and impressionistic - they did a pretty good job of capturing the general sense of the 20th century NYC skyline, even though the tallest twin buildings are, apparently, some 5 km high.

Whether or not it really illustrates my intended point, the image is of interest for its own sake, notably the dirigibles and the kilometer-long ship in the East River.


But the intended point, the Linear Fallacy, is the tendency of predictions to project the Latest Trend into the future, whatever that trend may happen to be. In 1968, Kubrick and Clarke projected the decade just past into the future, and so gave us a major interplanetary infrastructure already taking form by 2001.

This blog may also be guilty of the Linear Fallacy, operating in the other direction - presuming that the incremental progress of the last 40 years is the norm.

In fact, tech progress overall seems to be a combination of the two, with accelerando phases interspersed with (generally much longer) eras of incremental progress and gradual refinement. A good historical example is European sailing ships, which underwent an accelerando in the 15th century - the emergence of the full-rigged ship - followed by 350 years of refinement until the Industrial Revolution swept the whole tech into the dustbin of quaintness.

If our path in the next 200-300 years does lie in the direction of extensive human space travel, it will as likely as not take the form of two or three dramatic accelerando leaps separated by longer periods of incremental progress.



The image for today's post, from Astronomy Picture of the Day (h/t Tammy), has nothing whatsoever to do with the subject, but is offered for your viewing satisfaction out of sheer badass coolness. It shows a Delta IV Heavy, capable of sending 9.3 tons onto an Earth escape orbit. Click to appreciate in larger format.

Our current launch technology doesn't do all that we wish it did, but it is certainly AWESOME.