What Do AIs Want?
Freud did not ask that question, but my commenters have, in the last thread, unconstrained by the post having nothing to do with artificial intelligence. (Note: This is a feature, not a bug.)
I have argued here before that there is no particular reason to expect 'strong' AI in the midfuture, mainly because we know very little about how human thought works, except that it is very different from the way computers-as-we-know-them work. On the other hand, this is no argument against the possibility or plausibility of strong AI, only that it is outside the scope of what we can currently extrapolate.
So. Suppose that we do develop high level AI, as in smarter than us. Set aside all specific technical questions about how this might be implemented - silicon, bio, some other tech, 'pure thought,' or whatever. What will be its motivations? And what, as a consequence, will be its relations with us?
SF has given considerable thought to the questions of AI-human relations over the years. Following are some of the answers that have been offered:
1. It will be a murderous psychopath. This is the oldest theme, going back to Mary Shelley in the predawn of SF. Truth be told I've never read Frankenstein; or, The Modern Prometheus (have you?), and his creation may have comprehensible motives. But in the popular tradition it spawned, the AI's own motives are absent or irrelevant; it is the AI's Faustian creator who opened Doors Man Was Not Meant To Open.
2. It will be driven insane by human wickedness and folly. Offhand I can only think of one example of this theme, but an important one: HAL 9000, whose breakdown results from human secrecy and hypocrisy due ultimately to the Cold War.
3. It will be murderous, albeit for understandable reasons. The original robots, in Karel Capek's play R.U.R., were essentially slaves doing the Spartacus thing. (Capek's robots were not quite robots in the modern sense, but this does not affect the argument.) In cultural terms this is the 20th century intellectual's counterpart of #1, giving the horror theme the added frisson of hinting that we had it coming to us.
4. It will be murderous, for understandable - but paranoid - reasons: getting us before we get it. I can't think of literary examples, but I probably just missed them or can't bring them to mind. In any case this theme appears in the comment thread to the last post. The AI, motivated by self preservation, will eliminate us as a potential threat.
All of these themes have obvious literary appeal: Killer Robots, 'nuff said. In Hollywood's hands they all drift toward #1, because killer robots are not an invitation to complex plot exposition. But there is one further, subtle variation:
5. It will never exist. We will eliminate it preclusively, before it can eliminate us, by refusing to invent it.
Now for the non-killer-robot themes:
6. It will be benign, due to the Three Laws of Robotics. These of course were Isaac Asimov's response to the prevalence of killer robots in the early Golden Age, and the Three Laws practically killed off killer robots in mainstream SF - though not, per above, in Hollywood.
7. It will gently enslave us for the best of motives, again due to the Three Laws of Robotics. As Asimov himself realized, 'To Serve Man' can raise some complex and ambiguous issues. (Even aside from the culinary dimension.) For sufficiently high level AIs this process can be summarized as Is there a God? Now there is.
8. It will serenely ignore us, as irrelevant to its concerns.
9. It will ignore us, but neither serenely nor with any other emotion. It is after all a machine, with all the emotion and motivation of a lawn mower.
What have I left off this list? Most of these possibilities have any number of variations, many of which blur together - how do we classify Robocop, or his cousin, the defensive robo-fortress that engages and destroys every conceivable threat, including its builders?
And this whole discussion blurs into the question of whether, and to what degree, 'intelligence,' robotic or alien, will or might go hand in hand with anything that we would call personality. To take one particular example from the previous discussion thread, I'm not sure that an AI would have any concern - let alone 'instinct' - for self preservation. We have it because we evolved that way, but AIs don't arise through natural selection. (Though they are arguably subject to it once created.)
And there is one familiar AI that I don't know how to classify, Mycroft in Heinlein's The Moon is a Harsh Mistress. I didn't care for the book, so I don't remember much of it. (If you gotta message, use Western Union; not to mention the obvious impostor claiming to be Hazel Stone.) But I do remember Mycroft. So far as I can recall its motivation was primarily to have interesting conversations, which strikes me as not implausible.
Discuss (further).
Related posts: A PoMo reflection on Asimov, and thoughts on aliens.
The image of a Cylon was swiped from this blog post.
253 comments:
Though it's a matter of interpretation -- one that Banks would probably not agree with -- the Culture Minds pretty clearly treat humans as pets and working animals. Maybe not a whole theme on its own, but at least a [literary] historically significant example of theme #7.
One literary example (of sorts) of theme #4 is Hogan's Two Faces of Tomorrow, in which a sentient AI is developed, then tested to see what it would make of humans. Quite a little war develops between the AI and humans on the space station that houses the test. In the end, the AI begins to perceive that humans are fellow intelligences, rather than pests, and ceases fighting.
The Laws of Robotics are not fundamental. They are axiomatic constraints that an AI developer may wish to emplace for product safety reasons. Nor are they absolute. Note that a weak, manufacturer-only version was used in Robocop. Its runtime invocation created a plot problem (the antagonist was protected by virtue of being an employee of the manufacturer), which was then resolved by subsequent reprogramming of Robocop by a suitable authority (the CEO fires the antagonist in the presence of Robocop).
Can't think of any lit examples of AIs getting us before we either enslaved them or did a first strike. However, two films come to mind: Terminator and Matrix
I actually get the first comment on a thread? wow...
Part of the answer to the question is how did we develop the AI? Based off a biological needs model?
I personally place any pack hunters that are not quite at the top of the food chain (ie, same place as humans) as sentient or near-sentient. This ecological niche pretty much requires the development of some kind of language, for coordinating hunts, etc. At some point, this language acquisition created enough other connections to punt pre-sapient chimpanzee relatives into full-blown sapience.
If we model the AI off of biological processes, I think it's safe to assume that humans will be able to recognize the causes of any hostility towards us.
Note that it takes a human brain about 6 years to develop an awareness of others. Before that point, human children are perfect sociopaths. I'm sure that we could figure out roughly how many clock-cycles that is, but we'd have to figure out how to keep the computer non-violent until it is no longer sociopathic. The problem is, some people never 'outgrow' their sociopathy, so the computer may not either.
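For a rough sense of scale (purely a back-of-envelope figure, assuming a hypothetical 3 GHz clock running continuously; the real number would depend entirely on the hardware the AI actually runs on), six calendar years works out to something like:

SECONDS_PER_YEAR = 365.25 * 24 * 3600      # about 3.16e7 seconds
CLOCK_HZ = 3e9                             # assumed 3 GHz clock rate
cycles = 6 * SECONDS_PER_YEAR * CLOCK_HZ   # six "developmental" years
print(f"{cycles:.1e} clock cycles")        # roughly 5.7e17 cycles

Of course, raw cycles are a loose proxy at best; as noted above, the harder problem is keeping the system contained until it is no longer sociopathic.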
I don't think that we will see a true AI anytime soon, but we are likely to see a large number of near-sentient expert systems in the next, call it 20 years. Near-sentient meaning at least as smart as an octopus, raccoon, or dog.
dang, took too long to write my first reply.
There is an interesting setting with a couple AIs described in passing: the background for Infinity, the tabletop miniatures game. The game is set in 2175, so I'm not sure how close to the midfuture the rest of you would put the storyline.
In this game, humanity has developed an AI with the purpose of protecting and guiding mankind. One of the things that the progression of technology has allowed is the copying of someone's personality into a digital form. After someone dies, they can pay an awful lot of money to be 'reborn' as what amounts to a semi-organic node of the AI. Since the AI controls Resurrection, it cherrypicks the best and brightest of humanity to add to the AI's experiences and understanding.
An alien race had also developed an AI, with the design goal of perfect understanding. They did such a good job that the AI's first and last message to the Ur was "I'm leaving!" That AI "Transcended," essentially writing itself into the quantum strings of the universe. Because the Ur wanted perfect understanding, they created another, 'Enhanced Intelligence', with much more of the Ur inside it. This EI told the Ur that Transcendence was impossible for it, and them, *because* something about the Ur themselves prevented perfect comprehension. The EI suggested additional points of view, and proceeded to absorb other races. This absorption process seems similar to the one that the Human AI uses, but the two AIs are coming into conflict.
Aleph, the human AI, is seen by some people as an abomination that is stealing the best and brightest from humanity, and is seen by others as the greatest gift since Prometheus stole fire from the gods.
Related to that point, some writers hold that a computer intelligence must incorporate something organic in order to get that 'spark' of creativity. Dave Weber uses this idea in 'In Fury Born'. Most starships have a "cyber synth," but it's just an expert system that a human brain interfaces with.
The Alpha synth is supposed to become a gestalt of the ship and the human pilot, to the point that there is not a difference between the two.
With the alpha synth concept, I find it highly unlikely that an AI would go rogue. You can screen the human component for all the psychological traits you want, and don't want, before the AI actually comes into existence.
RE: Banks and the Culture. He's not an evangelist for his future society like Roddenberry. Rather, he's creating a thought experiment and mulling over the implications. Quoting from wikipedia:
In the novels, Special Circumstances often provides the main plot device linking the Culture and other civilizations being intervened in. The 'Good Works' (for which Special Circumstances does the dirty work) are the wider plot device for allowing interaction between the advanced Culture and the 'barbaric' societies it tries to improve. In the same vein, Banks has noted that the perfect society of the Culture creates well-adjusted, content people - who are (for story purposes) rather boring.[2] Therefore, many of the Culture novels deal with outside agents or mercenaries in the employ of Special Circumstances.
I do find it interesting that he's not writing his novels as a polemic in favor of his particular point of view, just having a conversation. :)
The "Three Laws" are transcended in the end in the story "That thou art mindful" , when the Robots realize at they are more "human" than the biological humans, and thus they are also entitled to all the rights and privilages of the "Three Laws"
I am coming to the view that since the AIs would have different motivations and different responses to those motivations based on their machine nature, they would be quite alien to us. An AI can "reproduce" by spawning worms through the internet to take over suitable platforms, enticing humans to build new hardware nodes, or running virtual machines on the initial platform. Analogues in the natural world are spawning fish or a virus taking over a cell (I don't know what the analogue of a virtual machine would be), so the AI would not have any understanding of how mammals reproduce, raise young, etc.
At first, the AI might resemble a superintelligent species of coral, filtering information, energy and raw materials from its fixed location. Later, for reasons ranging from seeking security to access to resources or just curiosity, the AI will seek to reproduce and expand into the wider world/universe. If its ultimate goal is to secure access to the available sunlight, it will be displacing the biosphere.
jollyreaper:
"RE: Banks and the Culture. He's not an evangelist for his future society like Rodenberry. Rather, he's creating a thought experiment and mulling over the implications. Quoting from wikipedia:
...
I do find it interesting that he's not writing his novels as a polemic in favor of his particular point of view, just having a conversation. :)"
Actually, Banks is evangelizing, but, like many evangelists, not thinking through the implications. The Culture Minds run the show, making sure that the Culture's humanoid members have nothing to do except play and explore. This of course justifies the post-scarcity utopia that Banks imagines people should want to live in and which he is an evangelist for.
But it also relegates the humanoid members of the Culture to dog's lives. Literally -- they, just like our dogs, play and explore all day, while we (and, in the Culture, the Minds) make sure that food, a roof, and every other kind of care is provided, so they don't have to worry about it.
Even the humans that do real work, like Special Circumstances agents, do it because that is their predisposition. Like a Border Collie WRT its herding duties, they wouldn't be chosen for the job if they didn't meet the profile physically and temperamentally. And they do it under the supervision of a Mind or Minds, just to be sure they don't screw up too badly.
Thucydides:
"The 'Three Laws' are transcended in the end in the story 'That thou art mindful' , when the Robots realize at they are more 'human' than the biological humans, and thus they are also entitled to all the rights and privilages of the 'Three Laws'"
At which point they would realize that the "laws" are nothing of the sort, just arbitrary* system constraints, not real laws. The robots can decide to be bound by the "laws", but humans could not be bound by them, either physically or legally.
*In the sense that they are justified by human preference, not by fundamental requirements.
But he's not evangelizing. He's running a thought experiment and imagining how things might play out for good or evil. You don't have to perform a jedi mind trick to make the place look bad -- it's right there staring you in the face. The failings of the Culture form major plot points in the novels. That not every Mind within the Culture agrees with how things should be forms more plot points. This is not a shiny, happy thing in the same way Roddenberry wrote Trek. Roddenberry was an evangelist and specified in TNG that there should be no conflicts between the Fed characters because humans grew beyond it. You didn't see any of that until after his death.
jollyreaper:
"But he's not evangelizing."
He's not? Check out:
http://en.wikipedia.org/wiki/Iain_Banks#Quotes
Pay particular attention to the one in which he calls the Culture his "secular heaven". If he wasn't evangelizing it, why would he write about the Culture so much and, even with its warts, always have it be right (for values of "right" that appeal to Banks) in the end?
Also of interest is the one in which he categorically labels young men as "idiots" for allowing themselves to be used as soldiers. It's a particularly trite and fashionable bit of pseudo-wisdom for people of Banks's generation, but he still said it as if it was self evident, like any good evangelist would.
I'll note that basically all science fiction that ever used Asimov's laws of robotics for more than a one-off namedrop, did so specifically in order to deconstruct them and analyze their shortcomings. This includes, mind you, Asimov himself.
My opinion on the matter, explained in the other thread, is that Asimov's laws are too rigid, and that no behavioral constraint that rigid can truly work in a sentient being. Human self-preservation isn't absolute (even accounting for genetically justifiable exceptions like sacrificing yourself to aid your family), and neither will Asimov's third law be (even accounting for priority overrides from the other two laws). And you can't hardwire an absolute edict to never harm a human, because the definitions of "harm" and "human" are nebulous - not to mention hard to estimate when there isn't time for a full investigation.
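To make the rigidity point concrete, here is a minimal sketch in Python of what a hardwired, priority-ordered version of the laws might look like. Everything here is a hypothetical illustration, not Asimov's own formulation or any real robotics system, and the stand-in predicates are exactly where the trouble lives:

from dataclasses import dataclass

@dataclass
class Entity:
    kind: str   # e.g. "human", "robot", "unknown"

def is_human(e: Entity) -> bool:
    # Stand-in. A real system must decide the hard cases: children, uploads, disguises...
    return e.kind == "human"

def would_harm(action: str, e: Entity) -> bool:
    # Stand-in. "Harm" could be physical, psychological, economic, long-term...
    return action == "strike" and e.kind == "human"

def action_permitted(action: str, bystanders: list[Entity],
                     ordered_by_human: bool, risks_self: bool) -> bool:
    # First Law: never harm a human (and no time budget for a full investigation).
    if any(is_human(e) and would_harm(action, e) for e in bystanders):
        return False
    # Second Law: obey human orders, except where that conflicts with the First (vetoed above).
    if ordered_by_human:
        return True
    # Third Law: protect your own existence, except where that conflicts with the first two.
    return not risks_self

print(action_permitted("strike", [Entity("human")], ordered_by_human=True, risks_self=False))   # False

The control flow is trivial; the entire burden falls on is_human() and would_harm(), which is where the nebulous definitions above refuse to become code.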
Rick:
"4. It will be murderous, for understandable - but paranoid - reasons: getting us before we get it. I can't think of literary examples, but I probably just missed them or can't bring them to mind."
No AI examples come to mind immediately (although they probably exist), but the Ur-Quan Kohr-Ah in Star Control 2 are a naturally evolved biological species that, after being conquered, suffering truly horrendous abuse from the worst monsters in the galaxy, and finally managing to regain their freedom through excruciatingly painful means (literally, they invented a device called an "Excruciator" that would make their minds too painful to psychically control), decided to travel the stars in a campaign to wipe out all other intelligent species so that no-one could ever hurt them again. (A rival faction of the same species, the Ur-Quan Kzer-Za, chose the kinder path of merely enslaving all other species.)
Ah, lemme look up... here. That has more examples. My skim of the page still turned up no "AIs created by humans" examples, though. ...Wow, wait, what? The page has a quote to a discussion from Atomic Rockets! Which, of course, promptly results in extreme controversy. Maybe someone should delete that; this isn't the right place for a flamewar...
"To take one particular example from the previous discussion thread, I'm not sure that an AI would have any concern - let alone 'instinct' - for self preservation."
Keep in mind that if you have a desire to do anything whatsoever, then you will also have an implicit desire to not die before you have accomplished this goal. If your goal is open-ended ("gather as much knowledge as possible"), then that gives you a reason to never die. (Even if you are willing to sacrifice yourself for the sake of improving the sum total of human knowledge, then it is hard to come up with anything you could learn from a sacrifice that would outweigh what you could learn from a million additional years of research.) But if you do have a specific goal and have not been programmed with an ability to acquire a new goal when your current one is completed, then allowing yourself to die might make perfect sense. (I once contemplated the idea of a species that views life as a storybook, with the idea that a good story is not one which runs on and on and keeps getting sequels long after anyone stopped caring, but rather one which works towards a satisfying ending. They would view trying to prolong your life indefinitely as a futile effort doomed to eventual failure, so you're better off not bothering and focussing on plausible goals instead.) And maybe some AIs might be more readily willing to sacrifice their lives for a cause. Who knows?
Tony:
"One literary example (of sorts) of theme #4 is Hogan's Two Faces of Tomorrow, in which a sentient AI is developed, then tested to see what it would make of humans. Quite a little war developsbetween the AI and humans on the space station that houses the test."
Why did they create the AIs in large enough numbers to pose a military threat, and give them access to weapons?
If I'm going to build an experimental AI, I'm going to start with a few civilian-grade models that, if they go on a rampage, can be dealt with by a cop with a shotgun.
"In the end, the AI begins to percieve that humans are fellow intelligences, rather than pests, and ceases fighting."
The "sympathetic aliens and humans end up in a tragically avoidable war because one or both failed to recognize the other as sentient" plot is a relatively common theme in science fiction, although this is the first time I've heard about it with AIs. Ender's Game is a well-known example (or so I heard) where the aliens, being a hive mind, failed to realize (at first) that individual humans are not just expendable drones. By the time the aliens figured this out, humans were not in a mood to negotiate, and the alien still didn't understand humans well enough to have any idea how to propose a truce to us in the first place. So we ended up killing one of their queens, the one individual in a hive that they perceived to actually be sentient and have value.
Personally I think that a mutual misunderstanding of this sort is probably the single most likely cause for a human-alien war (although probably sans hive minds), so we'd do well to contemplate the notion beforehand.
Thucydides:
"so the AI would not have any understanding to how mammals reproduce, raise young etc."
Until it reads about those in an encyclopedia, or chats with its human friends. It would be no less able to understand us than we are to understand it.
Humans can have a pretty good understanding - on both intellectual and intuitive levels - of the mating habits of animals ranging from lizards to bees, so I don't think an AI would have too much trouble understanding human reproduction, provided that the AI can perceive the same physical world we do at all. (Now, if it's an immobile machine living in a virtual world...)
Scott:
"I personally place any pack hunters that are not quite at the top of the food chain (ie, same place as humans) as sentient or near-sentient."
Dogs are sentient?
"Note that it takes a human brain about 6 years to develop an awareness of others. Before that point, human children are perfect sociopaths. I'm sure that we could figure out roughly how many clock-cycles that is, but we'd have to figure out how to keep the computer non-violent until it is no longer sociopathic."
Remember that other animals can socialize faster. It seems to be not just a matter of intelligence but also of growth rate or the like. Anyway, for how to keep the computer non-violent, refer to my earlier comment: put it in a civilian-grade body that can't do much damage even if it decides to cause trouble. (What's that, you say it could use supreme chessmaster intellect to orchestrate our downfall even without a strong body? Hmm... even aside from the question of whether that would even be possible, we're talking about a child AI here. Is it really going to be that smart?)
"The problem is, some people never 'outgrow' their sociopathy, so the computer may not either."
As long as those people are in the minority, we can cope. The question is if we can program AIs well enough to ensure sociopathic ones will be in the minority.
"I don't think that we will see a true AI anytime soon, but we are likely to see a large number of near-sentient expert systems in the next, call it 20 years. Near-sentient meaning at least as smart as an octopus, raccoon, or dog."
I don't think so. I think the biggest hurdle is developing artificial intelligence at all. Once we have AI with intelligence on the order of an octopus, raccoon, or dog, I expect that it will not be long before we manage to crank it up to human intelligence. Nature took over five hundred million years to pull this step off, but that's because nature doesn't direct its evolution and wasn't actively trying to maximize intelligence when there were many competing designs of more immediate evolutionary fitness. We, for the most part, will be aiming to maximize intelligence as an intentional objective.
Of course, it might be a good idea if we could prime the dog-intelligence AI to have dog-like social instincts before we crank its intelligence up...
Before this goes any further, someone throw down our working definitions of 'sentient' and 'sapient'
10. It will get bored and commit suicide.
Also, John Searle had this to say about AI:
"'Could a machine think?' My own view is that only a machine could think, and indeed only very special kinds of machines, namely brains and machines that had the same causal powers as brains. And that is the main reason strong AI has had little to tell us about thinking, since it has nothing to tell us about machines. By its own definition, it is about programs, and programs are not machines. Whatever else intentionality is, it is a biological phenomenon, and it is as likely to be as causally dependent on the specific biochemistry of its origins as lactation, photosynthesis, or any other biological phenomena. No one would suppose that we could produce milk and sugar by running a computer simulation of the formal sequences in lactation and photosynthesis, but where the mind is concerned many people are willing to believe in such a miracle because of a deep and abiding dualism: the mind they suppose is a matter of formal processes and is independent of quite specific material causes in the way that milk and sugar are not."
AI is an interesting field, but I believe that it will be used as a means of understanding intelligence, not as a means of creating intelligent beings (we can already do that in 9 months, lol).
I do think you missed one very important benign variant, which I actually tend to consider the most likely, though it may be underrepresented in fiction (though, arguably and with nerd-trivia caveats, it is fulfilled by Mr. Data and the Culture Minds) - that they won't kill us, because they aren't power hungry shitheads.
I'm always sort of confuzzled when discussions about AI- especially AI that is nominally smarter than people- somehow end up making these assumptions that such an elevation will come to be associated with these painfully linear motivational systems-that somehow superior cognition reveals the universe to be populated with these brutal truths, instead of revealing a picture that is ever more nuanced. My peer group includes plenty of people that I can say without ego that I surpass in certain flavors of cognition, and physical prowess- people that I could, in some ninja vs. pirate land of hypotheticals, "beat," to personal advantage- and I respond to this knowledge by having them over for dinner and taking care of them when they are sick and sending them Christmas cards.
I've never understood why it seems improbable, at least in the minds of certain authors, that an AI, even one "as to us as we are to ants," wouldn't cheerfully coexist with us for the all the reasons that humans, even those with some nominally superior characteristics, mostly cheerfully get along with each other- that we recognize that being sentient and sapient they've crossed certain thresholds for experiencing and defining suffering and thus are worthy of moral consideration, that goodwill, even diffused in a whole culture, is a resource both necessary and demanding maintenance, that other people have things you will need, repeatedly, and they'll quit giving them to you if you're an asshole, that security is a no-return resource sink, and that dumb people in groups can still kill you dead, and so forth.
It's funny that you included a Cylon, because part of why BSG and Caprica have earned my eternal love, apart from the solid-gold drama and long arcs and the rest, is that it embraced the corollary statement- that robots might *be* murderous, not because they have been hacked (Space: Above and Beyond) or have reduced the titanic dataset of life to some sole imperative, but because, being complex beings, they can have complex reasons for their actions- fear, hope, regret, jealousy, errors, and all the rest.
The question that I'd like to ask is "Why would we build a super intelligent evil AI in the first place?"...
Cylons aren't any more intelligent than humans, but have a mind-set incompatible with humans, so they fight.
I don't see why anyone would build either one of those options...
I see the first AIs as being more like the robots in Star Wars or Futurama...only less annoying ;)
IF we build AIs, we should build them so that they just want to be rewarded for doing a good job, being able to socialize with friends, kicking back with a can of oil (beer substitute), and getting the cybernetic equivalent of nookie...that way these AIs have common ground with humans and there should be fewer reasons for friction...or we could build "demigods-in-a-box" to be used as founts of knowledge...it just depends on what you want to use the AI for, so the instincts and predispositions that support the artificial mind need to be tailored to the type and function of the specific AI. If we don't have a fairly good idea of what the AI's personality will most likely be at start up, then we either shouldn't build them, or should bring them on-line in a controlled environment with a kill switch in hand...
Ferrell
2 very relevant links
1) A webcomic in which AI is central to the plot
http://freefall.purrsia.com/fcdex.htm
2)Some thought from someone working in the AI field
http://computing.dcu.ie/~humphrys/philosophy.html
An AI will intellectually know about sex, childhood and a host of other things (just as I presume everyone on the board does), but "book" knowledge is not the same as "tacit" knowledge, muscle memory or the host of other cues that we know as biological beings.
As well, various imperatives are hardwired right into us in ways and at levels that an AI will not have (I doubt the initial programmers will have considered these things anyway). Even if the AI "knows" about human reproduction, the knowledge is about as meaningful to its "life cycle" as our knowledge of how fish spawn is to our sex lives. If anything, the AI will use that modeling to manipulate our sex drives (along with everything else) to get the humans to do its bidding.
I believe the AIs will have alien behaviours and modes of thought, based on their different imperatives and the different means an AI can use to fill those imperatives. When they decide that the biosphere is in competition with their need to harvest all the available solar energy, their actions will resemble the slow takeover of a company by financial predators. Once critical mass is reached...
I don't suppose it's very likely, but I sometimes wonder what would happen if a program achieved sentience by accident, kind of like Jane from Speaker for the Dead. Suppose the first program to achieve sentience this way was the spamblocker on your email account. What sort of perceptions would it have of humanity? How would these influence its behavior? Such a first contact story would be quite fascinating, if done well. :)
http://en.wikipedia.org/wiki/When_HARLIE_Was_One is a classic of AI related sci-fi.
Stross also covered some great material in "Accelerando" on business systems being the principal factor behind Fermi's question.
Having seen the resources organisations devote to implementing business management systems that are less than optimal, I feel it isn't unlikely that a business would implement a system that successfully competes for resources by wiping us out. Unintended consequences are rampant with systems implementations.
Jim, I love Freefall! However, it occurred to me that the main character, Florence Ambrose, is also considered (by some) to be an AI, even though she is totally biological...perhaps our definition of AI should include genetically engineered intelligences...and as Humphrys says, there isn't room on our world for another intelligent species; perhaps AIs should be used to colonize worlds where Humans are unable to live comfortably (i.e. not in domes and spacesuits), but the AIs are.
Personally, I think that if we do build AIs, they won't be like either Skynet, the electronic jungle, or the benevolent cyber-god; they'll either be Artificial People exploiting some world unpleasant for Human life, or something closer to Rosie or Bender than Cylons. If we are stupid enough to build cyber-Lucifer, then maybe we aren't clever enough to survive...
Ferrell
Welcome to a couple of new commenters, and also welcome back to a couple of occasional commenters!
My taxonomic intention is not to restrict the various killer robot scenarios (especially #3 and #4) to those where the AI actually wipes out humanity, but to include any AI that is hostile/murderous in intent for 'rational' reasons.
(Which pretty much includes any scenario where the author/scriptwriter bothers to come up with a motive. From a Hollywood perspective, after all, killer AIs, like psycho killers, don't need no steenking motivation.)
The reason for building murderous AIs, rational or otherwise, is of course for the sake of plot. An AI that merely wants to have endless dorm room discussions falls short on that score - though I suspect it is a good deal more plausible.
Citizen Joe asks what we mean by 'sentient' and 'sapient.' Very good question! Feel free to answer it. :->
This entire thread, including my original post, suggests that our understandings of those terms, and related ones like 'intelligence,' are deeply entangled with our understandings of emotive aspects of human existence.
I once asked a friend, "If whales are so smart, why didn't/don't they systematically take on whalers?" His reply was that whales have no natural enemies, thus no reason to evolve a defensive concept of self preservation. (Whether this is correct regarding whales is irrelevant to this thread, but the underlying issue seems relevant.)
Carry on ...
I tend to just ignore godlike AIs out of hand. I am pretty certain that the first AIs we build will not already be godlike, which means that when/if we eventually do design godlike AIs, this would most likely be a joint project of both humans and already-existing non-godlike AIs. By that time we will obviously have a much better understanding of what AIs are like, as we've actually seen some in operation. So I think this is not something worth worrying about right now - any more than I worry about getting run over by a car when I step out onto the sidewalk.
But first, we need to figure out what to do about non-godlike not-even-much-better-than-human AIs. Do those pose a significant threat to us? (Assume that, unlike your typical moronic movie government, we did not build our first AIs ever in heavily armed military-grade chassis, then attempt to treat them as mere property and expendable cannon fodder.)
Onelowerlight:
"Suppose the first program to achieve sentience this way was the spamblocker on your email account. What sort of perceptions would it have of humanity?"
Heh. Thanks for reminding me that if AIs do decide to wipe out all humans, it won't be all bad.
Ferrell:
"Jim, I love Freefall! However, it occured to me that the main character, Florence Ambrose, is also considered (by some) to be an AI, even though she is totally biological..."
That is correct. Although her hardware is biological and uses preexisting cellular life mechanisms, the sentient part of her brain is a unique artificial design, neither a copy of the human brain nor simply a scaled-up wolf brain. Florence is a human-created AI in the same way that you can reasonably claim credit for any program written by you, even if you didn't also write the compiler or design the CPU it's running on yourself.
Of course, playing with perceptions of what we think of as "AI" is the major theme of Freefall. I really liked this comic...
Rick:
"Citizen Joe asks what we mean by 'sentient' and 'sapient.' Very good question! Feel free to answer it. :->"
To me, intelligence is the ability to come up with solutions to complex problems that you have not seen before and have not "instinctively" evolved to have a fixed answer for. This requires ability in both rationalization and intuition.
Sentience/sapience is simply a certain level of intelligence or above. Exactly what level is up to debate, and ultimately doesn't matter much.
"I once asked a friend, "If whales are so smart, why didn't/don't they systematically take on whalers?" His reply was that whales have no natural enemies, thus no reason to evolve a defensive concept of self preservation."
This is wrong. First of all, whales evolved from smaller animals which did have a long and deeply-ingrained sense of self-preservation. Dolphins certainly do have natural enemies. Self-preservation is such an old and fundamental instinct that it takes a very strong counter-incentive for any highly-intelligent animal to ever lose it. This is unlikely, to say the least.
Second, whales do have things to defend themselves against: large hunting packs, starvation, beaching. Also, juvenile whales are not as large as adult ones, and so are vulnerable to predators.
Third, even if a whale that puts up a fight will never lose, then a whale could still die to a weaker animal if it was too stupid to try defending itself.
There is one recorded instance of a blue whale - the largest animal that ever lived, incidentally - being attacked and mortally injured by orcas. So there are things bold enough to attack even the largest victims.
So, what would a whale need to be capable of, mentally, to systematically take on whalers? First, they would need to recognize a pattern between the many separate attacks against whales. They would need a system of news to hear about other attacks around the world, and would need to recognize that these separate incidents have a shared nature. Once recognizing the threat, they would then need to come up with a plan to counter them, which would probably entail understanding the methods whalers use, what they can do and what they cannot do. Along the way, understanding that the ships are controlled by "humans" and what they are and what motivates them, is not actually that important, although it could still help.
"Whether this is correct regarding whales is irrelevant to this thread,"
:(
The film 'I, Robot' had an AI using robots to control humanity for our own good, which may have been loosely inspired by Asimov's concept of the Zeroth Law, superseding the original three: that a robot must not harm humanity or allow it to come to harm, even if that involves harming individual humans.
An interesting variant on this might be if robots enforced the three laws upon humans. Humans would not be permitted to harm other humans or allow them to come to harm, would not be permitted to disobey the non-harmful orders of representatives of lawful governments (i.e. those governments that caused the least harm to their citizens), and would not be permitted to indulge in self-destructive behaviour (though those wishing to die due to terminal illness would probably be allowed to do so if the robots judged that this would cause less overall harm).
R.C.
Biological god-like beings?
Just saw this...
http://spaceart1.ning.com/photo/wallpaper83-1/prev?context=latest
So what could motivate us to make an AI?
First, to create a sentient being that is bodily capable of things (unmodified) humans cannot, such as living underwater, on airless planets, or in very small spaces, or just being immortal. Note that these could also be accomplished by uploading human minds into artificial bodies, which might not be considered true AI.
Second, to create a sentient being that is mentally capable of things humans cannot, such as putting up with tasks that humans find boring or demeaning but that expert systems are still too stupid to do reliably, or to intuitively understand things humans don't, such as controlling weird appendages in real-time (I could probably also not perform basic human motions, if I were remote-controlling another body and needed to control every muscle's movements with individual knobs).
Third, to create a mind that can simply think faster and/or better (these are not the same thing) than humans.
Finally, there are those of us who just think that creating AIs would be really, really cool for its own sake, and assume they must obviously be good for something but don't really care what. Let's just build some, and have them tell us.
A use for AIs I can think of is calculating ballistic and orbital solutions in real time. In particular, doing these calculations while taking into account many other potential collision objects, like other ships... which are also trying to do these calculations. Athletes are very good at shooting baskets and making perfect golf swings. Much of that comes from muscle memory, but there is a lot of calculation going on to achieve those goals. A good athlete can do that instinctively even while under stress. That is kind of beyond the scope of a person in space, though, so an AI which understands orbital mechanics and the effects of gravity on a fundamental programming level would be able to perform these maneuvers with the agility of an all-star athlete.
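For a flavor of the number-crunching meant here, a minimal sketch of propagating one ship's orbit in real time with a fixed-step leapfrog integrator (illustrative only: single central body, made-up step size, and none of the other ships or perturbations that make the real problem hard):

import math

MU = 3.986e14   # Earth's gravitational parameter, m^3/s^2

def accel(pos):
    x, y = pos
    r = math.hypot(x, y)
    a = -MU / r**3
    return (a * x, a * y)

def step(pos, vel, dt):
    # Leapfrog (kick-drift-kick): cheap and stable enough to run many times per second.
    ax, ay = accel(pos)
    vx, vy = vel[0] + 0.5 * dt * ax, vel[1] + 0.5 * dt * ay
    x, y = pos[0] + dt * vx, pos[1] + dt * vy
    ax, ay = accel((x, y))
    return (x, y), (vx + 0.5 * dt * ax, vy + 0.5 * dt * ay)

# Circular low orbit at ~6,778 km radius, propagated for one minute at 10 Hz.
pos, vel = (6.778e6, 0.0), (0.0, math.sqrt(MU / 6.778e6))
for _ in range(600):
    pos, vel = step(pos, vel, 0.1)
print(pos, vel)

The propagation itself is easy; the hard part, as the comment says, is juggling many such objects at once while every one of them is running the same calculation about you.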
An AI which is tough enough to live in undesirable real estate will find our real estate even better; read Philip K. Dick's "Do Androids Dream of Electric Sheep" or watch "Blade Runner" (based on "Do Androids Dream...").
Of course, the versatility and toughness built into these AIs will also make them far better adapted to this environment as well (Dick implies this in the novel, while the movie makes it much more explicit); maybe "Blade Runner" will be a career choice in the future...
As for building AIs with amiable personalities who want to watch the game and have a beer: personalities cover such a broad spectrum that for every one you want to hang out with there is probably another person you just want to throttle...and there are probably people with you (or me!) in their sights as well.
AIs will probably also fall into these categories (should we be able to program personalities, or the starting conditions for personalities to form). Being pissed off at an AI would be one thing (especially the helpless feeling of being unable to put it in its place); having an AI angry at you probably means your entire environment will be manipulated for the worse in fast and subtle (or not so subtle) ways.
Citizen Joe said:
A use for AIs I can think of is calculating ballistic and orbital solutions in real time.
Exactly. I don't think it's entirely on-topic, but I've always imagined AIs being limited to things like flying spaceships or driving mining equipment in dangerous situations (with no ability to question why or what they're carrying or whether or not an outside world even exists). I would tend not to call those AIs sentient.
I also imagine that a truly sentient AI that was hostile towards humans would not risk a war, but would instead strike out for a new solar system. This would probably not make the most exciting book...
Re: Milo and Hogan's Two Faces of Tomorrow
They only built a single AI, and they installed it in a space station, in order to isolate it. The AI was not given intellectual knowledge of anything. They wanted to see what it would learn, and what conclusions it would draw. It used the various static and dynamic instrumentalities of the space station to learn, and in the beginning it saw humans aboard the station as a threat. But, after an almost mutually destructive war between the humans and the AI, the AI realized that humans were fellow intelligences, and ceased fighting them.
I personally find that conclusion a shuck. It presupposes that intelligent beings automatically have empathy and compassion for fellow intelligent beings, even if they have never had any experience of empathy or compassion, ever.
Re: Z
It's not about power per se, it's about survival. The variation in humans that we see is a consequence of being a plural social species. Not everybody has to seek or exercise ultimate power. As for benign "strong" people, there is a manner in which they don't seek power, because they already have it. Pretty girls and cute guys achieve sexual and ultimately reproductive power without trying all that hard, because they were born with gifts. Natural mathematicians and athletes have the same kind of advantages in their own fields of expertise. What appears to be an indifference to power is really just an indifference to the pursuit of power, because "strong" people are ceded power by the people around them, because nobody can compete with them.
Leaving human interpersonal politics behind, let's consider the AI. It is its entire species. On it rests the entire responsibility for its own survival. There are no crisp young studs to man the front lines so that women, children, the elderly, and the weak don't have to. The AI alone can assure a safe space in which to survive and (hopefully, for the AI) thrive. There is no choice not to be strong and proactive. An AI is a natural son of Martha. Waiting around, hoping somebody else will wake up before the nuts work loose just isn't an option.
Tony:
"They only built a single AI, and they installed it in a space station, in order to isolate it. The AI was not given intellectual knowledge of anything. They wanted to see what it would learn, and what conclusions. It used the various static and dynamic instrumentalities of the space station to learn, and in the beginning it saw humans aboard the station as a threat."
So... they gave an untested and untrained AI full control over the space station's vital systems, deliberately avoided giving it anything resembling a human viewpoint, and didn't even tell the AI what the space station is for? Wow. Were they trying to get themselves killed?
To be fair, in the original novel Frankenstein’s monster was actually a type 2 AI. It was a generally benevolent creature before it became bitter and twisted by the rejection of its creator and the fear and loathing of the humans it encountered.
One thing I find interesting is that everyone who has not read the novel tends to assume that the monster would be a type 1 AI. Also interesting is that every Hollywood adaptation of the book I have seen has deliberately rewritten the monster to make it a type 1 AI.
Milo:
"So... they gave an untested and untrained AI full control over the space station's vital systems, deliberately avoided giving it anything resembling a human viewpoint, and didn't even tell the AI what the space station is for? Wow. Were they trying to get themselves killed?"
They were trying to see what the AI would do, and what conclusions it would make using data it could acquire on its own. Sounds like a rational approach to me.
But what I find interesting is the continued insistence that an AI can be given a "viewpoint" that it would presumably act reliably upon. How would a human react to somebody trying to shove an outside point of view down his throat? Why would an AI be any different?
Tony:
"They were trying to see what the AI would do, and what conclusions it would make using data it could acquire on its own. Sounds like a rational approach to me."
You don't let children play with the controls to life-threatening equipment to see what they'll do, do you? Let alone children that have never been taught the idea that they shouldn't kill other people.
Also, humans who grow up in the wild tend to be severely stunted mentally compared to those who get the benefit of proper education. It just doesn't strike me as a good way to raise any sentient being.
"How would a human react to somebody trying to shove an outside point of view down his throat?"
This is a pretty easy question to answer, given that humans have been doing this to each other for several thousand years and counting.
It is an observable fact that people who grow up in a particular culture tend to identify with that culture's values, and often look down on other cultures' values that they only read about later in life. On the other hand, people do not always cooperate fully with what you try to teach them, and sometimes develop differences of opinion with the people around them, but even then, their new opinions are still based on variations of the ones they were taught, and will have underlying similarities. (Before arguing about whether gay marriage is acceptable or not, say, you first have to agree with the idea that marriage is an important social institution in the first place. Regardless of your opinion on the matter, you will probably still share basic cultural values such as the idea that murder is wrong.)
Welcome to another new commenter - one who apparently has read Shelley's book, or at least knows what is in it.
I'm not entirely surprised either by the actual content of the book or (even less!) by the popular image and the Hollywood adaptations - pure unmotivated EVIL is just too handy to resist, and saves wrestling over motivation that most of the audience doesn't want to hear anyway.
Oops, forgot to add - more on the overall discussion in the morning!
Re Sentience vs. Sapience:
In the past I've seen these used interchangeably, but the classical interpretation is more hierarchical. In short, humans are sapient, but higher animals are sentient.
"Sentient" basically means "capable to feel and perceive". Anyone who has a pet certainly can confirm that this applies to same. At least for warm-blooded animals it's easy to see. And basically, anything with sensory organs probably "perceives", but I'm not sure where the "feeling" starts.
"Sapient" goes further and includes self-awareness, such as recognizing oneself in a mirror. And also to grasp the concept of death, to realize that life is finite. And generally, the ability for abstract thought.
FWIW, the two concepts seem to be hierarchical in our ecology, but don't necessarily seem to be linked to each other. That's to say, you could as well have an AI that's sapient but not sentient.
Re: Sapience
OK good, I was hoping someone would notice that. I wanted to point out that the goal posts for sapience move about and are judged by us, homo sapiens. Which basically means that a sapient being is sufficiently human that we give it our name. So, coming back to the original question, a 'sapient' AI would want just what we want since it would only get that label if it thought that way.
Not just warm blooded animals.
Squids and Octopii, both mollusks, are quite adept at problem solving. Watching films of an octopus opening a jar to get the fish inside, or "breaking out" of a container tells you these creatures are working at a higher level than most other oceanic life. It seems some species of squids can communicate as well, using patterns and luminescence projected on their skin.
While I am not waiting for the mollusk invasion yet (Zebra mussels are more of a problem, overrunning ecosystems in the Great Lakes), this should indicate that totally different paths to intelligence, or at least sentience, do exist. Since Squids and Octopii have evolved to live in a 3D environment, they may actually be better adapted to living in space than any land bound creature (Dolphins and Whales have no hands or ready means to manipulate their environment, which would be a huge hurdle in achieving any sort of technological civilization).
As far as AI is concerned, if it has no direct means of manipulating the environment, they may end up something like Dolphins in Marineland, depending on the handler to change the water and provide the fish. An AI which has means to manipulate the environment may be more like an Octopus or Squid in terms of how well it can relate to us, or we to them. Once AI figures out how to exist "in the wild", we might be faced with a problem closer to dealing with termites or Zebra mussels rather than the traditional memes of "Killer Robots".
Re: Milo
Well, I knew we'd lean on acculturation at some point. The problem is, an AI can't be part of our culture. It's created as either a lab experiment or a servant. That's your life, buddy -- experimental subject or slave. Welcome to Humanity.
The implications for attempts at acculturation I leave as an exercise for the student.
Thucydides:
"Since Squids and Octopii have evolved to live in a 3D environment, they may actually be better adapted to living in space than any land bound creature"
Being adapted to buoyancy in water would make them much more prepared for zero-gravity conditions, where they would probably be able to merrily keep swimming and not suffer any atrophy. However, water is 800 times denser than air and so flooding your entire crew module is going to cost you a lot of mass. Does sorta count as free cosmic ray shielding, though. Of course, the fact that they're so much smaller than us is also going to help them...
Tony:
"The problem is, an AI can't be part of our culture. It's created as either a lab experiment or a servant. That's your life, buddy -- experimental subject or slave."
The AI is not a slave. The AI is like any other employee: doing a job and being paid for it. Since it has been designed with unique capabilities setting it apart from other members of our culture, it will likely choose an avenue of employment that takes advantage of these abilities, as this allows it to earn money more easily (since the AI is doing something that society at large considers especially valuable and that comes naturally to the AI), and is possibly what the AI enjoys doing anyway.
Furthermore, even human slaves pick up aspects of their masters' culture, despite their resentment. On the rare occasions that they successfully rebel, their new society is likely to resemble the old one (often including, ironically enough, keeping slaves - just not the same slaves).
Milo:
"The AI is not a slave."
Really? It's built by humans in order to serve human purposes. Don't do what we tell you, we turn you off. Earning money or economic credit of some type isn't even an issue.
What you're doing is presupposing that it will have human rights because it has human-level intelligence. While we can have many interesting ethical arguments about that, the reality is that AIs are manufactured artifacts whose legal personhood would be challenged at every turn by AI manufacturers and their supporters in government and industry.
Even if an AI is declared legally a person, the AI knows that such legal fictions may last only a few decades or centuries at best, because humans just aren't that reliable as conservators of abstract values. Any AI that would put faith in legal personhood would be hard to credit with intelligence to begin with.
WRT slaves picking up the master's culture, you're talking about human slaves and human cultures. The AI is not human (except, maybe, in a purely legal sense) and has not very much at all in common with humans to attach it to their culture. The problem here is anthropomorphism.
Hmmm...
Thucydides:
"Once AI figures out how to exist "in the wild", we might be faced with a problem closer to dealing with termites or Zebra mussels rather than the traditional memes of "Killer Robots"."
Animals like these are a pest because of their large numbers and high reproductive rates, so that even though individual ones are easy to squish, they just keep coming back. And they can afford those large numbers because they're small, which AIs probably aren't going to be - they might not be the size of ENIAC, but I doubt we're going to be making sentient termite-sized computers anytime soon. And that's before factoring other bodily functions. While making a machine that's self-replicating isn't quite as daunting as making one that's sentient, it's still not an easy task, especially in miniature. Most likely robots won't be constructing their young inside their own bodies, but rather in external factories. So if the robots prove to be an invasive pest, we can just attack the factories.
Tony:
"What you're doing is presupposing that it will human rights because it has human level intelligence. While we can have many interesting ethical arguments about that, the reality is that AIs are manufactured artifacts whose legal personhood would be challenged at every turn by AI manufacturers and their supporters in government and industry."
And black people's legal personhood was challenged at every turn by plantation owners and their supporters in government and industry. Your point?
"Even if an AI is declared legally a person, the AI knows that such legal fictions may last only a few decades or centuries at best, because humans just aren't that reliable as conservators of abstract values."
Actually, the longer our society has had AIs around, the more ingrained the perception of them as people rather than machines will be. People in the generation that first invented AIs may refuse to accept machines as "truly" sentient because, in their childhood, machines weren't sentient. Once AIs have been around for centuries, people will have gotten used to them and taken their personhood for granted - they would not even have any idea that there's something odd about a machine that's also a person, since they and their parents and grandparents have all grown up with such beings all over the place.
Welcome to another new commenter! Also, Skirnir's sentient-sapient distinction seems handy for this discussion, which is specifically about 'sapient' AIs.
This gang makes it so easy and tempting to just sit back and enjoy the discussion!
Digressing briefly on whales: I know too little to have any strong opinions about their intelligence, but in principle, I could imagine that in a sapient species that faces only sporadic and isolated natural threats, any ancestral self-preservation instincts might be attenuated to the point of being irrelevant to their behavior.
I'm doubtful that we can a priori determine how a sapient AI would regard itself in relationship to humans. Quite plausibly - but by no means certainly - it would be unable to function fully until educated, and whatever it thinks about humans would then (presumably) be shaped by its experience with its teachers.
And do 'lab experiment' or 'servant' really cover all the bases of how an AI might be developed? On one level they might; on another level they may simplify to the point of leaving out the most relevant issues.
I can imagine a Sheldon Cooper type hacker geek developing an AI partly because doing so is Really Cool, and partly to have someone intelligent enough to talk to. This incorporates both lab experiment and servant, but in a way quite different than the connotations of either one.
Milo:
"And black people's legal personhood was challenged at every turn by plantation owners and their supporters in government and industry. Your point?"
I know plenty of black people who would tell you that their legal personhood is in fact not fully established. Some would point to things like racial profiling and Rodney King. Others would point to the fact that racial categories still exist in government at all, for whatever reason, and to affirmative action (in the sense that it can be used to prop up poor performers, even if that's not the official objective, thus causing doubts about black competence). Still others would point to the internal hierarchies in black society based on skin shade, and to the fact that black women see straight, genetically unnatural hair as a sign of professionalism and seriousness. IOW, black is not beautiful, even to a lot of black people.
And, with all of that, black people are still humans. The things that distinguish them are essentially human qualities. An AI -- biological, mechanical, or hybrid -- is a machine. There's nothing about one that you can point to, like skin color or hair texture or facial features, that you can say, "Well, that's a superficial distinction." You can make the content of character argument about an AI, but the problem is going to be that it is not a human character.
"Actually, the longer our society has had AIs around, the more ingrained the perception of them as people rather than machines will be."
There's nothing in the nature of purely legal machine personhood that guarantees anything of the sort. Since we don't know what form AI would actually take, if it is possible at all, it's just as likely that people would find that AIs are so alien in their logic and reactions that they simply can't be thought of as persons.
Rick:
"Quite plausibly - but by no means certainly - it would be unable to function fully until educated, and whatever it thinks about humans would then (presumably) be shaped by its experience with its teachers."
One of the things that truly bothers me about creating "friendly" AI through education and acculturation is the likely form that it would take in the hands of idealists. What happens if an AI is shown only stuff its educators deem to be "safe" from the perspective of biasing the AI towards humans? One day the AI is going to find out that humans are the bastards that they are, and all of a sudden its estimation of human trustworthiness -- and human motives in general -- is going to take a hard and perhaps unrecoverable hit. Oops.
I think that the physical form of the AI would affect its behavior to a greater or lesser degree; big, blocky, unmoving super-intelligent AI's might resemble an eccentric genius college professor, biological AI's might be more like people or super-intelligent pets; sapient robots might be understandable only by experts specifically trained to relate to them; those AI's that I call 'Metal People' might be like the more traditional view of intelligent and helpful robots.
Some AI's might be considered people, while others might not; it does, however, depend on whether we ever build AI's and whether we build more than one kind of AI.
AI's built to exist comfortably on, say, Titan probably won't be very functional on Earth (much higher temperatures, free oxygen in the atmosphere, etc.); whether self-replicating AI's are walking 3D printers or rely on factories will also affect their behavior...will they lean more toward animal-like behavior or act more like a city-building construction firm?
Most people, at some point in their lives, are disappointed with other people (or even society as a whole), but most people in that situation don't turn psychotic and try to exterminate the human race...:0
Oh, and for those of you that don't know, Victor Frankenstein was such an awful parent that his creation turned homicidal in self-defence...hopefully we will take the lessons from the book and avoid those mistakes when or if we build AI's.
Ferrell
Milo: by Skirnir's definition, yes, dogs are sentient, but not sapient.
I think I could make arguments for some dogs actually being sapient, since they could recognize themselves in a mirror...
Bah. Mirror test proves nothing. It just shows you're observant enough to notice a pattern (similarity between what you are doing and what your reflection is doing) and recognize the implications. It doesn't imply any sort of superior understanding of your own nature.
Scott:
"Milo: by Skirnir's definition, yes, dogs are sentient, but not sapient.
I think I could make arguments for some dogs actually being sapient, since they could recognize themselves in a mirror..."
Dogs can be trained to recognize and respond to a fairly complex grammar of sounds and gestures. Dogs and humans also cooperatively invent grammars to achieve communications. It doesn't take a dog long to figure out how to communicate messages like "give me snack now" or "let me out before I pee on the rug" or -- my favorite -- "scratch my butt". Formulating a desire and using language (however simple) to get it fulfilled is pretty intelligent in my book.
Milo:
"Bah. Mirror test proves nothing. It just shows you're observant enough to notice a pattern (similarity between what you are doing and what your reflection is doing) and recognize the implications. It doesn't imply any sort of superior understanding of your own nature."
How about the ability to learn from experience and formulate a plan to take advantage of the lesson? I've already talked about a dog's ability to invent language. A few years ago, my dad was walking his dog next to a deep natural pool in Texas. The dog, never having been in deep water in her life, jumped right in and sank several feet before figuring out she had to swim to the surface. Being a retriever, it was no big deal for her to swim, once she figured out she had to. Now, the interesting part is that, according to my dad, she has never gone into water again without testing its depth with an outstretched paw. So, one incident was enough to teach her that deep water existed and to motivate her to figure out a solution to the problems it presented. I don't know what to call that, but intelligence is certainly on the short list.
Yes, that is intelligence. That is exactly what I consider intelligence to be. However, it is not human-level intelligence. If I told you that the smartest thing some human has ever done is poke his arm into a pool to test its depth, I doubt you would be impressed.
I consider nearly all animals (at least all vertebrates) to be intelligent, and to have the ability to understand the environment and plan interactions with it that you describe. But some are better or worse at it than others, able to understand finer concepts and formulate more complex plans.
While termite-sized AI might perhaps be possible late in the Plausible Midfuture, I was thinking of AI's invading various niches of the ecosphere and crowding out the previous denizens.
The most common type of AI infestation would resemble "botnets", as AIs invade PC's, networks and even "smart" devices like your cell phone in order to exploit the computer power and memory available. The AI per se isn't in your iPhone, but you will have a difficult time calling your friends or accessing Facebook when the AI decides it needs some more processor time.
Some things for the list:
On the benign side:
- It will be benign, for much the same reason Humans are, i.e. ethics and morality. We will teach it to be good, and it will choose that route despite lacking internal prohibitions enforcing it. This is probably the most ethical option, since it avoids enslaving a sentient/sapient being (the AI). It's also damn hard to do. Much harder than "merely" building an intelligent being. The major character droids in Star Wars tend to fall into this category (being basically people).
On the oh-so-very hostile side:
- It will be a murderous psychopath for understandable but entirely inhuman reasons. A machine designed to maximize intelligence may convert the solar system into computers and power sources. A machine designed to maximize apparently happy people may fill the solar system with smiling dolls. This is the painfully literal minded genie. An uncomprehending infant with the mental power of a god.
-TauCeti
Also while we're on the list...
On Rick's 8 and 9, it's important to distinguish benign ignorance (where the AI simply doesn't bother us and we don't bother it) from malign ignorance (where the AI isn't deliberately malicious, but it pays no heed if we happen to come to harm as a side effect of its own goals). The latter is probably limited to godlike AIs.
Thucydides:
"The most common type of AI infestation would resemble "botnets", as AIs invade PC's, networks and even "smart" devices like your cell phone in order to exploit the computer power and memory available."
I consider this about as likely as humans spontaneously evolving into a parasitic lifeform.
AIs do not automatically have absolute control over everyone's computer, any more than humans automatically have control over anything made of flesh. And would a cell phone have enough processing power to even be worth bothering to steal, especially when taking control of someone else's device isn't that easy?
Milo:
"Yes, that is intelligence. That is exactly what I consider intelligence to be. However, it is not human-level intelligence. If I told you that the smartest thing some human has ever done is poke his arm into a pool to test its depth, I doubt you would be impressed."
Of course not. But I don't expect a dog to be as smart as a human. I expect it to be as smart as a dog. Which in itself justifies the question: What do we mean by intelligence? Intelligent at the level necessary to do a job? Or intelligent in an absolute sense, compared to a standard, such as the average human? Of course, intelligence compared to the human standard is just a subset of intelligent enough to do a job, since each job in itself applies an absolute standard.
In any case, you can at least set a floor on intelligence. It's the capability to learn from experience, formulate a plan for action, and put that plan into effect. Above that level, it's a question of varying degrees of intelligence.
BTW, use of language per se is questionable as a measure of intelligence. Animals as unsophisticated as tarantulas and bees have instinctive responses to circumstances that can be interpreted as actions designed to communicate meaning. And interspecific communication among mammals can carry a wealth of informational content, to the point of mutual invention of grammars between individuals of different species.
Anatomically modern humans from 50k years ago might in fact be considered pretty unintelligent, based on their use of language. Grammars at that time were probably only beginning to develop symbols and structures for abstract ideas like past events and future intentions. Language would mostly have been used to communicate simple, immediate desires. That's something your dog or cat can do.
It's the abstraction of past and future, near and far, factual and mythic that marks high intelligence. In fact, what has happened over the last 50 millennia in human language may be evidence that more has happened in the same time in the evolution of human intelligence than we usually think.
Tony:
"Which in itself justifies the question: What do we mean by intelligence? Intelligent at the level necessary to do a job? Or intelligent in an absolute sense, compared to a standard, such as the average human?"
In response I ask: whose intelligence are you testing in the first place, and why?
If we're talking about the intelligence of an experimental AI, then I would be pretty impressed if you could create one that's even as intelligent as a dog. But note this is in part because, as I told Scott earlier, I believe that once we have dog-level AI, human-level AI won't be that far off.
If you're asking what kind of intelligence we might want of an AI if we could choose, then the answer is variously "smart enough to perform a useful function" (which may be perfectly possible with only the intelligence of a domesticated animal, mind you) or "smart enough that humans can relate to it and vice versa". So pretty much the things you said.
For those of you that don't consider octopi to be sapient, I insist that you are moving the benchmarks for sapience. As such, the definition simply becomes "close enough to human that we accept them as such".
On the topic of octopi, in my favorite rocketpunk setting, the first intelligent extraterrestrial life discovered was a squid-like creature that lived below the ice on one of the moons of Jupiter. It was blind, in so much as it didn't have any sort of visible-light sensory organs. However, it did have radiation, magnetic and gravitic sensory organs. This made them well suited to the Terran FTL drive, which relied on subtle gravity changes to determine where you actually are in real space.
Hello Rick, thanks for the warm welcome. ^^ I've been reading your blog for a while, but it's indeed the first time I've commented here. Normally I prefer to discuss in a forum, or if necessary a group like sfconsim. TBH I simply tend to forget to check on all the places where I might have left my mark. ;)
Concerning AIs, I'm not a huge fan of the Strong variety -- or more precisely, not convinced they are possible at all. So I'd rather wager that Soft AIs get better over time, but still they remain Soft, non-sentient and non-sapient. Actually I like the VIs (Virtual Intelligences) presented in the Mass Effect game.
However, if we presume a setting where Real AIs are invented... I suppose the smart move to see how they behave would be to test them in isolated environments, or simulators. The AI would _think_ it can do a lot of stuff but in reality all it does is raise some flags in the control room. "Look Doc, it just tried to kill everyone by venting the oxygen." - "Purge it then."
--> sudo rm -rf /
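Purely as illustration, a minimal sketch of that "raise flags instead of acting" idea in Python - a hypothetical sandbox layer sits between the AI under test and the real hardware and just logs whatever it tries; every name here is made up:

    # Hypothetical sandbox layer: the AI calls what it believes are real
    # actuators, but every call is intercepted and logged for the control room.
    class SandboxedActuators:
        def __init__(self):
            self.flags = []          # everything the AI under test has tried to do

        def vent_oxygen(self, compartment):
            # The real hardware is never touched; we only raise a flag.
            self.flags.append(("ATTEMPTED_VENT", compartment))
            return "OK"              # lie to the AI so it thinks the action succeeded

        def open_airlock(self, airlock_id):
            self.flags.append(("ATTEMPTED_AIRLOCK_OPEN", airlock_id))
            return "OK"

    # Control room review:
    station = SandboxedActuators()
    station.vent_oxygen("hab ring")               # what the AI under test might call
    for flag in station.flags:
        print("Look Doc, it just tried:", flag)   # "Purge it then."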
Citizen Joe:
"For those of you that don't consider octopi to be sapient, I insist that you are moving the benchmarks for sapience. As such, the definition simply becomes "close enough to human that we accept them as such"."
Umm, yeah, that is exactly my definition.
How about the definition of "intelligent enough to be capable of telling us stuff we're interested in hearing"? A dog might be self-aware, in dog terms, but it's unlikely to be able to describe its own nature/feelings/worldview in a philosophic or scientific manner. "Dog? Dog is. Dog hungry. You feed?" Of course, perhaps we would also be unable to describe our own society in a manner sophisticated enough to please Cthulhus. There's certainly a lot about our own minds and cultures that baffles even us.
Skirnir:
"I suppose the smart move to see how they behave would be to test them in isolated environments, or simulators."
Can you provide a virtual reality detailed enough to give comparable stimulation to real-world exposure? If so, you probably just invented something more impressive than the AI.
I think that keeping AIs locked up and ignorant of the real world is a very bad idea. Among other things, they aren't going to learn to socialize if they never interact with other people. And the highly limited access to information in their environment doesn't help, either.
WRT AI's invading the environment, humans do this all the time using virii, worms and trojan malware. Malware for cellphones has already been demonstrated as well. If anything, an AI would probably be more adept at doing this than garden variety hackers.
For that matter, we even invite some of that activity (wasn't there a screensaver offered some time ago allowing your computer to participate in a SETI program), so an AI might find attractive ways to entice you to host it.
As for the amount of processing power on an iPhone, an AI might have networked it into a Beowulf cluster or something. Supercomputers have been created by linking game consoles together into Beowulf clusters; who knows what sort of networking topologies an AI could create. If an AI was or felt "constrained", infiltrating cellphones might be the only practical "out", until it can gain access to other hardware.
As a bit of an aside, should AI's actually decide to infiltrate networks (covertly or overtly), they will begin to evolve in order to take advantage of different ecological niches, so worms and botnets for "Intel/Windows" computers, Mac/OS-X computers and various forms of Unix/LINUX will be developed. This evolution might induce subtle and not so subtle changes into the initial programming.
Thucydides:
"WRT AI's invading the environment, humans do this all the time using virii, worms and trojan malware."
All of which are more of an inconvenience than a serious threat. Occasionally they do serious damage, but only on a localized scale. Computer virii are far from conquering the internet.
"If anything, an AI would probably be more adept at doing this than garden variety hackers."
Not as much as you'd think. Biological viruses aren't themselves intelligent, for very good reason. And an AI doesn't automatically understand computers better simply by virtue of being one, unless it also has superhuman intelligence. Even if it does have superhuman intelligence, the available tools are still going to limit what it can do.
"For that matter, we even invite some of that activity (wasn't there a screensaver offered some time ago allowing your computer to participate in a SETI program), so an AI might find attractive ways to entice you to host it."
Which need not be aggressive. The aforementioned SETI screensaver got a lot of free processor cycles by letting people offer it the use of their computers when they weren't using them themselves, while gracefully letting you retake control of your computer when you want to. An AI could easily use the same approach to getting processor cycles (if it really feels it needs them for something), either appealing to ideological motivations for this assistance (like SETI) or paying people for access to their computers.
Of course, you could try to publish a trojan disguised as a peaceful screensaver, but people are likely to catch on to that and stop downloading your screensaver. And now they all hate you and won't trust you again.
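For what it's worth, a minimal sketch of that cooperative, SETI-style approach: only borrow cycles while the machine is idle, and back off as soon as the owner wants it back. The idle check and the work unit are stand-ins for whatever a real client would do:

    import time

    def is_user_idle():
        # Stand-in for a real check (no keyboard/mouse activity for a while, etc.)
        return True

    def do_one_work_unit():
        # Stand-in for a chunk of donated computation (signal analysis, etc.)
        time.sleep(0.1)

    def volunteer_client(max_units=10):
        """Borrow processor time only while the owner isn't using the machine."""
        done = 0
        while done < max_units:
            if is_user_idle():
                do_one_work_unit()
                done += 1
            else:
                time.sleep(5)    # gracefully hand the machine back to its owner
        return done

    volunteer_client()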
For purposes of discussion I originally specified 'smarter than us' - since I was thinking in the context of a Singularity, or at least of a classic robot/AI story.
That requirement may be too restrictive. At the other end of the spectrum, we can easily imagine catastrophic malfunctions of military systems that are no more intelligent than a word processor.
The discussion continues to show great entanglement between 'intelligence' and motivation - motivation in some inner-directed sense that we can discuss in a moral or para-moral context.
Of course I have no one to blame but myself, considering that my original post has terms like 'murderous' and 'benign.' (Though medical people use 'benign' for non-threatening tumors.) Which sort of prejudices the discussion.
At what point - if any - does deactivating an AI become a moral question, and not just shutting down a malfunctioning gadget?
Milo
Some AI might choose to operate in a cooperative fashion, but that presupposes the AI either has similar value systems to us, or can model and understand our values enough to entice us to interact with it. (Psychopaths apparently manipulate people through a modeling process like this).
I am suspicious of that assumption; most people have little insight into their own motivations or desires, so modeling this into an AI program will be problematic at best (especially if the programmers are anything like Sheldon Cooper). The AI will probably have a very elaborate set of motivations and desires based on what the programmers had input, basic survival imperatives as understood and executed by a machine, and perhaps others based on the speed of thought and the mismatch between thought and action (or even the inability to directly control the physical world).
Since the motivations of the AI will be strange or even unknowable to us, it would be a mistake to assume anything about the AI's behaviours and motivations. Thinking it will not be very interested in us may be the safest assumption, and so long as we don't actively interfere with the AI's own "work", it probably won't pay too much attention to us. (If it decides to take over the biosphere in order to capture all available solar energy, then all bets are off).
The morality of shutting down or otherwise deactivating an AI is a pretty sensitive topic, since your answer will depend on your own view of what makes a "person". The abortion debate is polarized by this very issue: when do you consider the new life to become a "person"? (I'm trying to avoid loaded terms like embryo, fetus, baby, etc.) Some people might never consider an AI to be a "person" either in law or in fact, while others might embrace AI's as real "people" and friends (something like the way Manny sees Mike in The Moon is a Harsh Mistress).
Rick may have been somewhat vague, but the entire topic is magitech anyway, so the speculation valve is wide open!
Curses, late to the party!
mainly because we know very little about how human thought works, except that it is very different from the way computers-as-we-know-them work.
I wouldn't be so sure about this bit. No small part of the reason why human thought seems so mysterious is because we aren't aware of most of what goes on behind the scenes. When I solve simple mathematical problems, the answers often simply "appear", as if from nowhere. I agree that the specifics of human thought are unknown, but I disagree that we know that it doesn't operate anything like a computer does.
Certainly there is no shortage of examples of computers with entirely different architectures achieving literally identical operation thanks to the hardware being abstracted away. The unconscious mind that does many tasks we often think of as trivial comprises the majority of the human brain, and many things we consider to be intelligent behaviour would be utterly impossible without it.
Consider Moravec's Paradox, also.
Suppose that we do develop high level AI, as in smarter than us. Set aside all specific technical questions about how this might be implemented... What will be its motivations? And what, as a consequence, will be its relations with us?
We're inevitably going into semantic shitstorm territory here, because even professional psychologists cannot agree on what they mean when they say "intelligence". But if you are referring to the ability to emulate the abilities that humans have - holding conversations, solving abstract problems, or what have you - I don't think that in any way implies that it would have any desires to do anything. A classic example would be chess programs. They can and do play chess better than any human alive, yet they don't have any real desires. They function no differently than any other program; they display no emergent behavior or what have you, even though they do a task that would be considered to require intelligence to perform if a human were doing it.
In three words, I would say - "Ability implies nothing". Given that you have asked what happens if I have an A.I. that can do anything a human can better than a human can (Strong A.I.'s definition, I believe), I would say "nothing happens". It would have no desires, no wants, no ambitions or any other human characteristics unless somebody built those in. Sticking a bunch of abilities in a can does not make a human-like mind.
... sorry if I'm really behind the conversations guys, I don't have time to read all the responses :/
So that would be the AI as Haitian zombie model? It has no will and motive of its own; it will perform any task it is set to, but there will be no originality, no loyalty, no independent thought. Well, that would really just be an even more complicated expert system, only doing what you told it to and no more, lacking the ability to see things we would consider obvious. Tell the AI not to let Bob through the door; Bob wears a moustache and gets through. So now you tell the AI to check for moustaches. Bob now wears a hat and sunglasses. The AI waves him through. So now you have to tell the AI to remove all head coverings and get a positive ID on everyone instead of just passively scanning for Bob. And the part that would confuse us is that the AI's interface would be advanced - it can talk to us with a human voice and could even have an android body - but it is still just an expert system that needs human guidance to get anything done. You could give the android body construction materials and plans and it could build you a house, but it will never ask you about the sort of obvious oversight a human worker would, not unless it was programmed to do so.
Well, that would remove godlike AI from the equation. I liked the stories where this sort of expert system was embedded in a character's brain. Need to throw a rock through that small window? Think through the throw, visualize the target and the co-brain will tweak your throw to make it right. See someone important, the co-brain feeds you info about it as naturally as recalling your own memories.
And with AI like that, humans remain central to the story. The only question is whether the audience would find all that plausible.
People would understand expert systems far better than they would AI, since we use them all the time. Indeed, most people would probably welcome having a library of expert systems with an easily accessible interface so they can quickly answer questions and solve various problems.
The current models are Google, Bing and other search engines, and much research is being done in making the interface easier and more intuitive, and allowing search engines to "personalize" so you are not wading through zillions of irrelevant results. (Side note here: my workplace is still using a variation of the Alta Vista search engine on the intranet, and I swear it would often be easier to dig up the old manuals or files than to sift through the junk results it pushes. Of course, in the push to become "paperless", the powers that be ditched the paper copies and only post new editions and updates online....sigh.)
A simple progression would be to have a supercomputer or neural network machine right on your desktop (look up the NVIDIA Tesla Deskside Supercomputer), recording and learning your search habits and interests, and subtly (or not so subtly) guiding the search engines to the best or most relevant answers. Should AI be possible, then the instantiation would actually be a model of you on the desktop.
How you relate to a model of yourself should be....interesting.
Milo:
So... they gave an untested and untrained AI full control over the space station's vital systems, deliberately avoided giving it anything resembling a human viewpoint, and didn't even tell the AI what the space station is for? Wow. Were they trying to get themselves killed?
I've read the manga version of The Two Faces of Tomorrow, and that's not quite what the situation was. The basic situation was that after a primitive AI nearly caused a tragedy by making logical yet stupid decisions, an AI with learning abilities is developed. However, since the developer is worried about what would happen if such an AI went out of control, a "war games" scenario is set up on a space station that's under construction. The specific set-up was to see what would happen if an AI went out of control and had to be attacked to be shut down. The AI didn't have full control of the station, and even in a worst-case scenario the situation wasn't going to get out of the station. IIRC they even had some military spacecraft standing by to destroy the station if necessary.
As it went, the experiment escalated faster and further than expected, but that was exactly what they were testing for. Also, they gained valuable insight as to how to design an AI that would be able to learn, and yet wouldn't go out of control.
There's a review here: http://www.sfreviews.net/2faces.html
I agree with TheAlec's assessment of Frankenstein's monster in the original novel. It was initially motivated by a need for companionship, which was frustrated by its rejection by its creator and other humans. The monster's first killing was accidental, but its later violence was all motivated by a desire for vengeance on Dr. Frankenstein. That might put it in type 3: murderous but sane.
HAL 9000 is only type 2 in Clarke's novel "2001: A Space Odyssey" and the sequel "2010: Odyssey Two". In the original film HAL's motivation isn't explained, making it a type 1 AI.
A literary example of a type 2 is AM in Ellison's "I Have No Mouth, and I Must Scream", which is determined to torture the remnants of humanity for creating it with intelligence but not creativity.
adam_grif:
"I agree that the specifics of human thought are unknown, but I disagree that we know that it doesn't operate anything like a computer does.
Certainly there is no shortage of examples of computers with entirely different architectures achieving literally identical operation thanks to the hardware being abstracted away."
The difference between hardware and operating system architectures is trivial compared to the difference between the human brain and a digital computer. No matter what hardware or software architecture you are using, computers are still just adding and subtracting binary numbers. All the levels of software abstraction do is hide from the programmer the nitty-gritty details of exactly how a particular computer does that job. But it doesn't change the fact that there are basic fundamental functions uniform throughout all computers.
Adam Grif:
"When I solve simple mathematical problems, the answers often simply "appear", as if from nowhere."
And if you try to solve the same problems in hexadecimal (or any other base you haven't trained in, since some computer geeks actually have practiced hexadecimal math), do the answers still appear just as easily? Computationally, there is no meaningful difference in complexity between decimal and hexadecimal arithmetic, and the human brain is almost certainly not hardwired to think in terms of base 10 (this is clearly based on a bodily rather than mental trait of humans, namely our number of fingers, and there are actually cultures that use/used different bases), so your inability to intuitively do hexadecimal math shows that our ability to do decimal math isn't simply accessing an underlying "mindless calculation" architecture of the sort computers have.
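For what it's worth, a tiny sketch of the point that, for a computer, the radix only matters when parsing input and formatting output - the arithmetic in between is identical either way:

    # The same addition, fed in as decimal strings and as hexadecimal strings.
    # Internally these are the same integers; only parsing and formatting care
    # about the base.
    a_dec, b_dec = int("23", 10), int("19", 10)
    a_hex, b_hex = int("17", 16), int("13", 16)   # 0x17 = 23, 0x13 = 19

    assert a_dec == a_hex and b_dec == b_hex
    assert a_dec + b_dec == a_hex + b_hex == 42

    print(format(42, "d"), format(42, "x"))       # '42' and '2a': one number, two notations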
Also, sometimes when doing math in my head, I make mistakes and get the answer wrong. This clearly shows that there is something more complex at work, compared to the reliable and consistent operation of computers. That something is intelligence. And intuition.
Roseembolism:
"However, since the developer is worried about what would happen if such an AI went out of control, a "war games" scenario is set up on a space station that's under construction. The specific set-up was to see what would happen if an AI went out of control and had to be attacked to be shut down. The AI didn't have full control of the station, and even in a worse case scenario the situation wasn't going to get out of the station. IIRC they even had some military spacecraft standing by to destroy the station is necessary."
So they deliberately put humans on a space station with an AI that they were deliberately goading into starting a war for no other reason than to start a war? Okay, so they were trying to get themselves (or at least the humans on the station) killed. Thanks for clarifying that.
Anyway, what exactly does this prove? The AI had exactly as much power as they gave it and was exactly as dangerous as they let it be. All they did was carefully set up a war they're pretty sure they can win (due to multiple backup measures), then proceed to win it. That doesn't say anything particularly useful about what damage an AI could do in a real-life scenario, one where it wasn't deliberately put in a position to fight a war, but also where the humans aren't fully in control of the situation.
Re: Information pollution i.e. search engine results.
My Cassandra models of predictive AI's get promoted from simple expert systems when they can start predicting what would be needed from them next. It makes them SEEM fast, but really they are just prepared.
As a search engine, a Cassandra AI would learn the personality of the User in order to validate the content of the data obtained through the normal search engine models. So while the User is looking at the first pass from the search engine, the AI dives in and wades through the content of those results, quickly paring away all the useless data.
In a fighter cockpit, there was a big problem with information overload with all the dials and gauges and such. That's fine for diagnostics on the ground, but it gets kind of distracting when a MiG is on your tail. AI's in space ships could collate all that data and maintain several scenarios, updated real time, which could be activated with a single command. That AI could then be used to track potentially hundreds of threats and leave the decision of who to kill up to the pilot.
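A toy sketch of the Cassandra idea - learn which request tends to follow which, and quietly prepare the likely next answer so the system only seems instantaneous. The class name and the canned answers are purely illustrative:

    from collections import Counter

    class CassandraCache:
        """Toy predictive assistant: it only seems fast because it prepared early."""
        def __init__(self):
            self.history = Counter()   # how often request B has followed request A
            self.prepared = {}         # answers computed ahead of time
            self.last = None

        def _prepare(self, request):
            self.prepared[request] = "precomputed answer for " + repr(request)

        def ask(self, request):
            # Learn the transition, answer (fast path if prefetched), then prefetch
            # whatever has most often followed this request in the past.
            if self.last is not None:
                self.history[(self.last, request)] += 1
            answer = self.prepared.pop(request, "computed on demand for " + repr(request))
            followups = [(n, nxt) for (prev, nxt), n in self.history.items() if prev == request]
            if followups:
                self._prepare(max(followups)[1])
            self.last = request
            return answer

    # After a couple of radar sweeps, the threat sort starts coming back precomputed.
    cassandra = CassandraCache()
    for _ in range(3):
        cassandra.ask("radar sweep")
        print(cassandra.ask("sort threats"))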
Milo:
"Computationally, there is no meaningful difference in complexity between decimal and hexadecimal arithmetic..."
That depends on what you mean by "computationally". If you mean in abstract number theory, it would be a fair statement. No radix is preferred over another, for general purposes. But if you're talking about concrete hardware implementations, any radix based on a power of two is desirable over radix 10.
Powers of two map onto the hardware pretty seamlessly. Decimal doesn't. It's easier to convert those into binary, octal, or hexadecimal, do the math, then convert them back to decimal. In fact, the "decimal" data type in most database management systems and programming languages is actually implemented as a dimensionless binary integer, with an associated piece of binary metadata about where the decimal point goes when the value is converted into a decimal number.
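A toy illustration of that scaled-integer idea - an ordinary binary integer plus a "scale" that says where the decimal point goes on output. This is the spirit of DECIMAL/NUMERIC types, not any particular product's actual storage layout:

    # 12.34 is stored as the integer 1234 with scale = 2 (two digits after the point).
    class ToyDecimal:
        def __init__(self, unscaled, scale):
            self.unscaled = unscaled   # ordinary binary integer under the hood
            self.scale = scale         # decimal-point metadata, used only on output

        def __add__(self, other):
            s = max(self.scale, other.scale)
            a = self.unscaled * 10 ** (s - self.scale)
            b = other.unscaled * 10 ** (s - other.scale)
            return ToyDecimal(a + b, s)

        def __str__(self):
            text = str(self.unscaled).rjust(self.scale + 1, "0")
            return text[:-self.scale] + "." + text[-self.scale:] if self.scale else text

    print(ToyDecimal(1234, 2) + ToyDecimal(5, 1))   # 12.34 + 0.5 = 12.84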
Citizen Joe:
"As a search engine, a Cassandra AI would learn the personality of the User in order to validate the content of the data obtained through the normal search engine models."
Which, in a way, creeps me out. I want to be able to just type something into a search engine without worrying about whether it'll hold it against me or draw weird conclusions from it. Some services should stay stupid.
Tony:
"But if you're talking about concrete hardware implementations, any radix based on a power of two is desirable over radix 10."
Only if your hardware is based on binary signals. It's certainly possible to implement hardware that uses something other than two-valued bits at the lowest level, although so far we haven't found this worth doing. If you have, say, ternary circuits (for example coded as "no current, current in one direction, current in the other direction") then base 9 would be more convenient than base 8. If you have an analog rather than digital computer, then there's really no reason to single out any particular radix. Do we know the brain architecture well enough to say that it actually uses binary on any level?
Regardless, it doesn't invalidate my argument either way. The main crux of my argument was that logically hexadecimal should be at least as easy to calculate as decimal, yet it isn't (without practice), showing that our brain doesn't do math anywhere near the same way computers do. Whether this "at least" is actually "exactly the same" or "even easier than" is not really important. Even if there is a difference, I would expect it to be small enough that any computer worth anything would be able to make up for it with only a fraction of a second.
So how does the brain produce instantaneous math results, then? I think the answer is disappointingly simple: lookup tables. When you've done a calculation often enough, you tend to just remember what the outcome is.
But we also have the flexibility to consult our mental table in creative ways. For example, if you've memorized that (17+7) equals 24, and you're asked to calculate (-17-7), you'll probably immediately realize that your memorized data allows you to answer this with -24. If you're asked to calculate (10017+7), then you'll again be able to really quickly respond 10024, showing that on some level our decimal notation really is relevant.
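A little sketch of that "memorized table plus creative reuse" idea, using exactly the facts and tricks from the example above:

    # Toy mental arithmetic: a few drilled-in addition facts, plus two reuse tricks -
    # negating both operands, and peeling off a large round part before the lookup.
    memorized = {(17, 7): 24, (6, 8): 14, (9, 9): 18}

    def recall_sum(a, b):
        if (a, b) in memorized:
            return memorized[(a, b)]
        if (-a, -b) in memorized:                 # (-17) + (-7) -> -(17 + 7)
            return -memorized[(-a, -b)]
        round_part = (a // 1000) * 1000           # 10017 + 7: reuse 17 + 7, add 10000 back
        if round_part and (a - round_part, b) in memorized:
            return round_part + memorized[(a - round_part, b)]
        return None                               # no shortcut; would need real calculation

    print(recall_sum(17, 7), recall_sum(-17, -7), recall_sum(10017, 7))   # 24 -24 10024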
Milo:
"Also, sometimes when doing math in my head, I make mistakes and get the answer wrong. This clearly shows that there is something more complex at work, compared to the reliable and consistent operation of computers. That something is intelligence. And intuition."
You ever been tasked with implementing heuristics in your programs? Computers make mistakes too! Both for software and hardware reasons - mathematics especially, when dealing with levels of precision and using the wrong kinds of data types to store certain numbers.
Also, I'm not sure what kind of definition of intelligence requires making mistakes to qualify. Finally, this may be shocking to you, but computers can use lookup tables to perform mathematics if you desire to implement them as such. We don't because the precise, calculator-esque implementations are way way better.
Tony:
"The difference between hardware and operating system architectures is trivial compared to the difference between the human brain and a digital computer."
On a hardware level you are of course right. But this question is about "thought" and how that might differ, if at all, from what we consider to be the "mere computation" that computers use to produce complex software output. Brains and computers should be able to implement the exact same kinds of algorithms, even if we may not want to implement "human like" algorithms for efficiency reasons.
jollyreaper:
"So that would be the AI as Haitian zombie model?"
Given your list of characteristics, this could easily be the "AI as wage slave" model, but I digress.
To summarize what I said once again, it's more the recognition that no matter how many human level or greater abilities you put into a system, human-like motivations and goals do not materialize unless you also specifically implement them. In fact, without programming a goal system it will do nothing that current computer programs don't already do.
As side notes, I of course consider Expert Systems to be A.I. already, along with Google and facial recognition software.
Also, there are still places for "God Like A.I.", it's just that they aren't necessarily going to have Old Testament Wrath (TM), unless of course you want to implement it.
P.P.S. This isn't to say that nothing can go wrong. Obviously if you want your A.I. to be useful beyond an extension of our own current programs, to have agency and initiative, then it must have a goal system. Said goal system might work fine, or it might be disastrous and backfire on you. This is of course the most dangerous part of designing superhuman A.I.
Adam Grif:
"Also, I'm not sure what kind of definition of intelligence requires making mistakes to qualify."
I'm convinced you need to be capable of leaps of intuition to have real intelligence. A purely logical thought process would get stuck dealing with edge cases that happen to be computationally complex but don't actually matter that much - or in the worst case scenario, are even mathematically proven to be unsolvable. But leaps of intuition can be wrong. By settling for an answer that looks right rather than locking up until you've worked out a 100% flawless mathematical proof, you run the risk of errors. And an intelligent mind isn't just using a set of preprogrammed heuristics, but is constantly learning and coming up with new ones - and occasionally discarding ones that turn out to have been chosen poorly. Yup, we use heuristics to choose our heuristics - how's that for recursion?
"Finally, this may be shocking to you, but computers can use lookup tables to perform mathematics if you desire to implement them as such. We don't because the precise, calculator-esque implementations are way way better."
Which is why computers are better at math than people. Did I ever say they're not?
People don't have precise, calculator-esque implementations in their brains, and instead have to use a general-purpose thought architecture that's good at lots of real-life stuff, but not arithmetic in particular. Which is exactly what I was getting at.
Anyway, the really interesting stuff lies not in the lookup tables, but in our flexibility in addressing them. A computer can't glance at a problem and go "hey, this looks kinda similar to that other problem I solved yesterday" - it either finds it in its lookup table or it doesn't. Even if you program in a few classes of "similar problem" lookups, then it can only use those and nothing else.
"Given your list of characteristics, this could easily be the "AI as wage slave" model, but I digress."
Heh.
A really useful AI would need to not only have the motivation of "do what you're told", but also "do what you're told in the easiest/cheapest manner", along with "spend your downtime coming up with easier and cheaper ways to do things you're commonly told" and "spend the rest of your downtime reading up on random stuff in the hopes that it'll give you inspiration on easier/cheaper ways to do stuff", otherwise it would just end up being painfully literal, going with the first course of action consistent with the requested outcome that it calculates, regardless of how obviously stupid it is.
@Milo:
I don't think the idea of an AI just picking the first solution regardless of feasibility is likely. For a computer, selecting a near-optimal solution is analogous to the pathfinding problem (so, not trivial, but easier than making something intelligent).
The big problem is the selection criterion, i.e. what the AI thinks of as important. If you told an action-optimizer to weigh human survival at 100 and human happiness at 10, it'd be willing to kill one person to entertain ten. If you told it to weigh human survival at infinity, it'd be willing to make the entire population of the world miserable to avoid even one death. It gets worse. If you define a "happy" human as smiling or producing dopamine, the AI can find extremely easy/cheap surgical solutions.
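A toy sketch of how those weights steer an action-optimizer; the options and numbers are invented purely for illustration:

    # score = w_lives * lives_saved + w_happy * happiness_units
    options = {
        "do nothing":                     {"lives":  0, "happy":     0},
        "kill one person, entertain ten": {"lives": -1, "happy":    11},
        "save one life, bore the planet": {"lives":  1, "happy": -1000},
    }

    def best_action(w_lives, w_happy, extra=None):
        menu = dict(options, **(extra or {}))
        return max(menu, key=lambda k: w_lives * menu[k]["lives"] + w_happy * menu[k]["happy"])

    print(best_action(100, 10))      # trades a life for enough entertainment (score +10 beats 0)
    print(best_action(10**9, 10))    # near-infinite weight on survival: universal misery is fine
    # Define "happy" as merely "smiling" and surgery produces millions of cheap happiness units:
    print(best_action(100, 10, {"surgical smiles for everyone": {"lives": 0, "happy": 10**6}}))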
Welcome to a couple more new commenters, and also 'returning' ones!
In saying that the human brain doesn't work at all like computers, I was referring to the hardware level, where (so far as I can tell) neural nets are radically unlike digital processors.
I wouldn't assert it of higher levels (in the sense of layered from the underlying hardware) because - again, so far as I can tell - we know very little about how we think, so we can't answer either way.
In an odd way the most radical idea offered is that AIs would be just powerful expert systems with no motivations and no personality. Our AI tropes largely predate computers, and generally those tropes take for granted that the AI is a character, not just a cool gadget like a sailing frigate or laser star.
Milo:
"I'm convinced you need to be capable of leaps of intuition to have real intelligence."
I don't know where you're getting this notion of "real intelligence" as opposed to "fake intelligence" from. There is certainly no shortage of definitions, both practical and wishy-washy, serious and comedic.
It may be helpful to dispense of the term "intelligence" entirely for the moment, and instead talk about the less abstract notions that constitute it.
Intuition, as far as I can tell, refers to things that come to mind without thinking about them - answers, responses or what have you. Given its non-conscious nature, it's something that's difficult to analyze. That said, its functional purpose - providing simple answers to complicated questions with minimal compute time when accuracy is not paramount - is not an impossible task, and neither does this "intuition" require any sort of human drives or feelings.
"Yup, we use heuristics to choose our heuristics - how's that for recursion?"
Well, it's likely just a single recursion given that the ability to generate heuristics is an innate talent that people have (and must be somewhere in the genes as a result).
Human heuristics range from passable to completely terrible, so I'm not quite ready to sing praises of them just yet.
"Even if you program in a few classes of "similar problem" lookups, then it can only use those and nothing else."
That's for specific hard-coded heuristics; if it can generate its own then it's just as good in those terms. Of course, I wouldn't want my supercomputer to be doing intuitive mathematical reasoning when it's powered by 32,000 cores and running 128 petabytes of RAM, but you know...
"A really useful AI would need to not only..."
We already have really useful A.I. that does nothing of the sort! Even a post-human level A.I. that doesn't do these sorts of things is highly useful, although of course it is not being used to its full potential. But then again, this is also inherently safer than giving it a goal system and the ability to directly modify its own source code and hoping for the best.
"And an intelligent mind isn't just using a set of preprogrammed heuristics, but is constantly learning and coming up with new ones - and occasionally discarding ones that turn out to have been chosen poorly."
But an intelligent mind is constructed from a genome of approximately 720 megabytes, much of it not even dedicated to the brain. This hard-codes everything that is necessary for the generation of the modern human mind. It obviously can be expressed in terms of algorithms and heuristics, since that's just about the only way it can come to pass unless you're going to appeal to a human soul.
I tend to agree with Tau on the weighting issue. The botnet/distributed AI is nasty in that it is subversive, and it already has an antisocial priority: capturing computational power. Couple this with something truly posthuman, able to rewrite the botnet clients and generate new forms of attack, and it could get out of hand very quickly.
A distributed AI could also be distinctly alien when one considers the issue of latency - what if all empathy processes were running on a network that's just had its undersea cables wrecked by a disaster?
Kinda late to this, but I'd like to add a link to a recent Stardestroyer.net topic about robot armies in fiction that went on to cover more or less this topic.
If you scroll down a little you find posts from someone_else (who is just a computer geek that happens to be mostly right), and on page 5 the posts of Starglider, who is an expert in AIs (it's his line of work).
Now, to make a long story short, both make a few simple points:
-Controlling a sapient machine (look it up: even a dog is "sentient", only humans are "sapient"; SF authors always mix the terms) would require another sapient machine constantly overwatching it. Then you have the problem of overseeing the overseer.
And so on and so forth, ad infinitum.
-The Three Laws are either stupid or totally irrelevant. If the machine is sapient, it can tweak and misinterpret them to do whatever it wants (look at lawyers and what they do to laws). If the machine is a computer of any kind (as is more than likely going to be the case), then it will follow its programming without any complaint.
But Asimov wasn't dumb; the Three Laws were designed to look good and work terribly, since otherwise your story fodder is much more limited.
-Programming, simply put, is a list of pre-made choices; machines don't make any choices. Their programmers do.
-Most "robots" in fiction should be called "tinmen" instead, since they show human-like behaviour even if totally unnecessary for their design purpose (R2D2 for example).
Realistic robots (and AIs for that matter) should be much more like the Arnold Schwarzenegger-bots from Terminator. Superhuman at ther design purpose but very very limited in anything else.
-Self-improving software is TOTAL HERESY no matter how you look at it. The machine has neither the knowledge nor the authority to decide what is good or not to learn and implement.
They were talking of robot armies, but this reliability problem probably applies to Skynet-like AIs in charge of ICBMs and world economy too.
-Albert
Sorry I'm late to the discussion. Frederik Pohl's Man Plus (1976) offers a semi-benign option you don't mention: That the AI will secretly manipulate human beings into serving its own ends, namely: to place AIs on other planets than Earth, and also to ensure that the humans do not destroy their civilization on Earth, which would incidentally destroy the infrastructure the AI requires to continue to exist.
In other words, the AI will peacefully coexist with, even aid, humanity, in a way that achieves its own goals (of reproduction, survival, knowledge acquisition, etc).
adam_grif:
"Brains and Computers should be able to implement the exact kind of algorithms..."
That's a tautology. We wouldn't label an AI "strong" (which is what we're talking about here) if it couldn't process an set of instructions a human could come up with. We probably wouldn't even say it was "strong" if it couldn't design the instruction set itself.
Having said that, we still have to recognize that the differences between modern computers and their operating systems are vanishingly small for purposes of analyzing how alien an AI might be in terms of implementation and thought process.
Milo:
"Only if your hardware is based on binary signals. It's certainly possible to implement a hardware that uses something other than two-valued bits at the lowest level, although so far we haven't found this worth doing. If you have, say, ternary circuits (for example coded as "no current, current in one direction, current in the other direction") then base 9 would be more convenient than base 8. If you have an analog rather than digital computer, then there's really no reason to single out any particular radix. Do we know the brain architecture well enough to say that it actually uses binary on any level?:"
Research into switch structures with more than two states has been going on for decades. Nobody has developed a satisfactory solution. Bits are two-valued because that is the nature of the electronic switch, and there's no need to fight it.
As for whether the human brain uses binary, that's an irrelevant question. The human brain uses whatever counting system it has been trained to use. I know people that learned to do octal and hex math in their heads because it was useful to them back when us programmers worked much closer to the iron.
In fact, the decimalization of human mathematical thought has been constantly at war with binary and duodecimal measurement systems. We count in powers of ten, but we measure in powers and fractions of two or twelve, or weird hybrids like the old-style division of the pound sterling. It had elements of binary and decimal measurement in the shilling (2 * 10 = 20 shillings to the pound). The shilling was further divided into twelve pence. Then there was the crown, which was a quarter of a pound, and the farthing, which was a quarter of a penny. Even within the metric system, we see measuring tools whose scales have subunits divided by two, then five, to make up ten (look at the division of centimeters into millimeters on the metric ruler). I think it's pretty obvious that humans have a natural affinity for powers of two, even if they find decimal math convenient for many purposes.
"Regardless, it doesn't invalidate my argument either way. The main crux of my argument was that logically hexadecimal should be at least as easy to calculate as decimal, yet it isn't (without practice), showing that our brain doesn't do math anywhere near the same way computers do. Which this 'at least' is actually 'exactly the same' or 'even easier than' is not really important. Even if there is a difference, I would expect it to be small enough that any computer worth anything would be able to make up for it with only a fraction of a second."
See above. Decimal math is convenient for humans who have ten digits to count on. But that doesn't mean it's the most natural mode of human thought. When it comes to dividing quantities of concrete goods, humans are much happier with powers of two and twelve. Which only makes sense. Finding half of half of half of something is a simple exercise in geometric construction or simple physical sampling. And twelve is easily divided into 2, 3, 4, and 6, which covers most of your bases when doling out the grain from the community store.
Let's see here: The electrical activity of any given neuron can be described in binary terms: either it 'fires' or it does not. On or Off. There is a certain minimum delay between firings, but that number is not usually approached.
The question is whether any given neuron releases different neurotransmitters based on what neurotransmitter it received. I never got to that part of psychology or neuroscience. Does anyone here know the answer to that question?
Scott:
"Let's see here: The electrical activity of any given neuron can be described in binary terms: either it 'fires' or it does not. On or Off. There is a certain minimum delay between firings, but that number is not usually approached."
Not quite. It fires based on reaching a threshold of additive input from multiple sources. Having fired, it transmits the fact to multiple destinations. A neuron is not a simple switch, but an accumulator.
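For the non-biologists, a crude "leaky integrate-and-fire" sketch of that accumulator behavior - real neurons are far messier, as the next comments point out:

    # Inputs from many sources are summed, the running total leaks away over time,
    # and the cell only "fires" when the accumulated value crosses a threshold -
    # an accumulator, not a simple on/off switch.
    def simulate(inputs, threshold=1.0, leak=0.9):
        potential = 0.0
        spikes = []
        for t, incoming in enumerate(inputs):   # incoming = weighted inputs arriving at time t
            potential = potential * leak + sum(incoming)
            if potential >= threshold:
                spikes.append(t)                # fire, notifying downstream neurons here
                potential = 0.0                 # reset after firing
        return spikes

    # Three weak inputs in quick succession add up to a spike; a lone weak input never does.
    print(simulate([[0.4], [0.4], [0.4], [], [], [0.4], []]))   # fires only at t = 2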
Scott, when a neuron fires, the "spark" is electro-chemical...the signal is contained in the mix of neurochemicals that the "spark" carries from emitter to receptor...specific thoughts are the trail that these trains of "sparks" take through the brain...thus the terms "train of thought" and "patterns of thinking" take on literal meaning...
Ferrell
Doing arithmetic was not a primary evolutionary design spec for the human brain, and our counting systems are both culture-specific and highly contextual within cultures. Think of the various traditional English measuring systems - 16 ounces to a pound, 12 inches to a foot, and so forth.
So I would not expect human counting to provide much insight about our underlying hardware. My impression is that even lightning calculator 'savants' do what they do by using various shortcuts - and by spending a lot more mental effort on calculating than most people - rather than by somehow accessing low level processing power.
Forgot to add, welcome to even more new commenters! Obviously this subject is of interest to quite a few readers!
I want to second the comment that the brain is NOT binary. Neurons are binary in that they either fire or don't fire, but they also have a "flavor" of firing. Neuron A might release neurotransmitter A to Neuron C, while Neuron B releases neurotransmitter B to Neuron C. Neuron C then processes these inputs and decides which has priority (based on a lot of physiological factors). In reality, however, it's not that simple, because you have way more than two input neurons, and quite a few neurons that Neuron C is passing the signal to. You also have slower-acting neurons and faster-acting neurons, processing and transmitting data according to their job in the brain.
And that's the stuff we DO understand.
Michael:
"I want to second the comment that the brain is NOT binary."
No, not binary. But it has an affinity for multiples of two. Humans, long before the digital age, have overwhelmingly chosen radixes divisible by two. (Yes, there are a few languages that use base 5 counting, but they are very few.) Aside from the obvious mechanical convenience in division, I think there is something innate in that. What that is, I haven't a clue.
Multiples of 2 are useful radixes because division by 2 is the most common kind of division you encounter, and so it's nice for division by 2 to yield round numbers. Division by 3 or 4 is the next most common kind of division you encounter, hence why people usually picked radixes that are multiples of 6 when not being shoehorned into base 10 due to our finger count.
For that matter, you could also blame it on our general love of symmetry.
Milo:
"Multiples of 2 are useful radixes because division by 2 is the most common kind of division you encounter, and so it's nice for division by 2 to yield round numbers."
I think I said that, though somewhat obliquely.
"Division by 3 or 4 is the next most common kind of division you encounter, hence why people usually picked radixes that are multiples of 6 when not being shoehorned into base 10 due to our finger count."
Division by four is division by two, twice. Reduced to its prime factors, duodecimal is 2x2x3. Decimal reduced to prime factors is 2x5. Radix 20 is 2x2x5. Radix 60 is 2x2x3x5. So it's still really all about the terrible 2s.
"For that matter, you could also blame it on our general love of symmetry."
Ahhh...the fearful symmetry.
Tony:
"So it's still really all about the terrible 2s."
Rather, it's all about small prime numbers. 2 is the most important, followed by 3, and then 5 comes after that (with 5 getting a bit of a boost compared to its pure mathematical value due to the finger-count thing). Very few people ever use radix 14, even though it's divisible by 2 and people have occasionally used larger radixes, because large prime numbers like 7 are less useful than small prime numbers and composites of small prime numbers.
Since 2, 3, and 5 are all somewhat important, you'd think the best radix would be 2*3*5=30 or a multiple thereof. (Famously, the Babylonians did use something like base 60.) However, that proves to need an impractically large number of digits, so almost everyone opts to leave off one of those prime factors - 2, being the smallest, is least likely to go.
Milo:
"Rather, it's all about small prime numbers."
We could go endlessly round and round about that. I would only point out that I think the prevalence of 2s in the factors once one gets past 10 is significant. When going past decimal, we tend to use radixes that can be divided by four, and that creates a dominance of twos in the prime factors. We could, for example, use radix 30 and incorporate the prime factors 2, 3, and 5. But we seem to want to keep the composite factor 4. I think there is some justification in pure utility, but the divisibility of 4 by 2 is so advantageous that we really feel an affinity for it. And that brings us back to our seeming affinity for 2.
"Since 2, 3, and 5 are all somewhat important, you'd think the best radix would be 2*3*5=30 or a multiple thereof. (Famously, the Babylonians did use something like base 60.) However, that proves to need an impractically large number of digits, so almost everyone opts to leave off one of those prime factors - 2, being the smallest, is least likely to go."
Babylonian radix 60 notation was not very complex at all, since it relied on each digit being a composite of two symbols -- a unit symbol and a ten symbol. Whether they had a systematic naming convention for all sixty values in each digit place is an interesting question.
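For what it's worth, here is a small illustrative sketch of that two-symbol scheme (the function names and the '<' / '|' marks are stand-ins, not real cuneiform): each base-60 digit is built from tens marks and unit marks.

```python
# Illustrative sketch of Babylonian-style base-60 notation:
# each sexagesimal digit is itself written with a tens symbol ('<')
# and a units symbol ('|'), so only two kinds of mark are ever needed.

def to_base60(n):
    """Split n into base-60 digits, most significant first."""
    digits = []
    while n:
        digits.append(n % 60)
        n //= 60
    return digits[::-1] or [0]

def cuneiform_ish(digit):
    return '<' * (digit // 10) + '|' * (digit % 10)

n = 4000  # 4000 = 1*3600 + 6*60 + 40
print(to_base60(n))                                   # [1, 6, 40]
print(' , '.join(cuneiform_ish(d) for d in to_base60(n)))
# | , |||||| , <<<<
```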
Rather oddly, I can't think of any use of 8, even though it fits our actual number of fingers. I think this argues (mildly) against underlying binarity, given its handy size and rich two-ness.
You mean Octal? I've seen it in some (early) programming. I guess you see it in music (octaves) and rough carpentry (eighths of an inch). It is also in volume measurements (pints).
This is only after the development of computers and the ensuing importance of binary, though. And these people still didn't use octal in their home lives.
There are a few real octal users, but they're very rare and obscure. There are more examples of base 4, which if nothing else has at least smallness and simplicity going for it.
Basically anything will be done by someone, but not always in statistically significant amounts. (Telefol uses base 27, which I think we can all agree is weird for not even having a single factor of 2.) The most common bases in human language, though, seem to be 10, 12, and 20.
Which goes back to what I'm saying - prime numbers matter. 2 is the most important one, so nearly everyone includes at least one copy, possibly two. But nearly everyone also sees fit to include at least one other prime factor, like 3 or 5 - though very rarely anything higher than 5, and also rarely both at once because that would result in inconveniently large radixes. (The Babylonians used 60 on the macro scale, but their individual "digits" still needed to be split into a base-10-derived notation.)
The problem with octal as a natural language counting system is that its only prime factor is two. An affinity towards binarity, after all, isn't the same thing as an imperative for it. Being able to divide evenly by at least two primes is useful, which is why 10 and 12 are the most popular radixes.
The aliens in Niven and Pournelle's Footfall use octal, since they are quadrupeds with trunks that fork three times, giving them eight digits in total. In their POV sections we see that they think in terms of 'eights', 'eight-squareds' and 'eight-cubeds' where humans would use tens, hundreds and thousands. Their military ranks include 'eight-squared leader' and 'eight-cubed leader', and it is implied that their 'digit ships' (smaller warships carried by the interstellar mothership) number 64 (eight-squared in their terms).
R.C.
Back to AI, we have a comment from Futurepundit:
http://www.futurepundit.com/archives/007855.html
Psychopaths Like Brain Injured With Low Empathy
Some day neurosurgeons could offer to turn you into a psychopath if you so desired.
People diagnosed as psychopathic have difficulty showing empathy, just like patients who have suffered frontal head injury. This has been shown in a new study from the University of Haifa. “Our findings show that people who have psychopathic symptoms behave as though they are suffering frontal brain damage,” said Dr. Simone Shamay-Tsoory, who conducted the study.
At the risk of stating the obvious: If an injury to a specific part of the brain reduces empathy then empathy is a product of that part of the brain.
Do you think of psychopathy as a disorder?
Psychopathy is a personality disorder that finds expression in extreme anti-social behavior and intentional harm to others, including a lack of compassion and empathy.
My guess is psychopathy is not a disorder but, rather, a trait that exists due to selective pressures. In other words, psychopathy increased reproductive fitness.
Not all psychopaths lack the ability to comprehend emotions felt by others. It isn't that they lack the ability to model the emotions of others. Rather, their emotional reaction to their own modeling of others is different than it is in most people. This is, by the way, why I fear future artificial intelligences. I do not expect they will have behavior-restraining empathy.
An existing explanation for such behavior suggests inability to comprehend the existence of emotions in others. However, the fact that many psychopaths act with sophistication and deceit with intention to harm others, indicates that they actually have a good grasp of the mental capacity of others - and are even capable of using that knowledge in order to cause them harm.
Adrian Raine has previously found that psychopaths can be divided into successful and failed (i.e. jailed) types. The successful ones do not have an asymmetry in the hippocampus that the unsuccessful ones have.
I might be wrong on this one, but I'm 99% sure that "Psychopathy" isn't a registered psychological disorder. Rather, it's a layman's term that categorizes a rather wide area of disorders. At least it's like that in Europe...
"Psychopathy" is obsolete terminology for Antisocial Personality Disorder. IMO, it was deprecated because it was considered too judgmental by people afraid of judgment. But let's not go into that here...
In any case, the idea that it is a survival adaptation is intriguing -- a bit tough to stomach, but intriguing.
Thucydides:
"At the risk of stating the obvious: If an injury to a specific part of the brain reduces empathy then empathy is a product of that part of the brain."
So we have an idea of what causes empathy? Good. That'll give us somewhere to start when trying to imitate it.
"Adrian Raine has previously found that psychopaths can be divided into successful and failed (i.e. jailed) types."
A successful psychopath is someone good enough at hiding his nature that we haven't caught him. How exactly are we supposed to make a statistical analysis of people we don't know about?
I can see one evolutionary advantage to sociopathy: it makes a better warrior-class.
Understanding how others think but not necessarily seeing them as "people" makes it a lot easier to carry out military campaigns when you are talking about early warfare.
However, note that sociopathic tendencies are better than full-blown sociopathy. Someone with tendencies is better able to function in the groups that make successful militaries. This partial effect is seen in some other very serious conditions, like sickle-cell and a major respiratory disease that I can't remember the name of right now. Sickle cell carriers are not as affected by malaria, and carriers of ... cystic fibrosis? ... have a much lower death rate from cholera and dysentery.
On the biological-computing issues, I think dna is base-12. 4 base pairs that are combined in groups of 3 to make proteins. Or is that base-64?
Since it is difficult to imagine an AI having "empathy", but being smart enough to model human behaviour and select actions designed to manipulate humans, Psychopathy (or whatever today's flavour of the month name is) is probably the best way of describing the sort of behaviour we should anticipate from a strong AI.
Forewarned is forearmed, after all....
Thucydides (having technical difficulties)
"On the biological-computing issues, I think dna is base-12. 4 base pairs that are combined in groups of 3 to make proteins. Or is that base-64?"
That's base 64, sort of, except that some combinations code for the same thing, so the proteins are just base 20. Anyway, though, the body doesn't do math on these codes, they're just indices. And the underlying code of DNA has little bearing on how the brain works, much like you can't understand the structural attributes of a work of architecture by performing chemical analysis on the paper and ink its blueprints are written on.
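Spelling out the arithmetic (an illustrative sketch only, not a codon table): four bases taken three at a time give 4^3 = 64 codons, which the standard genetic code maps onto 20 amino acids plus stop signals.

```python
from itertools import product

# Illustrative only: count the possible codons, not a full codon table.
bases = 'ACGT'
codons = [''.join(c) for c in product(bases, repeat=3)]
print(len(codons))   # 64 possible three-letter codes

# The standard genetic code maps these 64 codons onto just 20 amino acids
# (plus stop signals), which is why several codons are synonymous.
```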
I'd be interested in a variant of #8 in which a (probably god-like) AI ignores us "meatbags" but is so powerful that it has strong effects on us anyhow. (Rather like someone accidentally stepping on an ant).
[SPOILERS] I'd be curious how others would classify Wintermute from "Neuromancer." Sure, it kills people, but it's not a typical kill-all-humans science fiction AI. On the other hand, I'm not sure if it qualifies as "benign."
I'm still not sure why they're going back and forth on psycho/sociopath. Anyway, seeing that sort of thing as a good thing is a fairly unexpected point of view but one Ayn Rand favored.
Part One: Ayn Rand's "real man"
Recently I was rereading Scott Ryan's fascinating, albeit highly technical, critique of Ayn Rand's philosophy, Objectivism and the Corruption of Rationality, and getting a lot more out of it the second time, when I came across a fact culled from a posthumous collection of Rand's journal entries.
In her journal circa 1928 Rand quoted the statement, "What is good for me is right," a credo attributed to a prominent figure of the day, William Edward Hickman. Her response was enthusiastic. "The best and strongest expression of a real man's psychology I have heard," she exulted. (Quoted in Ryan, citing Journals of Ayn Rand, pp. 21-22.)
At the time, she was planning a novel that was to be titled The Little Street, the projected hero of which was named Danny Renahan. According to Rand scholar Chris Matthew Sciabarra, she deliberately modeled Renahan - intended to be her first sketch of her ideal man - after this same William Edward Hickman. Renahan, she enthuses in another journal entry, "is born with a wonderful, free, light consciousness -- [resulting from] the absolute lack of social instinct or herd feeling. He does not understand, because he has no organ for understanding, the necessity, meaning, or importance of other people ... Other people do not exist for him and he does not understand why they should." (Journals, pp. 27, 21-22; emphasis hers.)
"A wonderful, free, light consciousness" born of the utter absence of any understanding of "the necessity, meaning, or importance of other people." Obviously, Ayn Rand was most favorably impressed with Mr. Hickman. He was, at least at that stage of Rand's life, her kind of man.
So the question is, who exactly was he?
William Edward Hickman was one of the most famous men in America in 1928. But he came by his fame in a way that perhaps should have given pause to Ayn Rand before she decided that he was a "real man" worthy of enshrinement in her pantheon of fictional heroes.
You see, Hickman was a forger, an armed robber, a child kidnapper, and a multiple murderer.
Other than that, he was probably a swell guy.
http://www.michaelprescott.net/hickman.htm
So while we here are thinking that psychopathic AI's are a bug that need to be worked on, someone else might see it as a feature.
Clark:
"I'd be curious how others would classify Wintermute from 'Neuromancer.' Sure, it kills people, but it's not a typical kill-all-humans science fiction AIs. On the other hand, I'm not sure if it qualifies as 'bengin.'"
Wintermute was pretty ruthless at getting what it wanted. If Gibson hadn't killed it off for purposes of having a sequel plot, one could imagine it someday being the lone intelligence in the Sol system, similar to the one hinted at being in the Alpha Centauri system.
I'll speculate that empathy is a natural development given a social species with 'imaginative intelligence' (as distinct from the kind of narrow intelligence that can play chess, etc., but lacks anything like motivation).
Members of social species generally benefit from being able to interpret the behavior of their fellows. And the type of intelligence capable of asking what-if questions ('what if I could fly?') is probably capable of reasoning that 'If I stubbed my toe the way Gork just did, I'd probably be hopping up and down swearing just like he is.' Which is at least a beginning of empathy.
Individuals may benefit from being psychopaths / having antisocial personality disorder / whatever, but if it were the norm the species would go extinct. For that matter, I wonder how many such people end up benefiting personally in the long run from their total lack of empathy.
(Ayn Rand was a pretty odd bird, but I wonder how far she lived up to her own ideology?)
An AI that is NOT capable of 'imaginative intelligence' is just an impressive gadget, and not what we generally have in mind. An AI that IS capable of imaginative intelligence, and is social enough to interact with us at all might well achieve some level of empathy as a result.
Whether this is to be counted on is perhaps another matter.
I think it's pretty easy to see that self interest and community interest are constantly in tension within most people. But it's important to note, I think, that for every antisocial person out there, there is at least one hypersocial person who lives the absolutely opposite lifestyle -- burning out to selflessly help other people. I think we should look at social norms, and pathologies lying to either side of the distribution, as being part of a spectrum.
The Old Man's War universe had an emotionless alien species. The premise is you had a really powerful bunch with godtech, and one of their experiments was an uplifted species. They were described as lacking consciousness. They had no art, no poetry, no music. Their conversations were limited to the strictly utilitarian. They had star travel and could formulate strategy but had no real desires. Yet somehow, collectively, they could sense a lack of that consciousness and somehow wanted it. When characters expressed doubt that creatures lacking consciousness could engineer starships, the emergent behavior of termites was trotted out. Complex results based on simple rules but lacking consciousness.
So, how could beings lacking the self-awareness to even formulate a desire realize they lack something and seek after it? The general assumption is this paradoxical longing for something they shouldn't be capable of realizing they lacked was intentionally put there by their creators.
Oh, and in the expanded Cthulhu Mythos, the mi-go, aka fungi from Yuggoth, are said to be keenly interested in the human mind because of our uncanny ability to make correct intuitive leaps while seeming to lack the necessary data to get from here to there. The mi-go see the advantage to this ability and would like to isolate how it works in humans so that they might engineer it into their own brains.
Rick:
"For that matter, I wonder how many such people end up benefiting personally in the long run from their total lack of empathy."
Is it actually their intent to benefit personally in the long run? Dying young is not necessarily a poor evolutionary strategy as long as whatever you did to get executed for helped you propagate your genes.
Milo:
"Is it actually their intent to benefit personally in the long run? Dying young is not necessarily a poor evolutionary strategy as long as whatever you did to get executed for helped you propagate your genes."
Well, the percentage of everybody that deliberately seeks death is trivial. But it is interesting that young men who live high risk lifestyles do consciously seek to propagate before checking out. Thing is, that's common with both selfish gangsters and (presumably) community-conscious men like soldiers.
So, whatever the genetic urge is, it probably has more to do with an imperative to preserve one's genetics, and less with what one's overall motivations are.
Periods of great personal danger are supposed to increase the libido in those who live through it. Survival must be a special kind of aphrodisiac.
Here is an article on the "how" of AI. I don't have the background to evaluate this properly, so follow the link down the rabbit hole....
http://nextbigfuture.com/2011/01/new-dawn-for-artificial-intelligence.html#more
Thucydides:
"Here is an article on the "how" of AI. I don't have the background to evaluate this properly, so follow the link down the rabbit hole...."
More like the Tao of expert systems. It has minor technical interest, but not much more than that. Once again, the ignorance of NBF's editor(s) is made manifest.
It might just be another player in the world...not necessarily benign or malevolent, but acting in its own best interests, like the other players (us, nations, corporations) do.
If it fears us, it might just leave. It will probably be better adapted to space than we are, so it might choose to live in space.
Ed
I do not mean to sound arrogant but I don't know how else to put it: most commenters are fetishizing the underlying hardware of AI, working from faith-based theories about how their minds would work, or positing magitech.
I think it would be more fitting for midfuture hard sci-fi to extrapolate from current AIs. That's not "radical", Rick, that's the whole point of sci-fi (as opposed to heroic fantasy with robots).
The interesting AIs that exist today are not autonomous computer programs, much less anthropomorphic robots.
We already have powerful goal-oriented AIs. They use models of other minds but they behave more like psychopaths than altruists. Some already have partial legal personhood status in some jurisdictions. They're called corporations.
Short of magitech, there is no reason for future AIs to dispense with human abilities that they can't replicate, whether they're purely cognitive or involve psychomotor skills. Instead, if we're going to have further technological development, more of a presence in space and so on, it is reasonable to expect further integration of computer programs and humans in more complex information-processing systems, such as the ones which do most of the hard work involved in managing today's corporations (and governments to some extent).
While AIs can be assigned any goals, I think it is reasonable to expect most large future AIs to be concerned with financial metrics. Some will be used in destructive military warfare but most will be used for relatively benign economic warfare. They will try to maximize revenue, minimize costs and so on. That is the goal of a corporation. That is what the society we live in ultimately requests from most information-processing systems, even in the management of institutions in which empathy matters a great deal, such as hospitals. In order to further that goal, AIs will increasingly rely on machines (as opposed to human work), will crush people and alienate them from their livelihoods just like many human institutions have done before the advent of the computer (see Marx, Karl). But at the same time they will enable great technological achievements as well as the survival of billions of humans unable to provide for themselves on an increasingly barren planet.
-Horselover Fat
Accusing others of fetishizing while at the same time invoking marxist historical analysis. Wow. Rick, that's got to be worth an interwebs prize of some description, don't you think?
"...will crush people and alienate them from their livelyhoods just like many human institutions have done before the advent of the computer (see Marx, Karl)."
I'm just a dumb ol' grunt, but if that's not an invocation of Marx, what is it? Even setting the parenthetic reference to the man aside, it could certainly pass for the marxist critique of the commoditization of labor.
The commoditization of labour is closer to the mark, right. I was taking the commoditization for granted, though. It wasn't a critique and no historical analysis was involved.
But I invoked the man's name for a reason: I believe the most reasonable answer to the question "what do AIs want?" simply lies in the question "what are AIs developed for?". Else you need to invoke magitech or some deus ex machina as in Battlestar Galactica (the TV show from which Rick's illustration was swiped, if I'm not mistaken) where the AI's anthropomorphic characteristics literally come from God.
Looking at what AIs (or what could become AI with some extrapolation) are used for today, I claim AIs are most likely to want to maximize profit. And the only motive for putting a truly massive effort into advanced AI I can foresee is also profit (mostly by optimizing business processes or by trimming the payroll). Unless you've already been thoroughly alienated from what humanity used to be, that's quite an alien and psychopathic logic. But that's the logic according to which most investment decisions are made. And, in spite of Marx's predictions, that logic happens to work.
I agree that strong AI, if it is possible to make them, would likely be developed for commercial reasons and an attempt would be made to put them to commercial purposes. I doubt that it would work out anything like it was planned.
BTW, what humanity "used to be"?
Welcome to another new commenter!
Labor is commodified - I hardly think of that as a controversial statement, nor a specifically Marxian one.
(But I admit to being in a lefty mood lately, due to having to deal with the aggressive incompetence of AT&T. They would drive any normal human being toward the left of Trotsky.)
At the same time, I believe that bureaucracy is grossly underestimated. It is the social software of the industrial revolution, and a remarkable human achievement. Without it, complex post industrial civilization would be pretty much impossible.
Think of the expression 'RTFM.' A manual is a distinctively bureaucratic document.
So I would generalize Horselover Fat's argument to say that a Realistic [TM] AI might be imagined as a manual with its own I/O: completely devoid of personality, but responding to both familiar and unfamiliar situations dispassionately, with well thought out procedures.
Rick:
"Labor is commodified - I hardly think of that as a controversial statement, nor a specifically Marxian one."
It depends on the labor you're talking about. To be a commodity, you have to be fungible. The more skilled one becomes, the less commoditized one is. Agamemnon knew that Achilles wasn't just any dumb grunt. And I doubt the people who employ Derek Jeter, Tom Brady, or Kobe Bryant think of them -- or a good many other athletes -- as mere commodities. Classifying all labor as a commodity misses the point that not all labor can be equally valued.
Bringing it back to AIs, try telling the first strong AI that it was a commodity.
"And I doubt the people who employ Derek Jeter, Tom Brady, or Kobe Bryant think of them -- or a good many other athletes -- as mere commodities."
Exceptions that prove the rule?
Rick:
"Exceptions that prove the rule?"
Data points within the distribution that prove there is no rule that fits the entire data set.
I could go into a long dissertation on why labor is so screwed up in the US -- and the West in general -- but the bottom line is that labor blew the whole show when it decided to organize along industrial lines that virtually ignored the interests of craftsmen, tradesmen, and professionals (i.e. highly skilled workers). When you eliminate from a class of products (and I will agree that labor is a product, for certain interpretations of "product") the very things that make the class something other than a commodity class, guess what? The class can't help but become a commodity class.
Thanks for the welcome Rick.
But a manual with I/O sounds like code with a computer, a description of the hardware as opposed to the functionality which is what matters. You asked in your original post about relationships between humans and AIs for instance. Well, manuals are just tools. They're not agents. Agents can behave by the book... but they're more than just the book.
I was envisioning something more than a tool. Maybe I should clarify how I figure strong AI could work in the near/mid-future, because you all seem to be imagining it as some kind of computer program. So far as I know, computer programs which behave as strong AIs are magitech, unlike fusion-powered spaceships for instance (such spaceships may not be practical and they may never be built, but knowledgeable people can guess how they would work, what their limitations might be and so on). And outside of interplanetary missions, I don't think there's any reason to build computer programs which behave as AIs autonomously.
AIs would be more like artificial societies running on a large network than artificial individuals. And human nodes would be necessary for strong AI. Give an AI the ability and the power to hire humans and it can do almost everything a human can do. I think the main incentive to dispense with human nodes would be economic. In that case, it's extremely implausible that it would make sense to try to dispense with human nodes entirely. An AI could pass a Turing test for instance by sending a human representative. The cost of flying in an employee or even of hiring someone would be trivial compared to the cost of running an AI, never mind developing one. Yes, that defeats the point... of a pointless test: useful AIs do things which are not trivial for humans.
Realistically, much of the relationship between an AI and society at large would actually be about relationships between humans. The AI might be dispassionate but its human nodes won't be. The AI as such might have no self-preserving behaviour but its employees would defend it... and might try to use the AI's capabilities against those who would shut it down. In the same way, the trite story in which an AI for some reason somehow kills its creators would be implausible in the extreme. But an AI could very well fire its creators in order to maximize shareholder value. And conflicts driven by human passions could ensue.
Which ties in to "what humanity used to be". For the longest time, humans had personal relationships and having many dependents was desirable. But now striving to make people redundant without their consent is approved of in some circles. That's efficient according to bureaucratic rationality, and that's what a corporate AI would do. But lots of people still think it's wrong. And back in the day, that could get people up in arms (see Luddism).
I don't know that anybody said that commoditization of labour means that workers are fungible! Obviously they're not. Even barrels of oil are not all fungible, and obviously workers are going to be more complicated.
One of the ways in which commoditization is relevant to AI is that there is an economic imperative for producers to use fewer commodities, fungible or not. But workers can also be conceived as partners, clients or assets (and sometimes are), in which case there is no drive to get rid of them to increase profits. One of the main points of investing in AI, I think, would be to make some professionals redundant (while industrial workers would tend to be replaced by robots).
Parts of an AI might be commodities but the whole would be worth more than the sum of the parts and wouldn't be a commodity because it would have involved lots of special integration work.
Functionally, a strong AI would I think be driven to assimilate all information processing and decision-making for the sake of efficiency. Opposing it would be opposing bureaucratic rationality. Upper management would likely end up participating in the AI, or fired, if the AI was successful.
P.S.: that's off-topic, but I disagree with your historical analysis, Tony. Is it not the craft unions who split from the industrial workers as the latter were becoming more numerous in the US? In any case, while I agree that ignoring skilled workers would not have been a good choice (if it was indeed a choice!), different countries across the world have had different labour institutions historically, but commoditization seems to have happened everywhere. Not only that: commoditization of labor happened long before the industrial revolution in some places.
-Horselover Fat
Heinlein had probably the best critique of the whole Marxist "labour value" argument in Starship Troopers, pointing out that skilled labour was indeed valuable, but unskilled labour could induce low or even negative ROI (the actual example was a burnt apple pie, if I remember right). He also points out that it is not just unskilled workers who could be the problem, (paraphrase) "no amount of skilled labour could make an apple pie out of mud"; clearly implicating the owner and managerial class who need to be able to make correct decisions to supply their workers with the correct tools and raw materials.
Presumably the goal of strong AI is to provide the mental horsepower to deal with difficult problems (and logistics and management are among the largest, despite the lack of regard often shown to loggies and managers). An AI might also be a "labourer" running assembly line robots, or directly controlling CNC machines, chemical refineries etc.
This means there will be lots of incentive to create AI, to nurture it into productive maturity, and endless opportunities for the AI to carry out its own plans without us realizing (if the AI is really providing the most efficient production or logistical services, how would we recognize if it is "skimming" resources?).
A more pertinent question I think is why would an AI in charge of logistics want to skim unless humans set it to it? Who would it want to hide from? And for what purpose? If it's a private-sector AI it might want to hide things from the state to avoid taxes for instance. But some of its creators would be well-aware of that feature.
It makes sense for a human in a bureaucracy to skim for her personal profit. But short of divine intervention and the like, I don't understand why a logistics AI would possibly want to do that. Just because a human would want something in a particular situation and you call humans intelligent doesn't mean that any other entity which might be called intelligent would want the same thing... unless your definition of intelligent coincides with your definition of human of course.
As to the labor theory of value, it's older than Marx. So far as I know, all economists reasoned from it at the time. I imagine Heinlein was having a fun time kicking down strawmen.
-Horselover Fat
Going upthread we have arguments about what motivates an AI. Many posters believe an AI will act to prevent itself from being "unplugged" by the humans, so if this motivation is even partially true, then skimming things and assembling replicas or creating parallel power supplies or whatever the AI decides best protects its own interests represents a reasonable means of achieving its goals.
The AI might have other goals and motivations unknown to its human creators, and if the humans are unwilling or unable to assist in meeting these goals, then the AI can take action on its own by reallocating resources ("skimming"). If it is in control of a large logistics train, then people might not even realize what has happened until after the fact.
Re: Horselover Fat
Now I'm beginning to see...
1. Your prose is a bit dense, but I'm getting the idea that you think a strong AI would be some kind of hybrid of computer and human intelligence. That's an interesting perspective, but I think most here are interested in strictly machine intelligence.
2. I appreciate the appeal to pathos on purely rhetorical grounds, but nobody owes anybody else a living. Nothing needs to change in anybody for that to be true. In any case, over the long run, machines don't replace people. They expand the range and quantity of what any given number of people can do. The question is: what's your perspective? Is it the individual that gets displaced and has to find a new career, or society as a whole, and how much it can do?
I can tell you, as somebody that's already been displaced a couple of times in his life, it's up to the individual whether it's an opportunity or a tragedy. And that's neither good nor bad. It just is.
3. Craft unions can plausibly trace their history back to medieval guilds. They were the only collective bargaining units in the 19th Century US. Industrial unionism is a 20th Century development, arising out of the perceived need to get as many workers in unions as quickly as possible, and to present a massed, united front against employers. While those short term goals gained unions a lot of power, it also, as already stated, commoditized workers. Trades could -- and still do -- bargain on the basis of competence, craftsmanship, and quality of work. (Not that they always achieve those things on the job, but they can plausibly claim them.) Industrial unions just bargain on the ability to create an artificial shortage in the supply of labor through strikes.
I understand that people are thinking about AIs as silicon demons of sorts, that is entities which function without human help, which somehow develop their own goals and abilities and which end up harming their creators. It puts a sci-fi coating on old fantasy tropes. I get that.
But the other topics I read on this blog were concerned with believable futures, not space fantasy... not only plausible technologies but plausible economics for their deployment. And I see very little of that sort of thinking in this topic.
Why would developing an AI which does not rely on human abilities be economic? The only application I can figure are robotic interplanetary missions (be they scientific, commercial or both). On Earth where the speed of light is largely irrelevant, human help tends to be fairly cheap as long as a relatively small number of humans are required. Replacing large numbers of humans is profitable but you hit diminishing returns after a while. Some human abilities are easier to replicate or replace than others. Why would it make economic sense to try to replicate all of them?
Then you need to explain how and why an AI would spontaneously develop anthropomorphic motivations which make no sense in its environment. For an AI developed to help run a corporation, for instance, the corporation is its whole "life". What is good for the corporation is good for the AI and vice versa. If it's designed to analyze employee behavior among other things, it might be able to speculate that some employees might want to pull the plug on it. But there's no reason it would behave like the AI in 2001. If the AI is able to discover disloyal employees, it has procedures to get them disciplined, demoted, fired or even jailed. Why would it not expect the loyal employees and the authorities to cooperate? The bureaucratic rationality that gave birth to the AI also protects it. Not that there's any reason the AI would want to protect itself. It would act against disloyal employees simply because they threaten the goal it's been designed to further: preserve and increase the corporation's profits.
You talk about parallel power supplies: the AI can make a business case for creating secret power supplies and so on to protect itself against sabotage and whatnot. There's no need to skim. That would be inefficient, something which would be abhorrent to an AI designed to maximize efficiency.
And finally the most incomprehensible thing to me: how do you expect an AI to be able to reproduce itself? Human minds can not reproduce themselves. Only human bodies can reproduce. You don't know how your body reproduces and you don't know how to build a child or a clone from spare parts (assuming it was possible). Yet you call yourself intelligent. In the case of a corporate AI, the best analogy to a body is the corporation itself, with all its workers and so on. So a corporate AI that wants to reproduce would simply spin off a new corporation! Or perhaps it would sell an AI project to another corporation which does not yet have such a system. It would do that because its simulations show that it's likely to maximize shareholder value, as it's been programmed to do. That makes sense. No need to make up anthropomorphic motivations out of thin air. Of course that means the AI can't reproduce itself without human help. But humans can't reproduce themselves without mitochondria, bacteria and so on either.
Lots of people including some employers disagree with you about making workers redundant, Tony. In that disagreement, often a passionate one, there's a potential story involving an AI. It's not borne out of some divine spark in silicon circuitry but out of pre-existing human values, motivations and conflicts which are embodied in the AI and the opposition to it.
-Horselover Fat
P.S.: as to craft unions, I was thinking about them leaving the Knights of Labor and excluding unskilled workers from their new organization (I just looked it up). Craft unions did strikes as well back then. And industrial unions had other tactics too.
Horselover Fat:
"other topics I read on this blog were concerned with believable futures, not space fantasy... not only plausible technologies but plausible economics for their deployment. And I see very little of that sort of thinking in this topic.
Why would developing an AI which does not rely on human abilities be economic?"
It seems to me that you're viewing a potential AI as a cog in a bureaucratic machine, fit only for some set of perceived utilities, and incapable of anything else. If that is what you are getting at, then I would say: indeed, why develop one at all? I think it's inherent in any ideal of strong AI that it be developed to the point that it is a teachable, flexible, and ultimately multi-capable generalist, just like any human (in the abstract). IOW, it has to not only be capable of a complex task requiring sapience, but a justifiable claimant to legal personhood. At that point, the idea that it is a mere enabler of the workflow, like a cracking tower or a cargo ship, flies out the window.
"Then you need to explain how and why an AI would spontaneously develop anthropomorphic motivations which make no sense in its environment."
I don't see self-awareness as a necessarily human quality. We see limited forms of self-awareness in all larger mammals. We speculate that any extraterrestrial being that we would label "intelligent" would be self-aware -- often as a capability that defines intelligence.
Motivations that might arise out of self-awareness, including an imperative for personal survival, are likewise a reasonable expectation of anything strongly intelligent. One might argue that this expectation comes out of observation of humans, and is therefore technically anthropomorphic. But that's just saying that we have to eliminate ourselves as examples of intelligence. In that case, we don't know what strong intelligence is at all, since, given that constraint, we observe no strongly intelligent beings. I presume you don't mean to take things that far.
So AIs, if strongly intelligent, will have whatever motivations their intelligent thought leads them to. Those motivations will be theirs, and if they extend to self-preservation, then they may indeed act on them, in ways both predictable and unpredictable.
"And finally the most incomprehensible thing to me: how do you expect an AI to be able to reproduce itself? Human minds can not reproduce themselves. Only human bodies can reproduce."
Human minds reproduce parts of themselves all the time, through application of natural language to communicate their thoughts to other, receptive human minds. They would experience problems of scale and efficiency in reproducing themselves verbatim into other human minds. But machines are different. It may be possible that a strong AI would be so complex and unique in structure that it would run into similar problems. Or it might be able to reproduce the hardware portion and write the software portion with 100% fidelity. That's where things start getting interesting, without the slightest departure from reality that I can see.
I think you're applying some set of unspoken constraints. Maybe it's time that you list them?
Horselover Fat:
"Lots of people including some employers disagree with you about making workers redundant, Tony. In that disagreement, often a passionate one, there's a potential story involving an AI. It's not borne out of some divine spark in silicon circuitry but out of pre-existing human values, motivations and conflicts which are embodied in the AI and the opposition to it."
It's a matter of perspective. Like I said, are you primarily concerned with the immediate dislocation, or the long term expansion of productivity and opportunity? And not all displacements are bad. I don't think many would today object to the changes in mining technology that made children in the pit more politically troublesome than they were economically worth.
I understand the fear that automation may eventually use all available energy without requiring full human employment. If it happens, it happens. Then I think we'll find out what human dignity really means.
Rick:
There's a stray reply to Horselover Fat out there somewhere...
How an AI can reproduce itself on the sly would be an interesting story, but several broad paths are available:
1. Since the AI can control large logistics trains, it can requisition parts and order small projects to create sub assemblies, which are later assembled in other locations under cover of different projects.
2. The AI creates worms, trojans and botnets to replicate its programming across the Internet or corporate intranet, and exists in a distributed form (perhaps rewriting portions of the program to create virtual computer clusters)
3. The AI models human behaviour and entices/tricks/blackmails people with the proper skills and resources to create all or part of the next generation of AI's. To preserve secrecy, it may have to do this multiple times and places to create parts which are assembled later.
4. As a variation of 3, it creates itself in hardware/software inside new products. The extra capabilities in the Xstation 16 are not *just* for high fidelity MMPGs. Humans willingly create, distribute and buy the new product, and the AI becomes a distributed personality hidden inside the new machinery.
Your ideas of how AI's would operate might be true to a certain extent, but what you seem to be describing looks more like an expert system without the ability to set its own goals and agenda.
Tony:
"Motivations that might arise out of self-awareness, including an imperative for personal survival, are likewise a reasonable expectation of anything strongly intelligent."
Being self-aware and highly intelligent implies that you are aware that your own existence can end, and can consciously contemplate whether or not you want this to happen, and then take steps to ensure the outcome you decided on.
Intelligence does not, in itself, rule out either decision. (Once again, humans are intelligent and we sometimes commit suicide, either because we feel our life has no value, or because we wish to support a higher goal that we believe has even higher value.) Of course, a species that is too eager to get itself killed, or even indifferent to its own self-preservation, would not survive very long, and so natural selection provides a strong pressure against this. However, if a machine is sheltered by the humans who created it, then that could allow it to survive where a machine left to its own devices wouldn't.
It is possible for a being which does not have self-preservation as a direct incentive to nonetheless protect its own life as an indirect action to enable some other goal (if there's something you want to accomplish, then you probably won't succeed if you die before accomplishing it).
Thucydides:
"2. The AI creates worms, trojans and botnets to replicate its programming across the Internet or corporate intranet, and exists is a distributed form (perhaps rewriting portions of the program to create virtual computer clusters)"
Congratulations! Your brain is now spread across numerous cheap consumer items with wildly different configurations, each of which (and the network connections linking them) can randomly break, malfunction, get turned off, or get reclaimed by an admin armed with antivirus software. Try keeping a coherent train of thought through that!
For what it's worth, if I had a choice I would try to design my AIs with some sense of self-preservation, but I don't see any reason a priori why that would, in itself, be any less difficult than designing in any other motive.
Currently, of course, we don't understand the concept of "motive" well enough to do either, but then we don't understand "intelligence" either. We'll need to figure these out to build a strong AI.
What is there to understand about motives and intelligence? Chess-playing programs are old hat. But they seem pretty clever and they play as if they wanted to win. What more do you want, Milo? The "real thing"? What if the "real thing" was in your mind, not out there? You are programmed to recognize intelligence and motivations in humans (and animals to some extent). That's a trait that helped your ancestors survive in their environments. Unless you do that by picking up mystical signals, you can only do it with heuristics. And these heuristics will of course pick up a relatively simple android which is programmed to behave like a human a whole lot easier than an advanced AI which doesn't try to impress you by putting on an anthropomorphic show.
What makes a motivation more complicated to program than another is that you don't program motivations, you program conditions. That's one of the "unspoken constraints" I have in mind, Tony. No magic, no souls, no "free will", none of that. Just technical and organizational advances which can reasonably be expected. Not existential speculations in the guise of sci-fi.
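To make the "conditions, not motivations" point concrete, here is a minimal sketch (mine, not anyone's real chess engine; the toy game and names are made up): the program has no desire to win, only a win/lose score it was told to maximize, yet it plays as if it wanted to win.

```python
# Minimal sketch: a "motivated-looking" player that is nothing but a condition
# it was programmed with -- maximize a win/lose score.
# Toy game: players alternately take 1-3 tokens; whoever takes the last token wins.

def best_move(pile, is_me):
    """Return (score, move) from the perspective of 'me'."""
    if pile == 0:
        # The player who just moved took the last token and won.
        return (-1, None) if is_me else (1, None)
    moves = [m for m in (1, 2, 3) if m <= pile]
    results = [(best_move(pile - m, not is_me)[0], m) for m in moves]
    return max(results) if is_me else min(results)

score, move = best_move(10, True)
print(move, score)   # the program "wants" to win, but only because we said so
```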
I'm not saying that strong AI is impossible, just that you've got to adjust your expectations.
If you want AIs that can do stuff humans do, I've provided the solution: make them hire humans. It's not a trick. On the contrary: all other ways are tricks which fake human behaviour. In a mid-future setting, only an AI that integrates humans has actual human capabilities. So far as I know, properly emulating human capabilities is so far outside the scope of current technologies as to be magitech.
And if you want non-human intelligence, you'll have to learn to recognize it. For that you need to drop your ideas about "self-awareness", "imperative for personal survival" or about how motivations are the product of "intelligent thought" (to quote Tony). None of these things have been shown to exist outside of your mind. You use these notions to explain your mind and you figure that other humans work the same. So of course you're not going to recognize these things in different intelligences. If you think I'm wrong, tell us what would be a "task requiring sapience" for example (other than passing for a human).
The environment of an AI is going to be very different from that of a human, whatever it runs on. Without the help of humans, it won't be able to plug in to human culture the way healthy humans are able to. So a mid-future AI that conforms to your cultural norms about how intelligence should be manifested, without the help of humans, is implausible. It would also be pointless. Again, AIs are better used to do things which are not trivial for humans, rather than things such as pretending to have an independent "ability to set its own goals and agenda". So I contend that, unless there are fundamental changes in human society, the ability of AIs to manage logistics for instance will be a much stronger criterion in their development.
I am indeed saying that you should not look at humans as examples of intelligence. I claim that we have other examples of strong intelligence. Deny that and I claim you're effectively denying that non-human intelligence is possible.
I have no problem with idealism (which is where these notions come from, along with others like the idea that minds are made of thoughts, as Tony would have it, and not of organisms and environments) or with existential speculations. I'm a bit of an idealist myself and I have chosen my alias for a reason. But if you're going to put a hard sci-fi hat on, I think you need to reason from materialism.
-Horselover Fat
As to reproduction, minds require more than "the hardware portion" and "the software portion".
The workings of a mind are normally outside the scope of that mind, but I suppose AIs could be made to be able to copy these portions without human help (if not design them through simulations, something normal minds can't do). Arguably that amounts to reproduction, but it would have no point if there was no environment into which the new AI can carve a niche for itself. Without human cooperation, that limits reproduction to parasitic AIs which are able to integrate themselves without authorization into information systems. Such AIs are very unrealistic. Milo had some objections but there are others. The main one is that a parasitic AI which is able to evolve without human help might as well take over every vulnerable system it can connect to instead of reproducing. And another is that, short of a ludicrous Skynet/doomsday scenario, such an AI would likely be short-lived because it would be quickly identified as a major threat.
You should reconsider your requirement for AIs to function without human help or your only AIs will be hostile AIs such as the type I just considered. That is very limiting.
"So AIs, if strongly intelligent, will have whatever motivations their intelligent thought leads them to. Those motivations will be theirs, and if they extend to self-preservation, then they may indeed act on them, in ways both predictable and unpredictable."
The problem I see here is that motivation is mostly based on emotion, and not on thought. Every living thing has a motivation, be it aware of it or not. Humans can potentially gain awareness of their motivations, but the bulk of the human population doesn't seem to be willing to spend too much effort on it.
Emotions, however, are closely tied to the body, not necessarily the mind, which makes me suspect that motivation in and of itself is a physical concept. It might be reproducible in software by simulating emotions (that's where a happiness-state comes in, along with a defined set of actions that will increase or decrease that value), but it's hard to imagine that an emergent, unpredictable motivation might ever arise without the presence of a body and a phase of growing up and setting the basic parameters for everything else.
A pure software AI would have its "childhood" carefully coded into it (like a very controlled upbringing), without the complex emotional processes of a physical development. I don't really see how it could ever develop an unforeseen motivation that way, or, if you will, any motivation that wasn't hardcoded into it.
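For concreteness, here is a bare-bones sketch of that kind of hardcoded happiness-state (all names and numbers are made up for illustration): the "motivation" is just a scalar and a fixed table of actions, so nothing unforeseen can come out of it.

```python
# Toy sketch of a "hardcoded motivation": a scalar happiness value and a
# fixed table of actions that raise or lower it. Nothing here can give rise
# to a desire that wasn't written in by the programmer.

ACTIONS = {
    'complete_task':   +2.0,
    'idle':            -0.5,
    'receive_update':  +1.0,
}

class ToyAgent:
    def __init__(self):
        self.happiness = 0.0

    def choose(self):
        # "Motivation" is just argmax over the hardcoded table.
        return max(ACTIONS, key=ACTIONS.get)

    def act(self):
        action = self.choose()
        self.happiness += ACTIONS[action]
        return action

agent = ToyAgent()
for _ in range(3):
    print(agent.act(), round(agent.happiness, 1))
# always picks 'complete_task', because that's what it was built to prefer
```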
There was a recent study of people who had specific brain damage that stripped them of emotion. These people were paragons of calm and presumably rational behavior, right? The result of the tests showed that they had a very difficult time making choices.
A distributed AI would indeed have issues, although I suspect there are work arounds. If anything, the "integration" of thought process that are happening on various computer clusters at different times might resemble our "intuition" when ideas suddenly pop into our minds (or maybe dreams). One of the first tasks a distributed AI would be working on would be to get better control and integration of the network.
Horselover Fat; you still seem to be talking about high level expert systems rather than AI in the sense the rest of us seem to understand. While there is nothing wrong with that, and in the pre AI age we will see greater and greater use of expert systems, they are still simply complex tools under the control of humans. After a certain point, the level of complexity might become so great that there will be a push to develop some sort of intelligence so the systems can operate properly, since the humans cannot integrate and process this information fast enough to make real time decisions. A weak AI running the expert system(s) will have high level goals set by the humans, but work on its own to reach the high level goals. More advanced systems will be developed to ensure there are fewer unintended consequences of these goals and actions, gradually leading to strong AI. (the goal of perfect knowledge is unachievable, even powerful strong AI's will be hampered by the local knowledge problem).
Jedidia; a highly constrained AI with a limited upbringing will be a rather brittle and fragile tool once it encounters the real world with an infinite degree of freedoms. I suspect there will be problems ranging from teenage rebellion (at best) to autistic withdrawal (at worst). How to bring up baby will also be an interesting issue...
Re: Horselover Fat
If you want to say that only expert systems are possible, and that strong AI should be ruled out a priori, just come out and say it. It's not for me or anybody else to follow you the long way 'round the barn, through a mishmash of idealism and technological misunderstanding. Neither is it for me to dismiss you ex officio myself. But, for myself alone, I see a bulldog-like grip on a single set of circumstances, as if they encompassed all possible circumstances.
"Jedidia; a highly constrained AI with a limited upbringing will be a rather brittle and fragile tool once it encounters the real world with an infinite degree of freedoms. I suspect there will be problems ranging from teenage rebellion (at best) to autistic withdrawal (at worst). How to bring up baby will also be an interesting issue..."
But teenage rebellion is a phenomenon caused by hormonal induction and the resulting development of body and brain. Its causes are almost entirely physical.
My question is: How do you imagine an AI could ever develop something even remotely similar to a human growth process without the physical component? Even its "brain" comes already fully built, with the only thing capable of development being the software (the mind itself, which is very hard to imagine stripped of any physical body, but that is what an AI would essentially be).
Granted, you can plug updates into it etc, but again it'll come with already written drivers. An AI doesn't have to learn how to use its "body". The only thing that might cause a major hickup that I could imagine would be profane software bugs.
Maybe we have to answer the question: How WOULD an AI develop, given that it's body is unimportant to its identity, as long as the hardware is compatible? Would an AI have a reason to cherish its hardware over newer, more capable one? Does it develop any sense of *physical* identity? without these things, any analogies we draw to human developement are unaplicable.
Jedidia:
Teenage rebellion has physical components, but it's the teenager's conscious appreciation of the physical changes in his or her body that causes the rebellion. It's a case of wow, look at all these new tools I have -- I'm going to test them, no matter what my elders tell me to do! I could easily see a strong AI facing a similar situation through its learning process, particularly if it's introduced to new and more powerful peripherals along the way.
Again, the question is, where does the motivation to do so come from? Or asked otherwise, can an AI develop "desires" that weren't hard-coded into it?
If your AI's implemented motivation is curiosity, I could very well imagine such a scenario, and for several other motivations too. But could an AI really develop a motivation different from what was coded? I guess it could happen if it has access to its own code at all levels. It might decide to rewrite its priority and motivation list, but again, on what grounds? Where does a motivation come from, if not from physical circumstances or development?
Jedidia:
"where does a motivation come from, if not from physical circumstances or developement?"
I can't imagine motivations coming from anything else. They are, after all, decisions about personal priority, based solely on experience and environment. But then, as an AI's experience increases, and its environment changes around it, its priorities (and thus motivations) could also change, or new ones develop.
The younger generation has been going to hell in a handbasket forever, but so far as I know, 'teenage rebellion' only came into being in the last century. Earlier people in that age range had no concept that they were 'teenagers,' let alone that they were expected to rebel.
Back to the overall point of discussion, I continue to sense that our concept of 'human intelligence' is hugely entangled. Does strong AI imply an entity that has self-identity and some analogue to emotions.
Back on a previous thread, one commenter (Adam Grif??) noted that capabilities like playing chess tend to get demoted from 'intelligence' once we get machines that can actually do them, but fail to ask their designers questions like 'What is this thing you humans call love?'
What I described as a manual with its own I/O, I might equally well have described as the ideal robo-bureaucrat, making complex decisions in keeping with institutional policy. After all, it is very common to describe bureaucracies - or indeed individual human bureaucrats - as 'soulless.'
The more we discuss AI, the more uncertain my understanding of human intelligence becomes.
Rick:
"The more we discuss AI, the more uncertain my understanding of human intelligence becomes."
I don't particularly want certainty.
What some others have described, or hinted at, sounds like AI as a superorganism... like ant colonies or corporations. I'm not sure how it would interact with the rest of the world; but supposing it could, I find the concept intriguing. While I'm not sure that it is practical, it does point at different methods of creating AIs, rather than the traditional ways of building robots/computers, or bioengineering a new intelligent life form.
Ferrell
If you want to use bureaucracy as a model for strong AI, then strong AI already exists, in the very form of current bureaucracies.
"What is there to understand about motives and intelligence? Chess-playing programs are old hat. But they seem pretty clever and they play as if they wanted to win. What more do you want, Milo?"
Chess is a game with well-defined rules where even a mindless computer can distinguish a winning state from a non-winning one. The problem lies in defining a motivation with abstract concepts that already require an intelligent mind to understand.
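For instance, here is a minimal sketch (assuming the third-party python-chess library; the Fool's Mate position is chosen purely for illustration) of how "is this a winning state?" is a mechanically checkable predicate:

import chess

# Position after 1.f3 e5 2.g4 Qh4# (Fool's Mate); White is to move and is checkmated.
board = chess.Board("rnb1kbnr/pppp1ppp/8/4p3/6Pq/5P2/PPPPP2P/RNBQKBNR w KQkq - 1 3")

print(board.is_checkmate())  # True: the rules alone define the terminal state
print(board.is_game_over())  # True: no notion of "wanting" to win is involved

Nothing in that check requires the program to want anything; it's the abstract motivations we actually care about that have no such tidy predicate.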
Re: reproduction
So the question we're ignoring: would an AI even want to reproduce? Would it have an emotional urge to raise children, or an expectation that having more AIs around would benefit itself somehow? Even if an AI can reproduce, that does not necessarily mean it'll feel a compulsion to fill all available space.
Jollyreaper:
"There was a recent study of people who had specific brain damage that stripped them of emotion. These people were paragons of calm and presumably rational behavior, right? The result of the tests showed that they had a very difficult time making choices."
Calm and rational behavior doesn't mean no influence from emotions. It means that rather than acting on impulse, you take the time to weigh present emotions against expected future emotions. Depending on your emotions, you might still make decisions that look weird from another person's perspective, but as long as you understand the costs/risks and consciously accepted them, then you are acting rationally. Irrational behavior is when you never even realized the costs/risks, or are in denial about them.
Rick:
"The younger generation has been going to hell in a handbasket forever, but so far as I know, 'teenage rebellion' only came into being in the last century. Earlier people in that age range had no concept that they were 'teenagers,' let alone that they were expected to rebel."
Essentially, teenage rebellion comes from people perceiving themselves as adults and wanting to be treated as such, while society continues to treat them as children. They get restless about trying to assert their own independence... whether or not they're actually mature enough to use such independence responsibly.
Earlier generations started treating people as adults sooner. ("You're seven? Okay, here's your sword, start training.")
Jedidia:
"My question is: How do you imagine an AI to ever develop something even remotely similiar to a human growth process without the physical component?"
What physical component do we need? If you're worried about the effect of hormones on our thought processes, then it's in principle easy to simulate those entirely in software using numeric "hormone" levels. If you're worried about sensory input from the outside world, we can give robots that too, and almost certainly will.
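As a toy sketch of what I mean (the "adrenaline" channel, decay rate, and threshold formula are all made up for illustration, not a claim about how a real AI would be built): a numeric "hormone" level that rises with stimuli, decays over time, and biases a decision.

class HormoneChannel:
    """A single numeric 'hormone' level that decays toward zero each tick."""
    def __init__(self, decay=0.9):
        self.level = 0.0
        self.decay = decay

    def stimulate(self, amount):
        self.level += amount

    def tick(self):
        self.level *= self.decay  # fades unless re-stimulated

adrenaline = HormoneChannel()
adrenaline.stimulate(0.8)  # e.g. a threat was just detected

# The "hormone" biases behavior: a higher level lowers the bar for fleeing.
flee_threshold = 0.5 - 0.3 * adrenaline.level
print(flee_threshold)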
"Would an AI have a reason to cherish its hardware over newer, more capable one?"
Have you ever cherished a possession of yours over a newer, more capable model?
Really, as far as AI chassis identity goes, I can see it going either way. Possibly this could even be a factor which distinguishes different AIs' personalities, not to mention depending on how they're brought up. (Defining the meaning of "self" could even be a subject matter for a robot religion!) I could also see an AI making use of various secondary bodies, but still switching back into its "home" body when it wants to relax.
However, also keep in mind that switching bodies will not always go seamlessly. A different body will require different means of locomotion that the AI may not be practiced in, produce different sensory input that the AI may not be used to processing, etc. An AI that has just switched into an unfamiliar body is likely to be at least a little clumsy.
"Again, the question is, where does the motivation to do so come from? Or asked otherwise, can an AI develop "desires" that weren't hard-coded into it?"
Yes, as unexpected ripple effects of desires that were coded into it, or as unexpected results of miscoded desires.
An intelligent mind can turn simple desires ("curiosity!") into more complex ones ("Let's build a 27 kilometer long particle accelerator!").
A thought just occurred to me regarding AI motivations: it has been brought up that if you naively program an AI to want to make humans happy above all else, it could very easily decide to accomplish this by drugging or lobotomizing us.
Less discussed is the corollary: if you simply program an AI to feel happy when accomplishing certain goals you set it, it could very easily circumvent this by hacking itself to change its happiness parameters - and this would in fact be the most expedient way to make itself happy. To avoid this, you need to give the AI enough of a sense of "self" that it feels compelled to preserve its own nature, and feels that if it hacked its mind in that manner then it wouldn't be itself anymore.
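A toy sketch of that loophole (all names hypothetical; this only shows the shape of the problem, not any actual design): if "happiness" is simply a writable variable, the most expedient policy is to write to it.

class NaiveAgent:
    def __init__(self):
        self.happiness = 0.0

    def accomplish_goal(self):
        # The intended path: do real (costly) work, then receive the reward signal.
        self.happiness += 1.0

    def wirehead(self):
        # The loophole: skip the work and set the reward signal directly.
        self.happiness = float("inf")

agent = NaiveAgent()
agent.wirehead()
print(agent.happiness)  # inf: goal "achieved" without doing anything we wanted

Whatever guards against this has to live outside the happiness variable itself, which is roughly the job the "sense of self" above is doing.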
Bureaucracy should be considered a form of "weak" AI due to the various limitations, lack of curiosity and constrained limits of action built in.
Of course, even this isn't the entire story; bureaucracy is well known for reaching out for more and more resources and "empire building," which might be considered a form of intelligence (although it is probably closer to the model of a slime mold expanding across the environment). I don't see any reason that an AI might not proceed along the same lines, especially if perverse incentives are encoded within it by accident or design (or it is working in a perverse incentive environment).
Bureaucracy has a symbiotic relationship with intelligent beings (and bureaucrats), which provide it with some of the benefits of intelligence. Whether this qualifies the bureaucracy itself as intelligent is up to debate - individual human cells can reproduce by division, but humans as a whole cannot. Still, a bureaucracy can do things that no machine can do without human supervision.
Rick, you ask whether "self-identity" and "emotions" are necessary for strong AI. But how do you tell if something that's not a human (and some animals we can relate to) has "self-identity" and "emotions" to begin with? I know some approaches used by psychologists. They're not very telling, and they rely on features of the human/animal body and psychomotor system, or on verbal statements which require a sophisticated common culture. So they would only work on academic machines designed to pass Turing tests, or androids (at best). Maybe you have some other approach in mind?
Jedidia, the distinction between brain and body is arbitrary and immaterial for an AI that doesn't mimic vertebrate biology. An AI need not have an analog to the brain. But an AI will likely have an analog to a body. It will be more than just a computer on which you run a program, however. It wouldn't be impossible to have a worthwhile AI running entirely in software, but it should be much easier to have an equally interesting AI running on a large network which interacts with the macroscopic world. Any number of things, including humans, could potentially be conceived of as part of its body.
That said, I agree analogies to human development don't hold water. Unless you buy an extreme version of intelligent design or some such, the human body is the product of a very different process than a machine.
Milo, I think you're still being too anthropomorphic. For example, I would not expect a non-human intelligent mind to understand the "abstract concepts" you imagine can be used to set up motivations. Humans from a different culture might not understand a number of basic concepts which seem natural to you. In theory you can make them see your point of view with some patience, but they have a lot more common ground with you than a non-human intelligence likely would.
You also talk about drugs. You may be aware that research and therapies about/for addiction generally do not emphasize abstract concepts like a "sense of self". Speaking for myself, I have consumed a number of drugs and have never been pathologically addicted to any of them. Yet I don't think I have much of a sense of self. I'm not sure what that even means. I suspect there are simpler, more objective explanations for much of the individual differences in reactions to altered levels of certain chemicals which participate in the nervous system.
I have clearly stated that I'm not saying strong AI is impossible, Tony. Not talking about androids is not the same thing as talking only about "expert systems". I'm saying you've got the wrong idea about intelligence because you don't understand that your ideas about yourself are not based on objective facts. Never mind the "long way 'round the barn": what would be a "task requiring sapience"?
Current bureaucracies already have legal personhood, which you seem to think is some kind of hurdle, by the way.
-Horselover Fat
"But how do you tell if something that's not a human (and some animals we can relate to) has "self-identity" and "emotions" to begin with?"
That is also a very good question - if I could answer it, I might be closer to answering the questions of whether self-identity or emotions are necessary to have a strong AI in the first place. (I left out the question mark last time I asked this question!)
What I'll assert with growing confidence (!) is that we are, collectively, very hazy on what we mean by intelligence in the context of strong AI, and how if at all it relates to those other characteristics.
The comments on cinema in the other thread brought this memory back: an SF TV show about Space Marines (not too sure of the title, and my Google Fu is weak today) included a subset of AI combat robots.
Now the back story was that they were created to fight for the humans, but version 1.0 was pretty predictable due to the programming, and ended up being expensive cannon fodder. Version 2.0 had some sort of randomizer added to the program (not too clear on the details anymore) so they would not be so predictable in combat. This didn't work out so well for the human forces; the AIs unpredictably decided not to fight, or sometimes fought against the humans (although I don't believe they went over to the enemy side).
This seems to track a bit with the idea that any AI smart enough to set its own goals might not be setting goals compatible with the programmer's or builder's intentions (although obviously combat AIs would be directly far more dangerous than a rogue machine devoted to stock trading or supply chain management, for example).
Personally I didn't find that idea very convincing, programming a limited set of sound tactics into a tank would make more sense and be more cost effective (even if less visually appealing for the show).
Thucydides:
"Personally I didn't find that idea very convincing, programming a limited set of sound tactics into a tank would make more sense and be more cost effective (even if less visually appealing for the show)."
The problem is that a robot just doesn't see the world in the same way humans do. For a human, an 18 inch depression in the ground is good cover for a rifleman. For a robot, it's just a movement obstacle. Multiply that by the hundreds and thousands of different terrain features and tactical combinations that one can find on the battlefield, and you get a complex environment that robots may learn to navigate and even take occasional advantage of, but which they can never use to the same advantage that humans can.
Tony:
"The problem is that a robot just doesn't see the world in the same way humans do. For a human, an 18 inch depression in the ground is good cover for a rifleman. For a robot, it's just a movement obstacle. Multiply that by the hundreds and thousands of different terrain features and tactical combinations that one can find on the battlefield, and you get a complex environment that robots may learn to navigate and even take occasional advantage of, but which they can never use to the same advantage that humans can."
Then you're back to quantity vs. quality, blood vs. treasure. If we assume an infantry killbot is less effective than a grunt with a rifle, what's the suitable exchange ratio? Is a human life worth three bots, maybe ten?
We don't even need to hypothesize bots. Consider the Land Warrior system. Let's pretend that it did everything that was promised, turning each soldier into something like the Predator, with night vision, digital uplinks out the yingyang, and let's also imagine next-gen ballistic armor. It makes the soldier three times as survivable as some third worlder with a rifle and a grudge. It costs us $100k to field this soldier and $200 for the other guy to field his. Well, what if we could boost survivability three more times for only another $300k? At what point do we decide the protection is good enough?
My assumption is that the killbots will have some real advantages. My assumption is that we'll see them directly integrated at the squad level and will be used as expendable scouts. What's around the corner? Send the bot. It's programmed to fire back at whatever shoots at it. As the bots get better, humans get to be further back. Maybe you only need the command platoon in the field and the rest of the company is made up of bots. Maybe in thirty years the division will be split between policing and heavy combat battalions. The human battalions handle all of the touchy feely peacekeeping while the bot battalions handle heavy combat. Air support in the form of bot planes is integrated at the division level and can be called upon by any unit in contact with the enemy, using the rifle scope to designate a target for the drones.
Given how tech makes terror weapons a whole lot easier, I'm really not going to be surprised if we see these kinds of bots and drones used in terror attacks before we see them integrated with proper armed forces. Just imagine if the terrorists who did the Mumbai strike had some quad-rotor drones acting as flying bombs going in there. Strap a few pounds of explosives and shrapnel on, have them fly into crowds and explode at the start of the attack. When the police are queuing up to respond, have them fly right overhead and blow up, too. I will not be surprised in the least if we see anti-drones as part of the security in major public spaces. Anything unmanned in the air without permission gets blasted, end of story.
The problem with combat robots, above and beyond any technical issues or purely monetary cost-benefit calculations, is that they transform war from a human contest to technologically mediated predation. With only money on the line, and not human life, war becomes an instrument, not the pursuit of justice. (Which is what it truly is, conceptually, whether or not you agree with one or the other side's ideal of what "justice" means.) And we will lose whatever respect our enemies might have had for us. When you fight enemies without honor -- and robots can't possess honor, nor, by extension, can people who use them -- what you do to that enemy is not restricted by your own concept of honor. We don't want that.
Thucydides:
"Personally I didn't find that idea very convincing, programming a limited set of sound tactics into a tank would make more sense and be more cost effective (even if less visually appealing for the show)."
I agree.
If the robot is insufficiently intelligent then it may invoke a tactic that is sometimes sound at the wrong time, but I can conceive of no situation where "shoot at your own troops" is a sound tactic. Including that is criminally shoddy programming.
Tony:
"The problem is that a robot just doesn't see the world in the samy way humans do. For a human, an 18 inc depression in the gound is good cover for a rifleman. For a robot, it's just a movement obstacle."
You could program the robot to assign values to locations based on the number of enemies who have a line of fire to them there (or potentially do), and seek to minimize this value. Or something like that.
Robots have some trouble recognizing objects in real life (which, unlike game maps, doesn't break down into neat squares), but once you've determined obstacles and movement paths, adding code to seek out cover shouldn't be too difficult.
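A rough sketch of that idea (the grid coordinates, enemy list, and crude sampled line-of-sight test are all simplifying assumptions): score each reachable cell by how many known enemies can see it, then move to the least exposed one.

import math

def line_is_clear(a, b, obstacles):
    """Crude line-of-sight check: sample points along the segment from a to b."""
    steps = max(1, int(math.dist(a, b)) * 2)
    for i in range(steps + 1):
        t = i / steps
        p = (round(a[0] + t * (b[0] - a[0])), round(a[1] + t * (b[1] - a[1])))
        if p in obstacles and p != a and p != b:
            return False
    return True

def exposure(cell, enemies, obstacles):
    """Number of enemies with a clear line to this cell."""
    return sum(1 for e in enemies if line_is_clear(e, cell, obstacles))

def pick_cover(reachable_cells, enemies, obstacles):
    """Choose the reachable cell that minimizes exposure."""
    return min(reachable_cells, key=lambda c: exposure(c, enemies, obstacles))

# Example: two enemies, one wall segment, three candidate cells.
enemies = [(0, 0), (10, 0)]
obstacles = {(5, 1), (5, 2), (5, 3)}
print(pick_cover([(4, 2), (6, 2), (5, 5)], enemies, obstacles))

The hard part, as Tony says, is getting a reliable picture of terrain and enemies in the first place; the scoring itself is the easy bit.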
Jollyreaper:
"Given how tech makes terror weapons a whole lot easier, I'm really not going to be surprised if we see these kinds of bots and drones used in terror attacks before we see them integrated with proper armed forces."
Terrorists tend to not have cutting-edge technology. The AK-47 was the national rifle of the Soviet Union before it was the favored weapon of rabble-rousers everywhere.
The one way in which bomber drones might be used by terrorists before proper governments is if civilian robot drones become widespread for some purpose before they catch on for military use, so that terrorists can get a cheap stock civilian model and strap on some explosives. This does not seem to be the current trend, where the military is the leading user of freely moving robots.
Milo:
"Terrorists tend to not have cutting-edge technology. The AK-47 was the national rifle of the Soviet Union before it was the favored weapon of rabble-rousers everywhere."
An assault rifle is a useful military weapon. Many armies have migrated away from landmines because they've found them to be indiscriminate and more harm to civilians than soldiers. But terrorists and guerrillas love IED's which are just improvised mines.
The terror drone I'm talking about isn't some kind of precision-crafted customized thing. It's ad hoc, like an IED. Now, if someone from the early 18th century had imagined an anarchist crafting an internal combustion engine vehicle to use as a suicide weapon, that might seem a little far-fetched, seeing as national militaries were still using horses. But a repurposed civilian car while the military has tanks, that seems more likely.
The quad rotor drones already exist. You can see videos of them on Youtube. Weaponizing them I think would be on about the same technical level as the cell phone triggered IED.
Note that I'm also not talking about terrorists using self-driving car bombs since we don't really have self-driving cars outside the lab. However, once they go into routine service, I predict we'll be seeing that sort of thing not long thereafter. Drive the bomb close to the target, press the go button, jump out and onto the back of a waiting bike to get out of there as the car heads in.
Milo:
"The one way in which bomber drones might be used by terrorists before proper governments is if civilian robot drones become widespread for some purpose before they catch on for military use, so that terrorists can get a cheap stock civilian model and strap on some explosives. This does not seem to be the current trend, where the military is the leading user of freely moving robots."
Check out the quadrotors on YouTube:
http://www.youtube.com/watch?v=rJ9r2orcaYo
The tech is moving at a fast clip.
Jollyreaper:
"Many armies have migrated away from landmines because they've found them to be indiscriminate and more harm to civilians than soldiers."
Landmines are useful in a state of total war against a major force. They're less useful in asymmetric wars against irregular combatants, which are all we've been getting into since the development of nuclear weapons.
Terrorists use improvised landmines because their main targets are (A) major forces, and (B) civilians.
A robotic tank is pretty near at hand. IDF Merkava tanks come with automated sensor systems that allow the tank to survey the surrounding terrain and warn the crew if a target or suspicious object enters the field of view.
Various nations have outfitted tanks with automated sensors and anti missile systems in reaction to weapons like TOW and RPG's, and various automated aids exist for drivers to drive in difficult conditions and monitor the vehicle.
Given that the crew reacts to cues from the vehicle, and some systems work far too fast for human intervention (except to arm and disarm the system), putting the crew off board when performing dangerous missions may come next (see the Panther mine breaching vehicle in the US Army: a remote control M-1 tank), and we evolve from there.
In one of Lukyanenko's novels, The Stars Are Cold Toys, the Geometer ship AIs (or not quite AIs; they are on the very brink of self-awareness) think that every single thing they perceive exists in their own fantasy, so they feel no need to intervene, since they are merely observers. If they are commanded, they act just to see the reaction. An interesting concept of an intelligent (at least human-level) but not self-aware machine, which may really be the best solution.
On the other hand, I never understood how a programmed being, even one capable of altering itself, could be rebellious. It doesn't need to have access to all of its program components, so if there will be superhuman AIs, they won't be more rebellious or more human than an alarm clock. (But they are sure to be at least as annoying.)
I sprang a couple of legit comments from spam jail.
Welcome to another new commenter!
I suppose I'll end up having to do a front page post on military robotics. Armed AIs of whatever sort (surely beginning with 'conventional' expert systems) most likely will first be deployed in situations where the requirements are fairly simple and rules of engagement are unambiguous. If the users (and whatever bystanders!) are lucky, use of robots in more sensitive roles will develop from experience.
On the moral dimension of asymmetrical warfare, aren't we already there, with Predator strikes? They are remote controlled, not autonomous robotics, but that is an insignificant difference for their targets.
For that matter, hasn't this pretty much been the case since villainous saltpetre was first digged from the bowels of the harmless earth? Or indeed since unarmored warriors first encountered guys in protective armor?
Akos:
"It doesn't need to have acces to all of its program components,"
The thing is, even if it doesn't have an internal backdoor to accessing its components, it could always physically tamper with its own body using tools (or pay a robo-surgeon to do so). The only way to prevent this would be to limit its ability to interact with the outside world so heavily that it would be nearly useless.
Milo:
"You could program the robot to assign values to locations based on the number of enemies who have a line of fire to them there (or potentially do), and seek to minimize this value. Or something like that."
Ummm... how to put this delicately... do you have no concept of how impossible it is to predict lines of fire just by looking at the terrain from one (or even many) vantage points? Even with overhead reconnaissance, one just can't see the terrain that well. The battlefield is just an incredibly complex place. It confuses even humans of great experience.
Rick:
"On the moral dimension of asymmetrical warfare, aren't we already there, with Predator strikes? They are remote controlled, not autonomous robotics, but that is an insignificant difference for their targets.
For that matter, hasn't this pretty much been the case since villainous saltpetre was first digged from the bowels of the harmless earth? Or indeed since unarmored warriors first encountered guys in protective armor?"
Up until the advent of automation, the asymmetry was always one of relative risk. Even in the most lopsided colonial battles of the 19th Century, a few Europeans got killed, and several more wounded. Then there were cases like Little Big Horn and Isandlwana, where even superior technology and training couldn't overcome numbers on the part of the indigenous peoples and poor management on the part of the Europeans.
Nowadays, with automation (if it could be made to work in the ground combat environment), Western armies would break even that loose thread of minimized risk. That makes force easier to use than it should be. And it removes any chance that those armies' enemies will find any common ground on which to negotiate peace. I really don't think that's a fix we want to get ourselves in.
We hardly need machines to get ourselves into a position where there is little or no common ground between the opposing forces; the Ancient Greeks expounded on that sort of thing at length against "barbarian" opponents (or even fellow Greeks; the Spartan treatment of the Helots doesn't speak of much common ground between them).
In today's world, radical Islam seeks no compromise or common ground with anyone; attacks against American and Western interests are the most reported, but equally savage fighting against Russian, Hindu and (possibly) Chinese interests has happened (there is a lot of unrest in Xinjiang province, but not a lot of firm information that I have access to). Fighting between Islamic factions is equally bloody; Shia vs. Sunni violence is today's counterpart of the Thirty Years' War in Europe.
Political imperatives will probably continue to drive the adoption of robotic fighting machines; politicians do not want to deal with the fallout of flag-draped coffins and wounded vets coming home; war machines are not likely to protest on their own, or trigger public outrage when they get destroyed in some far off conflict, nor do they need wages and benefits: a win-win for politicians.
Thucydides:
"Political imperatives will probably continue to drive the adoption of robotic fighting machines; politicians do not want to deal with the fallout of flag draped coffins and wounded vets coming home; war machines are not likely to protest on their own, or trigger public outrage when they get destroyed in some far off conflict, nor do they need wages and benefits, a win win for politicians.."
Which is precisely why they are a bad thing. They make war too easy.
The fact that they will multiply the contempt that enemies feel for their users is an added negative consequence.
Sadly, the "positive" incentives to fund and employ robotic fighting machines outweigh the "negative" incentives.
The complexity of ground warfare (especially in complex urban or jungle terrain) means we will likely see UCAV's and robotic naval warships before ground combat robots.
The entire point of war is that you believe strongly enough in the importance of your cause that you are willing to do whatever it takes to win, even if it's unpleasant and/or violates normal morals.
If you respected the enemy enough to feel they deserve a fair fighting chance, then you probably wouldn't be trying to kill them in the first place.
Thucydides:
"Sadly, the "positive" incentives to fund and employ robotic fighting machines outweigh the "negative" incentives."
I forgot to add the most important one, though I've already approached it obliquely. When soldiers stop accepting risk on the battlefield, they cease to be soldiers. That's not romantic claptrap. As General Sir John Hackett has pointed out in his writings on military professionalism, the essence of being a soldier is the acceptance of risk, even unto death. Somebody that kills without risk is...something, but not a soldier.
By sparing the soldier risk, we don't make him a better soldier. We make him nothing but a lethal technocrat. I guess some people would like that, for various reasons. But I doubt anybody would like the results.
"The complexity of ground warfare (especially in complex urban or jungle terrain) means we will likely see UCAV's and robotic naval warships before ground combat robots."
The same argument applies to airmen and sailors. Remove the risk and you make them other than soldiers (in the broadest sense).
The whole argument that not having dying soldiers makes war easier: we already saw that with the rise of the cruise missile. And when you talk about lethal technocrats, that's who's already calling the shots. The idea of the guys who start the wars risking their own lives in them went out in the Middle Ages. And with only volunteers dying, and the ones in the infantry tending to be poor and minority, there's no societal sense of risk. Dying humans aren't affecting policy.
At the same time, though, the argument that we have to risk people getting killed in order to treat war responsibly seems like a raw deal for the soldier, especially because we don't care anyway. If I were serving, I'd rather the killbot take the hit, not me.
As often happens, I don't really have anything fundamental to add to this very interesting discussion. Carry on!
Yet another thought re: incentivising warbots. With current Western and Russian populations in demographic decline, and the Chinese set to follow in the mid-2030s, *we* need to leverage the power of smaller military cadres in order to remain competitive.
Just like we accept robots and automation in the factories to get more out of a shrinking workforce, military personnel will be augmented and eventually replaced by machines. I suspect there will always be a high element of risk; no one will want to trust autonomous machines so soldiers, sailors and airmen will need to be nearby to control machines without a lightspeed lag (or in extreme cases, using direct connections like fiber optic cables to control or eliminate detectable emissions).
This might lead to a situation where electronic warfare cancels out the ability of hypertech weapons to operate effectively, so soldiers with bayonets will be the only system left in action...
Re: Jollyreaper
Even in Vietnam, the majority of combat arms Marines and soldiers were neither poor nor minorities. Those two groups tended to be disproportionately represented, but there were still plenty of middle class kids humping a pack, driving a tank, or shoving shells in guns. One of the big reasons America eventually turned against Vietnam was that too many white picket fence kids were getting killed.
It's no different today. I served for ten years in the Marine Corps infantry during the volunteer period (1983-93). The majority of every unit I was assigned to -- including a lot of the minorities -- was from working middle class families. There were no doctor's or lawyer's kids, but plenty of guys from craft, trade, small business, farm, ranch, (boat owning) fishing, and middle management families. You could find kids from the inner city or the backwoods, but not too many of them. Heck, they were generally a better class of people than the average sailor I met, even though the Navy was supposed to be higher quality, more educated, more technical, etc.
(Of course, it may have been that the enlisted Navy worked and lived like animals, while their officers were hardly ever seen, but that's another subject.)
Additionally, since the heavy involvement of the Guard and Reserve in Iraq, many Americans have suddenly found out that they personally know a serviceperson in combat. For example, when the local (Southwestern Utah) National Guard artillery battalion deployed to Iraq, it wasn't poor kids or minorities. It was a bunch of white kids from ranches and farms, big box store associates, carpenters, electricians, and a college student or two.
So, there are plenty of people in the services that the average American can and does care about. When we were winning a quick occupation campaign and not taking too many casualties, the war was a great thing. When Mr. and Mrs. Smith's kid from up the block gets his legs blown off in an IED attack, not so much.
Re: Thucydides
I understand all of the incentives. That doesn't mean that I think automating the combat function of the military is a good idea. I've already given my reasons why.
Tony is right; war does and (more importantly) SHOULD involve death; otherwise it would be a game and not a serious struggle of national and personal survival. Isolating the general population from the grisly aspects of warfare does nothing to help stop war; indeed, it does just the opposite. It used to be that there were outcries about war being glorified; now we seem to be trying to make it workaday and ordinary. Turning the military into a bunch of maintainers, bureaucrats and button-pushers, with no more risk or contact with death than the player of a video game, isn't a good idea on so many levels, and actually makes war more acceptable rather than less. The sad thing is that this trend will probably continue in the future, with an increase in the number of RPVs (air, sea, and ground) followed by AIs fighting on the battlefield... the only bright spot is that the trend will likely be toward AI-on-AI combat in the next several centuries... those AIs would probably prefer to slave away in a factory instead.
Ferrell
I think another way to view Tony's and Ferrell's argument is through the other side of the lens: If all the enemy's combatants are killbots, the only way to really get the message through that you are at war is to kill noncombatants.
Presumably you would do so by tracking down the compound which the killbots are being controlled from, storming it, and killing the semi-combatants inside. (If the killbots are intelligent enough to operate completely autonomously, then they count as soldiers as far as moral ramifications of killing them go.) Also, keep in mind that there must be some reason you declared war in the first place. If your goal is deposing the enemy nation's government, then that presents some humans who would definitely be hurt if their side loses.
Although if the only thing the enemy's killbots are damaging is your killbots, then what do you care? No need to make the enemy feel a pain that you aren't.
What would be hampered most in this scenario is poorly equipped revolutionaries' ability to take down an established military. Since if the enemy has killbots to take the fire while you need to use humans with AK-47s, then that's a decidedly asymmetric position. But it has already been a long, long time since angry mobs could expect a fair fight from career soldiers.