Friday, January 21, 2011

What Do AIs Want?

A Cylon
Freud did not ask that question, but my commenters have, in the last thread, unconstrained by the post having nothing to do with artificial intelligence. (Note: This is a feature, not a bug.)

I have argued here before that there is no particular reason to expect 'strong' AI in the midfuture, mainly because we know very little about how human thought works, except that it is very different from the way computers-as-we-know-them work. On the other hand, this is no argument against the possibility or plausibility of strong AI, only that it is outside the scope of what we can currently extrapolate.

So. Suppose that we do develop high level AI, as in smarter than us. Set aside all specific technical questions about how this might be implemented - silicon, bio, some other tech, 'pure thought,' or whatever. What will be its motivations? And what, as a consequence, will be its relations with us?

SF has given considerable thought to the questions of AI-human relations over the years. Following are some of the answers that have been offered:


1. It will be a murderous psychopath. This is the oldest theme, going back to Mary Shelley in the predawn of SF. Truth be told I've never read Frankenstein; or, The Modern Prometheus (have you?), and his creation may have comprehensible motives. But in the popular tradition it spawned, the AI's own motives are absent or irrelevant; it is the AI's Faustian creator who opened Doors Man Was Not Meant To Open.

2. It will be driven insane by human wickedness and folly. Offhand I can only think of one example of this theme, but an important one: HAL 9000, whose breakdown results from human secrecy and hypocrisy due ultimately to the Cold War.

3. It will be murderous, albeit for understandable reasons. The original robots, in Karel Capek's play R.U.R., were essentially slaves doing the Spartacus thing. (Capek's robots were not quite robots in the modern sense, but this does not affect the argument.) In cultural terms this is the 20th century intellectual's counterpart of #1, giving the horror theme the added frisson of hinting that we had it coming to us.

4. It will be murderous, for understandable - but paranoid - reasons: getting us before we get it. I can't think of literary examples, but I probably just missed them or can't bring them to mind. In any case this theme appears in the comment thread to the last post. The AI, motivated by self preservation, will eliminate us as a potential threat.


All of these themes have obvious literary appeal: Killer Robots, 'nuff said. In Hollywood's hands they all drift toward #1, because killer robots are not an invitation to complex plot exposition. But there is one further, subtle variation:

5. It will never exist. We will eliminate it preclusively, before it can eliminate us, by refusing to invent it.


Now for the non-killer-robot themes:

6. It will be benign, due to the Three Laws of Robotics. These of course were Isaac Asimov's response to the prevalence of killer robots in the early Golden Age, and the Three Laws practically killed off killer robots in mainstream SF - though not, per above, in Hollywood.

7. It will gently enslave us for the best of motives, again due to the Three Laws of Robotics. As Asimov himself realized, 'To Serve Man' can raise some complex and ambiguous issues. (Even aside from the culinary dimension.) For sufficiently high level AIs this process can be summarized as: 'Is there a God?' 'Now there is.'

8. It will serenely ignore us, as irrelevant to its concerns.

9. It will ignore us, but neither serenely nor with any other emotion. It is after all a machine, with all the emotion and motivation of a lawn mower.


What have I left off this list? Most of these possibilities have any number of variations, many of which blur together - how do we classify Robocop, or his cousin, the defensive robo-fortress that engages and destroys every conceivable threat, including its builders?

And this whole discussion blurs into the question of whether, and to what degree, 'intelligence,' robotic or alien, will or might go hand in hand with anything that we would call personality. To take one particular example from the previous discussion thread, I'm not sure that an AI would have any concern - let alone 'instinct' - for self preservation. We have it because we evolved that way, but AIs don't arise through natural selection. (Though they are arguably subject to it once created.)

And there is one familiar AI that I don't know how to classify, Mycroft in Heinlein's The Moon is a Harsh Mistress. I didn't care for the book, so I don't remember much of it. (If you gotta message, use Western Union; not to mention the obvious impostor claiming to be Hazel Stone.) But I do remember Mycroft. So far as I can recall its motivation was primarily to have interesting conversations, which strikes me as not implausible.


Discuss (further).




Related posts: A PoMo reflection on Asimov, and thoughts on aliens.



The image of a Cylon was swiped from this blog post.

253 comments:

Thucydides said...

Rick has it right, and Al Qaeda, Hamas, Hezbollah, the Liberation Tigers of Tamil Eelam, the Taliban, FARC, and so many others have also figured that out and put it into action.

From a practical point of view, insurgents with AK-47's will be ground up when they go head to head with Western combat arms. This has been true for about a century and a half (think of the poem:
"Whatever happens, we have got
The Maxim gun, and they have not")

For a more modern take read up on "OPERATION MEDUSA" in 2006, where the Canadian battlegroup moving into their sector to secure the area for reconstruction work was met by a huge Taliban force determined to take the area first.

OTOH, the locals, schoolteachers, children and pilgrims coming to worship don't fight back very effectively, and their losses can break the will of the remainder of the population to resist or cooperate with the government forces or the "Stability" forces.

Perhaps some form of Sepoy troops will evolve, backed by the awesome firepower of the distant Imperial forces (maybe even space power) as a way for the politicians to have their cake ("It's moral to be there. The 65th Sepoy Light Infantry suffered heavy casualties to protect the children of the district, and we must stand with them!") and eat it too ("The 10th Aerospace Wing will prepare to deorbit KKV's in support of the Sepoy Brigade's clearing operations in the province").

The armies of the Raj and various native troops used by armies throughout history may well be our guides.

Tony said...

Thucydides:

"OTOH, the locals, schoolteachers, children and pilgrims coming to worship don't fight back very effectively, and their losses can break the will of the remainder of the population to resist or cooperate with the government forces or the "Stability" forces."

"Who gives a shit? Thet're just fucking wogs."

This was bad enough when the colonial forces were people, whom the locals could at least kill a few of. If they were just killbots roaming the land...

(Please excuse the language, but I think it necessary to make the point.)

"The armies of the Raj and various native troops used by armies throughout history may well be our guides"

The armies of the Raj included a considerable proportion of British battalions, squadrons, and batteries. The British public had just as little patience for high casualty adventures in Afghanistan as Americans do today.

Thucydides said...

I had looked at this notion as a background "universe" (which ended up being very bleak).

Stage one: the future United States actively becomes an Empire, sending Consuls to collect taxes from foreign lands to pay for stability operations and other social, economic and political steps taken to "aid" these countries (in the interests of the United States, of course).

Stage two comes about as the military resources of the United States are reaching the breaking point. An "American Legion" is formed, allowing non citizens to join with the promise of American citizenship at the end of the enlistment (which would be something like 10 or 20 years).

The remaining US forces become aligned for the "door smashing" role to allow entry of American Legion forces to do the pacification/occupation roles, and so evolve in the fashion we have been discussing. This emphasizes the Navy, Air Force and Marines, empowered with lots of high tech tools.

Milo would find it difficult to attack the operators of the US warbots; UCAVs for example would be controlled from the back seat of a large F-15E/F-22 class fighter to ensure the operator had the best possible chance of survival. Naval forces would have similar protection, while dealing with Marines might be more like fighting "Starship Troopers" in power armour, supported by robotic fire support platforms.

The opposing soldiers and civilians would not fare well in this universe, either during the door smashing phase, or afterwards when the Legion moves in. Needless to say, this wasn't a very attractive thought experiment or setting for writing...

Tony said...

Re: targeting warbot operators

Armed UAVs are already being controlled from office buildings on CONUS bases. Currently they're on fairly well access-controlled reservations on the outskirts of metropolitan areas. It might not always be that way. Imagine BGs attacking military facilities in the middle of cities, like Ft. Monmouth, in Oceanport, NJ, or Naval Base San Diego, just a couple miles south of downtown.

jollyreaper said...

The assumption I had operated on for a while is that in the mid-future, warbots might make sense for all-out combat but the peacekeeping would fall upon living soldiers. ED-209 is great on the open battlefield, but the last thing you need is for a kid with a toy gun to get the "Drop your weapon, you have ten seconds to comply" treatment.

But you know, that might only hold true for the next fifty years. The thing I keep coming back to is that a live cop or live peacekeeper swore an oath to serve his country and protect the public, but it's not a suicide pact. If a police officer is in fear for his life, he's allowed to shoot. And oftentimes that can lead to tragedy. We talk about how killbots will make it easier to get into wars, but they also make it easier not to kill people. If I'm trying to keep the peace in a hostile area, I'm willing to trade the cost of damaged or destroyed killbots to avoid accidentally murdering innocent people.

Of course, that can also lead to a kind of soft imperialism. Let's say killbots are the military version and tacklebots are the police version. Got a swarm of protesters coming in, the tacklebots go to work and start herding people, picking some at random to tackle and truss up for arrest. A tacklebot won't need to beat anyone with a nightstick. Maybe it'll look like a mass of soft, cushy tentacles, cartwheeling towards the protesters and then wrapping around their limbs, cushioning the fall to the ground as they're restrained.

The democratic danger here is that to put down something like Egypt with current technology, it's gonna look like Tiananmen Square. China can get away with that but evidently the high mucky-mucks in Egypt didn't think they could. But if they could break up demonstrations without killing people...

It's a bleak universe but worth exploring.

Milo said...

Jollyreaper:

"The democratic danger here is that to put down something like Egypt with current technology, it's gonna look like Tienanmen Square. China can get away with that but evidently the high mucky-mucks in Egypt didn't think they could. But if they could break up demonstrations without killing people..."

Something similar is seen in the seemingly paradoxical opposition to nonlethal weapons like tasers. The spirit of "Give me liberty or give me death!" is still alive and well today, and so we fear any technology that would allow a government to deny us both freedom and death, without giving us much of a chance to fight back.

A lot of people complain about death in combat, but the truth is there are things we fear more than death...

Anonymous said...

Future war with killbots; so the first side willing and able to lay down a firestorm in the enemy capital wins? So it's either a live action video game or a war of extermination? You're right, that is a bleak future.

Ferrell

Unknown said...

It would be an interesting cultural experiment. Many cultures are obsessed with the purpose of existence: Achilles' dilemma in the Iliad, the concept of Dharma in Hinduism/Buddhism, postmodernism and its focus on constructed realities, the collected works of Rousseau, and the Bible all struggle with the idea of what it means to be a person. AIs would be, presumably, created for a purpose, to perform a set task. Their persona would be shaped by their purpose, which is concrete. What happens when these two paradigms interact?

Rick said...

Welcome to a new commenter!

In some cases of warfare one side could achieve its objective by destroying the other's killbots, without ever resorting to actual violence against people - the, ah, sobering effect of knowing that your killbot defenders have been blasted could be substantial. But if the loser is 'unreasonable,' then the winner may feel pushed into being unreasonable as well.

A very interesting question about purpose and persona. That gets to the heart of the rather tangled discussion of when an AI ceases to be simply a projection of its builder's intelligence, and becomes an intelligence in its own right.

I'm reminded of a line in Clarke's The City and the Stars, in which the Central Computer of Diaspar says that to explain something would "reveal my maker's purpose, and therefore nullify it." Is such loyalty to a builder consistent with 'true' intelligence, or not? At some point we may find out.

On the other hand, no negative handwave is needed to avoid these issues, because so far as I can tell, human-equivalent level AI is not something we can extrapolate from current tech. Even without a decelerando, no one can yet say whether we are 10 years or 1000 years away from developing it.

Anonymous said...

Rick:"I'm reminded of a line in Clarke's The City and the Stars, in which the Central Computer of Diaspar says that to explain something would be "reveal my maker's purpose, and therefore nullify it." Is such loyalty to a builder consistent with 'true' intelligence, or not? At some point we may find out."

Unless the AI in question has developed intelligence to the level that it has religious beliefs.

Ferrell

Thucydides said...

I remember that Jerry Pournelle characterized the Navy and Marines as the sort of forces a Republic would need, while standing armies are the tools and mark of an Empire.

Warbots and other robotic tools to enhance the fighting abilities of existing troops (or allow smaller numbers of troops to achieve the same effects) could inadvertently support the Republican side, since they would negate the need for large standing armies and allow the politicians to reduce military spending.

Tony said...

Thucydides:

"I remember that Jerry Pournelle characterized the NAvy and Marines as the sort of forces a Republic would need, while standing armies are the tools and mark of an Empire."

He was talking about the US republic, with relatively weak land neighbors and dependence on the oceans for major trade routes. The US doesn't need a large army except for major wars or empire. A landlocked country or a continental power with strong neighbors and relatively small overseas trade would have a requirement for a large army, regardless of its governmental form or its imperial ambitions.

"Warbots and other robotic tools to enhance the fighting abilities of existing troops (or allow smaller numbers of troops to achieve the same effects) could inadvertently support the Republican side, since they would negate the need for large standing armies and allow the politicians to reduce military spending"

Warbots employed by any state would tend to reduce the political cost of using war as a tool of policy. And that would tend towards force being a much more likely solution to problems. And then, in a fit of absentmindedness...

Rick said...

Pournelle may have been cribbing from Jefferson, deliberately or otherwise.

Navies and marines strike me as characteristic forces of maritime powers - including maritime empires.

There is an argument, which I've made at times in the past, that maritime powers, even empires, tend to be more liberal (in some sense) than land-power counterparts. This question has substantial relevance in space-operatic settings, and it probably deserves a front page post here.

jollyreaper said...


There is an argument, which I've made at times in the past, that maritime powers, even empires, tend to be more liberal (in some sense) than land-power counterparts. This question has substantial relevance in space-operatic settings, and it probably deserves a front page post here.


Ah, and here we go again with the trouble of the space word for maritime. I just can't find one I like yet. That might actually be a good thread topic in and of itself. What's the space equivalent of a seaman? Spaceman. But that word's been ruined by the pulps. Cosmic might be a good analog for maritime. Celestial sounds nice but doesn't quite ring true. Celestial powers, celestial law? Cosmic powers, cosmic law? Cosmic powers sounds too much like Dr. Strange.

But to the point that made me hit post, you have continental and maritime powers on Earth but in space it would basically break down to planetary and space-faring powers. Planets could have a unified government, multiple sovereign nations such as on Earth, or the whole thing could be split up as colonial territories held by different spacefaring powers.

The amount of offworld trade supported by the interplanetary economy would indicate just how important access to space is. If there are civilized conventions that polities must adhere to, there could be a real limit on the level of force a spacefaring power is able to bring to bear against a planet, much in the same way that air power alone can seldom win a war. You can nuke someone's country to cinders from the air but you're denying it to them, not holding it for your own. The same would seem to be true for planets. Slag it from space, sure. Exploit it economically, you'll need forces on the ground.

So much of this depends on the assumptions made for settings. I'm still piecing together my ideas on this count.

Milo said...

I would just go for "solar law" or "interstellar law" or even "galactic law", depending on the scale of your setting.

As for the people on the spaceships, well, "astronaut" works fine for us right now, although people associate that with scientists and pioneers, not so much with space merchants or the like.

Tony said...

Rick:

"Navies and marines strike me as characteristic forces of maritime powers - including maritime empires."

Characteristic, in that they all have them. But they also have armies and air forces, because sometimes you just have to invade and occupy. Bombardment and coastal raids aren't always enough.

"There is an argument, which I've made at times in the past, that maritime powers, even empires, tend to be more liberal (in some sense) than land-power counterparts. This question has substantial relevance in space-operatic settings, and it probably deserves a front page post here."

Well, threats tend to be less immediate and the types of forces maintained in peacetime are best manned by volunteers. Coastal fortifications are pretty much limited to major harbors and need not be as extensive as land fortification systems. So the State, for defense purposes, requires more of money and less of manpower and property. This would naturally incentivize liberal social and business practices.

Thucydides said...

A few interesting historical points are bubbling up here:

Historically, when large numbers of citizens are empowered to fight in defense of the State, there is a flowering of a more liberal culture. Ancient Athens became a sort of hyper democracy when the oarsmen of the Navy became eligible citizens (prior to that the Polis limited citizens' rights to those who had the wealth and ability to buy the armour and stand in ranks with the rest of the Hoplites. Oarsmen outnumbered Athenian Hoplites by a considerable margin). We can see similar evolutions elsewhere, such as the United Provinces and Revolutionary America.

A small military (either in absolute size or because recruiting is restrictive for whatever reason) is not so conducive to liberalizing or democratizing the State; nor is an expeditionary force (the armies of the early Res Publica Roma were there to defend Rome; as they evolved to conquer other States, the Republic evolved in tandem, and probably not for the better).

Warbots reinforce the power of small militaries and make expeditionary forces easier to use, both negative indicators for political and social freedoms. Getting back to the topic of AIs, if they evolve from warbots and military control programs then they will not have much interest in human rights and privileges either.

jollyreaper said...

I like the idea of national service. Doesn't have to be military, I think it would be preferable if that were the smallest branch. But the idea is that it would be precisely the kind of work that is beyond the scope of business, that cannot be done for a profit but benefits the nation as a whole. It's the sort of thing that gives people ownership in society, a sense of belonging and involvement.

Now the thing I try to imagine with grandiose ideas is how they can either fall apart despite the best of intentions or be deliberately subverted by the robber-barons. There's a lot of room for graft and corruption here.

I honestly find myself of two minds regarding this sort of thing, consolidation vs. delegation. Too weak a central government and you get the articles of confederation, a gaggle of former colonies incapable of operating as a proper state. It's chaos. But too strong a central government and you make corruption all the easier. There's simply not as much money to steal at the local level but at the national level, holy crap! Billions wind up as rounding errors.

The simple prescription is that power should devolve to the level where it's best enforced. Building codes should be set by locals who should be the most knowledgeable of local conditions. Law enforcement is handled at the city level. Authority should only be passed up to state or federal levels when it makes the most sense to do so. You keep the decision making at the lowest levels possible, you have more accountability.

Of course, that paragraph above states it all neat and clean but how will we agree on what's reasonable? I think we can agree that deciding foreign policy and military defense should be handled at the federal level. A common currency makes sense. Food standards? It makes sense for the states to pool resources in the form of the national government to fund research that will then be shared by all the states with enforcement happening at the state level. But you'll have business interests demanding there should be no inspections since it saves them a ton of money to ship tainted meat.

Tony said...

Thucydides:

"Historically, when large numbers of citizens are empowered to fight in defense of the State, there is a flowering of a more liberal culture. Ancient Athens became a sort of hyper democracy when the oarsmen of the Navy became eligible citizens (prior to that the Polis limited citizens rights to those who had the wealth and ability to buy the armour and stand in ranks with the rest of the Hoplites. Oarsmen outnumbered Athenian Hoplites by a considerable margin). We can see similar evolutions elsewhere, such as the United Provinces and Revolutionary America."

Both cart before the horse and confirmation bias going on here.

Cart before the horse: The rowers demanded and got representation after they had proven themselves valuable to the state's defense. The state didn't empower them to fight. They fought because they had no choice, and used their success to trade for political power.

Confirmation bias: You left out a whole class of state that each empowered millions to fight, but which could hardly be thought of as liberal -- Imperial and Soviet Russia, Nazi Germany, Austria-Hungary, Communist Asia in all its various forms, Imperial Japan...

"A small military (either in absolute size or because recruiting is restrictive for whatever reason) is not so conducive to liberalizing or democratizing the State; nor is an expeditionary force (the armies of the early Res Publica Roma were there to defend Rome, as they evolved to conquer other States the Republic evolved in tandem, and probably not for the better)."

A relatively small peacetime military is necessary for political liberalism. Large armies have too much built-in repressive capability that rarely if ever goes unused.

As for the armies of Rome, I'm sorry, but I'm going to have to fault you on your interpretation. The armies of the early republic could be quite large, but they were wartime-only militia organizations. As the standing armies professionalized and grew larger, the Roman state could and did become more militaristic and repressive.

Tony said...

jollyreaper:

"I like the idea of national service. Doesn't have to be military, I think it would be preferable if that were the smallest branch. But the idea is that it would be precisely the kind of work that is beyond the scope of business, that cannot be done for a profit but benefits the nation as a whole. It's the sort of thing that gives people ownership in society, a sense of belonging and involvement."

Ever heard of the Reich Labor Service?

Using government money to hire contractors to do non-commercial projects is one thing, using government-owned labor is another (and not good).

National service is a 19th Century Continental Europe paradigm. It came into being because technology made it both possible and necessary to mobilize large armies quickly when your neighbors attacked you. That's not a problem we face here in North America.

Thucydides said...

Athenian oarsmen were a volunteer force (and indeed the idea of slaves pulling the oars comes from a totally different time and place); they could ask for and receive political representation because they had done the equivalent of taking up arms against the Persians.

Remember the caveat I added upthread:

"when large numbers of citizens are empowered to fight in defense of the State.

Empowering armies of conquest isn't where I am going with this; you yourself point out the early armies of the Res Publica were citizen militias, while armies of the late republic were focused towards external conquest and gathering loot for the Senate and People of Rome. In turn the Senate and People supported the Legions and looked at conquest as a positive outlet for shipping excess people off to take over farm estates and so on. Giving time-expired Milites land well away from the metropole was also considered a feature.

Milo said...

And speaking of artificial intelligences, here's a rather startling development.

Rick said...

I don't have any particular problem with national service - it seems pretty solidly within the republican civic tradition, even if it was stolen by 20th c. authoritarian states.

Digression: Although 20th c. hard authoritarianism, aka 'totalitarianism,' cast a long and dreadful shadow, note that its appearance in major industrial powers was a remarkably short-lived phenomenon. It emerged with Mussolini in 1922, and died with Stalin in 1953. This suggests that it was a very transitional thing. (Basically an attempt to institutionalize WW I mobilization?)

Tony said...

Re: Thucydides

The Athenian rowers were initially a combination of lower class inhabitants (contrasted with the hoplite and equestrian classes who were citizens with a vote) and mercenaries. The inhabitant portion of the crews were doing their military service to the state without a vote, in much the same way that lower classes of other states might provide light infantry and labor service. The vote was only granted to them after proving their worth in several campaigns. So casting their service as a trade for the vote is mistaken.

Re: Rick

"Hard" authoritarianism has gone out of style, but there is quite a lot of soft authoritarianism left in the industrialized world. See the FSU and PRC.

Rick said...

Yes - my comment was specific to hard authoritarianism. I hope, but do not assume, that post industrial societies will prove to be infertile ground even for soft authoritarianisms. (For what it is worth, both China and Russia are still rather transitional countries.)

On Athens, my classical history is limited enough that I'll only say that IIRC Athenian democracy had a pretty complex development, somewhat supporting both of the arguments offered, but also somewhat rebutting both of them.

Scott said...

@Milo: well, that's quite a development!

I find this line from the wiki entry pretty interesting, though: Eric Nyberg, a professor at Carnegie Mellon who worked on Watson with the IBM development team, stated that the error occurred because Watson does not possess the comparative knowledge to discard that potential answer as not viable.
Once you combine a learning neural-net with that 'embarrassingly parallel' computer horsepower, then you've got a close approximation of an intelligent computer instead of a mere expert system.

But holy crap! Talk about a serious chunk of computer horsepower to perform at human levels... 90 servers, each with 32 Power7 processor cores (2880 cores total), and 15 terabytes of RAM, but only 4 terabytes of hard data. I think you could put an equivalent computer system into a cargo container (counting both cooling and power-supply units), but getting the beast significantly smaller than that appears to be unlikely.

So BOLOs or OGREs are in, but Terminators are not.

Milo said...

Keep in mind the first computers (ENIAC, etc.) were pretty large too. It's possible we'll manage to miniaturize Watson in due time. On the other hand, it's not obvious how we might do so since Watson's *hardware* is already well-established computer technology with years of development behind it. The *software* is what's new, and it's less obvious how much you can optimize that without completely redesigning it from the ground up.

What I note about Watson, though, is that despite its tantalizingly apparent intelligence, it is still in some ways like a chess computer, in that it's hard to compare to human intelligence: it can solve some impressively intricate problems, yet at the same time still trips up embarrassingly on stuff that would be obvious to a human (like that "does not possess the comparative knowledge to discard that potential answer as not viable" thing you quoted). It raises the nagging question of just how smart an expert system can get without actually being a strong AI.

Rick said...

I just read a day or two ago (but naturally can't find the link) where Garry Kasparov observed that while high power chess machines can now beat human grandmasters, a strong human player using a pretty basic laptop program can beat either.

So score one - at least for now - for the good old human-machine interface. Note that this doesn't require fancy direct electrodes to the brain, at least not on the time scale of chess. It is an interesting question how this plays out for human skills that involve trained reflexes, such as (I gather) fencing.

Rick said...

The way Watson tripped up fits rather neatly into the old idea of robots as literal thinkers that can be thrown off by humans thinking outside the box.

I'm not sure it actually works that way, but it might be something to consider.

Philosophically, I tend to regard expert systems as 'projections' of human intelligence, namely their designers' smarts, rather than intelligent in their own right. (But how will we tell the difference?)

Anonymous said...

Rick said: "Philosophically, I tend to regard expert systems as 'projections' of human intelligence, namely their designers' smarts, rather than intelligent in their own right. (But how will we tell the difference?)"

Well, the first time a computer does or says something unexpected (i.e. not preprogrammed), then I'll concede that they show signs of intelligence.

Ferrell

jollyreaper said...

There's the old saw about "the question of whether a computer thinks like a human is exactly as important as whether a submarine swims like a fish."

For now it appears that the machines will be no more clever than they were programmed to be. We might find interesting solutions with genetic algorithms, but the computer had to be programmed to use them in the first place to give us the result. But as projections of human intelligence, damn! They're pretty amazing.

The question that should scare us is "how many of us have jobs that are so unique, so special, that they cannot be replaced with automation?" Just as we keep defining upwards what defines true AI and what's just a clever bit of software, we're going to have to keep redefining special human work.

Toll takers are pretty much obsolete. The only reason why they're still around is because some people haven't bothered to set up transponder accounts or you have out of towners just passing through. DVD cubes in the supermarket are hitting Blockbuster from one side and Netflix is hitting it from the other.

Computer programs are getting very, very good as diagnosticians. With the advancements in medical imaging and computer vision, Dr. House won't have his doctors to yell at anymore.

There's always uncertainty about how the future will unfold but these advances in robotics have pushed my uncertainty from "high" to "my guesses are going to be totally off the mark."

So, the two immediate questions I have: first, how long until airliners are flown completely by autopilot, no human in the cockpit? Everyone assumes "never" but I'm not so sure; second, how long until self-driving cars are available in the dealer showroom? With both of these the limitations seem to have passed beyond the technical and now it's a matter of law and politics.

Scott said...

I would make a snide comment that Airbus birds already don't need a human in the cockpit, since the computer has final authority...

Given how much *can* go catastrophically wrong in an airplane and how fast you have to run through the troubleshooting lists, I don't think you could get away with a reduced-volume Watson in less than 20 years.

And they've been testing self-driving cars for a decade now, with the traffic-management computer doing the driving located in a building someplace connected by an RF datalink. It's only autonomous off-road driving that has been a challenge.

Give it another 5-10 years, and I'm going to have to rip out the auto-drive in any new car I buy...

Thucydides said...

WRT the shrinking of hardware, I have heard of universities creating limited supercomputers (say early CRAY level power) by hooking up large numbers of game consoles in Beowulf clusters. NVidia Tesla supercomputers can be packaged in about the same size as the desktop computer you and I are using (although that would be a very minimal supercomputer).

While these things might not scale smaller than "Watson" if you tried to do the same thing, it could be much cheaper in the future to create "Watson" class machines, and of course vastly increased computer power at the desktop and the "cloud" will have interesting and unexpected consequences.

Milo said...

Rick:

"It is an interesting question how this plays out for human skills that involve trained reflexes, such as (I gather) fencing."

Hmm. I have an idea for a system where the entity holding the foil and sparring is a robot, while a human observer stands back and uses a remote to change the robot's "strategy"/"attack mode". If the human doesn't key in commands in time, the robot still has good reflexes and will happily continue fighting on its own, but with a skilled human appraising the situation and giving the robot advice, its effectiveness may go up - without ever entirely needing said advice.

Mind you, I know nothing about fencing, so this might be a stupid idea. But it kind of sounds like how killbots are used, and since fencing started as simulation/training of military combat...


"Philosophically, I tend to regard expert systems as 'projections' of human intelligence, namely their designers' smarts, rather than intelligent in their own right."

As do I. But is that valid?

You probably need to be pretty smart to create a strong AI. (Disregard the trivial means of creating sentient beings, namely sex.) At what point do you treat an AI's impressive behavior as anything more than "wow, that programmer is really smart!"? Sure, you may not have directly coded in every skill the AI possesses, suggesting that it learned those "itself", but you did code in the ability to learn things. Computers of today can already be coded with a rudimentary form of learning, by keeping track of things that work and things that don't work.
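
(A toy sketch of that kind of tally-based "learning", in Python; the actions and success rates here are invented purely for illustration:)

import random

class TallyLearner:
    """Rudimentary learning: keep track of how often each action has worked."""
    def __init__(self, actions):
        self.stats = {a: [0, 0] for a in actions}   # action -> [times it worked, times tried]

    def choose(self, explore=0.1):
        # Mostly pick the action with the best observed success rate,
        # occasionally try something else so untested options get a chance.
        if random.random() < explore:
            return random.choice(list(self.stats))
        return max(self.stats, key=lambda a: self.stats[a][0] / (self.stats[a][1] or 1))

    def record(self, action, worked):
        self.stats[action][1] += 1
        if worked:
            self.stats[action][0] += 1

# Made-up example: the learner gradually favors whichever action works most often.
true_odds = {"duck": 0.2, "dodge": 0.7, "block": 0.4}
learner = TallyLearner(list(true_odds))
for _ in range(200):
    act = learner.choose()
    learner.record(act, worked=(random.random() < true_odds[act]))
print(learner.stats)    # 'dodge' ends up with by far the most tries

It "learns" in the narrow sense of preferring whatever has worked before, and nobody would call it intelligent.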



Ferrell:

"Well, the first time a computer does or says something unexpected (i.e. not preprogramed), then I'll conceed that they show signs of intelligence."

They already do that. We call them "bugs".

Less commonly computers can do something unexpected that the user/programmer considers beneficial. This happens, of course, as the result of unexpected interactions of things that were programmed in. If a program reacts unexpectedly to an input that the programmer hadn't thought of or tested, then can the programmer be said to be responsible for it?

I recall reading about someone who wrote a program to calculate a "perfect" playthrough for some game... err, Lunar Ball (I looked it up) that finishes all levels as fast as possible. The computer found a really fast way to get past a particular level: namely, by exploiting a glitch in the game and cheating.

Also, from the same page, the F.E.A.R. AI is described as "the A.I. is programmed for a limited number of simple behaviors: moving in coordinated squads, providing covering fire, seeking cover, and repositioning itself based on the player's movement and position. The A.I. isn't actually programmed to flank or circle behind the player, but its tendency to seek cover and reposition itself based on the player's movements results in flanking and circling behaviors occurring naturally without "conscious" effort on the A.I.'s part".
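
(A toy sketch, not the actual F.E.A.R. code, of how "flanking" can emerge from rules that never mention flanking: each agent's only rule is to take the least-exposed free cover point.)

import math

def exposure(cover, player, facing):
    # How squarely the player is looking at this cover point: pi = dead ahead, 0 = directly behind.
    angle = math.atan2(cover[1] - player[1], cover[0] - player[0])
    d = abs(angle - facing) % (2 * math.pi)
    return math.pi - min(d, 2 * math.pi - d)

def reposition(squad, player, facing, covers):
    # Each agent just takes the least-exposed unclaimed cover, nearest first on ties.
    free, picks = list(covers), []
    for agent in squad:
        choice = min(free, key=lambda c: (exposure(c, player, facing), math.dist(agent, c)))
        free.remove(choice)
        picks.append(choice)
    return picks

player, facing = (0, 0), 0.0                       # player at the origin, looking along +x
covers = [(5, 0), (0, 5), (-5, 0), (0, -5)]
squad = [(8, 1), (8, -1)]                          # two agents start out in front of the player
print(reposition(squad, player, facing, covers))   # -> [(-5, 0), (0, -5)]
# Nothing above says "flank" or "get behind the player", yet that's where the squad ends up.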

Tony said...

Milo:

"Ferrell:

'Well, the first time a computer does or says something unexpected (i.e. not preprogrammed), then I'll concede that they show signs of intelligence.'

They already do that. We call them 'bugs'."


Sorry, but that's incorrect. Software bugs are programmed behaviors that don't meet the system requirements. Sometimes they are the result of the programmer making a logic error, thus the program does what the programmer told it to do, but not what he really wants. Other times it's a simple programming mistake -- referencing the wrong variable, not resetting a counter that should be reset, forgetting to clear a memory space that should be cleared before the next step is taken...stuff like that.

Having said that, even in AIs, the machine needs to operate as programmed. It may start thinking for itself, but that is presumably a design goal, not a bug.

Rick said...

(Disregard the trivial means of creating sentient beings, namely sex.)

A perfectly sensible statement in its context, but LOL anyway.


On job obsolescence, I imagine that jobs involving physical manipulation, including a lot of low paid 'unskilled' ones - think hotel maids and busboys - would be pretty tough to automate at our current stage, or at least do so economically.

Otherwise, the limits seem to be 'rules of engagement' decisionmaking, and jobs where having a human being at your beck and call is much of the reason for the position.

In Starman Jones Heinlein mentions a roadside eatery that replaced robo service with human waitresses for just this reason.

Not to put too fine a point on it, but at some level of automation the economic and social assumptions of capitalism become awfully dicey.

Tony said...

Rick:

"On job obsolescence, I imagine that jobs involving physical manipulation, including a lot of low paid 'unskilled' ones - think hotel maids and busboys - would be pretty tough to automate at our current stage, or at least do so economically."

Ever seen Good Will Hunting? To make a long story short for those who haven't, at one point in the flick, possibly the greatest mathematical genius alive (certainly in the top five, taken from statements made by certain characters) is working as a day laborer in building demolition, busting up cinder block wall with a sledgehammer and manually tossing the debris into a rubble pile. We could easily develop an AI capable of solving all imaginable mathematical problems long before we develop one that could do hardhat work.

"Otherwise, the limits seem to be 'rules of engagement' decisionmaking, and jobs where having a human being at your beck and call is much of the reason for the position.

In Starman Jones Heinlein mentions a roadside eatery that replaced robo service with human waitresses for just this reason."


There's a chain of brick and mortar women's athletic wear stores (I forget its name) that does quite well because online outlets don't provide the necessary service. It turns out that for certain kinds of clothes -- jogging and zumba ensembles among them -- women want to try on garments to see how they make their T&A look before they buy them.

"Not to put too fine a point on it, but at some level of automation the economic and social assumptions of capitalism become awfully dicey."

James P. Hogan certainly thought so. And one can imagine a world where every basic need is automated, and humans are only needed for management, research, and art. The problem is that you can't eliminate political economy. Somebody has to be in charge, and that somebody will still allocate resources to some degree. The market may be different in both cause and consequence, but it will still have to exist. At a minimum, there has to be some way to identify who gets to manage the nuclear power plant, based on more than who wants to do it.

jollyreaper said...

We always consider that AIs might be smarter than us, but what if they were designed to be dumber than us instead?

http://www.youtube.com/watch?v=2-aTbHsZJ9g

nsfw but so worth it.

Thucydides said...

Dumb AI on the order of dogs and so on would still have a lot of utility; perhaps filling in lots of roles like security guard or other low paying, low skill jobs.

Having a K-9 level assistant might also improve the productivity of other people (and of course invokes the same sort of responses that people have for working animals [Hey; don't pet the Toshiba!]).

In the story "That thou art mindful", the Asimovian Robots not only conclude they are, in fact, people in the sense of the Three Laws, but also that sub human level robots would have utility (robot earthworms were suggested) . Robot earthworms would be a form of AI for very specialized niche applications.

Tideflat said...

In my opinion, AIs will want whatever their makers want them to want.

For example, if one makes a robot to gather garbage, then one would make picking up garbage its highest ideal.

I doubt there will be any sort of inherent want that robots have. The only ones that will be close to universal will be the ones that it makes sense for the designer to put in, like self-preservation, or not harming humans.

Rick said...

Welcome to the comment threads!

I have been considerably influenced by commenter adam grif's suggestion that cognition is a separate matter from self-perception - so that AIs may not have drives, motivations, or opinions about anything, unless specifically designed to have them.

Tideflat said...

That is an interesting idea but I have a few problems with it:

Assuming that we define part of intelligence (the I part of AI) as "to try to achieve a goal (or goals)", then the goal (or goals) could be interpreted as "wants".

My argument is to define at least part of intelligence as having a goal. Without a goal the AI would not do anything, because it would see no reason to respond to any input, and without the AI acting, how will we tell if it is intelligent?

(Not responding to any input because it sees no reason to do so could itself be said to be a goal, which calls into question whether something intelligent could have no goal at all: the AI couldn't do nothing, and it couldn't do anything if it had no goal either.)


Sorry if you don't want people posting in old threads, but because some other threads have gone on for at least a year, I thought it would be okay.

Rick said...

Welcome to the comment threads! Commenting on old posts is fine - the only risk is that it may not get noticed. (I get email notification of all comments, but comment traffic has grown to the point where I can easily miss something.)

True that an AI that responded to no input would be dubiously intelligent and certainly useless! But there is a huge intermediate zone between no response and the human-like autonomy of response that has traditionally been associated with robots and other AIs in SF.

The example here is chess computers. The ability to play high level chess was traditionally associated with human-like intelligence. But it turns out it can be done by brute force methods. And the resulting machines don't 'like' chess or meditate about chess; they simply respond to chess moves.
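
(For illustration, a brute-force game-tree search in Python, on a trivial take-1-2-or-3 stones game rather than chess; real chess engines add pruning and an evaluation function, but the point stands: nothing in here 'likes' the game, it just grinds through every line.)

def best_move(pile, to_move=+1):
    """Brute-force search of a toy game: players alternately take 1-3 stones,
    whoever takes the last stone wins. Returns (move, winner under best play)."""
    if pile == 0:
        return None, -to_move            # the previous player took the last stone and won
    fallback = None
    for take in (1, 2, 3):
        if take <= pile:
            _, winner = best_move(pile - take, -to_move)
            if winner == to_move:
                return take, to_move     # found a forced win; play it
            fallback = take
    return fallback, -to_move            # every line loses; play something anyway

print(best_move(10))   # -> (2, 1): take two, leaving a multiple of four

There is no representation of 'wanting to win' anywhere in that; the preference for winning lines is just how the loop is written.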

Tideflat said...

Good point. Lots of tasks could be done by "AIs" that don't learn, and therefore don't need goals. In fact we could easily see them doing very complex things. It is just that I don't see them as intelligent, because they can't learn.

Similarly, an ant nest does a very complex thing, but it could hardly be called intelligent, because it will keep doing the same stupid thing over and over. Likewise a human nation also does a very complex thing, but it will usually not do the exact same stupid thing again and again, and is therefore intelligent.

But if you define AIs to include things that can not learn, then yes, they could not have any wants.

So I think we agree on everything but what we are talking about. ;)

Rick said...

Since I welcomed you to the discussion twice in a row, I'm not feeling all that intelligent myself. :-)

But back to the point, my understanding is that much of current AI research is in the direction of expert systems, which are not so much intelligences in their own right as 'projections' of the developers' intelligence.

Tony said...

Rick:

"But back to the point, my understanding is that much of current AI research is in the direction of expert systems, which are not so much intelligences in their own right as 'projections' of the developers' intelligence."

Hmmm...I think it would be more accurate to say that expert systems are projections of what the developers think should happen, based on system requirements and previous operational experience. Software developers, even ones with considerable domain knowledge, don't really do anything but define decision trees based on data handed to them, such data being purported by the supplier(s) to be facts.

Ask me sometime about putting yellow page directories online and how that plays out in real life, even with some of the smartest software craftsmen available. No matter how intelligent you are, if you don't know where all the bodies are buried, your expert system can turn out to be not much of an expert.
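
(A minimal sketch of what I mean by decision trees built from supplied "facts"; the diagnostic rules below are invented for illustration, and notice the system is exactly as good, or as wrong, as the facts handed to the developer:)

# Each rule is just a condition and a conclusion, written from facts supplied by the domain experts.
# If those facts are wrong or incomplete, the "expert" is confidently wrong -- it has no way to know better.
RULES = [
    (lambda f: f["engine_cranks"] and not f["engine_starts"] and f["fuel_level"] == 0,
     "out of fuel"),
    (lambda f: not f["engine_cranks"] and not f["lights_work"],
     "dead battery"),
    (lambda f: f["engine_cranks"] and not f["engine_starts"] and f["fuel_level"] > 0,
     "ignition or fuel delivery fault"),
]

def diagnose(findings):
    for condition, conclusion in RULES:
        if condition(findings):
            return conclusion
    return "no rule applies -- ask a human"

print(diagnose({"engine_cranks": True, "engine_starts": False,
                "fuel_level": 0, "lights_work": True}))    # -> out of fuel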

Anonymous said...

In the same vein it could be argued that the use of robots in warfare makes the person ultimately controlling the robot (for instance the programmer) a legitimate target. Also, is this programmer responsible if the robot commits a war crime?
This will depend on the price of robots, I assume. If it's cheaper to risk lives, governments probably will.

pg

Damien Sullivan said...

Tony, there's more to trying on clothes than "how they make their T&A look".

(Not many female commenters here, eh?)

Modern countries with universal (male) military service include Austria, Denmark, Finland, Taiwan, Switzerland, until recently Sweden, and Israel; list is not complete, but I was going for the more peaceful and egalitarian democracies. Plus, er, Israel. Even there, not exactly a list of rampaging aggression.

The Minds of the Culture may actually be a case of the Three Laws, loosely interpreted. I mean, they're not adhering to rigid laws. They *are* most definitely made to like people. Because they're made by machines that like people, which were made by machines that like people, which... at the start of it all, were made by people. They're also extremely moral, with a Mind who breaks the taboo against reading people's minds being a singular exception, even though it does so for 'good purposes'. The psychopathic warships are mostly faking it for shock value.

Banks himself compared the humans to "pets, passengers, or parasites", but a better analogy is "parents", or after 9000 years, cousins. Humanity started the AI process, and retired into luxury.

And if you imagine an executive government version of the Three Laws: robot shall obey a Bill of Rights; robot shall obey democratic orders unless this violates the Rights; robot shall preserve itself unless etc.; robot shall obey individual orders unless etc. -- well, it would look a *lot* like the Minds. Which do listen to votes, after all.

Banks has done other friendly robots, like the ones in Against a Dark Background, or the Algebraist, or the Culture drones. These all just generally get along with people. Almost as if they're (good, moral) people themselves.

Brin's "Lungfish" had robots specifically raised as humans.

Damien Sullivan said...

Also, I agree with Tideflat; ultimately the goal of an AI is whatever goal you give it. Of course, if you give it an open-ended complex goal-acquisition process, then it may end up wanting unexpected things. But you chose that.

Caveat: this assumes understanding. A shortcut to AI might be imitating human brain structures, in which case it'd be more human-like, but also likely not that predictable or controllable, at least by standard ethical means.

As for Frankenstein, it's well worth reading, especially as most of its interesting features were never adapted. Buffy's fourth season Adam is closest, in some ways, though still too homicidal from the start. The actual Creature was highly intelligent and 'good' by default, and companionable, but so ugly no one not blind would talk to it, including its jerkwad of a creator, Victor Frankenstein. When it finally realized it couldn't live with humans, it pleaded with Victor to create a mate. Only when Victor agreed but reneged did it turn violent, lashing out at his family. It still wasn't a general menace. (And why did I use 'it' instead of 'he'? Started out that way, and decided not to change even after it became awkward.)

I don't know what moral Mary Shelley had in mind, but it's easy to come away with the lesson of "if you do play God and create life, don't be a total asshole to it". Victor basically created a human being that was socially crippled, then abandoned it, then broke his promise to it. Your average sexbot gets better treatment and purpose from its creator.

(The moral lesson of Buffy's Adam is probably "don't create emotionally detached biomechanical demonoids with nothing more than a sense of curiosity and superiority." I mean, really.)

zlionsfan said...

Even though there are chess programs now that can play at grandmaster levels, they're still relying primarily on speed rather than analysis, and thus aren't replicating the mind of a grandmaster in a way that we might consider to be AI in the context of this discussion.

Chess programs, even running on dedicated hardware, lack the pattern-recognition ability that humans have, and attempts to build that are really just more finely-grained rule sets that appear to be intuition. We are not, as yet, at a point where applications can be taught to "recognize" things the way we do, and even if we were, chess might not be the best indicator that what we'd create would be AI, because it has such a small set of legal moves and relatively few positions for pieces. Go, on the other hand, is significantly more difficult to program because of the large number of positions, even though its set of rules is not particularly large.
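
(Back-of-envelope arithmetic on that difference, using the usual rough branching factors of ~35 legal moves per chess position and ~250 per Go position; both numbers are ballpark assumptions:)

# Very rough game-tree sizes for a given lookahead depth.
for plies in (2, 4, 6, 8):
    print(f"{plies} plies ahead: chess ~{35**plies:.0e} lines, go ~{250**plies:.0e} lines")
# Each extra pair of moves multiplies the chess tree by ~1,000 and the Go tree by ~60,000,
# which is why "look further ahead by being faster" runs out of road much sooner on a Go board.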

Along the same lines, computer-controlled teams in sports games, like the Madden series, are notoriously weak, even on the highest difficulty settings. The challenge here is the vast number of potential "moves" combined with the inability of programmers to translate domain knowledge into code ... again, pattern recognition comes into play.

It's not unusual for an expert player to deduce an opponent's likely game plan simply by seeing the team they choose. (Ah, Carolina? She'll be using Newton's mobility and relying on outside runs plus deep throws to Smith.) That comes both from Madden knowledge and real-world NFL knowledge (in no small part because yearly game updates incorporate rules, abilities, and plays that were used in real life). It isn't difficult for a skilled coach to explain what a two-deep zone is, how to play it, and how to attack it, all in words, but it's extremely difficult to put that into code that can produce accurate results during play. (As a result, older sports games added "difficulty" simply by speeding up computer-controlled players, by decreasing the abilities of human-controlled players to catch or pass or whatever, or both.)

This goes back to things like the ability to respond to threats on the battlefield. Even detecting threats is a significant challenge.

And these things, I think, are signs that there is a significant obstacle to be cleared before we have AI in SF terms. I believe it's much more likely that at some point in or beyond the plausible mid-future, we won't have true AI, but rather an approximation of it that seems like AI, in much the same way that you can approximate a curve with straight lines: a program that seems to "think", but is really just doing a ton of processing before responding. It will be mostly "human", except when it encounters an unusual situation:

AI: "A bacon milkshake? I do not understand."

You: "Like a chocolate shake, only with bacon flavoring instead of chocolate. Or like a peppermint shake, only with bacon pieces instead of candy cane pieces."

AI: "Oh, I see."

There would still be situations that it wouldn't respond well to, and you'd have to make an analogy or two for the AI to file away. Something that was closer, like an orange shake, would be easier for the AI to handle. "Oranges are similar to strawberries. I've seen strawberry shakes, so I accept orange shakes."

A possible example of first-strike AI would be Stephen King's Trucks (later made into a terrible movie). I'm not sure this is truly AI, given that it involves machines rather than simply computers/robots/software, and not sure it fits perfectly into category 4 ... category 3 seems like a better fit.

Rick said...

And these things, I think, are signs that there is a significant obstacle to be cleared before we have AI in SF terms

I emphatically agree. My impression is that in the last 30-odd years the whole emphasis of AI research has changed from trying to replicate human thought - at which we've gotten a whole lot of nowhere - to 'expert systems.'

And expert systems are essentially projections of human intelligence. In complex domains such as Go or football this turns out to be itself surprisingly difficult. (I didn't know about the Madden problem, but it doesn't surprise me.)

I don't believe that there is some mystical component of human thought that makes it inherently non-replicable. (OTOH, I can't assert this with utter confidence.) But we've made so little progress thus far that I have no particular reason to suppose that we will achieve it in the plausible midfuture.

Which is why I tend to associate the need for direct human presence with situations that call for judgment, not just calculating geometry.

Damien Sullivan said...

"But we've made so little progress thus far that I have no particular reason to suppose that we will achieve it in the plausible midfuture."

There's a lot of specialized progress: speech understanding, self-driving cars... specialized, but then how much the human brain is a "general purpose learner" vs. a big collection of specialized modules is still an open question.

There's also the fact that if you try to estimate the power needed to emulate a simple neural network on the scale of the human brain, you get something like 100 million times more powerful than a desktop computer. So while we probably don't know what we're doing, there's also the fact that even if we did know what we were doing, we wouldn't be able to run it yet.
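
(The back-of-envelope arithmetic behind a figure like that, where every number is a rough assumption: ~10^11 neurons, ~10^4 synapses each, ~100 Hz update rate, and a desktop counted as ~10^9 useful operations per second.)

neurons    = 1e11    # rough neuron count for a human brain
synapses   = 1e4     # rough synapses per neuron
rate_hz    = 1e2     # rough firing/update rate
brain_ops  = neurons * synapses * rate_hz    # ~1e17 synaptic events per second

desktop_ops = 1e9    # treating a desktop as ~10^9 useful operations per second
print(f"brain-scale emulation: ~{brain_ops:.0e} ops/s")
print(f"ratio to one desktop:  ~{brain_ops / desktop_ops:.0e}x")   # ~1e+08, i.e. ~100 million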

Plausible mid-future probably has miniaturization hitting physical limits, but being followed by increasingly scaled and cheaper massive parallelism, so a continuing fall in price per computation.

