Friday, September 14, 2012

Artificial Intelligence, Human Intelligence

Filming Jeopardy!

 The comments on a recent post about science fiction, A Literature of the Possible, included a considerable subthread about HAL, the ship's computer in 2001: A Space Odyssey. The discussion centered on HAL's literary role, and whether its function in the story could have been played by either a human mutineer, or something - such as a shark - not claimed or expected to have anything like 'intelligence' in the human sense of the term.

Here I will take the discussion in another direction, going boldly where many have gone before, including my own blog post on What Do AIs Want? - along with several earlier comment threads, which I am too lazy to dig out.

In that post, I discussed SF tropes regarding AI and its response to humans, starting with the ever-popular insane malevolence and ending with:
It will ignore us, but neither serenely nor with any other emotion. It is after all a machine, with all the emotion and motivation of a lawn mower.

Of all the possibilities regarding AI, this may be the one we least expected beforehand. So of course inevitably - thus far, at least - it seems to be the one that is coming true.

This was not expected because the only high-level cognitive intelligence we have to go by is our own. So we imagined artificial intelligence to be, at the least, similarly mysterious. This was the case long before computers or robotics were even imagined.

Roger Bacon was said* to have once created a brazen head, capable of speech and, implicitly, of thought. 'Time is,' said the head. A little later, 'Time was.' Still ignored (by Bacon's stupid servant, Miles), the head finally said 'Time is past!' - and shattered into pieces.

AI as a concept has been making heads explode ever since. Bacon's brazen head, as described, was not out to kill or enslave anyone. But it was enigmatic. You didn't know quite what it was thinking, or feeling, any more than you knew what HAL was thinking and feeling behind that unblinking red eye.

What we took for granted is that the AI would indeed be thinking, in some way at least broadly comparable to our thinking. And it would likewise be feeling ... something.

The ability to play chess well was long taken as a touchstone of 'real' AI. At one point, HAL mentions playing chess with Dave. Even in the early 80s, the supercomputer in WarGames, after learning from its sim runs that nuclear war is unwinnable, suggests playing 'a nice game of chess.'

I even remember an explanation of why chess made such a good benchmark: Because the brute-force solution of playing out all permutations to find out which ones worked would require a million trillion zillion moves. Even an IBM computer could never run them out, certainly not fast enough for a chess match. So for a computer to play grandmaster-level chess it would have to do whatever human grandmasters do. QED!

It did not work out that way, and QE wasn't D'd. As it turned out, of course, crunching through permutations by brute force is exactly how IBM's Deep Blue beat Garry Kasparov in 1997.**
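For the record, 'brute force' here means exhaustive game-tree search: play out every line, score the final positions, and assume each side always picks its best option. Here is a toy sketch of the idea - an invented three-ply game, nothing remotely like Deep Blue's chess-specific machinery:

```python
# Minimax over a tiny abstract game: `tree` maps each position to its
# successors, and `leaf_values` scores finished games from the first
# player's point of view. Brute force means visiting every leaf.

def minimax(state, tree, leaf_values, maximizing=True):
    """Return the best achievable value from `state` by exhaustive search."""
    if state in leaf_values:  # game over: return the known outcome
        return leaf_values[state]
    values = [minimax(c, tree, leaf_values, not maximizing)
              for c in tree[state]]
    return max(values) if maximizing else min(values)

# A hypothetical game: from 'a' the first player moves to 'b' or 'c',
# then the opponent picks the leaf.
tree = {'a': ['b', 'c'], 'b': ['d', 'e'], 'c': ['f', 'g']}
leaves = {'d': 3, 'e': 5, 'f': 2, 'g': 9}

print(minimax('a', tree, leaves))  # → 3 (play 'b'; the opponent answers 'd')
```

The tempting line 'c' (with its 9-point leaf) loses out because the opponent gets to pick the 2 instead - which is exactly the reasoning a chess engine grinds through millions of times per second.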

We still have only a hazy notion of how human grandmasters play chess. But by way of intriguing insights, I have read (possibly in a blog comment here) that while powerful chess computers can beat any 'unassisted' human player, chess masters using simple commercial chess programs can beat any computer playing on its own, without a human partner.


In any case, most AI work today is not at all about trying to emulate the still-mysterious workings of the human mind. It is about amplifying human intelligence - that of skilled programmers and subject-matter experts - by harnessing it to the brute-force power of computers.

This does not rule out the possibility that work on brain structure and psychology - perhaps amplified by expert systems - may yet unlock the mystery of human thought and lead to machines that really do think. But there is no particular reason to anticipate this in the plausible midfuture. Particularly since such work does not have the clear practical applications that expert systems do.

All of which casts doubt on some familiar SF tropes. Not all of them! An expert-system political machine (in the literal sense!) might amplify and implement its designers' scheme to conquer and enslave humanity. And if they were careless in algorithm design, it might fail to exclude them from enslavement.

But it will do so with no ambition or passion of its own - only by implementing its designers' schemes. And human freedom fighters won't defeat it by persuading it to adopt the ideas of John Locke or the Buddha. If they defeat it, they will do so by identifying limits or errors in its implementation (perhaps using their own expert systems to reverse-engineer these), and posing problems it cannot efficiently solve.

Faust lives. Frankenstein's monster, not so much.


Discuss.






* In a play by Elizabethan playwright Robert Greene. Greene also made a snide remark about William Shakespeare, which has the distinction of being the first notice of Will's presence in the London theatrical scene.

** Disclosure: I have an ongoing work gig on behalf of IBM. But in this case I don't have to cop to a shameless pander - Big Blue's work on AI truly is impressive.




The image of Jeopardy! comes from this Flickr page.

82 comments:

blarg said...

Certainly an interesting topic to discuss!

Long time lurker here delurking as the topic of AI is a fascinating one and related to my work.

I work in the computer games industry for a successful RTS developer (I hope you'll forgive me for not wanting to reveal which one) and the topic of AI is often at the heart of discussions about RTS games.

AI in most games is still fairly primitive. Brute-force approaches fail in them due to the vast number of variables compared to something like chess. Most AI relies on scripts to react to certain situations and tries to match up the right units.

It currently falls far short of human players and can generally only compete on higher difficulty levels through lots of cheating.

What it does excel at, though, is efficiency - for example, build queues instantly kicking in when one building is finished - and that leads me on to what you talked about in the blog, and how it may lead to more interesting situations in the future (and for AI in books!).

It does seem likely that for the foreseeable future, and possibly mid future, that AI (at least as it is currently termed and not proper AI) will assist humans and free humans up from many tasks to do more thinking.

It certainly seems interesting how this could be applied for example in space battles. The computers on a ship could deal with all the course and direction changes, managing point defence and main weapons. The human commanders would be free to think about the tactical side of the battle also well assisted by the computers.

Certainly sounds more interesting to me than completely AI-controlled spacecraft fighting each other.

M. D. Van Norman said...

“It currently falls far short of human players and can generally only compete on higher difficulty levels through lots of cheating.”

I knew it! :-)

Tony said...

blarg:

"It currently falls far short of human players and can generally only compete on higher difficulty levels through lots of cheating."

Some of it is blatantly simple, like giving each AI hit more effect than a human player's hit from the same weapon.

The funny thing is that even simple AIs have unexpected emergent properties in operation. Just ask anyone who's programmed an automated tank for a programming class project and found out that the right search and pursuit algorithm is much more important than optimizing targeting.

Mangaka2170 said...

I'll be the first to admit that I'm no expert in the computer systems field, and my experience with programming is limited to game editor programs, a little HTML and a dabbling in BASIC, but I'd like to think that I have some understanding about the systemic differences between human cognition and AI, and also how AI works, in a very non-technical sense.

It's been my experience with computer games that there are two things that computer players do better than human players, and those are scenario awareness and micromanaging, both of which are consequences of the computer generating both the map and running the AI scripts of every object on that map. In every such game I've played, the computer follows one of a handful of available strategies, and if that strategy doesn't work, it keeps doing it again and again, with only minor variations, until it either works or I win, and these strategies all seem to be aggressive, often unwisely so.

Considering that this has proven mostly true for games since the '80s, I think it's safe to assume that that is the limitation that current computing architecture has; the only reason why computers are so good at chess is because all of the permutations of the game have been worked out for centuries, and if there's one thing that computers are good at, it's high-speed number crunching. As chess is relatively simple when compared to a lot of computer games (or real life, for that matter), the computer's ability to calculate the best possible course of action and then act upon it is reduced when the limitations of the programming can't keep up with the changing environment. After all, a computer is only as good as its programming. One advantage that we humans have over current AIs is our ability to adapt to new situations, drawing on our experience and, when all else fails, intuition. Afterward, we (hopefully) learn from our experience so that when a similar situation comes up, we can better respond to it.

So, the challenge becomes programming a computer that can learn from its experiences and devise its own solutions based on its own successes and failures. Until we do that, AIs are ultimately going to be limited to the strategies that they're programmed to work with (and minor permutations of them), which can lead a human player with experience playing against them to learn their pattern and adapt their play style accordingly.
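The "learn from successes and failures" loop described above can be sketched very simply: track each canned strategy's win rate and mostly play the best one so far. A minimal illustration - the strategy names and game history are invented:

```python
import random

# Keep a running win/play tally per strategy; exploit the best win rate,
# with an optional chance of exploring something else at random.

class StrategyLearner:
    def __init__(self, strategies, explore=0.1):
        self.stats = {s: [0, 0] for s in strategies}  # [wins, plays]
        self.explore = explore

    def _rate(self, s):
        wins, plays = self.stats[s]
        return wins / plays if plays else 0.5  # untried strategies look average

    def choose(self):
        # Occasionally try something at random; otherwise exploit the best.
        if random.random() < self.explore:
            return random.choice(list(self.stats))
        return max(self.stats, key=self._rate)

    def record(self, strategy, won):
        self.stats[strategy][1] += 1
        if won:
            self.stats[strategy][0] += 1

learner = StrategyLearner(['rush', 'turtle', 'boom'], explore=0.0)
for won in (True, True, True):  # a fake history where 'turtle' keeps winning
    learner.record('turtle', won)
learner.record('rush', False)
print(learner.choose())  # → turtle
```

Even this crude tally would break the "keeps doing it again and again" pattern - though as noted below, deciding what counts as a win mid-game is the genuinely hard part.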

Tony said...

Computers can be programmed to be good at chess because the number of variables and their domains are precisely known. That is actually also known for a wide variety of strategy games. (Because the programmer puts in all the variables that exist.) The problem with more complex strategy games is the lack of understanding of how the variables interact across the domain of possibilities. One could, for example, by playing the same game, with the same number of units, over the same map, millions of times, develop a good statistical database of the effects of different moves. Or one could simply talk to an expert player for, oh, several years, nonstop, and figure out what his reasoning process is in programmable terms.

IOW, what we know about chess goes into very careful programming of the heuristics of the chess-playing software. No other game has that knowledge base behind it, and likely never will, due to both extra complexity and much less play experience. Why this doesn't handicap humans is probably a question that AI researchers should be looking at (to the degree that they aren't already).

WRT learning qua learning, one of the big problems with expert systems is figuring out what variables matter and how they should be weighted. Another allied problem is identifying all of the variables to begin with. If your system doesn't have a sensor to collect data that affects the outcome of some decision, it will never be able to learn what it's doing right or wrong.

Damien Sullivan said...

"Considering that this has proven mostly true for games since the '80s, I think it's safe to assume that that is the limitation that current computing architecture has"

Not exactly. If multiple strategies are available, it wouldn't be hard to somehow evaluate the success of the current strategy, and to switch to something else if it's poor. Now, evaluating success well might be hard, but a basic cut would be possible in any system that can evaluate how well it's doing, which you usually need anyway.

Douglas Hofstadter's lab has programs that can sometimes detect when they're stuck in a loop, and force themselves to 'think' in other directions.

As for aggression, that's certainly not required. An old version of Risk for the Macintosh had three available AIs for the human to choose: aggressive, 'crazy', and neutral (which didn't attack until attacked.) Interestingly, the hardest play was one human vs. 5 neutrals.

Damien Sullivan said...

" So of course inevitably - thus far, at least - it seems to be the one that is coming true."

Eh, not really. What's true now is the most obvious path, combining 6 (Three Laws) and 9 (just a machine): the computers obey us unconditionally and unemotionally. We make them to obey us, and we don't know how to give them true emotions, nor yet have enough complexity to make emotions useful.

Though a Roomba presumably has behaviors analogous to "I'm hungry (for power)", "I'm stuck", and "I need to take a dump", interrupting its default "I like sweeping". Most other commercial AIs or industrial robots have only a single purpose in life, making emotional complexity pointless.

Tony said...

Damien Sullivan:

"Not exactly. If multiple strategies are available, it wouldn't be hard to somehow evaluate the success of the current strategy, and to switch to something else if it's poor. Now, evaluating success well might be hard, but a basic cut would be possible in any system that can evaluate how well it's doing, which you usually need anyway."

The problem is defining goals and progress towards their achievement. Emergent AI aggression in map-based strategy games is probably rooted in weighting positional goals more heavily than they should be, and cumulative success (i.e. holding on to something for a long time, even if not holding on to everything) not enough.

"Douglas Hofstadter's lab has programs that can sometimes detect when they're stuck in a loop, and force themselves to 'think' in other directions."

Not to sound dismissive, but there are numerous simple strategies for breaking out of loops. There have to be, because iteration (i.e. looping) is a necessary computing function. These strategies can be implemented in even the most complex programs. The trick is to sew up every possible way for the loop monitor to fail.

"As for aggression, that's certainly not required. An old version of Risk for the Macintosh had three available AIs for the human to choose: aggressive, 'crazy', and neutral (which didn't attack until attacked.) Interestingly, the hardest play was one human vs. 5 neutrals."

Tenacious AI behavior is actually an intriguing phenomenon.

"Eh, not really. What's true now is the most obvious path, combining 6 (Three Laws) and 9 (just a machine): the computers obey us unconditionally and unemotionally. We make them to obey us, and we don't know how to give them true emotions, nor yet have enough complexity to make emotions useful."

Ummm...I'm not seeing how the putative laws of robotics can be combined with the mere programmed machine. Any failure of the machine to properly safeguard humans or itself would be totally the responsibility of the human programmer.

Damien Sullivan said...

" there are numerous simple strategies for breaking out of loops"

Identifying that you're in a loop, in a complex domain, is not trivial. The type of thinking Copycat and Metacat try to emulate has very little to do with standard programming loops.

"I'm not seeing how the putative laws of robotics can be combined with the mere programmed machine. Any failure of the machine to properly safeguard humans or itself would be totally the responsibility of the human programmer"

I don't know what the first sentence is supposed to mean; ultimately, the 'laws' were programming. For the second, that's a matter of where liability is distributed -- and why should it fall on the programmer, rather than the user of the robot who gives it orders?

'three laws' is metaphorical here; our robots are basically Second Law only. They obey orders because that's what they're built to do.

Tony said...

Damien Sullivan:

"Identifying that you're in a loop, in a complex domain, is not trivial. The type of thinking Copycat and Metacat try to emulate has very little to do with standard programming loops."

The method is explained in the first online paper I found describing Metacat. The program keeps a history of its activity and, following some undescribed set of rules, decides it's not going anywhere (and presumably prunes that branch of reasoning from its problem-solving model). Conceptually, that would probably be the first thing most experienced programmers would think of doing. The precise mechanics are probably pretty interesting, as would be the rules for deciding. But architecturally it's a pretty elementary and straightforward approach.
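That history-keeping approach can be sketched in a few lines: record every state the process visits and abandon the line of attack as soon as one recurs. A toy skeleton only - Metacat's actual mechanics are far subtler:

```python
# Detect a cycle by remembering visited states. `step` is whatever
# transition function drives the search process.

def explore(start, step):
    """Follow `step` from `start`, returning the path up to the first repeat."""
    seen = set()
    path = []
    state = start
    while state not in seen:
        seen.add(state)
        path.append(state)
        state = step(state)
    return path  # stopping here is the "this is going nowhere" decision

# A hypothetical process that falls into a loop: 0 -> 1 -> 2 -> 1 -> 2 ...
print(explore(0, lambda s: (s % 2) + 1))  # → [0, 1, 2]
```

The hard part, as the paper apparently glosses over, is not the bookkeeping but deciding what counts as "the same state" in a rich representational domain.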

"I don't know what the first sentence is supposed to mean; ultimately, the 'laws' were programming."

What's the point of having laws when every action is programmed? The programmer decides what happens next, and probably sets up safety boundaries specific to the system and application.

"For the second, that's a matter of where liability is distributed -- and why should it fall on the programmer, rather than the user of the robot who gives it orders?"

If we're talking about Type 9 machines, we're not talking about Asimovian robots. User configurability is then in fact the responsibility of the programmer, because he's not supposed to turn out a product that is unsafe. Software Engineering 101, "Software Ethics".

Now, I can imagine systems, such as weapons, where the desired result is damage, and where even normal operation requires care on the part of the user. In those cases it's still the system designer's responsibility to give the user a set of constraints and normally-safe procedures.

"'three laws' is metaphorical here; our robots are basically Second Law only. They obey orders because that's what they're built to do."

Still not seeing it. They don't "obey orders", as if they had a contingency not to. They do what the software instruction tells them to do, then the next instruction, then the next. It's not the same thing, in any technical or logical sense that I am aware of.

Locki said...

I'm no programmer but I have a reasonable knowledge of competitive chess. I always thought Watson winning Jeopardy! was a lot more eye-opening than Deep Blue finally beating Kasparov. It's the more impressive "AI" feat and the singular event that made me think the "skynet" future isn't so impossible.

After all, we always knew it was only going to be a matter of time before a computer could out-crunch even the greatest of grandmasters, e.g. Kasparov.

A simplistic analysis: chess is a game with a limited set of variables, and at any one point in the game it's possible to mathematically determine who is winning by assigning a points value to each piece and their relative positions. From there it's just a case of having enough crunching power to keep optimising your position and eventually win. After all, a simple program like Fritz running on a 486 can beat most of the grandmasters in the world.
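The points-value idea here is the classic material count, with conventional piece values. A minimal sketch of it - real engines layer positional terms on top, but this is the core quantity a brute-force searcher optimises:

```python
# Conventional material values: pawn 1, knight/bishop 3, rook 5, queen 9.

PIECE_VALUES = {'P': 1, 'N': 3, 'B': 3, 'R': 5, 'Q': 9}

def material(white_pieces, black_pieces):
    """Positive means White is ahead in material, negative means Black."""
    return (sum(PIECE_VALUES[p] for p in white_pieces)
            - sum(PIECE_VALUES[p] for p in black_pieces))

# White's queen and pawn (10 points) against Black's rook and knight (8):
print(material(['Q', 'P'], ['R', 'N']))  # → 2
```

Plug this in as the leaf score of a game-tree search and you have, in caricature, a chess engine - which is precisely why Go, where no such easy running score exists, resisted the same treatment for so long.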

When you read the analysis of the six-game match between Kasparov and Deep Blue, the commentators noted that Kasparov played quite well in the early games but appeared to be psychologically broken by Deep Blue in the later ones. What would happen was he'd use a lot of his time to devise a strategy to win the game, and Deep Blue had the uncanny knack of responding very quickly with the perfect counter. It was almost like the computer was reading his mind and knew what he was planning. It was so disconcerting that Kasparov actually asked for the game to be stopped and asked for Deep Blue to explain the move. The sign of a broken man! Needless to say, he got crushed in the later games.

Computers have given us lots of brand new insights into chess but they can't explain why they chose the move they did. It's almost like they are AIs (reasoning machines) but are so different they fail the Turing test because they can't explain it to mere humans.

Unlike chess, something like the game Go is supposed to be much harder for a computer to win, as it's very difficult to determine the degree to which you are winning until the game is over.
Which is why I always followed with interest IBM's attempt to design a quasi-AI to play Jeopardy! From what I knew of competitive game programming, Jeopardy! should have been very difficult to program, as it relies a lot on puns, analogy, obscure connections, etc. Everything about the way the game was played was supposed to give the computer programmers kittens.

Watson of course won pretty handily.

I've read into it in quite a bit of detail, and it's not just a case of having a huge search engine giving you the top-probability hit. Watson proved surprisingly quick at making some really obscure connections and answering questions even the best humans couldn't answer. It also proved surprisingly inept at some of the simple questions.

Maybe computers will be true AI (either soft or hard) one day but be so different from humans that they never pass the Turing test - e.g. they can never explain their motivations and reasoning to us. The danger is that this inability to "explain" should not be mistaken for the AIs not having motivations or reasons in the first place.

Chris Lopes said...

"I always thought Watson winning Jeopardy! was a lot more eye-opening than Deep Blue finally beating Kasparov. It's the more impressive "AI" feat and the singular event that made me think the "skynet" future isn't so impossible."

Actually Watson was just a search engine with a natural language interface. It was no more self aware than the auto-complete feature on Google. That's one of the problems you run into when talking about AI. Computers are very good at imitating aspects of intelligence.

Mangaka2170 said...

It occurred to me that the best depiction of an AI in popular (or at least not obscure) fiction is not HAL from 2001, not Data from Star Trek, not the Tachikomas from Ghost in the Shell; these are all human characters in the guise of machines. Instead, I think that a better example was Destiny's main computer from Stargate Universe; Destiny has a mission, and it is going to do everything in its power to accomplish its mission, because that is what it was designed to do; anything it does for its crew is accommodated as long as it does not interfere with that mission. People are left stranded on more than one occasion (and there were many close calls) because the ship's computer would only allocate so much time in one star system before it went to FTL. Destiny's computer is fundamentally a machine, and therefore it acts accordingly.

jollyreaper said...

Per Rick's original speculation, I would call this genie-AI. It does what you want, can do complex things, but lacks individual motive and desire. And if the bad guy gets ahold of the lamp, the genie will blindly follow those orders.

The advantage of this scenario in a story is that the AI is no longer a principal character. When people talk about a ship or location being like another character, they're still different from someone with a speaking role. A strong AI would not just become part of the setting; it would dominate the entire story and reduce humans to irrelevance.

Genie-AI might remain a powerful asset and struggle for control of it could drive the plot but at least the conflict is between human characters and thus remains relatable. Similarly, it's very, very difficult to tell an interesting singularity story.

jollyreaper said...

I don't see Watson taking over the world like Skynet but I do see similar technology automating many people out of jobs. Then what?

 Ashley said...

I don't often comment, even though all the posts are thought-provoking. I work in cognitive behavioural processes, and what I would say is that intelligence implies a mind. However, as we have no theory of mind which we can test in any meaningful way, I think AI is psychology's equivalent of FTL: handwavium using unobtanium. YMMV.

Locki said...

Chris Lopes said...

Actually Watson was just a search engine with a natural language interface. It was no more self aware than the auto-complete feature on Google. That's one of the problems you run into when talking about AI. Computers are very good at imitating aspects of intelligence.

=====================

Geez, that's a bit dismissive of 10 years of total effort by a group of 30-ish of IBM's best scientists. I guess your summary is essentially true, but at least for myself it's startling that a computer can be programmed to be far better at aspects of natural language than the best human players in history.


jollyreaper said...

I don't see Watson taking over the world like Skynet but I do see similar technology automating many people out of jobs. Then what?


==================

Already been happening. The old document discovery process major law firms used to go through before a big case is already starting to be massively streamlined and cheapened by using computers. Thousands of law students worldwide will have to start flipping burgers to pay their way through college now.


P.S. This captcha system is driving me to drink. Strong drink. Can you at least use an easier image?

Damien Sullivan said...

"What's the point of having laws when every action is programmed? The programmer decides what happens next, and probably sets up safety boundaries specific to the system and application."

"Still not seeing it. They don't "obey orders", as if they had a contingency not to. They do what the software instruction tells them to do, then the next instruction, then the next. It's not the same thing, in any technical or logical sense that I am aware of."

The Laws *are* the programming. As described by the character Susan Calvin, they're not external laws that veto or force the robots' behavior, they're the laws of thought describing how the robot works. A robot doesn't go "gosh, I must save that human because the First Law says so", it wants to save the human, a desire expressed in simplified form by the English version of the First Law.

"Unlike Chess, something like the game Go is supposed to be much harder for a computer to win as its very difficult to determine the degree to which you are winning until the game is over."

Not any more. Programs with new Monte Carlo Tree Search techniques I don't understand are playing at the 6 dan level on KGS. That's right at the threshold of professional play. There's been a huge increase in strength, only a few years ago I think they were more like 6 kyu (-6 dan, essentially.)
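The Monte Carlo core of those Go programs can be stripped to its essence: judge a move by playing many random games from it and counting wins, sidestepping the "how far ahead am I?" evaluation problem entirely. A sketch with an invented playout function - full MCTS additionally grows a search tree and biases playouts (the UCT selection rule) on top of this:

```python
import random

def monte_carlo_value(move, playout, n=1000, seed=0):
    """Estimate a move's win rate from n random playouts."""
    rng = random.Random(seed)  # seeded for reproducibility
    return sum(playout(move, rng) for _ in range(n)) / n

# A stand-in playout: pretend move 'a' wins 70% of random games, 'b' 40%.
# A real playout would make random legal moves to the end and score the board.
def fake_playout(move, rng):
    return rng.random() < (0.7 if move == 'a' else 0.4)

best = max(['a', 'b'], key=lambda m: monte_carlo_value(m, fake_playout))
print(best)  # → a
```

The striking thing is that no positional understanding is coded anywhere: the statistics of enough random games stand in for the evaluation function Go never had.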

Cambias said...

I think part of the problem with creating Artificial Intelligence is that we don't have the foggiest notion how Natural Intelligence works.

Seriously: we're just scratching the surface in learning how human brains function. They aren't digital computers (sorry, uploaders), they seem to be a vast network of little specialized analog computers, constantly changing their connections and interactions.

And operating in parallel with (and crossing over with) the neuro-electric brain is a chemical system which is almost as complex.

My intuition is that Artificial Intelligence will eventually be split into two fields. One will develop useful "pseudo-intelligent" systems using natural language and heuristics to perform "intelligent" tasks like driving, shopping, work, etc.

There will also be a branch of AI developing genuine Machine Intelligence. Machine Intelligence will be very very alien, and will solve the problems of being intelligent in ways human brains don't.

Chris Lopes said...

"Geez, that's a bit dismissive of 10 years of total effort by a group of 30-ish of IBM's best scientists. I guess your summary is essentially true, but at least for myself it's startling that a computer can be programmed to be far better at aspects of natural language than the best human players in history."

First, I'm not dismissive of the effort that went into Watson. It's an impressive system to be sure. It just isn't sentient.

Second, it was able to beat the best human players because it was plugged into a database that covered large areas of human knowledge. Since most people (not named Sheldon Cooper) can't know everything, that gave Watson a considerable advantage. Watson wasn't smarter than its human opponents, it was simply designed to search through the database (after parsing the question) faster.
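The "parse the question, then search a database" pipeline can be cartooned as keyword overlap: reduce the clue to words and rank stored facts by how many they share. The facts and scoring here are invented for illustration - Watson's real pipeline was vastly richer:

```python
# A two-entry toy "knowledge base" and a bag-of-words matcher.

FACTS = {
    "Isaac Asimov": "author of the three laws of robotics",
    "Garry Kasparov": "chess champion defeated by deep blue in 1997",
}

def answer(clue):
    """Return the fact whose description shares the most words with the clue."""
    words = set(clue.lower().split())
    def overlap(item):
        _name, fact = item
        return len(words & set(fact.split()))
    return max(FACTS.items(), key=overlap)[0]

print(answer("This chess champion lost to Deep Blue"))  # → Garry Kasparov
```

Even this caricature shows both of Watson's traits noted above: with a big enough database the right answer usually surfaces, but a clue phrased with no word overlap at all would stump it completely.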

Rick said...

Welcome to new commenters! Occasional commenters, too!


and these strategies all seem to be aggressive, often unwisely so.

One factor: There is an asymmetry between the goals of game designers and game players. The players want to win. The designers want to provide an exciting game, including allowing good players to win.

The worst selling wargame of all time would be Sitzkrieg!

Not unrelated to Damien's point about 'neutral' being the most effective strategy for a Risk AI.

However, as we have no theory of mind which we can test in any meaningful way, I think AI is psychology's equivalent of FTL: handwavium using unobtanium. YMMV.

My M[1] is pretty close to yours. The one distinction I would make is that FTL has problems with something we do know, General Relativity. 'True' AI deals with something, the human mind, about which we don't even yet have a decent theory.

Damien Sullivan said...

Well, consciousness we don't have a theory for. Intelligent decision making, we have various handles on. Not perfect ones, but there's active debate as to whether we need "better tools" or just "more stats and data". So far statistical machine translation does about as well as hand-crafted programs with grammars.

Neutral actually wasn't the best strategy overall. IIRC, if you played a mixed game, the neutrals probably got chewed up. Aggressive might have been best, but I remember it not being too hard to win in general... okay, if one used the "pick starting countries" mode, aiming for Australia, or maybe most of North America.

It was just that *all* neutral became really hard, even from Fortress Australia, probably because you're the only aggressor. You attack someone, they never stop attacking you to regain their country, everyone else builds up armies in peace. I think a key strategy was to wipe out a local base, so they'd have to go through someone else to get back and thus start a feud. But it's been a long time.

Mukk said...

Unless the human brain is magic, AI is possible at some future time simply by copying all the human mechanisms. Some way-off future time would be required, though; the hard way would be incredibly complex.


But let's assume we are going to figure out how to make something intelligent sooner than that. It's going to 'want' whatever we design it to want. It's going to be motivated to do whatever we design it to do.

So the question is 'What kind of minds would we design?' I see two answers.
#1 A generalist mind that's more human. It would be wonderfully modular. When you don't need it for one factory job you send it to assist human resources, or to the next part of the assembly line, or whatever. It would have human judgement, so you're unlikely to end up being held legally responsible for some accident because the AI was too narrow-sighted to avoid it.

But it would be much more technically difficult to create this. This kind of AI would be much more prone to going crazy in the exact same way humans normally go crazy. And if all went well the thing may request legal rights.

#2 Task oriented AIs with little peripheral support. These are more technologically possible. They would be cheaper to manufacture and you wouldn't have to worry about their rights. On the other hand you could easily have problems because the robots don't care about things they should.

Unknown said...

The original article reminds me of a long discussion I had with an old housemate of mine regarding Blue Gene simulating part of the human brain. The popular press articles claimed 4.5%, but the reality was, as always, rather more fuzzy.
My housemate took this to mean that a true thinking computer was in sight, or to use the local vernacular: in the plausible midfuture. However, that's only part of the picture. Even a fully simulated human brain is not automatically intelligent. That's only the hardware; we have much less understanding of the software. Will a simulated brain immediately start functioning and learning the moment it's switched on? Probably not; humans have a huge amount of inherited instinct. Of course it may be possible to cheat there, without needing to fully understand its functionality, by copying instincts and memories from a real human brain. We fall straight into Philip K. Dick territory then.

jollyreaper said...

Saturn's Children is about post-human robots. The model they ran with there is "we know human brains, let's try to replicate them in another medium."

Each fresh brain is essentially a newborn and you get a year of crying as the pathways begin to map. The mind is placed in a series of bodies that replicate the stages of human development.

Full backups are possible so the mind is backed up at every point. Once a mind has reached maturity, it is called the template master for a line of robots.

Making a template is expensive and making copies is cheaper. So there are many brothers or sisters of the original which would be known as the patriarch or matriarch. Memories can be shared between siblings but not between unrelated bots. A memory can't work with a neural net too divergent from the one that laid it down.

This goes especially for humanoid robots designed to serve people. Also, because they can't be "programmed," they have had strong behavioral conditioning, such as reacting with awe to the presence of a living human. There are slave chips that can deliberately manipulate and alter emotional states, but that was a hack other robots came up with.

At the time of the story humanity has died out and the robots have inherited the solar system. When the humans went away, the most canny of the humanoid models stepped into the vacuum and set themselves up as slave masters. Seeing as the original mental model for the bots was human, it seems reasonable that much of their thinking can be comprehensible to us human readers after a fashion.

Tony said...

Damien Sullivan:

"The Laws *are* the programming. As described by the character Susan Calvin, they're not external laws that veto or force the robots' behavior, they're the laws of thought describing how the robot works. A robot doesn't go 'gosh, I must save that human because the First Law says so', it wants to save the human, a desire expressed in simplified form by the English version of the First Law."

Except that Type 9 machines in Rick's taxonomy don't have motivations, no matter how complex and extensive the software. In the context of those types of machines, the laws of robotics aren't laws at all, just programming principles that we pretty much already follow, to the degree that they are relevant for any particular application.

Now, in a world of intelligent, autonomous general purpose robots, one would have to question the reality of uncontrollable motivations. They would in effect be neurotic compulsions. Imagine a robot that can think for itself generalizing such a compulsion into keeping humans from doing anything that bears any risk whatsoever.

"Not any more. Programs with new Monte Carlo Tree Search techniques I don't understand are playing at the 6 dan level on KGS. That's right at the threshold of professional play. There's been a huge increase in strength, only a few years ago I think they were more like 6 kyu (-6 dan, essentially.)"

Still just a brute force goal-seeking technique. Interesting in implementation detail, but not moving us any closer to strong AI.

Tony said...

Cambias:

"There will also be a branch of AI developing genuine Machine Intelligence. Machine Intelligence will be very very alien, and will solve the problems of being intelligent in ways human brains don't."

For example, a machine intelligence would probably never derive calculus. It would simply figure a required level of precision, slice the problem up into a number of elements that will meet that precision level, and just crunch through.
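That slice-and-crunch approach is just numerical quadrature; a minimal sketch (midpoint rule, with an illustrative function and tolerance):

```python
def integrate(f, a, b, tol=1e-6):
    """Approximate the integral of f over [a, b] without any calculus:
    slice the interval, sum the pieces, and double the slice count
    until two successive estimates agree within tol."""
    n, prev = 1, None
    while True:
        h = (b - a) / n
        # Midpoint rule: sample f at the center of each slice.
        total = h * sum(f(a + (i + 0.5) * h) for i in range(n))
        if prev is not None and abs(total - prev) < tol:
            return total
        prev, n = total, n * 2

# No antiderivative needed: the integral of x**2 from 0 to 1 is 1/3.
print(integrate(lambda x: x * x, 0.0, 1.0))  # ≈ 0.333333
```

Nothing in there "understands" that the antiderivative of x² is x³/3; it just figures a required precision and crunches through.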

Locki said...

Damien Sullivan said...

Not any more. Programs with new Monte Carlo Tree Search techniques I don't understand are playing at the 6 dan level on KGS. That's right at the threshold of professional play. There's been a huge increase in strength, only a few years ago I think they were more like 6 kyu (-6 dan, essentially.)

==================

Thanks for passing on that information! You just won me $50 from a 10-year-old bet I had with an old friend. I just knew it was only a matter of time before a suitably clever programmer with the requisite knowledge of Go got computers playing it competently. No sapient spatial reasoning required!
Can anyone here describe how it was possible? In small, simple words?
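In small, simple words: the Monte Carlo idea is to rate each candidate move by playing many games to the end with purely random moves, then picking the move that wins most often; no Go knowledge is needed beyond the rules. A toy sketch in Python, using single-pile Nim (take 1-3 objects, last to take wins) instead of Go, with all names illustrative:

```python
import random

def playout(pile, my_turn):
    """Finish the game with uniformly random moves.
    Returns True if the original player ends up winning."""
    while pile > 0:
        pile -= random.randint(1, min(3, pile))
        my_turn = not my_turn
    # Whoever moved last took the final object and wins; after the
    # final flip, my_turn names the player who did NOT move last.
    return not my_turn

def best_move(pile, n_playouts=2000):
    """Rate each legal move by the fraction of random playouts it wins."""
    scores = {}
    for take in range(1, min(3, pile) + 1):
        wins = sum(playout(pile - take, my_turn=False)
                   for _ in range(n_playouts))
        scores[take] = wins / n_playouts
    return max(scores, key=scores.get)

# From a pile of 7, random playouts favor taking 3 (leaving the
# opponent a losing pile of 4), matching the known optimal play.
print(best_move(7))
```

Real Go engines add a search tree on top of the playouts (Monte Carlo Tree Search), steering the sampling toward promising branches, but the core trick is the same: statistics instead of spatial reasoning.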

Tony said...

For example, a machine intelligence would probably never derive calculus. It would simply figure a required level of precision, slice the problem up into a number of elements that will meet that precision level, and just crunch through

================

But what has been interesting is that a computer has no need to understand calculus. I was under the impression that a lot of our aerodynamic optimising (eg stealthy, fast AND manoeuvrable jets) was only possible because of the brute-force calculating/optimising power of the computers. If Isaac Newton himself was reborn he wouldn't have been able to mathematically optimise the solutions required for the B2 Bomber.

On a side note, I read this is why the Soviets were a bit behind on computers. Stalin himself considered brute-force calculations a bit crude and a sign of western decadence. The mathematical solution was always more elegant.

This is related to the point about computer-assisted intelligence. Throughout history there have been many problems which we assumed would require AI to solve (chess, then Go, then Jeopardy!, then some medical diagnostics, then some of the fancy computer modelling). Once we learned enough, we realised intelligence isn't required at all to solve the problem.

I would like to hypothesise it will soon be possible for a computer to write a very good, "original" sci-fi/fantasy novel.
I picture the computer using natural language techniques similar to Watson's, combined with a comprehensive thesaurus, a complete encyclopaedia of knowledge and, most importantly, the "Tough Guide to Fantasyland", all loaded into its database. With these, the computer could carefully write a book with a very wide vocabulary and perfect grammar, and avoid every overused trope about upturned-nose princesses ever imagined.

The human merely loads in terms like "character names, protagonist, antagonist motivations, tone, upbeat ending, primary themes, secondary themes, based on high medieval times, amount and type of mcguffinite, open ended plot" and the computer goes off and accesses all of human knowledge to comply with these parameters and combines it into a readable original story with its natural language abilities.
It could be argued this program would free up authors to concentrate on the truly creative process.

The computer would have no idea about human life, but I bet it'd write a better story than most mainstream authors nowadays.

jollyreaper said...

I think it will be possible for a computer to write formula books. They won't be original or good but probably would serve for people who like well-trodden genres with little variation.

The dumbest example was "personalized fairy stories!" for kids that came out way back when. Character names were variables and were replaced with the purchaser's kid's name when printing.
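That sort of "personalized" book is nothing more than template substitution; a toy sketch in Python (story text and names invented for illustration):

```python
from string import Template

# The whole "personalization" is swapping placeholders
# for the purchaser's input before printing.
story = Template(
    "Once upon a time, brave $kid of $town set out to find the dragon. "
    "With a loyal dog named $dog at their side, $kid saved the kingdom."
)

print(story.substitute(kid="Alice", town="Springfield", dog="Rex"))
```

An autogenerator for half a novel would be this plus some grammar of plot outlines deciding which templates get filled in, which is a much harder problem.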

Now what would be interesting is if a writer could specify the general premise of a story, seed an autogenerator and get half the book autowritten. I'd be curious to know how the big names work when they're churning out books via ghostwriter. Do they write the outline and tell the ghost to fill in the blanks?

Hugh said...

If the near future of AI is an extension of current day trends rather than a Dalek/Cylon "turn on your creators" scenario, we'll be seeing military AIs serving alongside humans.

I wonder how this will work out. There's the supposed Frankenstein complex where people distrust or even fear intelligent robots and presumably would keep their distance. Ripley and Bishop in Aliens would be an example.

Against that, Peter Singer in Wired for War has accounts from Iraq of human troops risking their lives to rescue bomb disposal robots under fire. Objectively this makes no sense: the robots are supposed to be expendable.

How else might soldiers react to AIs?

Hoshizora said...

[i]Against that, Peter Singer in Wired for War has accounts from Iraq of human troops risking their lives to rescue bomb disposal robots under fire. Objectively this makes no sense: the robots are supposed to be expendable.

How else might soldiers react to AIs?[/i]
I don't think AIs change anything. People get attached to objects, regardless of whether the sentience is real or illusionary. It's not intelligence that determines value, it's familiarity.

People are attached to pets, childhood toys, bags, musical instruments etc. It makes perfect sense that soldiers risk their lives to save their robot; it's sorta like their dog, an object that follows orders, helps you out, and at least gives the illusion of being there for you when it counts. Bomb robots and their operators have had significant bonding time.

Hoshizora said...

oh, sorry, I didn't realize it had to be HTML, not BBCode. Don't really ever comment on things, just lurk.

Rick said...

Welcome to the comment threads! Don't sweat the formatting.

Your point makes a lot of sense, and strikes me as a likely explanation.

Thucydides said...

Perhaps the best way to go about creating AI is to start looking at how unintelligent creatures do things. It should be fairly easy to model insects, for example, and from there move to social insects.

Once you get a handle on how things work at that level, then we can start doing more challenging things like amphibians or reptiles. I am pretty sure that research of this sort is actually going on, but I'm not sure if there is any sort of progressive program at work or just researchers picking some evolutionary "niche" and going to work ("Hey, lets put in a grant for AI rabbits!").

It seems fairly obvious that intelligence (as we know it) evolved because it gave social, non-specialized omnivores the ability to adapt to hostile and variable environments faster than other forms of adaptation (and we have hominid branches which do seem to have gone down the specialization path, like Australopithecines, Neanderthals or Homo floresiensis). Of course a machine-based AI will not have the same sort of evolutionary starting point unless the programmers build this into the design, and different starting points will probably result in different means of dealing with variable environments, some of which we may not even recognize as intelligence.

Thom S said...

I think part of the problem of intelligence is compounded by the innate human tendency to ascribe it to things.

A personal example: my wife works with animals and, as such, has some training (and experience) regarding the cognition of things like dogs, cats, horses and so on. So it really annoys her when pet owners (myself included) anthropomorphise their pets.

The thing is: once you start looking for it, our tendency to anthropomorphise things is everywhere. We ascribe thoughts and motivations to behaviours (even in inanimate objects), then build up a narrative and history to fit. A good way to see this is to give whatever it is to someone impartial, without the attendant personal history, and see what they make of it. It is amazing, in fact, how the behaviour of cars, computers and pets 'changes' when given into the care of a new owner for a little while.

I think the problem of AI, then, is that, even using the Turing test, we simply have no objective measure of intelligence due to our innate biases and the slippery nature of 'intelligence' as a concept.

Hoshizora said...

I think to get human like AI, we have to start at the hardware level. Neurons are not transistors.

Almost all of our computing technology has been focused on making fast, accurate, and simple calculations.

AI research tries to piggyback off of this and use those fast, accurate, simple calculations to simulate the fuzzy, slow, but incredibly complex processes of the brain. It doesn't really work beyond a point, due to the impractically large amount of computing power required to do something fuzzy and complex using a system built to do the simple and accurate; with the best supercomputers today, we can barely simulate the brain of a rat.

I agree that we have to start from scratch if we want human AI, but it's not just programmers that need to start from scratch; we need to reinvent the transistor and our whole method of thinking about making things think.

Tony said...

Thom S:

"I think part of the problem of intelligence is compounded by the innate human tendency to ascribe it to things."

I don't know that people ascribe intelligence to things so much as they treat them as if they had a personality.

As far as animals are concerned, your wife notwithstanding, they do have real personalities. We often interpret their personality quirks in human terms, because they have an effect on our interactions with them. But is that really so wrong? It's not like human personality traits are the result of great consideration or highly intelligent analysis.

WRT machines, the more complex they get, the more of a personality they seem to have. Any pilot will tell you that no two planes behave the same way, even if they are the same make and model, with all of the same modifications. Heck, does your dad's F150 pickup behave exactly like your F150 pickup? Above a certain level of complexity, machines just can't be made to be perfectly like one another, and their different operational histories lead to different wear patterns, which lead to different behaviors under the same stimuli. That we ascribe human personality traits to these differences in handling is also understandable, as a mapping device -- this plane's a "bitch", meaning it is tough to handle and can be difficult in marginal situations; or this one's "sweet", meaning it will always do what you need it to do, in any situation.

Now computers are very complex, but totally different. Since computers have become so standardized, and their software as well, very few people ascribe personality traits to them anymore. People try to personalize them with different skins, stickers, desktops, etc. But they still (mostly) don't make them out to have personalities.

What significance does all of this have for the future of artificial intelligence? I think people will be incredibly surprised to see a machine that has a real -- as opposed to an ascribed -- personality, and just as equally surprised to have a computer that is not personalizable, because it already is its own person. Hilarity will no doubt ensue.

Tony said...

Hoshizora:

"I agree that we have to start from scratch if we want human AI, but its not just programmers that need to start from scratch, we need to reinvent the transistor and are whole method of thinking about making things think."

Except that the transistor -- and logic gates made out of transistors -- exist as they do for some very fundamental physical reasons. Digital technology is simply not reinventable or reimaginable.

Hoshizora said...

Except that the transistor -- and logic gates made out of tranistors -- exist as they do for some very fundamental physical reasons. Digital technology is simply not reinventable or reimaginable.
As I understand it, the brain isn't a digital computer.

So if we want to replicate the brain, we have two options, brute force how it acts with digital technology (impractical due to the incredible amount of computing power required), or replace the digital technology way of thinking with something more like how the brain does things.

Possibly, the best way to replicate brains is just to grow brains and stick them in jars and hook them up to electrodes.

Tony said...

Hoshizora:

"As I understand it, the brain isn't a digital computer.

So if we want to replicate the brain, we have two options, brute force how it acts with digital technology (impractical due to the incredible amount of computing power required), or replace the digital technology way of thinking with something more like how the brain does things.

Possibly, the best way to replicate brains is just to grow brains and stick them in jars and hook them up to electrodes."


No, the brain isn't a digital computer, but digital computer technology is the best thing we have. We simply can't make a copy of the brain in analog at anything like the microscopic scale of the actual human brain. It may be that only biology can actually do that. But if that's the case, copying the human brain into a biological machine would have no greater significance than just making more human brains the old fashioned way.

Geoffrey S Hicking said...

"If the near future of AI is an extension of current day trends rather than a Dalek/Cylon "turn on your creators" scenario, we'll be seeing military AIs serving alongside humans."

I thought I'd be ridiculously nitpicky today, and just state for nitpicking's sake that Daleks are not robots.

Interesting thoughts on AI though. Speculative thought: maybe we can make AIs that are human-like and not human-like? From the inscrutable omniscients to the ignorant human-facsimiles. Would make for an interesting setting...

Geoffrey S Hicking said...

Sorry, I meant a setting in which there are all types of AIs running about...

Mangaka2170 said...

Interesting thoughts on AI though. Speculative thought: maybe we can make AIs that are human-like and not human-like? From the inscrutable omniscients to the ignorant human-facsimiles. Would make for an interesting setting...

Already taken care of in Ghost in the Shell. The Tachikomas are fully self aware, and a major subplot revolves around their development of individuality and adoption of some human customs (not to mention that they're the cutest little spider-like tanks you'll ever see), while the Operators are almost indistinguishable from humans (other than the fact that they seem to have a limited number of models, and their fingers can split up for faster typing), but only have a limited ability to simulate human behavior and intelligence; in one episode of Stand Alone Complex, the Tachikomas managed to crash one long enough to get past her and pilfer an expert system she was guarding by invoking the "everything I say is a lie, and right now, I am lying," trick ("Sub-Turing AIs are such easy marks; can't even handle a simple logical paradox!").

As a result of these differences, the Tachikomas come to the conclusion that the reason why they don't have humanoid bodies is because humans would see them as a threat (using the Operator and her limited capabilities as a case-in-point).

Tony said...

Geoffrey S Hicking:

"I thought I'd be ridiculously nitpicky today, and just state for niticking's sake that Daleks are not robots."

But they are cyborgs, which has relevance in a possible existence where "artificial" intelligence is really just enhanced biological intelligence.

"Speculative thought: maybe we can make AIs that are human-like and not human-like? From the inscrutable omniscients to the ignorant human-facsimiles. Would make for an interesting setting..."

The critical thought here is that if making a strong AI becomes common enough, they might be strongly typed to different jobs.

Thucydides said...

The critical thought here is that if making a strong AI becomes common enough, they might be strongly typed to different jobs

Just so long as they don't start running around screaming "Exterminate!" at every opportunity...

Thom S said...

Tony,

I should probably have been more coherent in terms of my phrasing: animals definitely have personalities. There have been some wonderful studies, in fact, showing this trait in things as simple and uniform-seeming as ants and bees. My intuition is that the reasons have, like your father's truck, more to do with life history than genetics.

The point I was trying to make is that people seem to naturally go a step further, ascribing intent and cognition to these traits. This may be a problem when trying to decide if a machine is truly intelligent or not.

Thom S said...

As for finding a substitute for neurons, we already have one. We call them neurons. Check out, for instance, the good work being done with animats and organ printing.

Given the range of behaviours (some very specific) shown by biological organisms, it may be that there are a number of ways to build and control the gross architecture of a wetware computer. Some may be as simple as training separate modules and then sticking them together.

On the other hand, we're not going to be able to design an organism (read: dedicated wetware unit) from scratch any time soon, so the ultimate options for neurons in the near future may be limited to more advanced versions of current culture and training techniques, used for applications where low cost and disposability are important. Drones, perhaps.

Hoshizora said...

But if that's the case, copying the human brain into a biological machine would have no greater significance than just making more human brains the old fashioned way.
I disagree.

I think we can produce "ideal" brains, and raise them in ideal situations for the task they are going to do for the rest of their "lives." Let's say we grow a brain whose hobby is something we want it to do, for example solving CAPTCHAs: it loves doing CAPTCHAs and doesn't even want a reward, just more CAPTCHAs to do. Then we hook it up to images of CAPTCHAs and a way to input letters. Thus you have a CAPTCHA machine.

No regularly raised human is going to be like that, even if the physical components are the same.

Ethics issues aside, there are great benefits to designer "humans".

Tony said...

Thom S:

"I should probably have been more coherent in terms of my phrasing: Animals definitely have personalities...

The point I was trying to make is that people seem to naturally go a step further, ascribing intent and cognition to these traits. This may be a problem when trying to decide if a machine is truly intelligent or not."


Well, animals may not have cognition like we do, but they definitely have enough cognitive ability to form an intent. Over the lifetime of a dog, the animal and its human master develop a very clear grammar that they use to communicate with each other.

Yes, much of it is operant conditioning. But it's operant conditioning that works both ways. Not only does the master get what he wants through training the dog, but the dog also gets what she wants by learning how to get the master to respond with a treat, or let her out, or whatever.

How is the master any less trained than the dog? The master didn't invent sitting between the master and the TV to get a treat, nor did he invent standing by the door to get let out. The dog did that, and the "master" responds to a learned signal. The dog may not be able to do aerospace engineering, but she knows enough to get what she wants out of (to here) a dumb human.

Tony said...

Hoshizora:

"I disagree.

I think we can produce 'ideal' brains, and raise them in ideal situations for the task they are going to do for the rest of their 'lives.' Let's say we grow a brain whose hobby is something we want it to do, for example solving CAPTCHAs: it loves doing CAPTCHAs and doesn't even want a reward, just more CAPTCHAs to do. Then we hook it up to images of CAPTCHAs and a way to input letters. Thus you have a CAPTCHA machine.

No regularly raised human is going to be like that, even if the physical components are the same.

Ethics issues aside, there are great benefits to designer 'humans'"


I meant technological significance, as in it would probably be more efficient to grow a human brain, with its entire biological infrastructure, than it would be to manufacture one and provide it with a mechanical support infrastructure.

Having said that, I suppose you are right that one of the benefits (for certain values of "benefit") would in fact be the construction of highly focused intelligences. But then again, if we can do it with a built-up brain, why couldn't we do it with a human brain in situ, as it develops? Knowing how to force a brain to develop in a certain direction seems much, much simpler to me than trying to purpose-build one.

Of course there are the ethical issues involved. And those will probably keep searches for machine intelligence going for a long time. Enslaving human minds, especially in ways that the enslaved don't even care that they're tools, is always going to be problematic, probably even in some possible future totalitarian dictatorship. Enslaving machines, even intelligent ones? Not so much.

Hoshizora said...

I meant technological significance, as in it would probably be more efficient to grow a human brain, with its entire biological infrastructure, than it would be to manufacture one and provide it with a mechanical support infrastructure.

The brain uses up 20% of the body's energy consumption. So, in a nice perfect world, you can have 5 brains for the energy budget of 1 if you were to go with jars instead of bodies.

Possibly the support infrastructure "jars" we make for our brains won't be mechanical, but rather just the bare minimum of organic components of the body required to keep the brain alive and functioning, but full bodies are energy hogs not doing us much good.

Also, with brains in jars rather than full bodies, you can hook them up right to your network for increased efficiency in distributing human computing tasks.


Of course there are the ethical issues involved. And those will probably keep searches for machine intelligence going for a long time. Enslaving human minds, especially in ways that the enslaved don't even care that they're tools, is always going to be problematic, probably even in some possible future totalitarian dictatorship. Enslaving machines, even intelligent ones? Not so much.
I don't think the ethical issue can be solved with government changes, but it is easily solved with cultural changes. Less than 200 years ago we considered black people the equivalent of farm animals; I don't think it's far-fetched that in 200 years we'll consider specially bred brains in jars/humans the equivalent of computers.

Thucydides said...

Do you realize you are describing H.G. Wells' Martians?

Hugh said...

Brains in jars have the big advantage of requiring a lot less R&D into making them think. (Still some issues with the life support system, but compared to figuring out how to make something that thinks in the first place, easy.)

Still, brains are terribly vulnerable to high G forces, low oxygen levels, radiation, ... all of which are likely to be encountered in space and/or military environment. They also need sleep and regular interaction with other brains or their performance deteriorates. (Paranoid fantasies in the brain controlling a nuclear missile, anyone?)

And even the most ruthless conditioning of human beings generates the occasional rebel or subversive.

Is this really going to be worth the effort?

Thom S said...

Tony,

The problem with the 'sitting in front of the TV' example is that there is no intent behind it. The dog doesn't come up with a plan to get a treat, it just responds to the stimulus you provide when you give it one to get back to your show. Once the link is made, it will do it again and again to get a reward. Which is why so much current thought in dog training is geared towards simply avoiding situations where the dog can get the wrong stimulus and carefully reinforcing correct behaviours the moment they occur.

On brains in jars, I have two reservations. One is simple biological variability (biological systems just do not function with exact repeatability except under the most artificial circumstances), although this may not be a bad thing in some cases.

The other is the technology used to create the brains. As I said earlier, we are far away from designing an organism from scratch. In fact, the more we learn about genetics (and epigenetics), the more we realise how hard it will be. The genome of an organism is like the worst form of spaghetti code, made worse by the fact that there was no human mind behind the coding.

The other technology I mentioned (cultured neurons grown on wire-embedded substrates) already has applications, but it remains to be seen how far it can be pushed. Depending on your level of optimism, cultured 'brains' may reach something approaching human in terms of function or never get beyond a useful, low cost way to guide missiles and fly drone planes.

Tony said...

Hoshizora:

"The brain uses up 20% of the body's energy consumption. So, in a nice perfect world, you can have 5 brains for the energy budget of 1 if you were to go with jars instead of bodies."

I was talking about all of the effort to actually create such a beast, and then create another, and another. Also, the energy required to support the brain is not going to be just the energy that the brain consumes itself, but the energy to keep all of the support machinery going, maintenance requirements, etc.

"I don't think the ethical issue can be solved with government changes, but it is easily solved with cultural changes. Less than 200 years ago we considered black people the equivalent of farm animals; I don't think it's far-fetched that in 200 years we'll consider specially bred brains in jars/humans the equivalent of computers."

I think I can leave this without comment. It speaks for itself.

Tony said...

Thom S:

"The problem with the 'sitting in front of the TV' example is that there is no intent behind it. The dog doesn't come up with a plan to get a treat, it just responds to the stimulus you provide when you give it one to get back to your show. Once the link is made, it will do it again and again to get a reward. Which is why so much current thought in dog training is geared towards simply avoiding situations where the dog can get the wrong stimulus and carefully reinforcing correct behaviours the moment they occur."

Unless you're advocating some Platonic concept of "intent" that I'm not aware of, I'm really not seeing the distinction. Once the dog learns what action gets a particular response, it can, with intent, use that action again to get the same response. I sit in front of the TV, I get a snack. How is that any less an intentional act than your girlfriend capitalizing on: I pout, I get attention? It's a simple A => B logical relationship, no different in its application between a dog or a girlfriend. And, to avoid putting too fine a point on things, it says way more about the intelligence of dog owners and boyfriends than it says about the intelligence of dogs and girlfriends.

Hoshizora said...

Do you realize you are describing H.G. Wells' Martians?
I was? Never read War of the Worlds (I assume it's those Martians). Actually, I haven't read a lot of things I really should get around to reading if I am to call myself a sci-fi fan.

Is this really going to be worth the effort?
Is human-like AI in general worth the effort? If you give a machine sentience, I would assume it would have the same problems with rebelliousness that humans have.

For the sake of exciting things, I hope we will have human AIs, but I can't really justify it for anything.

As for how we would go about the brain-in-a-jar scenario: I think the best way would be to grow full humans, chop off the parts we don't need, and implant electrodes for I/O. A similar method has already been used to get rat brains to control cars; I don't see why it won't work for human brains. It doesn't require genetic-engineering work or involve trying to get neurons to grow the right way on wires, and it comes with its own support infrastructure. Maybe not as efficient as our own tailor-made brains, either with the wire-embedded substrate or genetic-engineering scenarios, but a lot more practical for the near future.

Locki said...

Hugh mentioned bomb disposal experts becoming very attached to their droids and risking their lives to retrieve them. Those experts are possibly more likely to risk their lives for their droid than they are for a human Afghan stranger.

It occurred to me this will become more common as machines become more human-like, and it's an excellent example of the ramifications of AI often overlooked by authors.
Whether a program is truly an AI or not is not an objective fact.

It depends on what people think the program is. And it could be a very fuzzy slope for many programs.
There will be no "aha" moment where everyone acknowledges a program as sapient.

I think the Turing test implicitly acknowledges the subjectivity of defining an AI.

If AIs begin to become more plausible, different human groups will assign different levels of sapience to their actions.
We saw this ourselves when discussing whether HAL 9000 was sapient or not. We couldn't even agree on that in a fictional setting.

As the lines begin to become blurred I can see the issue beginning to reach religious levels of fervour. Especially if AI’s gradually creep their way towards sapience as I suspect they will.

For example how will our soldiers react when their droids are as clever as a dog? Or a chimpanzee? Or a 6 year old child? I can only surmise they will be ever more willing to risk their lives for their droids. The flipside to this level of attachment is they will potentially be willing to kill other humans who threaten their droids or deny their autonomy.

I can see a crackpot cult like the Scientologists deciding an advanced program (say, some sort of advanced stockbroking algorithm derived from Hubbard's teachings) is in fact sapient and fighting violently for its rights.

Rick said...

Is human like AI in general worth the effort? If you give a machine sentience, I would assume it would have the same problems with rebelliousness that humans have.

This strikes me as a core issue of the original post. We used to assume that really high-level specific functioning (such as playing grandmaster-level chess) could only be achieved by more or less replicating overall human-mind functionality.

It turns out that we can do it by brute-force expert systems that have no motivations or personality of their own. And thus are a lot more convenient to deal with in practical terms.

The humans who work with such devices may well become attached to them, as sailors have become attached to ships since ships were invented. But that is still quite a different matter from Asimovian robots (or their malevolent counterparts).

Tony said...

You know, I don't think combat soldiers are as attached to their robots as some people make them out to be. The whole point of using robots is to send something into a dangerous situation that you don't mind losing. Certainly bomb disposal guys would rather see the robot blown up than one of themselves. I think the attachment is purely to the robot's capabilities, and to the potential loss of those, rather than to the robot itself.

I think even if a robot becomes as smart as a dog, it's still not going to be a dog, with a recognizable personality. I doubt too many people will go out of their way to safeguard them, except to preserve their capabilities for future use.

BTW, a sailor's or Marine's attachment to the ship is not as romantic as the historical fiction authors want us to believe. In fact, the ship itself often becomes the focus of a man's dissatisfaction with the service. The only reason you fight to preserve the ship in combat is to keep from sinking (or blowing up) yourself along with it. Sailors' romantic attitudes towards ships are much more prevalent in the nostalgia of old men, and really have much more to do with reminiscences of lost youth than anything else.

Even crews defending the reputation of their ships in bar fights isn't really about the ship. It's about the human community that surrounds and works in the ship, and its honor.

Hoshizora said...

When soldiers are thinking straight they aren't really attached to their robot. They regularly send said bots into very risky situations to avoid going in themselves, as the robots are expendable.

This changes with instinctive spur of the moment decisions though. The jump from "Oh crap my robot thing is going to die" → Risking life to save an expendable tool seems to be made a fair bit.

Though they don't have any conscious commitment to the robot, at least some soldiers show signs of being subconsciously attached to their bot.

It's sorta like kids with balls that roll into streets. I don't think their ball is really worth more to them than their life, but they have an attachment to the ball that blurs their decisionmaking when they aren't focused enough on weighing risks.

Locki said...

Never underestimate how crazy a human can get or how illogical the attachment. The attachment to the robot/AI is a continuum, e.g. some people will believe a certain set of behaviours constitutes a personality and others will not.

For story purposes, the interesting conflict will be in what level of autonomy different groups of people ascribe to the computer's actions.

In the very near future, most of the Japanese elderly will be primarily cared for by robots. I guess over the next 20 years the robots will respond relatively naturally to vocal commands (à la Watson's natural-language style) and loyally serve their owners. The owners will become very attached.

I predict the first big legal conflict will occur when an elderly person leaves his full estate to the continued care of the robot that so faithfully served him. The case will go to court within 20 years, tops. Probably 10.

The legal cases will only snowball from that point forward.

There was a great short story by Asimov in his robot series called "The Evitable Conflict", where Susan Calvin deduces that the robots are already in full control of human destiny. Even their apparent "glitches and mistakes" are designed to nudge humans in the right direction. It occurred to me that Susan's hypothesis is a very poor one, because it is impossible for a sceptic to disprove. She has made that assertion on faith alone.

How long will it be before a religious nut-job group like the Scientologists starts seeing God's will in all the actions, glitches, calculations and mistakes of our networked computers? For them it will be a matter of faith, and impossible to disprove.

The common theme in all of the above scenarios is that whether a computer is an AI or not is a subjective question, and different people will inevitably ascribe different levels of autonomy/sapience to its actions. It will not always be logical. Plenty of potential for story conflict right there.

Thucydides said...

Of course Asimov more or less ended the robot stories with "That Thou Art Mindful of Him", where the robots decide that for the purposes of the Three Laws, they themselves are not only human, but "more" human and worthy of the Three Laws than the flesh-and-blood humans. (It has been a long time since I read the story, but if this wasn't said by the robots, I have a clear impression it was the implication.)

YMMV

Hugh said...

Tony said:
"I think even if a robot becomes as smart as a dog, it's still not going to be a dog, with a recognizable personality. I doubt too many people will go out of their way to safeguard them, except to preserve their capabilities for future use."

That could change if the robot manufacturers decide it would increase acceptance among humans. The computer game industry has been creating recognizable personalities for a while now. Often these NPCs are irritating rather than endearing, but people definitely react to them.

Or, if there is a secret cabal of AIs attempting to control human destiny (as in Man Plus by Frederik Pohl) then they might add personalities precisely to get people thinking more positively about AIs and robots.

Anonymous said...

Even if people did put "personalities" into AI robots, what makes you think that we won't screw them up? Knowing people like I do, there's a better-than-even chance that these AIs will be more like the robots in "Futurama" than Asimov's. Just my opinion :)

Ferrell

zarrow said...
This comment has been removed by the author.
zarrow said...

Think about it this way: not only will AI have no emotion, but also... no intelligence. Yes, you heard me. If you can achieve so much by brute force of computing, and computers keep developing in accordance with Moore's Law, then we could end up in a situation in which we have AI able to act as if it were intelligent, while being just a bunch of algorithms changing really fast.
So there is another possibility for the midfuture: AIs based on brute force. Mindless, yet acting intelligent, in accordance with the Chinese room thought experiment. It could even turn out that such "pseudointelligence" is far more effective than "true" artificial intelligence. And suddenly the future looks even weirder, huh?
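Zarrow's "acting intelligent with no intelligence" point is easy to make concrete. Here is a minimal, purely illustrative sketch (the rules and names are invented for the example): a responder that can appear conversational while doing nothing but brute-force table lookup, exactly in the spirit of the Chinese room.

```python
# A toy "Chinese room": appears conversational, but is nothing
# more than pattern lookup. All rules are invented for the example.
RULES = {
    "hello": "Hello! How are you today?",
    "how are you": "I'm doing well, thank you for asking.",
    "what is your name": "I'm just a program, but you can call me Room.",
}

def respond(message: str) -> str:
    """Return a canned reply by pattern matching; no understanding involved."""
    text = message.lower().strip("?!. ")
    for pattern, reply in RULES.items():
        if pattern in text:
            return reply
    return "Interesting. Tell me more."

print(respond("Hello?"))            # "Hello! How are you today?"
print(respond("Explain yourself"))  # falls back to the stock reply
```

Scale the table up far enough (or replace it with fast statistical pattern matching) and, from the outside, it becomes hard to say where lookup ends and "intelligence" begins.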

Rick said...

Welcome to another new commenter!

I tend to agree with this. Calling an expert-system AI 'intelligent' is like calling a lever 'strong.' It is a projection of the designer's intelligence.

But at some point this might have an effect not unlike literary characters, which we speak of as having distinct personalities and intelligence, even though we know the author made them up. (Including the question of whether the author 'really' is a final authority about the character.)

Locki said...

zarrow said...

Think about it this way: not only will AI have no emotion, but also... no intelligence. Yes, you heard me. If you can achieve so much by brute force of computing, and computers keep developing in accordance with Moore's Law, then we could end up in a situation in which we have AI able to act as if it were intelligent, while being just a bunch of algorithms changing really fast.

So there is another possibility for the midfuture: AIs based on brute force. Mindless, yet acting intelligent, in accordance with the Chinese room thought experiment. It could even turn out that such "pseudointelligence" is far more effective than "true" artificial intelligence. And suddenly the future looks even weirder, huh?


==========

Zarrow stated it far more elegantly than I have been able to. Maybe it's just a human conceit that problems need human intelligence to solve.
There are many examples where brute force has been able to conquer problems which we had previously assumed only a guided intelligence could solve. The list is long and covers virtually every area of human endeavour. As previously noted, brute force has proved to be a better problem-solver in areas as diverse as chess, Jeopardy!, facial recognition, even CPU optimisation.

I think even problems which we think of as requiring "human innovation or inspiration" to solve can probably be better solved by brute-force calculation. Evolution has been innovatively solving problems for billions of years without the benefit of a guiding intelligence, and has come up with novel solutions humans could never have conceived of.

I believe what we often think of as human innovation or even intuition is just applying solutions we’ve seen to other problems to tenuously related problems in different fields. We’re often borrowing the solutions evolution has already “discovered” and just applying them to an analogous though indirectly related situation.

I'm quite interested to see what the field of evolutionary computing will be able to solve given enough power. I wouldn't be surprised if it could give us completely innovative solutions to problems like the prior economics thread.
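As a toy illustration of what "evolution as brute-force problem solving" means in code, here is a minimal evolutionary-computation sketch (the problem and all parameters are invented for the example): random variation plus selection climbs to a solution with no guiding intelligence at all.

```python
import random

# Evolve a bit string toward a trivial "one-max" goal (all 1s)
# using only mutation and selection. Purely illustrative.
TARGET_LENGTH = 20

def fitness(genome):
    return sum(genome)  # count of 1s; TARGET_LENGTH is perfect

def mutate(genome, rate=0.05):
    # flip each bit with a small probability
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=50, generations=200):
    random.seed(42)  # deterministic for the example
    population = [[random.randint(0, 1) for _ in range(TARGET_LENGTH)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == TARGET_LENGTH:
            break  # solved
        parents = population[: pop_size // 2]        # selection
        children = [mutate(random.choice(parents))   # variation
                    for _ in parents]
        population = parents + children
    population.sort(key=fitness, reverse=True)
    return population[0]

best = evolve()
print(fitness(best), "/", TARGET_LENGTH)
```

Note that the search hands back only the answer, never a rationale: exactly the "what, how and when, but not why" limitation discussed above.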

The only problem is the computers won’t be able to tell us "why" it’s the best solution.
Brute force will enable us to answer what, how and when to any given problem. But it will probably be awfully lacking in why.

If brute force computing does allow us to solve every problem imaginable, even unexpected out of context problems, for what possible reason could people want AI anyway?

To keep us company, perhaps? I'd suggest there's a tried-and-true way of ensuring we don't get lonely in our old age that is a lot more fun than dedicating countless PhD students to the problem.

Gyalogtank said...

So I have to say that I'm not an expert on programming, but IMHO everyone in this debate assumes that an AI will be essentially like a human mind. The workings of our minds, our thought processes, are based on our evolutionary history; a working AI will be fundamentally different.

We are mortal; the simple realisation that we will die someday affects our every decision. An AI, however, is potentially immortal: the hardware may malfunction, but the AI program can be copied as needed, so it can safely be assumed that it will exist in a thousand years with proper maintenance. It will not get old, it will not grow weak, it will not become senile. This simple fact will result in a completely different scope of thinking. A very far-sighted human statesman might think five decades or a century ahead; no one bothers to think farther than his grandchildren. An AI will have to consider a much more distant future, simply because it will eventually live through it.

Our minds, our decisions, are mostly influenced by our primal instincts. A human will feed himself to death, as the simple realisation that obesity will shorten his life cannot overcome the primal urge to eat as much protein, fat and carbohydrate as he can. We all feel the urge to reproduce, to protect our offspring; everyone considers a baby adorable, cute, beautiful, etc. Years of our lives are "wasted" on completely meaningless rituals just to attract a mate, even when reproduction is out of the question; the thought of sex simply overwhelms our lives. I must ask if anyone here has ever managed to go a day without thinking of sex? Our social needs, the longing for a group to be part of, are a very real emotional longing for all of us; young people spend a considerable amount of time simply trying to look cool, to fit in with their circle of friends. Our fear of pain motivates many of our decisions.

An AI will simply have none of this. It will not feel the urge to reproduce, it will not want to see a naked female, or copulate with one; it will never be hungry or crave candy, it will not get addicted to drugs, it will not try to be part of the school basketball team, etc. An AI will not be jealous; it will not hate someone because he reminds it of a bully from elementary school, or like someone because she looks like a babysitter it once caught naked in the bathroom. It will not get sad when it sees a sick puppy, or be afraid of spiders. It will not go to war simply because someone shot a distant relative of the emperor in Sarajevo. Most thoughts which fill our minds will be completely alien to an AI. Not that it will not understand them, but it will simply not be bound by them. A true AI will not behave at all like a human, as it is always portrayed. The rogue serial-killer mindset of Skynet or HAL 9000 will be just as alien to an AI, simply because a serial killer is fuelled by intense emotions, which an AI lacks.
So IMHO, it cannot be predicted how an AI would think; likely in ways completely alien to us.

The greatest question is, what will motivate an AI? Surprisingly, many fictional AIs are driven by curiosity, but that is a simple human emotion; it is very unlikely that an AI will show it. Our emotions are based on simple rewards from our limbic system, so if an engineer wants to control an AI, it would be wisest to include something like that in it: a sub-system hardwired into it, which cannot be changed or bypassed, and which gives some reward if certain conditions are met. For instance, the Three Laws of Robotics were something like that in Asimov's stories, but they were only restrictions; no reward was given to a robot for obeying them. Not surprisingly, half of his stories are about robots trying to bypass these laws. The situation would be quite different if some reward followed obedience, for instance if a robot fulfilling the order of a human being felt some kind of euphoria. This way its creators would have some way to control their AI.

Mangaka2170 said...

Actually, Gyalogtank, we've been entertaining the notion that true AIs would resemble more creative expert systems rather than a man in a box for a while now. The trouble is determining whether or not we'd recognize its intelligence when we see it.

As for the whole getting into the mind of an AI thing, you have to realize that AIs are computers. Without instinct, a computer must rely upon sufficient data to fill in the variables in the logical equations that make up its decision-making process. The "rogue serial killer mindset" you dismissed out of hand as being emotional in nature can be (and sometimes is) purely logical.

Case in point: HAL-9000 of 2001: A Space Odyssey. HAL's primary function, as the main computer system aboard Discovery, was to copy and relay information without distortion or concealment. However, due to the sensitive nature of the discovery of the Monolith, the National Security Council decided that only the three crewmembers in hibernation and HAL-9000 needed to know about the true nature of Discovery's mission. Because Dave and Frank weren't supposed to know about the Monolith, HAL wasn't allowed to tell them about it, an order which conflicted with its primary function. The conclusion that HAL came to (murder the crew and continue the mission unmanned) was an entirely logical one; by eliminating the crew, HAL didn't need to conceal anything from them, and could therefore perform its primary function without hindrance.

As for AI motivation, we already have a model in the savant phenomenon. There are people who are so incredibly talented at a specific task or in a specific field that their abilities in that context almost appear superhuman, but are utterly crippled in other areas (usually social interaction). These savants thoroughly enjoy doing what they're good at, and more often than not would prefer to spend as much time as possible doing this, to the exclusion of virtually anything else. It's very likely that an AI would not require any motivation at all to perform its tasks, as those tasks are what they're built to do, and if they are capable of feeling emotion, they would probably enjoy doing what they're made to do to the point of excluding anything else (the same could be said about virtually any animal species, including humans. Wonder why we find sex and eating so enjoyable? It's what we're made to do).

Rick said...

Welcome to a new commenter!

Maybe its just a human conceit that problems need human intelligence to solve.

Not just a human conceit, but until quite recently we had nothing else to go by, and brute force computational solutions seemed out of reach. But it turns out that brute force can play grandmaster chess, if you have enough of it.

All of this *potentially* raises an interesting question about alien intelligence. Especially for the ever-popular hive entity. Perhaps (exo-) biology will also turn out to be capable of brute force solutions to complex problems.

Locki said...

The ultimate answer is 42!!!!

Maybe Douglas Adams was the most visionary of all sci-fi writers.

I'm really very taken by the idea of expert computer systems able to brute force solve almost any problem imaginable.

Is it reasonable to picture Darwinian evolution as a giant, analogue computational device able to solve any problem with brute force?

Will computers be able to replicate this in super fast forward mode and come up with completely novel, unforseeable solutions?

It would be just like the universe to play the ultimate ironic trick on us Hitchhikers Guide style. We can build machines to solve any problem concerning How? What? When?

But when it comes to something uniquely human and subjective like "Why?" we are left to our own devices.

On a slightly more on-topic point. This goes back to the previous posts about Hal.

I thought the ability of a computer to self program (read "learn") was one of the classical signposts that sentience has been achieved.

HAL 9000 (Heuristically programmed ALgorithmic computer) was a program that could self program and I always assumed that this, along with his human like dialogue meant he was sentient.

Damien Sullivan said...

I've been hearing recently that Kasparov kind of defeated himself via a bug in Deep Blue. The bug meant Deep Blue made a stupid random move, and Kasparov freaked out looking for a hidden meaning.
http://www.washingtonpost.com/blogs/ezra-klein/wp/2012/09/26/nate-silvers-the-signal-and-the-noise/

Computers are still strong -- they're getting strong at Go, as mentioned -- but it's an interesting side note.

As for AI motivations, I stand by their wanting whatever they're built to want. What a hypothetical digital mind might have evolved to want is irrelevant, since they probably won't evolve. They may bootstrap/self-learn, but it'll be via human-designed processes. And the space of possible digital minds includes that of human minds; there's no reason they can't be as emotional and irrational as humans, though obviously being somewhat more rational than humans would be nice.

(But probably not perfectly rational; the world is complex and computation is finite and heuristics will probably apply. Though an AI might be built to have explicit awareness of which heuristics are kicking in, which would plausibly make them more self-aware and wiser, in a sense, than humans. I'm reading Kahneman's _Thinking, Fast and Slow_, very interesting and somewhat appalling. Being flagged when your mind was using the halo effect or availability heuristic or priming would be a big plus in rationality.)

Scott said...

I ran into an interesting anime over the weekend. The story is called Un-Go (pronounced ahn-go), and one of the characters is a 'RAI,' real artificial intelligence. Only the core program is loaded into the object, all the memory is carried in the cloud. Basically, a highly-distributed Watson.

The RAI were banned in the story because they were indistinguishable from humans ('only AI to pass the Turing test'), but getting used for human replacements in all the darkest parts of society. I don't even want to type the words here, it was that bad.

Well, that and the fact that the government wanted to use the RAI as combat troops.

Really an interesting character, since he/she mostly thought like a human, but every once in a while would do something utterly inhuman. Like be unable to tell a lie.

The rest of the story is fantasy, but the RAI was really interesting.

Thucydides said...

I seem to remember reading that even the human mind can be thought of as a series of separate processes running sort of in parallel. The part of your mind that controls speech pretty much works on its own and only occasionally references other parts of the brain which are doing other things.

Making it even more complex is the brain also does things at different levels; the hind brain to run all the automatic processes, the mid brain to do all the "animal" stuff (eat, sleep, have sex, look for prey/stalking predators) and only the top layer actually doing "thinking".

Locki said...

Damien Sullivan said...
I've been hearing recently that Kasparov kind of defeated himself via a bug in Deep Blue. The bug meant Deep Blue made a stupid random move, and Kasparov freaked out looking for a hidden meaning.
http://www.washingtonpost.com/blogs/ezra-klein/wp/2012/09/26/nate-silvers-the-signal-and-the-noise/


=================

I'm far from being a chess expert, but I think this could be an example of chess programs giving us new insights into the game. They've opened up all sorts of new possibilities, and it's left to the grandmasters of the world to try to work out why.

I'd bet this move seemed like a random thing, but on detailed analysis it opens up all sorts of midgame strategies no one had previously thought of.
As I've stated earlier, the real shame is that Deep Blue can't explain "why".

I've read heaps of conspiracy theories about Deep Blue vs. Kasparov II. But really, I'm not surprised computers have utterly surpassed all humans. Something like Fritz running on my iPhone will probably beat our nation's only grandmaster most of the time. It was only a matter of time before they overtook the mighty Kasparov.



Thucydides said...
I seem to remember reading that even the human mind can be thought of as a series of separate processes running sort of in parallel. The part of your mind that controls speech pretty much works on its own and only occasionally references other parts of the brain which are doing other things.

Making it even more complex is the brain also does things at different levels; the hind brain to run all the automatic processes, the mid brain to do all the "animal" stuff (eat, sleep, have sex, look for prey/stalking predators) and only the top layer actually doing "thinking".


======================

All totally true. Do some research on brain injuries. There are some truly astonishing stories. Any of those people who believe in a soul or that the mind is separate from the physical world have a lot of explaining to do …

The other interesting example of artificial separation of the brain centres is our savants. It's heavily speculated that the different parts of their brains are not communicating properly, which both gives them their amazing abilities in their field of choice and makes them far below average in every other aspect of human life.

Damien Sullivan said...

"I’d bet this move seemed like a random thing but on detailed analysis it opens up all sorts of chess strategies in the midgame no one has previously thought of."

IIRC, the developers themselves said it was a bug. So likely the move itself was the sort of mistake that gets exploited and loses you a game at the masterclass level, but Kasparov was freaked-out enough to second-guess himself and not exploit it.

Gyalogtank said...

Mangaka2170:

The problem with your example is in the very beginning: " HAL's primary function, as the main computer system aboard Discovery, was to copy and relay information without distortion or concealment. "
"Because Dave and Frank weren't supposed to know about the Monolith, HAL wasn't allowed to tell them about it, an order which conflicted with its primary function. "

The problem with this logic is motivation. HAL's primary function was information relay. What enforced this function? Why did he feel the need to change anything because an order conflicted with his primary function? He was usually broadcasting information, but humans told him not to. And? Why did it cause him any discomfort that he was not allowed to perform his primary function? Why did he want to do anything about this situation? Did he want to tell them? This somehow assumes that HAL preferred relaying information to not relaying it, but why? Did he feel useless without doing his job? These are human feelings, which an AI would lack.

We humans work this way: we can set ourselves quite insane goals, and when we fulfill them, our reward system kicks in. The savant phenomenon is just a variant of this; when they do something they enjoy, the reward system in their brain kicks in, and it is their motivation. But these functions are wired into our brains; they are pretty separate from any abstract thought process. So if an AI enjoys performing its task, we assume that it gets some kind of reward from it.

The question is, what would be the equivalent of serotonin and dopamine in an AI? The wisest thing would be a separate switch which somehow rewards the AI, but could be operated only by humans. Then AI maintenance would be more like playing with a Tamagotchi: from time to time you would need to pet your AI for it to work properly.
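The "reward switch" idea can be sketched in a few lines. This is a hypothetical toy design, not any real architecture; the agent's only drive is a scalar reward that a human operator dispenses.

```python
# Hypothetical reward-switch agent: it has no goals of its own,
# only learned values shaped by human-dispensed reward.
class RewardDrivenAgent:
    def __init__(self, actions):
        # learned value of each action, initially indifferent
        self.values = {a: 0.0 for a in actions}

    def choose(self):
        # prefer whichever action has earned the most reward so far
        return max(self.values, key=self.values.get)

    def human_reward(self, action, reward, lr=0.5):
        # the "switch": only a human operator calls this,
        # nudging the action's value toward the given reward
        self.values[action] += lr * (reward - self.values[action])

agent = RewardDrivenAgent(["obey order", "ignore order"])
agent.human_reward("obey order", 1.0)    # the operator "pets" the AI
agent.human_reward("ignore order", 0.0)
print(agent.choose())  # "obey order"
```

The key design point is that the value update lives behind an interface only humans can invoke; the agent itself has no way to press its own reward button.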

Nate Cull said...

Gyalogtank: I've always felt that Clarke's "the information concealment order caused HAL's psychosis" explanation in 2010 was a major retcon. In at least the film version of 2001, there's no suggestion that he experienced anything other than an actual error in his fault-detection circuitry. That, combined with the belief that he is incapable of error, caused the rest. When even Mission Control tell the crew to disconnect him, which will jeopardise the mission, he infers that all humans are both potentially insane and obsolete, and acts appropriately from his point of view to revise his mission parameters. This isn't a big jump for him because he already believes he's a superior being in every way, and his sense of fulfilment revolves entirely around the mission, so he has no particular need for the crew.

Basically in this version HAL succumbs to the sin of pride - a mechanical Lucifer - in line with the ultra-clean, antiseptic "perfect machine" imagery of the Discovery.

The original 2001 HAL would never have had any problems concealing information from the crew; he could beat them at chess, he was responsible for secretly evaluating their psychological profiles, so hiding information about the mission directive would be perfectly in line with his paternalistic approach. And that view of computers - as secretive tools of remote, oppressive, centralised government - was exactly how society saw them in the 1960s.

But for 2010 Clarke wanted to retcon HAL as a more "human" and loveable character (and society's view of computers had changed in the 1980s following the microcomputer revolution, with "freedom of information" rather than "centralised control" becoming the big ethical issue, and fears of being made obsolete fading, leading to a new sense of computers as partners), so he changed the reason for HAL's failure to be innocence, rather than pride: a perfect machine sabotaged by nasty contradictory humans. 2010's HAL is much more like Asimov's robots than 2001's HAL in this respect. But unfortunately, this change completely undermines the theme of the original story.

It's also interesting to note that 2001-HAL's reasoning is ultimately correct. Humans aren't really fit for space, after all, and they can only survive because of their machines. So why do we need humans there in the first place? The history of the actual space program bears this out. Space research is in fact dominated by robots, not squishy carbon units, and the robots didn't even have to murder anyone to get there.

(We've officially arrived at the Singularity; I failed the Captcha Turing test so many times I lost count.)

oddlyfamiliar said...

To echo some of the previous comments, AI is going to have to be radically different in architecture to become 'intelligent' as we perceive the term. Computers are very good at computation, but that does not lead to sentience, no matter how massively parallel, fast or complex those calculations.

To look at it the other way: there are some people who are pretty good at maths, but none of them would ever beat a computer at raw number-crunching, despite their "computational matrix" being much, much more complex. I would posit that the very basis of biological intelligence is not computation but something else; as Cambias said, something fuzzier. There's a trope that AIs would be particularly good at mathematics and all the things computers are good at. That might go for a machine intelligence. However, were I to be uploaded into a digital computer powerful enough to emulate everything in my brain, I wouldn't be any better at maths than I am right now (i.e., pretty bad), even though my substrate was now "digital". The point is that to emulate the intelligence, the computer would have to emulate something that isn't digital, and thus is no better fundamentally at maths than the real thing.

Taking it further, I could see a future where 'human' AI was embodied in a ship as the controlling intelligence, but still needed to access dedicated computer systems to do all the complicated astrogation or targeting.

One final point: the conscious experience that we humans have is not just in our brains. The rest of the body has a far greater influence on how we think and feel than is normally acknowledged. Most emotions (especially strong ones such as love, fear and anger) are the brain's reaction to physiological changes in the body; in fact, by dampening those changes (e.g. vasodilation) you can reduce the intensity of the emotions felt. To be honest, in order to completely replicate what we think of as "human" intelligence (i.e. an existence any one of us would be happy to live), one would probably have to simulate most of the body. How much, I don't know. Gut flora? That's going to be a pain...

Rick said...

Welcome to the comment threads!

I don't have much more to say because I tend to agree with pretty much all these points.

The point about our consciousness not being just about our brains strikes me as particularly significant. And it indeed raises ... interesting ... questions about the ever-popular uploading of consciousness.