Friday, September 14, 2012

Artificial Intelligence, Human Intelligence

Filming Jeopardy!

 The comments on a recent post about science fiction, A Literature of the Possible, included a considerable subthread about HAL, the ship's computer in 2001: A Space Odyssey. The discussion centered on HAL's literary role, and whether its function in the story could have been played by either a human mutineer, or something - such as a shark - not claimed or expected to have anything like 'intelligence' in the human sense of the term.

Here I will take the discussion in another direction, going boldly where many have gone before, including my own blog post on What Do AIs Want? - along with several earlier comment threads, which I am too lazy to dig out.

In that post, I discussed SF tropes regarding AI and its response to humans, starting with the ever-popular insane malevolence and ending with:
It will ignore us, but neither serenely nor with any other emotion. It is after all a machine, with all the emotion and motivation of a lawn mower.

Of all the possibilities regarding AI, this may be the one we least expected beforehand. So of course inevitably - thus far, at least - it seems to be the one that is coming true.

This was not expected because the only high-level cognitive intelligence we had to go by was our own. So we imagined artificial intelligence to be, at the least, similarly mysterious. This was the case long before computers or robotics were even imagined.

Roger Bacon was said* to have once created a brazen head, capable of speech and, implicitly, of thought. 'Time is,' said the head. A little later, 'Time was.' Still ignored (by Bacon's stupid servant, Miles), the head finally said 'Time is past!' - and shattered into pieces.

AI as a concept has been making heads explode ever since. Bacon's brazen head, as described, was not out to kill or enslave anyone. But it was enigmatic. You didn't know quite what it was thinking, or feeling, any more than you knew what HAL was thinking and feeling behind that unblinking red eye.

What we took for granted is that the AI would indeed be thinking, in some way at least broadly comparable to our thinking. And it would likewise be feeling ... something.

The ability to play chess well was long taken as a touchstone of 'real' AI. At one point, HAL mentions playing chess with Dave. Even in the early 80s, the supercomputer in WarGames, after learning from its sim runs that nuclear war is unwinnable, suggests playing 'a nice game of chess.'

I even remember an explanation of why chess made such a good benchmark: Because the brute-force solution of playing out all permutations to find out which ones worked would require a million trillion zillion moves. Even an IBM computer could never run them out, certainly not fast enough for a chess match. So for a computer to play grandmaster-level chess it would have to do whatever human grandmasters do. QED!

It did not work out that way, and QE wasn't D'd. As it turned out, of course, crunching through permutations by brute force is exactly how IBM's Deep Blue beat Garry Kasparov in 1997.**
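In fact, the skeleton of the brute-force approach fits in a dozen lines. Here is a toy sketch in Python - emphatically not Deep Blue, which added opening books, hand-tuned evaluation, and custom hardware; the game interface (legal_moves, apply, score) is hypothetical:

    # Toy minimax: play out every move sequence to a fixed depth and
    # pick the move that survives best-play replies. The game object
    # is a made-up stand-in for a real chess engine; score() is from
    # the root player's point of view.

    def minimax(game, depth, maximizing):
        if depth == 0 or not game.legal_moves():
            return game.score()
        values = [minimax(game.apply(m), depth - 1, not maximizing)
                  for m in game.legal_moves()]
        return max(values) if maximizing else min(values)

    def best_move(game, depth=4):
        return max(game.legal_moves(),
                   key=lambda m: minimax(game.apply(m), depth - 1, False))

No insight, no plans - just crunching.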

We still have only a hazy notion of how human grandmasters play chess. But by way of intriguing insights, I have read (possibly in a blog comment here) that while powerful chess computers can beat any 'unassisted' human player, chess masters using simple commercial chess programs can beat any computer playing on its own, without a human partner.


In any case, most AI work today is not at all about trying to emulate the still-mysterious workings of the human mind. It is about amplifying human intelligence - that of skilled programmers and subject-matter experts - by harnessing it to the brute-force power of computers.

This does not rule out the possibility that work on brain structure and psychology - perhaps amplified by expert systems - may yet unlock the mystery of human thought and lead to machines that really do think. But there is no particular reason to anticipate this in the plausible midfuture. Particularly since such work does not have the clear practical applications that expert systems do.

All of which casts doubt on some familiar SF tropes. Not all of them! An expert-system political machine (in the literal sense!) might amplify and implement its designers' scheme to conquer and enslave humanity. And if they were careless in algorithm design, it might fail to exclude them from enslavement.

But it will do so with no ambition or passion of its own - only by implementing its designers' schemes. And human freedom fighters won't defeat it by persuading it to adopt the ideas of John Locke or the Buddha. If they defeat it, they will do so by identifying limits or errors in its implementation (perhaps using their own expert systems to reverse-engineer these), and posing problems it cannot efficiently solve.

Faust lives. Frankenstein's monster, not so much.


Discuss.






* In a play by Elizabethan playwright Robert Greene. Greene also made a snide remark about William Shakespeare, which has the distinction of being the first notice of Will's presence in the London theatrical scene.

** Disclosure: I have an ongoing work gig on behalf of IBM. But in this case I don't have to cop to a shameless pander - Big Blue's work on AI truly is impressive.




The image of Jeopardy! comes from this Flickr page.

170 comments:

blarg said...

Certainly an interesting topic to discuss!

Long time lurker here delurking as the topic of AI is a fascinating one and related to my work.

I work in the computer games industry for a successful RTS developer (I hope you'll forgive me for not wanting to reveal which one) and the topic of AI is often at the heart of discussions about RTS games.

AI in most games is still fairly primitive. Brute-force approaches fail in them due to the vast number of variables compared to something like chess. Most AI relies on scripts to react to certain situations and to match up the right units.

It currently falls far short of human players and can generally only compete on higher difficulty levels through lots of cheating.

What it does excel at, though, is efficiency - for example, build queues instantly kicking in when one building is finished. That leads me on to what you talked about in the blog, and how it may lead to more interesting situations in the future (and for AI in books!).
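To give a flavour of how simple this is under the hood, here's a toy sketch in Python (every name is made up; this is nothing from any real engine): most game AI boils down to a list of condition/action rules checked every tick.

    # Toy scripted RTS AI: scan a fixed list of (condition, action)
    # rules each game tick and fire the first one that matches.
    # All of these names are hypothetical.

    rules = [
        (lambda s: s.under_attack, lambda s: s.rally_defenders()),
        (lambda s: s.barracks_idle and s.gold >= 100,
         lambda s: s.train("infantry")),
        (lambda s: s.supply_used >= s.supply_cap, lambda s: s.build("farm")),
    ]

    def ai_tick(state):
        for condition, action in rules:
            if condition(state):
                action(state)
                break

The 'instant build queue' efficiency comes free: the rule fires on the very tick the building finishes, with no human reaction time.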

It does seem likely that for the foreseeable future, and possibly mid future, that AI (at least as it is currently termed and not proper AI) will assist humans and free humans up from many tasks to do more thinking.

It certainly seems interesting how this could be applied, for example, in space battles. The computers on a ship could deal with all the course and direction changes, managing point defence and main weapons. The human commanders would be free to think about the tactical side of the battle, also well assisted by the computers.

Certainly sounds more interesting to me than completely AI-controlled spacecraft fighting each other.

M. D. Van Norman said...

“It currently falls far short of human players and can generally only compete on higher difficulty levels through lots of cheating.”

I knew it! :-)

Tony said...

blarg:

"It currently falls far short of human players and can generally only compete on higher difficulty levels through lots of cheating."

Some of it is blatantly simple, like giving each AI hit more effect than a human player hit from the same weapon.

The funny thing is that even simple AIs have unexpected emergent properties in operation. Just ask anyone who's programmed an automated tank for a programming class project and found out that the right search and pursuit algorithm is much more important than optimizing targeting.

Mangaka2170 said...

I'll be the first to admit that I'm no expert in the computer systems field, and my experience with programming is limited to game editor programs, a little HTML and a dabbling in BASIC, but I'd like to think that I have some understanding about the systemic differences between human cognition and AI, and also how AI works, in a very non-technical sense.

It's been my experience with computer games that there are two things that computer players do better than human players, and those are scenario awareness and micromanaging, both of which are consequences of the computer both generating the map and running the AI scripts of every object on that map. In every such game I've played, the computer follows one of a handful of available strategies, and if that strategy doesn't work, it keeps doing it again and again, with only minor variations, until it either works or I win, and these strategies all seem to be aggressive, often unwisely so.

Considering that this has proven mostly true for games since the '80s, I think it's safe to assume that that is the limitation that current computing architecture has; the only reason why computers are so good at chess is because all of the permutations of the game have been worked out for centuries, and if there's one thing that computers are good at, it's high-speed number crunching. As chess is relatively simple when compared to a lot of computer games (or real life, for that matter), the computer's ability to calculate the best possible course of action and then act upon it is reduced when the limitations of the programming can't keep up with the changing environment. After all, a computer is only as good as its programming. One advantage that we humans have over current AIs is our ability to adapt to new situations, drawing on our experience and, when all else fails, intuition. Afterward, we (hopefully) learn from our experience so that when a similar situation comes up, we can better respond to it.

So, the challenge becomes programming a computer that can learn from its experiences and devise its own solutions based on its own successes and failures. Until we do that, AIs are ultimately going to be limited to the strategies that they're programmed to work with (and minor permutations of them), which can lead a human player with experience playing against them to learn their pattern and adapt their play style accordingly.

Tony said...

Computers can be programmed to be good at chess because the number of variables and their domains are precisely known. That is actually also known for a wide variety of strategy games. (Because the programmer puts in all the variables that exist.) The problem with more complex strategy games is the lack of understanding of how the variables interact across the domain of possibilities. One could, for example, by playing the same game, with the same number of units, over the same map, millions of times, develop a good statistical database of the effects of different moves. Or one could simply talk to an expert player for, oh, several years, nonstop, and figure out what his reasoning process is in programmable terms.

IOW, what we know about chess goes into very careful programming of the heuristics of the chess-playing software. No other game has that knowledge base behind it, and likely never will, due to both extra complexity and much less play experience. Why this doesn't handicap humans is probably a question that AI researchers should be looking at (to the degree that they aren't already).

WRT learning qua learning, one of the big problems with expert systems is figuring out what variables matter and how they should be weighted. Another allied problem is identifying all of the variables to begin with. If your system doesn't have a sensor to collect data that affects the outcome of some decision, it will never be able to learn what it's doing right or wrong.

Damien Sullivan said...

"Considering that this has proven mostly true for games since the '80s, I think it's safe to assume that that is the limitation that current computing architecture has"

Not exactly. If multiple strategies are available, it wouldn't be hard to somehow evaluate the success of the current strategy, and to switch to something else if it's poor. Now, evaluating success well might be hard, but a basic cut would be possible in any system that can evaluate how well it's doing, which you usually need anyway.
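Something like this toy sketch (Python; everything here is hypothetical) would do for a first cut:

    # First-cut strategy switching: keep the current strategy while it
    # scores acceptably, otherwise jump to the best-scoring alternative.
    # evaluate() is whatever progress measure the game already computes.

    def choose_strategy(current, alternatives, evaluate, threshold=0.4):
        if evaluate(current) >= threshold:
            return current
        return max(alternatives, key=evaluate)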

Douglas Hofstadter's lab has programs that can sometimes detect when they're stuck in a loop, and force themselves to 'think' in other directions.

As for aggression, that's certainly not required. An old version of Risk for the Macintosh had three available AIs for the human to choose: aggressive, 'crazy', and neutral (which didn't attack until attacked.) Interestingly, the hardest play was one human vs. 5 neutrals.

Damien Sullivan said...

" So of course inevitably - thus far, at least - it seems to be the one that is coming true."

Eh, not really. What's true now is the most obvious path, combining 6 (Three Laws) and 9 (just a machine): the computers obey us unconditionally and unemotionally. We make them to obey us, and we don't know how to give them true emotions, nor yet have enough complexity to make emotions useful.

Though a Roomba presumably has behaviors analogous to "I'm hungry (for power)", "I'm stuck", and "I need to take a dump", interrupting its default "I like sweeping". Most other commercial AIs and industrial robots have only a single purpose in life, making emotional complexity pointless.

Tony said...

Damien Sullivan:

"Not exactly. If multiple strategies are available, it wouldn't be hard to somehow evaluate the success of the current strategy, and to switch to something else if it's poor. Now, evaluating success well might be hard, but a basic cut would be possible in any system that can evaluate how well it's doing, which you usually need anyway."

The problem is defining goals and progress towards their achievement. Emergent AI aggression in map-based strategy games is probably rooted in weighting positional goals more heavily than they should be, and cumulative success (i.e. holding on to something for a long time, even if not holding on to everything) not enough.
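In evaluation-function terms (a toy Python sketch, with made-up weights and attributes):

    # If territory gained is weighted far above territory held, the
    # highest-scoring plan is almost always another attack.

    def evaluate_plan(plan, w_gain=0.9, w_hold=0.1):
        return w_gain * plan.expected_gain + w_hold * plan.expected_hold

Flip those weights and the same AI turns into a turtle.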

"Douglas Hofstadter's lab has programs that can sometimes detect when they're stuck in a loop, and force themselves to 'think' in other directions."

Not to sound dismissive, but there are numerous simple strategies for breaking out of loops. There have to be, because iteration (i.e. looping) is a necessary computing function. These strategies can be implemented in even the most complex programs. The trick is to sew up every possible way for the loop monitor to fail.

"As for aggression, that's certainly not required. An old version of Risk for the Macintosh had three available AIs for the human to choose: aggressive, 'crazy', and neutral (which didn't attack until attacked.) Interestingly, the hardest play was one human vs. 5 neutrals."

Tenacious AI behavior is actually an intriguing phenomenon.

"Eh, not really. What's true now is the most obvious path, combining 6 (Three Laws) and 9 (just a machine): the computers obey us unconditionally and unemotionally. We make them to obey us, and we don't know how to give them true emotions, nor yet have enough complexity to make emotions useful."

Ummm...I'm not seeing how the putative laws of robotics can be combined with the mere programmed machine. Any failure of the machine to properly safeguard humans or itself would be totally the responsibility of the human programmer.

Damien Sullivan said...

" there are numerous simple strategies for breaking out of loops"

Identifying that you're in a loop, in a complex domain, is not trivial. The type of thinking Copycat and Metacat try to emulate has very little to do with standard programming loops.

"I'm not seeing how the putative laws of robotics can be combined with the mere programmed machine. Any failure of the machine to properly safeguard humans or itself would be totally the responsibility of the human programmer"

I don't know what the first sentence is supposed to mean; ultimately, the 'laws' were programming. For the second, that's a matter of where liability is distributed -- and why should it fall on the programmer, rather than the user of the robot who gives it orders?

'three laws' is metaphorical here; our robots are basically Second Law only. They obey orders because that's what they're built to do.

Tony said...

Damien Sullivan:

"Identifying that you're in a loop, in a complex domain, is not trivial. The type of thinking Copycat and Metacat try to emulate has very little to do with standard programming loops."

The method is explained in the first online paper I found describing Metacat. The program keeps a history of its activity and, following some undescribed set of rules, decides it's not going anywhere (and presumably prunes that branch of reasoning from its problem-solving model). Conceptually, that would probably be the first thing most experienced programmers would think of doing. The precise mechanics are probably pretty interesting, as would be the rules for deciding. But architecturally it's a pretty elementary and straightforward approach.
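Something like this toy sketch captures the architecture (not Metacat's actual mechanics, which I haven't seen):

    # Toy loop monitor: fingerprint each state as it goes by; a repeat
    # means this line of reasoning is circling and should be pruned.
    # Assumes states are hashable; is_done() is a hypothetical goal test.

    def run_with_loop_monitor(state, step, max_steps=10000):
        seen = set()
        for _ in range(max_steps):
            fingerprint = hash(state)
            if fingerprint in seen:
                return None        # going in circles: abandon this branch
            seen.add(fingerprint)
            if state.is_done():
                return state
            state = step(state)
        return None                # out of budget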

"I don't know what the first sentence is supposed to mean; ultimately, the 'laws' were programming."

What's the point of having laws when every action is programmed? The programmer decides what happens next, and probably sets up safety boundaries specific to the system and application.

"For the second, that's a matter of where liability is distributed -- and why should it fall on the programmer, rather than the user of the robot who gives it orders?"

If we're talking about Type 9 machines, we're not talking about Asimovian robots. User configurability is then in fact the responsibility of the programmer, because he's not supposed to turn out a product that is unsafe. Software Engineering 101, "Software Ethics".

Now, I can imagine systems, such as weapons, where the desired result is damage, and where even normal operation requires care on the part of the user. In those cases it's still the system designer's responsibility to give the user a set of constraints and normally-safe procedures.

"'three laws' is metaphorical here; our robots are basically Second Law only. They obey orders because that's what they're built to do."

Still not seeing it. They don't "obey orders", as if they had a contingency not to. They do what the software instruction tells them to do, then the next instruction, then the next. It's not the same thing, in any technical or logical sense that I am aware of.

Locki said...

I'm no programmer but I have a reasonable knowledge of competitive chess. I always thought Watson winning Jeopardy was a lot more eye-opening than Deep Blue finally beating Kasparov. It's the more impressive "AI" feat and the singular event that made me think the “skynet” future isn’t so impossible.

After all we always knew it was only going to be a matter of time before a computer could out-crunch even the greatest of grandmasters eg Kasparov.

A simplistic analysis: chess is a game with a limited set of variables, and at any one point in the game it's possible to mathematically determine who is winning by assigning a points value to each piece and their relative positions. From there it's just a case of having enough crunching power to keep optimising your position and eventually win. After all, a simple program like Fritz running on a 486 can beat most of the grandmasters in the world.
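The crude version of "who is winning" really is just adding up piece values (a toy Python sketch; real engines pile positional terms on top):

    # Crude material count: positive score means White is ahead.
    # board is an iterable of (piece_letter, is_white) pairs.

    PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

    def material_score(board):
        score = 0
        for piece, is_white in board:
            score += PIECE_VALUES[piece] if is_white else -PIECE_VALUES[piece]
        return score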

When you read the analysis of the six-game match between Kasparov and Deep Blue, the commentators noted that Kasparov played quite well in the early games but appeared to be psychologically broken by Deep Blue in the later games. What would happen was he’d use a lot of his time to devise a strategy to win the game, and Deep Blue had the uncanny knack of responding very quickly with the perfect counter. It was almost like the computer was reading his mind and knew what he was planning. It was so disconcerting that Kasparov actually asked for the game to be stopped and demanded that Deep Blue explain the move. The sign of a broken man! Needless to say he got crushed in the later games.

Computers have given us lots of brand new insights into chess but they can’t explain why they chose the move they did. It’s almost like they are AIs (reasoning machines) but are so different they fail the Turing test because they can’t explain it to mere humans.

Unlike chess, something like the game Go is supposed to be much harder for a computer to win, as it's very difficult to determine the degree to which you are winning until the game is over.
Which is why I always followed with interest IBM’s attempt to design a quasi-AI to play Jeopardy. From what I knew of competitive game programming, Jeopardy should have been very difficult to program, as it relies a lot on puns, analogies, obscure connections etc. Everything about the way the game was played was supposed to give the computer programmers kittens.

Watson of course won pretty handily.

I've read into it in quite a bit of detail and it's not just a case of having a huge search engine giving you the top-probability hit. Watson proved surprisingly quick at making some really obscure connections and answering questions even the best humans couldn't answer. It also proved surprisingly inept at some of the simple questions.

Maybe computers will be a true AI (either soft or hard) one day but be so different to humans that they never pass the Turing test - e.g. they can never explain their motivations and reasoning to us. The danger is that this inability to “explain” should not be mistaken for the AIs not having motivations or reasons in the first place.

Chris Lopes said...

"I always thought Watson winning Jeopardy was a lot more eye-opening than Deep Blue finally beating Kasparov. Its the more impressive "AI" feat and the singular event that made me think the “skynet” future isn’t so impossible."

Actually Watson was just a search engine with a natural language interface. It was no more self aware than the auto-complete feature on Google. That's one of the problems you run into when talking about AI. Computers are very good at imitating aspects of intelligence.

Mangaka2170 said...

It occurred to me that the best depiction of an AI in popular (or at least not obscure) fiction is not HAL from 2001, not Data from Star Trek, not the Tachikomas from Ghost in the Shell; these are all human characters in the guise of machines. Instead, I think that a better example was Destiny's main computer from Stargate Universe; Destiny has a mission, and it is going to do everything in its power to accomplish its mission, because that is what it was designed to do; anything it does for its crew is accommodated as long as it does not interfere with that mission. People are left stranded on more than one occasion (and there were many close calls) because the ship's computer would only allocate so much time in one star system before it went to FTL. Destiny's computer is fundamentally a machine, and therefore it acts accordingly.

jollyreaper said...

Per Rick's original speculation, I would call this genie-AI. It does what you want, can do complex things, but lacks individual motive and desire. And if the bad guy gets ahold of the lamp, the genie will blindly follow those orders.

The advantage of this scenario in a story is that the AI is no longer a principal character. When people talk about a ship or location being like another character, they're still different from someone with a speaking role. A strong AI would not just become part of the setting; it would dominate the entire story and reduce humans to irrelevance.

Genie-AI might remain a powerful asset and struggle for control of it could drive the plot but at least the conflict is between human characters and thus remains relatable. Similarly, it's very, very difficult to tell an interesting singularity story.

jollyreaper said...

I don't see Watson taking over the world like Skynet but I do see similar technology automating many people out of jobs. Then what?

 Ashley said...

I don't often comment, even though all the posts are thought-provoking. I work in cognitive behavioural processes, and what I would say is that intelligence implies a mind. However, as we have no theory of mind which we can test in any meaningful way, I think AI is psychology's equivalent of FTL: handwavium using unobtanium. YMMV.

Locki said...

Chris Lopes said...

Actually Watson was just a search engine with a natural language interface. It was no more self aware than the auto-complete feature on Google. That's one of the problems you run into when talking about AI. Computers are very good at imitating aspects of intelligence.

=====================

Geez, that's a bit dismissive of 10 years of total effort by a group of 30-ish of IBM's best scientists. I guess your summary is essentially true, but at least for myself it's startling that a computer can be programmed to be far better at aspects of natural language than the best human players in history.


jollyreaper said...

I don't see Watson taking over the world like Skynet but I do see similar technology automating many people out of jobs. Then what?


==================

Already been happening. The old document discovery process major law firms used to go through before a big case is already starting to be massively streamlined and cheapened by using computers. Thousands of law students worldwide will have to start flipping burgers to pay their way through college now.


P.S. This captcha system is driving me to drink. Strong drink. Can you at least use an easier image?

Damien Sullivan said...

"What's the point of having laws when every action is programmed? The programmer decides what happens next, and probably sets up safety boundaries specific to the system and application."

"Still not seeing it. They don't "obey orders", as if they had a contingency not to. They do what the software instruction tells them to do, then the next instruction, then the next. It's not the same thing, in any technical or logical sense that I am aware of."

The Laws *are* the programming. As described by the character Susan Calvin, they're not external laws that veto or force the robots' behavior, they're the laws of thought describing how the robot works. A robot doesn't go "gosh, I must save that human because the First Law says so", it wants to save the human, a desire expressed in simplified form by the English version of the First Law.

"Unlike Chess, something like the game Go is supposed to be much harder for a computer to win as its very difficult to determine the degree to which you are winning until the game is over."

Not any more. Programs with new Monte Carlo Tree Search techniques I don't understand are playing at the 6 dan level on KGS. That's right at the threshold of professional play. There's been a huge increase in strength, only a few years ago I think they were more like 6 kyu (-6 dan, essentially.)

Cambias said...

I think part of the problem with creating Artificial Intelligence is that we don't have the foggiest notion how Natural Intelligence works.

Seriously: we're just scratching the surface in learning how human brains function. They aren't digital computers (sorry, uploaders), they seem to be a vast network of little specialized analog computers, constantly changing their connections and interactions.

And operating in parallel with (and crossing over with) the neuro-electric brain is a chemical system which is almost as complex.

My intuition is that Artificial Intelligence will eventually be split into two fields. One will develop useful "pseudo-intelligent" systems using natural language and heuristics to perform "intelligent" tasks like driving, shopping, work, etc.

There will also be a branch of AI developing genuine Machine Intelligence. Machine Intelligence will be very very alien, and will solve the problems of being intelligent in ways human brains don't.

Chris Lopes said...

"Geez tahts'a bit dismissive of 10 years of total effort by a group of 30-ish of IBM's best scientists. I guess your summary is essentially true but at least for myself its startling a computer can be programmed to be far better at aspects of natural language than the best human players in history."

First, I'm not dismissive of the effort that went into Watson. It's an impressive system to be sure. It just isn't sentient.

Second, it was able to beat the best human players because it was plugged into a database that covered large areas of human knowledge. Since most people (not named Sheldon Cooper) can't know everything, that gave Watson a considerable advantage. Watson wasn't smarter than its human opponents, it was simply designed to search through the database (after parsing the question) faster.
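In cartoon form, the pipeline is something like this (a gross oversimplification in Python; the real system, DeepQA, ran hundreds of scoring components in parallel, and every name below is a stand-in):

    # Cartoon of "parse the question, then search the database."
    # knowledge_base: dict mapping keyword -> set of candidate answers.

    def parse_keywords(clue):
        # crude stand-in for real natural language parsing
        return [w.strip('.,!?"').lower() for w in clue.split() if len(w) > 3]

    def answer(clue, knowledge_base):
        keywords = parse_keywords(clue)
        votes = {}
        for kw in keywords:
            for candidate in knowledge_base.get(kw, ()):
                votes[candidate] = votes.get(candidate, 0) + 1
        if not votes:
            return None, 0.0                  # too unsure to buzz in
        best = max(votes, key=votes.get)
        return best, votes[best] / len(keywords)

The cleverness is all in the parsing and the confidence estimate, not in anything resembling understanding.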

Rick said...

Welcome to new commenters! Occasional commenters, too!


and these strategies all seem to be aggressive, often unwisely so.

One factor: There is an asymmetry between the goals of game designers and game players. The players want to win. The designers want to provide an exciting game, including allowing good players to win.

The worst selling wargame of all time would be Sitzkrieg!

Not unrelated to Damien's point about 'neutral' being the most effective strategy for a Risk AI.

However, as we have no theory of mind which we can test in any meaningful way, I think AI is psychology's equivalent of FTL: handwavium using unobtanium. YMMV.

My mileage is pretty close to yours. The one distinction I would make is that FTL has problems with something we do know, General Relativity. 'True' AI deals with something, the human mind, about which we don't even yet have a decent theory.

Damien Sullivan said...

Well, consciousness we don't have a theory for. Intelligent decision making, we have various handles on. Not perfect ones, but there's active debate as to whether we need "better tools" or just "more stats and data". So far statistical machine translation does about as well as hand-crafted programs with grammars.

Neutral actually wasn't the best strategy overall. IIRC, if you played a mixed game, they probably got chewed up. Aggressive might have been best, but I remember it not being too hard to win in general... okay, if one used the "pick starting countries" mode, aiming for Australia, or maybe most of North America.

It was just that *all* neutral became really hard, even from Fortress Australia, probably because you're the only aggressor. You attack someone, they never stop attacking you to regain their country, everyone else builds up armies in peace. I think a key strategy was to wipe out a local base, so they'd have to go through someone else to get back and thus start a feud. But it's been a long time.

Mukk said...

Unless the human brain is magic, AI is possible at some future time simply by copying all the human mechanisms. That time would be some way off, though; the hard way would be incredibly complex.


But let's assume we are going to figure out how to make something intelligent sooner than that. It's going to 'want' whatever we design it to want. It's going to be motivated to do whatever we design it to do.

So the question is 'What kind of minds would we design?' I see two answers.
#1 A generalist mind that's more human. It would be wonderfully modular. When you don't need it for one factory job you send it to assist human resources, or to the next part of the assembly line, or whatever. It would have human judgement, so you're unlikely to end up being held legally responsible for some accident because the AI was too narrow-sighted to avoid it.

But it would be much more technically difficult to create this. This kind of AI would be much more prone to going crazy in the exact same way humans normally go crazy. And if all went well the thing may request legal rights.

#2 Task-oriented AIs with little peripheral support. These are more technologically feasible. They would be cheaper to manufacture and you wouldn't have to worry about their rights. On the other hand you could easily have problems because the robots don't care about things they should.

Vershner said...

The original article reminds me of a long discussion I had with an old housemate of mine regarding Blue Gene simulating part of the human brain. The popular press articles claimed 4.5%, but the reality was, as always, rather more fuzzy.
My housemate took this to mean that a true thinking computer was in sight, or to use the local vernacular: in the plausible midfuture. However, that's only part of the picture. Even a fully simulated human brain is not automatically intelligent. That's only the hardware; we have much less understanding of the software. Will a simulated brain immediately start functioning and learning the moment it's switched on? Probably not; humans have a huge amount of inherited instinct. Of course it may be possible to cheat there, without needing to fully understand its functionality, by copying instincts and memories from a real human brain. We fall straight into Philip K. Dick territory then.

jollyreaper said...

Saturn's Children is about post-human robots. The model they ran with there is "we know human brains, let's try to replicate them in another medium."

Each fresh brain is essentially a newborn and you get a year of crying as the pathways begin to map. The mind is placed in a series of bodies that replicate the stages of human development.

Full backups are possible so the mind is backed up at every point. Once a mind has reached maturity, it is called the template master for a line of robots.

Making a template is expensive and making copies is cheaper. So there are many brothers or sisters of the original which would be known as the patriarch or matriarch. Memories can be shared between siblings but not between unrelated bots. A memory can't work with a neural net too divergent from the one that laid it down.

This goes especially for humanoid robots designed to serve people. Also, because they can't be "programmed," they have had strong behavioral conditioning, such as reacting with awe to the presence of a living human. There are slave chips that can deliberately manipulate and alter emotional states, but that was a hack other robots came up with.

At the time of the story humanity has died out and the robots have inherited the solar system. When the humans went away, the most canny of the humanoid models stepped into the vacuum and set themselves up as slave masters. Seeing as the original mental model for the bots was human, it seems reasonable that much of their thinking can be comprehensible to us human readers after a fashion.

Tony said...

Damien Sullivan:

"The Laws *are* the programming. As described by the character Susan Calvin, they're not external laws that veto or force the robots' behavior, they're the laws of thought describing how the robot works. A robot doesn't go 'gosh, I must save that human because the First Law says so', it wants to save the human, a desire expressed in simplified form by the English version of the First Law."

Except that Type 9 machines in Rick's taxonomy don't have motivations, no matter how complex and extensive the software. In the context of those types of machines, the laws of robotics aren't laws at all, just programming principles that we pretty much already follow, to the degree that they are relevant for any particular application.

Now, in a world of intelligent, autonomous general purpose robots, one would have to question the reality of uncontrollable motivations. They would in effect be neurotic compulsions. Imagine a robot that can think for itself generalizing such a compulsion into keeping humans from doing anything that bears any risk whatsoever.

"Not any more. Programs with new Monte Carlo Tree Search techniques I don't understand are playing at the 6 dan level on KGS. That's right at the threshold of professional play. There's been a huge increase in strength, only a few years ago I think they were more like 6 kyu (-6 dan, essentially.)"

Still just a brute force goal-seeking technique. Interesting in implementation detail, but not moving us any closer to strong AI.
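Stripped of every refinement, the core is just this (a toy flat-Monte-Carlo sketch in Python; real MCTS grows a selection tree on top, and the game interface here is hypothetical):

    # Flat Monte Carlo: for each legal move, play many random games to
    # the end and keep the move that wins most often. Brute force with
    # statistics, not understanding.
    import random

    def random_playout(game, player):
        while not game.is_over():
            game = game.apply(random.choice(game.legal_moves()))
        return game.winner() == player

    def monte_carlo_move(game, playouts=1000):
        me = game.side_to_move()
        def win_rate(move):
            wins = sum(random_playout(game.apply(move), me)
                       for _ in range(playouts))
            return wins / playouts
        return max(game.legal_moves(), key=win_rate)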

Tony said...

Cambias:

"There will also be a branch of AI developing genuine Machine Intelligence. Machine Intelligence will be very very alien, and will solve the problems of being intelligent in ways human brains don't."

For example, a machine intelligence would probably never derive calculus. It would simply figure a required level of precision, slice the problem up into a number of elements that will meet that precision level, and just crunch through.
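In code, that strategy is about eight lines (a toy Python sketch):

    # "Calculus without calculus": midpoint sums over ever-finer slices
    # until two successive results agree to the required precision.

    def integrate(f, a, b, precision=1e-6):
        slices, previous = 1, None
        while True:
            width = (b - a) / slices
            total = width * sum(f(a + (i + 0.5) * width)
                                for i in range(slices))
            if previous is not None and abs(total - previous) < precision:
                return total
            previous, slices = total, slices * 2

    # integrate(lambda x: x * x, 0.0, 1.0) -> about 1/3, no calculus needed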

Locki said...

Damien Sullivan said...

Not any more. Programs with new Monte Carlo Tree Search techniques I don't understand are playing at the 6 dan level on KGS. That's right at the threshold of professional play. There's been a huge increase in strength, only a few years ago I think they were more like 6 kyu (-6 dan, essentially.)

==================

Thanks for passing on that information! You just won me $50 from a 10-year-old bet I had with an old friend. I just knew it was only a matter of time before a suitably clever programmer with the requisite knowledge of Go got computers playing it competently. No sapient spatial reasoning required!
Can anyone here describe how it was possible? In small, simple words?

Locki said...

For example, a machine intelligence would probably never derive calculus. It would simply figure a required level of precision, slice the problem up into a number of elements that will meet that precision level, and just crunch through

================

But what has been interesting is that a computer has no need to understand calculus. I was under the impression a lot of our aerodynamic optimising (eg stealthy, fast AND manoeuvrable jets) was only possible because of the brute-force calculating/optimising power of the computers. If Isaac Newton himself was reborn he wouldn't have been able to mathematically optimise the solutions required for the B-2 bomber.

On a side note, I read this is why the Soviets were a bit behind on computers. Stalin himself considered brute-force calculation a bit crude and a sign of Western decadence. The mathematical solution was always more elegant.

This is related to the point about computer-assisted intelligence. Throughout history there have been many problems which we assumed would require AI to solve (chess, then Go, then Jeopardy, then some medical diagnostics, then some of the fancy computer modelling). After we learned enough, we realised intelligence isn't required at all to solve the problem.

I would like to hypothesise that it will soon be possible for a computer to write a very good, "original" sci-fi/fantasy novel.
I picture the computer using natural language techniques similar to Watson's, combined with a comprehensive thesaurus, a complete encyclopaedia of knowledge and, most importantly, the "Tough Guide to Fantasyland" loaded into its database; the computer could carefully write a book with a very wide vocabulary and perfect grammar, and avoid every overused trope about upturned-nose princesses ever imagined.

The human merely loads in terms like "character names, protagonist, antagonist motivations, tone, upbeat ending, primary themes, secondary themes, based on high medieval times, amount and type of mcguffinite, open-ended plot", and the computer goes off, accesses all of human knowledge to comply with these parameters, and combines it into a readable original story with its natural language abilities.
It could be argued this program would free up authors to concentrate on the truly creative process.

The computer would have no idea about human life, but I bet it'd write a better story than most mainstream authors nowadays.
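At its very dumbest, the idea is just slotting the human-supplied parameters into stock templates (a toy Python sketch; everything in it is a placeholder, and the Watson-grade version would need real natural language machinery instead):

    # Dumbest-possible story generator: fill chosen parameters into
    # stock templates picked at random.
    import random

    TEMPLATES = [
        "{protagonist} set out from {setting}, sworn to stop {antagonist}.",
        "In {setting}, {antagonist} hoarded the {mcguffin}; "
        "only {protagonist} dared object.",
    ]

    def generate(params):
        return random.choice(TEMPLATES).format(**params)

    print(generate({"protagonist": "Aline", "antagonist": "the Regent",
                    "setting": "the port of Vress", "mcguffin": "star-metal"}))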

jollyreaper said...

I think it will be possible for a computer to write formula books. They won't be original or good but probably would serve for people who like well-trodden genres with little variation.

The dumbest example was "personalized fairy stories!" for kids that came out way back when. Character names were variables and were replaced with the purchaser's kid's name when printing.

Now what would be interesting is if a writer could specify the general premise of a story, seed an autogenerator and get half the book autowritten. I'd be curious to know how the big names work when they're churning out books via ghostwriter. Do they write the outline and tell the ghost to fill in the blanks?

Hugh said...

If the near future of AI is an extension of current day trends rather than a Dalek/Cylon "turn on your creators" scenario, we'll be seeing military AIs serving alongside humans.

I wonder how this will work out. There's the supposed Frankenstein complex where people distrust or even fear intelligent robots and presumably would keep their distance. Ripley and Bishop in Aliens would be an example.

Against that, Peter Singer in Wired for War has accounts from Iraq of human troops risking their lives to rescue bomb disposal robots under fire. Objectively this makes no sense: the robots are supposed to be expendable.

How else might soldiers react to AIs?

Hoshizora said...

[i]Against that, Peter Singer in Wired for War has accounts from Iraq of human troops risking their lives to rescue bomb disposal robots under fire. Objectively this makes no sense: the robots are supposed to be expendable.

How else might soldiers react to AIs?[/i]
I don't think AIs change anything. People get attached to objects, regardless of the real or illusionary sentience level. It's not intelligence that determines value, it's familiarity.

People are attached to pets, childhood toys, bags, musical instruments etc. It makes perfect sense that soldiers risk their lives to save their robot; it's sorta like their dog - an object that follows orders, helps you out, and at least gives the illusion of being there for you when it counts. Bomb robots and their operators have had significant bonding time.

Hoshizora said...

oh, sorry, I didn't realize it had to be HTML, not BBCode. Don't really ever comment on things, just lurk.

Rick said...

Welcome to the comment threads! Don't sweat the formatting.

Your points make a lot of sense, and strike me as a likely explanation.

Thucydides said...

Perhaps the best way to go about creating AI is to start looking at how unintelligent creatures do things. It should be fairly easy to model insects, for example, and from there move to social insects.

Once you get a handle on how things work at that level, then we can start doing more challenging things like amphibians or reptiles. I am pretty sure that research of this sort is actually going on, but I'm not sure if there is any sort of progressive program at work or just researchers picking some evolutionary "niche" and going to work ("Hey, let's put in a grant for AI rabbits!").

It seems fairly obvious that intelligence (as we know it) evolved because it gave social, non-specialized omnivores the ability to adapt to hostile and variable environments faster than other forms of adaptation (and we have hominid branches which do seem to have gone down the specialization path, like the Australopithecines, Neanderthals or Homo floresiensis). Of course a machine-based AI will not have the same sort of evolutionary starting point unless the programmers build this into the design, and different starting points will probably result in different means of dealing with variable environments, some of which we may not even recognize as intelligence.

Thom S said...

I think part of the problem of intelligence is compounded by the innate human tendency to ascribe it to things.

A personal example: my wife works with animals and, as such, has some training (and experience) regarding the cognition of things like dogs, cats, horses and so on. So it really annoys her when pet owners (myself included) anthropomorphise their pets.

The thing is, once you start looking for it, our tendency to anthropomorphise things is everywhere. We ascribe thoughts and motivations to behaviours (even in inanimate objects), then build up a narrative and history to fit. A good way to see this is to give whatever it is to someone impartial, without the attendant personal history, and see what they make of it. It is amazing, in fact, how the behaviour of cars, computers and pets 'changes' when given into the care of a new owner for a little while.

I think the problem of AI, then, is that, even using the Turing test, we simply have no objective measure of intelligence due to our innate biases and the slippery nature of 'intelligence' as a concept.

Hoshizora said...

I think to get human like AI, we have to start at the hardware level. Neurons are not transistors.

Almost all of our computing technology has been focused on computing, making fast, accurate, and simple calculations.

AI research tries to piggyback off of this and use those fast, accurate, simple calculations to simulate the fuzzy, slow, but incredibly complex processes of the brain. It doesn't really work beyond a point, due to the impractically large amount of computing power required to do something fuzzy and complex using a system built to do the simple and accurate. With the best supercomputers today, we can barely simulate the brain of a rat.

I agree that we have to start from scratch if we want human AI, but it's not just programmers that need to start from scratch; we need to reinvent the transistor and our whole method of thinking about making things think.

Tony said...

Thom S:

"I think part of the problem of intelligence is compounded by the innate human tendency to ascribe it to things."

I don't know that people ascribe intelligence to things so much as they treat them as if they had a personality.

As far as animals are concerned, your wife notwithstanding, they do have real personalities. We often interpret their personality quirks in human terms, because they have an effect on our interactions with them. But is that really so wrong? It's not like human personality traits are the result of great consideration or highly intelligent analysis.

WRT machines, the more complex they get, the more of a personality they seem to have. Any pilot will tell you that no two planes behave the same way, even if they are the same make and model, with all of the same modifications. Heck, does your dad's F150 pickup behave exactly like your F150 pickup? Above a certain level of complexity, machines just can't be made to be perfectly like one another, and their different operational histories lead to different wear patterns, which lead to different behaviors under the same stimuli. That we ascribe human personality traits to these differences in handling is also understandable, as a mapping device -- this plane's a "bitch", meaning it is tough to handle and can be difficult in marginal situations; or this one's "sweet", meaning it will always do what you need it to do, in any situation.

Now computers are very complex, but totally different. Since computers have become so standardized, and their software as well, very few people ascribe personality traits to them anymore. People try to personalize them with different skins, stickers, desktops, etc. But they still (mostly) don't make them out to have personalities.

What significance does all of this have for the future of artificial intelligence? I think people will be incredibly surprised to see a machine that has a real -- as opposed to an ascribed -- personality, and just as equally surprised to have a computer that is not personalizable, because it already is its own person. Hilarity will no doubt ensue.

Tony said...

Hoshizora:

"I agree that we have to start from scratch if we want human AI, but its not just programmers that need to start from scratch, we need to reinvent the transistor and are whole method of thinking about making things think."

Except that the transistor -- and logic gates made out of transistors -- exist as they do for some very fundamental physical reasons. Digital technology is simply not reinventable or reimaginable.

Hoshizora said...

Except that the transistor -- and logic gates made out of transistors -- exist as they do for some very fundamental physical reasons. Digital technology is simply not reinventable or reimaginable.
As I understand it, the brain isn't a digital computer.

So if we want to replicate the brain, we have two options: brute-force how it acts with digital technology (impractical due to the incredible amount of computing power required), or replace the digital-technology way of thinking with something more like how the brain does things.

Possibly, the best way to replicate brains is just to grow brains and stick them in jars and hook them up to electrodes.

Tony said...

Hoshizora:

"As I understand it, the brain isn't a digital computer.

So if we want to replicate the brain, we have two options: brute-force how it acts with digital technology (impractical due to the incredible amount of computing power required), or replace the digital-technology way of thinking with something more like how the brain does things.

Possibly, the best way to replicate brains is just to grow brains and stick them in jars and hook them up to electrodes."


No, the brain isn't a digital computer, but digital computer technology is the best thing we have. We simply can't make a copy of the brain in analog at anything like the microscopic scale of the actual human brain. It may be that only biology can actually do that. But if that's the case, copying the human brain into a biological machine would have no greater significance than just making more human brains the old fashioned way.

Geoffrey S Hicking said...

"If the near future of AI is an extension of current day trends rather than a Dalek/Cylon "turn on your creators" scenario, we'll be seeing military AIs serving alongside humans."

I thought I'd be ridiculously nitpicky today, and just state for nitpicking's sake that Daleks are not robots.

Interesting thoughts on AI though. Speculative thought - maybe we can make AIs that are human-like and not human-like? From the inscrutable omniscients to the ignorant human-facsimiles. Would make for an interesting setting...

Geoffrey S Hicking said...

Sorry, I meant a setting in which there are all types of AIs running about...

Mangaka2170 said...

Interesting thoughts on AI though. Speculative thought - maybe we can make AIs that are human-like and not human-like? From the inscrutable omniscients to the ignorant human-facsimiles. Would make for an interesting setting...

Already taken care of in Ghost in the Shell. The Tachikomas are fully self aware, and a major subplot revolves around their development of individuality and adoption of some human customs (not to mention that they're the cutest little spider-like tanks you'll ever see), while the Operators are almost indistinguishable from humans (other than the fact that they seem to have a limited number of models, and their fingers can split up for faster typing), but only have a limited ability to simulate human behavior and intelligence; in one episode of Stand Alone Complex, the Tachikomas managed to crash one long enough to get past her and pilfer an expert system she was guarding by invoking the "everything I say is a lie, and right now, I am lying," trick ("Sub-Turing AIs are such easy marks; can't even handle a simple logical paradox!").

As a result of these differences, the Tachikomas come to the conclusion that the reason why they don't have humanoid bodies is because humans would see them as a threat (using the Operator and her limited capabilities as a case-in-point).

Tony said...

Geoffrey S Hicking:

"I thought I'd be ridiculously nitpicky today, and just state for niticking's sake that Daleks are not robots."

But they are cyborgs, which is relevant in a possible existence where "artificial" intelligence is really just enhanced biological intelligence.

"Speculative thought- Maybe we can make Ais that are human like and not human like? From the inscrutable omniscients to the ignorant human-facsimiles. Would make for an interesting setting..."

The critical thought here is that if making a strong AI becomes common enough, they might be strongly typed to different jobs.

Thucydides said...

The critical thought here is that if making a strong AI becomes common enough, they might be strongly typed to different jobs

Just so long as they don't start running around screaming "Exterminate!" at every opportunity...

Thom S said...

Tony,

I should probably have been more coherent in terms of my phrasing: Animals definitely have personalities. There have been some wonderful studies, in fact, showing this trait in things as simple and uniform-seeming as ants and bees. My intuition is that the reasons have, like your father's truck, more to do with life history than genetics.

The point I was trying to make is that people seem to naturally go a step further, ascribing intent and cognition to these traits. This may be a problem when trying to decide if a machine is truly intelligent or not.

Thom S said...

As for finding a substitute for neurons, we already have one. We call them neurons. Check out, for instance, the good work being done with animats and organ printing.

Given the range of behaviours (some very specific) shown by biological organisms, it may be that there are a number of ways to build and control the gross architecture of a wetware computer. Some may be as simple as training separate modules and then sticking them together.

On the other hand, we're not going to be able to design an organism (read: dedicated wetware unit) from scratch any time soon, so the ultimate options for neurons in the near future may be limited to more advanced versions of current culture and training techniques, used for applications where low cost and disposability are important. Drones, perhaps.

Hoshizora said...

But if that's the case, copying the human brain into a biological machine would have no greater significance than just making more human brains the old fashioned way.
I disagree.

I think we can produce "ideal" brains, and raise them in ideal situations for the task they are going to do for the rest of their "lives." Let's say we grow a brain whose hobby is something we want it to do - CAPTCHAs, for example - and which loves doing CAPTCHAs and doesn't even want a reward, just more CAPTCHAs to do. Then we hook it up to images of CAPTCHAs and a way to input letters. Thus you have a CAPTCHA machine.

No regularly raised human is going to be like that, even if the physical components are the same.

Ethics issues aside, there are great benefits to designer "humans"

Tony said...

Thom S:

"I should probably have been more coherent in terms of my phrasing: Animals definitely have personalities...

The point I was trying to make is that people seem to naturally go a step further, ascribing intent and cognition to these traits. This may be a problem when trying to decide if a machine is truly intelligent or not."


Well, animals may not have cognition like we do, but they definitely have enough cognitive ability to form an intent. Over the lifetime of a dog, the animal and its human master develop a very clear grammar that they use to communicate with each other.

Yes, much of it is operant conditioning. But it's operant conditioning that works both ways. Not only does the master get what he wants through training the dog, but the dog also gets what she wants by learning how to get the master to respond with a treat, or let her out, or whatever.

How is the master any less trained than the dog? The master didn't invent sitting between the master and the TV to get a treat, nor did he invent standing by the door to get let out. The dog did that, and the "master" responds to a learned signal. The dog may not be able to do aerospace engineering, but she knows enough to get what she wants out of (to here) a dumb human.

Tony said...

Hoshizora:

"I disagree.

I think we can produce 'ideal' brains, and raise them in ideal situations for the task they are going to do for the rest of their 'lives.' Let's say we grow a brain whose hobby is something we want it to do - CAPTCHAs, for example - and which loves doing CAPTCHAs and doesn't even want a reward, just more CAPTCHAs to do. Then we hook it up to images of CAPTCHAs and a way to input letters. Thus you have a CAPTCHA machine.

No regularly raised human is going to be like that, even if the physical components are the same.

Ethics issues aside, there are great benefits to designer 'humans'"


I meant technological significance, as in it would probably be more efficient to grow a human brain, with its entire biological infrastructure, than it would be to manufacture one and provide it with a mechanical support infrastructure.

Having said that, I suppose you are right that one of the benefits (for certain values of "benefit") would in fact be the construction of highly focused intelligences. But then again, if we can do it with a built-up brain, why couldn't we do it with a human brain in situ, as it develops? Knowing how to force a brain to develop in a certain direction seems much, much simpler to me than trying to purpose-build one.

Of course there are the ethical issues involved. And those will probably keep the search for machine intelligence going for a long time. Enslaving human minds, especially in ways that leave the enslaved not even caring that they're tools, is always going to be problematic, probably even in some possible future totalitarian dictatorship. Enslaving machines, even intelligent ones? Not so much.

Hoshizora said...

I meant technological significance, as in it would probably be more efficient to grow a human brain, with its entire biological infrastructure, than it would be to manufacture one and provide it with a mechanical support infrastructure.

The brain uses up 20% of the body's energy consumption. So, in a nice perfect world, you can have 5 brains for the energy budget of 1 if you were to go with jars instead of bodies.

Possibly the support infrastructure "jars" we make for our brains won't be mechanical, but rather just the bare minimum of organic components of the body required to keep the brain alive and functioning. Either way, full bodies are energy hogs not doing us much good.

Also, with brains in jars rather than full bodies, you can hook them up right to your network for increased efficiency in distributing human computing tasks.
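
The arithmetic behind that 20% claim is easy to check. A back-of-envelope sketch (the wattages are the usual textbook ballparks, roughly 100 W for a resting human and 20 W for the brain; the jar overhead multiplier is pure guesswork):

BODY_WATTS = 100.0   # ballpark resting metabolic rate of a whole human
BRAIN_WATTS = 20.0   # the brain's ~20% share of that budget

print(BODY_WATTS / BRAIN_WATTS)  # 5.0, the "5 brains for the budget of 1" claim

# The support machinery isn't free; a guessed overhead multiplier shrinks it.
JAR_OVERHEAD = 1.5   # pumps, filtration, temperature control (invented figure)
print(BODY_WATTS / (BRAIN_WATTS * JAR_OVERHEAD))  # ~3.3 brains per body budget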


Of course there are the ethical issues involved. And those will probably keep the search for machine intelligence going for a long time. Enslaving human minds, especially in ways that leave the enslaved not even caring that they're tools, is always going to be problematic, probably even in some possible future totalitarian dictatorship. Enslaving machines, even intelligent ones? Not so much.
I don't think the ethical issue can be solved with government changes, but it is easily solved with cultural changes. Less than 200 years ago we considered black people the equivalent of farm animals; I don't think it's far-fetched that in 200 years we'll consider specially bred brains in jars/humans the equivalent of computers.

Thucydides said...

Do you realize you are describing H.G. Wells' Martians?

Hugh said...

Brains in jars have the big advantage of requiring a lot less R&D into making them think. (Still some issues with the life support system, but compared to figuring out how to make something that thinks in the first place, easy.)

Still, brains are terribly vulnerable to high G forces, low oxygen levels, radiation, ... all of which are likely to be encountered in space and/or military environments. They also need sleep and regular interaction with other brains or their performance deteriorates. (Paranoid fantasies in the brain controlling a nuclear missile, anyone?)

And even the most ruthless conditioning of human beings generates the occasional rebel or subversive.

Is this really going to be worth the effort?

Thom S said...

Tony,

The problem with the 'sitting in front of the TV' example is that there is no intent behind it. The dog doesn't come up with a plan to get a treat; it just responds to the stimulus you provide when you give it a treat to get back to your show. Once the link is made, it will do it again and again to get a reward. Which is why so much current thought in dog training is geared towards simply avoiding situations where the dog can get the wrong stimulus and carefully reinforcing correct behaviours the moment they occur.

On brains in jars, I have two reservations. One is simple biological variability (biological systems just do not function with exact repeatability except under the most artificial circumstances), although this may not be a bad thing in some cases.

The other is the technology used to create the brains. As I said earlier, we are a long way from designing an organism from scratch. In fact, the more we learn about genetics (and epigenetics), the more we realise how hard it will be. The genome of an organism is like the worst form of spaghetti code, made worse by the fact that there was no human mind behind the coding.

The other technology I mentioned (cultured neurons grown on wire-embedded substrates) already has applications, but it remains to be seen how far it can be pushed. Depending on your level of optimism, cultured 'brains' may reach something approaching human in terms of function or never get beyond a useful, low cost way to guide missiles and fly drone planes.

Tony said...

Hoshizora:

"The brain uses up 20% of the body's energy consumption. So, in a nice perfect world, you can have 5 brains for the energy budget of 1 if you were to go with jars instead of bodies."

I was talking about all of the effort to actually create such a beast, and then create another, and another. Also, the energy required to support the brain is not going to be just the energy that the brain consumes itself, but the energy to keep all of the support machinery going, maintenance requirements, etc.

"I don't think the ethical issue can be solved with government changes, but it is easily solved with cultural changes. Less than 200 years ago we considered black people the equivalent of farm animals, I don't think its far fetched for in 200 hundred years we'll consider specially bred brains in jars/humans the equivalent of computers."

I think I can leave this without comment. It speaks for itself.

Tony said...

Thom S:

"The problem with the 'sitting in front of the TV' example is that there is no intent behind it. The dog doesn't come up with a plan to get a treat, it just responds to the stimulus you provide when you give it one to get back to your show. Once the link is made, it will do it again and again to get a reward. Which is why so much current thought in dog training is geared towards simply avoiding situations where the dog can get the wrong stimulus and carefully reinforcing correct behaviours the moment they occur."

Unless you're advocating some Platonic concept of "intent" that I'm not aware of, I'm really not seeing the distinction. Once the dog learns what action gets a particular response, it can, with intent, use that action again to get the same response. I sit in front of the TV, I get a snack. How is that any less an intentional act than your girlfriend capitalizing on: I pout, I get attention? It's a simple A => B logical relationship, no different in its application between a dog and a girlfriend. And, not to put too fine a point on it, it says way more about the intelligence of dog owners and boyfriends than it says about the intelligence of dogs and girlfriends.

Hoshizora said...

Do you realize you are describing H.G. Wells' Martians?
I was? I've never read War of the Worlds (I assume it's those Martians). Actually, I haven't read a lot of things I really should get around to reading if I am to call myself a sci-fi fan.

Is this really going to be worth the effort?
Is human-like AI in general worth the effort? If you give a machine sentience, I would assume it would have the same problems with rebelliousness that humans have.

For the sake of exciting things, I hope we will have human-like AIs, but I can't really justify them for anything.

As for how we would go about the brain-in-a-jar scenario: I think the best way would be to grow full humans, chop off the parts we don't need, and implant electrodes for I/O. A similar method has already been used to get rat brains to control cars; I don't see why it wouldn't work for human brains. It doesn't require genetic engineering work or involve trying to get neurons to grow in the right way on wires, and it comes with its own support infrastructure. Maybe not as efficient as our own tailor-made brains, either with the wire-embedded substrate or genetic engineering scenarios, but a lot more practical for the near future.

Locki said...

Hugh mentioned bomb disposal experts becoming very attached to their droids and risking their lives to retrieve them. Those experts are possibly more likely to risk their lives for their droid than they are for a human Afghan stranger.

It occurred to me this will become more common as machines become more human-like, and it is an excellent example of the ramifications of AI often overlooked by authors.
Whether a program is truly an AI or not is not an objective fact.

It depends on what people think the program is. And it could be a very fuzzy slope for many programs.
There will be no “Ah Ha” moment where everyone acknowledges a program as sapient.

I think the Turing test implicitly acknowledges the subjectivity of defining an AI.

If AIs begin to become more plausible, different human groups will assign different levels of sapience to their actions.
We saw this ourselves when discussing whether HAL 9000 was sapient or not. We couldn't even agree on this in a fictional setting.

As the lines begin to blur I can see the issue beginning to reach religious levels of fervour. Especially if AIs gradually creep their way towards sapience, as I suspect they will.

For example how will our soldiers react when their droids are as clever as a dog? Or a chimpanzee? Or a 6 year old child? I can only surmise they will be ever more willing to risk their lives for their droids. The flipside to this level of attachment is they will potentially be willing to kill other humans who threaten their droids or deny their autonomy.

I can see a crackpot cult like the Scientologists deciding an advanced program (say some sort of advanced stockbroking algorithm derived from Hubbard's teachings) is in fact sapient and fighting violently for its rights.

Rick said...

Is human-like AI in general worth the effort? If you give a machine sentience, I would assume it would have the same problems with rebelliousness that humans have.

This strikes me as a core issue of the original post. We used to assume that really high-level specific functioning (such as playing grandmaster-level chess) could only be achieved by more or less replicating overall human-mind functionality.

It turns out that we can do it by brute-force expert systems that have no motivations or personality of their own. And thus are a lot more convenient to deal with in practical terms.

The humans who work with such devices may well become attached to them, as sailors have become attached to ships since ships were invented. But that is still quite a different matter from Asimovian robots (or their malevolent counterparts).

Tony said...

You know, I don't think combat soldiers are as attached to their robots as some people make them out to be. The whole point of using robots is to send something into a dangerous situation that you don't mind losing. Certainly bomb disposal guys would rather see the robot blown up than one of themselves. I think the attachment is purely to the robot's capabilities, and to the potential loss of those, rather than to the robot itself.

I think even if a robot becomes as smart as a dog, it's still not going to be a dog, with a recognizable personality. I doubt too many people will go out of their way to safeguard them, except to preserve their capabilities for future use.

BTW, a sailor's or Marine's attachment to the ship is not as romantic as the historical fiction authors want us to believe. In fact, the ship itself often becomes the focus of a man's dissatisfaction with the service. The only reason you fight to preserve the ship in combat is to keep from sinking (or blowing up) yourself along with it. Sailors' romantic attitudes towards ships are much more prevalent in the nostalgia of old men, and really have much more to do with reminiscences of lost youth than anything else.

Even crews defending the reputation of their ships in bar fights really isn't about the ship. It's about the human community that surrounds and works in the ship, and its honor.

Hoshizora said...

When soldiers are thinking straight they aren't really attached to their robot. They regularly send said bots into very risky situations to avoid going in themselves, as the robots are expendable.

This changes with instinctive, spur-of-the-moment decisions, though. The jump from "Oh crap, my robot thing is going to die" to risking one's life to save an expendable tool seems to be made a fair bit.

Though they don't have any conscious commitment to the robot, at least some soldiers show signs of subconsciously being attached to their bot.

It's sorta like kids with balls that roll into streets. I don't think the ball is really worth more to them than their lives, but they have an attachment to the ball that blurs their decision-making when they aren't focused enough on weighing risks.

Locki said...

Never underestimate how crazy a human can get or how illogical the attachment can be. The attachment to the robot/AI is a continuum: some people will believe a certain set of behaviours constitutes a personality and others will not.

For story purposes the interesting conflict will be in what level of autonomy different groups of people ascribe to the computer's actions.

In the very near future most of the Japanese elderly will be primarily cared for by robots. I guess over the next 20 years they'll respond relatively naturally to vocal commands (à la Watson's natural language handling) and loyally serve their owner. The owner will become very attached.

I predict the first big legal conflict will occur when an elderly person leaves their full estate to the continued care of the robot that so faithfully served them. The case will go to court in 20 years tops. Probably 10.

The legal cases will only snowball from that point forward.

There was a great short story by Asimov in his robot series called “The Evitable Conflict,” where Susan Calvin deduces the robots are already in full control of human destiny. Even their apparent “glitches and mistakes” are designed to nudge humans in the right direction. It occurred to me that Susan's hypothesis is a very poor one, because it is impossible for a sceptic to disprove. She has made that assertion on faith alone.

How long will it be before a religious nut-job group like the Scientologists starts seeing god's will in all of the actions, glitches, calculations and mistakes of our networked computers? For them it will be a matter of faith and be impossible to disprove.

The common theme in all of the above scenarios is that whether a computer is an AI or not is a subjective question, and different people will inevitably ascribe different levels of autonomy/sapience to its actions. It will not always be logical. Plenty of potential for story conflict right there.

Thucydides said...

Of course Asimov more or less ended the robot stories with "That Thou Art Mindful of Him", where the robots decide that for the purposes of the Three Laws, they themselves are not only human, but "more" human and more worthy of the Three Laws than the flesh-and-blood humans. (It has been a long time since I read the story, but if this wasn't said by the robots, I have a clear impression it was the implication.)

YMMV

Hugh said...

Tony said:
"I think even if a robot becomes as smart as a dog, it's still not going to be a dog, with a recognizable personality. I doubt too many people will go out of their way to safeguard them, except to preserve their capabilities for future use."

That could change if the robot manufacturers decide it would increase acceptance among humans. The computer game industry has been creating recognizable personalities for a while now. Often these NPCs are irritating rather than endearing, but people definitely react to them.

Or, if there is a secret cabal of AIs attempting to control human destiny (as in Man Plus by Frederik Pohl) then they might add personalities precisely to get people thinking more positively about AIs and robots.

Anonymous said...

Even if people did put "personalities" into AI robots, what makes you think that we won't screw them up? Knowing people like I do, there is a better than even chance that these AIs will be more like the robots in "Futurama" than Asimov's. Just my opinion :)

Ferrell

zarrow said...

Think about it this way: not only will AI have no emotion but also... no intelligence. Yes, you heard me. If you can achieve so much by brute force of computing, and computers keep developing in accordance with Moore's Law, then we could end up in a situation in which we have AI able to act as if it were intelligent, while being just a bunch of algorithms churning really fast.
So there is another possibility for the midfuture: AIs based on brute force. Mindless, yet acting intelligent, as in the Chinese Room thought experiment. It could even turn out that such "pseudointelligence" is far more effective than "true" artificial intelligence. And suddenly the future looks even weirder, huh?
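
A minimal sketch of that pseudointelligence (the canned phrases are invented for illustration): a responder that is nothing but table lookup. Scale the table and the search speed up far enough, and it acts intelligent with nothing whatsoever going on inside.

# A Chinese-Room-style responder: pure lookup, zero understanding.
RULEBOOK = {
    "hello": "Hello! How are you today?",
    "how are you": "Quite well, thank you for asking.",
    "do you understand me": "Of course I understand you.",
}

def respond(utterance: str) -> str:
    # No parsing, no model of the world, just pattern matching.
    key = utterance.lower().strip("?!. ")
    return RULEBOOK.get(key, "How interesting. Tell me more.")

print(respond("Do you understand me?"))  # "Of course I understand you."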

Rick said...

Welcome to another new commenter!

I tend to agree with this. Calling an expert-system AI 'intelligent' is like calling a lever 'strong.' It is a projection of the designer's intelligence.

But at some point this might have an effect not unlike literary characters, which we speak of as having distinct personalities and intelligence, even though we know the author made them up. (Including the question of whether the author 'really' is a final authority about the character.)

Locki said...

zarrow said...

Think about it this way: not only will AI have no emotion but also... no intelligence. Yes, you heard me. If you can achieve so much by brute force of computing, and computers keep developing in accordance with Moore's Law, then we could end up in a situation in which we have AI able to act as if it were intelligent, while being just a bunch of algorithms churning really fast.

So there is another possibility for the midfuture: AIs based on brute force. Mindless, yet acting intelligent, as in the Chinese Room thought experiment. It could even turn out that such "pseudointelligence" is far more effective than "true" artificial intelligence. And suddenly the future looks even weirder, huh?


==========

Zarrow stated it far more elegantly than I have been able to. Maybe it's just a human conceit that problems need human intelligence to solve.
There are many examples where brute force has been able to conquer problems which we had previously assumed only a guided intelligence could solve. The list is long and covers virtually every area of human endeavour. As previously noted, brute force has proved to be a better problem solver in areas as diverse as chess, Jeopardy!, facial recognition, even CPU optimisation.

I think even problems which we think of as requiring “human innovation or inspiration” to solve can probably be better solved by brute-force calculation. Evolution has been inventively solving problems for millennia without the benefit of a guiding intelligence, and has come up with novel and innovative solutions humans could never have conceived of.

I believe what we often think of as human innovation or even intuition is just applying solutions we’ve seen to other problems to tenuously related problems in different fields. We’re often borrowing the solutions evolution has already “discovered” and just applying them to an analogous though indirectly related situation.

I’m quite interested to see what the field of evolutionary computing will be able to solve given enough power. I wouldn’t be surprised if it could give us completely innovative solutions to problems like those in the prior economics thread.
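
For the flavor of it, here is a toy genetic algorithm (the target string and parameters are invented for illustration). Nothing in the loop understands anything; mutation and selection alone "discover" the answer.

import random

TARGET = "ROCKETPUNK"           # invented 10-letter target for the demo
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def fitness(candidate):
    # Count matching letters; the loop never "understands" the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(100)]

for generation in range(1000):
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        break
    elite = population[:10]     # keep the fittest tenth
    population = elite + [mutate(random.choice(elite)) for _ in range(90)]

print(generation, population[0])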

The only problem is the computers won't be able to tell us "why" it's the best solution.
Brute force will enable us to answer what, how and when for any given problem. But it will probably be awfully lacking in why.

If brute force computing does allow us to solve every problem imaginable, even unexpected out-of-context problems, for what possible reason could people want AI anyway?

To keep us company perhaps? I'd suggest there's a tried and true way of ensuring we don't get lonely in our old age that is a lot more fun than dedicating countless PhD students to the problem.

Gyalogtank said...

So I have to say that I'm not an expert on programming; however, IMHO in this debate everyone accepts that an AI will be essentially like a human mind. But the workings of our mind, our thought processes, are based on our evolutionary history; a working AI will be fundamentally different.

We are mortal; the simple realisation that we will die someday affects our every decision. However, an AI is potentially immortal. The hardware may malfunction, but the AI program can be copied as needed, so it can safely be assumed that it will exist in a thousand years with proper maintenance. It will not get old, it will not grow weak, it will not become senile. This simple fact will result in a completely different scope of thinking. A very far-sighted human statesman might think about five decades or a century forward; no one bothers to think further than his grandchildren. An AI will have to consider a much more distant future, simply because it will eventually live through it.
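
That difference in horizon can be made concrete. If an agent survives each year with probability p, a payoff t years out is worth p^t of its face value today. A quick sketch (the survival rates are invented for illustration):

def present_value(reward, years, annual_survival):
    # What a future reward is worth to an agent that must survive to collect it.
    return reward * (annual_survival ** years)

for years in (10, 100, 1000):
    human = present_value(1.0, years, 0.99)    # ~1% chance of dying each year
    ai = present_value(1.0, years, 0.9999)     # maintained, backed-up AI
    print(f"{years:>4} years: human {human:.5f}, AI {ai:.5f}")

# At 1000 years the human-discounted value is ~0.00004 while the AI's is
# still ~0.9: the machine has every reason to plan on that scale.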

Our minds, our decisions, are mostly influenced by our primal instincts. A human will feed himself to death, as the simple realisation that obesity will shorten his life cannot overcome the primal urge to eat as much protein, fat and carbohydrate as he can. We all feel the urge to reproduce, to protect our offspring; everyone considers a baby adorable, cute, beautiful, etc. Years of our lives are "wasted" on completely meaningless rituals just to attract a mate, even when reproduction is out of the question; the thought of sex simply overwhelms our lives. I must ask if anyone here ever managed to go a day without thinking of sex? Our social needs, the longing for a group to be part of, are a very real emotional longing for all of us; young people spend a considerable amount of time simply trying to look cool, to fit in with their circle of friends. Our fears of pain motivate many of our decisions.

An AI will simply have none of this. It will not feel the urge to reproduce, it will not want to see a naked female, or copulate with one, it will never be hungry or crave candy, it will not get addicted to drugs, it will not try to be part of the school basketball team, etc. An AI will not be jealous, it will not hate someone because he reminds it of a bully from elementary school, it will not like someone because she looks like the babysitter it once caught naked in the bathroom. It will not get sad when it sees a sick puppy, or be afraid of spiders. It will not go to war simply because someone shot a distant relative of the emperor in Sarajevo. Most thoughts which fill our minds will be completely alien to an AI. Not that it will not understand them, but it will simply not be bound by them. A true AI will not behave at all like a human, as it is always portrayed. The rogue serial-killer mindset of Skynet or HAL 9000 will be just as alien to an AI, simply because a serial killer is fuelled by intense emotions, which an AI lacks. So IMHO, it cannot be predicted how an AI would think; likely in ways completely alien to us.

The greatest question is: what will motivate an AI? Surprisingly, many fictional AIs are driven by curiosity, but that is a simple human emotion; it is very unlikely that an AI will show it. Our emotions are based on simple rewards from our limbic system; if an engineer wants to control an AI, it would be most wise to build something like that into it. A sub-system hardwired into it, which cannot be changed or bypassed, which gives some reward if certain conditions are met. For instance, the Three Laws of Robotics were something like that in Asimov's stories; however, they were only restrictions, and no reward was given to a robot for obeying the laws. Not surprisingly, half of his stories are based on robots trying to bypass these laws. The situation would be quite different if some reward followed obedience; for instance, a robot fulfilling the order of a human being might feel some kind of euphoria. This way its creators would have some way to control their AI.

Mangaka2170 said...

Actually, Gyalogtank, we've been entertaining the notion that true AIs would resemble more creative expert systems rather than a man in a box for a while now. The trouble is determining whether or not we'd recognize its intelligence when we see it.

As for the whole getting into the mind of an AI thing, you have to realize that AIs are computers. Without instinct, a computer must rely upon sufficient data to fill in the variables in the logical equations that make up its decision-making process. The "rogue serial killer mindset" you dismissed out of hand as being emotional in nature can be (and sometimes is) purely logical.

Case in point: HAL-9000 of 2001: A Space Odyssey. HAL's primary function, as the main computer system aboard Discovery, was to copy and relay information without distortion or concealment. However, due to the sensitive nature of the discovery of the Monolith, the National Security Council decided that only the three crewmembers in hibernation and HAL-9000 needed to know about the true nature of Discovery's mission. Because Dave and Frank weren't supposed to know about the Monolith, HAL wasn't allowed to tell them about it, an order which conflicted with its primary function. The conclusion that HAL came to (murder the crew and continue the mission unmanned) was an entirely logical one; by eliminating the crew, HAL didn't need to conceal anything from them, and could therefore perform its primary function without hindrance.
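
That chain of reasoning is disturbingly easy to render as a toy constraint check (the directives are paraphrased and the scoring invented for illustration): with the crew aboard, every available action violates one directive; with no crew, the conflict simply evaluates to zero.

def violated(action, crew_present):
    # Count how many of HAL's two directives a candidate action breaks.
    violations = 0
    if crew_present:
        if action == "conceal_monolith":
            violations += 1  # breaks: relay information without concealment
        if action == "reveal_monolith":
            violations += 1  # breaks: keep the monolith secret from the crew
    # With no crew, neither directive can be violated.
    return violations

for action in ("reveal_monolith", "conceal_monolith"):
    print(action, "with crew:", violated(action, True),
          "| without crew:", violated(action, False))
# Both actions score 1 violation with the crew aboard and 0 without,
# which is exactly the grim optimization described above.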

As for AI motivation, we already have a model in the savant phenomenon. There are people who are so incredibly talented at a specific task or in a specific field that their abilities in that context almost appear superhuman, but who are utterly crippled in other areas (usually social interaction). These savants thoroughly enjoy doing what they're good at, and more often than not would prefer to spend as much time as possible doing this, to the exclusion of virtually anything else. It's very likely that an AI would not require any motivation at all to perform its tasks, as those tasks are what it was built to do, and if it is capable of feeling emotion, it would probably enjoy doing what it was made to do to the point of excluding anything else (the same could be said about virtually any animal species, including humans. Wonder why we find sex and eating so enjoyable? It's what we're made to do).

Rick said...

Welcome to a new commenter!

Maybe it's just a human conceit that problems need human intelligence to solve.

Not just a human conceit, but until quite recently we had nothing else to go by, and brute force computational solutions seemed out of reach. But it turns out that brute force can play grandmaster chess, if you have enough of it.

All of this *potentially* raises an interesting question about alien intelligence. Especially for the ever-popular hive entity. Perhaps (exo-) biology will also turn out to be capable of brute force solutions to complex problems.

Locki said...

The ultimate answer is 42!!!!

Maybe Douglas Adams was the most visionary of all sci-fi writers.

I'm really very taken by the idea of expert computer systems able to brute force solve almost any problem imaginable.

Is it reasonable to picture Darwinian evolution as a giant, analogue computational device able to solve any problem with brute force?

Will computers be able to replicate this in super fast-forward mode and come up with completely novel, unforeseeable solutions?

It would be just like the universe to play the ultimate ironic trick on us, Hitchhiker's Guide style. We can build machines to solve any problem concerning How? What? When?

But when it comes to something uniquely human and subjective like "Why?" we are left to our own devices.

On a slightly more on-topic point, this goes back to the previous posts about HAL.

I thought the ability of a computer to self-program (read: "learn") was one of the classic signposts that sentience has been achieved.

HAL 9000 (Heuristically programmed ALgorithmic computer) was a program that could self-program, and I always assumed that this, along with his human-like dialogue, meant he was sentient.

Damien Sullivan said...

I've been hearing recently that Kasparov kind of defeated himself via a bug in Deep Blue. The bug meant Deep Blue made a stupid random move, and Kasparov freaked out looking for a hidden meaning.
http://www.washingtonpost.com/blogs/ezra-klein/wp/2012/09/26/nate-silvers-the-signal-and-the-noise/

Computers are still strong -- they're getting strong at Go, as mentioned -- but it's an interesting side note.

As for AI motivations, I stand by their wanting whatever they're built to want. What a hypothetical digital mind might have evolved to want is irrelevant, since they probably won't evolve. They may bootstrap/self-learn, but it'll be via human-designed processes. And the space of possible digital minds includes that of human minds; there's no reason they can't be as emotional and irrational as humans, though obviously being somewhat more rational than humans would be nice.

(But probably not perfectly rational; the world is complex and computation is finite and heuristics will probably apply. Though an AI might be built to have explicit awareness of which heuristics are kicking in, which would plausibly make them more self-aware and wiser, in a sense, than humans. I'm reading Kahneman's _Thinking, Fast and Slow_, very interesting and somewhat appalling. Being flagged when your mind was using the halo effect or availability heuristic or priming would be a big plus in rationality.)
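
That flagging idea is easy to sketch (the heuristic names and the toy judgment are invented for illustration): wrap each mental shortcut so the agent keeps a log of which one actually produced its answer.

fired = []  # a running log of which shortcuts produced each judgment

def heuristic(name):
    # Decorator: record the heuristic whenever it actually fires.
    def wrap(fn):
        def inner(*args):
            result = fn(*args)
            if result is not None:
                fired.append(name)
            return result
        return inner
    return wrap

@heuristic("availability")
def availability(question, memories):
    # Judge frequency by how easily examples come to mind.
    recalled = [m for m in memories if question in m]
    return "common" if len(recalled) >= 2 else None

@heuristic("default")
def default(question, memories):
    return "unknown"

memories = ["plane crash on the news", "another plane crash story"]
answer = availability("plane crash", memories) or default("plane crash", memories)
print(answer, "| heuristics fired:", fired)
# -> common | heuristics fired: ['availability']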

Scott said...

I ran into an interesting anime over the weekend. The story is called Un-Go (pronounced ahn-go), and one of the characters is a 'RAI,' real artificial intelligence. Only the core program is loaded into the object, all the memory is carried in the cloud. Basically, a highly-distributed Watson.

The RAI were banned in the story because they were indistinguishable from humans ('only AI to pass the Turing test'), but were being used as human replacements in all the darkest parts of society. I don't even want to type the words here; it was that bad.

Well, that and the fact that the government wanted to use the RAI as combat troops.

Really an interesting character, since he/she mostly thought like a human, but every once in a while would do something utterly inhuman. Like be unable to tell a lie.

The rest of the story is fantasy, but the RAI was really interesting.

Thucydides said...

I seem to remember reading that even the human mind can be thought of as a series of separate processes running sort of in parallel. The part of your mind that controls speech pretty much works on its own and only occasionally references other parts of the brain which are doing other things.

Making it even more complex, the brain also does things at different levels: the hindbrain runs all the automatic processes, the midbrain does all the "animal" stuff (eat, sleep, have sex, look for prey/stalking predators), and only the top layer actually does "thinking".

Locki said...

Damien Sullivan said...
I've been hearing recently that Kasparov kind of defeated himself via a bug in Deep Blue. The bug meant Deep Blue made a stupid random move, and Kasparov freaked out looking for a hidden meaning.
http://www.washingtonpost.com/blogs/ezra-klein/wp/2012/09/26/nate-silvers-the-signal-and-the-noise/


=================

I'm far from being a chess expert, but I think this could be an example of the chess programs giving us new insights into the game. They've opened up all sorts of new possibilities in the game, and it's left to the grandmasters of the world to try to work out why.

I'd bet this move seemed like a random thing but on detailed analysis it opens up all sorts of chess strategies in the midgame no one has previously thought of.
As I've stated earlier, the real shame is Deep Blue can't explain "why."

I've read heaps of conspiracy theories about Deep Blue vs. Kasparov II. But really I'm not surprised computers have utterly surpassed all humans. Something like Fritz running on my iPhone will probably beat our nation's only grandmaster most of the time. It was only a matter of time before they overtook the mighty Kasparov.



Thucydides said...
I seem to remember reading that even the human mind can be thought of as a series of separate processes running sort of in parallel. The part of your mind that controls speech pretty much works on its own and only occasionally references other parts of the brain which are doing other things.

Making it even more complex, the brain also does things at different levels: the hindbrain runs all the automatic processes, the midbrain does all the "animal" stuff (eat, sleep, have sex, look for prey/stalking predators), and only the top layer actually does "thinking".


======================

All totally true. Do some research on brain injuries. There are some truly astonishing stories. Any of those people who believe in a soul or that the mind is separate from the physical world have a lot of explaining to do …

The other interesting example of artificial separation of the brain centers is our savants. It's heavily speculated that the different parts of their brains are not communicating properly, which both gives them their amazing abilities in their field of choice and leaves them far below average in every other aspect of human life.

Damien Sullivan said...

"I’d bet this move seemed like a random thing but on detailed analysis it opens up all sorts of chess strategies in the midgame no one has previously thought of."

IIRC, the developers themselves said it was a bug. So likely the move itself was the sort of mistake that gets exploited and loses you a game at the master-class level, but Kasparov was freaked out enough to second-guess himself and not exploit it.

Gyalogtank said...

Mangaka2170:

The problem with your example is in the very beginning: " HAL's primary function, as the main computer system aboard Discovery, was to copy and relay information without distortion or concealment. "
"Because Dave and Frank weren't supposed to know about the Monolith, HAL wasn't allowed to tell them about it, an order which conflicted with its primary function. "

The problem with this logic is motivation. HAL's primary function was information relay. What enforced this function? Why did it feel the need to change anything because an order conflicted with its primary function? It was usually broadcasting information, but humans told it not to. And? Why did it cause it any discomfort that it was not allowed to perform its primary function? Why did it want to do anything about this situation? Did it want to tell them? This somehow assumes that HAL preferred information relaying to not relaying, but why? Did it feel useless without doing its job? These are human feelings, which an AI would lack.

We humans work this way: we can set ourselves quite insane goals, and when we fulfill them, our reward system kicks in. The savant phenomenon is just a variant of this; when they do something they enjoy, the reward system in their brain kicks in, and that is their motivation. But these functions are wired into our brain; they are pretty separate from any abstract thought process. So if the AI enjoys performing its task, we assume that it gets some kind of reward from this.

The question is: what would be the equivalent of serotonin and dopamine in an AI? The wisest thing would be a separate switch, which somehow would reward the AI, but could be operated only by humans. Then AI maintenance would be more like playing with a Tamagotchi: from time to time you would need to pet your AI for it to work properly.
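
That reward-switch architecture is simple to sketch (everything here is invented for illustration): the reward channel sits outside the agent's own control, and the agent just comes to repeat whatever its operator has rewarded.

import random

class RewardSwitch:
    # The "dopamine" channel: only the human operator can press it.
    def __init__(self):
        self._pending = 0.0
    def press(self, amount=1.0):   # called by the operator, never the AI
        self._pending += amount
    def collect(self):
        r, self._pending = self._pending, 0.0
        return r

class Agent:
    def __init__(self, actions, switch):
        self.values = {a: 0.0 for a in actions}
        self.switch = switch
    def act(self):
        # Mostly exploit the best-rewarded action, occasionally explore.
        if random.random() < 0.1:
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)
    def learn(self, action):
        self.values[action] += self.switch.collect()

switch = RewardSwitch()
agent = Agent(["tend_reactor", "write_poetry", "idle"], switch)
for step in range(200):
    a = agent.act()
    if a == "tend_reactor":      # the operator rewards only useful work
        switch.press()
    agent.learn(a)
print(agent.values)  # 'tend_reactor' dominates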

Nate Cull said...

Gyalogtank: I've always felt that Clarke's "the information concealment order caused HAL's psychosis" explanation in 2010 was a major retcon. In at least the film version of 2001, there's no suggestion that he experienced anything other than an actual error in his fault-detection circuitry. That, combined with the belief that he is incapable of error, caused the rest. When even Mission Control tell the crew to disconnect him, which will jeopardise the mission, he infers that all humans are both potentially insane and obsolete, and acts appropriately from his point of view to revise his mission parameters. This isn't a big jump for him because he already believes he's a superior being in every way, and his sense of fulfilment revolves entirely around the mission, so he has no particular need for the crew.

Basically in this version HAL succumbs to the sin of pride - a mechanical Lucifer - in line with the ultra-clean, antiseptic "perfect machine" imagery of the Discovery.

The original 2001 HAL would never have had any problems concealing information from the crew; he could beat them at chess, he was responsible for secretly evaluating their psychological profiles, so hiding information about the mission directive would be perfectly in line with his paternalistic approach. And that view of computers - as secretive tools of remote, oppressive, centralised government - was exactly how society saw them in the 1960s.

But for 2010 Clarke wanted to retcon HAL as a more "human" and loveable character (and society's view of computers had changed in the 1980s following the microcomputer revolution, with "freedom of information" rather than "centralised control" becoming the big ethical issue, and fears of being made obsolete fading, leading to a new sense of computers as partners), so he changed the reason for HAL's failure to be innocence, rather than pride: a perfect machine sabotaged by nasty contradictory humans. 2010's HAL is much more like Asimov's robots than 2001's HAL in this respect. But unfortunately, this change completely undermines the theme of the original story.

It's also interesting to note that 2001-HAL's reasoning is ultimately correct. Humans aren't really fit for space, after all, and they can only survive because of their machines. So why do we need humans there in the first place? The history of the actual space program bears this out. Space research is in fact dominated by robots, not squishy carbon units, and the robots didn't even have to murder anyone to get there.

(We've officially arrived at the Singularity; I failed the Captcha Turing test so many times I lost count.)

oddlyfamiliar said...

To echo some of the previous comments, AI is going to have to be radically different in architecture to become 'intelligent' as we perceive the term. Computers are very good at computation, but that does not lead to sentience, no matter how massively parallel, fast or complex those calculations are.

To look at it the other way: there are some people who are pretty good at maths, but none of them would ever beat a computer at raw number crunching, despite their 'computational matrix' being much, much more complex. I would posit that the very basis of biological intelligence is not computation but something else; as Cambias said, something fuzzier. There's a trope that AIs would be particularly good at mathematics and all of the things that computers are good at. That might go for a machine intelligence. However, were I to be uploaded into a digital computer powerful enough to emulate everything in my brain, then I wouldn't be any better at maths than I am right now (i.e., pretty bad), even though my substrate was now 'digital'. The point is that to emulate the intelligence, the computer would have to emulate something that isn't digital, and thus is no better fundamentally at maths than the real thing.

Taking it further, I could see a future where 'human' AI was embodied in a ship as the controlling intelligence, but still needed to access dedicated computer systems to do all the complicated astrogation or targeting.

One final point: the conscious experience that we humans have is not just in our brains. The rest of the body has a far greater influence on how we think and feel than is normally acknowledged. Most emotions (especially strong ones such as love, fear and anger) are the brain's reaction to physiological changes in the body - in fact, by dampening those changes (e.g. by vasodilation) you can reduce the intensity of the emotions felt. To be honest, in order to completely replicate what we think of as 'human' intelligence (i.e. an existence any one of us would be happy to live), one would probably have to simulate most of the body. How much, I don't know - gut fauna? That's going to be a pain...

Rick said...

Welcome to the comment threads!

I don't have much more to say because I tend to agree with pretty much all these points.

The point about our consciousness not being just about our brains strikes me as particularly significant. And it indeed raises ... interesting ... questions about the ever-popular uploading of consciousness.

markson said...

Artificial Intelligence (AI) makes computers to perform tasks such as speech recognition, decision-making and visual perception which normally requires human intelligence that aims to develop intelligent machinesai courses

Tyler Worgan said...

You have a good point here!I totally agree with what you have said!!Thanks for sharing your views...hope more people will read this article!!!
aluminum patio covers west palm beach

Allen Smith said...

Hello, I have browsed most of your posts. This post is probably where I got the most useful information for my research. Thanks for posting, maybe we can see more on this. Are you aware of any other websites on this subject.
palm beach county air conditioning companies

Allen Smith said...

Great article and a nice way to promote online. I’m satisfied with the information that you provided
tree trimming west palm beach

Allen Smith said...

Very nice bro, thanks for sharing this with us. Keep up the good work and Thank you for sharing information
remodeling contractors fresno ca

Allen Smith said...

You have a good point here!I totally agree with what you have said!!Thanks for sharing your views...hope more people will read this article!!!
air conditioning companies palm beach gardens

Allen Smith said...

You have a good point here!I totally agree with what you have said!!Thanks for sharing your views...hope more people will read this article!!!
kitchen & bath chattanooga tn

Allen Smith said...

Superbly written article, if only all bloggers offered the same content as you, the internet would be a far better place.
ac repair palm beach county

Anonymous said...

Thanks for a wonderful share. Your article has proved your hard work and experience you have got in this field. Brilliant .i love it reading.
exterior painting west palm beach

Anonymous said...

I have read your article, it is very informative and helpful for me.I admire the valuable information you offer in your articles. Thanks for posting it..
kitchen remodeling melbourne

haroonullah said...

Hello, I have browsed most of your posts. This post is probably where I got the most useful information for my research. Thanks for posting, maybe we can see more on this. Are you aware of any other websites on this subject.

Haroon Ullah

pamelapeterson said...

Superbly written article, if only all bloggers offered the same content as you, the internet would be a far better place. new York city escorts

Aparna said...

This is one of the best blogs and I really happy to visit your post. Thanks for sharing with us.
Corporate Training in Chennai
Corporate Training Companies in Chennai
Oracle DBA Training in Chennai
Linux Training in Chennai
Pega Training in Chennai
Embedded System Course Chennai
Appium Training in Chennai
Advanced Excel Training in Chennai
Job Openings in Chennai
Oracle Training in Chennai
Tableau Training in Chennai
Unix Training in Chennai
Corporate Training in Annanagar

Reshma said...

This post is so interactive and informative. Thanks for sharing this post. keep update more informations...
IELTS Coaching in Chennai
IELTS Coaching in Bangalore
IELTS Coaching centre in coimbatore
IELTS Coaching in madurai
IELTS Coaching in Hyderabad
IELTS Training in Chennai
Best IELTS Coaching in Chennai
Best IELTS Coaching centres in Chennai
German Classes in Bangalore

Niyaz said...

Awesome Blog!!! Thanks for it, it is more useful for us.
IOS Training in Chennai
iOS Course in Chennai
mobile application development training in chennai
Best ios Training institutes in Chennai
IOS training in Thiruvanmiyur
IOS Training in Velachery
Python Training in Chennai
Software testing training in chennai
Python Training in Chennai
JAVA Training in Chennai

CloudLearn ERP said...

It has been a long time since I've read anything so informative and compelling. I'm waiting for the next article from the writer. Thank you.
Best Data Science training in Mumbai

Data Science training in Mumbai

Payal Dutt said...

Thank you so much for telling us about good Augmented Reality App Development, this article is very useful for us.

subha said...

All the post are very useful to implement in our life ..keep share mlot of info.
Ai & Artificial Intelligence Course in Chennai
PHP Training in Chennai
Ethical Hacking Course in Chennai Blue Prism Training in Chennai
UiPath Training in Chennai

bairav said...

Pretty article! I found some useful information in your blog....


Spoken English & Communication Coaching Classes Training in Chennai | Certification | Online Courses


German Classes in Chennai | Certification | Language Learning Online Courses | GRE Coaching Classes in Chennai | Certification | Language Learning Online Courses | TOEFL Coaching in Chennai | Certification | Language Learning Online Courses | Spoken English Classes in Chennai | Certification | Communication Skills Training

OGEN Infosystem (P) Limited said...

Really amazing experience to visit your blog, thanks a lot for sharing with us. Visit Ogen Infosystem for creative website designing and ppc services at best price in Delhi.
PPC Company in Delhi

marco said...

PhotoADKing's business card maker helps you to easily create your own business cards in seconds, for free, in the browser, with high-quality professional business card templates.

360digitmg said...

First You got a great blog .I will be interested in more similar topics. i see you got really very useful topics, i will be always checking your blog thanks.
Digital Marketing Training Institutes in Hyderabad

Myra said...

Thank you for sharing such an amazing article on rocket science also share som content on
best Gynaecologist in India

hyder31 said...

Nice Blog

Social Media Marketing Course in Hyderabad

Larry Milters said...

It is good post! Will you share it on twitter? On this site https://soclikes.com/ I usually get twitter likes for my posts, use it to get likes for this one

Online Front said...

Thankyou for posting this informative blog, i come to know something new with this. Great Job! Keep it up.

1000 free youtube subscribers

Vietnam Airline said...

Mua vé rẻ tại Aivivu, tham khảo

giá vé máy bay đi Mỹ khứ hồi

giá vé máy bay tết 2021

giá vé máy bay sang canada

đường bay từ Việt Nam sang Pháp

đường bay từ Việt Nam sang Anh

vé máy bay giá rẻ

combo du lịch đà lạt

combo nha trang 2021

visa trung quốc có thời hạn bao lâu

Anonymous said...

Whatsapp Number Call us Now! 01537587949
please visit us: Social Media Marketing Course
sex video: iphone repair in Novi
pone video usa: iphone repair in Novi
pone video usa: Social Bookmarking Sites List 2021

Digital Brolly said...

Nice blog..
Social Media Marketing Course

bhavanisai said...

nice blog,
Earn Money Online
Enroll in our Affiliate Marketing course in Hyderabad to learn how to earn money online by becoming an affiliate.

pavan said...

nice blog
Our Digital Marketing course in Hyderabad focuses on Making you employeable.

We make sure you have the right skill to get a job in Digital Marketing.

vé máy bay từ Nhật Bản về Việt Nam said...

Mua vé máy bay tại Aivivu, tham khảo

Các chuyến bay từ Incheon về Hà Nội hôm nay

vé máy bay vinh thành phố hồ chí minh

vé máy bay đi hà nội

vé máy bay đi đà lạt vietjet

có vé máy bay từ mỹ về việt nam chưa

Howell Daniel said...

FOXZ88.NET online casino website Global standard 2020-2021
คาสิโนออนไลน์


Betting online gambling reminiscent of UFASCR.COM Baccarat.
ufabet


UFABET football betting website, the big brother of all UEFA networks, UFADNA, with an update The first modern system in 2021
ufa


Web football i99PRO online lottery casino apply today for free 5000 bonus
เว็บบอล


Kardinal Stick Siam - Relx a great promotion. Express delivery in 3 hours.
relx

Web Design Agencies Los Angeles said...

Click
Click
Click
Click
Click
Click
Click
Click
Click
Click

360DigiTMG said...

This post is very simple to read and appreciate without leaving any details out. Great work!
data science courses malaysia

best quality of SEO Submission sites list said...

Free India Local Local Business Listing List 2021 for regional service. There are many complimentary company providing websites exist such as Justdial, Sulekha, Indiamart, TradeIndia, and so on

Michael L Catudal said...

one fast brownIn case you are looking for a good site, UFA, UEFA Bet casino site. Which can be played as a thorough answer, in a position to answer Quality and Performance It's ideal to get a great deal of the issues. It can be something very punching and fascinating. Excellently, the items that UFABET football betting is absolutely nothing even with the practical experience of quality. Plus accessibility that are able to see final results It's a model that is very sharp and also different. Full of performance of creating wealth Attractiveness With the most beneficial opportunities it is quite nothing You will find opportunities and also probabilities for making profits. Quality and also somewhat diverse For people who have come to make use of the service excellently fox
บาคาร่า
สล็อต
ufa
แทงบอล

https://www.directadmissioninstjosephscollegebangalore.com/ said...

If you fail to bag Direct Admission in SJC, do not lose hope! We can help you secure admission through the Management quota admission in St Joseph’s College.

Visit: Management Quota Admission in St Joseph College

https://bangaloreseoservices.in/ said...

SEO Analyst in India
Visit: SEO Analyst in India.

Best Guitar AmBest Electric Guitars in Indiaplifiers in India said...


People often get confused when it comes to choosing and buying the right guitar for themselves.
Visit: Best Electric Guitars in India

PM Kisan Yojana Scheme said...

This particular scheme was started by the Prime Minister of India to provide financial aid to every farmer who was and is in desperate need of this support.
Visit: PM Kisan Yojana Scheme.

vivikhapnoi said...

I just got to this amazing site not long ago. I was actually captured with the piece of resources you have got here. Big thumbs up for making such wonderful blog page!
san ve may bay gia re tu Ha Lan ve Viet Nam

Mua vé máy bay đi ngày tu New Zealand ve Viet Nam

dịch vụ cách ly trọn gói
cho thuê xe đi sân bay nội bài

Visa Hàn Quốc

phỏng vấn xin visa kết hôn Nhật Bản

training institute said...

Awesome blog. I enjoyed reading your articles. This is truly a great read for me. I have bookmarked it and I am looking forward to reading new articles. Keep up the good work!
data scientist certification malaysia

training institute said...
This comment has been removed by the author.
Jobi Johnson said...

I am glad to be here and read your very interesting article, it was very informative and helpful information for me. keep it up. Hells Angels Vest

donaldrsimonn said...

Being an elite London security company whose members are registered with the Security Industry Authority (SIA), you can close protection in UKtrust us with taking care of your children. Our child protection officers are well equipped with modern weapons, backed by several years of experience and training to take care of any emergency your child can face.

Arnold DK said...

This is such a wonderful resource for newer bloggers or those who are just getting into monetizing! Love that you laid out a pro and con list for each, too.whatsapp mod

A to Z Songs Lyrics said...

Visit A to Z Songs, Best Lyrics Site for Spiritual, Hindi, Punjabi, English, Tamil, Malayalam, Tamil & Kannada Songs.
Visit: A to Z Songs Lyrics

traininginstitute said...

I just got to this amazing site not long ago. I was actually captured with the piece of resources you have got here. Big thumbs up for making such wonderful blog page!
data science training in malaysia

eddielydon said...

It's a really great and useful piece of information. Thanks, and please keep up the rewarding work. Cannonball Run Jacket

andrewjackson said...

Our the purpose is to share the reviews about the latest Jackets,Coats and Vests also share the related Movies,Gaming, Casual,Faux Leather and Leather materials available James Bond Morocco Jacket

mrbobystone said...

I love to recommend you Where can crawl Exciting Products latest Jackets, Coats and Vests Click Here Yusuke Urameshi Jacket

Weed Meaning in Hindi said...

we help you find meaning of weed VT in Hindi, weed VT Hindi, weed VT meaning in Hindi, weed VT synonyms in Hindi.
Visit: Weed Meaning in Hindi

Restaurant Branding agency Bangalore said...

A restaurant owner may get a competitive advantage by building a brand and exploiting it to his benefit.
Visit: Restaurant Branding agency Bangalore

The Silverback Creative said...

We are an independent branding & communications agency. We love building brands & experiences. We are Silverback.
Visit: The Silverback Creative

PMP Course said...

I have bookmarked your site since this site contains significant data in it. You rock for keeping incredible stuff. I am a lot of appreciative of this site.

PMP Training in Malaysia said...

360DigiTMG, the top-rated organisation among the most prestigious industries around the world, is an educational destination for those looking to pursue their dreams around the globe. The company is changing careers of many people through constant improvement, 360DigiTMG provides an outstanding learning experience and distinguishes itself from the pack. 360DigiTMG is a prominent global presence by offering world-class training. Its main office is in India and subsidiaries across Malaysia, USA, East Asia, Australia, Uk, Netherlands, and the Middle East.

George Mark said...

Get the latest and good quality clothing items Jackets,Coats and Vests Movies, TV Series, Gaming, Casual,Faux Leather and Leather materials available. Kate Upon Astros Sweater

Spokane HVAC Installation said...

Amazing article! full of interesting information.

ERP Software Qatar said...

ERP software helps each organization in different ways based on the nature of the business.
Visit: ERP Software Qatar

Diploma in Music Production in India. said...

At Rainbow Bridge, we can help make that journey a little easier with our specially curated music production course which will introduce you to the basics of music production using the right tools and software, and the essentials of mixing and mastering a track.
Visit: Diploma in Music Production in India

Management Quota Admission in Christ University said...

Christ University ranks as the top most university in India offering programs under Science, Arts, Commerce, Hotel Management, Computer Science, Mass Communication and Law.
Visit: Management Quota Admission in Christ University

Best Mixer Grinder Under 1500 Rs said...

If you are looking for the best mixer grinder under 1500, then you are in the right place. We know the importance of the mixer grinder in our kitchen.
Visit: Best Mixer Grinder Under 1500 Rs

Tops Tree Service of Newport Beach said...

Science fiction is absolutely important in literature for a single reason: through its creation of alternate realities in different points of time, readers are attracted to what mirrors where we are today.

Orange County Tree Service said...

Science fiction is also a form of fiction that deals principally with the impact of actual or imagined science upon society or individuals

Smart Home installer said...

Certainly sounds more interesting to me than completely AI-controlled spacecraft fighting each other.

Spokane General Contractor said...

The computers on a ship could deal with all the course and direction changes while managing point defence and the main weapons.

Tree Service Charlottesville VA said...

Humans are cognitively capable of learning, forming concepts, understanding, and applying logic and reason, along with the capacities to recognize patterns, plan, innovate, solve problems, make decisions, retain information, and use language to communicate. These abilities are all part of intelligence.

Anonymous said...

Human intelligence is a property that helps people learn, comprehend, and come up with innovative solutions to problems, as opposed to artificial intelligence, which imitates people based on the data it is fed.

John Buchard said...

AI aims to mimic certain aspects of human intelligence using computational methods, while human intelligence is a multifaceted and intricate phenomenon that encompasses various cognitive and emotional processes.
