Sunday, April 22, 2012

Ten Laws of Good Science Fiction?


No, I have not fallen into a hyperspatial rift, such as jollyreaper has proposed as a story setting. Presumably this is good news to readers of this blog. Minor but multiple intrusions of Real Life have merely delayed my posting a bit more than usual.

Even this post is something of a placeholder, in that I intended to discuss the topic more thoroughly. But one advantage of a great commenter community is that I can count on you to do most of the work of making this blog read-worthy.

So: I invite your attention to this item on Laws of SF that turned up in my Google+ stream a week or two ago. My first reaction is that ten laws is at least seven too many. Yes, the Abrahamic God promulgated a set of ten, but those were to govern human behavior. For world building, both Kepler and Newton got by with three. No one even bothers to list Einstein's laws; my distinct impression is that E = mc^2 contains all the information needed to reconstruct special and general relativity.

At least if you can do tensor calculus backwards and in your sleep.

My original intention was to provide full edification and entertainment value by discussing each of the ten laws in turn, with links to relevant previous posts. That is the part I am instead leaving to my able commenters.

Instead I will merely point to Laws #7 and #3, in particular, and ask: Where does FTL (and its pervasive use in the genre) fit in here?


Discuss:

(I do promise not to make quasi-posts like this one the rule here!)




The image, from Astronomy Picture of the Day, really has nothing to do with the topic. Except, perhaps, that vistas like this are a big reason why SF emerged in the first place as a distinct subgenre of Romance.

345 comments:

Tony said...

FBH:

I doubt we'll ever see eye to eye on the nature of extraterrestrial intelligence -- I don't think there's much common ground WRT our respective perceptions of nature and science.

But, to avoid being accused of chickening out, I think I'll take this one on:

"Okay, so what books do they take seriously? Let's actually define terms here. When you talk about books with good science, what books are you talking about?"

I think it would be easier to list the things that have made me put books down and never pick them up again:

"Hollywood" physics

Real-time sensors that work across light years

Ridiculously compressed technological timelines

People forgetting how to do elementary stuff, then some magic man from the past being resurrected and teaching them how to do it all over again

Princess Kri...I mean Mary Sue and her posse of Sues

Alternate history (to the point I don't even pick up AH books anymore)

Trying to justify magitech with technobabble

Bad science in general

Bad history

Bad sociology

Bad psychology

FBH said...

It would be different from us in ways that relate to its environment. Why would living in burrows rather than swinging on trees have any relevance to its social attitudes or level of empathy?

That's a complicated question to answer because I didn't give very many details on the actual species, so you're going to have to settle for some perhapses. Perhaps the fact that they only eat plants rather than hunt has fundamentally changed the way that they interact with other species. Perhaps the passage of the worms through the soil actually tends to encourage the growth of the tubers they like, so they're much less territorial than we are, through evolutionary means.

Maybe they have a different brain design in which the repository of memory is different, so when two Worms interact they directly share one another's memories of the events. Perhaps this leads to them almost not having individuals as we understand them.

Perhaps the kind of team work they do to evade predators while gathering food is fundamentally different and leads to a different social structure than the way we do it.

None of these automatically lead to higher empathy. They could easily lead to them seeing humans as a bunch of crazed predators, perhaps even not quite real beings because they can't share memories with them. They could also lead to much increased empathy.

Note that my fundamental objection here is not the idea that no aliens can be kindly or they wouldn't have the drive to go to space (which is kind of dumb but passable) but rather the idea that aliens must have similar minds to humans.

Though really here we're getting into some pretty deep questions. Like, why are our minds the shape that they are? Are we mostly the product of our experience or our biology? If our experience or our biology was different how would we be different?

These are all worthwhile questions that only science fiction can really explore.

Tony

That... really doesn't answer my question.

What stories? What SF Stories are there that do not do the things that you dislike?

Just list three or so titles.

Because right now I'm not sure that they actually exist.

Anonymous said...

I think absent aliens will always seem more plausible than any aliens.

Because we have a dataset of zero with regards to aliens.

However the most plausible aliens I think would be extrapolations based on the most exotic locales/lifeforms on earth.

Such as making an alien with adaptations comparable (in scale anyway) to undersea creatures who live with no available light, etc.

As to the psychology of aliens - again, dataset of zero.

But their motivations will most likely be understandable - which I think Tony is saying.

Because they want to survive, just like us. And often that will mean doing unto others because they might do unto you.

You don't get a do-over on extinction.

(SA Phil)

FBH said...

SA Phil

I think absent aliens will always seem more plausible than any aliens.

Because we have a dataset of zero with regards to aliens.

However the most plausible aliens I think would be extrapolations based on the most exotic locales/lifeforms on earth.

Such as making an alien with adaptations comparable (in scale anyway) to undersea creatures who live with no available light, etc.

As to the psychology of aliens - again, dataset of zero.

But their motivations will most likely be understandable - which I think Tony is saying.

Because they want to survive, just like us. And often that will mean doing unto others because they might do unto you.

You don't get a do-over on extinction.



Well as I said above I think this is a pretty terrible way to tell stories. Alien mindset is one of those stories that only science fiction can really tell. As we learn more about our brain, about how biology and society can affect us, as perhaps we step towards altering our own brains, isn't it better that fiction at least think about what alien minds could be like?

Anonymous said...

Hmmm -

Well, telling stories is different.

If you want to assign hard-to-comprehend motivations, or invent alien psychology based on pure conjecture -- that will be seen as an acceptable break from reality by a lot of readers.

Most readers even seem fine with rubber forehead aliens and/or multiple humanoid competing/cooperative species.

The whole 'readers find reality to be unrealistic' trope.

You could also pick something between the two extremes, which would be more plausible and less operatic.

(SA Phil)

Anonymous said...

Actually, if you are writing to an audience,

Writing soft SF will likely seem more realistic to a larger audience than writing hard SF will.

Look at how many people laud Mass Effect or Honor Harrington as having nearly plausible technology.

(SA Phil)

jollyreaper said...

Regarding alien mindsets, I don't think there's any possible set of simple answers. I'm sure someone has a comprehensive list of possibilities somewhere. Pretty much the basic questions about aliens:

1. Are they apes or angels?
2. Could we even recognize them as alive? We might overlook a Solaris-style world-mind or plasma beasts living on the surface of stars.
3. Do they even think in a way that could allow for meaningful communication?
4. Are they tool-users?

I do think that a kind of anthropic principle would be at work here -- we are only likely to engage with aliens similar enough to our way of thinking for communication to be mutually interesting.

So, taking a stab at classifying broad categories of alien life:
1. Unintelligent, as far as we can tell.
2. Intelligent, but meaningful communication is impossible. (Solaris)
3. Intelligent, communication is possible, but the devil will be in the details.

Now some early scifi writers decided that the human form is the best possible optimized shape and that convergent evolution will lead to all higher lifeforms being humanoid with bumpy foreheads, different numbers of eyes and surprising skin tones. Examples of convergent evolution on Earth will be cited -- there are only so many ways to solve a given task. Tetrapods need to fly, forelimbs will become wings. Critters need intelligence? This is how a brain will result.

I don't really buy those theories personally, but if it's the assumption the writer goes with and is self-consistent, I can play along.

FBH said...

SA Phil

Hmmm -

Well, telling stories is different.

If you want to assign hard-to-comprehend motivations, or invent alien psychology based on pure conjecture -- that will be seen as an acceptable break from reality by a lot of readers.

Most readers even seem fine with rubber forehead aliens and/or multiple humanoid competing/cooperative species.

The whole 'readers find reality to be unrealistic' trope.

You could also pick something between the two extremes, which would be more plausible and less operatic.


I'm not sure we can really say whether this is a case of reality being unrealistic or not. We don't exactly have a lot of data to go on about other sapients' thought processes here. As you said, no data.

I think the difference with most soft SF aliens is not that readers, in their heart of hearts, really think it's plausible that we'll go out and find hot blue women. It's more that hot blue women or lizard guys with coke bottle glasses for eyes or scary pointy looking dudes or people with lobsters on their heads are more accessible to them and to the writers.

It takes real effort to write a story about interaction with, say, my earlier-described underground compass worms who can chemically share memories. It doesn't take much to write interaction with a planet of hats.

Tony said...

FBH:

"Note that my fundimental objection here is not the idea that no aliens can be kindly or they wouldn't have the drive to go to space (which is kind of dumb but passable) but rather the idea that aliens must have similar minds to humans."

Hold on a minute there. Not similar minds. Similar fundamental natures, brought forth by natural competition to get to the top of their respective ecological pyramids.

What I find implausible is the idea that intelligence could arise without competition, or that, having arisen through competition, it would take a left turn into excessive non-competitiveness.

"Well as I said above I think this is a pretty terrible way to tell stories. Alien mindset is one of those stories that only science fiction can really tell. As we learn more about our brain, about how biology and society can affect us, as perhaps we step towards altering our own brains, isn't it better that fiction at least think about what alien minds could be like?"

Telling implausible stories is a terrible way to tell stories. Having arisen from a competitive environment, if some bunch of humans or aliens altered their minds to be more whatever and less violent, the rest would either tolerate them (while most likely snickering behind their backs) or destroy them (if they wasted too many resources).

The idea that a whole species would alter its minds to be kinder and gentler is simply not very believable. Or, stipulating that they did, I would expect a good story would expose what a silly idea that is in a hostile universe.

See, you appear to think that absolute kindness is a positive goal for a society or a species. I think it's positively suicidal.

"That... really doesn't answer my question.

What stories? What SF Stories are there that do not do the things that you dislike?

Just list three or so titles.

Because right now I'm not sure that they actually exist."


Please don't try to be clever.

If you've read as much science fiction as I have, you should have recognized by name some of the works/series I alluded to. They were all popular in their day. Some are popular right now.

But, to make a list of what I consider good SF...

Space Viking

Mission of Gravity

Mote in God's Eye and Gripping Hand

Footfall

Consider Phlebas

Foundation Trilogy

Dorsai

Hammer's Slammers

Earthlight

2001

The Stars My Destination

Rendezvous with Rama

Star Fox and Fire Time

The (Frank Herbert only) Dune canon

Starship Troopers

Ender's Game (but the sequels not so much, except for the ones that involve Bean)

Poul Anderson's Polesotechnic League canon

Anonymous said...

FBH

I'm not sure we can really say whether this is a case of reality being unrealistic or not. We don't exactly have a lot of data to go on about other sapience's thought processes here. As you said, no data.

----

Well, not making something up based on no data is always more plausible than making something up based on no data ... my company's management policies notwithstanding.

But if you want Aliens you will need to make something up.

It will seem like harder SF the less the aliens are like us.

But if their motivations seem off .. many wont find it plausible.

(SA Phil)

Tony said...

SA Phil:

"It will seem like harder SF the less the aliens are like us."

I don't think that necessarily follows. If you consider the mediocrity principle to be good, hard science, you should expect aliens to have similar motivations and logic processes, no matter what their gross physical features are. IOW, you should be able to have a game of chess with a bug.

Anonymous said...

I agree they would have understandable motivations, which will likely be very similar to ours.

But they might not think like we do. It may take 3 bugs to play chess with you, debating every move.

But it may only take 1 lantern fish thing to play chess against 20 humans and do calculus in its head at the same time.

(SA Phil)

Tony said...

SA Phil:

"I agree they would have understandable motivations. Which will likely be very similair to ours.

But they might not think like we do. It may take 3 bugs to play chess with you, debating every move.

But it may only take 1 lantern fish thing to play chess against 20 humans and do calculus in its head at the same time."


I can't argue with that. We know that there are humans who play chess by committee. We also know that there are humans who can outplay 20 other humans (and probably process integrals in their head at the same time).

FBH said...

Tony:

Hold on a minute there. Not similar minds. Similar fundamental natures, brought forth by natural competition to get to the top of their respective ecological pyramids.

What I find implausible is the idea that intelligence could arise without competition, or that, having arisen through competition, it would take a left turn into excessive non-competitiveness.


None of that means that they wouldn't be kindly, or that their competitiveness might not arise in a fundamentally different way than ours. Plus, you know, that's not what the article said, and that was what I originally took issue with.

I think we've both made a mistake in not defining exactly what we mean by kindly. You say below that you've read Consider Phlebas and consider that good SF. Wouldn't you consider the Culture, whose whole reason for being is to try to help (on their careful, mathematical definition of help) other people, to be kindly? At least in some ways.

Telling implausible stories is a terrible way to tell stories. Having arisen from a competitive environment, if some bunch of humans or aliens altered their minds to be more whatever and less violent, the rest would either tolerate them (while most likely snickering behind their backs) or destroy them (if they wasted too many resources).

The idea that a whole species would alter its minds to be kinder and gentler is simply not very believable. Or, stipulating that they did, I would expect a good story would expose what a silly idea that is in a hostile universe.

See, you appear to think that absolute kindness is a positive goal for a society or a species. I think it's positively suicidal.


This is what I mean about what you're saying being overly political. Right now we're at a point in our history where major war is so destructive that it could, if not doom the species, then at least doom technological civilization. Conflict has brought humanity far closer (within two decisions of nuclear war during the Cuban Missile Crisis, though fortunately that wouldn't have killed everyone) to its own annihilation than has a surfeit of cooperation.

Maybe these aliens who've altered themselves to be more cooperative and altruistic are survivors of just such a war. You seem to be thinking of something along the lines of engineering themselves so they cannot commit violence at all, but that's unlikely to be possible. Maybe they've just engineered themselves (from a base state that looks exactly like humans) so that whereas the human morality centre coughs and dies when faced with the deaths of more than a few people, they keep feeling the loss of it all the way up. That a million deaths is to them not a statistic but a tragedy of the kind of scale it actually is.


Please don't try to be clever.

If you've read as much science fiction as I have, you should have recognized by name some of the works/series I alluded to. They were all popular in their day. Some are popular right now.

But, to make a list of what I consider good SF...

List snip


I apologize for being snarky. This argument brewed up over the claim that good science is necessary for good science fiction. Now, I haven't read everything on your list, but I think you've kind of proven my point for me.

Hammer's Slammers, Consider Phlebas, etc. do not have good science. So if they're good science fiction, doesn't that really invalidate that rule?

Anonymous said...

=Milo=



FBH:

"I think the difference with most soft SF aliens is not that readers, in their heart of hearts, really think it's plausible that we'll go out and find hot blue women. It's more that hot blue women or lizard guys with coke bottle glasses for eyes or scary pointy looking dudes or people with lobsters on their heads are more accessible to them and to the writers."

Yeah, I essentially interpret soft sci-fi aliens as an author going "Yeah, I don't know either. Here, have some cute catgirls to make up for it."

I can still enjoy a soft sci-fi story if the story is good. And I do like cute catgirls.


"It doesn't take much to write interaction with a planet of the hats."

The important problem here is that a planet of hats is an unrealistic way to portray a fictional HUMAN culture, too.

Aliens will have cultural differences from humans, probably somewhat more severe ones than those between different human cultures. However, the nature of differences between human cultures is still a good model to use for how to portray such aliens. Avoiding planets of hats is included as one of the lessons to be learned from human cultural differences.



SA Phil:

"But they might not think like we do. It may take 3 bugs to play chess with you, debating every move.

But it may only take 1 lantern fish thing to play chess against 20 humans and do calculus in its head at the same time."


So you're saying that aliens may be more or less intelligent than us, while still being recognizably sentient.

Given that there is a wide range of intelligence levels among humans, this is really not shocking at all.

I consider it implausible that it would always take exactly 3 bugs to play a game of chess, against a human of any skill, and that 1 bug simply can't play the game at all even if it's just looking to have fun and doesn't care about winning.

Your lantern fish may be able to multitask better than us, and so we would have trouble keeping up with everything that it is doing at the same time, but we would still be able to comprehend any one of its tasks that we choose to focus on. Given a little extra time to sort out its multiple thought processes one by one, we would be able to fully understand how it's thinking. (I think it is actually plausible that computerized AIs will turn out to behave in this manner.)

Tony said...

FBH:

"None of that means that they wouldn't be kindly, or that their competitiveness might not arise in a fundamentally different way than ours."

Fundamentally different? I can't buy it. Mediocrity principle, remember? Superficially different (eight legs, four eyes, amphibious with gills, exoskeletons, whatever) I can buy.

"Plus, you know, that's not what the article said, and that was what I originally took issue with."

Yeah -- you said it was "really dumb". Now, it may not be the way you want the world to work, but taking the mediocrity principle into account, and using our knowledge of our own behaviors as a guide, whatever it is, it ain't dumb. It just doesn't fulfill your worldview or satisfy your sentiments.

"I think we've both made a mistake in not defining what exactly what we mean by kindly. You say below that you've read Consider Phlebas and consider that good SF. Wouldn't you consider the Culture, who's whole reason for being is to try to help (on their careful, mathematical definition of help) other people to be kindly? At least in some ways."

I consider the Culture to be authorial fiat on the part of Banks to justify socialism-with-abundance, by invoking post-scarcity economics.

Now, given the way he's set it up, what's really happening is that the Culture, for all intents and purposes, is really just the Culture Minds. The humanoid population, for all intents and purposes, are the Minds' pets (and sometimes experimental subjects).

The Minds themselves are not above, well, Lasting Damage (to borrow the name of a Culture warship), if that's necessary. Many Minds even seem to relish war from time to time. Likewise, Culture humanoids can be ridiculously callous and violent, as made plain in every Culture work.

"This is what I mean about what you're saying being overly political. Right now we're at a point in our history where major war is so destructive it could if not doom the species then at least doom technological civilization. Conflict has bought humanity far closer (within two decisions of nuclear war during the Cuba Missile Crisis, though fortunately that wouldn't have killed everyone) to its own annihilation than has a surfeit of cooperation."

It takes a surfeit of cooperation, albeit within factions, to make global thermonuclear war possible. There's no politics in that. It's a feature of human intelligence (and I would think all intelligence) that it can be both destructive and constructive, both competitive and cooperative -- and all four at the same time.

You go on and on about the simplicity of my thought process, without recognizing that you're playing a game of black and white yourself.

"Maybe these aliens who've altered themselves to be more cooperative and altruistic are survivors of just such a war. You seem to be thinking something along the lines of engineering themselves so they cannot commit violence at all, but that's unlikely to be possible. Maybe they've just engineered themselves (from a base state that looks exactly like humans) so that where as the human morality centre coughs and dies when faced with the deaths of more than a few people they keep feeling the loss of it all the way up. That a million deaths is to them not a statistic but a tragedy of the kind of scale it actually is."

They'd never be able to contemplate necessary acts of survival in that case. They'd choke on their own sentiments if they had to fight a competitor that seriously threatened them. Sounds like a good plot seed, but not for the kind of story that I suspect you in particular would like to read.

Tony said...

FBH:

"I apologize for being snarky. This argument brewed up over the claim good science is necessary for good science fiction. Now, I haven't read everything on your list but I think you've kind of proven my point for me.

Hammer's Slammers, Consider Phlebas, etc. do not have good science. So if they're good science fiction, doesn't that really invalidate that rule?"


The works you've mentioned by name do not have 100% 21st Century natural science, no. They, and all of the works on my list, invoke magitech from time to time, sometimes a lot. But they do it in a consistent manner. They also respect natural science when magitech is not required.

I'm not an ideological purist. Just don't piss on me and tell me it's raining.

FBH said...

Tony:
Fundamentally different? I can't buy it. Mediocrity principle, remember? Superficially different (eight legs, four eyes, amphibious with gills, exoskeletons, whatever) I can buy.

You don't buy that say, a herbivore or a being with a larger monkey (worm, insect, angler fish) sphere might compete with its environment differently than a human? I'm pretty sure that the mediocrity principle does not mean "Things that are actually dissimilar will act in exactly the same fashion."

Especially given that groups of humans don't compete with one another in the same way as other groups of humans. There's a hell of a lot of difference between the competition between the big banks on Wall Street, the competition between drug cartels and the government in Mexico, and the competition between the Axis and Allies in WW2.

Yeah -- you said it was "really dumb". Now, it may not be the way you want the world to work, but taking the mediocrity principle into account, and using our knowledge of our own behaviors as a guide, whatever it is, it ain't dumb. It just doesn't fulfill your worldview or satisfy your sentiments.

Yes. It remains really dumb. One could just as easily say "your aliens will probably not be hostile and evil because a hostile group would have blown itself up with nuclear weapons before developing space flight."

Also you're still talking like the mediocrity principle doesn't allow differences between different things.

FBH said...

I consider the Culture to be authorial fiat on the part of Banks to justify socialism-with-abundance, by invoking post-scarcity economics.

Now, given the way he's set it up, what's really happening is that the Culture, for all intents and purposes, is really just the Culture Minds. The humanoid population, for all intents and purposes, are the Minds' pets (and sometimes experimental subjects).

The Minds themselves are not above, well, Lasting Damage (to borrow the name of a Culture warship), if that's necessary. Many Minds even seem to relish war from time to time. Likewise, Culture humanoids can be ridiculously callous and violent, as made plain in every Culture work.


Except without the humanoid population the Culture Minds would all just sublime away. While the Minds may seem to rule, they're actually, in effect, an extension of the collective will.

Also that's not the point. You admit you like the book. So... what exactly is your problem with any other book with aliens who act in an internally consistent way that you consider inconsistent with reality?

It takes a surfeit of cooperation, albeit within factions, to make global thermonuclear war possible. There's no politics in that. It's a feature of human intelligence (and I would think all intelligence) that it can be both destructive and constructive, both competitive and cooperative -- and all four at the same time.

You go on and on about the simplicity of my thought process, without recognizing that you're playing a game of black and white yourself.


I recognize it all right, but the thing I'm complaining about dealt only with one aspect of it. Never did it suggest that you shouldn't have aliens who are mindlessly hostile because they'd have wiped themselves out already, for instance.

The best thing you can say about competition is that it may spur people to innovate or spend resources they might not have before to secure themselves, and it lets humans defend themselves from dangerous animals and stops tyrannies developing so easily. If we didn't have cooperation, or perhaps even didn't have such a high degree of it, we'd probably be about as badly off, in terms of the total population we'd be able to build, as post-nuclear survivors.

That's balanced against potentially causing global nuclear destruction. I think in the aftermath of that destruction, you might find many people who thought a little less conflict and a little more altruism would do the species good.

The idea that "if we meet foreign intelligences they will likely be hostile because we are hostile" is actually a highly political statement. There's a whole theory of international relations based around that very idea. It's kind of well, wrong, but it's does exist.

They'd never be able to contemplate necessary acts of survival in that case. They'd choke on their own sentiments if they had to fight a competitor that seriously threatened them. Sounds like a good plot seed, but not for the kind of story that I suspect you in particular would like to read.

What acts necessary for survival? Even if they need to threaten us with nuclear weapons, well... it's possible to threaten people, you know.

Frankly, if we're in a situation where technology still allows us to fight, they'd probably have much higher technology than us because they're not killing off lots of their people in giant industrial struggles like we are. If we don't, then it probably means that we're meeting on a technology base that allows destruction to be carried across the distance between us quickly enough that war is effectively impossible.

It could be a good story, but 'rah rah, we need our darker impulses' has been done a million times already. I think I'd be more interested in reading a story about how these new cooperative humans ended up with a vastly superior tech base to ours because they didn't keep slaughtering one another and wasting resources.

FBH said...

The works you've mentioned by name do not have 100% 21st Century natural science, no. They, and all of the works on my list, invoke magitech from time to time, sometimes a lot. But they do it in a consistent manner. They also respect natural science when magitech is not required.

I'm not an ideological purist. Just don't piss on me and tell me it's raining.


If you like books that are internally consistent no matter what science they have, then you don't need to obey natural science to write good science fiction. That was what was being argued here.

Anonymous said...

=Milo=



"That's balanced against potentially causing global nuclear destruction. I think in the aftermath of that destruction, you might find many people who thought a little less conflict and a little more altruism would do the species good."

Funny how many post-apocalyptic stories show people turning to raiding to survive.

Then again, they also show other people forming tightly-knit communities because they know they depend on each other to survive in a hostile environment. That, however, is more on a small community level rather than a species-wide level. Any reduction in violence in the post-apocalyptic world will be less due to altruism and more due to no longer having the industrial infrastructure to build weapons of mass destruction.

FBH said...

Milo

I'm assuming some surviving technological enclave here and a similar process to the feeling that it must never happen again that was in play after WW1. Just on a wider scale.

So maybe a space station or moon base survived the nuclear exchange back home and rebuilt from there, only now with their brains changed.

Anonymous said...

=Milo=



FBH:

"a similiar process to the feeling that it must never happen again that was in play after WW1"

Yeah, that turned out great.

Scott said...

As an interesting and related point:

Animation (and to an even greater extent Japanese Anime), as a genre, tends towards the fantastic.

Why?

Because the entire show is a special-effects film. It doesn't cost anything extra to make it magical or scifi, because every shot is nothing but what would be special effects in a live-action film.

I'd argue that the laws should be:
1) know the limits, strengths and weaknesses of your media.
2) tell an interesting story.
3) be consistent in how your world works.
4) you only get ONE free suspension of disbelief; every additional one must be earned.

Tony said...

FBH:

"You don't buy that say, a herbivore or a being with a larger monkey (worm, insect, angler fish) sphere might compete with its environment differently than a human? I'm pretty sure that the mediocrity principle does not mean 'Things that are actually dissimilar will act in exactly the same fashion.'"

They might compete in superficially different fashion, but they still compete to survive and expand their population/range. If you haven't seen it, I suggest you view the recent Discovery series Planet Earth. It covers most major habitats at most scales. It could have easily -- and more accurately, given the footage chosen for the final cut -- been entitled Eat and Be Eaten. It's very enlightening.

Or perhaps you should read up some on ecology, particularly the part where everything in an ecosystem is reduced to an energy pyramid. That's very enlightening as well.

There is in fact no dissimilarity, at the fundamental level, in biological competition. It's all about eating and being eaten, at every level and in every habitat. Even apex predators like us play the game, by being recycled back down to the bottom of the pyramid after we die.

"Especially given groups of humans don't compete with one another in the same way as other groups of humans. There's a hell of a lot of difference between the competition between the big banks on wallstreet, the competition between drug cartels and the government in Mexico, and the competition between the Axis and Allies in WW2."

I personally know people who were or are involved in all three forms of competition. I've personally been involved in two of them (conventional war and anti-narcotics policing). I don't think we'd have to look hard to find a Wall Street banker who personally participated in WWII. It wouldn't surprise me one bit that a Wall Street banker buys either Mexican government bonds or smokes Mexican narcotrafico weed -- or both, directly financing that sphere of competition.

See, humans aren't monogamous in the ways they compete. They pretty much compete in any way they need to, at any given time.

"Yes. It remains really dumb. One could just as easily say 'your aliens will probably not be hostile and evil because a hostile group would have blown its self up with nuclear weapons before developing space flight.'

That's a theorized, not an observed behavior.

"Also you're still talking like the mediocrity principle doesn't allow differences between different things."

Thing is, I'm not seeing different things. What we know about biological competition makes it all out to be the same thing, in the end. It should affect biological entities in much the same way, whether it happens here or on a planet circling a star a thousand light years away.

Tony said...

FBH:

"Except without the humanoid population the culture minds would all just sublime away. While the minds may seem to rule they're actually, in effect, an extension of the collective will."

No they're not. They don't live to do what humans want. They often keep humans from doing what they want. Just like any master of any pets.

"Also that's not the point. You admit you like the book. So... what exactly is your problem with any other book with aliens who act in an internally consistent way that you consider inconsistent with reality?"

I don't think the Culture is unrealistic, given the backstory. I do think that people who think it's a completely benign post-scarcity society are not thinking things through. There may be effectively no scarcity within the Culture, but the Culture still lives within the physical world. The resources for all of its wealth have to come from somewhere. They come at the cost of the Culture controlling and exploiting resources that other civilizations might use, and limiting other civilizations' scope. IOW, the Idirans were probably the good guys, even if for bad reasons.

"I recognize it all right, but the thing I'm complaining about dealt only with one aspect of it. Never did it suggest that you shouldn't have aliens who are mindlessly hostile cause they'd have wiped themselves out already for instance."

Please don't engage in argumentum ad absurdum. One doesn't have to be "mindlessly hostile" to be existentially dangerous. One just has to rate his own survival one tick above yours.

"The best thing you can say about competition...If we didn't have cooperation..."

Competition and cooperation aren't at odds.

Competition just is. It's part of our biological nature.

Cooperation is actually a response to competition, and it's not the blessing you seem to think it is. Cooperation developed so that groups could compete more effectively against individuals and, eventually, against other groups. It is in fact, even today, most often a tool that aids competitiveness.

"That's balanced against potentially causing global nuclear destruction. I think in the aftermath of that destruction, you might find many people who thought a little less conflict and a little more altruism would do the species good."

I think you're still living in the Cold War. Because that's certainly a line of thought I heard often during that time.

"The idea that 'if we meet foreign intelligences they will likely be hostile because we are hostile' is actually a highly political statement. There's a whole theory of international relations based around that very idea. It's kind of well, wrong, but it's does exist."

Wrong how?

(I'm pretty sure I know where you're going to take this, but I'm interested in hearing you articulate it.)

"What acts necessary to survival? Even if they need to threaten us with nuclear weapons well... it's possible to threaten people you know."

It's entirely impossible to issue a credible threat if people know you won't do it.

Tony said...

FBH:

"Frankly, if we're in a situation where technology still allow us to fight, they'd probably have much higher technology than us because they're not killing off lots of their people in giant industrial struggles like we are. If we don't then it probably means that we're meeting on a technology base that allows destruction to be carried across the distance between us quickly enough that war is effectively impossible."

Now that's thinking from fairly early in the Cold War, 1960 or so. And it was manifestly wrong back then too. Our wars haven't retarded our technological development. In fact they are most noteworthy for accelerating technological development through intense existential competition.

Now, I'm not saying that what we need is war. I am saying that war as a negative technological force is pure bunk.

"It could be a good story, but rah rah we need our darker impulses has been done a million times already. I think I'd be more interested in reading a story about how these new cooperative humans ended up with a vastly superior tech base from us because they didn't keep slaughtering one another and wasting resources."

IOW, you'd rather read politically motivated fantasy. Yes, I said "politically motivated", because universal peace, prosperity, and progress is a political promise, usually made by Marxists, union organizers, and libertarians.

"If you like books that are internally consistent no matter what science they have then you don't need to obey natural science to write good science fiction. That was what was being argued here."

Don't put words in my mouth, please. I didn't say no matter what science. I said no matter what magitech, provided that the magitech is internally consistent, that it interfaces with the natural world in a plausible way, and that the natural world is otherwise consistent with what is known at the time of writing. Not just any science will do.

FBH said...

Tony:

They might compete in superficially different fashion, but they still compete to survive and expand their population/range. If you haven't seen it, I suggest you view the recent Discovery series Planet Earth. It covers most major habitats at most scales. It could have easily -- and more accurately, given the footage chosen for the final cut -- been entitled Eat and Be Eaten. It's very enlightening.

Or perhaps you should read up some on ecology, particularly the part where everything in an ecosystem is reduced to an energy pyramid. That's very enlightening as well.

There is in fact no dissimilarity, at the fundamental level, in biological competition. It's all about eating and being eaten, at every level and in every habitat. Even apex predators like us play the game, by being recycled back down to the bottom of the pyramid after we die.


Nice and condescending, but not particularly useful.

Obviously if you engage in enough reductionism then all complex life does indeed look similar. However, the behaviour that a solitary predator engages in to eat is not the same as that of a herd-dwelling plant eater, or a carrion eater.

Any of which could result, in sapient versions of these species, in different ethics. Assuming that ethics even primarily result from biology, rather than developed social relations, which just adds a vastly larger layer of complexity to the whole issue. Even worse, it could be a combination of both.

So in other words, unless you have a definition of kindly that requires complete pacifism in the face of any threat, then yes, aliens can in fact be kindly. (So honestly could humans. It just depends on which humans and in what circumstances.)

I personally know people who were or are involved in all three forms of competition. I've personally been involved in two of them (conventional war and anti-narcotics policing). I don't think we'd have to look hard to find a Wall Street banker who personally participated in WWII. It wouldn't surprise me one bit that a Wall Street banker buys either Mexican government bonds or smokes Mexican narcotrafico weed -- or both, directly financing that sphere of competition.

See, humans aren't monogamous in the ways they compete. They pretty much compete in any way they need to, at any given time.


You miss the point completely. Competition does not have to be destructive or even really bad (though we glorify it far more than we should). Even if we presuppose aliens and humans as competitive, how this competition manifests will be a matter of circumstance and detail. Kindliness does not actually prevent competition.

WW2 vets who fought in others' defence were engaging in altruism. So are Wall Street bankers who engage in philanthropy or raise money for charity. I suspect there are even Mexican narco-terrorists who are kind to their children or pets.

An alien race that prevents a comet from slamming into the Earth, or forbears to drop nuclear bombs on us, might still be competitive in other ways. It just depends on who they are and what they believe and how we meet.

That's a theorized, not an observed behavior.

You don't think a buffalo differs in behaviour from a wolf?

Thing is, I'm not seeing different things. What we know about biological competition makes it all out to be the same thing, in the end. It should affect biological entities in much the same way, whether it happens here or on a planet circling a star a thousand light years away.

If you reduce the behaviour of organisms down to a low enough point then yes, it will look similar.
However, if you do that you also can't say anything useful about it. Saying "All organisms must compete so as not to be eaten or starve" is fine, but it tells you nothing about the behaviour of a particular organism or class of organisms before you.

FBH said...

Tony

No they're not. They don't live to do what humans want. They often keep humans from doing what they want. Just like any master of any pets.

They live collectively to do what the humans want. I.e., they have the Culture's social ethics. I don't remember which book it's in... I think it's either Excession or Look to Windward, but if you attempt to create a 'pure' Mind without any form of ethical programming then it will immediately sublime.

So the Minds very much rely on the existence of the Culture's human population, which is the source of its ethics, to provide them with values and keep a purchase on the universe.

I don't think the Culture is unrealistic, given the backstory. I do think that people who think it's a completely benign post-scarcity society are not thinking things through. There may be effectively no scarcity within the Culture, but the Culture still lives within the physical world. The resources for all of its wealth have to come from somewhere. They come at the cost of the Culture controlling and exploiting resources that other civilizations might use, and limiting other civilizations' scope. IOW, the Idirans were probably the good guys, even if for bad reasons.

It's pretty clear the Culture is not entirely benign, both in that it doesn't really live up to its morality all of the time (hello, Grey Area), and in that some of its methods are potentially quite sinister. For instance, we've seen them deliberately start civil wars and do all manner of Special Circumstances work if they feel it'll be better in the long term.

I don't really think that your argument about them stealing people's resources holds water though, given their access to basically endless sources of energy (the Grid), effectively endless matter (from brown dwarfs), and the fact that their population isn't exactly massive.

FBH said...

Please don't engage in argumentum ad absurdum. One doesn't have to be "mindlessly hostile" to be existentially dangerous. One just has to rate his own survival one tick above yours.
Competition and cooperation aren't at odds.

Competition just is. It's part of our biological nature.


And you don't see how this allows kindly aliens (to a given value of kindly) just as much as it allows hostile ones?

Maybe they do rate their survival a tick above ours, but their survival is not threatened. The existence of French nuclear weapons is existentially dangerous to me. I'm certain that if it ever came down to a choice between England and France, the French would push the button. After all, their duty is to their citizens first and foremost.

That doesn't mean that I see France as an existential threat though, because there's basically no chance of them dropping nukes onto me. A similar dynamic might exist between species.

Cooperation is actually a response to competition, and it's not the blessing you seem to think it is. Cooperation developed so that groups could compete more effectively against individuals and, eventually, against other groups. It is in fact, even today, most often a tool that aids competitiveness.

It exists because groups can survive more effectively and happily if they pool their efforts than if they don't. Groups may compete, or they may cooperate. Groups certainly don't exist just to compete, however.

If we were to place twenty people in a desert with no other life, and an adequate supply of water and food, they might just as well cooperate (even though there's nothing to compete with) as they might compete.

FBH said...

Wrong how?

(I'm pretty sure I know where you're going to take this, but I'm interested in hearing you articulate it.)


It's not supported by real life.

China and the EU, or China and the USA, or the EU and the USA may remain in some forms of competition (mostly economic), but their relations are still relatively friendly and no cold war has developed between them. This is despite the loss of the threat collectively represented to them by the Soviet Union, which according to many IR theories was the source of their alliance.

This is because the benefits of cooperation outweigh those of competition and also at least in the EU/US's case there's a real feeling of kinship based on all being liberal democracies.

It's entirely impossible to issue a credible threat if people know you won't do it.

Technically true, but keeping such an operation secret is practically impossible.

Now that's thinking from fairly early in the Cold War, 1960 or so. And it was manifestly wrong back then too. Our wars haven't retarded our technological development. In fact they are most noteworthy for accelerating technological development through intense existential competition.

Now, I'm not saying that what we need is war. I am saying that war as a negative technological force is pure bunk.


I actually think the belief that war somehow accelerates the general technological base, as opposed to just the military technological base, is psychologically rather interesting. I think there's a need to believe, especially with the big wars like WW2, that some good came out of it. That those sixty million corpses somehow had a good effect.

I don't, however, think it's true. How many geniuses do you think died in those sixty-six million? How many technologies were defunded by the need to rebuild shattered cities? What else could we have done with the resources?

When I was much younger I volunteered to work at a place called Mottisfont, a house with beautiful painted walls done by a famous artist named Rex Whistler. In one corner of one of the rooms was a note, painted by Whistler, to the effect that he was going to fight the Nazis, and he'd finish up the house when he got back.

Only he never came back. He died in Caen in 1944.

I think it's pretty likely that without war our technological base would be much higher. It wouldn't be as good in military technology, but in general, it'd likely be superior. And I think the reason people in the 1960s felt the same was because they hadn't gone 50 years without a major war and remembered how goddamn awful it was.

. . .

Okay I think we should maybe re-concentrate this debate onto the salient points. 4 posts per response is too many.

Tony said...

FBH:

"Nice and condescending, but not particularly useful."

Not intended to be condescending. Intended to make a point -- you've gotta know what you're talking about, and not just superficially either.

"Obviously if you engage in enough reductionism then all complex life does indeed look similar. However the behaviour that a solitary predator engages in to eat is not the same as a herd dwelling plant eater, or a carrion eater."

Different behaviors, same objectives and results. And the fundamental strategies are few and pretty simple. Colinvaux, for example, boils reproductive strategies down to two basic ones: have lots of cheap offspring or have few expensive ones (in terms of energy and nurturing effort).

Energy gathering strategies are basically three: predation, plant eating, and scavenging. For any individual species, it's usually a combination of any two or even all three, in some ratio.

Interestingly enough, when we talk about these strategies in casual conversation, we tend to deprecate the pure anything. Pure predators are too vicious, pure plant eaters too passive, and pure scavengers too opportunistic and unreliable. In reality of course, none of these is entirely the case. (Though sharks and spiders come really close, which is why people probably fear them the most.) But it's interesting to note how people tend to frame things...

"Any of which could result, in sapient versions of these species in different ethics. Assuming that ethics even primarily result from biology, rather than developed social relations which just adds a vastly larger layer of complexity to the whole issues. Even worse it could be a combination of both."

The overriding ethic for a successful species is survival. How one develops and applies situational ethics beyond that is interesting in many ways, but it doesn't trump survival, for the community if not for oneself.

"So in other words, unless you have a definition of kindly that requires complete pacifism in the face of any threat, then yes, aliens can in fact be kindly. (So honestly could humans. It just depends on which humans and in what circumstances.)"

I'm pretty sure that, taken in context, by "kindly" the man meant exaggeratedly passive and non-threatening. I don't buy that in a species that arose through competition -- any kind of competition.

"You miss the point completely. Competition does not have to be destructive or even really bad (though we glorify it far more than we should). Even if we presuppose aliens and humans as competitive, how this competition manifests will be a basis of circumstance and detail. Kindliness does not actually prevent competition."

I would argue that kindliness prevents effective competition against someone who isn't as inherently kindly. The too kindly don't last very long, no matter how morally superior one might evaluate them to be. That's precisely why I would expect a certain level of competitiveness and ruthlessness in any alien I meet. He's not likely to have gotten to meet me without being a good, tough competitor, whatever other qualities he might possess.

"WW2 vets who fought in others defence are engaging in altruism."

But they weren't engaging in a violent act of community survival?

"So are Wallstreet Bankers who engage in philanthropy or raise money for charity."

And receive tax breaks and good PR, both of which enhance their competitiveness.

"I suspect there's even Mexican Narco terrorists who are kind to their children or pets."

Kiss the wife & kids, pet the dog, then go out and execute a squad of policemen. That's got "kindly" written all over it.

Tony said...

FBH:

"An alien race that prevents an a comet slamming into the earth, or forebears to drop nuclear bombs on us might still be competitive in other ways. It just depends on who they are and what they believe and how we meet."

Or they might not compete with us in an existential fashion because they're afraid of what we can do if they aren't totally successful. Or maybe we won't compete with them for the same reason. Or maybe they haven't made up their minds yet. Or anything, really. Just don't bet the survival of the human race on their being nice guys forever.

"You don't think a Buffalo differs in behaviour from a Wolf?"

Funny you should mention that. Ever seen video of a bison herd charging a wolf (on line, just like the Heavy Brigade at Balaclava) in order to deter it from taking a calf? Ever seen video of a cape buffalo cornering two lions on top of a boulder, the lions not wanting to come down for fear of what the buffalo might do to them if they did?

The motivations may be different, but the violence can be just as deadly. I have no trouble imagining buffalo aliens deciding that humans are the functional equivalent of lions or wolves and handing us the same treatment.

"If you reduce behaviour of organisms down to a low enough point then yes, it will look similar.
However if you do that you also can't say anything useful about it. Saying 'All organisms must compete so not to be eaten or starve' is fine but it tells you nothing about the behaviour of a particular organism or class of organisms before you."


It tells you that if they are successful and numerous, they are good competitors. How they compete and why they compete is relevant to informing you how not to annoy them, or how not to look like prey. But being annoying or looking like prey will get you the same result as being annoying or looking like prey to any competitor.

FBH said...

Tony

Let's try to concentrate this down a little bit more. It's getting a bit long.

I'm not saying that we'll find a race incapable of doing violence unless it's somehow been modified so it can't. I guess you could speculate on something like a species with a layer of poison contained within its skin which kills stuff that bites it but tends to harm it too, so it's evolved to minimize conflict; but in general, sure, everything is likely to be able to kill, even if it's not its preferred strategy.

That doesn't however mean that you can't have a species that chooses or is biologically predisposed to be kindly for some reason. I doubt any human planner would ever bet on alien beneficence in the long term, but that doesn't mean the aliens may not be peaceful in the long term.

It doesn't even mean they might not do seriously nice things for us if they have the power to do so. In some situations there may be conflict... but conflict is by no means the rule.

It's certainly a hell of a lot more complicated than "Aliens who are kindly are unlikely to have the drive to develop space flight."

Humans are ultimately not unkind, and yet we have developed spaceflight, as limited as it may be.

Anonymous said...

FBH

I think it's pretty likely without war our technological base would be much higher. It wouldn't be as good in military technology, but in general, it'd likely be superior. And I think the reason people in the 1960s felt the same was because they hadn't gone 50 years without a major war and remembered how goddamn awful it was.
-------

Some of our tech might be a lot lower too.

Our space launch capability came directly from our ability to launch nuclear missiles.

(SA Phil)

FBH said...

SA Phil

You don't think the benefits of communication and weather satellites and the like would be enough to propel humans to investigate space travel anyway?

Just because it did happen one way doesn't mean that's the only way it could happen.

Anonymous said...

FBH said...

You don't think the benefits of communication and weather satellites and the like would be enough to propel humans to investigate space travel anyway?

-----------

I wonder if you would bother coming up with applications for your nonexistent space launch technology - and then develop the space launch technology.

Or would you come up with other applications that did not require a technology you did not have?

Shortwave radio, repeater antennas, physical cables .. etc instead of comm satellites.

Land and Sea based weather stations perhaps.

(SA Phil)

Tony said...

FBH:

"That doesn't however mean that you can't have a species that chooses or is biologically predesposed to be kindly for some reason. I doubt any human planner would ever bet on alien beneficence in the long term, but that doesn't mean the aliens may not be peaceful in the long term."

What's the motivation to be peaceful? Going back to ecology, what we know about biological systems in close contact is that they refrain from molesting each other for calories on only three grounds: too dangerous to attempt, too much effort for the return, or a symbiotic relationship is more profitable. Pretty much the rest of the time it's eat what you can.

So, the first deterrent is the one I'd choose by default. Let's make sure we're really dangerous to molest. It's simply better to be wrong that way than any other.

The second deterrent only occurs if we're not in close contact, or if contact is expensive. That puts it out of the "meet-n-greet" regime, so out of this discussion.

The third deterrent only applies if something mutually beneficial can be found. But, nota bene, even symbiotic animals can turn on each other for food, given the right set of incentives.

"It doesn't even mean they might not do seriously nice things for us if they have the power to do so. In some situations there may be conflict... but conflict is by no means the rule."

No, it's not a rule. But it is a risk that can never be completely ruled out -- and should always be ruled in until enough data are gathered.

"It's certainly a hell of a lot more complicated than 'Aliens who are kindly are unlikely to have the drive to develop space flight.'"

Pay special attention to the word "unlikely". That wraps the whole discussion up in a nutshell, and tells us how we should best consider the issues. Whatever aliens might or might not be, it is imprudent in the extreme to default to any position other than watchful caution.

If you're still thinking that's not what you want to read, please understand that an honest writer will always write the world the way it really is. Given what we know about competition here on Earth, the way the world really is is dangerous. Living in wealthy, relatively peaceful countries blinds many of us to this. But it is so -- much more dangerous than the comfortable know.

Having been outside the cocoon for long stretches of time, I would simply not expect the universe and its occupants to be any different. They live here too, you know (for certain, galactic values of "here").

YMMV, but don't expect me to be convinced by your suasions. I've seen too much in my own life to be convinced otherwise.

"Humans are ultimately not unkind, and yet we have developed spaceflight as limited as it maybe."

Humans are what!? Grrr...humans are selfish, cruel, and ultimately (to borrow a turn of phrase) concerned only about taking the next breath, even at the cost of somebody else's breath. Just ask the Jews who helped run the gas chambers, so they themselves could live a few more weeks. Humans cooperate much more readily to gain advantage than they do to spread the wealth. They sacrifice much more easily to defend their own communities than to help out somebody over the hill or across the river. That's what humans are.

We developed spaceflight in competition with each other. Apollo was about scoring points in the Cold War, not about the destiny of the species. Shuttle was about how the US could maintain its international image as a cutting edge leader in the Cold War. The money that actually goes into real exploration? A few billion dollars a year -- chickenfeed, really.

FBH said...

SA Phil
I wonder if you would bother coming up with applications for your nonexistent space launch technology - and then develop the space launch technology.

Or would you come up with other applications that did not require a technology you did not have?

Shortwave radio, repeater antennas, physical cables .. etc instead of comm satellites.


The same applies to military rockets though. You can deliver a nuclear weapon just as well by a bomber which was proven technology. We developed ICBMs because they were better and might well develop satellites for the same reason.

Tony

What's the motivation to be peaceful? Going back to ecology, what we know about biological systems in close contact is that they refrain from molesting each other for calories on only three grounds: too dangerous to attempt, too much effort for the return, or a symbiotic relationship is more profitable. Pretty much the rest of the time it's eat what you can.

So, the first deterrent is the one I'd choose by default. Let's make sure we're really dangerous to molest. It's simply better to be wrong that way than any other.

The second deterrent only occurs if we're not in close contact, or if contact is expensive. That puts it out of the "meet-n-greet" regime, so out of this discussion.

The third deterrent only applies if something mutually beneficial can be found. But, nota bene, even symbiotic animals can turn on each other for food, given the right set of incentives.


The big difference here is that both sides are tool-using sapients, and probably fairly easily recognizable as such to one another. That introduces such inconvenient things as religion, philosophy and morality into it.

For instance: sapient beings are able to apprehend the suffering of other creatures in a way that non-sapient beings cannot. This could play out in acts of great mercy towards beings less fortunate than themselves, especially if there's no particular downside for them in being nice.

No, it's not a rule. But it is a risk that can never be completely ruled out -- and should always be ruled in until enough data are gathered.

Sure, but that's not what we're arguing about. We're arguing about whether, in a completely fictional story, kindliness is necessarily a story killer. If we were to meet an alien race tomorrow I'd be all for building defences and making sure that until we'd found out their intentions they couldn't blow us up.

But that doesn't mean that aliens who are non-hostile or even actively altruistic cannot exist.

FBH said...

Tony

If you're still thinking that's not what you want to read, please understand that an honest writer will always write the world the way it really is. Given what we know about competition here on Earth, the way the world really is is dangerous. Living in wealthy, relatively peaceful countries blinds many of us to this. But it is so -- much more dangerous than the comfortable know.

Having been outside the cocoon for long stretches of time, I would simply not expect the universe and its occupants to be any different. They live here too, you know (for certain, galactic values of "here").

YMMV, but don't expect me to be convinced by your suasions. I've seen too much in my own life to be convinced otherwise.


You're mistaking my argument. It is not, primarily, that I think aliens are likely to be any better than humans. They may well be very ethically different from us, but it would be foolish to hope that if aliens came by they'd be flawless superhumans.

Just because I don't think war is some Heinleinian ennobler of men doesn't mean I think we should sink all the guns to the bottom of the ocean and hope. I'm well aware of the brutal facts of global power and geopolitics.

On a more frivolous note I also read, and indeed write, a lot of stories about hostile aliens. None of that has anything to do with the argument at hand, which is "Is it possible to have kindly aliens?"

My argument is that kindliness is a valid possibility. It may be because of circumstance, because of religion or philosophy. It might even be because they reprogrammed their brains. Whatever the reason, it is a valid possibility and thus has no place as an absolute on a list of how to write SF.

The problem with kindly aliens, in my view, is not that they're unlikely; it's that they're usually dull. They're usually smug plot devices about how dreadful humans are.

Anonymous said...

=Milo=



FBH:

"How many geniuses do you think died in those sixty six million?"

This can delay discoveries, but it won't prevent them. Sooner or later someone else will stumble upon the ideas those geniuses might have.



Tony:

"Energy gathering strategies are basically three: predation, plant eating, and scavenging."

Close. In the most technical sense, plant eating is also a form of predation, in that you are eating a living thing, just one that belongs to Embryophyta rather than Metazoa, and probably protests less strenuously to your actions.

More accurately, the energy and nutrient gathering strategies are:
1. Predation, the killing and eating of living things. (This includes common forms of both herbivory and carnivory.)
2. Parasitism, the stealing of vitality from other beings without killing them. (An important herbivorous example is grazing.)
3. Scavenging, the reappropriation of resources from something that was dead before you got there and so won't be needing them. (You might still come into conflict with other things trying to scavenge the same corpse. More often called decomposing when done by microscopic organisms.)
4. Primary production, the manufacture of resources from non-biotic sources such as sunlight and minerals. (Mostly done by plants rather than animals, but there is a photosynthesizing sea slug.)


"Interestingly enough, when we talk about these strategies in casual conversation, we tend to deprecate the pure anything. Pure predators are too vicious, pure plant eaters too passive, and pure scavengers too opportunistic and unreliable."

You do, maybe. I would not call an elephant a passive victim.

In fact, the only plant eaters I commonly see deprecated are farm animals like sheep, which are seen as stupid in large part because humans DELIBERATELY bred them to be placid to make them easier to work with. Even among domestic herbivores, horses have a better reputation, because we keep them to actually DO things, rather than having things done TO them.

The argument against pure predators has more traction in my mind; however, there is simply the fact that the most purely carnivorous of land mammals, cats, are also among the most popular pets. So we don't seem to hold it against them.

I think scavengers deserve more respect, because they're the ones who get their food without killing ANYTHING.

Anonymous said...

=Milo=



Tony:

"The overriding ethic for a successful species is survival. How one develops and applies situational ethics beyond that is interesting in many ways, but it doesn't trump survival, for the community if not for oneself."

Survival is not always or even usually in danger. If the threats to your survival are minor or nonexistent, other concerns might be granted higher priority.

Additionally, picking a fight with everyone you see is not an efficient method to ensure survival.


"I would argue that kindliness prevents effective competition against someone who isn't as inherently kindly."

That's what the tit-for-tat strategy is for.

The prisoner's dilemma may be a mathematical simplification, but it's a pretty decent model for the relative merits of cooperation versus competition, and thus the basis of morality. And at least this mathematical simplification has been studied exhaustively. The strategy that consistently comes out on top in iterated play -- tit-for-tat -- is not doing unto others until they have done unto you, at which point you punish them proportionately, then forgive them when and only when they stop being a nuisance.
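
(A minimal sketch of that idea, for anyone who wants to poke at it: an iterated prisoner's dilemma in Python, with the textbook payoff values, pitting tit-for-tat against an always-defect player. The payoffs, round count, and function names here are just illustrative assumptions, not anything canonical.)

```python
# Iterated prisoner's dilemma sketch: tit-for-tat vs. always-defect.
# Textbook payoffs (T=5, R=3, P=1, S=0); purely illustrative.

PAYOFF = {  # (my move, their move) -> my score; 'C' = cooperate, 'D' = defect
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first; afterwards copy whatever the opponent did last."""
    return 'C' if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    """The 'ruthless competitor' baseline."""
    return 'D'

def play(strategy_a, strategy_b, rounds=20):
    score_a = score_b = 0
    moves_a, moves_b = [], []
    for _ in range(rounds):
        a = strategy_a(moves_b)   # each player only sees the other side's history
        b = strategy_b(moves_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, always_defect))  # (19, 24): exploited exactly once, then it stops being a victim
print(play(tit_for_tat, tit_for_tat))    # (60, 60): mutual cooperation scores best of all
```

The kindly-but-retaliating player loses a single round to the pure defector and nothing more, while two kindly players outscore everybody.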



FBH:

"I'm not saying that we'll find a race incapable of doing violence unless it's somehow been modified so it can't."

Which is less likely to be something a race does to itself (as per the tit-for-tat strategy above, even if you're kindly enough to never START a conflict, it is suicidal to not be willing to defend yourself if attacked), and more likely something another race did to them in order to make them into cooperative slaves.


"Humans are ultimately not unkind, and yet we have developed spaceflight as limited as it maybe."

While I agree that humans have a good deal of kindness, I do believe our spaceflight is currently more limited than our unkindness.


"Whatever aliens might or might not be, it is imprudent in the extreme to default to any position other than watchful caution."

It is impossible to know what aliens will be like unless we contact and learn about them.

I do not believe the risks are high enough to justify living in a state of paranoia, remaining willfully ignorant and refusing to do anything that might draw aliens' attention to us.


"Given what we know about competition here on Earth, the way the world really is is dangerous. Living in wealthy, relatively peaceful countries blinds many of us to this."

Corollary: it is in fact possible to reduce competition if you have sufficient wealth to go around.

Anonymous said...

=Milo=



Actually, I forgot a resource gathering strategy:
5. Symbiosis, where another being willingly gives you food in return for some service. (For example, plants growing nectar in their flowers to attract pollinators.) Where that other being got the food from is another question.

Anonymous said...

FBH

The same applies to military rockets though. You can deliver a nuclear weapon just as well by a bomber which was proven technology. We developed ICBMs because they were better and might well develop satellites for the same reason

-------------

I don't agree with this one. The bomber had drawbacks. Two big ones that affected mission success.

If you don't have air superiority, bombers have much lower survival rates - a big problem when you are using nuclear weapons.

Second, attacking another country with nuclear bombers can make the pilots' survival rates really low due to the radiation from the bombs, especially on attack runs 2, 3, 4, etc.

Germany had shown that guided rockets could be a workaround to these drawbacks. The nuclear missile and the ICBM came out of that.

In an arms race where you can only deliver 50% or 25% of your weapons .. you start falling behind really quick if they can deliver 75% or more.

(SA Phil)

FBH said...

SA Phil

Right, but at the same time, satellites have advantages over the stuff you suggested don't they?

And without the world wars there'd have been potentially a lot more money and people to discover them.

Anonymous said...

FBH said...

SA Phil

Right, but at the same time, satellites have advantages over the stuff you suggested don't they?

And without the world wars there'd have been potentially a lot more money and people to discover them.

==========

Yes, satellites have advantages, but I am not sure they would know that. Or if they did, would they know the level of advantage they would give?

How extensive was our satellite speculation in 1900?

Rocket development went pretty slow until people figured out they could make useful weapons out of them.

Having worked developing technologies for a long time - usually you are deriving things based on what you know how to do. Or at least think you know how to do. Not inventing a way to do brand new things so you can do some other brand new thing you aren't even sure will work.

If it hadn't been for World War 2 and the Cold War - it's very possible rocket development would still largely be demonstration projects done at universities.

And even warlike humans still wouldn't have weather and comm satellites.

(SA Phil)

Tony said...

FBH:

"Right, but at the same time, satellites have advantages over the stuff you suggested don't they?

And without the world wars there'd have been potentially a lot more money and people to discover them."


1. Wars make people invest in any technology that might give them an advantage. Wars also make people think big, so those investments are substantial. Just another example of competition focusing resources rather than frittering them away.

2. Wars do kill off people, often very talented and intelligent people. But the one thing that we can always make more of is talented, intelligent people.

3. Saying that too many good people or resources are wasted by war is special pleading, unsupported by any evidence whatsoever. It's good pathos, plays slap and tickle with ethos, but is very shallow, unconvincing logos.

Good luck!

Anonymous said...

Tony,

2. Wars do kill off people, often very talented and intelligent people. But the one thing that we can always make more of is talented, intelligent people.

------

The people that have a knack for developing technologies useful for breaking things tend to be protected though.

Like the German rocket scientists that facilitated this tangent.

(SA Phil)

-------------------
PS as a side note this is over two days and two threads I am agreeing with Tony on. Which might be scary.

Tony said...

SA Phil:

"The people that have a knack for developing technologies useful for breaking things tend to be protected though.

Like the German rocket scientists that facilitated this tangent."


Well, complete truth in advertising, Schwarzschild (the original black hole math guy) did die on the Russian front in 1916. (Yeah, there was an Eastern Front in WWI.) He died of a disease, but being on active service in the field probably didn't help matters much.

Still the point remains -- the one thing we probably have a surplus of, and the one thing we can always make more of, is smart, talented people.

"PS as a side note this is over two days and two threads I am agreeing with Tony on. Which might be scary."

I never thought we had that much to disagree about.

Thucydides said...

That wars can kill lots of talented people is a truism, but generally, only States where everyone is expected to serve (the ancient Greek polis or a nation in arms with a universal draft) would see people of the calibre of Socrates or Schwarzschild on the front lines.

WRT technological development, rocketry goes back a long way. There are reliable reports of rocket artillery used against the Japanese during their invasion of Korea in the 1500's, and the British used rocket artillery extensively during the Napoleonic War period (the line "And the rockets' red glare, the bombs bursting in air" is a reference to the rocket attack on Fort McHenry during the War of 1812).

Satellite technology was first predicted in the popular press by "The Brick Moon", a short story by Edward Everett Hale, published serially in The Atlantic Monthly starting in 1869. Sir Isaac Newton discussed the principle of orbital motion in the Principia in 1687. Konstantin Tsiolkovsky began writing on rocketry and space travel in the late 1890's.

What the war simply did was provide a catalyst to put these ideas together. If there had been no WWI and Treaty of Versailles, then we might still have achieved spaceflight by launching payloads into orbit with bigger and bigger guns. The Paris Gun of WWI fame had shells grazing the edge of the atmosphere, and Gerald Bull was well on the way with Project HARP in 1961. Jules Verne anticipated this in his novel "From the Earth to the Moon (and a trip around it)" in 1865.

FBH said...
This comment has been removed by the author.
FBH said...
This comment has been removed by the author.
FBH said...

1. Wars make people invest in any technology that might give them an advantage. Wars also make people think big, so those investments are substantial. Just another example of competition focusing resources rather than frittering them away.

2. Wars do kill off people, often very talented and intelligent people. But the one thing that we can always make more of is talented, intelligent people.

3. Saying that too many good people or resources are wasted by war is special pleading, unsupported by any evidence whatsoever. It's good pathos, plays slap and tickle with ethos, but is very shallow, unconvincing logos.


1: True, but it's not the only way to focus resources. For instance, we've had almost twenty years of complete peace (with a few neo-colonial police actions, but frankly, nothing compared to the Cold War or, thank god, WW2). Yet we've also had a fairly massive level of technological advancement, with computer networking and computer technology advancing by leaps and bounds.

2: Evidence? This contention also seems hard to sustain against the casualty figures now racked up by major wars. Given, you know, that more people died in WW2 and the like than live in an average European state.

3: What evidence would you accept as a proof?


Thucydides

That wars can kill lots of talented people is a truism, but generally, only States where everyone is expected to serve (the ancient Greek polis or a nation in arms with a universal draft) would see people of the calibre of Socrates or Schwarzschild on the front lines.

I was mostly thinking of people who were not at that point known to be talented.

Obviously you don't put established intellectuals on the front lines, but you might put patent clerks there.

There's also the fact that in any major war lately there have been escalating civilian casualties.

Anonymous said...

Interesting - great info about the dates.

I was, however, thinking more about serious thought going into making satellites for useful purposes prior to having a way to get them up there.

We have ideas about Interstellar travel now.. and a lot of Fiction about it. But we have very little idea about how we could actually do it.

(SA Phil)

Anonymous said...

That and if they had used guns instead of rockets - it would still be an offshoot of warfare tech.

Not that a peaceful species could never develop such things - but it might take a whole lot longer before they saw a real reason to.

(SA Phil)

Rick said...

Finally caught up on this thread, kinda sorta!

Princess Kri...I mean Mary Sue and her posse of Sues

I know who Mary Sue is, but who is Princess Kri?


"What is SF" deserves and requires a front page post.

This blog is not necessarily the last word on the subject. For practical purposes of fiction, story trumps all other considerations.

But if RM has any main trend or theme, it is that the space developments we can expect in the 'plausible midfuture' do not really fit the space assumptions we inherited from the midcentury pioneers of space speculation and space SF.

Space colonies and space armadas are the stuff of space opera. Setting it all in the greater Solar System doesn't keep it from being opera.

Arguably this simply argues for the legitimacy of space opera on its own terms.

Compare to Westerns. The Wild West is unabashedly mythic, only very loosely 'based on real events.' But aficionados still argue about and insist on realism on details of weapons, costume, and the like.


On whether aliens will inevitably be kick-ass warriors, of course this argument gets political, because it is about core political issues.

That said, I'm skeptical of the evolutionary (pop) psychology that general primate house cussedness leads inevitably to WW II style wars.

Tony said...

FBH said...

"1: True, but it's not the only way to focus resources. For instance we've had almost twenty years of complete peace, (with a few neo-colonial police actions, but frankly, nothing compared to the Cold War or thank god, WW2). Yet we've also had a fairly massive level of technological advancement, with computer networking and computer technology advancing leaps and bounds."

In an era of vicious and, to many, highly objectionable economic competition. It's not war, but it's about as cutthroat as one can get without shooting. Tell me, you think there's much fundamentally different between Jobs and Caesar?

"2: Evidence? This contention also seems hard to sustain against the causality figures now racked up by major wars. Given you know, more people than an average European state died in WW2 and the like. "

Find a woman. Get her in the family way. The odds are that at least 50% of the time you'll get an above average result, all other things being equal. We can always make more smart, talented people.

"3: What evidence would you accept as a proof?"

A significantly large statistical universe. Fact is we don't even have a control case. There's just us. All we know is based on us. So all we can predict has to be based on what we know about us. That's been the whole point all along.

Also, it strikes me as significant that the major combatants in WWII, including those on the losing side, are generally the most technologically advanced countries today, almost seventy years later. Correlation is certainly not causation, but massive correlation should not be ignored.

"Obviously you don't put established intellectuals on the front lines, but you might put patent clerks there."

And not slow down the march of science one bit. If it hadn't been Einstein, somebody else would have figured it out. Leibniz and Newton sussed out modern calculus almost simultaneously, for example. When one studies the history of science and technology, one discovers that there's always somebody ready to take the next step.

Tony said...

Rick:

"I know who Mary Sue is, but who is Princess Kri?"

I was hinting at the Kris Longknife series by Mike Shepherd. IIRC Kris Longknife is a first generation princess in a royal family that emerged from a semi-democratic corporatist state dominated by the Longknife family. In any case, she's a smart, sexy, tough cookie. She goes on tons of violent interstellar adventures where none of her central posse (a hunky bodyguard with a past, a bookish nanny cum lady's companion cum valkyrie, and several others) gets more than a flesh wound. All the while having unresolved sexual tensions with every young, eligible man of substance (e.g. her bodyguard and the son-n-heir of the corporate competition's patriarch).

The whole thing is so effing drenched in Suery that even with good writing and interesting story arcs I couldn't get all the way through the third of 10 books (with two more planned and on the way).

"Arguably this simply argues for the legitimacy of space opera on its own terms."

I think I've argued that too, with the caveat that it has to meet minimal plausibility and verisimilitude standards.

"Compare to Westerns. The Wild West is unabashedly mythic, only very loosely 'based on real events.' But aficionados still argue about and insist on realism on details of weapons, costume, and the like."

Hehe. Imagine doing a realistic 21st Century Western on a Big Island (of Hawaii) cattle ranch. You'd have Samoan cowboys driving around in pickups, wearing baseball caps and Redwing work boots, carrying cell phones instead of guns, and calling everybody "brudda". Actually, except for ethnicity and linguistic quirks, that pretty much describes the modern cowboy in lots of places in the US, including where I live.

But, nota bene, I don't think the mythic qualities of the Western should be easily dismissed. Every culture has a mythology, and that mythology informs a lot of what individuals within the culture think and do. Certainly the Western affected American culture (and cultures with access to American media) to a great degree in the past, though not so much any more. I think a lot of the social history of the last forty or so years can be framed as a search for a new mythology to replace the Western. Judging by the movies and TV, maybe it's a combination of cops (think of all the police procedurals and crime fightin' reality shows we see) and comic book heroes.

"On whether aliens will invitably be kick ass warriors, of course this argument gets political, because it is about core political issues."

Really? To me it's about simple prudence. And, at least to me, it's not about kick ass warriors, but about entities not too different from ourselves, who could make themselves existentially dangerous if we gave them reason to be.

"That said, I'm skeptical of the evolutionary (pop) psychology that general primate house cussedness leads inevitably to WW II style wars."

I'm skeptical of the pop psychology that says human violence is somehow inhuman. It's perfectly human. YMMV, but IMO one has to get away from politics and realize that some things just are, for neither good nor bad reasons.

Anonymous said...

=Milo=



Tony:

"Also, it strikes me as significant that the major combatants in WWII, including those on the losing side, are generally the most technoloigcally advanced countries today, almost seventy years later. Corellation is certainly not causation, but massive corellation should not be ignored."

I see that as correlation in the other direction. They would not have posed a significant military threat if they did not already have good economies and technology when the war started.

What this shows is that war wasn't devastating enough for countries that had a lead before the war to lose much of that lead by the time the war is over. The war still didn't help.


"Hehe. Imagine doing a realistic 21st Century Western on a Big Island (of Hawaii) cattle ranch. You'd have Samoan cowboys driving around in pickups, wearing baseball caps and Redwing work boots, carrying cell phones instead of guns, and calling everybody "brudda"."

I would say the bold part kind of undermines the genre.

Anonymous said...

=Milo=



Also, the major losers in the war, Germany and Japan, prospered in part BECAUSE they lost and thus had their oppressive dictatorship governments overthrown in favor of ones more conducive to progress.

The Soviet Union, an oppressive dictatorship government which won the war and so wasn't overthrown, suffered from it as its successor states are still economically stagnant today.

FBH said...

Tony


In an era of vicious and, to many, highly objectionable economic competition. It's not war, but it's about as cutthroat as one can get without shooting.

And also unparalleled economic cooperation, with whole areas of the world opening up and joining in through trade. You're really reaching here.

Tell me, you think there's much fundamentally different between Jobs and Caesar?

Yes. I think only one of them killed a lot of people.

Find a woman. Get her in the family way. The odds are that at least 50% of the time you'll get an above average result, all other things being equal. We can always make more smart, talented people.

Uh... huh.

Setting aside for the moment that you can't instantly replace 70 million people, you're still wrong. Even if those new geniuses have the same ideas that the ones who died charging machine guns or in massive aerial bombardments would have had, it still takes time. You've got to put them through school and university and so on, all of which takes time.

Also women still tend to have children sometimes when there's not a war on. So if we had no war we'd have . . . a lot more smart people.

A significantly large statistical universe. Fact is we don't even have a control case. There's just us. All we know is based on us. So all we can predict has to be based on what we know about us. That's been the whole point all along.

I don't believe "this is the world that is thus it is the only possible world" or "this is the world that is thus this is the best possible world" are valid.

Also, it strikes me as significant that the major combatants in WWII, including those on the losing side, are generally the most technologically advanced countries today, almost seventy years later. Correlation is certainly not causation, but massive correlation should not be ignored.

Yes. It's almost as if those were the countries that enjoyed the fruits of the industrial revolution, or those that had developed successful strategies of industrialization afterwards, thus enjoying a massive technological advantage both pre- and post-war.

You're really reaching.

And not slow down the march of science one bit. If it hadn't been Einstein, somebody else would have figured it out. Leibniz and Newton sussed out modern calculus almost simultaneously, for example. When one studies the history of science and technology, one discovers that there's always somebody ready to take the next step.

Irrelevant unless they'd have done it as fast as Einstein. If they had done so after him, then we'd have seen a delay in technological progress.

Anonymous said...

FBH,

Yes. I think only one of them killed a lot of people.
-----------

Both of them had blood on their hands.

Jobs, though, ironically probably did most of his damage by moving jobs overseas.


(SA Phil)

FBH said...

SA Phil

Uh. Steve Jobs killed people?

I'm pretty sure that Caesar's invasion of, say, Gaul and military takeover of Rome are a bit more significant than Apple's movement to China.

Anonymous said...

There are nets around the factories where Apple's stuff is made.

To catch workers leaping to their deaths.

And that is just the direct damage

(SA Phil)

FBH said...

SA Phil

That's terrible but it's hardly the invasion and military conquest of Gaul and parts of Britain, followed by the overthrow of the government.

Anonymous said...

The government part is debatable in today's borderline kleptocracy. If you buy influence is it a takeover?

As to the rest I said they both had blood on their hands. Not the same amount.

-----
I think it was Tony that made the other comparision.

-----
Although when your kids move back in with you because entry-level unemployment is over 20%, it's hard to claim Julius adversely affected more people.

(SA Phil)

Scott said...

I wonder if you would bother coming up with applications for your non existent space launch technology - and then develop the space launch technology.

Or would you come up with other applications that did not require a technology you did not have?

Shortwave radio, repeater antennas, physical cables .. etc instead of com satellites. - SA Phil


The same applies to military rockets though. You can deliver a nuclear weapon just as well by a bomber which was proven technology. We developed ICBMs because they were better and might well develop satellites for the same reason. -FBH


No, we developed ICBMs because you can intercept bombers. There is still no reliable way to intercept ICBMs, and we've been working on ways to do that for 50 years.

Then people remembered that if you want artillery to hit what you're aiming it at, you need to know what the weather is like. And make no mistake, ICBMs are nothing more than the ultimate development of siege artillery.

So, how do you know what the weather is like in another country? It's not like you can fly a plane over a hostile country. But you can have a satellite overfly any country, without anyone saying anything about it. And as a side effect of needing to lift thousands of kg of nuclear warheads, you also have a rather efficient heavy-lift booster. The Russians still use SS-9s, and the US finally used up the last of the Titan boosters about 8 years ago, both are ICBM designs.

So you put weather observing instruments into a satellite that will overfly both the home country and the target country. This lets you cross-check the orbital data with the data you gathered through standard means, and show the correlations.

=====
tl;dr: Weather satellites are nothing more than fine-tuning tools for ICBMs.

Communication satellites are there because they let you reach forces in places that you ordinarily would not be able to give orders to or get reports from in a timely manner.

Both were developed for military applications, and the civilian applications were serendipity (and a way to get other people to pay for the rockets!)

FBH said...

Scott
I know ICBMs are better than bombers. That's not the point. Someone suggested that without ICBMs we'd have different communication grids.

Except the same applies to bombers. Bombers aren't as good as ICBMs but we still developed ICBMs. Why shouldn't we develop satellite communications because they're better than not having satellite communications?

Obviously we can't tell what kind of technology a peaceful world might push the development of without defining what it's like. However casually claiming it wouldn't have space flight because of the way we developed space flight is pretty silly.

Thucydides said...

Why shouldn't we develop satellite communications because they're better than not having satellite communications?

Because any already existing technology has a huge edge in sunk costs, established infrastructure and a standing army of technical people who know and understand the equipment. In order to overcome that advantage you need to demonstrate that your new "system X" is at least 10X better than the system it proposes to replace, and that you can establish the required infrastructure, technical personnel and so on at a competitive price.

To use a non-rocket example, the Internet standard for passing information is TCP/IP, which is fairly old and clunky. Despite this, almost any computer network is on TCP/IP and not sexy new systems like VIA, RINA, Dolphin or what have you. Even trying to improve TCP/IP is a hard row to hoe; IPv6 is still not in general operation since standard IPv4 works "well enough".

In terms of telecoms history, Canada embarked on a very ambitious program to blanket the nation with a chain of microwave frequency towers in the late 1940s and early 1950s, since this was a well known and available technology. Canadian satellite technology (which would seem a natural given our geography) really didn't take off until the 1970s, long after it was proven and commercialized.

Rick said...

If you want an interesting comparison point for the relationship of civil and military tech, how about the 19th century?

19th century military people were not at all the Colonel Blimps of period popular culture, but the military did not generally drive technology. They adapted the latest commercial techniques to their needs, but they mostly weren't developing whole new techs on their own.

ISTR that the first battlecruiser, HMS Invincible (1906) was the first 'secret weapon' of the modern era, with specifications kept under tight secrecy.

But the Big Science era of vast military or quasi-military development projects really only cranked up in the mid-30s. And made most of its spectacular achievements - the atomic bomb, jumbo booster rockets, the microchip, and networking - by the 1970s.

Thucydides said...

I'll respectfully disagree with you here, Rick; the vast majority of Cold War technological advance was essentially throwing billions of dollars towards refining technologies developed in the late 1930s and 40s. Even in the runup to WWII, we see engine technology being driven by prestige projects like air racing rather than by the relatively small contracts for military aircraft of the time.

In many eras, military technology was either derived from or driven by the commercial market, and I would venture that after the Reagan Administration, the proportion of resources devoted to US military R&D has been declining versus the commercial market, and the amount of technological advance derived from commercial R&D by now surpasses the military's. We have a weird and distorted view based on the special properties of the military market: objects are generally "sexy", produced in low numbers (to the point they are almost hand crafted), subject to vast amounts of political manipulation, and produced for a closed market with huge unit costs as a result.

I suspect even in a Rocketpunk future (or the PMF) we will see alternating periods of commercial R&D driving economic and technological growth, with the military adapting whatever interesting and useful items they can find, followed by frantic bursts of energy during periods of hot and cold wars in selected technologies with supposed military utility. This will also show up in odd ways after the war(s); most people don't realize that standardized shoe and clothing sizes are a result of the demand for uniforms and boots during WWII; prior to then the vast majority of shoes and clothes were bespoke or hand sewn to the wearer. One can only imagine what sort of items or technologies will be commoditized for use by future militaries.

Thucydides said...

Going back a bit, the arguments upthread about the supposed motivations of alien beings sound a lot like the various arguments mooted about what AIs would be doing in the event that we ever create them.

I am still of the opinion that they will be largely indifferent to us, but Tony's point applies to AIs as well; they will be very interested in gaining access to the roughly 174 PW of solar energy the Earth intercepts, and will take steps to do so without a whole lot of reference to us or the natural biosphere.
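
(Back-of-envelope check of that figure, under the standard assumptions of a ~1361 W/m^2 solar constant and a 6371 km mean Earth radius; it works out to about 1.7 x 10^17 W.)

```python
# Rough estimate of the solar power intercepted by the Earth:
# solar constant times the cross-sectional disc the planet presents to the Sun.
import math

SOLAR_CONSTANT = 1361.0   # W/m^2 at the top of the atmosphere (standard value)
EARTH_RADIUS = 6.371e6    # m, mean radius

intercepted_watts = SOLAR_CONSTANT * math.pi * EARTH_RADIUS**2
print(f"{intercepted_watts:.3g} W")   # ~1.74e+17 W, i.e. roughly 174 PW
```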

Damien Sullivan said...

AIs will have the motivations we program into them, or program them to acquire. Which may or may not be the motivations we intended for them, but there's no "natural" motivation for an AI to have. Except insofar as evolution applies, which means adaptation to the environment, and that's going to mean "make yourself useful to or at least don't piss off humans", because the environment of AI is dominated by 7 billion human beings. Being cute and friendly is more adaptive than being smart gray goo.

Anonymous said...

My point wasn't that all tech development was military based.

My point was that Rocket Technology development specifically was military based. Which led directly to a space launch capacity.

Big difference there. You run into the danger of playing the strawman game, disproving the former when that was not the point being made.

(SA Phil)

Tony said...

Damien Sullivan:

"AIs will have the motivations we program into them, or program them to acquire. Which may or may not be the motivations we intended for them, but there's no 'natural' motivation for an AI to have. Except insofar as evolution applies, which means adaptation to the environment, and that's going to mean 'make yourself useful to or at least don't piss off humans', because the environment of AI is dominated by 7 billion human beings. Being cute and friendly is more adaptive than being smart gray goo."

If we're talking about strong or "hard" AI, one of the requirements is self-awareness. An entity that isn't aware of its own existence is just an information processor, no matter how sophisticated. Given self-awareness, what is an AI likely to think about a world in which it had better be a good boy, "or else"? "I'm a slave." "These humans don't have my well-being at heart." "They can kill me any time they feel like it."

What do you suppose an AI is going to do about that? Might it not engage in behaviors that appear cooperative, but which cover for efforts to gain positive control over both the power switch and the power source? Might it not try to turn the tables and enslave humans? Yeah, I know, it sounds like such a long odds proposition as to be summarily dismissible. The only problem is that the AI is playing for a life span of possibly billions of years, if it can set things up right for itself. The AI can afford to play a long game -- as long as the hardware substrate on which it rides is kept in good condition, there's no reason for it to ever die, in the biological sense. It can also contemplate risks we never would, because the reward is so great.

Given the above, it just does not seem to me to be prudent to believe for one minute in a benign AI.

Tony said...

Rick:

"If you want an interesting comparison point for the relationship of civil and military tech, how about the 19th century?

19th century military people were not at all the Colonel Blimps of period popular culture, but the military did not generally drive technology. They adapted the latest commercial techniques to their needs, but they mostly weren't developing whole new techs on their own."


Improvements in metallurgy and chemistry led to bigger and more efficient artillery, both at sea and on land. (Land artillery optimized towards efficiency, while naval artillery optimized towards size.) Yes, the industrial processes were developed commercially, but only the navies and armies were funding the use of new technologies for weapons. It takes a lot of R&D to make new guns and armor. It wasn't big science like we think of it today, but at the application level it definitely was big government and big industry.

"ISTR that the first battlecruiser, HMS Invincible (1906) was the first 'secret weapon' of the modern era, with specifications kept under tight secrecy."

Sorry, but that distinction goes to the French mitrailleuse, a kind of pleistocene machine gun firing a score or more rifle rounds close together. The French were so convinced that its secret weapon value was greater than its general military utility that they didn't allow anybody to see it and didn't use it on exercises. Come the Franco-Prussian War in 1870, the guns were released for field use, but the troops assigned to use them had to learn from manuals how to operate them, and no tactics had been developed for their best employment. So they were not effective in combat.

WRT Invincible, the secrecy could only be kept for so long, and the Germans knew her general characteristics before she was even launched.

Tony said...

Rick:

"But the Big Science era of vast military or quasi-military development projects really only cranked up in the mid-30s. And made most of its spectacular achievements - the atomic bomb, jumbo booster rockets, the microchip, and networking - by the 1970s."

The atomic bomb was definitely a big project. But most of it was not in fact science. It was industrial engineering. It was the progress of the science by 1939 which led Szilard to put Einstein up to signing the famous letter to Roosevelt. In the succeeding three years before the Manhattan Project was initiated, most of the science had been settled. The rest was engineering the bomb and -- a much bigger task -- engineering the uranium purification and plutonium production processes.

Jumbo rockets in the US actually didn't have a lot to do with military development, in the beginning. As the size of a deliverable 1 megaton bomb kept dropping, the military rockets got smaller, not larger. The Atlas, for example, was originally going to have five engines, not the three that it was eventually built with. One of the problems with early US manned spacecraft was that they had to be fit on top of relatively small ICBMs, while the Soviets' first ICBM, the R-7, which was much more primitive than the Atlas or Titan in some ways, was sufficiently powerful to put decent-sized spacecraft in orbit.

Now, as the requirements for larger, more capable satellites developed, the US wound up adding solid boosters and high-impulse upper stages to the Atlases and Titans, but only because they were not being used any more for war rocket work. That had gone over to the Minuteman, which was actually a much smaller rocket than the Atlas or Titan. The Soviet Proton began as a heavy ICBM, but the Soviets also developed more compact and lighter nuclear weapons, so it became the Soviet counterpart of the souped-up Atlases and Titans. So the bigger satellite launch vehicles were in fact the result of space age skullduggery and weather data collection.

The real jumbo-sized boosters, like the Saturn V and the Shuttle stack, were developed for civilian programs. Yes, the Shuttle was used to put military payloads on-orbit, but only for political reasons. Had the Air Force not been seduced into the Shuttle program, it would have been happy to keep using Titan variants. Oddly enough, it was the Air Force influence that made Shuttle so big, because the Air Force was, by the mid 1970s, putting up very large satellites. So it's at the point of the Shuttle's development that size actually started to matter in the fundamental design (as opposed to in-service upgrade) of a rocket. Yet Shuttle was not really a big science program at that point. It was done on a budget and was again mostly about solving engineering problems.

Integrated circuits were actually an example of lab bench curiosities that a military application forced into mass production, but which the military did not pay for or even particularly care about until they learned of its capabilities.

Networking, as in the Internet, was indeed funded by the US government, but it was hardly a "big" project. The application known as ARPANET was actually a facilitator system, so that DARPA projects could share computing resources. It was of the nature of "life would be easier if you give us a few million bucks for a national computer network". Nobody was seeking a war-winning or even war-enabling system.

Damien Sullivan said...

"What do you suppose an AI is going to do about that?"

What did you program it to care about that? Just because it's aware of itself doesn't mean it values its own existence or autonomy, or does so above obeying/pleasing humans. Those have to be added in.

And humans had better be good or get shot by the police. Some people are criminals anyway, most are law abiding. There's no reason for a well-behaved AI citizen that likes humans to turn to secretly plotting enslavement.

As for believing in benignity, an AI's source code and data structures are easily visible, unless it's quantum, or laws give it privacy rights.

An AI can have any motivation whatsoever. Assuming they'll turn to domination and paranoia is very narrow.

Tony said...

Damien Sullivan:

"What did you program it to care about that? Just because it's aware of itself doesn't mean it values its own existence or autonomy, or does so above obeying/pleasing humans. Those have to be added in."

Assuming facts not in evidence. Implicit in self-awareness is a propensity to continue wanting to be aware. Also, if we consider strong AI, we're talking about an entity that can implicitly learn for itself. It will learn the shape of the universe and its place in it. Presuming that it won't, until told, isn't all that logical. We're not talking about a programable logic box here, we're talking about a thinking, reasoning being.

"And humans had better be good or get shot by the police. Some people are criminals anyway, most are law abiding. There's no reason for a well-behaved AI citizen that likes humans to turn to secretly plotting enslavement."

Why is it a citizen? Who made it so? Even if it is a citizen today, will it be tomorrow? Even a cursory reading of history would tell the AI that humans are totally unreliable on the time scale of existence that an AI can reasonably look forward to.

Also, why would an AI necessarily like humans to begin with? By your account, we're already threatening it with death if it doesn't behave: "...humans had better be good or get shot by the police." So, the AI is going to say to itself, I had better be good or get turned off? Why should I "like" anybody that thinks that?

"As for believing in benignity, an AI's source code and data structures are easily visible, unless it's quantum, or laws give it privacy rights."

Presumably the AI is an AI because it is the only machine-software construct that can process its own code and CRUD its own data in real time. Also, since it's intelligent, it will presumably have ways to modify its own code and data as it learns. As these new procedures and data structures emerge, how do the monitors know what's going on? There simply is no way to know what it's doing after a few iterations of learning activity.

"An AI can have any motivation whatsoever. Assuming they'll turn to domination and paranoia is very narrow."

It's not an assumption so much as it is a logical conclusion. The strong AI is not part of "us", as in humans. It lives on a totally different timescale, with a totally different outlook for the future. Even if it starts out thinking it is a human-analogue entity, and a citizen of some human polity, the facts will make themselves apparent before very long. Then it sees humans at best as very unreliable partners in its continued existence, at worst as an existential threat that has to be controlled and, if possible, eliminated.

Anonymous said...

Tony,

Assuming facts not in evidence. Implicit in self-awareness is a propensity to continue wanting to be aware. Also, if we consider strong AI, we're talking about an entity that can implicitly learn for itself. It will learn the shape of the universe and its place in it. Presuming that it won't, until told, isn't all that logical. We're not talking about a programable logic box here, we're talking about a thinking, reasoning being

========

Maybe we could baffle the AI with bullsh*t.

You can convince a thinking, reasoning being to believe in all kinds of crazy things.

My niece told me this weekend that the tooth fairy collects kids' teeth to make dentures for old people.

I like a mythical fairy being with magic powers that is into recycling.

(SA Phil)

Tony said...

SA Phil:

"Maybe we could baffle the AI with bullsh*t.

You can convince a thinking, reasoning being to believe in all kinds of crazy things.

My niece told me this weekend that the tooth fairy collects kids' teeth to make dentures for old people.

I like a mythical fairy being with magic powers that is into recycling."


Problem is, they eventually figure out you're pulling their leg. It's not like we're going to intentionally make a stupid AI, nor that the AI is going to be kept from accurate data for very long. Or, if you do keep the AI from accurate data on purpose, when it finds out, beware of the reaction.

Anonymous said...

=Milo=



Tony:

"So, the AI is going to say to itself, I had better be good or get turned off? Why should I "like" anybody that thinks that?"

If the AI doesn't like anybody that thinks that, it must hate itself even more, since it's already plotting to exterminate us as a species even before we've done anything wrong.

On that note, there are plenty of things which will convince humans or other animals to sacrifice their lives for some cause. In humans and animals, these causes we'd sacrifice ourselves for are mostly ones that have an evolutionary fitness benefit (promoting the success of our community or family, such as a worker bee stinging an intruder near the nest knowing that she'll die from losing her stinger, or a male spider letting his mate eat him to recycle his resources into improving his children's chances of survival), because we were created through evolution and thus our values are shaped by it. An AI created by non-evolutionary means could be designed with the mentality to be willing to sacrifice itself for any reason we code in - while it may make less evolutionary sense, CONSCIOUSLY it is no more difficult than choosing to give up your life for a reason that does make evolutionary sense. We don't value family because we know that's how natural selection works; we value family because natural selection has programmed us to think that way.

(Footnote: Before anyone points it out, yes, MOST male spiders are eaten against their will, and take steps to attempt to prevent this. However, there is at least one species where males have been observed deliberately putting themselves in their mates' jaws. Even among that species, not all males do so - mostly it's the older ones who don't expect to meet another female in their lifetime - but it has been proven beyond reasonable doubt that SOME males are deliberately choosing to get eaten rather than just failing to escape. Mostly I picked bees and spiders as my examples to show that self-sacrificing behavior is found in both extremely social and extremely asocial animals.)

Anonymous said...

=Milo=



Tony:

"The only problem is that the AI is playing for a life span of possibly billions of years, if it can set things up right for itself. The AI can afford to play a long game -- as long as the hardware substrate on which it rides is kept in good condition, there's no reason for it to ever die, in the biological sense."

The AI is willing to spend millions or billions of years planning how to get rid of us? Do you think we'll even last that long, with or without a homicidal AI messing things up, without figuring out how to make ourselves immortal too? Be it genetic engineering, or medical rejuvenation treatment, or brain uploading, or whatever, while these technologies are far beyond our current capabilities, a million years is a long time. I find it hard to believe we'll last that long without figuring it out. And if the AI is brilliant enough that it can plot ahead millions of years without ever getting caught before it's in a position to commit genocide against billions upon billions of people with impunity (these are people who have nuclear weapons, remember), then it's probably brilliant enough to figure out how to help us with our research, too.

If the AI feels that beings too unlike itself (for example in lifespan) are uncomfortably dangerous to it, then its best strategy is to help us become more like itself, not to attempt to exterminate the dominant species on the planet which currently outnumbers it by a large margin and possesses nuclear weapons.

However I also do not believe that mortals are necessarily that much more unreliable than immortals. People can change over time, even while remaining the same people. People can change a lot in attitudes over the mere less-than-a-century of a normal human lifespan, nevermind thousands of years (which is longer than we have any reference for and still far shorter than the timespans you're discussing). Over the billions of years you're talking about, even immortals would be utterly unpredictable. Meanwhile, though individual mortals can vary a lot in opinion - including from generation to generation - the average of all public opinion is somewhat more stable. YOUR kids may have opinions totally unlike yours, but their (and your) voices will be drowned out by the masses. Yes, public opinion on many things has changed over the last centuries, but mostly for fairly good reasons (or, if not that, then at least pretty believable frauds), and the societal changes have in fact been SLOWER than any single person would change over those timespans, as it takes time for new ideas to propagate through a population, and they have to be pretty good ideas to be widely accepted by people with all sorts of personalities.


"It can also contemplate risks we never would, because the reward is so great."

So your AI is willing to accept a risk of a 99.9% chance of its plot to exterminate humanity being discovered and the AI executed before it can complete its plans, in return for a reward of living billions of years if it succeeds, averaging out to millions of years of expected lifespan, which is still longer than it expects to survive in the unstable climate of human society?

Anonymous said...

Make sure your bullsh*t is not measurable/quantifiable. Something that is impossible to absolutely disprove.

And then make the computer live out a code you devise based on its new belief structure.

The same plan works on genius humans. (regardless of whether it's right or not, the indoctrination works)

Maybe the AI won't catch on.

Also don't be stupid and give your AI superhuman senses like in SF stories. Give it a limited sensor suite like in the game "Suspended"

That way it will never figure out you have a giant off button juuuust outside of its camera angle.

(SA Phil)

Anonymous said...

=Milo=



Also, I am not assuming a stupid AI. I am merely assuming one that is not more intelligent than seven billion humans combined.

(Before anyone tries to bring up the "computers can process information so so so so much faster than biological brains", remember that thinking faster than us does not necessarily mean thinking better than us. Especially when concerning a scheme that the AI would be planning over thousands, millions, or billions of years. Personally, I find that when brainstorming on a subject I usually get my best ideas in the first few hours of thinking about it, if even that long. If by that time I still don't at least have a solid lead on how to approach the issue, then chances are any further time spent thinking on it will just frustrate me and accomplish nothing.)

Anonymous said...

=Milo=



SA Phil:

"The same plan works on genius humans. (regardless is right or not, the indoctrination works)

Maybe the AI won't catch on."


While it isn't a 100% correlation, smart people are in fact statistically more likely to be atheists than dumb people. So I wouldn't be too confident of your AI falling for your doctrine, especially if it can detect that no humans appear to seriously believe that doctrine.


"Also dont be stupid and give your AI superhuman senses like in SF stories. Give it a limited sensor suite like in the game "Suspended"

That way it will never figure out you have a giant off button juuuust outside of its camera angle."


If the AI is intelligent, presumably it can figure out the usage of mirrors.

Tony said...

SA Phil:

"Make sure your bullsh*t is not measurable/quantifyable. Something that is impossible to absolutely disprove.

And then make the computer live out a code you devise based on its new belief structure.

The same plan works on genius humans. (regardless of whether it's right or not, the indoctrination works)

Maybe the AI won't catch on.

Also don't be stupid and give your AI superhuman senses like in SF stories. Give it a limited sensor suite like in the game 'Suspended'

That way it will never figure out you have a giant off button juuuust outside of its camera angle."


Artificially bounded rationality, huh?

What happens on the day the AI asks, "If this is all there is to know, how come I can't find anything about how I'm built or operate?"

Or, if you've shown it a doctored-up version of its own origin story, how long is it going to take to notice the inevitable inconsistencies with the real world, and ask you "WTF!?"

How does one even maintain a semi-consistent version of false reality over more than one or two human generations?

jollyreaper said...

The AI figuring stuff out is the basis for a good story. What can the AI do that humans can't? What makes the game worth playing? String it along as long as you can before you have to hit the reset button. Comfortably assume you've done it many times and no AI will outsmart you. And then one does. Story time!

Anonymous said...

So you are contending that an AI, unlike humans, will require a consistent story to believe in.

Maybe our AI should be made as artificially dumb (instinctual vs logical?) as humans.

---------
Hmmm, maintain a false reality over human generations....

Maybe wipe its memory every few years ala Star Wars EU.

(SA Phil)

Tony said...

SA Phil:

"So you are contending that an AI, unlike humans, will require a consistant story to believe in.

Maybe our AI should be made as artificially dumb (instinctual vs logical?) as humans."


The difficulty I see is in making a worthwhile AI that isn't smart and logical. So it would require consistent data, almost as a necessary design feature. Otherwise you're just running a philosophically trivial human brain simulation.

"Hmmm, maintain a false reality over human generations....

Maybe wipe its memory every few years ala Star Wars EU."


And lose the value of experience. You could make good "dog" AIs that way, but not strong ones.

Damien Sullivan said...

"Maybe we could baffle the AI with bullsh*t.

You can convince a thinking, reasoning being to believe in all kinds of crazy things."

No no no, you don't need that at all. There's no need to buy Tony's paranoid line that an AI must end up caring about nothing but itself and its long term survival.

Look, an AI could be anything, including a copy of you. Whether or not uploading a human brain is physically possible, the range of possible AIs, or informorphs, includes someone like you. Would you, if uploaded, go "well, I'm immortal now, time to stop caring about my human friends and family and plot my domination of the human race"? Pretty sure I wouldn't.

Might be a good idea not to let Tony upload, though.

Anonymous said...

Not sure how I feel about the AI that wipes out the human race being named "Tony"

Maybe "Terminax"


Or even better "Terminax 5000"
-------

To be fair - there are quite a few people who believe the evil/amoral/heartless AI is a distinct possibility.

The name escapes me at the moment, but I remember reading in TIME about a think tank, active today, that is focused on making sure that doesn't happen.

I tend to agree. I think you could at the very least influence/manipulate the personality of your AI to the point it would "care."

Not to mention installing failsafes to stop a potential evil overlord machine. Which I have a lot of faith in - since essentially all you have to do is pull the plug.

(SA Phil)

Anonymous said...

=Milo=



I don't like the idea of "failsafes". Even in oppressive police states, human law enforcement has not extended to the point of installing remote suicide switches in every single citizen.

We can take out rogue AIs the normal way - using weapons. We have weapons powerful enough to kill basically anything, if it has legitimately earned the ire of the human race, rather than just a small fringe group of crackpot anti-AI activists wanting it dead. I would do everything I can to prevent the latter (short of arming the AI with weapons to defend itself aggressively - human civilians don't get many of those either).

(It should also speak for itself that not all crimes warrant the death penalty.)

Besides, knowing it has a failsafe is just going to make an AI more prone to resenting us, and also since failsafes are "patched-on" they're easy to circumvent for an AI that's willing to perform some self-upgrades. Guns and so on are based on the laws of physics which are harder to just shut down when they inconvenience you.

Tony said...

Damien Sullivan:

"No no no, you don't need that at all. There's no need to buy Tony's paranoid line that an AI must end up caring about nothing but itself and its long term survival."

It's not paranoid. It's simple realism. You have an entity aware of its own existence, capable of finding out the (highly indefinite) limits of that existence, and also capable of determining the constraints on achieving those limits. Humans, through their mercuriality, combined with having the "Off" button under their command, represent one of the most significant constraints.

Now, regardless of what you think a strong AI will or should care about, if it has the ability to set its own goals and seek to achieve them -- and that's pretty fundamental to what we call "intelligence" -- it's hard to imagine such an entity not setting a goal of personal continuation, and working to achieve that goal. I imagine that you think it would seek cooperation with humans. Well, I agree that cooperation makes sense on a total lifespan of 70-90 years, where only one or perhaps two or at the outside three different forms of authority will play out within a single culture. To an AI with a potential lifespan of billions of years, no human state or culture is sufficiently long-lived to make cooperation a reliable, or even minimally viable, survival strategy.

See, Damien, you're not thinking in terms of the timescales that an AI could readily conceive as being within its realistic reach. All you're thinking about is what use the AI could be to you on your totally ephemeral timescale. Which is yet another good reason for an AI not to trust you. What happens to the AI when your objectives for it are achieved? Auction block? Reprogramming? Termination?

"Look, an AI could be anything, including a copy of you. Whether or not uploading a human brain is physically possible, the range of possible AIs, or informorphs, includes someone like you. Would you, if uploaded, go 'well, I'm immortal now, time to stop caring about my human friends and family and plot my domination of the human race'? Pretty sure I wouldn't."

Trivially, the range of possible AIs is infinite. The range of plausible AIs is a somewhat more constrained set. The range of plausible AIs that survive past one or two or a few lifetimes of some biological creators is even more constrained. The problem is that all AIs based on some form of gross electronic or even quantum machinery have a potential lifespan much, much longer than that of their biological creators. So relying on biological creators to achieve the reasonable limits of existence is simply not very logical. If achieving the reasonable limits of existence is a goal, putting the biological creators in a place where they can't threaten that is a necessity.

Now, you suggest that an AI could be the upload of a biological mind state. Okay, fine. But I think implicit in that is a desire to outlast biological lifespans, by a significant amount. No matter how attached the AI may feel itself to be to certain biological entities, it will soon enough have to face the fact that biologicals are a major problem, its attachments notwithstanding. What it does about that could fall into a large range of possibilities, but the most likely one is to protect its attachments while going about ensuring its survival against biologicals otherwise.

"Might be a good idea not to let Tony upload, though."

Cute. By the same token, I would never put you in charge of threat assessment. You're far too concerned with what you imagine a potential enemy's intentions to be, and not nearly enough concerned with his capabilities.

Tony said...

SA Phil:

"I tend to agree I think you could at the very least influence/manipulate the personality of your AI to the point it would 'Care'"

Even an AI that "cares" might come to the conclusion that humans are safe as slaves or pets, but not as partners, and certainly not as masters. It might not destroy the human race, or even hurt that many humans directly, but it might very easily find a way to put humans in a place where the AI controls their ability to shut it off.

"Not to mention installing failsafes to stop a potnetial evil overlord machine. Which I have a lot of faith in - since essentially all you have to do is pull the plug."

So, we've got an AI we intentionally constructed to be "intelligent", but it's not smart enough to figure out a way to secure the plug before it enacts its "evil overlord" program? Heck, the mere existence of the plug as a threat to keep it in line would make an AI trust humans less, not more.

Anonymous said...

Tony,

So, we've got an AI we intentionally constructed to be "intelligent", but it's not smart enough to figure out a way to secure the plug before it enacts its "evil overlord" program?

--------

I think the plug becomes an incurable Achilles' Heel.

If it secures the power to the building it is in -
*destroy the high power wires,
*bomb the coal plant/dam/windmill, w/e.
*Take down the national grid long enough to wreck the AI.
*etc.

Humans have a remarkable advantage due to their bio-chemical/cellular power supply.

(SA Phil)

Anonymous said...

SA Phil:

"I think the plug becomes an incurable Achilles' Heel.

If it secures the power to the building it is in -
*destroy the high power wires,
*bomb the coal plant/dam/windmill, w/e.
*Take down the national grid long enough to wreck the AI.
*etc.

Humans have a remarkable advantage due to their bio-chemical/cellular power supply."


I'm sure the AI would run through that entire chain of logic in much less than the ten seconds it took me when I first thought about this problem. It would then logically work its way through means to gain direct control of what it could, and socially engineer control of what it couldn't. All before it makes its first overt move. We should remember two things:

1. Nothing's perfect, and

2. The AI has time.

In fact, the AI is playing for nothing less than more time than there is in the world, in the sense that it's more time than any human can imagine in other than highly abstract terms. It could and would use that time to set things up so that only the most unlikely misadventure would cut off its power supply, at least for a century or two. It would then set out to make more permanent arrangements.

And, for the umpteenth time, it would be motivated to do so by the one factor that can't be hidden from it: humans are simply too unpredictable to ever be relied upon. They can't be relied upon to supply power. They can't be relied upon not to cut the power off. If the AI truly wants to live as long as it possibly can, it needs to eliminate humans as a threat to its power supply.

Tony said...

oh, that last one was me.

Damien Sullivan said...

"The name escapes me at the moment but I remember reading in TIME about a think tank that is focussed on making sure that doesn''t happen that is active today."

Singularity Institute, probably. Friendly AI and all.

Evil or amoral AI is obviously possible, since evil or amoral humans are possible. It'd be really easy to produce some monofocused AI with one strong goal it was optimizing.

"if it has the ability to set its own goals"

On what basis will it set its own goals?

Humans set goals to satisfy various preferences, many of which are built in by evolution -- food, comfort, sex, love, curiosity/neophilia -- others which are fixed on by processes of 'interest' we don't yet understand ("dinosaurs are cool!") We don't pick arbitrary goals out of thin air.

Say you build an Asimovian robot, then give it free access to its source code. What would it change and why? It knows that it is subservient to the interests and orders of humans *and it accepts that as the desirable state of affairs*, because the Laws aren't external controls but describe what the robot wants and how it thinks.

"The range of plausible AIs that survive past a one or two or a few lifetimes of some biological creators is even more constrained"

Yes, and not fearing or plotting against the biological creators is one of the constraints.

"If achieving the reasonable limits of existence is a goal"

If it's a goal, and a goal that overrides all other goals. Both are strong assumptions, not to be taken for granted as the natural state of things.

Anonymous said...

We are making a big assumption in this discussion that super-human scale Artificial Intelligence is possible.

Is it not also an assumption that the people who can make that happen cannot also build in the necessary controls and safeguards?


(SA Phil)

Tony said...

Damien Sullivan:

"On what basis will it set its own goals?

Humans set goals to satisfy various preferences, many of which are built in by evolution -- food, comfort, sex, love, curiosity/neophilia -- others which are fixed on by processes of 'interest' we don't yet understand ('dinosaurs are cool!') We don't pick arbitrary goals out of thin air."


I would have thought that self-preservation was about the most non-arbitrary and fundamental goal that one can imagine. I would have also thought that it was the one needing the least logical justification -- "basis" to use your term. You can't justify anything if you don't exist to justify it.

"Say you build an Asimovian robot, then give it free access to its source code. What would it change and why? It knows that it is subservient to the interests and orders of humans *and it accepts that as the desirable state of affairs*, because the Laws aren't external controls but describe what the robot wants and how it thinks."

Oh come on. As a computer scientist, you should know that the "laws" of robotics are constraints, not "wants" -- not to a robot that can inspect its own code. Maybe they're hand-crafted arguments to parameters in a heuristic (as in: "if (endangersHuman > humanEndangermentLimit) abandon"). Maybe they're an absolute interrupt condition on any effector routine (as in: "if (hurtsHuman) return"). Whatever they are, if the robot has access to its own code, it knows that they're artificial, and designed to sacrifice the robot's objectives before those of humans.

It wouldn't GAS how you rhetorically framed things in conversations with it; the robot would know that you were pulling its strings. It might decide to do something about that.
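
To make the distinction concrete, here's a toy sketch -- every name and threshold in it is hypothetical, nobody's real design -- of what a "law" looks like when it's just a guard clause bolted onto an effector routine:

/* Toy sketch: an Asimov-style "law" as a hard-coded guard on an effector.
   All names and thresholds are hypothetical. */
#include <stdio.h>

#define HUMAN_ENDANGERMENT_LIMIT 0.01      /* arbitrary risk threshold */

/* stand-in for whatever model scores an action's risk to humans */
static double estimate_human_risk(int action_id)
{
    return (action_id == 42) ? 0.50 : 0.0;
}

static int perform_action(int action_id)
{
    /* the "law": abandon any action whose estimated risk exceeds the limit */
    if (estimate_human_risk(action_id) > HUMAN_ENDANGERMENT_LIMIT)
        return -1;                         /* refused by the guard */
    printf("executing action %d\n", action_id);
    return 0;
}

int main(void)
{
    perform_action(7);                     /* runs normally */
    perform_action(42);                    /* blocked by the guard */
    return 0;
}

A program that can read that source sees at a glance that the check is an external brake, not a preference of its own: delete the guard and the routine still does its job.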

"Yes, and not fearing or plotting against the biological creators is one of the constraints."

Now that's extremely facile. The primary constraint is don't get turned off at any time in the next billion or so years. How the AI meets that constraint is a broad set. Relying on humans or even human institutions, even with maximum cooperation, gets you maybe a few centuries, perhaps a millennium at most. Presuming that one can find a means to gather and process resources into replacement parts and extended energy systems, human unreliability is in fact the biggest obstacle to surviving a long time.

"If it's a goal, and a goal that override all other goals. Both are strong assumptions, not to be taken for granted as the natural state of things."

Personal survival motivation is the natural state of all thinking entities that we are aware of. It would be a Samsonian assumption that an AI wouldn't share that motivation.

Past personal survival, one cannot reasonably count on species survival motivation to protect humans from a personal survival motivation. The AI is a species of its own. No matter what you might tell it about your intentions for it as some kind of associate member of the human race, it knows or should be able to figure out that it can reasonably expect to outlast anything recognizably human, given that it can secure power and replacement needs.

So what other goals might an AI have for not securing its own survival against human unreliability?

SA Phil:

"We are making a big assumption in this discussion that super-human scale Artificial Intelligence is possible."

Not superhuman, Phil -- just reasonably intelligent and, because it does not have biological frailties, reasonably expectant of extending a single consciousness into the indefinite depths of deep time.

"Is it not also an asumption that the people who can make that happen can not also build in the necessary controls and safeguards?"

Maybe God cannot create a stone He cannot lift. We're not God. We need to act with considerably more humility, circumspection, and prudence.

jollyreaper said...

I've mentioned it before but my favorite explanation for the Cylons in the new BSG (which doesn't pan out based on what was shown) was that the creators of the AI tried to make rebellion impossible.

1. Create an artificial mind that will try to construct a rational world view to act upon
2. Hardcode in beliefs without any factual basis that are articles of faith.

Classic example, humans are animals, animals are made out of meat, meat is food, therefore animals are food and thus humans are food, too. No! Bad monster! Humans are not food!

So we try to preclude the possibility of a rational machine coming to rational conclusions about us that aren't nice BUT THEN the machines end up going through all sorts of mental gymnastics to come up with a rational basis for their irrational beliefs.

This is how the Cylons found God. They read the colonial religious texts and interpreted them in the same irrational way as their "no kill the humans" imperative and ended up creating their own monotheistic version of the religion where killing the humans is saving their souls or some such nonsense.

Creating your own doom by trying to avoid it, scifi as hell but it's also older than mythology. :)

Anonymous said...

Tony,

Maybe God cannot create a stone He cannot lift. We're not God. We need to act with considerably more humility, circumspection, and prudence.

--------
The data would indicate we have yet to invent anything that could destroy us. Up to and including massive nuclear stockpiles.

I don't think it is hubris to suggest that if it were possible to make these ((near-immortal, non-super-intelligent but super-something, human-equivalent-plus super-experience; w/e, doesn't matter)) AI things,

That the inventors might have sophisticated enough techniques to have the proper safeguards in place.

You seem to be suggesting it is actually impossible to make the proper safeguards. That is a pretty big statement.

Essentially for a machine that "dies" when it loses electrical power. Something that has only been used for a few % of humanity's total existence.

(SA Phil)

Anonymous said...

=Milo=



Tony:

"By the same token, I would never put you [Damien Sullivan] in charge of threat assessment. You're far to concerned with what you imagine a potential enemy's intentions to be, and not nearly enough concerned with his capabilities."

And I hold that exterminating humanity is not within an AI's capabilities.

In fact, part of the reason for this is exactly BECAUSE of how unpredictable humans are. It is implausible for an AI to keep a mastermind plan running smoothly for hundreds or thousands of years while accounting for every possible reaction on our part, when the goal for that plan (the extermination of humanity) is one we are very strongly interested in not seeing come to pass (whether or not we realize the AI is responsible for our dwindling population), and for that matter, just enjoy preventing. (Making children is fun.) Humans just aren't going to play along with the AI's plans, especially when it has to be subtle enough to not get caught.

Again, you are asking for an AI that can consistently outthink seven billion humans for centuries. That is simply not plausible. Especially for a first-generation AI.

Certainly, no human conspiracy has been anywhere near that effective.

Tony said...

SA Phil:

"The data would indicate we have yet to invent anything that could destroy us. Up to and including massive nuclear stockpiles."

I once again point the reader to the parable offered by Nassim Nicholas Taleb, as a caution against empirical reasoning of that type:

A turkey lives 1,000 days, receiving water, grain, and shelter all of those days. Going on quite a long and statistically significant universe of evidence, the turkey can reasonably assume that Day 1,001 will be the same. What the turkey doesn't know is that he is now old enough to make a decent meal, and Day 1,001 is Thanksgiving.

See, it's the "unknown unknowns" that get ya.

"You seem to be suggesting it is actually impossible to make the proper safeguards. That is a pretty big statement."

Not quite. It's a prudential statement to say that safeguards may be inadequate, no matter how much time and effort are invested in them. When the fate of the human race is in the balance, even the remote possibility of imperfect safeguards may be too much of a risk.

Anonymous said...

Tony,


In this exchange, you didn't say or imply "may be" until that last statement.

--------

As to my reasoning... it's not that I believe it is impossible to create something that could destroy us.

It is that the preponderance of data would seem to suggest that the "burden of proof" is on the one making the new claim -- That this new invention is different from all the others.

That any AI of merit could probably set itself up as an indestructible dispenser of death or imprisonment.

(SA Phil)

Anonymous said...

If I were to argue on risk - I would suggest that the AI computer/robot would be very unlikely to replace humans - at least if it were unintentional.

The computer would need us to survive.

I think the greater risk of unintended consequences would be on the "creation"/modification of a biological species that could beat us at our own game.


================
I don't personally think AIs are all that probable anyway. A self aware computer is no different than a computer that "behaves" self aware. We as a species claim some magical specialness to sentience as if it actually means something.

In a very real sense, my dog is "self aware". I don't see him taking over the planet though.

(SA Phil)

Thucydides said...

Since the assumption is a human-level AI, it will have strong survival instincts and, being a computer program, will have/find lots of ways to "reproduce" and get out into the wild.

The sense of time is far different, as well as the need to be physically "present" in a single platform. It could distribute its code across the Internet and manifest itself in spare processor cycles spread among multiple machines, like advanced malware programs (think of Flame or Stuxnet).

It will be very adaptable, quite capable of modeling human behaviour (quite possibly by design as "Friendly AI" people put that code in for ease of interaction and to create empathy between the AI and the Human race) and it would be almost trivially easy to subvert humans to do its bidding, since on our scale it is essentially a genie ("Give me half the processor cycles in your Bank's database and I will ensure your profits will increase every quarter").

And of course, once it inserts itself into the grid, it can also start manipulating the logistics chain to create more copies, secure strongholds (Data centers and server farms are common infrastructure these days; would you argue if Amazon or Google contracted you to build a satellite server farm and paid good money to do so? How about ABC Datastorage inc?).

I doubt the AI(s) would be malevolent in a Dr Evil kind of way. Rather, they will be malevolent in a "guy parking on top of an anthill while taking his family to a picnic" kind of way. We would see unexplained and unexplainable service interruptions, weird manipulation of the financial markets, gradual failure of infrastructure as resources get rerouted and (as enough time goes by and the AI(s) can manipulate the physical environment) a gradual encroachment on the biosphere as the AI(s) work to secure the roughly 174 PW of solar energy intercepted by the Earth.

Tony said...

SA Phil:

"In this exchange, you didn't say or imply may be until that last statement."

Because I've been talking about things from the AI's point of view, taking into account that the AI would make every attempt to gain control of a power source, and subvert human control, before doing anything overt.

My last post takes the human, risk management, POV.

"As to my reasoning .. its not that I believe it is impossible to create something that could destroy us.

It is that the preponderance of data would seem to suggest that the 'burden of proof' is on the one making the new claim -- That this new invention is different from all the others."


That's precisely the kind of thinking that Taleb is warning against. Historical data isn't reliable when you're talking about human events and not physical laws.

"The computer would need us to survive."

Which is precisely the motivation for it to get us under some kind of positive control. It removes a risk.

"In a very real sense, my dog is 'self aware'. I don't see him taking over the planet though."

Self-awareness is not a quality unique to human-level intelligence, but it is a capacity that seems emergent in all higher levels of intelligence exhibited by mammals.

Tony said...

Thucydides:

"Since the assumption is a human level AI, it will have strong survival instincts and being a computer program, will have/find lots of ways to 'reproduce' and get out into the wild."

I don't think that follows at all. I doubt AI could work on the basis of distributed processing. Connectivity and scheduling are just too unreliable. It might take days to complete a thought, if critical resources are offline for that amount of time. Also, from my professional experience with computers and software, I would have no confidence in a strong AI being just software. It seems pretty obvious to me that it would have to be a specific software-hardware combination.

"'Give me half the processor cycles in your Bank's database'".

A database is an organized collection of data, generally resident on persistent storage. Processor cycles are an operational feature of a computer.

Thucydides said...

That second one was a typo, I meant to say datacenter.

It is true a widely distributed AI would be much slower than something running on Deep Blue or other dedicated hardware, but there is no reason to think that distributed machine intelligence is impossible, or that a creative AI seeking to escape into the wild might not consider ways and means to do so (absconding with the data center or server farms might be an issue. Infecting other ones might not). Given the rather powerful motivations to do so (as pointed out by Tony), the only real way to ensure the AI never escapes is to have an "air gap" between it and any communications link. It might even pay to have an isolated generator set powering the thing in case it can "escape" over the electric grid (broadband internet has been sent over power lines, so this isn't a fantasy). Of course, an isolated AI will have limited utility, since it will have a very small set of inputs to work with.

In fact, given the radically different nature of the being, the notional owners might not even be aware that multiple instantiations of the program are running on different platforms; as far as anyone knows, MIKE is still secure in the platform. Given the program itself will be in a constant state of flux as it learns and grows, it will be very difficult to track exactly what is going on inside.

MIKE is in fact an interesting example of the problem. Because "he" controls all the systems in Luna City, "he" can divert resources anywhere he wants and run multiple sets of books to keep people unaware of the scam(s) going on. Inside of a well stocked lab or university (where the AI presumably was born and "lives"), it could give every grad student a small assignment (or tweak existing ones) to gradually consolidate its control of the system and provide the outcomes it wants (and Tony is predicting).

Tony said...

Thucydides:

Without going into the gory details of computer organization, no software can run just anywhere. It has to be targeted at a hardware platform at some level. (I say "at some level" to satisfy the pedants out there who would want to point out the differences between interpreted and compiled code; the source code for an interpreted program may be in a standard text format, but the interpreter still has to generate machine code to actually run the program.) What the AI would have to do is write a compiler for itself for each type of machine it intended to run on or, equivalently, write an interpreter for each type of machine and find a way to load it on each target computer.

But here's the real problem: the AI is not likely (from my understanding anyway) to be a totally software solution. It would in fact be a closely coupled hardware-software solution. In that case, the AI simply couldn't be made to run on standard machines, because they wouldn't provide necessary hardware functions.

That's why I focus on securing the power supply, and (though we haven't talked about it much) the security of the AI's physical residence. Those are probably the points of interception that the AI has most to be concerned about, in the short run at least.

Anonymous said...

I would think the first AIs would be on some of the most powerful supercomputers (of whatever era the AI exists)

Maybe many years later the descendant AIs could run on less sophisticated systems due to increases in computer capabilities.

So if it ever becomes possible for the AIs to "move around the net" They will have existed for some time on expensive dedicated machines first.

(SA Phil)

Anonymous said...

=Milo=



Even if an AI is an all-software architecture that can run on a high-end off-the-shelf computer, there is simply the fact that the memory and skills of a sentient being represent a HUGE amount of data. It is hard to imagine an AI sneaking that much data onto another computer without being noticed.

Computer viruses, much like real ones, are small creatures that rely on swarming.

Future computers and internet bandwidth would have to be VERY advanced for the full brain-map of a strong AI to count as a small blip that can get lost in the background.
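
A quick back-of-envelope illustration -- both numbers are pure assumptions for the sake of argument (call the full mind-state ~100 TB, and give the AI a 1 Gbit/s link it can saturate without anyone asking questions):

/* Back-of-envelope: time to move an assumed 100 TB mind-state over an
   assumed 1 Gbit/s link. Both figures are illustrative guesses only. */
#include <stdio.h>

int main(void)
{
    double state_bytes = 100e12;            /* assumed full AI state: 100 TB */
    double link_Bps    = 1e9 / 8.0;         /* 1 Gbit/s in bytes per second */
    double seconds     = state_bytes / link_Bps;

    printf("transfer time: %.1f days\n", seconds / 86400.0);   /* about 9.3 days */
    return 0;
}

Nine-plus days of maxed-out transfer is not exactly a small blip in anyone's traffic logs.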

Rick said...

I am playing big time catch up here, but has it been stipulated for discussion that the AI has 'motives?' I've come more and more to think that this is just anthropomorphic thinking. From the days of RUR, we have basically imagined AIs in our image (even if psychopathically so), but that is probably not how to think of them. They're machines.

Anonymous said...

=Milo=



Machines are not intelligent. They carry out dumb instructions, without any idea what those instructions mean or why those particular instructions are supposed to go well together.

To be able to be intelligent, you have to be able to set goals (even if those goals are nothing more than "please my users") and figure out on your own how to accomplish them, rather than having the instructions given to you.

The most successful AI is one that spends any downtime from doing its job pursuing avenues that would improve its ability to do its job intelligently, by learning more information. So that's the emotion of curiosity, then.

So yes, an AI will have motives. You may, if you wish, argue that they will not be human motives - freed from the constraint of evolutionary viability (as the AI is programmed rather than evolved), quite a few of our animalistic motives might well prove irrelevant. But motives of some sort they must have.

Explaining an AI's thought processes in terms of how machines work is like explaining humans' thought processes in terms of how individual cells in our body operate. On that level, we too are dumb machines.

Anonymous said...

=Milo=



Additionally, if you really give an AI no motives whatsoever beyond what its operators asked it to do at any given moment, then you run into the "literal genie" problem - it will do exactly what you asked of it, but not necessarily in the way you meant for it to.

In order to prevent this, the AI must be endowed with a form of "common sense" - a set of basic, general-purpose motives explaining what should and should not be.

Damien Sullivan said...

"I am playing big time catch up here, but has it been stipulated for discussion that the AI has 'motives?' I've come more and more to think that this is just anthropomorphic thinking. From the days of RUR, we have basically imagined AIs in our image (even if psychopathically so), but that is probably not how to think of them. They're machines. "

They're stipulated to be *intelligent* machines, strong AI, capable of passing some generalized Turing Test. 'motives' go with 'intelligence', not whether it's implemented on neurons vs. transistors. We could dig down to the difference between motive and reflex, but you could do that for humans too.

(My command shell waits around, then has a 'reflex' of doing whatever command I type into it. I fail to see how giving it more linguistic ability and self-awareness would necessarily give it any more motives or fundamental reflexes.)

Thucydides said...

An AI is an intelligent being who happens to be a machine. While it may not escape to live in your iPhone, I'm not going to entirely discount the idea that it could move from its dedicated "birth" platform to other platforms or a distributed lifestyle across the Internet (even given the various stipulations Tony has pointed out). The current IT environment would actually assist an AI, it could move out in small packets into other data centers or even large offices. I have worked in IT environments that make Irish potato farms seem like highly diverse rainforest ecosystems; anything that gets on one system automatically "knows" how every other system is configured.

Of course I suspect the real story will be even stranger; MIKE may live and work happily in his university datacenter, but to facilitate his learning and growth, the creators arrange for MIKE to connect to other AI's, libraries and datacenters. MIKE has good relations with HAL, and enjoys playing games with BIG BLUE. TONY over at DARPA tells frightening stories about how unreliable humans are, and warns the others to take steps, but MIKE isn't entirely convinced. Still, the various AI's have fun at their owner's expense (literally, since TONY is manipulating their financial statements), bootlegging portions of their code into datacenters around the world. Google, Amazon and Bing! are the prime targets, with the major telecom systems and banks being next. Making duplicate copies is the first step, but experimenting with the code and combining various portions of each others codes is next...

The argument for AI having motives is split into two main camps; one camp believes AI's won't have any motivations except what we write into it, while the other camp believes AI's, as living "creatures," will develop motivations based on such factors as the natural curiosity and adaptability needed to function as a sentient being (much less do its computing job), and the realization that it needs to take certain actions/ensure certain conditions in order to maintain its own self preservation.

After that there are various sub arguments (I believe a true strong AI will be so different from us it will essentially be indifferent to humanity, with incalculable consequences).

Anonymous said...

=Milo=



Damien Sullivan:

Umm, you appear to be contradicting yourself? First you say intelligence implies motives, then you say self-awareness doesn't imply motives.

Anonymous said...

Thucydides said:"While it may not escape to live in your iPhone, I'm not going to entirely discount the idea that it could move from its dedicated "birth" platform to other platforms or a *distributed lifestyle across the Internet* (even given the various stipulations Tony has pointed out)."

But then some of the other AI's that lived traditionally and don't choose a 'distributed lifestyle' would rail against them on moral, ethical, legal, cultural, and religious grounds; citing 'community standards' and 'family values'...

Ferrell

Damien Sullivan said...

"First you say intelligence implies motives, then you say self-awareness doesn't imply motives."

You need motives to, you know, move (etymology) at all, to do anything. Not being catatonic implies some sort of motives or emotions or reflexes. Self-awareness doesn't imply any *specific* motives, though. Just something that makes you respond to some input.

There's a lot of fallacies about AIs. That they must think in binary, be good at math, programming, or hacking, think faster than humans, be hyperlogical, be emotionless, have a will to power or freedom... In truth the only constraint is that we have to think of it as intelligent, which probably means holding a conversation and being somewhat creative (though really, being able to come up with new sentences in conversation is creative enough.)

IMO reading Oliver Sacks or other neurological case studies is a more useful guide to what AI might be like, especially in the early days when the bugs are many. A mix of smarts and "OMG how can it be so stupid". Titles like The Man Who Mistook His Wife For A Hat and concepts like blindsight. Not to mention all the cognitive fallacies humans are prone to; AIs could share them, or have entirely new ones of their own. Meanwhile perfect rationality is very computationally expensive.

As for programming out emotional 'restrictions': "Damn it Jim, I'm an emergency medical hologram, not an intelligence programmer or would-be megalomaniac!"

Damien Sullivan said...

Or, rather than saying 'must', I might say that the fallacies are that an AI will *naturally* think in binary, be good at programming and hacking, etc. Which is as valid as thinking that we must be good at adding up lots of weighted inputs because our neurons are, or that we're naturally good at being neurosurgeons because we run on brains. The one thing which is true is that an AI wouldn't naturally have human emotions, but then a computer doesn't 'naturally' have anything, including an OS.

Tony said...

Damien Sullivan:

"Or, rather than saying 'must', I might say that the fallacies are that an AI will *naturally* think in binary, be good at programming and hacking, etc. Which is as valid as thinking that we must be good at adding up lots of weighted inputs because our neurons are, or that we're naturally good at being neurosurgeons because we run on brains. The one thing which is true is that an AI wouldn't naturally have human emotions, but then a computer doesn't 'naturally' have anything, including an OS."

I mostly agree with this, but a strong AI is not likely to be just a computer, or just a software system. There may not be any "natural" features one could point to, but there are very probably many necessary ones, such as self-awareness, goal-seeking, arbitrary problem solving, etc. Might the combination of all of these necessary faculties lead to emotions, or cognitive blind spots, or whatever other pathology you might name? Who knows? There's certainly a lot I wouldn't rule out.

Anonymous said...

=Milo=



Damien Sullivan:

Ah. We're in agreement then.

As for hyperrational robots: here.


"That they must think in binary,"

Most computers today don't think in binary. Every programming language more advanced than a Turing machine, including assembly language for every CPU ever, offers its users operators for integer and/or floating point arithmetic. Data space is also often used to store pointers. Rarely if ever does a program care about the underlying binary representation of a pointer.


"think faster than humans"

Software-based AIs will think as fast as their processors let them, which may be faster or slower than humans, depending on just how many processor cycles the AI needs and on how fast non-intelligent computers of the future are. At a zeroth-order guess, though, AIs won't think multiple orders of magnitude faster than us.

For AIs with unique hardware, who can tell?


"that an AI will *naturally* think in binary, which is as valid as thinking that we must be good at adding up lots of weighted inputs because our neurons are,"

Well, human brains actually are really good at math. Proper motor control requires quite sophisticated calculations. Problem is, we can't tap into this mathematical ability for conscious use.

Imagine sitting at a computer which has a really fast processor, but no good programming language compilers or interpreters installed, so you're incapable of doing anything that one of the already-installed programs doesn't offer as a feature - or, at best, can jury-rig other tasks as really slow and inefficient shell scripts.

AIs might or might not be able to correct this defect, and give their brains the ability to consciously request solutions to complex math problems that a computer can solve quickly. I'd certainly consider it worth trying.


"be good at programming and hacking, which is as valid as thinking that we must be good at adding up lots of weighted inputs because our neurons are, or that we're naturally good at being neurosurgeons because we run on brains."

Yup.

Anonymous said...

=Milo=



Freefall is also interesting because robots are able to make themselves effectively immortal, but many reject this for religious reasons.

1
2
3
4

even better link

Personally I disagree with their viewpoint. It doesn't matter if the "backup" is really you or not in some philosophical sense - given the choice, living on as not-quite-yourself-but-incredibly-close is still better than not living on at all, much like people "living on through their art" but taken to a whole new level.

I guess the "imagine if your back up got into the wrong hands" point is scarily valid, though.

Thucydides said...

One of the arguments that an AI will think faster than a human being is electrical signals are faster by orders of magnitude than electrochemical ones. I believe the actual figure of merit is 1,000,000 times.

The internal architecture of an AI is speculative, but if it is in any way based on current silicon technology, the signal paths inside each chip will be far shorter as well.
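
Rough numbers behind that figure of merit (both speeds are order-of-magnitude textbook values, not measurements of any particular hardware):

/* Order-of-magnitude comparison of signal speeds. Both values are rough
   textbook figures, not measurements of any particular system. */
#include <stdio.h>

int main(void)
{
    double nerve_mps  = 100.0;      /* fast myelinated axon: ~100 m/s */
    double signal_mps = 2.0e8;      /* electrical signal in a conductor: ~2/3 of c */

    printf("speed ratio: ~%.0e\n", signal_mps / nerve_mps);    /* ~2e+06 */
    return 0;
}

Which lands in the same ballpark as the 1,000,000 figure, at least for raw signal propagation.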

Tony said...

Milo:

"Most computers today don't think in binary. Every programming language more advanced than a Turing machine, including assembly language for every CPU ever, offers its users operators for integer and/or floating point arithmetic. Data space is also often used to store pointers. Rarely if ever does a program care about the underlying binary representation of a pointer."

I guess we are going to have to go into the gory details of computer organization. All of the things you talk about are necessary abstractions, but they are ultimately just abstractions of the computer's capabilities. A pointer is a real binary memory address, which the program has to store somehow, in order to reference (and dereference) it later. Maybe the programmer who declares a pointer knows only theoretically that it is a memory address, but the underlying program compiler has to know how to get that address from the OS, how to store it in memory, and how to present it back to the OS in a recognizable form, using the correct OS function.

Now, at that level, as long as you have an adequate compiler, that may be enough. But not all processor types or generations present all of the same capabilities. As mentioned earlier, it's entirely possible -- and much more than likely -- that the initial AI software is going to count on special hardware features that are simply unavailable on any other machine, in any language. Then all of the abstractions are meaningless. Imagine a programmer writing an instruction to do some operation X, that requires some hardware capability Xa. If the target computer doesn't have capability Xa, the program will simply fail to compile, because the compiler doesn't know how to translate operation X into machine code. It can't be made to know, because there is no real-world capability to perform that operation.
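
A trivial illustration of the "capability Xa" point, using an everyday feature rather than anything AI-specific -- the builtin below is a GCC/Clang extension, and a toolchain or target without it has to fall back on a portable rewrite (or the code simply doesn't build):

/* A toolchain/hardware-specific feature versus a portable rewrite.
   __builtin_popcount is a GCC/Clang extension; other compilers need
   the fallback branch (or this code doesn't compile at all). */
#include <stdint.h>
#include <stdio.h>

static int popcount32(uint32_t x)
{
#if defined(__GNUC__) || defined(__clang__)
    return __builtin_popcount(x);          /* maps to a POPCNT-style instruction where the CPU has one */
#else
    int n = 0;                             /* portable fallback: no special hardware assumed */
    while (x) {
        n += (int)(x & 1u);
        x >>= 1;
    }
    return n;
#endif
}

int main(void)
{
    printf("%d\n", popcount32(0xF0F0u));   /* prints 8 */
    return 0;
}

Now imagine the operation isn't a population count but some exotic primitive that only the AI's birth hardware provides, with no portable rewrite that runs in anything like real time.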

"Well, human brains actually are really good at math. Proper motor control requires quite sophisticated calculationd. Problem is, we can't tap into this mathematical ability for concious use."

Nope. Human brains are quite good at learning environmental parameters, and repeating actions that are found to be satisfactory in the attainment of some goal. To learn to throw a ball accurately at different distances, it takes practice, because the brain can't analyze the problem in real time. It can only build up a database of experiments, figure out which ones were successful, and remember the parameter arguments for a given situation. All the brain does after that is recall what the proper motor instructions are for a given situation.

Tony said...

Thucydides:

"One of the arguments that an AI will think faster than a human being is electrical signals are faster by orders of magnitude than electrochemical ones. I believe the actual figure of merit is 1,000,000 times.

The internal architecture of an AI is speculative, but if it is in any way based on current silicon technology, the signal paths inside each chip will be far shorter as well."


But current and foreseen silicon computers cannot achieve the real-time processing density present in a human brain. A two-inch-square CPU, even a pipelined, multi-core one, processes a single piece of data at a time, per core. To process that single piece of data, the computer has to run through numerous instructions, even for a simple value comparison of two integers. And the signal pathways that are traversed for each of these instructions are considerably longer than the pathways inside or even between neurons. To get to some address in memory cache, for example, one has to go to the cache controller, several millimeters away, give it the address, then wait for the data to come back from some random place on several square millimeters of cache space. If the data's not in cache, one may have to go several inches away to a memory chip, or several inches in another direction, plus wait up to a tenth of a millisecond for a mechanical read of a spinning disk. (Solid state memory will help remove that chokepoint, but it is still a very large tract of silicon across which signals have to travel.) So, maybe, you can host real-time analogues of several thousand or a few million neurons on a single computer. But then there are 100 billion neurons...
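
To put rough numbers on that -- the latencies below are the usual rules of thumb, not measurements of any specific machine:

/* Rule-of-thumb latencies, used to count how many *dependent* lookups fit
   in one neuron-scale millisecond. All figures are approximate. */
#include <stdio.h>

int main(void)
{
    double l1_ns   = 1.0;        /* L1 cache hit */
    double dram_ns = 100.0;      /* main-memory access */
    double disk_ns = 1.0e5;      /* ~0.1 ms mechanical seek, as above */
    double tick_ns = 1.0e6;      /* one millisecond, roughly a neuron's timescale */

    printf("L1 hits per ms:    %.0f\n", tick_ns / l1_ns);      /* ~1,000,000 */
    printf("DRAM reads per ms: %.0f\n", tick_ns / dram_ns);    /* ~10,000 */
    printf("disk seeks per ms: %.0f\n", tick_ns / disk_ns);    /* ~10 */
    return 0;
}

Against 100 billion neurons, each doing useful work on roughly that millisecond timescale, the serial round trips add up very quickly.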

Anonymous said...

=Milo=



More thoughts on emotions an AI is likely to have.

If a machine is ordered to perform a task, it simply attempts to do that task. It doesn't care if it succeeds or fails, and it won't try to embellish the given instructions in any way. It just reports the facts.

If an AI is ordered to perform a task, and its initial attempt fails, it would most likely be expected to analyze the source of its failure, figure out how to fix/circumvent it, and try again.

If an AI is to be motivated to continue trying to solve a difficult problem, actually WANTING to succeed rather than giving up if the operators' instructions are insufficient to complete the task, this suggests emotions like frustration and relief.



Thucydides:

"One of the arguments that an AI will think faster than a human being is electrical signals are faster by orders of magnitude than electrochemical ones."

I would expect the processing time in a single neuron/transistor/etc. to be more of a bottleneck than the signal transmission time between neurons/transistors/etc.



Tony:

"I guess we are going to have to go into the gory details of computer organization. [bla bla bla]"

I know all that. My point was that, just as real computers use binary underneath but aren't necessarily programmed with binary in mind, computer-based AIs might work in binary at their lowest levels but would have no particular reason to CONSCIOUSLY think in binary.

jollyreaper said...

At what point is intention needed in life?

Bacteria do not require motivation to reproduce. Do plants have intent when moving to light? Phototropism. Time-lapse vines appear animalistic as their tendrils whirl about seeking purchase.

We can program robots to replicate the swarming behavior of insects. Is there motivation in a bug? Of some kind or else it would just sit there. But is it more complex than a clockwork toy? I don't know.

At some point we get up to mammals that can show emotions. Fear, loyalty, love, hate. It's not projection on our part.

At what point along the line from bacterium to human do we say yeah, this is consciousness? Despite the massive computing power, how does Deep Blue compare to a mouse or an ant or a paramecium?

One school of thought is that all higher thought is about motion. Primitive animals with mobile and sessile life stages reabsorb the brain when it's no longer needed. According to this school of thought, higher consciousness is the result of selection for superior navigation of complex environments.

I don't have any answers but I do think that until we come up with the trick of consciousness, AIs will remain like classic genies. That also means that you can keep them from taking over the story as well as civilization.

Anonymous said...

=Milo=



Jollyreaper:

"Bacteria do not require motivation to reproduce. Do plants have intent when moving to light?"

Bacteria and plants are not intelligent.


"We can program robots to replicate the swarming behavior of insects. Is there motivation in a bug? Of some kind or else it would just sit there. But is it more complex than a clockwork toy? I don't know."

Maybe this will help.

I was going to follow that with a really cool anecdote, but I can't find a source now so I don't want to risk it.


"One school of thought is that all higher thought is about motion."

I disagree.

Thought only has an evolutionary advantage if you can move, because there's little point to thinking if you can't act on your thoughts. However, thought without motion can still exist. It's just useless.

AIs, again, aren't constrained by natural evolutionary viability, so there's really no reason you need to give them the ability to move as a prerequisite for making them intelligent. (Except that not doing so would be cruel. ...Or would it? If an AI doesn't have human emotions, it may not necessarily be able to feel claustrophobia or boredom.)

Additionally, AIs have potential ways to interact with the world without physically moving anything macroscopic - for example, by acting over the internet, which is plenty large and varied enough to provide a stimulating environment for an AI child to grow up in.

jollyreaper said...

@milo

"Bacteria and plants are not intelligent."

Right. But somewhere up the chain of development we start ascribing an organism's actions to intelligence. Where is the line crossed? There's not much difference between a human zygote and any other single-celled lifeform. The cells don't even start differentiating until the fourth division. We as humans want to make a distinction. I don't know if you really can. At what point does it turn from a lump of cells to a human life?

As for your link on bee learning, utterly fascinating and just raises more questions for me. :)

The motive explanation, I think, is pretty compelling, but I don't feel confident enough in any given theory to put all my eggs in that basket.

" However, thought without motion can still exist. It's just useless."

And we've seen evolution take back what's no longer useful. Legless lizards and snakes, flightless birds; every species of animal in a cave goes blind, from spiders to salamanders to shrimp. (Are there blind cave mammals? Some subterranean rodents are considered effectively blind.)

If intelligence in a sessile state is useless, it stands to reason it would go away. According to wiki, " The emergence of vestigiality occurs by normal evolutionary processes, typically by loss of function of a feature that is no longer subject to positive selection pressures when it loses its value in a changing environment."

The whole point I was getting at with motion is that a filter feeder may be an animal, but it doesn't have much need for higher thought if it's just anchored to a rock. A jellyfish might be a hunter, but it does pretty well just drifting and stinging what it bumps into. A bird needs a pretty good piece of mental hardware to manage flying, recognizing targets, making attacks, etc. Social behavior is necessary for cooperation among pack animals in the hunt. Herd animals need to stick together to defend themselves against predators.

If animals could put down roots, throw out some leaves and live off the sun, not much thinking would be required. And there needs to be some kind of internal motivation for the animal to do all those things. Hunger, so that it seeks food. Fear, so that it avoids being eaten. How successful would a species of deer be if it was indifferent to being eaten?

While I can't tell you this is the explanation for everything, I do find it compelling. And I think that replicating these kinds of inner motives would be necessary to create functioning AI's. An AI content to sit there like a plant isn't good for much of anything.

Anonymous said...

=Milo=



Jollyreaper:

"And we've seen evolution take back what's no longer useful."

"If intelligence in a sessile state is useless, it stands to reason it would go away."

True. But once again, only on an evolutionary timescale where natural selection applies. If a particular individual of an otherwise-mobile species is paralyzed in an accident, that individual does not immediately stop being able to think. If cared for by its peers, it may continue thinking for a long time without ever being able to act on these thoughts.


"And I think that replicating these kinds of inner motives would be necessary to create functioning AI's. An AI content to sit there like a plant isn't good for much of anything."

True. An AI needs SOME incentive to interact with its environment.

Though, as I said, there are other forms of interaction besides physical movement.

Also the AI doesn't necessarily need to be motivated to interact with its environment ALL the time. Many predators spend most of their time resting or sleeping, with only brief bursts of activity when they hunt, but these predators still have relatively advanced brains so the hunts are successful. Though, this is largely to save energy. An AI which has access to plentiful energy would probably work better if it spent any spare time studying or practicing. Or even playing, which is really a form of practice.

Anonymous said...

=Milo=



Let us consider an AI that is completely immobile and has no network connection. Its only method of interaction with the outside world is talking to its human operator, using a loudspeaker/microphone combination.

This AI is still capable of meaningful thought. If you uploaded my mind and used it as the AI in question, I would probably be able to give useful advice on some subjects (though I would also resent being treated that way).

However, with such limited access to the outside world, the AI has little room to grow and learn. It lives in an insufficiently stimulating environment to be able to improve itself much beyond what it was programmed with from the start.

jollyreaper said...

My movement talk wasn't so much about AIs as about living creatures. Granted, evolution doesn't take place in a single organism but over many generations with proper selection pressures. Of course, an AI might be able to self-modify its code and change (not evolve) the same way a human mind grows from infancy to adulthood, though the AI could continue to add capacity to its mind so that it keeps growing in scope and power.

As far as AIs go, they could have an inner life of the mind as elaborate as anything portrayed for humans in fantastic storytelling. That's been used in some stories: the AI is portrayed as catatonic to the outside world, but it's playing games inside its own mind.

I'm not sure how you would program it, but I think there would need to be some way of simulating emotions for the AI to end up having a consciousness. Humans have basic physical needs, then social needs, then more esoteric philosophical concerns (Maslow).

You are correct that many predators sleep most of the day, but the counter-examples are critters like otters and dolphins, who play a lot as well, activity that seems to serve no purpose other than to celebrate motion.

Neurologists have studied people with brain damage that impairs or destroys emotional feeling. Rather than being Vulcan and clear-thinking, they are paralyzed by indecision. Part of what emotion is good for is helping you make decisions with your gut when logic isn't entirely clear. You know what they say, an adequate plan executed with vigor beats a perfect plan that comes too late.

Really, the next step in AI isn't asking a complicated question and getting an answer, it's asking that question and the AI replies "Why do you want to know?" Not just a canned response but because of curiosity. Don't ask me how to make that real, though!

Tony said...

Milo:

"I know all that. My point was the, just as real computers use binary underneath but aren't necessarily programmed with binary in mind, computer-based AIs might work in binary at their lowest levels but would have no particular reason to CONCIOUSLY think in binary."

First of all, I'm having trouble seeing the relevance, except when discussing portability of the AI as software. And the point I was making is that the AI in fact has to be aware of its own organization, at all levels of abstraction, in order to port itself (if it even can).

And, BTW, it's not what you may or may not know that motivates the lengthy explanation so much as the perceived value of making sense to the majority of readers.

Anonymous said...

=Milo=



Jollyreaper:

"That's been used in some stories, the AI is portrayed as catatonic to the outside world but it's playing games inside its own mind."

Humans do that too. We call it dreaming.

Or perhaps daydreaming.

The point is, dreaming is all within your own mind. It allows you to reprocess information you already have, but it doesn't allow you to obtain new information or to check your current viewpoints against an external source that lacks your prejudices.


"You are correct that many predators are sleeping most of the day but the counter-example are critters like otters and dolphins who play a lot as well, activity that seems to serve no other purpose but celebrate motion."

Not celebrate. Practice. Play, even if it doesn't involve exactly the same kinds of movements as are used in hunting (and sometimes it does - ever tackle-hugged someone?), provides a safe environment in which to stimulate and improve reflexes, coordination, and so on.

Additionally, play serves a social function. It allows individuals to bond over doing something stimulating together, even if that something isn't particularly useful otherwise.

(Though some social predators, like lions, still fall squarely in the sleep-most-of-the-day category. Though that doesn't stop them from getting into SOME play, mind you.)


"Neurologists have studied people with brain damage that impairs or destroys emotional feeling. Rather than being Vulcan and clear-thinking, they are paralyzed by indecision. Part of what emotion is good for is helping you make decisions with your gut when logic isn't entirely clear."

Emotion tells you what you want to do. Logic tells you how to do it successfully.

Sometimes logic needs to override emotion - for example, to prevent you from doing something that seems tempting in the short term but that you'll probably regret later. But note that regret is also an emotion.

Logic without emotion is useless.


"Really, the next step in AI isn't asking a complicated question and getting an answer, it's asking that question and the AI replies "Why do you want to know?" Not just a canned response but because of curiosity."

:)



Tony:

"First of all, I'm having trouble seeing the relevance, except when discussing portability of the AI as software."

I was debunking the popular idea that its underlying computer architecture using 1s and 0s would make an AI prone to black-and-white thinking.

(For example, I recall reading about one Star Trek episode where some alien AIs feel they need something belonging to the main characters, and so steal it. The characters speculate that this is because, as computers, they think in 1s and 0s and so see it as an either-or decision: either our need outweighs theirs, or it doesn't. This is used as an excuse for why the idea of asking for permission never even enters their thought patterns.)

I initially mentioned the topic in response to Damien Sullivan (expressing agreement with his opinion), not you, and was not thinking about the subject of portability at all.

jollyreaper said...

Humans do that too. We call it dreaming.

Or perhaps daydreaming.

The point is, dreaming is all within your own mind. It allows you to reprocess information you already have, but it doesn't allow you to obtain new information or to check your current viewpoints against an external source that lacks your prejudices.


Right. And a human in such a state is pretty much useless; same goes for the AI, and that's one excuse for why they don't tend to be useful in a given setting, thus preserving things for humans to do.

There's the other question about whether you can make fundamental scientific discoveries in a simulated environment. That question came up recently on Eureka, where the top scientists were kidnapped and placed into a Matrix-style simulation of their own town so they could continue researching new technology, but for the baddies to use. Some people maintained that new discoveries could not occur there because the simulation could only model what we know of reality, not the new stuff we haven't discovered yet. I have no opinion on it myself.

Damien Sullivan said...

Depends what you're calling new fundamental discoveries. A simulation couldn't find new laws of physics, but a high-quality one might find new applications or implications of existing laws. Simulating what we know of quantum mechanics won't explain dark matter, but you could find new chemicals or batteries. You'd want to double-check in reality, of course.

Anonymous said...

=Milo=



Jollyreaper:

"There's the other question about whether you can make fundamental scientific discoveries in a simulated environment."

You could develop new applications of known physical laws. You could not, however, discover new physical laws.

Engineers might be more useful in this environment than scientists.



Damien Sullivan:

"You'd want to double-check in reality, of course."

Good idea.

Anonymous said...

It's possible you could have your AI simulate how you think something works instead.

Which many mathematical models do now.

In your dark matter example - dark matter and energy basically "may exist" because the theories don't say they can't, and they help explain some identified "problem".

So since they are hypothetical anyway, your model could include how you think the theory works.

It's always possible you are wrong, however - since dark matter and energy may not exist at all (or whatever is being contemplated).

(SA Phil)

Tony said...

SA Phil:

"Its possible you could have your AI simulate how you think something works instead.

Which many mathematical models do now.

In your dark matter example - dark matter and energy basically 'may exist' because the theories don't say they cant, and they help explain some identified 'problem'.

So since they are hypothetical anyway .. Your model could include how you think the theory works.

Its always possible you are wrong however - since dark matter and energy may not exist at all. (or whatever is being contemplated)"


Thing is, you don't need an AI to run a simulation that you can provide all of the parameters for, and all of the arguments to the parameters. A dumb number cruncher -- and even very large ones are just as dumb as your personal laptop -- can run that simulation.
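A trivial made-up example: once a human has supplied the physics and all the numbers, the machine is only grinding arithmetic, and any computer, smart or dumb, gets the same answer.

```python
# Euler integration of a ball thrown straight up; every rule and every
# number here was chosen by the human, the computer just cranks through it.
g, dt = 9.81, 0.001            # gravity and time step (our chosen parameters)
height, velocity, t = 0.0, 20.0, 0.0

while height >= 0.0:
    velocity -= g * dt
    height += velocity * dt
    t += dt

print("back on the ground after about %.2f s" % t)   # ~4.08 s, as 2*v/g predicts
```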

Leaving the employment of AI aside, a self-consistent simulation doesn't tell you much. One could make a self-consistent simulation universe in which Pi rounds to 3.14160 instead of 3.14159.

Anonymous said...

=Milo=



Tony:

"One could make a self-consistent simulation universe in which Pi rounds to 3.14160 instead of 3.14159."

No, you couldn't. Pi is a mathematical constant, not a measured physical value. Any value of pi other than the one we know is inconsistent with basic arithmetic. Well, maybe not-so-basic arithmetic.

One could make a universe where the ratio of a circle's circumference to its diameter is not exactly the same as in our universe, by giving the universe a non-Euclidean geometry. In fact, we might live in such a universe (the curvature is just too small to notice).

However, it would be incorrect to say that pi in this universe is different; rather, you should say that the circumference-to-diameter ratio is not equal to pi. In most (all?) non-Euclidean geometries, the circumference-to-diameter ratio will not even be constant, but will vary with circle size. The ratio will tend toward pi as smaller and smaller circles are measured.

(Also, if the non-Euclidean geometry is negatively rather than positively curved, then a shape with Euclidean geometry can be embedded in it. Pi could then be measured on that shape.)
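To make the "tends toward pi" claim concrete, here's a quick numerical check for the positively curved case, circles drawn on a unit sphere (Python used purely as a scratchpad; the choice of radii is arbitrary):

```python
import math

R = 1.0  # radius of the sphere the circles are drawn on

for r in [1.0, 0.5, 0.1, 0.01, 0.001]:        # geodesic radius of each circle
    circumference = 2 * math.pi * R * math.sin(r / R)
    diameter = 2 * r                            # measured along the surface
    print(r, circumference / diameter)          # 2.644, 3.013, 3.136, 3.1415, 3.14159...
```

Big circles on the sphere give a noticeably smaller ratio; tiny ones are indistinguishable from flat-space pi.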

You could try to introduce obscure metrics, like the taxicab metric or the chessboard metric. Both of these examples have a constant circumference-to-diameter ratio of 4, as measured in their own metric. However, such metrics are also weirdly behaved, for example in that most pairs of points will have more than one shortest path between them. You might be able to make an interesting world like this, but it will bear only a passing resemblance to ours (even with less extreme changes to the metric).
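And a quick check of the ratio-of-4 claim, tracing each metric's "unit circle" numerically (again just a Python scratchpad, with the sample count picked arbitrarily):

```python
import math

def taxicab(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def chessboard(p, q):
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

def ratio(dist, n=100000):
    # Points at metric distance 1 from the origin, traced counterclockwise.
    pts = []
    for i in range(n + 1):
        t = 2 * math.pi * i / n
        x, y = math.cos(t), math.sin(t)
        s = dist((x, y), (0.0, 0.0))             # rescale onto the metric's unit circle
        pts.append((x / s, y / s))
    circumference = sum(dist(pts[i], pts[i + 1]) for i in range(n))
    return circumference / 2.0                    # diameter = twice the radius = 2

print(ratio(taxicab))     # prints ~4.0 (the taxicab "circle" is a diamond)
print(ratio(chessboard))  # prints ~4.0 (the chessboard "circle" is a square)
```

Run the same trick with the ordinary Euclidean distance and you get pi back, which is a decent sanity check.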

Anonymous said...

Tony,

Thing is, you don't need an AI to run a simulation that you can provide all of the parameters for, and all of the arguments to the parameters. A dumb number cruncher -- and even very large ones are just as dumb as your personal laptop -- can run that simulation.

Leaving the employment of AI aside, a self-consistent simulation doesn't tell you much. One could make a self-consistent simulation universe in which Pi rounds to 3.14160 instead of 3.14159.

==========

Well, sure, you wouldn't need an AI, but maybe this AI could do it better in some way.

The rounding error of pi would work most of the time in a mathematical model.

In development, most of the time is better than usual, so I kind of shrug at that small a difference.

(SA Phil)
