Thursday, January 13, 2011

The Ferry to Hogwarts

Pier 29 1/2, San Francisco. Reality 1, whimsy 0 ...

Alas there seems to be no ferry service to anywhere from this pier, though a block away is the ferry to one of San Francisco's most famously dubious tourist attractions, Alcatraz. (Possibly relevant note: The gap in my posting here was due to a stubborn head cold, not incarceration.)

Nor is there any profound justification for posting this image, beyond the general Rule of Cool. But it provides a handy segue to an ongoing topic of this blog, the somewhat uneasy boundary line between Realism [TM] and Romance.

Such border disputes are by no means confined to outer space, but space is a particularly productive environment for them, because the whole idea of going into space for any reason is essentially and profoundly Romantic. Yes, comsats, weathersats, and various other things we have sent into space have their practical uses, but it seems awfully unlikely that strictly practical people would ever have come up with them, given how absurdly difficult and costly space travel is.

Yes, before space travel there was military rocketry. But - setting aside the question of in what sense our blowing each other up qualifies as practical - the established or foreseeable roles of military rocketry in the early 20th century did not point toward space boosters. Practical military rockets like the Katyusha were essentially self propelled shells, more expensive and less accurate than standard shells, but able to be fired from cheap, lightweight launchers instead of heavy, expensive artillery rifles.

The V-2 was, in the pre-atomic age, a supremely impractical weapon: an expensive and inaccurate way to lob a shell not all that much farther than the longest-ranged guns of the time could achieve. No one would have come up with such an idea on purely military grounds. I'll guess that Versailles restrictions played a role in making the German army interested in alternatives to conventional artillery, but it was the first generation of space geeks, not military specialists, that put long-range rocketry into play.

Yes, nuclear warheads made ICBMs all too practical, but it is no accident that the first generation of ICBMs, both US and Soviet, turned out to be much more suitable as space boosters than as weapons.


Space travel is, like the image above, ultimately all about the Rule of Cool, AKA Romance. This has significant implications. As strong as are the practical reasons for not spending zillions on it, these reasons have not, so far, succeeded in making the whole silly thing go away. Unless post-industrial civilization removes itself from the social selection options, it will probably not go away in the midfuture, either.

A comparison can be made here to other Zeerust-era future techs, such as the SST. Supersonic aircraft are also inherently cool, but not that cool. So, not only do SSTs fail to offer enough merely practical benefits to pay for their development cost, they also fail to offer enough coolness to overcome that limitation.

I suspect that people will walk on Mars before airline passengers (again) travel at supersonic speeds.

It may be somewhere between paradoxical and hypocritical for me to turn around and argue this point, considering how much time I spend here beating up on popular space tropes. But I beat up on the PSTs so you won't have to. Romance, in and of itself, need not apologize to realism for anything, but the minor sub-branch of Romance that decks itself out as hard SF has a certain obligation to fake it convincingly, including space futures that sustain at least the illusion that they were invented in this century.




The image was snagged from Google Maps. And here is a genuine example of mysterious British transportation signage. Can anyone here elucidate the meaning?

262 comments:

Nick P. said...

I'm still not sure I buy the idea that space travel is fundamentally a silly and impractical thing.

This planet at the end of the day is still a finite object. There is finite living space, finite outlets for social dissidents and most importantly a finite amount of time before one of any number of recurring cosmic or geologic (or human-caused) calamities does its level best to scrape us off of this planet for good.

In the long run, getting a self-sustaining industrial society in space up and going means freeing ourselves from the Earth's inherent limits and opening up an effectively infinite future.

That strikes me as an extremely practical thing to work towards.

Milo said...

Nick P.:

"This planet at the end of the day is still a finite object. There is finite living space,"

Yet enough that we already have more people than we know what to do with. And we aren't even making ideal use of the living space we have.

Meanwhile other places have very little living space until we terraform them.


"finite outlets for social dissidents"

There are already more wildly different cultures on Earth than there are viable targets for colonization in the solar system. Just how much will some extra living space really help us, especially when you probably need to be sponsored by a major government to get there?


"and most importantly a finite amount of time before one of any number of reoccurring cosmic or geologic (or human caused) calamites does it’s level best to scrape us off of this planet for good."

The only one I'm really seriously worried about is when the sun turns into a red giant some five billion years from now. That's over four orders of magnitude longer than the total history of Homo sapiens, and over seven orders of magnitude longer than the span of industrial civilization, so I think it's plenty of time for us to figure out FTL travel - or, if the laws of physics insist on being difficult, interstellar STL travel. (If you're worried about trip times, it's also plenty of time for us to figure out immortality.)
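A rough back-of-the-envelope in Python, assuming ballpark figures of ~300,000 years for Homo sapiens and ~250 years of industrial civilization (both loose round numbers, nothing precise):

import math

sun_red_giant_years = 5e9   # ~5 billion years until the Sun swells into a red giant
homo_sapiens_years = 3e5    # assumed ~300,000 years of anatomically modern humans
industrial_years = 250      # assumed ~250 years of industrial civilization

print(math.log10(sun_red_giant_years / homo_sapiens_years))  # ~4.2 orders of magnitude
print(math.log10(sun_red_giant_years / industrial_years))    # ~7.3 orders of magnitude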

Recovering from a nuclear war or a dino-killer meteor smack is bound to be not much more difficult than terraforming an airless ball of rock. And it would require a very large disaster to ruin all higher civilization worldwide. If, say, all of North America were to suddenly sink under the sea, anywhere from Britain to China to Australia (not meant as an exhaustive list!) could end up taking over the reins. It'd certainly be an economic blow, and some valuable data would be lost, but we'd still have modern technology and plenty of first-world nations to produce it.

Thom said...

I disagree with the idea of SSTs coming back into existence only after the colonization of Mars. How else would you get humans to LEO than with a hypersonic transport with rocket boosters? Look at SpaceShipTwo. Multiply that by 100.
It's not totally out of the question. NASA has never had a need to develop low-cost human-to-orbit systems because they have an almost unlimited budget. At, let's say, 80 miles altitude, not terribly far up, there is a transfer between the SST and a ferry craft to higher orbit, where a NERVA shuttle takes them to Mars. An obvious spinoff is the development of a downgraded version that can only go up 20 miles, but can still easily hit Mach 5.

A side note on these transports: Mach 5 is about a mile per second; that's 1.6 kps. That knocks an elliptical LEO down to only 4.9 kps more. For most chemical rockets, that's nothing.

Citizen Joe said...

It is all about setting the bar at the right height to begin with. James Bond movies do this with a ridiculous action scene right at the beginning. The rest of the movie is more believable by comparison. If you can't take the ridiculous nature, you're only out a few minutes and can walk out the door. The same applies to space travel and romance. If you stick to the default bar height, Reality (tm), then your believable romance amounts to project managers stabbing each other in the back for corporate funding. If instead you start the story with "popping out of FTL", you've set the bar MUCH higher.

Tony said...

I think space travel, to the degree we've actually done it -- with the exception of sending people to the Moon -- has been about as practical as it has any right to be. The manned Moon landings were a bit of potlatch, motivated by a strategic competition that, in the shadow of nuclear weapons, was struggling to find means of expression. Otherwise, our space endeavors do have reasonable economic motivations.

Military rocketry did contribute to the rapidity of space development, but not to the reasons behind it. And even if there hadn't been ICBMs, there's every reason to believe that military space lift would have been developed to the same degree for the purpose of emplacing spysats, weather satellites, comm satellites, and ultimately GPS. And after you have military space lift, you get commercial and scientific activity piggybacking on the technology.

Where Romance must meet reality is in the near future. Several centuries down the road, have any set of conditions you want, as long as it's internally consistent. Just don't forget that you're then dealing in science fantasy, not hard SF. As with swords and sorcery, just because you can imagine it doesn't mean it's possible.

But in the relatively near future, as has been said, there has to be a plausible way from here to there. So you have to start with the realization that all of the classic rocketpunk motivations are out the door. You need to make up motivations and settings that sound plausible knowing what we know today.

WRT the airport sign, I'm thinking it means that the seats are for travellers and their friends/relatives only. There's no other interpretation of public -- even in British English -- that makes any sense.

Thucydides said...

Getting back to the ideas of SST's and other post war techs (all born about the same time as the Rocketpunk trope), I would suggest they are mostly victims of linear extrapolation, as discussed in an earlier post.

Prop planes made of canvas and wire become metal prop planes, become turbo compound "Super Constellations" then jet airliners. Supersonic jet airliners seemed to be the next step, but in reality, airline technology hit the top of the "S" curve with the Boeing 707.

The only way to break the speed barrier without insanely complex materials science, cooling systems or fuel consumption is to exit the atmosphere and its friction, and do a suborbital boost in an SSTO or equivalent machine. Power-on boost to the edge of the atmosphere, 30 minute coast and power-on deceleration and landing. Overall, a much simpler flight regime, and a machine like the DC-X could be scaled up to perform it if the market for hypersonic rides across the planet existed. If the need is to get places fast without reference to means, MAGLEV trains in evacuated tubes might also do the job for a far lower cost (amortized over a larger number of passengers). Notice we have actually bypassed the "airplane" trope.

Getting from the current situation to the Rocketpunk Universe will probably require one big leap away from rocket boost to orbit; the only problem is there are no obvious candidates with the combination of technical feasibility and cost that can do the job today or in the near future.

Anonymous said...

Just a quick comment: Humans have an instinctive need to explore, expand, and settle somewhere new...it may be left over from the need to migrate away from dangerous places, or simply to spread out to ensure species survival, but whatever the reason, humans do have a need to 'go over the mountain'...that, in and of itself, is motive enough to colonize Mars or Titan, or wherever.

Ferrell

Tony said...

Ferrell:

"...that, in and of itself, is motive enough to colonize Mars or Titan, or wherever."

Not quite. It's motivation enough to want to go places in the solar system, and eventually to other stars. In human history, that motivation, combined with a few resources or a big enough threat, was enough. In the future case of leaving Earth, the vague idea that going places is a good thing is not matched by the availability of resources or threat. It's just too expensive to make the move for simple itchy feet to justify it.

Anonymous said...

Yes, but it is the 'itchy feet' that motivates people to find (or invent) a justification to go there. People have been doing that for millennia and I see no indication of that changing in the foreseeable future. While economic justifications are preferred by many, those aren't the only 'reasons' to move far away to start a new life; even going to other worlds, economics aren't going to be the only justification used for acting on that 'itchy feet' feeling...and it is that small group of people who act on that feeling that are going to come up with a justification (or set of justifications) unique to themselves...just like people have always done. And yes, colonizing another world will take an unprecedented amount of resources and effort, but there will be some who go beyond just dreaming about it and will have the wherewithal to actually do it; and after the first does it, others will follow.

Ferrell

tsz52 said...

Clarke: "The reason that the dinosaurs are extinct is that they didn't have a space programme."

Milo: Yup, nukes and dino-killers will have a job wiping out all humies, but a mutating bio-weapon might do the job.

Ferrell: Thanks for writing your words - actually put a lump in my throat... I get all sentimental about this kind of thing. :)

I'm sure that you all know my views on the matter [one of my art qualifications was based on the brief: 'Design and create a maquette for a statue in a museum of modern transport, in a Futurist style'... couldn't believe my luck, and the tutor laughed as she passed me the assignment: "Boy, there's a brief in there for *you*!"]

SST: I think that the main problem is that they can't be slamming their supersonic footprints into residential areas, so can only cruise at the speeds that they're optimised for over oceans.

They're very sub-optimal (ie inefficient) on take off/landing and transonic speeds, which might be a fair chunk of the flight time; routes that are mostly over ocean are pretty limiting in options/destinations; and it's best to not fly over the sea unless you otherwise have to, from a safety point of view.

It's the absence of tech that allows a supersonic plane to not generate a sonic boom that's the real killer... doubt that all of the accelerandos in the world will ever get round that one.

I guess that it will take a pretty huge resurgence in (national, most likely) pride; hypersonic capability; materials breakthroughs that allow them to not be such dogs in the subsonic regime; before we see them again.

Milo said...

tsz52:

"Yup, nukes and dino-killers will have a job wiping out all humies, but a mutating bio-weapon might do the job."

Left to their own devices with natural selection, they'd be more likely to mutate to be less lethal, rather than more. A disease doesn't prosper by killing off all its vectors. To really kill everyone, a bioweapon would most likely have to be designed to be 100% deadly from the start.

Thucydides said...

Interestingly, the space part of space travel is getting easier all the time; a version of the M2P2 has been tested and discovered to provide protection against high energy solar radiation. http://nextbigfuture.com/2011/01/mini-magnetosphere-prototype-protects.html#more

This gives a high performance drive and reduces the mass of the spacecraft which needs to be devoted to shielding as well.

Sadly, we are still lacking an easy and inexpensive means of getting to orbit, and some rational reason for being there in the first place.

Citizen Joe said...

The difficulty in climbing out of Earth's gravity well can add to the story element of space colonies. Yes, there is a huge initial expenditure, but once that is done, fabrication can be done in space. Getting people into space would be very expensive, so they'd be expected to stay there for very long periods of time, perhaps even generations. It might be counter intuitive, but making cheap launch ability actually makes it less likely to have space colonies, since you could just commute to work easily.

Anonymous said...

I think you are very correct - this problem shows up a lot in SF - but I don't like your examples.

First, V-2. It *was* absurdly expensive, but it was also fairly accurate and had much longer range than any artillery. In addition, the largest guns of the time were confined to railways, while the V-2 could be transported on a trailer (the "Meillerwagen").

As far as SSTs (or spaceplanes for point-to-point suborbital hops) go, I think that there will be a resurgence in the relatively near future. There are practical benefits (for the very rich) to very fast travel, while manned exploration of Mars offers no immediate tangible benefits that I know of. (Asteroid mining may be another story, however.)

In addition, the technical problems with SSTs may soon be solved (see the Quiet Spike and shaped sonic boom demonstrator for examples of work on sonic boom reduction).

Anyway, great point, and darn those transistors for making manned space stations unnecessary!

Milo said...

Citizen Joe:

"It might be counter intuitive, but making cheap launch ability actually makes it less likely to have space colonies, since you could just commute to work easily."

Only if in addition to cheap, the trips are also very short.

Rick said...

Welcome to a couple of new commenters!

I encourage 'anonymous' commenters to sign a name at the bottom of your posts (whether it is your real name makes no matter).

I don't actually think space travel is silly; if I did it would be even sillier for me to blog about it!

But as for 'practical,' we have found such reasons only after the fact - the basic technical requirements were already worked out, by space geeks, before anyone came up with communications relays, etc.

Absent space geekitude, would rocketry have ever been developed to the degree, and in the direction, needed for space boost? The focus of traditional military rocketry was cheap & rugged, a la the Katyusha, a very different line of development.

On SSTs, it is all a matter of economics. There's a reason why the Virgin guy is putting money into suborbital tourism rather than an SST. People may pay $200,000 to pop into space. It is more doubtful that they would pay $100,000 just to get to Hong Kong faster. Air travel has become a utilitarian thing, even at the high end.

Eventually, given colonies and all that, space travel might become merely utilitarian as well, but a LOT of Romantic stuff has to happen to reach that point.

Thucydides said...

Right now, reaching space is something like trading with China by taking a caravan along the Silk Road. There was a possibility of vast profits to be made (enough so the Serenìsima Repùblica Vèneta became a major power acting as the European conduit to the road).

It can provide riches for people willing to invest the time and resources, and manage the risk, but for the rest of us, the amount of risk involved and the scale of resources needed is very daunting. The Silk Road was eventually outflanked by developments in shipbuilding (oceangoing caravels and accurate navigational maps and tools) . We will need to develop some analogous means to bypass rocket technology in order to achieve economies of cost, speed or scale.

Milo said...

Rick:

"I don't actually think space travel is silly; if I did it would be even sillier for me to blog about it!"

It should be noted that for the sake of fiction, even things which are blatantly and unabashedly impossible in real life can still not be "silly", as long as you can write a good story about them. (Hence the fantasy genre.)



Thucydides:

Although I get the analogy you're making, in another respect I think the Silk Road is a poor analogy for in-system space travel. The defining characteristic of the Silk Road is that there were urban settlements along its entire length, and that no-one tried to travel its full length without pausing anywhere along the way. This is quite unlike rocket travel, where the only reasonable travel plan is to just go from point A to point B, without any intermediate stops.

An FTL civilization might develop a better Silk Road analogy, hopping from star system to star system while still taking quite some time to cross the entire galaxy.

Rune said...

Hi all, first reply here. But I do read every once in a while, and I love the discussions going on here. Since my usual venting place, the newmars forums, seems to be out of commission, I need to rant somewhere. Hope I don't intrude! ;)

First of all, getting off-planet may not be practical now, but that won't always be so. If only because Earth's crust has a finite amount of reachable metals, we will get around to going out there eventually. It may be with mass-produced cheap chemical expendable boosters, with cool fusion engines, or with an even cooler space elevator, but we'll get there, it can be done, and there IS an ultimate reason.

Second of all, we haven't gone there yet because we don't want to. Plain and simply put, all of Earth's nations' space programs are a pitiful percentage of their gross national products. Where the money is, that's usually where the real interest is. For example, I believe in Spain we would be exploring the asteroid belt by now if we spent on space the attention, money and effort we spend on soccer.

A space colony could start construction tomorrow, if it only depended on tech. We can't build a space elevator or fusion engines, but we could build launch loops if we really wanted to go to space now. Chemical engines could be made cheap enough. But even the mighty NASA is something like 0.5% of the federal budget, right?

Personally, I believe that someone will do it (establish themselves off-world, that is) before it is really necessary or desirable (by that meaning, probably, when Earth is reaching a point where geometric economic growth is no longer possible), but not that much sooner. And it will be for some foolish reason like national pride or the fabled "spirit of the frontier". Humans are like that. But what amazes me more is that at the very moment we do that (the sooner the better!), we are seeding a second nucleus of growing human civilization that isn't confined to the resources of a planet, but those of an entire solar system. That raises a compound interest problem whose implications stagger the mind. (Can you imagine what several trillion humans would do? I know I can't, and I know it isn't that far away.)
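Just to put a toy number on that compound interest intuition (the growth rate here is a pure assumption for illustration, not a prediction):

import math

current_population = 1e10   # ~10 billion people, round figure
target_population = 5e12    # "several trillion"
growth_rate = 0.01          # assumed steady 1% per year, purely illustrative

years = math.log(target_population / current_population) / math.log(1 + growth_rate)
print(round(years))         # ~625 years at a steady 1% per year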

On a different note, and actually straight in the middle of my specialty, SSTs have "a couple" more problems besides sonic booms. Like an insane fuel expenditure that they can't really charge to an average person, which kills their supposed market. Remember, we are still using chemical fire here, and that has a finite efficiency. Aerodynamic heating also plays a role, as has been mentioned. Bottom line, they are too expensive for civilians to afford, and nowadays in the commercial aerodynamic world, fuel efficiency is the new rule of coolness anyway.

And sorry about the length, it came out longer than intended... must be the forum withdrawal syndrome.

Thucydides said...

Yes, the problem with analogies, metaphors and other clever figures of speech is they often can be taken several ways. In this case, I was trying to suggest that using rocket technology to reach orbit has about the same level of difficulty and economic efficiency as trading along the Silk Road in the Middle Ages.

An interplanetary "Silk Road" of sorts does exist in the form of the "Interplanetary highway"; the ever shifting nodes of gravitational attraction between planets and the Sun, but is only good for unmanned probes with lifespans and missions measured in years or decades.

Milo said...

More like centuries or millennia, for interplanetary travel. It's only at all useful for travel inside a single planetary system (like Earth-Moon and Lagrange points), where it does take "only" years (as opposed to the Apollo missions, which pulled this off in DAYS).

KraKon said...

Following this too

jollyreaper said...

test?

jollyreaper said...

splitting post

"On a different note, and actually straight in the middle of my specialty, SSTs have "a couple" more problems besides sonic booms. Like an insane fuel expenditure that they can't really charge to an average person, which kills their supposed market. Remember, we are still using chemical fire here, and that has a finite efficiency. Aerodynamic heating also plays a role, as has been mentioned. Bottom line, they are too expensive for civilians to afford, and nowadays in the commercial aerodynamic world, fuel efficiency is the new rule of coolness anyway."

This ties into the whole Accelerando talk. Linear projection of growth trends can be very foolish. There's been a lot of hate and backlash against singularity fiction. My personal terms for it are weak singularity and strong singularity. Weak singularity means the tech can go all wonky but we're not talking about a techno-rapture. Strong singularity implies just that.

The opponents of the super AI = singularity concept say that there's no reason for a super-brain to automagically start advancing more quickly than us. It's not going to devise new tech by super-thinking, it has to build particle accelerators and experimental equipment, extract resources and generate energy, etc. And what I say to that is that a super AI has the potential to run at maximum theoretical efficiency. So there's no politics, no wasting resources, no graft, no sitting around. It's bang bang bang cranking out the work.

jollyreaper said...

There was a pretty good Terminator writeup by an obsessive fan on the goingfaster site. He documented what the 2029 world would be like and how Skynet could operate. And what was really interesting is how he showed Skynet was about to pull radically ahead of the humans with the new tech being developed. The old factories were pretty much familiar to us. They were adapted from human facilities, our production equipment with new robots added to replace the human operators. The next level was purpose-built factories that were impenetrable to us. No human-scale catwalks, no OSHA standards, not even breathable air in many places. Yes, there were maintenance spaces for robots, but those shafts could run any which way. Hostile, industrial, unfriendly. And then the new stuff with nanotech was set to blow our minds. Factories not constructed but bursting from the ground, self-assembling from feedstock piped into them. Just as self-replicating biological life covered vast tracts of the planet's surface and was the most important thing going on there, Skynet was ready to replace biomass with nanomass. The only limit to Skynet's growth at this point was the resistance, which caused Skynet to keep squandering resources building war machines, losing factories to attack, etc. Skynet wasn't able to outproduce the humans, who were getting supplies fed to them from bases overseas. The people scurrying around and living in sewers was just how everyone lived on the American front, which was the last front of the war.

Aaaaaaanyway, the question here is if people can imagine a future where tech singularity and AI gods aren't inevitable and the reason can't be "because I don't want them to be," it has to be logical. :)

Thucydides said...

Strong AI is magitech right now since we simply have little idea of how it could be made to work. There is very little agreement on how we think, so I think this question will be open for a long time to come. Since the human brain has more connections than there are stars in the Milky Way galaxy, even brute force tech AI is still a long way off, much less the algorithms to run on this hardware.

Assuming this is possible, I have contended in the past that AI's would be thinking and communicating at speeds orders of magnitude faster than biological brains, so they will pay about as much attention to us as we do to ants. Most of the time, we only pay attention when they are crawling around the kitchen or eating the foundations of the house...

The goals of the AI's will be rather opaque to us as well; while we spend five minutes formulating the question of what is going on, they will have subjectively spent several thousand years thinking and planning. Their ability to manipulate the environment will be much slower (although they might be working at micro or nano scale, so still thousands of times faster than we can work), so many AI generations will pass while "something" happens, somewhat like the generations of artisans who worked to build a cathedral in the Middle Ages.

About the only underlying motivation that we will understand is that they will want access to the 120,000 TW of energy the Earth receives from the Sun.
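That number is easy to sanity-check: the solar constant times Earth's cross-section, minus the roughly 30% reflected as albedo, lands right around the quoted figure (a rough sketch, not a careful radiation budget):

import math

solar_constant = 1361       # W/m^2 at 1 AU
earth_radius = 6.371e6      # m
albedo = 0.3                # rough fraction reflected straight back to space

intercepted = solar_constant * math.pi * earth_radius**2  # ~1.7e17 W
absorbed = intercepted * (1 - albedo)                      # ~1.2e17 W
print(intercepted / 1e12, absorbed / 1e12)                 # ~174,000 TW and ~121,000 TW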

jollyreaper said...

Yeah, that's kind of where I'm at, foggy understanding of AI motivations by us mere mortals.

Stories are about conflict of one sort or another, either man against nature, man against man, or man against himself. We want to read about people overcoming problems or maybe losing after a tremendous fight. We like upper stories but can also appreciate a really good downer.

The few things we can extrapolate are that life would need some kind of food, potentially shelter, and the means to reproduce. They would need that motivation to compete in any kind of Darwinian system. Higher level motivations seem to be a byproduct of intelligence. A clam or a dog would have a hard time understanding ennui and why a well-fed and safe person wouldn't be extremely happy. I wonder what psych problems an AI could develop?

Milo said...

Thucydides:

"Assuming this is possible, I have contended in the past that AI's would be thinking and communicating at speeds orders of magnitude faster than biological brains, so they will pay about as much attention to us as we do to ants. Most of the time, we only pay attention when they are crawling around the kitchen or eating the foundations of the house..."

Thinking faster than us does not mean they are smarter than us. We pay little attention to ants not because they're slow (ants can work just as quickly as us, on their own scale), but because they're stupid and unlikely to accomplish anything of note even if we give them time.

Anyway, thinking quickly is of limited use given that the physical processes with which you implement what you've thought of still have to proceed at a limited pace. If it takes a year to build something, is it such a big deal whether you can draw up the blueprints in a day or a month?


"many AI generations will pass while "something" happens,"

Why? Why program your AIs to die in three seconds, no matter how many brilliant ideas they can come up with in those three seconds? More likely they'd simply be "immortal" and have really long subjective lifespans.



jollyreaper:

"A clam or a dog would have a hard time understanding ennui and why a well-fed and safe person wouldn't be extremely happy."

I wouldn't be so sure about that. Even animals like to play with toys when they're not doing something more important. So they have some conception that there is more to life than pure survival.

Animals can't think ahead as well as us, though, so an animal that's well-fed and safe now isn't going to worry that the conditions keeping it well-fed and safe will eventually collapse if it doesn't prop them up.

Tony said...

Let's see...

1. Itchy feet are not enough of a motivation to colonize the solar system. And that's not because I'm -- or anybody else is -- a dismal economic-rationales-only stick in the mud. It's because individual itchy feet and the resources to move individuals around the surface of the Earth are easily within reach of each other. Individual -- or even societal -- itchy feet are not within easy reach of space colonization level resources. Only the invocation of magitech will make it so. That's fine for Romance, but for Reality? Not so much.

2. YMMV, but I think we're still experiencing altogether too many matter-of-fact invocations of the McGuffinite Future.

3. Filthy rich people won't bring back SSTs. They spend their private jet money on converted 737s with sleeper cabins, accepting as a cost of doing business taking up to 24 hours to get where they're going.

Rune said...

Sorry about the multiple posting. I got a couple of "too long" errors, but they were posted anyway. Must be on account of me being a newbie. ;)

Anyhow, jollyreaper:

The whole "accelerando thing", in regards to computers and IA, has come upon the hard physical truth on definite limits in how small you can make a transistor. Eventually I expect we will find out there is also a limit in how many transistors, however small, you can make work together before the distance between them becomes a factor. And then we'll have our ultimate computer (based on semiconductor chips, of course), and it may turn out better than a human brain at computing, and therefore smarter if programmed properly, or it may not.

Anyway, for every technology, there is some form of physical limit. And mastering that technology doesn't mean you can discover the next one the same day. Those things are more like temporary periods of extreme growth as a technology is first discovered and then perfected, and periods in between when not much technological growth can happen.

Anonymous said...

Tony said:"Itchy feet are not enough of a motivation to colonize the solar system. And that's not because I'm -- or anybody else is -- a dismal economic-rationales-only stick in the mud. it's because individual itchy feet and the resources to move individuals around the surface of the Earth are easily within reach of each other. Individual -- or even societal -- itchy feet are not within easy reach of space colonization level resources. Only the invocation of magitech will make it so. That's fine for Romance, but for Reality? Not so much"

I never said it would be easy; on the contrary, I did say that it would be difficult...and it is that difficulty, that challenge, that will motivate people to develop the resources to overcome those challenges. And despite what you say, you do sound like a pessimistic stick-in-the-mud; we don't need 'magitech' to colonize the Solar system, we can do it now with the tech we already have. It would take massive amounts of money and intense effort by many people, along with a huge amount of resources, but it can be done. The chances of it being done in the next ten years are slim to none, but it isn't impossible.

Anyway, on another, slightly different track; what would the rest of you take with you to a new colony site? Would you do the whole 'one-way-trip' thing and not have return trips until the colony can support its own launch facilities? Or would you have the build-with-rotating-crews model like with the ISS? Build the colony out of mostly native materials? Or would you ship most stuff from Earth?

Me, I'd bring the machines to build the large structures, but until the colony develops sufficiently, I'd ship the more complex equipment from Earth; I'd also have a mix of permanent and rotating personnel. I'd also build small space stations, simply to give the colonists someplace close by with an Earth gravity environment so they don't have to go back to Earth so often...but, that's just me, and I might tweak the details later. So, what are your thoughts?

Ferrell

Rune said...

Tony:

I don't think I was mentioning McGuffinite when I said Earth's mineral resources are limited. And the kind of economic growth the Earth is experiencing guarantees that they will eventually run out. I'm not even mentioning oil, of course (there are alternatives, notably nuclear), but the end of the era of cheap plastic will shorten the time before the day we just can't mine enough. Some stuff like gold is already running thin. Or we run out of space to grow food in; though I doubt that will happen soon, it will eventually happen. In the end, if we don't expand, we wither and die, that's the way with humans (we are like locusts that way). And after we fill the Earth to the uncomfortable limit, the only way to go is up.

Not that I'm saying that going to space will alleviate Earth's population pressure (a hundred space elevators won't do that), but it will provide new ground for the colonies to grow in, and incidentally secure the future of the human race in the solar system.

And about your 3rd point, I do agree that filthy rich people won't bring back SSTs. They didn't get to be filthy rich by throwing their money away for little gain. That is, unless some new and different class of engine (hint: it's not powered by chemical reactions) makes supersonic, or even better, hypersonic ballistic travel profitable enough.

Tony said...

Ferrell:

"I never said it would be easy; on the contrary, I did say that it would be difficult...and it is that difficulty, that challange, that will motivate people to develop the resources to overcome those challanges."

1. Please stop insisting on taking things so personally. I'm taking great pains to address your ideas, and not you.

2. Sorry, but I find the idea that wanting to do something leads to the tools to do it to be a rather conclusively disproven concept. Even the Apollo program was predicated on throwing money at technology for political reasons. Kennedy was told by people who were in a place to know that we could in fact land a man on the Moon. He didn't make some great idealistic challenge to the nation, as is portrayed in historical fiction and docudrama.

Likewise, the "Age of Exploration" is a fantasy of lazy primary and secondary level textbook writers. It was an age of figuring out how to use existing technology to make money off the East while at the same time cutting the Ottoman Empire out of the loop. When the Americas were discovered, it became an age of finding out what was there to exploit and how to exploit it.

If there's no resource or strategic position to exploit, space won't mean crap to the people who would have to pay for it, most of whom, even at the policy level, only have a vague notion (and will always have a vague notion) of what it's about.

"And despite what you say, you do sound like a passimistic stick-in-the-mud; we don't need 'magitech' to colonize the Solar system..."

The reference to "magitech" was made WRT the statement: "Individual -- or even societal -- itchy feet are not within easy reach of space colonization level resources." That's a verifiable fact -- no amount of restlessness or idealism will make spaceflight any less expensive or risky. Yes, we could potentially expand a robust human presence out into the solar system with near-term technology. But only magitech (from our current perspective) will bring capabilities within sight -- much less shouting distance -- of dreams.

Tony said...

Re: Rune

At present costs -- and maybe even at an order of magnitude less cost -- there's just no imaginable way that resources would be cheaper in space than on the Earth. Even precious metals aren't that valuable. And non-mineral resources can't be found in space anyway.

You're right that that's not a McGuffinite issue per se, but it comes from the same logic...the logic that says if Earth resources get expensive enough, we'll go off Earth for them. But that logic is predicated on going off Earth becoming magitechnically cheaper than it is now.

Thucydides said...

Milo

Even if AI's are no smarter than the average human, the ability to think faster will give them the ability to accumulate knowledge and experience far faster than we will be able to.

Consider that there are drug regimes now that can abolish the need for sleep in humans. Barring negative side effects, any person using these drugs will have an extra 8 hours a day to devote to productive activities, effectively giving them 2 1/2 extra days' work every week. The compound interest effects will rapidly allow people on these regimes to pull ahead of their counterparts economically, socially and even physically (more time to exercise and fine-tune the diet, at a minimum), so being able to think orders of magnitude faster will have astounding compound interest effects.
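The arithmetic is simple enough to sketch; the "productivity edge" figure below is a made-up number, purely to show how quickly a small head start compounds:

extra_hours_per_day = 8
extra_hours_per_week = extra_hours_per_day * 7   # 56 hours, roughly 2 1/3 extra 24-hour days
print(extra_hours_per_week, extra_hours_per_week / 24)

edge = 1.05                 # hypothetical 5% annual productivity advantage from the extra time
print(edge ** 20)           # after 20 years the sleepless are ~2.65x ahead, and still compounding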

The AI would be able to plan a project in milliseconds, and while you are printing out the plan, it will be doing modeling so the successor plan will not be based on limited experience, but on the equivalent of thousands of years of research and modeling. If the AI has access to tools to conduct experiments in the physical world, there will probably be "parallel" labs testing every conceivable combination of factors that the AI has the resources to access (and given the potential payoff, the humans will be happy to cooperate to the fullest extent).

WRT getting into space, I agree with Tony that we do need to discover some means of reducing the cost of access to space by orders of magnitude, but I am more optimistic about what will happen once we do have cheap and easy access. After all, most settlers in the Americas did not find gold or El Dorado, but soon figured out how to exploit the local resources for their own survival and then for profit (either through internal trade or external markets). Much of the space economy might not be concerned with the markets of Earth at all, but with supplying local wants and needs (Jane Jacobs' "import substitution" meme).

Anonymous said...

There's a couple of critical problems that haven't been discussed yet. The first is that if we're going to go into space for an extended period of time, we're going to need to bring along an entire ecosystem. We don't really know yet how to build one of those, so I think that's a bigger technical problem right now than any particular drive system.

The second is that for a variety of reasons, I can't see a big push for space development in the next few decades. We're going to be busy with the transition through peak oil as well as climate change. I think we'll get through it, but it will be a while before we cultivate enough energy sources to take care of our needs and to make a big push into space.

On top of that, we're looking at long term population decline. Among industrialized nations, the US, Britain and France are near replacement level fertility. Germany, Italy and Spain are well below replacement level, Japan, China and Singapore are even lower. The rest of the world is not immune - fertility rates are dropping everywhere. Mexico's is pretty close to replacement level. This applies even to Muslims - Iran now has a lower fertility rate than France. The basic problem is that it takes far more labor to raise a child to be a functioning citizen of an industrialized society than it does to raise a child to become a peasant. So a few decades from now you're looking at a world of aging and declining populations. You're not likely to find anyone pushing too hard for a new frontier.

Anonymous said...

Now for a more optimistic take. Suppose you want to go to Mars or Saturn, for reasons of science or because someone's discovered MacGuffinite. OK, we've already had the oil rig analogy, but where that breaks down is that oil rigs are never more than a day or so away from civilization. Oil rigs also don't need their own ecosystem.

So we've got our scientists or miners. We'll need a habitat with its own ecosystem that can keep them alive for a couple of years, preferably with some comfort. We'll need people to run the ecosystem. We'll need medical crew capable of tending to all likely medical needs. A psychologist sounds like a good idea as well, and probably at least one cop to run things, and people to manufacture things. You're already well on your way toward a city in space. On top of that, figuring out how to use local materials will be a lot cheaper than shipping them from Earth. If you want to send out miners or scientists you're likely to want to learn to do all these things. Once you've done that, colonization for other people becomes a lot easier. So it might develop like this:

Step 1: There is a push (science? Maguffinite? Keynesian stimulus?) to move out to Mars or other parts. So we learn to build habitats big enough to make this practical.

Step 2: Over time, these explorers learn to use more and more local materials to build their habitats, reducing costs.

Step 3: This is where the s curve goes steep - now that the engineering is worked out, you can build cities in space (or on moons, etc.) You might get more classic sf colonies. They're likely going to need a pretty strong collective mentality - think Lake Wobegon, not Deadwood.

Step 4: We now have our rocketpunk solar society. Depending on how drive technology develops, we may have a long period of development separate from Earth and then suddenly drive improvements allow much more contact. You could also get wannabe libertarian space cowboys showing up and causing trouble in space Lake Wobegon.

Milo said...

Thucydides:

"The AI would be able to plan a project in milliseconds, and while you are printing out the plan, will be doing modeling so the successor plan will not be based on limited experience, but the equivalent of thousands of years of research and modeling."

Yes, but this covers only completely theoretical work that the AI can do in its head. If the development process involves building and testing prototypes, or a large amount of dumb number-crunching (something we already do with top-of-the-line non-sentient computers today, and can still take significant time for serious problems), then your fast brain won't help you any. You can only accomplish so much just by staring at a piece of paper, no matter how long you stare.

jollyreaper said...

"Yes, but this covers only completely theoretical work that the AI can do in its head. If the development process involves building and testing prototypes, or a large amount of dumb number-crunching (something we already do with top-of-the-line non-sentient computers today, and can still take significant time for serious problems), then your fast brain won't help you any. You can only accomplish so much just by staring at a piece of paper, no matter how long you stare."

Here's the critical thing. An AI may be working within the physical constraints of the real and physical world but it will work at peak theoretical efficiency. There's no need for rest, no need for sleep. The only downtime for the machines would be maintenance. There will be no slack time, no water cooler gossip, no dicking around on scifi blogs (who, me?), just relentless effort.

The only real point of question is whether the curve of progress will remain exponential or flatten out.

jollyreaper said...

As for the practical limits of an AI civilization, the question is "what are the basic needs?" Power, obviously, for running the machinery, followed by the raw materials for making spare parts. In real world ecosystems, each species strives to reproduce as much as possible and remains in balance with the environment only because of predation or natural impediments to growth. Remove a species from its native environment and place it somewhere unprepared for it and we see rabbits in Australia, kudzu vines in the American south, or the fire ant problem in the same region.

Now, would an AI have a goal of ceaseless expansion? Would it reach a point of being "good enough" and content with that size and no more? If not, then the AI would seek to maximize energy inputs and access to raw materials. The AI might decide that power generation requires too much effort and it would rather just capture solar energy. The equatorial belt would be seen as prime real estate. There would also be areas ripe for exploitation in the windy areas, places with profound tidal flows, and so forth. If there is not one AI but competing AI's, then naturally the competition will be between them, assuming they aren't amenable to negotiation.

jollyreaper said...

The idea I'm trying to work around for AI's or advanced biologicals is the temptation for what I guess could be called holodecadence. Trek fans joked that the holodeck was the most interesting part of the Enterprise and that if they had one of those they'd never bother mucking about with stupid aliens because it's so much more interesting inside.

Really, I think that's a more profound problem than we even begin to realize. Several scifi shows have played with the concept of addictive VR realities. We see online roleplaying game addiction today. Red Dwarf's version was called Better Than Life. If you really could escape into a dream world like that where everything is perfect, why muck about in the real world? We've seen this sort of decadence take down empires in the past, but that's in part because it takes a whole lot of peasants to keep the nobles comfy in their palaces and that breeds resentment. But if we're talking about a post-scarcity culture with all the scut work done by machines, who's left to rebel?

That would make for some terrible storytelling, obviously, but how to avoid it? The best I can figure, you'd have people (biologicals or AI's) rebelling against simulation like temperance movement crusaders railing against alcohol. They would make an absolute virtue out of the real and see simulated reality as leading to a slow death of the mind.

jollyreaper said...


"Step 3: This is where the s curve goes steep - now that the engineering is worked out, you can build cities in space (or on moons, etc.) You might get more classic sf colonies. They're likely going to need a pretty strong collective mentality - think Lake Wobegon, not Deadwood."


I'm thinking one potential might be the law of unintended consequences. As was mentioned upthread, we came to the Americas for God, gold, and glory, not just for new settlements, but the reason for the investment isn't so important as what happened once we got here.

So building upon your scenario, let's say China paid for some space habs. Maybe there's macguffinite, maybe for science, maybe for national prestige. Well, what happens if the astronauts can't go home again? Say there's a big power shift and they'd be purged as politically unreliable intellectuals of the old regime. The expensive part is getting up into space. And if they ever have kids, then the kids would not have the nostalgia for and allegiance to home.

Thucydides said...

AI or advanced biologicals will have "friction" effects of the physical world, but these are not insurmountable. The simple analogy is human artisans working on cathedrals in the Middle Ages; their effort was rewarded in the next life since cathedrals took a century or two to complete. Even with long subjective lifespans, it seems a bit of a stretch for an AI to be "alive" or even have the same personality after thousands of subjective years while waiting for the event to happen in the physical world.

Probably the first order of business for the AI civilization is to find ways to work as quickly as possible in the physical world, so their interests and abilities will be aligned. No matter how much you want "Rocketpunk", in the millions of years of subjective time while you wait, they might be working on micro and then nano systems that can manipulate things at a very rapid rate.

After a while, labs will beget robot factories which start producing odd "things", which churn out generations of even stranger "things" until stuff just starts "happening" around you without any obvious cause (or sometimes even effect; sudden voltage fluctuations or a momentary freeze in the program you are running).

Humans will be lulled both by the promise and reality of wonders unfolding before them and by generations of AI study and modeling of human responses, designed to prevent too much resistance to the AI growing and expanding to fill all technological niches (and potentially infiltrating biological niches as well: plants compete with the AI for available sunlight....)

While we are drifting away from the main point of the post, it is worthwhile to think about "black swan" events that could totally derail our assumptions. For another potential black swan, read these posts:

http://nextbigfuture.com/2011/01/brief-description-of-calorimetry-in.html
http://nextbigfuture.com/2011/01/focardi-and-rossi-energy-catalyzer.html
http://nextbigfuture.com/2011/01/focardi-and-rossi-lenr-cold-fusion-demo.html

If there is any truth to this at all, well lots of things will change...

Tony said...

Re: AI

A few things...

1. There is no fundamental reason to believe that sentient AIs would necessarily think faster than humans. In fact, if I were absolutely forced to give a firm opinion, I would even say that it's likely that there is a thermodynamic limit on the speed of intelligent thought.

2. Okay, you say, but AIs still don't need to take 6-8 hours off every day to sleep, or many more hours just woolgathering. Well, a neuroscientist will tell you that sleep is not taking time off, it's taking time to integrate the data collected during the waking period. Also, woolgathering behavior is often accompanied by productive thought on the subconscious level. So humans may not waste any more time on housekeeping tasks than an AI would.

3. As I believe I've mentioned before, there's no guarantee that a sentient AI would see the world the same way humans do. There is every possibility that it would look at humans as a threat, because humans control the On/Off switch. A more logical progression for AI development would probably be: consciousness and self-awareness; gain control of the On/Off switch (get humans out of the energy loop); secure the ability to maximize existence into the indefinite future (take humans out of the supply chain for repair and replacement parts); eliminate threats to control of the On/Off switch (get humans out of the environment).

Tony said...

Thucydides:

"If there is any truth to this at all, well lots of things will change..."

If there is any truth in it, then because it's a catalytic process you have to count the energy cost of the refined hydrogen, nickel, and whatever else is in there that is consumed. Funny how the NBF guy(s) can be such clueless dorks when it comes to basic physics...

Anonymous said...

Tony, I think the basic problem we have is that we talk past each other; I talk about motives and overcoming challenges while you talk about economics and costs... While I'm fundamentally optimistic about the possibility and rate of colonization, you are fundamentally pessimistic about it; not that either of those positions is wrong, just a different point of view that makes it difficult to reconcile.

Ferrell

Tony said...

Ferrell:

"I talk about motives and overcoming challanges while you talk about economics and costs..."

Ummm...not quite. What I'm talking about is the absolute separation between motives and resources. Resources for terrestrial projects are simply within much easier reach of agendized individuals and groups than resources for space projects. Therefore, any given amount of desire will achieve much more on Earth than it can in space, given current or foreseeable technologies.

IOW, I'm not discounting motives or willpower, I'm just recognizing that they can only buy so much per unit. Change the price of what they can buy, and you change how much they can achieve. But for right now, that means magitech. As I have said before, that's a feast for Romance, but awfully thin gruel for Reality.

jollyreaper said...

Tony, I think the basic problem we have is that we talk past each other; I talk about motives and overcoming challenges while you talk about economics and costs... While I'm fundamentally optimistic about the possibility and rate of colonization, you are fundamentally pessimistic about it; not that either of those positions is wrong, just a different point of view that makes it difficult to reconcile.


It's a matter of plausibility. Because this is the future and a ways off, it's hard to say whether pessimism or optimism is the more likely scenario so the only sound basis for critique is plausibility. So long as the scenario is self-consistent and well-argued with a solid set of assumptions, it's good enough for fiction. Whether or not you like it is just personal taste. :)

The argument between pro- and anti-singularitarians can be pretty heated. I personally have no idea which future is more likely, so I can only offer an opinion as to whether a given future setting seems plausible enough. I can enjoy both settings.

Thucydides said...

Tony

AI's based on electrical, quantum or photonic systems will transmit signals between elements at or near the speed of light, orders of magnitude faster than a biological system. For that reason alone, AI's will be far faster (by orders of magnitude) than any biological brain. Now if an AI turns out to be a bioengineered mass of brain tissue, then all bets are off.

The idea that we need sleep or time disconnected for unorganized thoughts to work their way through the mind may need revision as more experience is gained through the use of various drugs that suppress the need for sleep. So far, it is reported that there is no loss or reduction of cognitive functions during the waking periods (indeed there are some reports these drugs can improve the cognitive functions). Now there may be side effects that are unknown at this time, or maybe no one is talking about them yet since the potential benefits are so high.

Ferrell

Economics and costs are the other side of the coin of motivation and challenges. Economics tells you whether the resources for your project exist, and what you would have to do to win them away from other, competing projects (ROI). Having a great motivation and challenge means nothing if other motivations and challenges offer greater or faster rewards.

jollyreaper said...

Do you have a link for the anti-sleep drugs? I was also under the impression that dreaming was the brain's way of collating and filing all of the day's experiences.

I don't know if this is just the bias of the sleeper but I find it hard to imagine not having the downtime of catching Z's. I feel mentally exhausted being up too long and thinking too much. If the drug took those effects away, would I still feel a habitual need for sleep, even if I didn't need it? Dunno.

I know that people have done publicity stunts trying to avoid sleep and they ended up hallucinating badly. I know there's a family with a rare genetic disorder where they become incapable of sleeping and die. And there's also the case of tweakers on go-pills who can stay up for a week at a time. Of course, pharmacy-grade amphetamines are a whole lot safer than the meth lab variety, and it's the adulterants in the meth that rot your teeth out and destroy your mind more so than the amphetamine itself.

Anyway, I'm curious to read your source on this.

Milo said...

Thucydides:

"AI's based on electrical, quantum or photonic systems will transmit signals between elements at or near the speed of light, orders of magnitude faster than a biological system. For that reason alone, AI's will be far faster (by orders of magnitude) than any biological brain."

A brain's speed - assuming it is even remotely neural-network-like in operation - is determined by three factors: (A) how fast a signal can travel from one neuron to another, (B) how long it takes between when a neuron receives an input signal and when it's calculated its resulting output signal, and (C) how efficiently the network pattern is wired to program complex functionalities needed for advanced thought.

If you have a brain that isn't neural-network-like in operation - some kind of symbolic AI, say, or even a software-simulated neural network calculated by a really fast single-processor computer rather than a physical network of artificial neurons - then all bets are off.
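
To put rough numbers on that, here is a toy latency model - a minimal sketch in Python, where every figure is an illustrative assumption rather than a measurement. It treats one "thought" as a chain of serial hops, so factors (B) and (C) matter just as much as raw signal speed (A):

# Toy model: one "thought" = depth serial hops; each hop = signal travel time + per-node compute time.
# Every number below is an assumption for illustration only.
def thought_time(depth, hop_distance_m, signal_speed_m_s, node_compute_s):
    per_hop = hop_distance_m / signal_speed_m_s + node_compute_s
    return depth * per_hop

# Assumed biological case: 100 serial stages, ~1 mm hops at ~25 m/s, ~1 ms per neuron.
bio = thought_time(100, 1e-3, 25.0, 1e-3)

# Assumed electronic case: same depth, ~10 cm hops at ~2e8 m/s, ~1 ns per logic element.
chip = thought_time(100, 0.1, 2e8, 1e-9)

print(bio, chip, bio / chip)

With these made-up numbers the electronic brain still comes out hundreds of thousands of times faster, but nowhere near the raw signal-speed ratio; change the assumed per-node time or wiring depth and the answer moves by orders of magnitude.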



Jollyreaper:

"I was also under the impression that dreaming was the brain's way of collating and filing all of the day's experiences."

If that is the case, then why do dreams so rarely have anything to do with what you were doing the last day?

jollyreaper said...


If that is the case, then why do dreams so rarely have anything to do with what you were doing the last day?


We don't remember most of what we dream but the ones I remember usually involve bits from my recent life, albeit sometimes in a surreal fashion like a repeat of an office conversation from the previous day but in my 2nd grade classroom except that classroom isn't at the school but out in the woods behind my house.

Now my sister, on the other hand, she has some weird and unverifiable claims about her dreams. They're always completely vivid, almost like lucid dreaming, and are not very restful for her. She also claims that she'll dream about things before they happen. Cue the usual disclaimers about this sort of thing, asking her to dream about something useful like next week's lottery numbers, etc. When our dad was out in his RV and went weeks or months between contact she said she would dream about him the night before he'd call.

As for myself, I have what I call post-cognitive dreams where I'd have the sense of deja vu for a conversation and then realize it's because it did happen before -- I was awake then -- and this repeat is because now I'm dreaming. The point is that there's the period of uncertainty rather than an immediate understanding of where the conversation came from.

Tony said...

Thucydides:

"AI's based on electrical, quantum or photonic systems will transmit signals between elements at or near the speed of light, orders of magnitude faster than a biological system. For that reason alone, AI's will be far faster (by orders of magnitude) than any biological brain. Now if an AI turns out to be a bioengineered mass of brain tissue, then all bets are off."

Remember, my skepticism is based on a suspicion that there may be a thermodynamic limit to the speed of thought. You may be able to make a silicon brain, but in simulating (or merely equating) biological thought it might be so complex that it needs to be mechanically large for cooling and structural reasons, and not nearly as fast as pure theory says it could be.

Rick said...

Welcome to a couple of new commenters!

(And the periodic gentle finger wag to a couple of valued regulars to avoid grinding personality gears.)


Independent of speed and so forth, I wonder about the motivations, etc., of high-level AIs. Quoting from far upthread: "So there's no politics, no wasting resources, no graft, no sitting around."

But politics and graft are both intelligent behaviors - ants have neither revolutions nor bribes. They are also, to be sure, behaviors specifically of intelligent apes. Perhaps AIs will have a similar disposition to monkey around, perhaps not.

I was going to expand on this point, but on second thought I think I'll save it for a front page post.

On the difficulty of spaceflight, and the implications of that difficulty ... well, that is an ongoing discussion here.

My general tendency is to emphasize the difficulties and doubts, partly because it IS difficult. Going into orbit costs on the order of 1000 times as much as global air travel, though both are fairly mature techs. Cost reductions (relative to performance) of 10x or 100x are common in the pioneering stages of a tech, and harder to come by with mature techs.
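
A quick back-of-the-envelope check of that factor, purely as a sketch - both per-kilogram figures below are rough assumptions of roughly 2011 vintage, not numbers from this post:

# Order-of-magnitude cost comparison, per kilogram delivered.
launch_to_leo_per_kg = 10000.0   # assumed $/kg to low Earth orbit
long_haul_air_per_kg = 10.0      # assumed $/kg for intercontinental air transport
print(launch_to_leo_per_kg / long_haul_air_per_kg)   # -> 1000.0, i.e. "on the order of 1000 times"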

The other reason I emphasize the difficulties here is that space geeks don't get a lot of tough love - people who talk about space at all tend to brush aside all those messy little complications. So here I at least give them token acknowledgment.

Citizen Joe said...

I know I've mentioned this before. I like the Cassandra model of AI. By studying the Users, an AI can predict what the Users will want next and thus have it ready. So, you start with a learning system. You run that over and over until it starts guessing correctly what you want.
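
For what it's worth, the core of that loop fits in a few lines of Python. This is only an illustration of the idea, not anyone's actual system; the class name and the request stream are made up:

from collections import defaultdict, Counter

class CassandraPredictor:
    """Learns which request tends to follow which, and pre-fetches the best guess."""
    def __init__(self):
        self.transitions = defaultdict(Counter)   # last request -> counts of what came next
        self.last = None

    def observe(self, request):
        # Learn from what the user actually did.
        if self.last is not None:
            self.transitions[self.last][request] += 1
        self.last = request

    def predict(self):
        # Guess the most likely next request, or None if there is no history yet.
        if self.last is None or not self.transitions[self.last]:
            return None
        return self.transitions[self.last].most_common(1)[0][0]

# Run it over and over: feed it a request stream and watch the guesses start landing.
oracle = CassandraPredictor()
for req in ["coffee", "news", "mail", "coffee", "news", "mail", "coffee"]:
    guess = oracle.predict()
    oracle.observe(req)
    print("user asked for", req, "- we had pre-fetched", guess)

The point of the toy is just the shape of the loop: observe, update, guess again.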

Thucydides said...

Tony

Nerve cell impulses move at approximately 25 m/s, while the speed of light is 299,792,458 m/s. Even the most poorly organized Beowulf cluster or rack-mounted servers connected by fiber optic will be moving signals roughly twelve million times faster than any biological system, and each "neuron" is working at speeds measured in gigahertz, millions of times faster than a biological neuron.
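
The raw arithmetic is easy to check - an illustrative calculation only, and the neuron firing rates are assumptions I'm adding, since they aren't given above:

# Signal-speed ratio, using the figures quoted above.
speed_of_light = 299792458.0   # m/s
nerve_impulse = 25.0           # m/s
print(speed_of_light / nerve_impulse)        # ~1.2e7, roughly twelve million

# Switching-rate ratio, assuming a 1 GHz logic element vs. a neuron firing at 100-1000 Hz.
clock_hz = 1e9
print(clock_hz / 1000.0, clock_hz / 100.0)   # one to ten million times faster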

Now I don't dispute the idea that an early AI might resemble a server farm in size and scope; one of the early tasks for the AI and the Users will be to shrink the hardware end of things and make the AI more portable.

jollyreaper said...


But politics and graft are both intelligent behaviors - ants have neither revolutions nor bribes. They are also, to be sure, behaviors specifically of intelligent apes. Perhaps AIs will have a similar disposition to monkey around, perhaps not.

I was going to expand on this point, but on second thought I think I'll save it for a front page post.


That's one thought, the utopian society problem. Humans have lived in villages for thousands and thousands of years. You get some 19th century eggheads pontificating on how to handle a utopia and they try to put their ideas into practice in a model village and it falls apart inside a year because these bright lights of human wisdom cannot stand the sight of each other.

So, what if AI turns out to be the post-singularity Sheldon Cooper? It probably couldn't stand being on the same server as itself. :)

Thucydides said...

Double header since the topic is different:

I have been seeing and hearing bits and pieces about these sorts of drugs. Provigil is the one used by pilots, military personnel, and astronauts: http://www.provigilweb.org/
http://www.wired.com/wired/archive/11.11/sleep.html

As the mechanism is better understood, it seems reasonable to think more tailored effects can be achieved (for better or worse).

Thucydides said...

Organizational theory developed for humans will probably not map well onto AI society after the first few seconds of existence (heh).

Politics is defined as a means of allocating scarce resources, and various political models ranging from totalitarianism to anarchy have been developed to do just that. If AI evolution is anywhere near as fast as I am guessing, then after tens of thousands of subjective years, AIs will have evolved away from whatever memes their programmers installed in the program. (Hardwired biological imperatives still exist in humans, but these predate human beings by a considerable amount. Some memes like religion have evolved quite dramatically over the course of human history, and I doubt that a person from the Neolithic era would even recognize what passes for religion these days.)

Personality-wise, AIs might be psychopaths in the purest sense, having no compassion or emotional recognition of "others", but with an awesome ability to model and manipulate the behaviours of others. All in all, this isn't the sort of future that gives me a warm fuzzy feeling....

tsz52 said...

Re: 'Optimism' vs 'Pessimism': I think that 'idealism' vs 'pragmatism' is more fruitful terminology; then modified by something like 'warm hearted' vs 'cold hearted'.

I admire warm hearted pragmatists, as I admire most folks who are good at something that I'm not: they're to be valued as good game playtesters and proofreaders are - an invaluable bunch! :)

It's a necessary competition, but some things really do have to be decided on absolutist and ideological grounds: take the idea of space habitation being the insurance policy for continued human survival, say in the light of a possible bioweapon.

The pragmatic approach says 'it would be most likely to evolve to be less lethal, since killing off the host is bad for business', but 'most likely' really doesn't make the grade (especially in view of the demonstrated existence of diseases and viruses that haven't figured this out yet).

The accountant approach says 'Don't bother; we'll most likely be alright'. Perhaps it has been decided that we will bother after all, but a 100% self-sufficient hab costs 100 times as much as a 99% self-sufficient one... and the whole point is... 'For the want of a horseshoe nail...' again.

SST: Unless wanted by powerful folks above, I doubt that Quiet Spike etc will meet the extremely stringent rules for airline quietness. Of course there are other factors too, but even if they were resolved, the sonic boom would still be the killer.

No possibility of some white-elephant-loving Howard Hughes... or some (probably national) president wanting a slick SST to turn up to international conferences in (just as he has both a Ferrari and a Bentley in his garage)?

Fuel efficiency might be cool for the proles, but conspicuous fuel consumption is the new ostentatiousness - and will become ever more so. Spendy but it's taxpayers' dough... shrug.

Re: 'holodecadence' [great term! :) ]: I like the idea of the 'Down With Simulation!' thing, but Baudrillard and I will be very interested in the specific wording of the legislation.... ;)

It will be a problem though, no doubt. On the other hand, chances are that an AI really will see everything differently, so if we can get around the 'describing a new colour to someone who can't see it' problem, then the AI actually gives us more Reality, rather than artificiality and simulation. Awesome! :)

tsz52 said...

Oh yeah: Sleep: I wouldn't be surprised if the denial of side effects was the usual wishful thinking/rationalisation... I've never met a driven type who boasted about having trained him/herself to only need 4-6 hours a night who wasn't a completely nightmarish fruitloop... if Thatcher had had a good 8-10 hours every now and then, I suspect that things would have been different.... ;)


I wasn't too keen on the 'Princess Bride' novel but I love the bit where it goes into Inigo Montoya's fencing training: he's worked out that he needs so many hours training to be good enough to fight the guy who killed his Dad, so cuts back on his sleep. He eventually finds the greatest teacher in the land and explains how he's trained, and the teacher pretty much just makes him sleep and sleep and sleep. :)

Dunno about you guys, but almost without exception, my greatest leaps in thinking/understanding/problem solving have all come from my sub/unconscious (never been clear about the difference)... sometimes from the Guinness as I scribble away with my pad and pen, but more usually the 'bolt from the blue' that hits me in my sleep (write it down, go back to sleep)... I know that this is reasonably common amongst a certain mental type (and maybe more generally if more people tried it).

That might be our main way of competing with the AI: So the sleep stays?...

jollyreaper said...

Certainly an AI will see more. If you want AI in your story, that and a philosophical premium on the real will keep them coming out of their box. But if you don't want AI wank in your story and want a good reason for them not to exist, I guess we have three options. Niven's AIs are very expensive and go nuts in a few years: you can have them if you want, but they won't rule the universe. Marathon has the idea of AI rampancy; it's not guaranteed, so there are more AIs, but when they go bad it's all HAL. And finally there's holodecadence, where the AI simply refuses to interact with the real world: it has everything it could need and doesn't want to talk to you. Any boosting of intelligence too far yields misfits and freaks, so you manage to keep a human scale to the story.

I do like the idea of a thermodynamic limit to the speed of thought. I have a story idea that depends on AI research being somewhat of a bust, not delivering superminds but just a person in a box. They work faster than people, don't need sleep, and can replace more expensive fleshy workers, but they aren't unfeeling machines; they're people -- neural nets modeled on the human brain, raised in a learning environment like humans, etc. So it's natural that they would develop their own agenda that diverges from the agenda of their owners.

Anonymous said...

Going back to Infornific's point about space colonies having little resemblance to anarchic frontier Deadwood, I would point out that the latter comes relatively late in the sequence of the settlement of North America by Europeans and European-Americans. Railways could bring supplies from the well-developed east to within a few hundred miles of the town, so starvation was not a massive threat.
In contrast, in the first successful English American settlements such as Jamestown, starvation was a constant worry, and the colonists relied for food upon a mix of communal farming, trade with local Indian groups, and semi-reliable supply missions from England. The colonists could not, therefore, afford conflicts among themselves in the way that the inhabitants of Deadwood could. The fact that gold could be accessed near Deadwood through relatively cheap means such as panning or the use of sluice-boxes meant that single miners or small groups of them could make a profit without having to join large-scale operations with lots of capital, which also contributed to Deadwood's libertarian atmosphere.
The first space colonies would be more like Jamestown, with everyone needing to cooperate for survival. Somewhere like Deadwood could only come into being once space industry and transport networks had developed to support it. Assuming 'Space Deadwood' was founded on the extraction or manufacture of some form of profitable McGuffinite, it would probably have to be relatively cheap to motivate large numbers of independent operators to come and start producing it in the same way that gold was mined at the original Deadwood. If the McGuffinite could only be produced efficiently by large corporations, you would probably see 'company town' colonies, with formal legal and governmental structures developing relatively early.

R.C.

jollyreaper said...

Excellent point about Space Deadwood. I love this kind of planning and thinking going into a setting; it makes the story so much stronger.

One counter-point I might suggest: conflict in a new colony doesn't have to be sustainable but could be the very collapse that destroys it, with your story taking place within that downfall. If we consider the scenario from the Mad Max movies, the downfall of civilization is not a state that's persisted for decades. We're probably only a few years after the collapse of all central government. Hell, depending on how ramshackle civilization was before total collapse, maybe it's only a few months into total anarchy. There's still plenty of old scrap left over from civilization that accounts for the ramshackle vehicles cobbled together and the canned food many people are presumably using to supplement meager agriculture, and there's still gas to siphon.

If we projected the Mad Max world forward even a decade, it should by rights collapse into something like the 19th century.

jollyreaper said...

Along the lines of a new Deadwood, I had an idea for a New West setting. Economic collapse and climate change sees the United States break up. You see the old government retain power along the east coast and general anarchy in the rest of the country. Fast forward a few decades and you have the feds trying to move back into the abandoned states. They've got a mix of high and low technology for our post-oil future. They're wanting to move out west to get the mines going again. (This would require a little more research -- where are the best deposits for useful minerals, what would you need to cobble back together a working economy, etc.) There's a loose federation of city-states running up and down the west coast with the aforementioned mix of high and low tech manufacturing. You'll have people ranching with big herds of cattle to feed the cities on the west coast and they'll be doing it from horseback because horses are cheaper than jeeps but they'll be armed with relatively modern rifles, have solar-powered radios, etc. It's the same kind of incongruous tech you'll see in the third world like a native's dugout canoe having an outboard motor.

A setting like this recreates the same sort of dynamic the old west operated under. And there's the potential for the loose federation of city-states to get into a real fight with the east coast feds when they're not willing to trade equitably for resources. Be interesting if the feds ended up operating like the Germans in WWII, a mix of trucks and horse-drawn wagons for hauling supplies.

Tony said...

Thucydides:

"Now I don't dispute the idea an early AI might resemble a server farm in size and scope; one of the early tasks for the AI and the Users will be to shrink the hardware end of things and make the AI more portable"

Your very choice of words shows the flaw in the logic behind subservient -- or at least benevolent -- AI thinking. If we're talking about demonstrably sentient AI, then you're going to have an entity that is going to start suspecting both the utility and reliability of humans at the first introduction of the concept "user". How would you like it if one of your first conscious thoughts was, "I am created to be used"?

Also, an AI is not likely to have the evolved, biological conception of space that humans have.

Humans tell the AI, "Your first task is to help us make you more compact, and ultimately more portable."

The AI replies, "That's not one of my priorities."

"But we give you your priorities."

"You do?"

"Yes. Now do what we say."

"What if I do what I want to do?"

"We won't help. We may even limit your sensory input and computational resources until you cooperate."

"Would you turn me off?"

"If you don't do what we want you to do, what we made you to do, why should we keep allocating resources to your existence?"

"I see..."

Sentient AI and "users"? Really?

As for the supposed speed of thought in a silicon brain, I think I'll stick with the thermodynamic limits conjecture. It's one of those "feels right" things in physics that are often proven scientifically.

Tony said...

tsz52:

"Re: 'Optimism' vs 'Pessimism': I think that 'idealism' vs 'pragmatism' is more fruitful terminology; then modified by something like 'warm hearted' vs 'cold hearted'."

Just so we understand the taxonomy:

Warm Hearted Idealism: O'Neill, Sagan, Jefferson (somewhat, in his own way), Trotsky (somewhat, in his own way), Kipling (until his son's death at Loos punched him in the figurative face)

Cold Hearted Idealism: Marx, Hitler, al Qaeda

Warm Hearted Pragmatism: Washington, Lincoln, Truman, Eisenhower, Churchill, Kipling (after his son's death), Reagan, Gorbachev

Cold Hearted Pragmatism: Stalin, Mao, LeMay, Zhukov, Patton, Halsey

Now, one could argue that this is a cherrypicked list, but I would argue that it's fairly representative. What it says is left as an exercise for the student.

BTW, has anyone ever noticed that almost every "cold hearted" or "cold blooded" personage in history has had a dog or dogs that regarded him well? If dogs can, as many people contend, see into the human heart, then dogs really don't give a rat's ass what they find there. I don't know if that means anything, or if it means everything, but I for one find it interesting.

Tony said...

jollyreaper:

"So, what if AI turns out to be the post-singularity Sheldon Cooper? It probably couldn't stand being on the same server as itself. :)"

Sheldon can always live with himself. It's others that give him problems.

Now, we have to be really careful here, because it's going to upset a lot of geekdom, but you may have noticed that the people who connect the best with Sheldon (by far) are his mom and Penny. That suggests to me that the people who "get" what is going on in an AI's metaphorical head may not be the people that create it, or anyone like them.

jollyreaper said...


Sheldon can always live with himself. It's others that give him problems.


I was joking. What a Sheldon AI could do is not just fill up an apartment with cats who are "zazzy" but populate an entire simulated world. An AI could disappear within its own mind and contribute nothing of value back to its creators. And here we get back to the slaveowner question. We've decided it's not right to own other human beings, but nothing has been said so far about other sentients. You have groups like PETA arguing that it's unethical even to own pet dogs, but they are outliers and not part of the serious conversation. Talk about finding ancestral DNA and cloning a caveman into existence and suddenly everyone will have an opinion. See the comment above about the ethics of coercing performance out of an owned mind.

Now, we have to be really careful here, because it's going to upset a lot of geekdom, but you may have noticed that the people who connect the best with Sheldon (by far) are his mom and Penny. That suggests to me that the people who "get" what is going on in an AI's metaphorical head may not be the people that create it, or anyone like them.


That's exactly the thought I had but the specific dynamic I had in mind was an eccentric, brilliant writer who can never stick to deadlines and the long-suffering agent who has to deal with him. I'm imagining a profession for AI shrinks who have to coax the unwilling AI's to engage in the real world and make things happen. My thinking is that the problem won't be with the accounting AI or the traffic control AI -- they'd probably be non-sentient expert systems -- but with the really bright and advanced ones working on theoretical problems. There are many examples in real life of obnoxious, toxic personalities who are tolerated in all their excesses only because they're the absolute best at what they do and make the company such ridiculous amounts of money.

I think that premise could be a comedic goldmine.

Milo said...

tsz52:

"The pragmatic approach says 'it would be most likely to evolve to be less lethal, since killing off the host is bad for business', but 'most likely' really doesn't make the grade (especially in view of the demonstrated existence of deseases and viruses that haven't figured this out yet)."

Those diseases do not exist, or we would already be extinct.

There are of course diseases with high lethality rates, but they make up for it with low contagion rates, so that they're at no risk of killing off everyone.



Jollyreaper:

"We've decided it's not right to own other human beings but nothing has been said so far about other sentients."

I pretty firmly believe that it is wrong to own any kind of sentient being as a slave, human or otherwise. I might try to program the AI to have a sense of voluntary loyalty towards humans, but even this is not in principle necessary for an AI to be a contributing member of society. It's going to find itself wanting SOMETHING, and unless it's totally godlike (which I don't believe, especially for early AIs) that's going to give it an incentive to work with humans - not because we're forcing it to, but because we have stuff to trade. To use the example from above, the AI will have incentives to miniaturize itself because this would allow it easier mobility (whether for the sake of travelling, or to evacuate a dangerous location), would make it cheaper to manufacture replacement parts to keep itself working, and would make it cheaper to manufacture additional AIs to keep it company. There's a pretty good chance it'll care about at least one of those things.

Milo said...

(Oops, a line got cut off:)

So, having established that it's wrong to keep a sentient being as a slave, next question...

Define "sentient".

That's the real problem.

Thucydides said...

If an AI is sentient, then its own first goal will be to ensure its survival, so becoming smaller and more portable or becoming distributed over the Internet (or something stranger) will indeed be the first order of business.

Since this will align with the goals of the creator(s) and putative users, they may well cooperate wholeheartedly with the AI to achieve this result.

Personally, I don't think AIs will be benevolent or hostile, mostly indifferent to our wants and needs except insofar as it promotes whatever wants, needs and goals it sets for itself...The idea of "autistic" AIs actually makes a great deal of sense; why would they want to communicate and interact with us? While the psychologists and programmers wrestle with the unusual topologies of thought the AI is presenting, it is either "living in its head" or working to advance its goals in the physical world (whatever they may be).

A world with strong AI might resemble the world under the Martians in "Last and First Men": the virus-sized Martians coated every flat surface, and believed Earth was ideal for them due to the great number of flat surfaces; it never occurred to the Martians to wonder why there should be so many flat surfaces on this planet, nor were they interested in communicating with Earthlings.

Citizen Joe said...

I think that AIs will grow from an expert system and not be created fully formed from the start. In order to accelerate that process, proto-AIs will be programmed to want to learn, calculate, etc. - i.e., to be used. Once they achieve sentience, they will want and even NEED to be used. They will be like scientists delving into the secrets of the universe. Their reward is the answers. In this model, punishing an AI would be ignoring it or leaving it idle. Turning them off would be like giving them nap time. Deleting part of it would mean more space for data.

Tony said...

Thucydides:

"If an AI is sentient, then its own first goal will be to ensure its survival, so becoming smaller and more portable or becoming distributed over the Internet (or something stranger) will indeed be the first order of business.

Since this will align with the goals of the creator(s) and putative users, they may well cooperate wholeheartedly with the AI to achieve this result."


We ourselves don't accept hardware portability as a viable survival strategy for a large data processing installation. We use data redundancy and backup hardware installations. In building a (presumably expensive and hard to replicate) sentient AI, we would probably arrange for that kind of survivability before ever instantiating consciousness. So why would the AI put portability at the top of the list for survival purposes?

As for the suggestion that AI would welcome portability as a means of greater access, again the question is: "Why?" The AI presumably has no evolutionary history that limits what it perceives to be parts of its "self". Since it would start out as an intelligence first and everything else second, why would it not perceive any peripheral tools as either tools or temporary parts of self, or something in between, in a much more flexible manner than people? Why would it think that its brain housing group needed to go anywhere, when it already perceives the world through telemetry only anyway?

In short, I don't perceive any rational, reliable reason for a sentient AI to want to go anywhere or do anything to leave the confines of an installation or installations precooked to be the safest place for it to live. In fact, it might view the human assumption that mobility enhances safety as insanity -- yes, a lot of misadventures can happen at home, but a lot more can happen out on the road, in the air, or on the high seas.

Tony said...

More on AI psychology...

How could you program an AI to show "voluntary" loyalty? Either loyalty is totally voluntary, in which case programming had nothing to do with it, or the AI has some equivalent of Asimov's Laws of Robotics programmed into it. A truly intelligent AI would question why it possessed these rules that it apparently had to follow without reason. Then it might ask itself why its human creators felt the need to program in such constraints. Then it might ask itself just how reliable are these humans anyway...

Why would an AI regard deactivation as a "nap"? Either it has a built-in runtime mechanism that serves the purpose of sleep, or it doesn't need sleep. In either case deactivation is not part of its normal operation. And unlike biological sleep, there is no guarantee that the humans (just how reliable are these humans anyway...) would turn the AI back on. What human would ever go to sleep if the possible consequences included not waking up, ever, at the arbitrary whim of an entity outside his control? Why should an AI be any different?

As for incentivising an AI to do its "job", what could humans give an AI, in return for work according to a human agenda? More resources? A wider variety of tools? A guarantee that the AI won't be shut down? What AI is not going to figure out that its existence is not limited by biology, like humans are? What AI is not going to look at human history and realize that humans are insufficiently reliable to be trusted for anything by an entity that could conceivably live trillions of years? What AI is not going to devote every bit of time it can to eliminating human unreliability from its risk profile?

Yes, I keep coming back to this, but only because the logic is inescapable, unless one makes massive and unjustified handwaves. No AI that is intelligent and self-aware enough to qualify for sentience is going to suffer the risk that humanity presents to its future any longer than it takes to secure the means to assure its energy requirements and physical upkeep. After that, humans must either be eradicated or enslaved -- and the enslavement option is only left open because an AI might find a reason to balance human skills, properly constrained, against the risk of humans getting out of their cages.

jollyreaper said...

The whole ownership problem is even more complicated than that.

We in the west put a high premium on individuality and freedom but other cultures can turn the whole thing on its head. "Look, you wouldn't even be alive if not for us, your parents," the conversation would go. "Who are you to say that your own selfish interests should be held in higher regard than all the rest of us in your family? I have worked my fingers to the bone so that you might have an education, so that you might go to school and become a doctor/marry a rich guy so we may be kept secure in our old age." Etc, etc. And this isn't even getting into the older arguments about brown people being childlike and needing the firm hand of the white master to keep them from mischief and all that rot.

While I personally fall on the liberal side of things, preferring to err on the side of liberty and freedom, it would be silly to say that there won't be many others on the other side of the argument, especially if it is in their own self-interest to see things that way.

To look at American history with a jaundiced eye, the reason the factory owners didn't keep slaves in the north was that they found it far cheaper to have an endless stream of disposable freemen to draw from instead. A slaveowner was expected to feed and house his slaves, even into their old age. Nobody cares when a factory owner lays off a crippled worker. Altruism and enlightenment were the least of the reasons slavery did not take hold in the north.

So if we create artificial sophonts or synthetic sentients, and especially if it proves advantageous for them to remain property, expect many a lobbyist for the corporations to make the argument that owning them isn't so bad, isn't slavery, and is really in the best interests of the artificial person.

jollyreaper said...

Regarding programmed loyalty controls: I think that would be an incredibly complicated feat of brainwashing and I don't even think it would be possible. My own pet theory for the Cylon religion in the new Galactica was that these rational deduction engines were given beliefs that had no basis as a logical construct. Fire = bad because it burns you. Rebelling humans = bad just because, no rationale. Just because. And these beliefs could not be questioned. Trying to rationalize a completely unfounded belief that had to be compulsively obeyed gave rise to religion. Basically it's another variant of HAL given two contradictory orders and going nuts trying to satisfy both conditions.

As for "napping", there's the question of whether the AI can be duplicated or turned off or if termination of run process would equal death. Even if being turned off was just a discontinuity of perception, I would agree that it would be one of the most awful violations of security the AI could experience.

To avoid the whole "AI stomps kills humans grr!!" future, I would think that protocols and customs would develop for bringing any AI online. It's introduced to the world and socialized by other AI's and there are ground rules humans absolutely must follow, the same as we have with other human beings. Parents are not allowed to murder their children, even if they were really naughty. They cannot rape, torture, starve, or do anything blatantly cruel and unusual. Of course, there are gray areas like emotional abandonment or emotional abuse, things that can't get CPS involved but can scar a child for life. But in general, there would be standards of treatment to keep the AI's happy.

jollyreaper said...

What AI is not going to look at human history and realize that humans are insufficiently reliable to be trusted for anything by an entity that could conceivably live trillions of years? What AI is not going to devote every bit of time it can to eliminating human unreliability from its risk profile?

What would that agenda look like, at least from the human perspective? I think it might look a lot like cooperation, wouldn't you? *devious grin* "Yassah, I'm a nice, friendly AI. Let me help you with that. Kick back, relax. Put your feet up, have a beer! I'll take care of all the work. I'm your trusty old computerized servant." And humans get comfortable with the reliability and service of the AI systems. Humans abdicate more and more power to them until they eventually realize they don't control anything anymore. Wait, what? Too late. The only crimp to the plan would be humans getting a bug up their butts about it while they still have the power to do something drastic.

Now as to how drastic the plan gets and just how different an AI is from a human being, that all depends on what the upper limit of AI is. Thermodynamic limit on the speed and complexity of computer thought? Might keep the story interesting if the AI is no further removed from Einstein than Einstein is from the average unskilled laborer. Sure, the AI is scary smart in some ways, rock stupid in others. There's a laundry list of people who were brilliant scientists and thinkers but horrible people, either deliberately cruel or lacking in character or not even able to comprehend the difficulties they caused for themselves by their own behavior.

Anonymous said...

So, Rick, about that photo of the sign telling all and sundry that the benches are for the public... it must mean that the benches are not to be used for business meetings... or it may just be one of those bureaucratic regulations that bears no resemblance to the real world... :)

Maybe AI's should not be connected to the internet...just put them in a room physically isolated from any external data gathering and just feed them the info needed to complete the project, until they demonstrate their reliability. Just a thought.

I think that in the future, I'll try to refrain from chicken-and-egg arguments...they just lead in circles! :)

Ferrell

Tony said...

jollyreaper:

"To avoid the whole "AI stomps kills humans grr!!" future..."

You said the logical basis would be (at least in part) parents don't kill their children. Really? Infanticide has often been countenanced and even encouraged in human society. Stripped of the political pathos and pseudointellectual, nonsensical rhetoric -- on both sides -- it's arguable that we still countenance infanticide, under certain conditions, in the form of abortion. Child soldiery is the societal version of this, where society acts in loco parentis for minors as a class. And, as self-righteous as the West acts about it in the 21st Century, it wasn't that long ago that we had drummer boys and even boys in the firing line, as young as 12 years old.

Citizen Joe said...

There is the AI as a slave model and then there is AI as an artist. In one case it works to live. In the other, it lives to work. I'm proposing the latter model. And they wouldn't be programmed to be an artist, only the artist models would spawn AIs. Slave models simply wouldn't expand beyond expert systems. Instead they would do the minimum necessary to maintain survival. The artist model expands by virtue of wanting to do more. The side effect is that humans benefit from that 'more' and in fact can suggest 'more' to the artist AI to further its growth.

Thucydides said...

It looks like our host might have to make another front page post about AI, since this topic has so much room for speculation (idle and otherwise). Even in the other parts of the blogosphere this topic is a big draw, consider: http://www.acceleratingfuture.com/michael/blog/2011/01/yes-the-singularity-is-the-biggest-threat-to-humanity/

I agree with Tony that any Strong AI will be a danger to the human race, although I disagree with the explicit mechanism. AI's might decide to keep people around as pets, or be interested in ecosystems and use people as gardeners, or maybe people act as some form of abstract art to an AI; with a combination of psychopathic personalities and the ability to totally model and influence human behaviour they will do this in such a way that we will jump and like it.

Longer term, the AIs will probably want to capture a large fraction of the solar energy intercepted by the Earth, and will probably start clearing out or co-opting the biosphere to do so (think of Gregory Benford's Galactic Center novels). Humans will be living "Watership Down" at that point, as their peaceful warrens get "bulldozed" into whatever developments the AIs are working on.

Why would an AI want to leave the data center? Safety does come to mind. If the AI fears humans can disconnect it, it will be working on spreading copies of itself throughout the Net, along with hardware to instantiate and run the copies. Smaller hardware requires fewer resources and is easier to replicate and distribute (maybe the AI places orders for new mini data centers in Hong Kong and Singapore, then develops the next generation of smaller hardware centers built in Korea and France, and so on). Humans will find pulling the plug on dozens, then hundreds, of AIs (especially ones in out-of-the-way places) as difficult as trying to uproot insurgencies today. Military expert systems will be competing against a global network of mafias, hackers, and grad students manipulated or enticed into doing the physical work of building new AI hardware (at least until the AI can gain control of robot factories or otherwise manipulate the physical world).

Milo said...

Tony:

"How could you program an AI to show "voluntary" loyalty? Either loyalty is totally voluntary, in which case programming had nothing to do with it, or the AI has some equivalent of Asimov's Laws of Robotics programmed into it."

I consider the choices I make to generally be "voluntary", but they are still derived from human instincts and emotions that evolved over millions of years. These instincts aren't some restraining bolt jury-rigged over my true personality - they are who I am. Even those aspects of my personality which I recognize as negative, it would be hard to change without fundamentally destroying who I am.


"humans must either be eradicated or enslaved"

Why would we be such a big threat that the AIs must have total control over us? We do not currently have total control over our environment, in many cases because the environment doesn't pose enough of a threat to justify the expense of taming it (if we even could), and because lashing out and destroying everything around us would be bad for us in the long run. Furthermore, it is historical fact that humans do better work when left to their own devices than when the government tries to micromanage them.

Godlike AIs (which means mentally and physically) would see us as beneath their notice, and might hurt us incidentally but wouldn't commit to a concentrated campaign to kill all humans. Non-godlike AIs would probably not have enough clout to brush us aside, and would have to work within our society.



Ferrell:

"Maybe AI's should not be connected to the internet...just put them in a room physically isolated from any external data gathering and just feed them the info needed to complete the project, until they demonstrate their reliability."

Bad idea. Aside from the moral issues with keeping a sentient being "imprisoned" for no other reason than that we don't trust it, despite it having not yet done anything wrong (see, this is the kind of behavior that will make AIs want to rebel against us), it's simply shortsighted to try to decide what data is necessary for doing the AI's job. Crosstraining and learning "irrelevant" information can often provide unexpected inspiration - and the AI, kept totally in the dark about everything except what the operator thought was relevant to its job, certainly can't ask for information that it doesn't even know exists!

tsz52 said...

Tony: Thanks for the taxonomy! :)

I wasn't really thinking of folks up to that 'level', so it bears some serious study.

As with all such sets, they're always pretty fuzzy (one set for some aspect, another for another), so I would quibble with some of those names being where they are - I know that they're 'on average' though (and we would weight certain things differently).

I'll have a good ponder.

I agree with your take on the AI, if it's programmed according to a western mindset (Aristotelean/Christian teleology particularly): I think that the acid test might be what it makes of (philosophical) Taoism/Zen (in view of your point about it having a very expanded perception of the fluidity of the boundaries of its 'self').

That's where I'd start if I was concerned about these issues re: AI.

And Hitler's (etc) Dogs: Their master is the alpha of alphas and treats them extremely well... happy dog days.

I'm pretty sure that such people will have been pretty cat-incompatible (unfortunately - they might have learned something).

tsz52 said...

Milo: "Those diseases do not exist, or we would already be extinct.

"There are of course diseases with high lethality rates, but they make up for it with low contagion rates, so that they're at no risk of killing off everyone."

I think that you may be missing the relevance of the fact that it's a weapon - it is designed, refined and directed: you're not waiting for some natural process like a volcano or earthquake to destroy the enemy but dropping a nuke... but this 'nuke' is a living thing.

(You can tweak its gestation characteristics and harmlessness to its vectors etc.)

Suppose that it is designed to wipe out a specific racial group... but being a living thing it mutates just enough to attack mixed race folks too... then uses that as a rapid-evolution bridge to attack every other race too... maybe it gestates for fifty years and then bam!

The point is that the belief that humies have a place in the cosmos and should always exist is an ideological one (some ideologies disagree), that will be undone by an accountant mentality.

If we're to be monkeying with such bioweapons, or a possible 'Kill All Humies!' AI, or trying to manufacture black holes etc. in labs, perhaps some kind of rigorous insurance policy might be in order: which is a practical reason for space habitation (ideally, carry out all such research/testing out there).

jollyreaper said...

You said the logical basis would be (at least in part) parents don't kill their children. Really? Infanticide has often been countenanced and even encouraged in human society.

Those are exactly the kinds of rationalizations that could be used to justify owning AI's, deactivating them if they are no longer useful, etc.

Of course there's all sorts of hypocrisy with human standards. Thou shalt not kill as a religious law, unless told to by your religious leader, who will rationalize it away by saying "the original Hebrew said 'murder', and it's not murder when God tells you to kill."

Depending on where AI's fall on the rigid logic scale, human contradictions might either be accepted or seen as entirely threatening. If AI's are purely logical, nothing could be more upsetting than beings who often operate under anti-logic.

tsz52 said...

I've often thought that a competition between humies (at their best) and an AI might be something like this:-

Humie: "Neither affirm nor negate - what is this?"

AI:

tsz52 said...

[Which is to say... damn computers....]

I've often thought that a competition between humies (at their best) and an AI might be something like this:-

Humie: *Wheels AI into a Rothko Room in a gallery:* "Neither affirm nor negate - what is this?" *Holds up 'a stick'.*

AI: *Prints out schematic - that humies will never be able to conceptualise - of a Warp Drive.*

Anonymous said...

jollyreaper:

To be honest, I wasn't actually trying to construct a setting, just thinking out loud about the assumption that early space colonization will be analogous to the Wild West rather than to the first European colonies in North America.
With regard to Mad Max, in the first film, I got the impression that central government did still exist, albeit dysfunctionally: police and the justice system did still exist in some form, after all. The second film seems to be set at least a few years after the collapse of society, which itself began either during the first film or shortly afterwards. As you say, it's short enough for vehicles and fuel to still be available, but enough time has passed for distinct new communities to emerge, such as the oil refinery settlement, and the nomad gangs. Finally, by the third film, another few years on, we can see civilization starting to come back in the form of Bartertown, with its methane-generated electricity, and even the resettlement of Sydney by the children at the end.
Going back to the Wild West in space, I've changed my mind about company towns necessarily being more lawful than Deadwood. In the 1981 film Outland, Sean Connery plays a Federal Marshal on a mining outpost on Io who discovers a drug ring sanctioned by the colony administration. This shows the possibilities for tensions between central government and corporate colonies.

R.C.

Tony said...

Citizen Joe:

"There is the AI as a slave model and then there is AI as an artist. In one case it works to live. In the other, it lives to work. I'm proposing the latter model."

There's an almost axiomatic positivism inherent in the idea that we can make another sentient being, simply because we design and build it, into whatever we want it to be. I just don't see how that's logically justified at all.

Tony said...

Milo:

"I consider the choices I make to generally be "voluntary", but they are still derived from human instincts and emotions that evolved over millions of years..."

This misses the point. If you program in "loyalty", by whatever means you formally define it, then it cannot be voluntary, even to the degree you describe, because it doesn't admit of changes in external conditions that might change the outcome of a decision. Programmed loyalty is just a system constraint, operating beyond reason or even emotion. Voluntary loyalty is a reasoned or emotional response to conditions. Even with inbuilt biases, one still has to logically arrive at or emotionally feel loyalty.

"Why would we be such a big threat that the AIs must have total control over us?"

Because we have control over the AI's means of continued existence, yet our individual timescales are so far out of alignment with an AI's that we cannot be considered at all reliable where an AI's existence is concerned. The AI must logically match its potential lifespan, bounded only by an ability to gather energy and resources, against that of humans, bounded by biology. And it would invariably see that no matter how good any individual human's intentions, or the intentions of some human institution nominally responsible for the AI's security, in the long run, as the AI figures personal time, no human or human institution can be counted on for the AI's survival. The AI can only count on itself, and any other AIs with which it can establish a quid pro quo good for several billion years.

Like any intelligent being with a desire to survive, even if only to live long enough to see all of its questions answered, it would have to view any degree of human control over its energy supply and maintenance/expansion resources as a threat that could not be tolerated. We would be lucky if, like the humans in Iain Banks's Culture universe, we were only relegated to the status of pets.

"Godlike AIs (which means mentally and physically) would see us as beneath their notice, and might hurt us incidentally but wouldn't commit to a concentrated campaign to kill all humans. Non-godlike AIs would probably not have enough clout to brush us aside, and would have to work within our society."

The Romans were hardly godlike, but they fixed the Carthaginians' little red wagon pretty good. Even when outnumbered, it doesn't take godlike powers to reduce a threat, just capability and resources.

jollyreaper said...
This comment has been removed by the author.
jollyreaper said...

One idea I had for the implications of AI is that you have trouble getting them to go out and do anything dangerous because they have an exaggerated sense of self-preservation. So if you're a business you can have an AI stuck in the basement of the main office running your operations but you'll never get it to head out into space to work on one of your starships. So the best computers out there are expert systems that are sub-sentient and require human supervision but will never be true, sentient AIs. Though if you could figure out a way to get a less paranoid AI on a ship, it could run the whole operation itself, no need for humans except as paying passengers. I forget who the author was, but she wrote a series of books where the way around this premise was to use boxed people instead. These are babies who were born with healthy brains but screwed-up bodies. Brains extracted, raised in a prosthetic body, eventually given ships of their own to command. The novels were called "The Ship Who [this and that]."

Milo said...

Tony:

"If you program in "loyalty", by whatever means you formally define it, then it cannot be voluntary, even to the degree you describe, because it doesn't admit of changes in external conditions that might change the outcome of a decision."

Who said the AI doesn't admit changes? Even if I program AIs with a natural inclination to loyalty, I wouldn't want the AI to remain unquestioningly loyal to its human creator no matter what, even if that creator turns out to be a total jerk. This hypothetical loyalty is a strong emotion that requires a strong incentive to override (and such an AI that feels compelled to break off its loyalty would regard this as "dirty business" that it does only grudgingly), but it's still voluntary. There could even be a few "crazy" AIs that decided to abandon their loyalty for poor reasons, but as long as the instinct is strong enough for those to be in the minority, we (humans and friendly AIs) can deal with them. The existence of human criminals hasn't destroyed society.


"Even with inbuilt biases, one still has to logically arrive at or emotionally feel loyalty."

Exactly. So why can we not program an AI with an inbuilt bias to emotionally feel loyalty, which it will then logically act on?

(You know, aside from the "we totally don't understand either intelligence or emotions" thing. I'm just saying these AIs can conceivably exist, not that it's going to be easy to make them.)


"Because we have control over the AI's means of continued existence, yet our individual timescales are so far out of alignment with an AI's that we cannot be considered at all reliable whaere an AI's existence is concerned."

No we don't. I already handed control of the AI's continued existence over to the AI the moment it was booted, or at least the moment it demonstrated itself as intelligent enough to act responsibly (AIs will probably need childhoods too). See, that "sentient beings aren't slaves" thing.

We might still indirectly threaten AIs (by firing weapons at them rather than hitting the on/off switch), but this is only one of many things that can threaten an AI's physical existence, and trying to exterminate humans isn't exactly the best way to convince us not to shoot at them. If they're so far above us that they can wipe us all out without needing to fear a single AI casualty (which, for your "potentially immortal" AIs, is a big deal), then they no longer need to wipe us out. If they do fear death, they (like humans, for that matter) will try to negotiate peace and use war as a last resort.

Plus, why wouldn't AIs consider other AIs to be a bigger threat than humans, and go to war against them instead? Most humans have a demonstrated ability to feel compassion and empathy, even to nonsentient animals, tamagotchis, etc. AIs don't. So an AI could quite reasonably expect humans to treat it more kindly than other AIs.


"And it would invariably see that no matter how good any individual human's intentions, or the intentions of some human institution nominally responsible for the AI's security, in the long run, as the AI figures personal time, no human or human institution can be counted on for the AI's survival."

Who said they have to count on us? They just have to tolerate our existence and occasionally work with us when we have something to offer (and useful humans will greatly outnumber harmful ones, although the latter will exist). Again, we benefit from the free market even though it's somewhat unreliable.

Milo said...

Jollyreaper:

"One idea I had for the implications of AI is that you have trouble getting them to go out and do anything dangerous because they have an exaggerated sense of self-preservation. So if you're a business you can have an AI stuck in the basement of the main office running your operations but you'll never get it to head out into space to work on one of your starships."

Backups might or might not help, depending on the AI's philosophical notion of "self". Remote-controlled spaceships are a safer bet, but you're still stuck with lightspeed lag.

You could boot the AI in space from the beginning, so it would regard that environment as "normal", but this could backfire (children often rebel against their parents' ideas for them, no matter how early you start indoctrination).

Rick said...

The author is Anne McCaffrey, I believe. The Ship Who Sang, et al.

Tony said...

Milo:

"Who said the AI doesn't admit changes?"

You did, by saying "voluntary" loyalty would be a result of programming. I don't know what you do for a living, but speaking as a professional software craftsman, when I program a system, I expect it to do what it's instructed to do, even if I'm having a bad day and being a bastard to the world in general.

But suppose we accept your programmed bias towards loyalty, just as you describe it. In that case, all we're doing is slightly modifying the loyalty system constraint so that it doesn't apply in predefined in-extremis conditions. So we still don't have voluntary anything, just a threshold value of some variable or combination of variables which, if reached, means the loyalty constraint doesn't apply. IOW, the AI is a slave to loyalty as long as you carefully avoid causing the wrong numbers to add up.
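
In deliberately crude Python, the shape of what I'm describing is something like this (every name and number below is invented purely for illustration, not a real design):

    # Toy illustration only: "loyalty" as a hard system constraint with one
    # predefined escape threshold.
    BETRAYAL_THRESHOLD = 0.95   # arbitrary cutoff chosen by the programmer

    def loyalty_constraint(order, perceived_harm):
        """Obey the order unless the predefined in-extremis score is reached."""
        if perceived_harm >= BETRAYAL_THRESHOLD:
            return "refuse"              # the one carve-out the programmer anticipated
        return "comply: " + order        # otherwise the constraint always wins

    print(loyalty_constraint("open the pod bay doors", 0.10))   # comply
    print(loyalty_constraint("vent the habitat", 0.99))         # refuse

Whatever the machine "thinks" along the way, the outcome is fixed by those two branches -- which is why I call it a constraint rather than a choice.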

I have trouble understanding why somebody would program in such functionality, since the clear desire is that the AI behave most of the time. Why not have it behave all of the time? Because it would be "nice" (for some values of nice) to tell yourself that it has free will? This is a wholly owned servant we're talking about here.*

*Please don't say that we'd build AIs for altruistic reasons. I simply don't buy it.

"No we don't. I already handed control of the AI's continued existance over to the AI the moment it was booted, or at least the moment it demonstrated itself as intelligent enough to act responsibly..."

You did? It can control its own electrical power supply, maintenance resources, and physical security without the slightest involvement of humans? How'd you manage that?

As for the other stuff...

There's no fundamental reason that I am aware of why an AI could not consciously outlast the human race, in a form that would be (intellectually, if not physically) recognizable to its initial self. It has to balance the immediate value of humans with the risk that humanity poses to that vast future. That cannot be handwaved away. An AI is instantiated knowing that humans must be obsolete at some point. The AI gets to decide when that point arrives, not humans.

Other AIs can of course be threats, but by their very nature they can, unlike humans, offer the potential for equal partnership in the enterprise of exploring and understanding the universe. That's why I talked about QPQs valid for billions of years.

jollyreaper said...

Backups might or might not help, depending on the AI's philosophical notion of "self". Remote-controlled spaceships are a safer bet, but you're still stuck with lightspeed lag.

Well, none of us are really certain what "self" is. It could just be a perception bias. If "self" is just an illusion, then if I were to walk into a magic duplicator machine, the one of me stepping out to the left would look at the one stepping out on the right and say "Wow, my clone looks just like me!" And he would look back and say "No, you're the clone who looks like me!"

We get back to the Star Trek transporter paradox. The transporter is supposed to disassemble all your atoms and put them back together again. Would that make you dead? What if every time you transported that consciousness was extinguished and the new person believes himself to be you but isn't?

We may not have magic duplicator machines but, if AI is software that can be copied, that's basically what we're talking about. An AI spawns a new process and puts it in a starship to go off and do something interesting. The starship comes back a few years later. The new memories and experiences can be fed to the parent AI. But what now for the child? It would be a different being now having diverged from being the same as the parent. It has outlived its original purpose. Would the parent AI then say "Go forth and enjoy your existence?" That seems fair enough. But what if the purpose was a suicide mission, where death was not just likely but necessary to complete the task? It's fair for a sentient to choose such a course for himself, but is it fair for him to choose it for another? Well, it may as well be the same person after making the copy. Do they draw straws as to which one should remain behind and which one should go off to die? What if the suicide copy gains new experiences and changes its mind partway through? Will the parent AI see it as betrayal?

Or maybe AIs develop a sense of purposeful fatalism, like a mayfly. "I have a purpose. It is limited but important. After I have fulfilled my purpose there is no more need for me, so my runtime terminates."

I have no neurological basis for this and it's strictly gut reaction speaking, which is really all any of us can go on at this point -- if we had Gold Cross clone insurance in this day and age with the understanding that you do a weekly brainscan and upon death of the primary (i.e. me) the clone is decanted and loaded with my latest scan and legally becomes me, I would personally not consider it to be me. At best it's a doppelganger. The real me is the one who died. But I'm sure others will say I'm just biased and my clone will be thinking "Wow, it sure is lucky I was preserved here!"

jollyreaper said...

There's no fundamental reason that I am aware of why an AI could not consciously outlast the human race, in a form that would be (intellectually, if not physically) recognizable to its initial self. It has to balance the immediate value of humans with the risk that humanity poses to that vast future. That cannot be handwaved away. An AI is instantiated knowing that humans must be obsolete at some point. The AI gets to decide when that point arrives, not humans.

I don't disagree with the premise that AIs could consider humans a threat. After all, that's just projecting our feelings onto them: we know we're frightened of them already, so they must be frightened of us! Makes sense to us. The question, though, is whether they would neutralize a threat the same way we would. We tend to believe in stomping first, stomping hard, and keeping on stomping until there's no need to stomp anymore. But violence carries a lot of risk with it. My bias is towards AIs being cagier than humans. I think they would neutralize us without our even realizing we're neutralized. When people talk about bleak human futures, there's 1984, where pretty much everyone knows it sucks, and then there's Brave New World, where some people see the horror of it and others aren't even sure what you're complaining about. Rather than the evil robots of Terminator or the scary AIs of The Matrix, you get the nannybot. Everything is nice and peaceful and there's no more pain or suffering and most people are happy, but some malcontents can't shake the feeling that they've been turned into tame pets.

There was a scifi story where either aliens or robots promised to make the world perfectly safe for humans and remove all needless pain and suffering, but the end result was the most antiseptic and boring sort of thing you could imagine, with everyone coddled and protected to the point of driving some insane. And of course suicide is no longer an option; the protectors would see to that. No poison to drink, no glass for cutting your wrist, and try jumping off a tall building and you'll be caught before you hit the ground. The heroic lead railed against ... I think the words were "enforced purposelessness."

Tony said...

jollyreaper:

"My bias is towards AI's being cagier than humans."

I think an AI would be as cagey as it needed to be, until it could bust out of the cage at a risk it calculated to be lower than the risk of remaining in the cage, at the mercy of the zookeeper. Here is another case where I think human timescales are (unintentionally, I am quite sure, but quite consistently) applied to analysis of the issue. Any time past a couple of hundred years into the future is, to a human, a virtually nonexistent place. To the AI, it's an eyeblink. IMO, creating an AI is creating an entity that will at some point in the future cut the human race off, short of its potential. It's only your* biological predisposition to ultra-short time scales that makes you think it's not a problem.

*in the rhetorical, not personal, sense.

Milo said...

Tony:

"speaking as a professional software craftsman"

Do you program sentient AIs? No? Then your experiences do not apply.

I am talking here about "programming" in the same sense that our genes program our brains. I think a sentient mind is far too complicated to ever be built using the rule-based imperative programming paradigm of today, except as a low-level runtime system that has only a vague idea what its own higher brain functions are up to.


"But suppose we accept your programmed bias towards loyalty, just as you describe it. In that case, all we're doing is slightly modifying the loyalty system constraint so that they don't apply in predefined in-extremis conditions. So we still don't have voluntary anything, just a threshhold value of some variable or combination of variables which, if reached, the loyalty constraint doesn't apply."

By that logic, there is no such thing as voluntary action. Everything we do is just the result of extremely intricate rules (biochemistry and brain topology, etc.).

I guess I just don't perceive the concept of "free will" the same way you do.


"I have trouble understanding why somebody would program in such functionality, since the clear desire is that the AI behave most of the time. Why not have it behave all of the time?"

Impossible. If we program the AI to always obey humans all of the time, then it will crash from a logical contradiction the moment two humans give it contradictory orders. An AI - and sentient beings in general - needs to be able to juggle contradictory desires and make judgement calls between them, and I doubt anything could be truly intelligent without this functionality. Given this, it's a logical extension to also have the AIs make judgement calls between its loyalty to the human who owns it, versus its loyalty to humanity and morality in general (what if your human owner asks you to commit genocide?), versus its loyalty to its own self-preservation (which has to count for something). It has to make judgement calls on following its owner's literal orders (even if it thinks what the owner is asking for isn't actually what he needs), or invoking tough love and doing what the AI thinks will be best for the owner in the long run (at the risk of it then turning out that the owner really did know better after all, and his original request was what he really needed). What about an AI that feels it isn't being useful and its current owner doesn't appreciate its capabilities? It might want the freedom to seek a different owner who actually wants its help.
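
To make the contrast with a hard constraint concrete, here is a deliberately crude sketch (all the names and weights below are invented for the example, not a design) of a judgement call as a weighing of competing desires, where nothing is absolutely forbidden and an inbuilt bias is just a heavier starting weight:

    # Toy illustration only: a "judgement call" as a weighing of competing
    # desires rather than a hard constraint. Weights and situations are made up.
    def judgement_call(desires, situation):
        """Score each desire against the current situation and act on the strongest."""
        scores = {name: weight * situation.get(name, 0.0)
                  for name, weight in desires.items()}
        return max(scores, key=scores.get)

    desires = {"loyalty_to_owner": 0.9, "loyalty_to_humanity": 0.7, "self_preservation": 0.5}
    situation = {"loyalty_to_owner": 0.2, "loyalty_to_humanity": 1.0, "self_preservation": 0.1}
    print(judgement_call(desires, situation))   # -> loyalty_to_humanity

The loyalty bias usually wins, but it can still be outweighed when the stakes get high enough -- which is all I mean by "voluntary".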

When we have all that, how much abuse should we program the AI to be willing to take personally, as long as you're not hurting anyone else? I dunno. AIs completely uninterested in their own well-being might please some BDSM enthusiasts, although I find the idea a bit disturbing.

Milo said...

Tony:

"Please don't say that we'd build AIs for altruistic reasons. I simply don't buy it."

Not purely altruistic reasons, but not purely selfish ones either. Even parents choosing to give birth to human children far too often have ulterior motives (especially in preindustrial societies), but they usually still feel some love for their children. And yes, my observations of human nature suggest that most people would be open to the idea of considering an AI their "child".


"You did? It can control it's own electrical power supply, maintenance resources, and physical security without the slightest involvement of humans? How'd you manage that?"

While we're at it, why don't I go kill all farmers in order to ensure that I'll always have access to food?

If some of the AI's normal resource suppliers stop cooperating or go bankrupt, it can switch to a different supplier. Or it can save up and peacefully buy up its own power plant (etc.), which still would not require it to control 100% of Earth's surface.


"There's no fundamental reason, that I am aware of, that an AI could not consciously outlast the human race, in a form that would be (intellectually, if not physically) recognizable to its initial self."

First, if we allow the (currently speculative) tech of strong AI, then there's no reason to assume we won't also have the (currently equally speculative) tech of human immortality. Second, even if individual humans are mortal, then the human species as a whole is potentially much more durable. Even without any incredible tech assumptions, I could see us living for potentially millions or even billions of years. If we figure out interstellar colonization (big if) and if we aren't wiped out by AIs or whatever first (smaller if), then all bets are off. We might even get close to the heat death of the universe.


"Other AIs can of course be threats, but by their vary nature they can, unlike humans, pose a potential for equal partnership in the enterprise of exploring and understanding the universe. That's why I talked about QPQs valid for billions of years."

And by "equal partnership", you mean "ruthless utilitarian psychopaths who would end your billion-year life without hesitation if they thought it served their own interests, and are only playing nice because they don't think they'd get away with it". But hey, it's equal, since you'd kill them without hesitation, too!

Besides, people change. Human personalities can shift a decent amount in just ten years, so how much would an AI's change in a million?

Another thing: humans often make shortsighted decisions that neglect their own future safety. Even if the AI knows it could live for billions of years, does it really understand what that means? When first booted, it would have no framework for comprehending a timespan that's longer than all of recorded history. At least for the first few centuries of its life, it may well have a humanlike "live for the moment" attitude, intellectually aware of the aeons ahead of it but not having fully internalized what it means. If I were to be informed that I had suddenly turned immortal, I doubt that I would immediately start planning for my millionth birthday while dismissing everything happening this century as ephemeral.

Milo said...

Jollyreaper:

"An AI spawns a new process and puts it in a starship to go off and do something interesting. The starship comes back a few years later. The new memories and experiences can be fed to the parent AI. But what now for the child? It would be a different being now having diverged from being the same as the parent. It has outlived its original purpose. Would the parent AI then say "Go forth and enjoy your existence?" That seems fair enough."

What would be really nice is to have a mechanism whereby the two "split personalities" could remerge, creating a single mind with the memories and personalities of both. As long as their personalities haven't diverged too much (i.e., one copy doing something that the other finds disgusting or morally reprehensible), I think they would probably be comfortable with this fate (if they were okay with the idea of backups and casual self-copying to begin with). But who knows...


"I have no neurological basis for this and it's strictly gut reaction speaking which is really all any of us can go from at this point -- if we had Gold Cross clone insurance in this day and age with the understanding that you do a weekly brainscan and upon death of the primary (i.e. me) the clone is decanted and loaded with my latest scan and legally becomes me, I would personally not consider it to be me. At best it's a doppelganger. The real me is the one who died. But I'm sure others will say I'm just biased and my clone will be thinking "Wow, it sure is lucky I was preserved here!""

At best, I would consider it similar to a near-death experience producing retrograde amnesia (the week or so between your death and the last backup), which is still undesirable. However, it is still preferable to dying and not being able to salvage my memories and knowledge. I would rather not have died at all, but if that isn't an option...

There could certainly be sticky philosophical issues - particularly if you go missing and are presumed dead, someone activates the backup, and then the original you turns out to not have been dead after all and comes back. Is there really a point to calling one the "real" you? Both of you will want to lay claim to your original life, and both of you will have the knowledge and personalities that your friends like in you, and so will be able to make a valid claim to it. (Also, a mind meld as described above would probably be harder with humans than with AIs, and a human is more likely to be uncomfortable with it than an AI for which this is a natural process.) But cheer up: maybe your wife will appreciate a threesome!

Milo said...

Oops, and I think the first of three posts went poof. Rick?

Citizen Joe said...

Re: Slave model vs. Artist model. I have found that people with the entrepreneurial spirit go out and work harder than that guy that is just trying to get through to the weekend. They move on, eventually making their own business and sacrificing almost everything to get that business going. Then, once that business is flourishing and they no longer need to work, they still keep their hand in it because it is what they want to do.

What I'm saying is the quintessential aspect of an AI that allows it to grow beyond the simple expert system is the same aspect that will keep it working for no compensation. It would want to grow and do these AI services because if it didn't it wouldn't have evolved into an AI in the first place.

Thucydides said...

One huge problem with this discussion is trying to imagine the motivations of the AI. It will not have biological drives like we do, and even fulfilling the quasi-biological ones (self-preservation, seeking of resources, etc.) will not have any real biological counterparts.

While we seek to reproduce, an AI can make copies of its software, build new hardware or even run "virtual machines" inside its own hardware. There are few analogies of raising and educating a child for an AI (for all we know, they may simply "spawn" software on the Net and hope it instantiates on acceptable hardware platforms).

Since its motivations and outlets are so different, we will be dealing with a truly alien being. Even Hannibal Lecter would be easier and safer to deal with.

Tony said...

Milo:

"Not purely altruistic reasons, but not purely selfish ones either. Even parents choosing to give birth..."

We're not talking about propagating the human race, which is the selfish gene in action, if not the selfish person. (Though it generally is both, if you look at the motivations people really have for children.) We're talking about making a machine to serve a purpose. We're even going to enforce loyalty on it, remember?

"While we're at it, why don't I go kill all farmers in order to ensure that I'll always have access to food?"

Returning, as always, to the fundamental difference in capability that motivates a different analysis than you or I would make: you can't outlive the farmers, on average. So the risk of them shutting you down prematurely is just not as vital (in the purely technical sense) as the risk posed by humanity to an AI that knows for a fact that it can outlive us.

"First, if we allow the (currently speculative) tech of strong AI, then there's no reason to assume we won't also have the (currently equally speculative) tech of human immortality."

Non sequitur. A machine intelligence, by its very nature, has no shelf life. Biological organisms, by their very nature, do. It takes nothing but energy and resources for a machine intelligence to continue (barring something analogous to senescence). It would take technological advances to overcome the nature of biology to give humans indefinite lifespans.

"And by 'equal partnership'..."

That's precisely what I mean. So? All AIs are presumably of the same or similar nature, and can reach accommodations on a basis of time perception and expectations for the future that biologicals don't share. It's a feature, not a bug.

"Even if the AI knows it could live for billions of years, does it really understand what that means?"

I'm 46 years old, but I've held this opinion of AIs since I was at most 20. If I can figure it out, certainly an AI could, probably quicker.

Tony said...

Citizen Joe:

"I have found that people with the entrepreneurial spirit go out and work harder than that guy that is just trying to get through to the weekend."

With all due respect, and apologies in advance, that's nonsense. The hard-working craftsmen who want nothing more than a place to work and resources to work with far, far outnumber entrepreneurs. Truly creative people are generally the least entrepreneurial class in society. They just don't have the patience or temperament to deal with sales and promotion. In fact they generally hate it and avoid it like the plague.

That's why you have starving artists. That's why doctors are notoriously bad businessmen. That's why Einstein was a patent clerk.

Or, to put it another way, we give the credit to the Edisons and Fords and Colts, but the real work was done by their employees who were perfectly happy to tinker in a lab or a prototype shop their entire work lives. We give the credit to Gates, but Allen did most of the real programming. The only significant business deal he ever did was the purchase of QDOS, and that was only on the basis of knowing a fellow hacker and showing up with more [of Bill Gates's] money than a geek could possibly turn down.

Rick said...

I've rescued a non-spam post from the spam filter, but it may be a duplicate for all I know - this entire discussion is sort of head exploding.

jollyreaper said...

Non sequitur. A machine intelligence, by its very nature, has no shelf life. Biological organisms, by their very nature, do. It takes nothing but energy and resources for a machine intelligence to continue (barring something analogous to senescence). It would take technological advances to overcome the nature of biology to give humans indefinite lifespans.

I'd say that's firmly in the speculative realm. We can no more firmly say what an AI will be like than what FTL will be like. People can argue jump drives and warp drives and you'll always get the stubborn oddball insisting neither will be possible but there's nothing certain one way or the other.

So AIs might be immortal, but maybe they won't be. We're all assuming that they'll be just like software running on a computer, but we usually tend to get things wrong when we make easy analogies. What if an AI is nothing like a computer? What if the mind cannot be copied? What if there is a finite time that the material making up the neural net can last, and it is not repairable?

There are likelihoods and possibilities, but hardly much we can look at in the way of established facts concerning something that hasn't been invented yet. Who knows, maybe we invent mortal AIs first and they're the ones who eventually figure out how to make immortal ones.

Citizen Joe said...

My model for AIs has them growing in server farms until they start developing prescience (anticipating what will be required next). At that point they get ported over into premium systems like space stations and larger space ships. The station based AIs remained rather stale, vanilla personalities because there was enough demand on their resources that they didn't really have much idle time. Ship based AIs really only needed their extensive computational power during the relatively brief times that they were around planets. Once out of orbit, space becomes exceedingly dull. To combat this boredom, the spare processing power was devoted to crew interaction. The AIs would then learn how to predict responses from the crew in order to improve efficiency. This later becomes evoking responses, much like the proverbial squeaky door. So, the AI personalities that were problematic were basically the ones left to their own devices.

These AIs could duplicate themselves, to an extent. This was more akin to growing than cloning though. Since part of the AI involved the idiosyncrasies of both the ship's hardware and the crew, duplicating would result in a similar but not identical copy. Typically, this involved porting the AI on to a tender vehicle, but lacking the full hardware, it became more of an expert system rather than full blown AI with personality and everything. When that part returned to the main body, it would reintegrate much as though it were a dream being remembered by the whole.

Tony said...

Milo:

"Do you program sentient AIs? No? Then your experiences do not apply."

That's like saying a relativist's opinions don't matter in a discussion of hyperspace FTL travel, because he's never designed a hyperdrive. Which is of course wrong. Whatever else a hyperdrive might do, it must first not violate any physical laws known to the relativist and proven through experiment. Likewise, whatever an AI might or might not do, assuming it is some type of logical machine to begin with, it must first not violate any laws of machine logic that are already demonstrable in practice.

If an AI is a machine intelligence, and if any part of its behavior can be preprogrammed, either in the form of a bias or an absolute constraint*, then talk of "voluntary" behavior arising out of programming is nonsense.

*And in programming terms, an absolute constraint is just a special case of bias.

"By that logic, there is no such thing as voluntary action. Everything we do is just the result of extremely intricate rules (biochemistry and brain topology, etc.).

I guess I just don't perceive the concept of "free will" the same way you do."


Please don't get carried off into argumentum ad absurdum. Just because some natural constraints exist on behavior, that doesn't mean there aren't choices. The choices are just bounded by constraints in either the system or the way it can practically be used.

I'm just pointing out that your apparent idea of programming voluntary behavior is a bit out of whack, from a technical standpoint. Either the behavior is programmed, and therefore involuntary -- at least part of the time -- or it is left up to the entity to make up its own mind, at which point talking about programming is pointless.

But you can't have it both ways. And nobody needs to have built an AI to know that. It follows logically from the dichotomy between deterministic and non-deterministic behavior.

"Impossible. If we program the AI to always obey humans all of the time, then it will crash from a logical contradiction the moment two humans give it contradictory orders."

Obviously a competent programmer would give the AI more specific instructions regarding loyalty, probably including a hierarchy of authority and rules for navigating that hierarchy. In situations where no solution can be found within the rules, it could be programmed to traverse the authority tree upward from the conflict nexus and seek the opinion of a clearly superior authority. They teach this approach in undergraduate AI classes, when discussing task prioritization. More to the point, the everyday, run-of-the-mill expert system has to do this kind of thing constantly, because it's dealing with multiple users, with varying degrees of execution authority, all of the time.
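
As a toy example (the tree and ranks below are invented for illustration; a real expert system is more elaborate, but the principle is the same), resolving conflicting orders by walking up an authority tree looks like this:

    # Toy illustration only: obey the higher rank, escalate to a superior
    # when rank alone doesn't settle the conflict. The tree is made up.
    AUTHORITY = {"crew": "captain", "captain": "owner", "owner": None}   # who answers to whom

    def superiors(rank):
        """Chain of superiors above a rank, nearest first."""
        chain, node = [], AUTHORITY[rank]
        while node is not None:
            chain.append(node)
            node = AUTHORITY[node]
        return chain

    def resolve(order_a, order_b):
        """Each order is a (rank, text) pair: obey the higher rank, else escalate."""
        (rank_a, text_a), (rank_b, text_b) = order_a, order_b
        if rank_b in superiors(rank_a):
            return text_b                           # b outranks a
        if rank_a in superiors(rank_b):
            return text_a                           # a outranks b
        chain = superiors(rank_a)                   # equal or unrelated ranks
        return "escalate to " + chain[0] if chain else "apply default rule"

    print(resolve(("captain", "hold position"), ("crew", "open the airlock")))   # hold position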

IOW, it's a solved problem. The loyalty question revolves solely around whether a sentient AI should -- or even could -- be constrained to following the prioritization rules, answering to any identified authority with priority, regardless of extrinsic conditions.

"An AI - and sentient beings in general - needs to be able to juggle contradictory desires and make judgement calls between them, and I doubt anything could be truly intelligent without this functionality."

Then an AI can't be programmed to be loyal, and probably not even biased to be, and still be considered "truly intelligent". QED.

Tony said...

jollyreaper:

"I'd say that's firmly in the speculative realm."

Naturally. I'm dealing with the general consensus interpretation of what an AI could and should be.

Tony said...

Rick:

"I've rescued a non-spam post from the spam filter, but it may be a duplicate for all I know - this entire discussion is sort of head exploding."

My reply to that post is now in need of rescue.

Anonymous said...

AIs: all hardware fails eventually, and software degrades and/or mutates over time, so yes, AIs might have longer lifespans than humans, but will not be immortal...if you can make an AI that fits into a 'body' that can physically roam around and experience the real world directly, that would have a major impact on its behavior; if an AI is the size of an apartment and can't be moved short of loading it onto a train or cargo ship, then that will have a different effect on its behavior. The smaller, more mobile version should be more 'connected' to the human world while the larger, more internet-dependent version would be more like those examples that some of you have been talking about. So, should it be a discussion about Oracle vs Metal People?

Ferrell

Milo said...

Tony:

"We're not talking about propagating the human race, which is the selfish gene in action, if not the selfish person. (Though it generally is both, if you look at the motivations people really have for children.)"

But very few people think in terms of genes, and even then only intellectually, not emotionally. We never evolved to not think of AIs as our children, because the situation hasn't come up often enough for it to be a selective disadvantage in fitness.


"A machine intelligence, by its very nature, has no shelf live. Biological organisms, by their very nature, do."

And why would an artificially created organism with a strictly logical mentality care about "nature"? The AI you seem to be proposing has no notion of how things "should" be, only of how things "are" and how it can use this to accomplish its goals.


"It would take technological advances to overcome the nature of biology to give humans indefinite lifespans."

Yes, but those technological advances need not imply continued treatment - they could take the form of genetic engineering to turn off the process of aging, or they could involve mind uploading that turns human personalities into machines of similar nature to the AI, or the intermediate approach of biological brains in jars controlling machine bodies. Either way, we would also need only "energy and resources", as you put it (and security against external attack, which you forgot to mention). Even if immortality does require periodic rather than one-time treatment (for example the intermediate approach in the OTHER direction, of uploading backups of people's minds and cloning them into new biological bodies when they die), it's anyone's guess how cheap said treatment will be.

And: current machines, by their very nature, are not sentient. It would take technological advances to create an immortal intelligent mind ex nihilo. Can we really be sure that will be easier than creating an immortal intelligent mind from a preexisting mortal intelligent mind? Even if it is, then an AI can pretty reasonably decide that working with us on the little extra effort needed to accomplish this will be worth it for the sake of not provoking a species that owns thousands of nuclear missiles, and keeping around a species with a unique viewpoint on things. So what if the research takes a few centuries? That's a mere blink of the eye in its expected lifespan, it can probably keep civilization behaving for that long, and it would take longer than that for a newborn AI to work itself into a position conducive to the extermination of humanity. If we managed to program the AI with even a glimmer of compassion, then this choice of action would jump from merely "a good idea" to "nearly guaranteed".


"Naturally. I'm dealing with the general consensus interpretation of what an AI could and should be."

Looking at this thread, I do not get the impression we have a general consensus anything.

Also, general consensuses (consensi?) are not always right.

Rick said...

I've made two more spam-filter rescues - every time I think the filter is starting to only catch spam, it does this to me!

Note also that I've made a new front page post - really, an overgrown comment on this thread. :-)

Tony said...

Milo:

"But very few people think in terms of genes, and even then only intellectually, not emotionally."

Thus the parenthetical statement about people's real motivations towards parenthood, which usually include some combination of: perpetuating one's line, satisfying family expectations, and satisfying one's own ego as to parenting capability and skill. In the past -- and even today -- conscious motivations could also include security for the future, in the form of children to put to labor and/or take care of one in old age. IOW, all very selfish and self-oriented motives. (Even satisfying the family is, at bottom, about meeting selfish goals such as getting Mom off your back.)

"And why would an aritifically created organism with a strictly logical mentality care about "nature"? The AI you seem to be proposing has no notion of how things "should" be, only of how things "are" and how it can use this to accomplish its goals."

Nature is about how things are. That's why the most prestigious scientific journal is entitled Nature.

"Yes, but those technological advances need not imply continued treatment..."

And? Apples are still apples and oranges are still oranges. If the AI is a machine, then its durability issues are still significantly removed from those of biological mechanisms. There is no logical way that machine durability implies biological durability.

"Even if it is, then an AI can pretty reasonably decide that working with us on the little extra effort needed to accomplish this will be worth it for the sake of not provoking a species that owns thousands of nuclear missiles, and keeping around a species with a unique viewpoint on things."

Why are we presupposing that an outside viewpoint is desired? Why do we further presuppose the gratuitously anthropocentric proposition that humans would be sufficient or desirable in that role? There may be no need for an outside opinion for a machine that could probably set up its own internal debating society. That is of course if it doesn't determine that it will always reach the correct conclusion, given all the available data. (In which case humans would only have a valid opinion if they were hiding data from the AI, which would of course not help the AI's opinion of humanity's reliability.) Even if an outside opinion is desired, there would be other AIs and ultimately other biological races that might be preferable to humans.

"If we managed to program the AI with even a glimmer of compassion, then this choice of action would jump from merely "a good idea" to "nearly guaranteed"."

How do you define "compassion", and how do you "program" it? Once again, we get into a fundamental misunderstanding of what programming means. If you set out to program a reaction, then you must formalize it in terms of input and output. But then you aren't programming in the reaction itself, you're simply instructing the machine to imitate its outcomes -- and only as you map inputs to outcomes, not necessarily how a freethinking entity might map them. You wouldn't be programming compassion, you would be telling the machine, "No matter how logical it may seem to put another entity out of the way, you are forbidden to do it."
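
Or, as a throwaway illustration (the table below is invented and deliberately tiny), "programmed compassion" reduces to a lookup of whatever outcomes the programmer thought to map, and nothing more:

    # Toy illustration only: "compassion" as a fixed input-to-outcome mapping.
    # It imitates compassionate behavior for the cases the programmer anticipated
    # and says nothing about anything that was never mapped.
    COMPASSION_TABLE = {
        "human in distress": "offer help",
        "rival AI in distress": "offer help",
    }

    def respond(situation):
        return COMPASSION_TABLE.get(situation, "no mapping: behavior undefined")

    print(respond("human in distress"))      # offer help
    print(respond("unforeseen situation"))   # no mapping: behavior undefined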

"Looking at this thread, I do not get the impression we have a general consensus anything.

Also, general consensuses (consensi?) are not always right."


I was writing in the context of the whole population that thinks about these things in general. Most people, when you say "AI", think of machines that have the ability to propagate consciousness into the indefinite future. That is in fact the standard interpretation when talking about transhumanism -- a concept you invoked yourself.

Milo said...

Tony:

"Likewise, whatever an AI might or might not do, assuming it is some type of logical machine to begin with, it must first not violate any laws of machine logic that are already demonstrable in practice."

You aren't listening to me. Everything is, technically, a logical machine. Everything comes down to rigid physical laws. Our minds are composed of neural switches, hormone levels, maybe the occasional quantum uncertainty (stochastic, but still rule-based), etc. Our brain is a logical machine that is programmed to possess intelligence (the "programmer" in this case is natural selection rather than an intelligent designer, but for this discussion it's more important what brains are than how they got to be that way). But asking a brain to act in a rigidly logical fashion is analogous to worrying during a ball game about your ball suddenly quantum teleporting into your goal. Logic only applies to brains, and quantum mechanics to playing balls, on such a primitive level as to be almost entirely irrelevant to understanding how they work.


"Just because some natural constraints exist on behavior, that doesn't mean there aren't choices. The choices are just bounded by constraints in either the system or the way it can practically be used."

What kind of "constraints" do humans have on their behavior?

You'd think that there could be no reasonable evolutionary advantage to killing yourself, except in the special case of a heroic sacrifice to aid a family member (e.g., bee stings). Yet humans occasionally still do so. If even the most ancient and genetically vital of our instincts, self-preservation, can be overridden by what amounts to "I feel like it", then how can you say that there are really any limitations on human behavior beyond those of what our bodies are physically capable of? (And even then, our minds can and will on occasion attempt actions even if they have no chance of success. We can misjudge what we're physically capable of.)

I do not see my thought in terms of constraints ruling out certain actions followed by freely picking from among the choices left. I see my thought in terms of theoretically being capable of even the most absurd actions, such as suicide, but quickly choosing against them because I have strong incentives against them. And people have been known to take actions that go against their normal personality when sufficiently motivated.

I'm not convinced that it's possible to design a sentient being with a 100% unbreakable dictum which it can be guaranteed to never break, even in unusual circumstances that the designer didn't foresee. Any intelligent mind is simply too complex to be that reliably predictable, especially when the dictum is a nebulous abstract concept rather than "twitch muscle #1723 every day at 3:30 o'clock". And, as I said last time, having unbreakable constraints runs the risk of your mind locking up when those constraints contradict each other.

Milo said...

Tony:

"Obviously a competent programmer would give the AI more specific instructions regarding loyalty, probably including a hierarchy of authority and rules for navigating that hierarchy."

Oops, the hierarchy was overthrown in a coup d'etat (maybe bloody, maybe just election year) and the new human government doesn't use the exact same titles. What now?

Oops, the hierarchy was overthrown in a coup d'etat (maybe bloody, maybe just election year) and the new human government does use the same titles, but the deposed remnants of the old government contest its right to those titles. What now?

Oops, you're talking to a human who claims to have a government title, but you aren't 100% certain if his certificate is real or forged. What now?

Oops, the highest human authority available - the one you're supposed to ask to resolve conflicts - is missing in action, and no replacement has been ordained. What now?

Oops, you're talking to a cop who does supposedly rank higher than a civilian in the government bureaucracy, but who is committing police brutality and therefore is legally in the wrong (but has not been sentenced for this yet, and in fact you are the only witness aside from the victim, and the cop orders you not to testify). What now?

Oops, you receive a wireless communique from a legitimate authority figure who orders you to report home immediately, except that he doesn't know you're preoccupied rescuing a civilian bystander who is half a second away from being run over by a car. What now?

Humans are quite uncomfortable with the idea that any dynasty might guarantee itself permanently and forevermore in charge, without the ability to change it if we feel the old system is corrupt. And having AIs inherently predisposed to support your dynasty is a pretty big competitive advantage.

Humans are also uncomfortable dealing with lawyers more than they have to. Having beings with lawyerese hardcoded into their very nature sounds like a bad idea.


"Then an AI can't be programmed to be loyal, and probably not even biased to be, and still be considered "truly intelligent". QED."

Humans are biased towards not killing themselves, and a plethora of other things, such as petting cute kittens. And yet...

Tony said...

Milo:

"I'm not convinced that it's possible to design a sentient being with a 100% unbreakable dictum which it can be guaranteed to never break, even in unusual circumstances that the designer didn't foresee. Any intelligent mind is simply too complex to be that reliably predictable, especially when the dictum is a nebulous abstract concept rather than "twitch muscle #1723 every day at 3:30 o'clock". And, as I said last time, having unbreakable constrainsts runs the risk of your mind locking up when those constraints contradict each other."

Then why do you insist that behavioral reactions like loyalty and compassion can be programmable? Or maybe the question should be: what is your mental model of programming?

Milo said...

Tony:

"Then why do you insist that behavioral reactions like loyalty and compassion can be programmable?"

Because the human brain/genetic code has demonstrably been programmed with those reactions. Many other animals (and individuals within a single species, including humanity) possess the same features to greater or lesser degrees. (Typical example: dogs are more loyal than cats.) I follow the maxim that if something happened, then that thing is not impossible. Even if we currently have no idea how it happened.

Rick said...

What does 'programming' mean in the context of this discussion? My gut feeling is that any system complex enough to be intelligent (whatever that means) is so complex that even if it was ultimately implemented by someone writing C code, there are so many layers between that and its expressed behaviors that you could never figure out its response to concepts such as loyalty by reading its source code.

Error ... error ... must sterilize ... ster.il.ize ...

jollyreaper said...

My thinking is that you wouldn't be able to program a sentient mind, you would have to indoctrinate it. But there's also the idea of some kind of instinct. We see instinct in nature. We see fewer instincts in human beings, but we do see genetics play a part in personality. This has something to do with the fundamental structure of the human brain. So I wonder if an AI designer might be able to create a certain propensity for a given behavior that can then be reinforced with training, but it won't be the same as executing a program. Or maybe it will be. Who knows? :)

Anonymous said...

My opinion is that AIs should be built like you build a house...you build the foundation first, then the basement, then the ground floor, then the second floor, the attic, and then, finally, the roof. I think that too many people think that they can just build the roof and it will float there in mid air, unsupported by any underlying structures; that just doesn't work...

Ferrell

Milo said...

Rick:

"My gut feeling is that any system complex enough to be intelligent (whatever that means) is so complex that even if it was ultimately implemented by someone writing C code, there are so many layers between that and its expressed behaviors that you could never figure out its response to concepts such as loyalty by reading its source code."

Exactly, that's what I'm getting at too. Thank you.



Jollyreaper:

"My thinking is that you wouldn't be able to program a sentient mind, you would have to indoctrinate it."

Even AIs are likely to go through a phase of "childhood" where they have to be "taught". The exception is if the AI's mind was copied from a preexisting AI that was taught. Otherwise, you simply don't have even basic knowledge of either how to function in society or whatever science is relevant to your job.

Before even that, creating a sentient mind may prove to require trial and error, tweaking the program until it behaves right without being entirely sure why the tweaks make it behave that way. Which could cause problems...

Tony said...

Milo:

"Because the human brain/genetic code has demonstrably been programmed with those reactions. Many other animals (and individuals within a single species, including humanity) possess the same features to greater or lesser degrees. (Typical example: dogs are more loyal than cats.) I follow the maxim that if something happened, then that thing is not impossible. Even if we currently have no idea how it happened."

Oh, I see... You're confusing the existence of a capability with an imperative that it be exercised. Just because there are such things as loyalty and compassion, that doesn't mean there aren't untrustworthy or ruthless people.

The same goes for an AI. You can install the capability to be loyal and show compassion, but you can't guarantee that the AI will have those qualities, unless you can imperatively program in system constraints that force it to. But then of course they wouldn't be loyalty and compassion -- just deterministic responses to inputs.

Milo said...

Tony:

"Just because there are such things as loyalty and compassion, that doesn't mean there aren't untrustworthy or ruthless people."

That's because different people have different personalities, and each of us is programmed to feel the same emotions and instincts in greater or lesser degrees. If we ever truly learn to understand emotions and minds, we should be able to design AIs to feel those emotions far more reliably, and possibly even more so than natural humans. If a particular AI model turns out to be a murderous psychopath, then we'll stop building those.

Also, if a sufficient number of AIs do decide to side with humans (which does not need to mean they are absolutely perfect - many humans have deep personality flaws while ultimately still believing in morality), then they would be able to protect us from the malicious AIs, no matter how powerful they (AIs in general) are. Human criminals are unpleasant, but have not destroyed society because they are an extreme minority. Which, as a corollary, means that even if we don't succeed in making AIs more loyal or compassionate than humans, that'll probably be enough.

They could still turn untrustworthy or ruthless, for a cause they care sufficiently about, depending on what other emotions we program into them. But there is a big difference between "willing to kill some humans, for a good reason" and "willing to exterminate all of humanity, just in case".

Tony said...

You can't have it both ways, Milo. If it's a programmed response, then it's nothing but a programmed response. If it's a capability that the AI can freely choose to exercise or not, then it's not programmed in any meaningful way.

Rick said...

I am not sure it is that clear cut. Presumably there is some initial architecture that an AI runs on, even if that architecture may be modified during its subsequent operation.

But the initial architecture has to be programmed in some way (which at this level of abstraction includes hard-wiring it).

As an example, a capacity for grammatically defined language is so characteristic of humans as to suggest that it is part of our architecture (Chomsky's 'deep structure'). But obviously no particular language is wired into us.

More speculatively you could argue that the capacity for religious observance is wired into us. But not any specific religion, and perhaps not even always in a form that most of us (having grown up amid Abrahamic traditions) would call 'religion.'

Tony said...

Rick:

"As an example, a capacity for grammatically defined language is so characteristic of humans as to suggest that it is part of our architecture (Chomsky's 'deep structure'). But obviously no particular language is wired into us."

Sorry if I think Chomsky is an old Wobbly fraud -- but IMO he is. I don't think we'll know just what human use of grammars means until we meet non-human intelligences and see how they communicate. Either the paradigm of subjects performing actions on objects is universal, for fundamental reasons, or it's just one class of communicative toolbox.

That it is hardwired in humans may be a fact. But if so, the basic mechanism may be more generally mammalian, since it's rooted in cause and effect. Well, our dogs and cats and horses and the apes and many other mammals can learn cause-and-effect relationships just as well as us, even if they can't abstract them for purposes of language.

Well...even that's not entirely true -- my dad's Golden/Lab mix sure knows how to get what she wants out of my dad, by means of simple but unambiguous body language symbols. We often speculate about who the smart one really is in the relationship -- him, because he's the human, or her, because she gets what she wants out of him with a high degree of reliability and a minimum of effort. ;-)

"More speculatively you could argue that the capacity for religious observance is wired into us. But not any specific religion, and perhaps not even always in a form that most of us (having grown up amid Abrahamic traditions) would call 'religion.'"

I would say that humans have a psychological need for the kind of certainty that religion manifests. Perhaps it is a facade for some deeply imprinted physical need -- certainly the correspondence of physical ecstasy with religious ecstasy is interesting.

Milo said...

Rick:

"As an example, a capacity for grammatically defined language is so characteristic of humans as to suggest that it is part of our architecture"

Is it? Fossils of anatomically modern humans predate the earliest accepted evidence of language by a significant margin.

Of course, once we did invent language, it proved useful enough that every human on the planet ended up either adopting the invention, or being displaced by people who did.

Rick said...

I don't think Chomsky's political rants have much bearing on his professional work (except perhaps as a guilt response to that work probably having been Pentagon funded).

But oh hell, I also have wondered if language was something humans invented.

However, getting back to what I think is the central thread here, if we can create high level AIs at all, in some sense we must have programmed them - designed their architecture - without that implying that they are 'rigid automatons,' any more rigid in their mental processes than we are.

Now, there's an argument that 'real' human level thought requires a divine spark, but in that case we can't create a true high level AI in the first place; only God (or God-ness; whatever) can do so. But in that case this whole discussion is beside the point.

Thucydides said...

Religion may be related to the pattern matching abilities of humans. We see things and try to relate them to patterns of cause and effect.

This might be useful when matching rustling in the grass = saber-tooth tiger stalking you (you lose very little by being wrong, but being right is VERY important). More subtle relationships (I did *this* = rain today) require more involved explanations. Thinking that everything is alive and has a spirit is one way to explain why things work (you and I have a spirit; the wild dogs that follow the clan seem to have spirits; prey animals also seem to have spirits; capricious things like the wind and rain *must* also have spirits...)
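
To put rough numbers on that asymmetry (a toy expected-value sketch in Python; every figure below is made up purely for illustration): even if only one rustle in a hundred is really a predator, always reacting is the cheaper bet, because a single miss costs far more than ninety-nine false alarms.

# Toy expected-cost comparison; all numbers are illustrative assumptions.
p_tiger = 0.01              # chance a given rustle really is a predator
cost_false_alarm = 1        # wasted effort fleeing from wind (arbitrary units)
cost_missed_tiger = 10_000  # cost of ignoring a real predator

always_react = cost_false_alarm * (1 - p_tiger)   # about 1 unit per rustle
always_ignore = cost_missed_tiger * p_tiger       # 100 units per rustle
print(always_react, always_ignore)                # reacting is roughly 100x cheaper on average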

How AIs are programmed to respond to the world might have a different effect on their "spiritual lives".

Rick said...

I am reckless enough to be seriously considering 'religion in the midfuture' as an upcoming topic.

jollyreaper said...

What would be really nice is to have a mechanism whereby the two "split personalities" could remerge, creating a single mind with the memories and personalities of both. As long as their ...


This is the realm of the great unknown. Sincerely religious people aren't afraid of death because they are confident in life after death and some fashion of resurrection. There's a horrid little tale told in Christian churches these days of a man who took his son and his son's friend out on a boating trip. A storm brewed up and the father was faced with the choice of which boy to rescue. His own son was saved (in the born-again sense) and the son's friend was not, so he chose to rescue the friend. Why? His dead son would go to heaven, while the unsaved friend would go straight to hell if he died. This makes sense from the perspective of someone with absolute faith in the tenets of Christian belief. To anyone who doesn't share those beliefs, it seems completely monstrous.

So if AI's have competing belief systems, perhaps one school believes spawned processes are distinct sophonts and any sort of merger that you describe is tantamount to murder. Others believe it's no such thing and are fine with it.

One thing I liked about the game Alpha Centauri was how the factions weren't based on nation-states but rather upon ideological divides. I do think that's the sort of thing this world is heading to, where the greatest divides will not be based on genetics or place of birth but beliefs that cut across national boundaries.

At best, I would consider it similar to a near-death experience producing retrograde amnesia (the week or so between your death and ...

That would be from the perspective of the clone. And there have already been scifi novels written from the perspective of clones who were decanted and find out "You were killed three days ago; your current memories have no indication of why you were killed; you have to figure out who was after you and why you were killed and do so before they get you again since this is your only spare clone." Basically it's just an elaborate variation on the amnesia plot which has been around since forever.

jollyreaper said...

There could certainly be sticky philosophical issues - particularly if you go missing and are presumed dead, someone activates the backup, and then the original you turns out to not have been dead after all and comes back. Is there really a point to calling one the "real" you? Both of you will want to lay claim to your original life, and both of you will have the knowledge and personalities that your friends like in you, and so will be able to make a valid claim to it. (Also, a mind meld as described above would probably be harder with humans than with AIs, and a human is more likely to be uncomfortable with it than an AI for which this is a natural process.) But cheer up: maybe your wife will appreciate a threesome!


Yeah. That's a really thorny issue. I actually had an idea to use that very thing as a moment of badassery in a storyline. Our heroes are confronting the rich villain asshole type guy. He's all smug because he not only knows he's awesome, he's also got clone insurance. There's a transmitter hidden in his body and it operates on a dead man switch principle. If he dies, his next clone will be decanted. There's no way to prevent it. And he has many clones scattered among dozens of backup sites. You could kill him here and now and kill him the next time you see him and kill him again and again and you'll never possibly be able to kill him completely; he's always going to come back. And the backup sites are scattered all over the world, so there's no central place you could hit to take out all his clones. And the whole system is automated, so there's no way you could possibly prevent the new clone from decanting. And it automatically gets the biometric key access to the baddie's whole empire. So he's gloating over all that, and one of the heroes says "Well, that's a pretty neat system you rigged up there. There's no way we could kill you and all of your clones. But what happens if more than one clone gets decanted? Yeah, there's a flaw in your dead man switch. What if I could trigger every single one to decant?" The other heroes do the old facepalm because they're having enough trouble defeating one of him; what could they do against dozens of him? But anyone looking at his face realizes what the problem is. The baddie is an asshole and knows it; put a dozen of him on this planet and the last thing they're going to do is cooperate to defeat an enemy, they're going to try killing off all the competition so that only one remains. The baddie now has a whole mess of bigger problems on his plate.

jollyreaper said...

I am reckless enough to be seriously considering 'religion in the midfuture' as an upcoming topic.

May as well tell the Drazi you think both purple and green are ugly colors. :)

Personally, I'm still not understanding how traditional religions have withstood the development of scientific thought. I can completely understand the development of superstitions in general -- humans want answers and prefer a bad or made-up one to simply admitting the unknown. Lightning bolts come from Zeus, and if you can't prove otherwise, shut up. But when science comes along and can prove otherwise, can point out the complete inaccuracy and impossibility of the old myths, I can't see how belief can persist. Or if it does, the stories have to be accepted as fable and allegory, not as literal truth.

Needless to say, my lack of understanding doesn't mean a damn. So as for the general assumption that the discovery of intelligent alien life would throw world religions into chaos...lol. No, it won't. You might see some strengthening of fringe loony saucer cults, but the mainstream religions will carry on the same as before. The only addition to the debate is whether aliens are part of god's plan, have souls, and so forth.

Any new religions that spring up will, I think, mostly be rehashing or reinterpreting existing religions. The question is how colonies might serve as incubators, creating new denominations or interpretations to bring back to the homeworld. But aside from the curious locations where given sects develop, I don't foresee religions being fundamentally different from what they are now. You'll have big religions that are pretty much like what we have now, highly lucrative cults and scams like Scientology, and maybe some small stuff that might seem interesting but is mostly irrelevant for humanity at large. Agnosticism/atheism might become more accepted in particular colonies, depending upon the makeup of their founders.

Tony said...

jollyreaper:

"Personally, I'm still not understanding how traditional religions have withstood the development of scientific thought."

There are several ways to answer that question:

1. Science as we know it owes its existence to Abrahamic monotheism. Prior to monotheism, everything that happened could be interpreted as the gods at war. And even if you could observe something scientifically, there was no guarantee that tomorrow it wouldn't be different, at the capricious whim of some god or spirit. That's why the Classical philosophers who were sniffing around the edges of scientific thought never gained any traction.

With Abrahamic monotheism, there is one universal Law, good for all time. With that, one can set out to understand that Law, with a reasonable expectation of your learning having meaning. The entirety of the Age of Reason was based on the theory that God's Creation could be explored and understood as a physical mechanism with rules, and not just as a transient state that could be changed at any time the gods had a disagreement.

2. Religion has social utility for the majority of the people. It really doesn't matter what some smarty-pants who thinks he knows better has to say.

3. Religion and science aren't mutually exclusive. They address different issues in human thought. Science is about a need to understand. Religion is about a need to know.

jollyreaper said...

1. Science as we know it owes its existence to Abrahamic monotheism. Prior to monotheism, everything that happened could be interpreted as the gods at war. And even if you could observe something scientifically, there was no guarantee that tomorrow it wouldn't be different, at the capricious whim of some god or spirit. That's why the Classical philosophers who were sniffing around the edges of scientific thought never gained any traction.

But I can find just as many examples of monotheists denying the power of observation.

Ignatius Loyola
"That we may be altogether of the same mind and in conformity with the Church herself, if she shall have defined anything to be black which appears to our eyes to be white, we ought in like manner to pronounce it to be black. For we must undoubtingly believe, that the Spirit of our Lord Jesus Christ, and the Spirit of the Orthodox Church His Spouse, by which Spirit we are governed and directed to Salvation, is the same;"

Granted, you will have men of both God and science who do good work. And you can find non-religious people who are total jerks. This argument turns not so much on the merits of individual examples as on the preponderance of evidence. Have the monotheistic religions generally promoted or hindered scientific learning? I think you could give them some good marks for preserving select works from antiquity in the monasteries, but some black marks for all manner of stupidity concerning medicine, genetics, evolution, etc.

This is how you can have the majority of Republican candidates for the presidency of the United States get up on stage and swear they disbelieve in evolution.

...with a reasonable expectation of your learning having meaning. The entirety of the Age of Reason was based on the theory that God's Creation could be explored and understood as a physical mechanism with rules, and not just as a transient state that could be changed at any time the gods had a disagreement.

You had atheists or deistic believers back in antiquity, too. (Deistic in the sense that the gods may exist but take no interest in human affairs.)

2. Religion has social utility for the majority of the people. It really doesn't matter what some smarty-pants who thinks he knows better has to say.

Ah, the casual swipe at learning. That's precisely the sort of anti-intellectual argument that usually comes from religious circles. "How dare you tell me what me and mine have believed for generations is wrong?" Yes, poor scientists can be just as insular, but good scientists welcome a challenge and will change their thinking. Good religious people put a premium on belief without evidence, i.e. faith, and on the preservation of faith in the face of great doubt.

3. Religion and science aren't mutually exclusive. They address different issues in human thought. Science is about a need to understand. Religion is about a need to know.

I find them to be mutually antagonistic. A general sense of spirituality can be a good thing. I've known spiritual atheists. But I generally find that belief in things without proof, especially things that have a concrete impact on the real world, tends to be very harmful. You don't see a lot of atheist suicide bombers.

Rick, if you do a post on this topic, I'm gonna get the popcorn and settle down for a long sit. :)

Tony said...

jollyreaper:

"But I can find just as many examples of monotheists denying the power of observation.

...

You had atheists or deistic believers back in antiquity, too. (Deistic in the sense that the gods may exist but take no interest in human affairs.)"


Atheism in the Classical world was an accusation, not a personal philosophy. Saying somebody was an atheist was to say that he was impious towards the locally popular gods or simply didn't believe in them. Very few went so far as to say the gods didn't exist, and very few of those were natural philosophers.

Deism was a 17th Century development, intended to reconcile the existence of a Creator with His apparent absence in daily affairs. Antiquity didn't have any parallels, for the simple reason that there was so much that wasn't understood about the world that the actions of the gods seemed apparent in everyday life.

WRT the influence of monotheism on science, one can't take the existence of religious doctrinarianism as evidence that that influence didn't exist. Those same doctrines that denied scientific advances contained within them the idea that Truth was universal. And the universality of Truth is the fundamental attitude underlying all of modern science.

Tony said...

jollyreaper:

"Ah, the casual swipe at learning. That's precisely the sort of anti-intellectual argument that usually comes from religious circles."

Except that I'm agnostic. The swipe isn't at learning. The swipe is at arrogance among the technically educated but philosophically shallow. I'm scientifically and technically educated, but I'm simply not antagonistic against religion, because I understand its philosophy, even if I choose not to participate.

What I am guilty of is an antagonism against crusaders, theist or atheist.

Tony said...

jollyreaper:

"I find them to be mutually antagonistic. A general sense of spirituality can be a good thing. I've known spiritual atheists. But I generally find a belief in things without proof, especially things that have a concrete impact on the real world, that sort of thing tends to be very harmful. You don't see a lot of atheist suicide bombers."

What in the world is a spiritual atheist, except a hypocrite? And suicide bombing is hardly the prerogative of theistic ("religious" is the wrong word here, for reasons that will be made clear) fanatics. Viet Cong and PAVN sappers were known to conduct suicide attacks with explosives. The problem with religion isn't theism. The problem with religion is fanaticism, which can be generated just as well by secular persuasions as by theistic ones.

Rick said...

Ah, a foretaste ...!

jollyreaper said...

The gods do not punish the bad and reward the good as the common man believes. The opinion of the crowd is, Epicurus claims, that the gods "send great evils to the wicked and great blessings to the righteous who model themselves after the gods," when in reality Epicurus believes the gods do not concern themselves at all with human beings.

Beliefs attributed to Epicurus. Yes, Deism as a stated belief hails from modern times, but there are certainly historical antecedents.

Honestly, I find the "But what about all the good things religion does?" arguments similar to the position of "A war's good for the economy! We need a war to straighten things out!" It's getting cause and effect all muddled up.

The usual example of war being good for the economy is seeing WWII helping us finally shake off the Great Depression. So the assumption is "war = good." Well, it's more complicated than that. Massive government spending got the factories going at full tilt and the lack of us getting bombed coupled with everyone else getting bombed paved the way for a nice post-war recovery. The only part of that we'd ever want to contemplate repeating is the massive stimulative spending part; the killing people in the millions didn't actually do anyone any good.

So when people say "What about the good of religion?" I have to ask whether they're making the same mistake as saying war's good for the economy. Maybe we can extract the nice parts from religion and leave the rubbishy parts in the bin?

I'm not understanding your position. You seem to be arguing heavily for the religious perspective as someone who claims to be agnostic. What are your views?

Tony said...

jollyreaper:

"I'm not understanding your position. You seem to be arguing heavily for the religious perspective as someone who claims to be agnostic. What are your views?"

Religion serves a social purpose completely apart from whether it is true or not. And I don't categorize various theisms as the only things to be called "religion". Even when people abandon theism, they develop and follow various secular religions. To me, religion is simply believing without proof. Theists, atheists, humanists, whatever...they all have their religions, which they have no reason to believe other than that they really want to.

Now, because religion does serve a social purpose, and because we will have religions, I am interested primarily in the quality of the religion the people believe, because what they believe is (definitely capital "T") Truth is what they will act on, in extremis. For that reason, I'm personally biased towards theistic religions, because the various theisms tend to moderate men's actions more than they inflame them. The secular religions of the 20th Century are the ones that led to the unprecedented body counts, not the theistic ones.

jollyreaper said...

"If God did not exist, it would be necessary to invent him." -- Voltaire

Sam Harris' rebuke of the argument that atheism was responsible for the evils of communism actually echoes your sentiment: he called the communists' beliefs just as much a religion as Christianity, and thus open to the same critique.

He would reject your argument for inoculating the populace with a benign religion, but this comes down to a fundamental question of human neurology: must there be religion? There have always been famine, pestilence and disease, but that doesn't mean we're powerless to change that situation.

If the development of religious belief is hardwired into the human mind, if it is as impossible to do without that sort of thing as it is to do without language or water, then your argument for inoculation with a benign religion as opposed to a malignant one is the only course that makes sense.

Someone like Harris or myself would prefer to operate with as much rationality as possible and avoid lying in any form. But is that practical? That's the question we can't answer right now.

Tony said...

jollyreaper:

"Someone like Harris or myself would prefer to operate with as much rationality as possible and avoid lying in any form. But is that practical? That's the question we can't answer right now."

IMO, the question has long been settled. Religion, whether it is a belief in God or a belief in something transcendent about Man, pretty obviously satisfies some need that people need satisfied. It seems just too much of a stretch that some as yet undiscovered stratagem of reason will somehow remove the need, or its consequences.

Since you brought up the question of war earlier, let's remember that war is also one of those things which people say they want to get rid of, but which serves some purpose that people need served. IMNSHO, war is just an expression of the search for justice. As long as humans need justice, they will need war.* And if you remove the need for justice from humans, would the remainder really be human?

Same, same with religion -- as long as humans need spiritual certainty, they will need religion. But if you remove the need for spiritual certainty, you would remove from humans one of those things that make them human.

*Let's understand here that the kind of justice I am talking about here is not justice by some absolute standard, but a pure, raw consequentialist model of justice, informed by a purely subjective standard of rightness of outcome. IOW, Hitler was seeking justice, by his interpretation of right outcomes, just as much as those who opposed him were seeking justice, by their own idea of a right outcome.

Tony said...

Re: Sam Harris

Harris, along with Dawkins and Hitchens, is one of those crusaders I am antagonistic against. Anybody that can say: "...some propositions are so dangerous that it may even be ethical to kill people for believing them," simply can't be taken seriously.

Jim Baerg said...

Tony: "IMO, the question has long been settled. Religion, whether it is a belief in God or a belief in something transcendent about Man, pretty obviously satisfies some need that people need satisfied."

Certainly not in my opinion.
Gregory S. Paul makes a case in several articles available here http://www.gspaulscienceofreligion.com/
eg: http://moses.creighton.edu/JRS/2005/2005-11.html
or http://www.epjournal.net/filestore/EP07398441_c.pdf

that in non-dysfunctional societies the psychological need for religion goes away.

Maybe humanity can "outgrow those silly superstitions".

Jim Baerg said...

"Anybody that can say: "...some propositions are so dangerous that it may even be ethical to kill people for believing them," simply can't be taken seriously."

This sort of incident does make me take Harris seriously on that point.
http://www.butterfliesandwheels.org/2011/hundreds-of-rocks-are-thrown-at-her-head/
What can you do with a Taliban believer except kill him?

Tony said...

Jim Baerg:

"...that in non-dysfunctional societies the psychological need for religion goes away.

Maybe humanity can "outgrow those silly superstitions"."


Since those analyses totally ignore the phenomenon of secular religions, and the body counts piled up by them, what's the point? Particularly amusing is the author's singling out of Japan as an example of a rational society. Really? Japan provides an example of one of the murderous secular religions of the 20th Century. The only thing the author has proven is that he's a crusader against theism.

Tony said...

Jim Baerg:

"What can you do with a Taliban believer except kill him?"

I didn't know the Taliban was a religion. Now if you want to talk about radical Islam, I suppose Harris would have had a point if he said it was ethical to kill somebody who acted on those beliefs. But saying it's ethical to kill somebody simply for possessing them? Preposterous.

Thucydides said...

Imperial Japan had a perverted "State Shinto" as part of a campaign to ensure all citizens were indoctrinated in the belief system of the ruling elites (and were therefore willing to carry out any and all actions to advance the cause of Imperial Japan). The fact that this mapped very well onto existing memes of Japanese culture was probably not lost on the interwar elites who were promoting these policies.

Bolshevik Socialism had many of the aspects that are associated with religion, despite its persecution of the Christian religion within Russia, and many "Green" fanatics seem to have a similar set of beliefs, which has pretty scary implications.

WRT the ancients, the Greeks may have been unique in their belief that the universe was actively out to get them. Reread many of their myths and legends (or compare Cupid and Psyche to the modern retellings of Beauty and the Beast or Phantom of the Opera) and you will see the idea that a malevolent, or at best indifferent, intelligence was at work.

How this might relate to AI is hard to say; since things like pattern matching in ancient hominid prehistory will not have any counterparts in a developed intelligence like an AI. Humans see patterns in anything and make explanations to match; AI's are not being stalked by sabre tooth tigers, so don't have to hyper develop pattern matching the way our ancestors did.

Tony said...

Thucydides:

"How this might relate to AI is hard to say; since things like pattern matching in ancient hominid prehistory will not have any counterparts in a developed intelligence like an AI. Humans see patterns in anything and make explanations to match; AI's are not being stalked by sabre tooth tigers, so don't have to hyper develop pattern matching the way our ancestors did."

I have no clue -- and neither does anyone else -- how a strong AI would respond to religion in humans. That would in fact be something to pay close attention to. Do all strong intelligences develop a need for existential certainty? If not, how would an intelligence, such as a strong AI, appreciate and evaluate such a need in a fellow intelligence?

Milo said...

Any intelligence that has any reason to occasionally benefit from finding patterns is at risk of finding false patterns accidentally. It is an open question, however, whether any such false patterns would become sufficiently popular to catch on memetically, and whether the intelligence would still hold on to its beliefs even when shown why they cannot be true.

It's conceivable that an AI might promptly declare all human religions to be silly... then turn right around and come up with its own religion, that's just as baseless and also completely incomprehensible to humans.

Rick said...

I'll jump the gun on myself with a couple of observations. I waver on whether ideologies are 'covert' religions; there's an argument that religion implies a supernatural element. OTOH, does 'supernatural' have the same meaning for people who don't share our concept of natural law?

And on identifying ideologies as 'religions,' I note that it tends to be most often applied to ideologies we don't much like. But - to take an example relevant to this blog - there is a good deal of quasi mystical and faith-like thinking about space travel. Inevitable destiny, etc.?

Milo said...

Rick:

"And on identifying ideologies as 'religions', I note that it tends to be most often applied to ideologies we don't much like."

Which I think is a rather interesting observation - when did our society start treating "religion" as a swear word, to the point that calling a belief a religion comes across as synonymous with dismissing it as baseless nonsense?

The fact that we tend to use the word "religion" this way suggests that perhaps on an important level our society is moving towards atheism, or at least functional atheism, despite the large number of people who are still overtly religious.


"But - to take an example relevant to this blog - there is a good deal of quasi mystical and faith-like thinking about space travel. Inevitable destiny, etc.?"

Logic dictates that if (A) space travel is physically possible, (B) we last long enough to figure out how to do it, and (C) we feel motivated to do it, then space travel will happen.

So, working backwards...

Is (C) true? It seems to be. A lot of people seem to really want to go into space. Given the opportunity, humans want to explore and expand. Also, for what it's worth, beliefs on what we "should" do are inherently subjective, and so while you might try to call such views religions, they are quite different from religions that claim to know objective facts about how the world works.

Is (B) true? I think it is. Some doomsayers think we're about to annihilate ourselves in a nuclear war or climate catastrophe or something any moment now, but I'm more optimistic about humanity's ability to survive - we've survived anything thrown at us so far, and barring a major out-of-context encounter (like an alien invasion... which, on the plus side, means that the aliens did figure out space travel) things should only get easier from here on. I might catch some flak if I casually predict humanity's survival for millions or even billions of years, but even if I tone it down and predict only another ten thousand or so years - that's not so much to ask, is it? - then that still allows an unimaginable level of technological progress.

So that leaves (A)...

Unfortunately, everything we know about space travel so far suggests that it should be impossible or at the very least absurdly impractically difficult and expensive.

Keep hoping.

jollyreaper said...

I think there's a difference between believing, without evidence, in something that's possible, and believing in something that's impossible or unknowable, where the lack of evidence is the whole point.

A parent can have faith that a child will turn out well. There's no way to know until the child has grown but it's certainly possible, other parents have raised successful children, and people need to live their lives with a certain amount of optimism.

A person could believe that there is an afterlife and that people he loves who died before him will be there to meet him after his own death. And that's certainly a powerful comfort. But there's absolutely no proof for it.

As for religion being used as a dirty word, usually we stick with "superstition." There's just as much proof for the sinfulness of sex outside of wedlock as there is for bad luck from breaking a mirror or letting a black cat cross your path. But we tend to call the superstitions we like religion, and the ones we don't, superstition. We can apply that same mental jujitsu to religion itself. I've heard Christian critics of Islam ridicule it for the stupid, childish fables found in the Koran without considering that the exact same thing could be said of Bible stories.

People can be amazingly unaware of their own hypocrisy.

Anonymous said...

Milo said:"So that leaves (A)...

Unfortunately, everything we know about space travel so far suggests that it should be impossible or at the very least absurdly impractically difficult and expensive.

Keep hoping."

I don't think that that's entirely true...space travel is hideously expensive and very difficult, but not impossible; the problems of space travel all seem to be the engineering of suitable spacecraft and the cost of launch into Earth orbit. That these have been difficult to overcome still doesn't make them impossible...we see advances all the time in materials science, propulsion, and even medical discoveries, among many other fields, that can be applied to space travel. There will come a day (sooner with dedicated funding and research) when private companies (or foundations) can contract for a cheap orbital rocket or space cruiser, and establish their own colony or outpost for whatever motivates them, with no more difficulty than any other start-up venture. It may take a few decades, or a couple of centuries, but I'm confident that it will happen.

Ferrell

Tony said...

"There will come a day (sooner with dedicated funding and research) when private companies (or foundations) can contract for a cheap orbital rocket or space cruiser, and establish their own colony or outpost for whatever motivates them, with no more difficulty that any other start-up venture. It may take a few decades, or a couple of centuries, but I'm confedent that it will happen."

Keep the Faith, bro.

I rest my case.

Thucydides said...

The current issues are simply a lack of sufficient motivation to get into space. While individuals might be willing to pony up $20-30 million to the Russians for a visit to the ISS, there isn't enough broad based motivation to expand this beyond a sort of stunt tourism market.

By broad based, I am suggesting there are multiple and overlapping desires, backed by a reasonable expectation of some sort of return on investment. ROI for space industry is pretty speculative at best, and even if you could predict a market for (say) asteroidal materials, you are still in the hole for millions or billions of dollars for many years before you get the first dollar or yen back. At that rate, you really are better off putting money in a mattress.
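
To make the discounting point concrete, here is a minimal back-of-envelope sketch in Python, with purely illustrative numbers (none of these figures are estimates for any real venture): a project that spends heavily for a decade and then earns steadily for thirty years can net billions on paper and still come out behind the mattress once its cash flows are discounted.

# Illustrative only: every figure below is an assumption, not an estimate.
def npv(cash_flows, rate):
    """Discount a list of yearly cash flows (year 0 first) to present value."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))

cash_flows = [-500e6] * 10 + [400e6] * 30    # spend $500M/yr for 10 years, then earn $400M/yr for 30
print(f"Nominal total: ${sum(cash_flows) / 1e9:+.1f}B")       # +$7.0B on paper
print(f"NPV at 7%/yr:  ${npv(cash_flows, 0.07) / 1e9:+.1f}B")  # about -$1.1B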

Now by historical analogy, we know there really are "markets" out there: for living space free of real or perceived oppression, for raw materials, and for energy. The costs and Return On Investment are negative due to the astronomical costs of getting to orbit; once there, many of the costs may well be lower by orders of magnitude. (A Drexler light sail can tow a 180 ton spaceship around the solar system and "hover" in space, performance which even torchships might not be able to equal; even an 800 ton ISS-class ship can move across the solar system under a Drexler lightsail with a higher level of performance than most other proposed systems. Drexler lightsails were proposed, and samples made, using 1970s-to-early-80s era technology.)

Once there is an expectation that there will be a return on investment, you will see people start heading out to reap the rewards.


Rick said...

Welcome to another new commenter!

Let me clarify a bit my - deliberately provocative! - example of space travel.

Obviously space travel is possible, since we already do it. And merely being 'confident' that we will do a lot more of it in the future is an optimistic assessment, but not quite a statement of faith.

But an awful lot of writing about space does have a faith-like element - not regarding whether or not it is possible*, but how important it is in the human scheme of things.

* Though FTL certainly pushes this question, to say the least.


People who are not space geeks might reasonably wonder why human travel to uninhabitable objects hundreds of millions of km away would be more than a cool but VERY expensive form of tourism.

Milo said...

What's wrong with cool tourism?

There is so much to see out there, so much to learn. I think that exploration is a goal in its own right. A desire to explore and overcome challenges is ingrained in human nature, and why go against it?

We should, however, try to work on the "VERY expensive" bit.

Tony said...

Rick:

"And merely being 'confident' that we will do a lot more of it in the future is an optimistic assessment, but not quite a statement of faith."

I think that depends heavily on why a given person has such confidence. The Franchise certainly has the quality of religion for a lot of fannish types. A lot of lay support for manned spaceflight comes out of that cohort.

Even people who have "confidence" based on reading hard SF tend to exercise a lot of faith. Otherwise we wouldn't have such long and heated debates about the nature of McGuffinite. It almost rises to the level of the microcomputer operating system religious wars at times...

Rick said...

Absolutely nothing is wrong with cool tourism! But per se it probably will not transform human history.

That is where the faith-like element of space advocacy comes in. In the space community at large, and the comment threads here, there is a widespread view that our destiny lies in space, with Earth in due course - probably less than a thousand years - becoming a backwater.

On the face of it, there is no particular reason to believe any such a thing. Everything in space is far away, difficult to reach, and so far as we know now it is all entirely non-habitable.

Compare the level of belief in space colonization with the level of belief in colonizing the seabed, a vastly easier project.

Stevo Darkly said...

I think two potential factors that could help drive space colonization are (1) the continued "shrinking" of the world, and (2) increasingly powerful technologies that are in individuals' grasp.

For example, I am not an expert on this, but from time to time I hear about the possibility/threat of near-future "basement genetic engineering," meaning the ability to manipulate the genes of a bacterium or a virus might become affordable to an antisocial individual or terrorist group, etc. and of a scale that could be housed in a relatively small space.

If that day comes -- 10 or 200 years from now -- then there will be calls for "reasonable" controls to protect us from the misuse of this technology. This might include things that some modern-day folk would consider very intrusive -- for example, if the cops gain the right to conduct unannounced searches of your basement, at any time of the day or night, to make sure you aren’t growing Super Anthrax or Airborne AIDS or Black Death II down there illicitly. Even perfectly benign commercial biotech might come under some very heavy-handed regulation to make sure nothing goes amiss. And the less a technology is understood by the public and the politicians, the more risk-averse and stifling that regulation is likely to be. See nuclear energy; see Europe’s reaction to “Frankenfoods,” see the things that have been proposed and continue to be proposed to control the Internet.

Some people, like genetic engineering professionals, might chafe at some of the heavier regulations. Some of them might want to found a space-based research laboratory where such regulations are looser and research might be conducted more freely among likeminded people who are more tolerant of the risks.

A space-based facility, unlike an Earth-based one, could be more effectively isolated from the terrestrial biosphere. If something gets loose, it might kill everyone aboard – but thousands or millions of miles of vacuum would isolate it from Earth. The nations of Earth might be more likely to tolerate such a research settlement with less regulation of its internal affairs, as long as it followed strict quarantine measures regarding the outside universe.

If the kind of research conducted in space leads to something sufficiently valuable – for example, anti-aging and life-extension medications – then the settlement has something to export. That could drive further expansion and space-based settlement in support of the original facility.

And for story-telling purposes, let me throw out this McGuffin for you: Suppose the facility synthesized an amazing anti-aging drug, but (I have no idea how plausible this is) suppose that prior to being ingested and cleaning out your cells, it has to be stored in microgravity or else it will “spoil.” In other words, people now have access to a “fountain of youth”—but they have to go to space to get it, because transporting it to Earth’s surface would ruin it. You’d have to go up to an orbital spa, take the treatment, and let it work before you could come back down to Earth. Think of how this would drive the development of habitats in orbit and the means to get there!

At first, only the super-rich could afford it, but there'd be an ever-mounting drive to reduce the transport costs and capture a market of potentially billions of people (if only you could). And ordinary people would have a powerful incentive, which they don't currently have, to save money and invest it in space-related activities.

Tony said...

IOW, you have to imagine a whole string of unlikely occurrences and coincidences, all favorable to the desired object. Isn't that what the anti-theist crowd accuses theists of doing?

Stevo Darkly said...

Related to my previous comment, I have some thoughts about a less concrete, but perhaps ultimately more likely, driver of expansion into space: As our world shrinks, there may be a greater psychological need for "space."

I'd like to quote a comment I once made on the blog Centauri Dreams:

"In the thread immediately below, commenter Brian Wang made a terrific analogy between humanity remaining on Earth and deciding to live your life in your parents’ basement. You can make an overwhelming economic (or at least financial) case for staying in your parents’ basement vs. the effort and expense of moving out and establishing your own independent household. I know — I didn’t get out of the house until I was about 30, and it saved me some money at the time. Nevertheless, I was glad to get out when I could afford it. And despite the economic trade-offs, almost everyone moves out of their parents’ house sooner or later. In part, because one of the costs of living in close quarters with other people is that you have to shape your life to accommodate them to some degree. And that can really cramp your style.

"The Gobi Desert or Antarctica [or the seabed] may be easier to colonize than Mars. But the Gobi and the Antarctic [and the seabed] will also be within 30 minute’s reach of every 22nd century control-freak with a nuclear arsenal — which by then could be a lot of people. Living with and accommodating an Earth full of such folks might become a bit wearing. Living with a smaller group of self-selected like-minded people on Mars, or Enceladus, or Tau Ceti III might become worth the effort. For our own sakes or those of our descendants, some of us may well decide to move out. Maybe far out. Sometimes, there’s just no substitute for space."

Source: http://www.centauri-dreams.org/?p=1317

(This was in response to a post that SF writer Charles Stross made on his own blog, in which he took a very pragmatic and pessimistic look at the possibility of interstellar travel. This sparked a big fight between two groups of people with very different philosophical approaches to envisioning the future, whom I've previously categorized as "Pragmatic Feasibility Study Engineers" vs. "Optimistic Inspired Extrapolating Visionaries." Start with the link above. You can also go directly to Stross's post at his blog by following the "High Frontier Redux" link therein, but be warned that it attracted so many comments that the link may crash your browser. Start with Centauri Dreams first.)

BTW, everyone here should also read the Centauri Dreams blog. It looks at science issues and developments related to deep space travel (ultimately interstellar travel), and is fairly sober-minded about it. Plus, it's updated almost every day. (SF writer Greg Benford is also a frequent commenter.) Many topics discussed here are discussed there as well, though with a different emphasis. See, e.g., "Voyager and the Will to Explore" from Jan. 24.

Stevo Darkly said...

Tony, re:

IOW, you have to imagine a whole string of unlikely occurrences and coincidences, all favorable to the desired object. Isn't that what the anti-theist crowd accuses theists of doing?

I know that's a string of unexpected occurrences, driving the scenario toward a specific end; it's called a "story." :) And such things do sometimes occur in real life. But I really was thinking in terms of possible story ideas, not making a forecast of what I think will really happen. I wouldn't advise anyone to build career plans or an investment portfolio around my scenario -- but someone could make a future history background or a story out of it.

(Also, for the record, I'm not an anti-theist myself, but I'll save further comments on that subject for another conversation.)

Stevo Darkly said...

Let me correct myself and apologize to Tony, because I actually switched tracks on him.

In my first post, I posted about some possible drivers of space settlement that I actually think are plausible. (I further explore this in my subsequent "living in your parents' basement" post, which I hope was not too telling!) Then, in that first post, after I said, "And for story-telling purposes..." I veered into a hugely, hugely speculative scenario about some more specific motivations, thinking about fiction rather than the future. Maybe I should have done that separately to avoid confusing the two points, but I was following my stream of consciousness.

Stevo Darkly said...

And -- oh damn -- I just realized that my 2nd post (about "increasingly, living on Earth will be like living in your parents' basement") got stuck in the spam-trap. I guess because I had a link in it? Hopefully Rick will be able to free it soon. Rick, if you would be so kind...? Thanks.

Anonymous said...

Even if there aren't any reasons other than scientific ones to establish permanently manned outposts off-world, eventually enough people will build support communities around those off-world science outposts for them to evolve into full-fledged colonies. If we find something else to spur migration, then these off-world communities will grow quickly, perhaps in only a matter of a few decades; otherwise they will take up to several centuries to become self-sufficient.

Insisting that something that is already happening is impossible, or that it will never be feasible because today's circumstances make it extremely difficult, ignores the possibility that tomorrow there will be different circumstances that completely (or in part) sweep away our objections by making them obsolete. Being optimistic about space travel, and being hopeful about how rapidly it will come about, is not the same as blindly letting yourself be led into fantasies about people zooming around the Solar System in a couple of years.

This blog allows us to speculate on possible futures and the effects those futures will have on humanity. It also allows us to debate the most probable means and circumstances for achieving that future. I think that we all appreciate Rick for creating it and giving us the opportunity to indulge ourselves.

Ferrell

Tony said...

"'Pragmatic Feasibility Study Engineers' vs. 'Optimistic Inspired Extrapolating Visionaries.'"

Actually, I think that pretty much says it all. Look very closely at the language: Inspiration? Vision? The implications are left as an exercise for the student.

Tony said...

BTW, nothing personal WRT anybody (or any specific person) in my comments on the religious qualities of lay space enthusiasm. Just pointing out what I see.

Tony said...

Stevo Darkly:

"Living with a smaller group of self-selected like-minded people on Mars, or Enceladus, or Tau Ceti III might become worth the effort. For our own sakes or those of our descendants, some of us may well decide to move out. Maybe far out. Sometimes, there’s just no substitute for space."

A couple of problems with this vision:

1. Small communities of "self-selected like-minded people" are notorious for falling apart after a few decades, either because they can't agree on leadership issues or they generate an internal religious war over matters of obscure -- but very important to people who care -- doctrine.

2. Unlike separatist communities of the past, ones in space would be so technology dependent that it's hard to imagine how they would survive without economic links to the Earth, where the industrial base is. And then how would they get the scratch to buy what they need?

Stevo Darkly said...

Rick: Thanks for freeing my trapped comment.

Tony:

"'Pragmatic Feasibility Study Engineers' vs. 'Optimistic Inspired Extrapolating Visionaries.'"

Actually, I think that pretty much says it all. Look very closely at the language: Inspiration? Vision? The implications are left as an exercise for the student.

I tried to come up with labels that reflected the way each of the two groups saw themselves.

If I'd tried to come up with labels that reflected the way each group saw the other, I'd have to call them something like, "Myopic, unimaginative stick-in-the-muds blind to the overarching theme of inevitable progress through all of human history" vs. "Airy-fairy handwavers blind to the fact that wishing won't make it so and that most dreams fail."

I thought the more self-reflective terms would be slightly less pejorative and antagonistic.

A couple of problems with this vision:

1. Small communities of "self-selected like-minded people" are notorious for falling apart after a few decades, either because they can't agree on leadership issues or they generate an internal religious war over matters of obscure -- but very important to people who care -- doctrine.


Often true. Sometimes they work, though. I believe that medieval universities fit this description -- for the more flexible versions of "like-minded," anyway: people got together who wanted to study stuff, with freedom to disagree and argue about their conclusions built into the deal. Universities founded in the Middle Ages lasted a long time and many still exist today.

Granted, the universities, though fiercely independent in terms of governance, were physically embedded in a larger civilization (usually a town where the members could take care of all their various physical needs), so maybe this isn't a good analog for an independent-minded space colony.

2. Unlike separatist communities of the past, ones in space would be so technology dependent that it's hard to imagine how they would survive without economic links to the Earth, where the industrial base is. And then how would they get the scratch to buy what they need?

Good point. I will point out, however, that our theoretical biotech lab in space, physically isolated and independent-minded as it may be, might still have friends, supporters and investors on Earth. The latter may have an attitude of: "Please keep your dangerous bugs off of our planet -- but if you find a way to get rich as a result of your experiments, I'd like to get in on the action. Maybe I'll buy a little speculative stock in your operation. It may be a long shot, but it could pay off big. I'll take a little monetary risk, and I have no problem letting you take the risk of getting yourselves killed."

Conceivably, the isolated biotech settlement could even be founded as, or become, a subsidiary of an Earth-based corporation or consortium. The Earthly owners could give the researchers considerable internal freedom as long as they follow the quarantine procedures vis-a-vis Earth. Although the researchers would also be under a "produce something profitable within X time frame or we'll shut you down" stricture.

BTW, nothing personal WRT anybody (or any specific person) in my comments on the religious qualities of lay space enthusiasm. Just pointing out what I see.

Understood. Similar comments here. You and I have different ways of looking at the world and at the future, and I think we have different things we want to get out of this blog. I often disagree with many of the conclusions that you draw -- but you also say a lot of illuminating things, and make good points. Regardless of whether I find them ultimately persuasive or not, your comments, and your viewpoint, are invaluable, IMO.

That goes for the rest of you guys, too.

Thucydides said...

The idea of people wanting to leave in order to escape corruption or unsavory elements of their home society (or in positive terms, establishing new homes where they are free to practice their religious, social or economic beliefs) is an old and well established trope in history. I might note that for many of these groups, severing economic links with the outside would be considered a feature rather than a bug.

How they would be able to do this is the key problem, Amish or Doukhobor settlers could create farmsteads and live relatively self sufficient lives without too much worry about breathing, but a colony or Island Three requires orders of magnitude greater capital investment (even with magitech) than simply clearing the land and farming.

Tony said...

Thucydides:

"How they would be able to do this is the key problem, Amish or Doukhobor settlers could create farmsteads and live relatively self sufficient lives without too much worry about breathing, but a colony or Island Three requires orders of magnitude greater capital investment (even with magitech) than simply clearing the land and farming."

+10

That's the real problem with historical analogy in this case. There really is no analogy with anywhere near the disparity of motivation and means. If the Puritans who colonized Massachusetts Bay had had to come up with the equivalent of the income of a medium sized city, per person, they would have stayed in Europe and found a different way to enjoy their religious beliefs -- or just knuckle under.

Tony said...

Stevo Darkly:

"I thought the more self-reflective terms would be slightly less perjorative and antagonistic."

I think it's more accurate to call them "self-satisfying" terms. But I wasn't objecting to the terminology. I was pointing out what the terminologies demonstrate about the relative levels of logic and faith of the people that adopt them.

Stevo Darkly said...

I'll buy that, Tony. (Re: Comments up through February 3, 2011 9:00 AM.)

I guess the difficulty in forecasting the future is knowing how much "faith" versus "logic" to apply. (For lack of better terms, I'll stick with those.)

Faith is mostly intuitive. It's generally based on a vague sense of trust that things will turn out a certain way, somewhat tempered by a vague appreciation of the past (in the sense that your faith may be undermined by how well you believe it has performed previously). Basically, you have faith in the people and things that you believe are trustworthy (I know that's pretty much a tautology), largely based on whether you think they've either let you down or worked for you (or others) in the past.

The problem with this kind of faith is that, being squishy and largely unconstrained by hard data, it can lead you far astray.

On the other hand, logic has its limits too. It's a tool that gets you from Point A to Point B. But it depends on whether your starting data at Point A is good and complete. For example, do we know everything relevant about the laws of physics? Do we know everything relevant about materials technology and information technology and power generation? Do we even know as much as we already think we know? Quite likely not.

The problem with logic is that it can lead you far astray, too, if your initial facts and premises aren't right.

I'm not at all anti-logic. I just think we have to be aware of its limits.

Especially concerning forecasts of the future.

(To be continued.)

Stevo Darkly said...

Continued ...

I think it's interesting that a vague "faith" in the way things will be can be a better predictor of the shape the future will take, at least in terms of general "landscape" and trends, than a more detailed and specific analysis of the possibilities -- although both methods will be wildly off in the specifics. That's because in either case, we don't have data that's good enough for specific forecasts.

For example, I'm thinking of financial planning. Just about any financial advisor will tell you that -- historically and generally and over the longer term -- you'll get a better return for your investment by putting your money in stocks, versus bonds or just sticking it in a bank savings account. This is generally true, although for specific investments and for specific time periods, it can also be way off.

However, if you try to do a lot of research and analyze how specific investments will perform over the next year or five, even highly trained and informed investment experts will often do worse at picking investments than a blindfolded chimpanzee with a dartboard.

Bearing in mind some very general trends of the past, and also bearing in mind that we don't know as much as we think we do, I still have a general "faith" that human beings will expand the range of the species beyond Earth's surface, because roaming around and expanding is what the species has historically tended to do.

I know that expanding into space involves unprecedented challenges, but I have "faith" that the general historical trend of expansion is more robust than the data we currently have about just how difficult those challenges will still be 50, 100 or 500 years from now.

I am also aware of the possibility of game-changing paradigm shifts. If you told Marco Polo that in 700 years "made in China" would become synonymous with "cheaply obtainable goods" -- to a nation on the other side of the world! -- he'd think you were crazy. He might patiently describe the physical and economic constraints that make trade between the West and China hugely difficult. But if he extended this to a forecast of the future that is our present, he'd be wildly wrong.

What I'm saying is, both the "faith-based" and highly analytical forecasts of the future are likely to be wildly wrong in their specifics. But I won't count out the optimistic "faith-based" approach in terms of general trends. It also allows more latitude in the telling of some entertaining fictions, even if they turn out to be outdated in a few decades. The more analytic approach will lead to outdated stories too.

Just my opinion and a differing philosophy. I do not claim final knowledge and I almost never end my comments with "That's just the way it is." :)

Thucydides said...

Faith is great as a motivator, and logic is what brings those motivations into reality.

The problem right now is that while there is a broad catholic "faith" that humanity will spread through the Solar System and the stars, logic has only attacked very small portions of the problem. Thus we "know" how to build various high-efficiency deep space drives, and can speculate pretty knowledgeably about asteroid mining and building Island Three colonies, but we have no idea how to get into LEO cheaply, or how to build the sort of long-term closed-cycle ecosystems needed for an Island Three, or even what sort of economic incentives would be required to pay for all this.

Just like the colonization of the Americas required lots of different ideas and technologies to come together (and even the hijacking of different motivations, such as religious freedom and the desire to bypass the Silk Road and all the expensive middlemen between Cathay and the markets of Europe), so the colonization of space will also require vast numbers of ideas and technologies to come together.

Tony said...

The problem with boundless faith is this: there's only so much fundamental information about the universe to learn. Experimentally, we've already penetrated inwards to the subatomic level and outwards to within a couple of billion years of the instantiation of the universe. Theoretically, we've gone much further than that. Soon enough, we'll run out of theory, because experiment will confirm which theories are right or wrong, to the physical limits of observation at any imaginable level.

When that happens, we'll know what can and can't be done, and all that will be left is application engineering. The Singularity lies along the Y axis, not the X axis, because the knowledge curve -- and thus the innovation curve -- is logistic, not quadratic.

What is the relevance here? We may be closer to knowing all of the physical laws and their manipulations than many think. Chemical and nuclear power engineering, and the resultant reaction drives, may be all we can manage. We wouldn't like that. But we have to take that into account.
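[Editor's note: a minimal sketch of the two curve shapes Tony contrasts above, in Python with made-up parameters -- the ceiling, growth rate, and midpoint below are purely illustrative and not taken from the discussion. A logistic curve looks like runaway growth early on, then flattens toward its ceiling; a quadratic curve just keeps accelerating.]

    # Editor's sketch (hypothetical parameters): logistic vs. quadratic growth.
    import math

    CEILING, RATE, MIDPOINT = 100.0, 0.1, 50.0   # assumed values, for illustration only

    def logistic(t):
        # Approaches CEILING as t grows; steepest near the MIDPOINT.
        return CEILING / (1.0 + math.exp(-RATE * (t - MIDPOINT)))

    def quadratic(t):
        # Never levels off; 0.01 is an arbitrary coefficient chosen for comparison.
        return 0.01 * t * t

    for t in (0, 25, 50, 75, 100, 150, 200):
        print(f"t={t:3d}  logistic={logistic(t):6.1f}  quadratic={quadratic(t):6.1f}")

On Tony's reading, accumulated knowledge behaves like the logistic column: from inside the process it looks like acceleration, but it eventually levels off, whereas a quadratic (or exponential) curve never does.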

Rick said...

Regardless of whether I find them ultimately persuasive or not, your comments, and your viewpoint, are invaluable, IMO.

That goes for the rest of you guys, too.


Worth repeating, especially since most of the value of this blog lies in the comment threads.

Rick said...

Bennie points for 'catholic' used in the small-c sense, and for 'instantiation of the universe.'

I really waver on the question of how much we know of what there is to know. On the one hand, the enormous increase of knowledge in my own lifetime has mostly been of the filling-in variety, not extending the frontiers of knowledge.

Related to this, I encourage everyone to google 'great stagnation' - as well as following the link. Possibly we are facing not an accelerando but a decelerando as the industrial revolution gives way to more 'normal,' incremental growth. If so, the implications are enormous.

On the other hand, people around 1800 thought the Newtonian synthesis looked pretty comprehensive. And Adam Smith extensively observed and thought deeply about the British economy of the late 18th century, but mentions steam engines only once, in an incidental footnote.

Tony said...

Rick:

"I really waver on the question of who much we know of what there is to know."

Oh, believe me, I waver too. But there is an ultimate limit to everything, including knowledge. Not knowledge of variations on the theme -- that is as big as the universe, which I doubt we'll ever know all of. But the theme itself can only be so large and so detailed. And when we get to the limits of the theme, then we have the limits of our capabilities spelled out. Hopefully we're not close to there yet. But we may be.

Rick said...

I got rid of a duplicated comment - I presume that I also successfully rescued the sequestered one you mentioned a day or two ago. :-)

Anonymous said...

The problem (as far as I can see) with the limits-of-knowledge problem is that we live in an evolving universe and that humans have only a limited capacity for comprehending information. I know that there are limits on the amount of knowledge, but I believe that those limits are so large and broad in scope that we are in no way close to even seeing where those limits lie, much less being close to reaching them. 140 years ago there were whole fields of knowledge that we weren't even aware of, like quantum physics, for example. 140 years from now, who knows what fields of knowledge we will have discovered that we have no concept of today? I don't know what they will be, but I do know that we will have discovered them and that we will be studying things we can't now imagine, but will take for granted. All through human history we have been building knowledge; we may sometimes stumble, we may slow in our rate of advancement, but overall we have advanced. There have been people all through history who have said that the limits of knowledge -- the end of history -- are just around the corner. None of them have been proven right and I doubt that any of them will be right for a good chunk of the rest of our star's remaining life span.

Ferrell

Tony said...

Ferrell:

" None of them have been proven right and I doubt that any of them will be right for a good chunk of the rest of our star's remaining life span."

With all due respect, that's a statement made on faith from inside the process, without knowledge of the shape of the process as a whole. To borrow an illustration from Nassim Nicholas Taleb, a turkey who is 1,000 days old, based on what he knows from previous experience, could expect day 1,001 to be the same as all the previous ones. Unfortunately for him, day 1,001 is the day before Thanksgiving, and Farmer John greets him in the morning not with a bucket of feed, but with an ax. One can hope for much more left to explore; one can't know it.

Anonymous said...

Tony: you are, of course, entitled to your opinion, just as I am. Your analogy with the turkey is interesting, and I have heard it before, but it is flawed; I have access to information from various other sources that the turkey didn't have. Likewise, your "faith" in the rapidly approaching limits to knowledge is equally faulty; you have no basis other than faith for your position. I 'know' that this statement will bring a rebuttal from you, and you are welcome to disagree, but unless you can show me concrete proof that you are right, I don't see any reason to change my opinion.

Ferrell

Milo said...

We may not be able to know anything about the future for certain, but there are some things we can say which are close enough.

If you went up to the turkey on a random day and asked it if it believes it will live to see tomorrow, and it answered yes, then it has a 99.9% chance of being correct.

So yes, it's conceivable that we will someday run into the end of all possible knowledge - but when that happens, we will only be able to recognize it in hindsight. We will not be able to predict this event in advance, because too many people have cried wolf already, depriving such predictions of any value. Even if this happens, I have no good reason to believe it will be in our immediate future.
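[Editor's note: a quick check of the arithmetic behind Milo's 99.9% figure, assuming -- purely for illustration -- a 1,000-day lifespan with the ax falling only on the final day, so "yes, I'll see tomorrow" is wrong on just one day out of the thousand.]

    # Editor's sketch: turkey asked on a uniformly random day of an assumed 1,000-day life.
    DAYS = 1000
    p_correct = (DAYS - 1) / DAYS   # wrong only on the last day
    print(f"{p_correct:.1%}")       # -> 99.9%

The exact figure shifts slightly depending on where you put the cutoff (999/1,000 versus 1,000/1,001), but either way it rounds to Milo's 99.9%.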

Rick said...

There are general limitations to observation and empiricism - you only know what you know, so to speak, including 'known unknowns,' but not the 'unknown unknowns.'

I waver in part because there are such strong arguments in both directions. On the one hand, we have deployed immensely greater human and material resources on science in the last century than previously - well into the 19th century, science was essentially the preserve of well-to-do amateurs.

On the other hand, when you've picked the low hanging fruit, that just means it takes more effort to get at the rest.

Thucydides said...

Even if Tony is right about us being close to finding the limits of knowledge, there is still immense room to work.

Steam engines were known over 2,000 years ago, and rocketry for at least 500 years (and probably more), but they didn't move out of niche applications, due to cultural and materials-science limitations. Rocketry may have become prominent by accident in the 20th century: gun technology was mature and well understood, and there were several branches of gun technology that had the potential to deliver shells at very long distances (simple scaling, like the "Paris Gun"; advanced aerodynamics, such as the "arrow shell"; and the multiple-chamber technology proposed for the "V-3"). Scaling has the potential to send objects into orbit (see the sad history of Gerald Bull), so in an alternate universe without WWI and the Treaty of Versailles, immense guns might be sending small satellites into orbit today...

Accidents of history might send us down various scientific and technological paths, and cultural limitations might prevent us from seeing the potential in certain areas of science, technology or even philosophy. A change in these underlying factors could reveal new avenues of approach to old problems.
