Wednesday, May 16, 2012

Time's Arrow



This blog has, for the most part, been confined in both space and time. In space to the Solar System, which we can reach without extraordinary difficulty (including extraordinary travel times), at least for a fairly generous definition of 'extraordinary.' In time to the Plausible Midfuture, a historical era I have not really tried to pin down.

Mainly I think of it as starting on the far side of current planning horizons, and extending to ... well, whenever is far enough into the future that our own era has faded off into the distant past. More or less, the era 2050-2300 might do well enough, though - especially at the far end - mere chronology is not really the point.

From a technological perspective, you might think of the midfuture this way: On the near edge, it is beyond what we are now specifically planning or building for. A fairly simple and robust concept, I think (so long as you don't examine it too closely).

At the far edge? If the Industrial Revolution continues at broadly the pace it has so far, 200 years takes us about as far into the future as the first successful steamboat lies in the past. The economic level will be about ten times higher than ours. If technological progress keeps chugging along past that point the world will become increasingly hard for us to understand or even recognize. It may not be transhumanist or a Singularity, but it will be strange.

On the other hand, if technology reaches the limits of the feasible, the world will become strange in a different way, deeply unlike the world we have experienced for the last two centuries. It will be a world in which not much changes, in terms of human capabilities, in the course of a generation or even a lifetime. And it will be a world in which the economic pie (though perhaps large by our standards) is no longer growing significantly. The argument about how to divide it up will therefore be much more fraught.

Such are the reasons, broadly, for confining most discussion here to the Plausible Midfuture.

But of course the future will - we hope! - not come suddenly to an end in 2250 or 2300, or any such date. Only on one occasion have I looked much further into the future, 40,000 years to be exact. (On one other patriotic occasion I glanced toward the 'Murrican future of 2700. But that is still only a slightly generous midfuture.)


The more distant future was brought to mind by (first-time!) commenter John G. He rescued last post's comment thread from a contemporary political argument (these never seem to change any minds) by bringing up a really long time scale. In five billion years or so the Sun will leave the main sequence and become a red giant, incinerating the Earth. Unless we go into space our descendants will presumably be incinerated along with it.

As an immediate argument for pushing along the space program I think this fails an urgency test. We have plenty of time! But in larger perspective ... well, it raises the question of larger perspectives.

Imagining 40,000 years of future history is a challenge. Imagining five billion years of it is staggering.

But one fundamental question emerges almost right at the beginning: What are the limits of the possible? My own general assumption has been that they are vast, and that it could take us a very long time to approach them. Science and technology have - so far - tended to advance by saltation, AKA leaps. This was the premise behind T S Kuhn's The Structure of Scientific Revolutions, which bears the burden of having contributed 'paradigm change' to the vocabulary of business jive.

By this argument, the scientific and industrial revolutions might well peter out in another century or two, reaching equilibria at which further progress is so difficult that it more or less stalls. All the locally low-hanging fruit will have been picked. At some later date, gradually accumulated knowledge and skills could trigger another era of revolutionary advances. But we have no real way to guesstimate how many centuries or millennia the intervening period of stability may last.

This is how I would approach a future history, for the sake of keeping the spires & togas era of godlike powers safely in the remote future.

There are, however, two alternative possibilities - or maybe three. One - the most familiar - is the Singularity argument. Scientific and technical progress will not just continue, or even accelerate: It will explode, catapulting us - or our replacements - into an unimaginably remote future in the course of a few decades.

A second alternate possibility is that progress will continue at roughly the rate it has since Watt's steam engine: A tenfold increase in technical capabilities and economic level, give or take, for each 200 years. By Singularity standards this is tediously sluggish. But it means a millionfold increase by the year 3200. In cosmic perspective this is indistinguishable from a Singularity.
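
For the arithmetic-minded, here is a back-of-the-envelope sketch of that compounding, as a minimal Python snippet. The growth factor, period, and span of years are just the round numbers assumed above, not a forecast:

    # Steady exponential growth: "tenfold per 200 years" (illustrative only).
    def growth_multiplier(years, factor=10.0, period=200.0):
        """Overall multiplier after `years` of growth at `factor` per `period`."""
        return factor ** (years / period)

    # From roughly now to the year 3200 is about 1,200 years:
    print(growth_multiplier(1200))   # 10**6 -- a millionfold increase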

A third alternate possibility is that, in fact, we are reaching the limits of both knowledge and progress. Yes, the computer industry has given us smartphones and Google, but cars and airplanes have changed little in overall configuration and performance since midcentury. Once you learn how to do things, you become pretty good at them pretty quickly. And after that point it is mostly just refinement.

As a loose analogy, the Age of Exploration lasted only a few centuries, and ended when there was not much left on Earth to explore. A world map of 1500 has mostly blank space or pure guesswork. A world map of 1600 looks kinda sorta familiar, and by 1800 - when the Industrial Revolution was just taking off - the world map had been mostly filled in.

There is still plenty of universe to explore, but our instruments for doing so may be broadly limited to the sorts of technologies I've projected in the Plausible Midfuture. Getting into space against the pull of Earth's gravity may be just inherently very difficult. Once you get there, speeds of dozens or even hundreds of km/s are pretty readily attainable. Speeds reaching an appreciable fraction of c are problematic, and FTL is a tower of wishful thinking erected atop a grain of physics speculation.

Or not.

Discuss:






The post title is swiped from a Clarke story, though I don't really recall the story itself, and only when I googled did I learn that it is also the title of a Martin Amis novel and a Trek: TNG episode.


The image, from a poster site, shows pharaoh Rameses III hunting with a bow.

209 comments:

Brett said...

@Rick
The economic level will be about ten times higher than ours.

I suspect this won't be the case. Not because of resource constraints, but because an increasingly large amount of human activity will fall outside of the usual measures for determining GDP.

There are, however, two alternative possibilities - or maybe three. One - the most familiar - is the Singularity argument. Scientific and technical progress will not just continue, or even accelerate: It will explode, catapulting us - or our replacements - into an unimaginably remote future in the course of a few decades.

It's really too early to tell on the Singularity Scenario. Is it really possible to design AI that are drastically more powerful than human mental set-ups, and that can "modify themselves" to become even smarter?

Personally, I think we'll eventually hit some limits. Some of them will be Laws of Physics issues, but most of them will be engineering difficulties and problems that we simply had no way of knowing about until we actually started designing the equipment.

By this argument, the scientific and industrial revolutions might well peter out in another century or two, reaching equilibria at which further progress is so difficult that it more or less stalls. All the locally low-hanging fruit will have been picked. At some later date, gradually accumulated knowledge and skills could trigger another era of revolutionary advances.

We may be hitting that in some fields right now. Physicist Steven Weinberg has an essay out titled "The Crisis of Big Science", which points out that particle physics is reaching the point where equipment required to continue research costs billions of dollars and more.

It's easy to imagine scenarios where fields of technology remain largely dormant because the cost of utilizing them is so high that society refuses to allocate the resources to use them.

Anonymous said...

Put me down for ... continued progress along the paths we have already started, leading to a dramatic slowdown as the feasible solutions become known.

---------
In college, a concept we were challenged with in geography is that travel-time improvements are not linear, but instead follow more of a slanted-stair pattern.

People walked ..
their speed didn't change much.

They domesticated horses - there was a leap.. That was maxed out ..

Trains were invented -- another leap...

And so on.
---------

I see other technologies working in the same way. How much has the bow and arrow advanced since the middle ages? How does that rate of advance compare to that of automobiles in the last 100 years? How does the advancement of automobiles in the last 30 years compare to that of computers?

As mentioned, each technology has its own low-hanging fruit - a quick development curve until it becomes mature. After that, things slow down.

My brother has worked in microprocessor design for 20 years now, maybe 10 years ago he told me that "Moore's Law" would not continue forever, at least not for traditional ICs. At some point latency and transistor density would place a limit on gains.

Anecdotally I find my computers seem to stay "current" longer than they did 10 years ago.

I don't think that it's straight sailing right to the Singularity.

And computer technology is the fastest progressing technology ...

(SA Phil)

Mangaka2170 said...

This does bring up some thinking I've been doing on the subject of civilization-level bonsai by means of time travel, where agents from the future subtly influence the timeline at key points in order to reach their desired outcome.

Work with the assumption that technology that didn't exist in the past can't be brought into the past, due to causality issues or something; this restricts what can be sent into the past to people and information, a useful McGuffin, since it prevents time travelers from building massive fleets of death stars in their own time and then sending them back to wipe out all threats to their chosen civilization. Given that restriction, the way it would be done is by introducing distant-future tech as an indigenous invention, so that it would be less problematic to bring similar or more advanced versions in from the future should the need arise.

One of the reasons why this might be done is because in the future, an innovatory flatline such as the one Rick described in his OP might have proven detrimental to that civilization in the long run because they weren't prepared to deal with a certain threat to their existence. This hypothetical time agency would seek to stimulate technological, economic and perhaps even psychological growth so that when these challenges arrive, their chosen civilization will be ready.

These introductions would probably start with new manufacturing techniques, which lead to the more sophisticated manufacturing methods needed to produce futuretech, whose introduction would in turn lead to more "innovations" and, if all goes well, an accelerando.

Not sure if that made any sense, but then again I'm surprised if anything I say does.

Anonymous said...

Hmm wouldn't the directed influence of technological development lead to an earlier development of time travel ...

Leading to one of their ancestors stealing their idea.

My patent was hijacked by my own grandpa.

(SA Phil)

Anonymous said...

I tend to think that, in general, the second alternative will hold true, but with two caveats:

First of all, I think some of the technological growth we've had in recent centuries has been tied to population growth. Humanity has probably had more man-years to throw at research in the past century than in the past millennium or two. So as world population peaks, I don't expect technological and economic growth to continue following the same exponential climb that we've been seeing, but I don't expect them to totally flatten out either.

The second caveat is that I fully expect individual fields to reach saturation points (a la cars and airplanes), but I expect that as certain fields reach saturation, new fields will appear in which we will see rapid advancement. (Aviation petered out in the middle of last century, but that was right when computers started to get up to speed. Moore's Law is almost certain to cease to apply by midcentury, but I expect several new directions of advancement that we haven't thought up yet to appear by then). So the second alternative will apply when we look at the big picture, but the third will tend to apply to individual fields.

--Linguofreak

Brett said...

@Anonymous
My brother has worked in microprocessor design for 20 years now, maybe 10 years ago he told me that "Moore's Law" would not continue forever, at least not for traditional ICs. At some point latency and transistor density would place a limit on gains.

I thought it was already dead, and we're just making up the difference with parallelism and more cores. Which works, but introduces a whole host of its own issues in terms of writing programs and managing networks of computers.

It would be interesting if Quantum Computing never really panned out for more than some very limited applications, and we eventually reached the point where heat and technology limits kill off Moore's Law for good. Depressing in some ways, but interesting.

Damien Sullivan said...

"So as world population peaks, I don't expect technological and economic growth to continue following the same exponential climb that we've been seeing, but I don't expect them to totally flatten out either."

Note that even with a flat population, there'll be room to improve by bringing more people up to speed in education and capital. That'll saturate too, but right now the research population draws from a fairly small fraction of total population.

Moore's Law can dodge its original literal statement and morph into "cost per processing power" and have that decline for a while as costs get amortized and we get better at parallelism.

Singularity: has multiple definitions. I think "endlessly accelerating improvement" is bunk. Very rapid improvement to saturation, I dunno. But there's also what might be called instead the Cognitive Revolution, of understanding intelligence and creating superhuman intelligences at all, and the existence of AIs or uploads that can be immortal, copied, and modified. That gets *weird* fast, even without superintelligence or acceleration.

Gnaskar said...

It's really too early to tell on the Singularity Scenario. Is it really possible to design AI that are drastically more powerful than human mental set-ups, and that can "modify themselves" to become even smarter?
-Brett

The Singularity is not reliant on Strong AI (a pity, as Strong AI happens to be my research field). It (by Ray Kurzweil’s definitions, anyway) only relies on two numbers and a generalization of Moore’s Law: 10^19 calculations per second (in a massively parallel process) and 10^18 bits of memory. These numbers represent the capacity of the human brain, as well as the estimated capacity of a $1,000 computer in 2035, based on Moore’s Law.

Which of course brings us to:

My brother has worked in microprocessor design for 20 years now, maybe 10 years ago he told me that "Moore's Law" would not continue forever, at least not for traditional ICs. At some point latency and transistor density would place a limit on gains.
-Anonymous

I thought it was already dead, and we're just making up the difference with parallelism and more cores. Which works, but introduces a whole host of its own issues in terms of writing programs and managing networks of computers.
-Brett

The complexity for minimum component costs has increased at a rate of roughly a factor of two per year... Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years. That means by 1975, the number of components per integrated circuit for minimum cost will be 65,000. I believe that such a large circuit can be built on a single wafer.
- Gordon Moore, 1965

Moore’s Law is not about processing power. It’s about transistor density. Which means that in about 10 years, Moore’s Law will fail, quite simply because at that point a transistor will be about ten atoms wide and quantum effects will start becoming a major concern (quantum tunneling of electrons, among other things).

What Kurzweil did to Moore’s Law, however, was to generalize it into calculations per second per $1,000. That allowed him to follow it back past transistors and look at vacuum tubes, relays, and electromechanical computation.

Basically, we’re reaching the end of the life cycle for transistors. The low-hanging fruit has been eaten, and it may not be worth it to climb all the way to the top for the last few fruits. But instead of stopping computing from continuing its development, this merely means that some new form of computing is about to take over. We live in interesting times.
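
For the curious, a crude sketch of what that Kurzweil-style extrapolation looks like when written out, in Python. The starting figure for calculations per second per $1,000 and the doubling time are illustrative assumptions, not measurements:

    import math

    def crossover_year(start_year=2012, cps_now=1e10, target=1e19, doubling_years=1.5):
        # Years until $1,000 of computing reaches `target` calculations per second,
        # assuming a constant doubling time (a Kurzweil-style extrapolation).
        doublings = math.log2(target / cps_now)
        return start_year + doublings * doubling_years

    # The result is very sensitive to the assumed inputs:
    print(crossover_year(doubling_years=1.0))   # ~2042
    print(crossover_year(doubling_years=1.5))   # ~2057

Which is part of why published dates for that crossover range anywhere from the 2020s to mid-century.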

Samantha said...

Why does processing power have anything to do with the ability to create a strong AI? A reasoning computer is a reasoning computer, even if it does take an hour to form a sentence!

My field of expertise is in researching actual human brains and what stands out most, to me, with regards to computers is how inaccurate and error-prone we are and how critical that inaccuracy is to our ability to adapt to and interface with our environment. I think you can make all kinds of clever algorithms to solve all kinds of novel problems, but if you really want your machine to be able to learn you need to build into it the ability to screw up on a regular basis.

This begs the question: Will strong AIs make a thorough mess of things even faster than we can?!

~~

The other thing that I wanted to mention is that all this talk about Moore's Law has me thinking about ballistic transistors (e.g. graphene transistors). Those things blow my mind!

Bryan said...

Samantha brings up an interesting point; at some point AI-like computing becomes possible, even if it takes weeks to perform the most basic of operations. So, in some ways, it's more coding limitations than computing limitations preventing AI.

I too am a biologist, and I seriously question whether a computer could ever truly mimic a human-like consciousness. Thinking of our brains as a computer is about as wrong as wrong can be - a closer analogy would be that our brains are a network of semi-independent, slow, error-prone computers working at times in parallel, at times in series, and, at times, at odds with each other.

I.E. that feeling of being "torn" when faced with a difficult decision is a reflection of our brains being "at odds". When faced with the classical "stop the runaway trolley with a fat man" problem, the moral and cost/benefit centres of the brain literally go to war - producing that "torn" feeling. I have no idea how you generate that kind of processing on a computer; and for that matter, you'd probably need a computer with far more power than the human brain to mimic that behaviour. Not that AI has to mimic human behaviour to put the 'I' in 'AI', but it is worth keeping in mind the differences between brains and computers in these discussions.

------------------------------

But what I actually wanted to comment on was the "5-billion year question". To a large extent, it is a non-issue; by that point (assuming normal patterns of evolution, where a mammalian species lasts about 2.5MY) our descendants will be separated from us by 2000-or-so speciation events - i.e. "humans" will not be around, and will have been replaced by a series of new species, or had our line stamped out by extinction (keep in mind, you're talking about a longer period of time between now and then than there has been since life first formed on Earth).

Contrary to popular opinion, technology has not stopped us humans from evolving - in fact, as measured by rate of genetic change, we're evolving faster now than any time before - and that doesn't take into account the fact that we now have the ability to directly influence our own evolution (through engineering, population genetics, etc), should we choose to go that route.

So while it is interesting to ask "where will we be in 5 billion years", or even "a few million years", the answer is to some extent already known - "humans" will no longer exist, we'll either have speciated into something else, or gone extinct. In terms of technological capacities, the intelligence of these new species is what will determine exactly how far forward (or backwards) 'we' progress.

Tony said...

One thing I think we miss about being so much quicker to this or that answer (nowadays) is that we are also very quickly fleshing out what the viable areas of investigation are. Simply put, it won't be long before there won't be new fields of investigation. We'll just have all possible fields, in which we pursue increasingly expensive and decreasingly effective marginal gains.

One of the interesting consequences of this is the convergence of disciplines. Chemists these days, for example, are really just physicists that do most of their thinking at the molecular level. The development of high-level computer languages was substantially informed by non-technicians, such as language and logic specialists, who knew how to organize thought.

But, let's say for the sake of argument, that we manage to advance far enough to travel interstellar distances, even if slowly, in multi-generational craft. Then the deeper future becomes one of static technology and desultory expansion throughout the galaxy, for no better reason than we can. History ceases to be the study of man's progress. It becomes local: a. within distances that communication is time relevant and b. based on knowledge of local pasts. Humanity becomes a galactic fungus of sorts, distributed through millions or even billions of very weakly interacting communities that continue on inertia alone -- nothing much more to learn, but no reason to just give up and die.

Brett said...

@Gnasker
It (by Ray Kurzweil’s definitions, anyway) only relies on two numbers and a generalization of Moore’s Law: 10^19 calculations per second (in a massively parallel process) and 10^18 bits of memory. These numbers represent the capacity of the human brain,

Honestly, I think that focusing on processing power and computer memory is more misleading than anything else. The mind isn't a program running on a sufficiently powerful substrate - it's an emergent property from the particular lay-out and chemical properties of our brain and nervous system. Trying to pin-point its theoretical processing capabilities and capacity to store information doesn't really tell you anything much useful.

Which is not to say that Strong A.I. won't be built. I just don't think their thought process and mental architecture will resemble anything human, assuming they can be built.

Damien Sullivan said...

"this merely means that some new form of computing is about to take over"

No, it means some new form of computing would have to take over for the 'Law' to continue. It does not mean that any such form need exist.

"a closer analogy would be that our brains are a network of semi-independent, slow, error-prone computers working at times in parallel, at times in series, and, at times, at odds with each other"

All of that says "computer" to a computer scientist.

"I have no idea how you generate that kind of processing on a computer"

Read _Fluid Concepts and Creative Analogies_.

"Trying to pin-point its theoretical processing capabilities and capacity to store information doesn't really tell you anything much useful."

It does if the brain's evolved purpose is to process and store information. We're not talking muscle or intestines, here.

FBH said...

One thing I think needs to be touched on (without wanting to get too political) is that what technology we have is likely less important than how we use it. Just as a for instance: We currently produce enough food to feed everyone in the world. Yet for various reasons large numbers go hungry.

This isn't to say we could easily feed the world if we wanted, but rather that how you organize what you have is just as important, if not more important as what you physically possess as a civilization.

Anonymous said...

FBH said...
One thing I think needs to be touched on (without wanting to get too political) is that what technology we have is likely less important than how we use it. Just as a for instance: We currently produce enough food to feed everyone in the world. Yet for various reasons large numbers go hungry.

This isn't to say we could easily feed the world if we wanted, but rather that how you organize what you have is just as important, if not more important as what you physically possess as a civilization.

==========

This is a really good point and I wanted to add that we also waste tremendous amounts of energy.

For example - I like to eat sushi, which is flown on a jet to the airport just so people like me can get it in the landlocked Midwest.

Add in A/C to keep your house at 65F, heat to keep your house at 75F, commuting an hour to work, running your lawn mower to keep the grass 2 inches tall instead of natural ground cover, etc.

(SA Phil)

Tony said...

Damien Sullivan:

"No, it means some new form of computing would have to take over for the 'Law' to continue. It does not mean that any such form need exist."

I think what most people miss is that computing, as a technology, is shaped by the physically possible. It's binary, because that's how electrical switching works. It's sequential, because physical processes are sequential, because time moves in a uniform direction. It manifests multiple layers of abstraction, because natural language and binary numbers aren't natively compatible.

To find a new form of computing would require finding a new way to physically represent and manage information. Maybe quantum computing can be made to work, but I'm not sure how, or if it can even be effectively applied.

"All of that says 'computer' to a computer scientist."

Actually, at most it says "network". (Sorry, but the whole "the network is the computer" thing is marketing fluff.) It probably doesn't even mean that in reality. Like I said on an earlier topic, the brain seems to be much more like the analog fire control computers of the 20th Century than anything else. The individual elements are apparently nothing much more than charge accumulators that respond to control signals from other charge accumulators, and in turn transmit control signals to downstream accumulators in the circuit. The "computer" is in the organization of these accumulators.

And before anybody goes there, no, we have not even a clue, that I'm aware of, how that organization maps to thought. What we do know is that it does not involve the abstractions that we employ in computing. There's something inherent in the organization that makes thought. It certainly isn't a computer in the sense that a computer scientist of the digital age would think.

"Read _Fluid Concepts and Creative Analogies_."

The problem here -- and it's not Hofstadter's problem, but a problem in perceptions of his work -- is that simulating a problem domain and finding a way to maneuver within it is still traditional computing, no matter how complex the process or the solution turns out to be. The domain is defined and the computer told to approach a goal, using tools the programmer gives it.

"It does if the brain's evolved purpose is to process and store information. We're not talking muscle or intestines, here."

But the brain is not evolved to process and store information. It has evolved to maintain state and at the same time change state in response to sensory input. That's all.

How that state translates into information is an interesting question. In computers, the state is either information or gobbledegook dependent on what the user thinks of it. In the human brain, the state machine itself decides for itself what is and isn't information. Then it qualifies its importance, and, if desired, sets that information aside as a snapshot of state in some corner of its physical being.

I don't know of anybody who has even an inkling of a suspicion of a clue how to equip and/or program a digital computer to be able to decide for itself what is and isn't information, and decide what to do with it.
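
For what it's worth, the "charge accumulator" picture Tony sketches above has a standard toy formalization, the leaky integrate-and-fire neuron. A minimal Python version, with arbitrary made-up constants, just to show how little machinery the accumulate-and-signal idea needs:

    # Toy leaky integrate-and-fire "charge accumulator" (arbitrary units).
    def simulate(inputs, leak=0.9, threshold=1.0):
        v, spikes = 0.0, []
        for t, charge in enumerate(inputs):
            v = v * leak + charge      # accumulate incoming charge, with leakage
            if v >= threshold:         # fire a signal downstream and reset
                spikes.append(t)
                v = 0.0
        return spikes

    # A steady trickle of input produces a regular train of output signals:
    print(simulate([0.3] * 20))        # [3, 7, 11, 15, 19]

Whether a large network of such units counts as "computing" is, of course, exactly the argument in this thread.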

jollyreaper said...

Space Amish in any far future play the Watson so that we might have someone to relate to and also have the rules of the world explained. (And this would be relative, like keeping atomic power and rocket ships but rejecting immortality treatments and mind uploads.)

I have an instinctual rejection of the idea that we've discovered all there is to discover because we keep digging deeper and finding new fields to explore. The whole "close the patent office because science is over" story is apocryphal but it's not an unknown sentiment.

I admit I start to brainfault when I speculate too far into the future.

Tony said...

jollyreaper:

"Space Amish in any far future play the Watson so that we might have someone to relate to and also have the rules of the world explained. (And this would be relative, like keeping atomic power and rocket ships but rejecting immortality treatments and mind uploads.)"

Hmmm...could today's Amish, if they were your primary point of contact, give you a complete and reliable picture of Western hi-tech society? Or would they just fill you full of their prejudices?

Space Amish make a good tool for authors who want to make a point about resigners and refusers. They would be wholly unreliable narrators of something they choose not to participate in themselves, for philosophical reasons.

"I have an instinctual rejection of the idea that we've discovered all there is to discover because we keep digging deeper and finding new fields to explore. The whole 'close the patent office because science is over' story is apocryphal but it's not an unknown sentiment."

You know, the desire to reject such thoughts is understandable, because such thoughts call into question all sorts of hopes and dreams. But we have to recognize that there is only so much fundamental knowledge to learn, after which we're just filling in the margins. We're already having to build megaprojects to probe deeper into the microscopic. We also have to build mega-instruments to probe further out into the macroscopic, and to probe in further detail. Eventually we'll reach the economic and physical limits of that kind of thing.

Bryan said...
This comment has been removed by the author.
Bryan said...
This comment has been removed by the author.
Bryan said...

Arrg, third time's the charm (I hope)....

Tony's reply to Damien covered most of what I was going to write perfectly, so no point in repeating that. I would add to Tony's reply two things:

1) Our brains do not work in binary, although analogue isn't exactly the best description either, making comparisons to computers (analogue or digital) even more shaky, and

2) Our brains do not run on "software", its capacities are an emergent property of the system. The internet becoming sentient would be more akin to how our brains function than developing software that mimics the outputs of our brains (although, AI doesn't require human-like thought)

I'd also like to comment on Tony's statement But we have to recognize that there is only so much fundamental knowledge to learn, after which we're just filling in the margins

While there is every likelihood of an "end of science", that does not equal an end to progress - the application of science has (energy and materials allowing) no real limit. While a total understanding of the universe would "end" science, that in no way, shape, or form puts a limit on the extent to which new technologies can be developed.

I doubt we'll see the end of science anytime soon; at least in my own field (medical sciences), all of the recent big discoveries (genomes, the various 'omics, etc.) have simply revealed the depths of our ignorance, rather than pushed forward the envelope of our knowledge. IMO, it'll be many generations before it's all worked out.

Tony said...

Bryan:

"1) Our brains do not work in binary, although analoug isn't exactly the best description either, making comparisons to computers (analoug or digital) even more shaky"

AFAICT, electromechanical analogue computers are about the best technological parallel to how the human brain works. Just like billions of simple neurons are organized to make up the human brain, these machines were made up of hundreds of simple mechanical subsystems that were organized to solve the fire control problem. And just like the human brain, sensory inputs would propagate through the system, changing the system's overall state. One is microscopic, while the other grossly physical, but the principle is the same -- process inputs through an indefinite state machine.

"2) Our brains do not run on 'software', its capacities are an emergent property of the system. The internet becoming sentient would b more akin to how our brains function than developing software that mimics the outputs of our brains (although, AI doesn't require human-like thought)"

I think an emergent digital intelligence is one of the least likely things to happen. Despite its great size, the internet is fundamentally just a telephone switching scheme hooking up a bunch of really dumb data processors. And while it evolves to meet new challenges, the evolution is directed from outside, not towards the survival of the internet as an individual organism, but towards solving externally posed problems.

"While there is every likelihood of an 'end of science', that does not equal an end to progress - the application of science has (energy and materials allowing) no real limit. While a total understanding of the universe would 'end' science, that in no way, shape, or form puts a limit on the extent to which new technologies can be developed.

I doubt we'll see the end of science anytime soon; at least in my own field (medical sciences), all of the recent big discoveries (genomes, the various 'omics, etc.) have simply revealed the depths of our ignorance, rather than pushed forward the envelope of our knowledge. IMO, it'll be many generations before it's all worked out."


I'm not predicting a near-term end to science (though I wouldn't rule it out either). I do think there is an end out there -- not absolute, but dictated by the sheer physical inability to probe further. When it takes the release of a gigaton nuclear explosion to power a single physics experiment, or an interferometer with a baseline measured in light years, well...that's an end, for all practical purposes.

After that, new applications may be developed, for a time. But then a limit of relevance imposes itself. So what if you can skin a cat twenty different ways, if only two of them are of any use to anybody?

Anonymous said...

Tony,

After that, new applications may be developed, for a time. But then a limit of relevance imposes itself. So what if you can skin a cat twenty different ways, if only two of them are of any use to anybody?
=====

This kind of thing is what I meant by the super-computer aided design idea I mentioned before.

At some point you will know tried and tested solutions to many of the available problems/applications and you can then program those into a computer.

(SA Phil)

Tony said...

SA Phil:

"This kind of thing is what I meant by the super-computer aided design idea I mentioned before.

At some point you will know tried and tested solutions to many of the available problems/applications and you can then program those into a computer."


Don't need a supercomputer to do that. You can get all kinds of standard object libraries for AutoCAD, for almost any imaginable field of design.

Another example comes from basic software implementation. If you need a data type, you can probably find a template for it in some standard code library. Nobody programs their own linked list or binary search tree any more, they just customize to their needs one that already exists.

Incorporating complex technological systems into such a scheme would be a simple extension of the well-established principle. Heck, I'm sure there are programs out there that already do, in the sense of a designer specifying some known machine, with some set of performance criteria, and having an object appear in a system diagram, fully capable of responding to simulation data in the same way an actual piece of hardware would.
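
To make the point about standard libraries concrete, a few lines of Python using stock containers where one might once have hand-rolled a linked list or a search tree (the "job" names and scores are invented for illustration):

    import bisect
    from collections import deque

    queue = deque(["job-1"])      # ready-made doubly linked list / queue
    queue.appendleft("job-0")     # no hand-written node-and-pointer code

    scores = []                   # kept sorted without writing a search tree
    for s in (42, 7, 99, 23):
        bisect.insort(scores, s)

    print(queue, scores)          # deque(['job-0', 'job-1']) [7, 23, 42, 99]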

Anonymous said...

I was referring to the idea where the computer would do most of the design work for you.

In that the templates 500 years from now will be orders of magnitude more useful than they are now.

And 'supercomputer' only compared to today's systems - I imagine any CAD station would be able to design, say, a washing machine given the basic dimensions - right down to wiring, motor, control system (off-the-shelf stuff).

Then the engineer could go in and tweak it.

(SA Phil)

Thucydides said...

Scientific American had an article last year (July 2011) about the physics of intelligence; the conclusion seemed to be that evolution had already done the hard work, and it would be very unlikely we could "tweak" the grey matter to make smarter or "better" (depending on your definition of better) brains and humans.

Invoking Moore's Law to predict the impending arrival of human-brain-equivalent computers fails for the simple reason that no one understands how the brain actually works. A laptop or iPad in 2035 may have the same number of processors as a human brain has neurons, and may run programs 1,000,000 times faster than any human or biological brain, but it will simply be the world's best gaming platform (and Apple will hint about the upcoming iPad 27's capabilities, while Microsoft struggles with Windows '30).

Moving past the PMF, we might start to see some sort of speciation of humans living in isolated colonies, and of course there is no reason to suppose system collapse and Dark Ages can't happen in the future as well. Our own history has periods of growth punctuated by long periods of stillness and occasional periods of darkness; why should the future be any different?

Perhaps the real issue with forecasting like this is that the future is really unknowable in more than the trivial sense that it hasn't happened yet. Many processes hinge on small things, and there is no way to know how the future "might" have been. Imagine if the team that came up with the basic Internet protocols had decided to copyright and license the technology; we might be trying to communicate through something similar to Minitel rather than the Internet. There are enough unknown unknowns out there, any one of which could surface and make large or small changes that no one could have anticipated. Jerry Pournelle's formulation is still correct; people anticipated cars for hundreds of years, but no one anticipated drive-ins, urban sprawl, carpooling, or any of the other things that grew up around the automobile. No one in 1980 seems to have anticipated anything remotely resembling social media (or the idea that manned space capabilities would regress between 1980 and 2012...). What are we missing in anticipating the future?

Tony said...

SA Phil:

"I was refering to the idea where the computer would do most of the design work for you.

In that the templates 500 years from now will be orders of magnitude more useful than they are now.

And 'supercomputer' only compared to today's systems - I imagine any CAD station would be able to design, say, a washing machine given the basic dimensions - right down to wiring, motor, control system (off-the-shelf stuff).

Then the engineer could go in and tweak it."


For well-understood systems -- like washing machines -- you could do that today. Specify: "I need a #washing_machine". The computer looks in its database and sees that a washing machine needs a drum, some plumbing, some valves and regulators, a set of controls, seals and other minor things, all put together in a certain configuration.

Where you really run into trouble with using expert systems is not in assembling subsystems, even in 3D space. Where you run into trouble is manufacturing engineering -- how do those things get put together in the real world, out of what, by what machines, in what order? Given the unique nature of each manufacturing situation, it's hard to imagine how even the most sophisticated expert system could be economically programmed to do industrial engineering on the conceptual and initial design levels. Just too many variables.
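
A toy sketch of the "specify a #washing_machine, get a parts list" side of this, in Python. Everything here (the component names, the database) is made up for illustration, and it captures none of the manufacturing-engineering questions raised above, which is rather the point:

    # Hypothetical component library: product type -> required subsystems.
    PARTS_DB = {
        "washing_machine": ["drum", "pump", "inlet_valve", "pressure_regulator",
                            "door_seal", "motor", "control_board"],
    }

    def design(product, extra_parts=()):
        """Return a naive bill of materials. How these parts get made and
        assembled -- the manufacturing engineering -- is not modeled at all."""
        return list(PARTS_DB[product]) + list(extra_parts)

    print(design("washing_machine", extra_parts=["steam_nozzle"]))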

Anonymous said...

Tony,
Where you really run into trouble with using expert systems is not in assembling subsystems, even in 3D space. Where you run into trouble is manufacturing engineering

------

I have a feeling there will be a lot of effort put into that.

Maybe if you buy ACME computer aided design + ACME computer and robotic aided manufacturing - it will be able to do both.

I also think at some point planned obsolescence will be seen as the quaint notion of a wasteful civilization.

(SA Phil)

Damien Sullivan said...

"One thing I think needs to be touched on (without wanting to get too political) is that what technology we have is likely less important than how we use it. "

Definitely true to a large degree, though also true that technology itself matters. But as I remind people in discussions of "medieval fantasy", the Dark/Middle Ages of castles and knights were 'dark' compared to what came before: a large Roman empire with roads and bureaucracy and trade. Arguably more of history resembles such empires than it does the Middle Ages.

And a lot of civilization is about moving dirt around in the right ways, and other Stone Age technologies. Irrigation, roads, canals, granaries, sewers; also a postal system, or semaphores. You can do a lot without any bronze, let alone steel.

Space SF relevance: a "lost colony" that loses a lot of material technology can still be a nice place to live if organized right.

Tony said...

SA Phil:

"I have a feeling there will be a lot of effort put into that.

Maybe if you buy ACME computer aided design + ACME computer and robotic aided manufacturing - it will be able to do both."


I think that it would run into diminishing returns, hard. Manufacturing engineering is too dependent on precise product details. There's simply no general tool set for it.

"I also think at some point planned obsolescence will be seen as the quaint notion of a wasteful civilization."

Planned obsolescence is actually a phenomenon of rapid technological progress. It can be a way to churn the market for a given product. But it can also be a simple response to decreasing marginal utility. If you know a computer is going to be functionally inadequate in three years, why invest more in it than three years' worth of capital? Whether you like it or not, you're going to need to be manufacturing (and people are going to be needing to buy) new ones in three years. You put a satellite in space. If you know it's going to necessarily run out of station-keeping fuel in ten years, why give it capabilities you might think it will need in 15 or 20 years?
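
As a worked toy example of that capital argument (all dollar figures invented): if obsolescence arrives in three years regardless, the cheaper, shorter-lived build wins on cost per useful year.

    # Illustrative numbers only: cost per useful year when obsolescence
    # hits at year 3 no matter how long the hardware could physically last.
    built_to_last_9 = 3000 / 3    # $3,000 machine, 9-year build, 3 useful years
    built_to_last_3 = 1200 / 3    # $1,200 machine designed around a 3-year life
    print(built_to_last_9, built_to_last_3)   # 1000.0 vs 400.0 per useful year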

Damien Sullivan said...

"I think what most people miss is that computing, as a technology, is shaped by the physically possible. It's binary, because that's how electrical switching works"

No, it's binary because we've designed our computers that way for the fastest speed and lowest error rates. Nothing intrinsically stops us from building computers around more than two voltage levels.

"To find a new form of computing would require finding a new way to physically represent and manage information"

The 'new form of computing' we were talking about was just a different implementation technology, like going from vacuum tubes to transistors to integrated transistors. Still binary and sequential.

'Actually, at most it says "network"'

Actually, I'm a computer scientist with high marks and experience in both practice and theory, and a network of asynchronous error-prone computers says "computer". (Or, computer plus random number generator.)

"It certainly isn't a computer in the sense that a computer scientist of the digital age would think"

Funny how many computer scientists, even those who know something about the brain, disagree with you. Perhaps you should be less certain.

"But the brain is not evolved to process and store information. It has evolved to maintain state and at the same time change state in response to sensory input. That's all."

*headdesk*


"Our brains do not work in binary"

Utterly irrelevant. Neither do Turing machines. Or production rule systems. Or lambda calculus. Binary-ness is completely non-fundamental to computing. It's just convenient.

"Our brains do not run on "software""

Irrelevant, and somewhat false. Not being as flexible as, and being more hard-wired than, a modern general purpose computer doesn't mean you're not doing computing.

"its capacities are an emergent property of the system"

Which matters why?
Software can evolve, or "emerge". We usually prefer design.

"no one understands how the brain actually works."

We have some idea but our ignorance is vast, yes. OTOH, massive amounts of computing can speed up the learning process, as it has in genomics. You have to assume that we continue to not understand how the brain works, even as we study it more and more (including the growing ability to even attempt to simulate it.)

Damien Sullivan said...

tl;dr: the computing theory notion of a 'computer' is far, far broader than the physical computer of particular design you're using to read this comment. To say the brain is not a computer, or to say it's not doing something simulatable by a computer, are *very* strong claims. Not "duh, the brain doesn't have a CPU and RAM and doesn't load stored programs".
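
To illustrate how broad that notion is, here is a minimal Turing-machine simulator in Python. Nothing about it is binary, and the tape alphabet is arbitrary; the particular machine shown (scan right over a run of 'a's and append a '!') is just a made-up example:

    def run(tape, rules, state="start", blank="_"):
        """Tiny Turing machine: rules maps (state, symbol) ->
        (new_state, write_symbol, move), with move in {-1, +1}."""
        tape, pos = dict(enumerate(tape)), 0
        while state != "halt":
            sym = tape.get(pos, blank)
            state, tape[pos], move = rules[(state, sym)]
            pos += move
        return "".join(tape[i] for i in sorted(tape)).strip(blank)

    # A machine over the alphabet {'a', '!', '_'}: scan right, append '!'.
    rules = {
        ("start", "a"): ("start", "a", +1),
        ("start", "_"): ("halt",  "!", +1),
    }
    print(run("aaa", rules))   # aaa!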

FBH said...

Damien

One thing to note also would be that the reason the Romans didn't ever gain industrialization or the like was partly due to the fact they didn't have a lot of social techniques like modern-style profit-and-loss accounting. Without that, there was never a push for machinery to increase profit, because of its high capital cost.

jollyreaper said...

People have been arguing about a Roman industrial revolution for decades. Many people argue that the technology wasn't easily within reach, would require massive developments in theory and material science to make it happen. In other words, there couldn't have been an Eli Whitney moment.

The other naysayers claim that the existence of slaves made labor-saving machines unnecessary.

I can't argue with either one of these assertions, but at least the technological one can be put to the test with experiments a la the Kon-Tiki. The slavery theory, not so much. Any way we could put the question to bed?

Anita said...

Clarke's Time's Arrow, 1950. Quick and dirty: Palaeontologists working a bone yard. Back at the lab someone is developing a time machine to grant their wish of really seeing what made those tracks. Their wish is granted. Be careful what you wish for ...

A lot of spot-on predictions about the automobile, but I don't think anyone saw what it would do to sexual mores. I'd say the car self-starter probably did as much for liberation as the pill. And there's a little breakthrough that shifted some tectonic plates.

Technical advances and change can be linear; social, anyone's guess.

One thought about the far, far future, say 2B years. Highly unlikely the human species will exist. We may well have descendants, but they probably won't have much human DNA left. 99% of the species that ever existed no longer do.

Anonymous said...

=Milo=



Samantha:

"Why does processing power have anything to do with the ability to create a strong AI? A reasoning computer is a reasoning computer, even if it does take an hour to form a sentence!

My field of expertise is in researching actual human brains and what stands out most, to me, with regards to computers is how inaccurate and error-prone we are and how critical that inaccuracy is to our ability to adapt to and interface with our environment. I think you can make all kinds of clever algorithms to solve all kinds of novel problems, but if you really want your machine to be able to learn you need to build into it the ability to screw up on a regular basis.

This begs the question: Will strong AIs make a thorough mess of things even faster than we can?!"


I have to note that I agree with every single word of this.



Bryan:

"Our brains do not run on "software""

Damien Sullivan:

"Irrelevant, and somewhat false. Not being as flexible as, and being more hard-wired than, a modern general purpose computer doesn't mean you're not doing computing."

One could argue that reading an instruction manual, and then carrying out its contents, is analogous to loading and running software.

Our "operating system" is a lot more tolerant of running slightly buggy software (poorly written instructions), as it can often fill in the missing details itself. On the other hand our operating system also has a tendency to introduce errors even while loading well-written software.

Exactly what we consider to be our "operating system" is slightly unclear in this analogy, but one could interpret it such that the operating system itself is also installed - namely in school during our formative years, where we learn the techniques that will serve us for the rest of our lives (for example, we learn how to read, which is necessary to load software from manuals). There is also of course a significant hardcoded aspect to our "operating system", which can be considered an extra-bloated BIOS.

Bryan said...

RE:
"its capacities are an emergent property of the system"

Which matters why?
Software can evolve, or "emerge". We usually prefer design.

I think you need to read up on what an emergent property is; it's not a program, or an evolved feature, or software "evolving"; it's a complex, self-regulating system emerging from a large number of relatively simple interactions. There is nothing resembling a program; the functioning of the system is a direct and intrinsic product of its structure and development.

Tony said...

Damien Sullivan:

"No, it's binary because we've designed our computers that way for the fastest speed and lowest error rates. Nothing intrinsically stops us from building computers around more than two voltage levels."

Our computers are not designed around "two voltage levels". They're designed around on-off switching logic that goes all the way back to 19th-century wired telegraphy (in the form of the Morse dot/dash coding system). The way transistors are hooked up into logic gates is an extension of that. Sure, in theory we could do something else, but the speed and error-resistance of binary switching is in fact intrinsically (to borrow a turn of phrase) built into the way the universe works electromagnetically.

"The 'new form of computing' we were talking about was just a different implementation technology, like going from vacuum tubes to transistors to integrated transistors. Still binary and sequential."

So "new" in fact means "more compact" to you? Hmmm...

"Actually, I'm a computer scientist with high marks and experience in both practice and theory, and a network of asynchronous error-prone computers says 'computer'. (Or, computer plus random number generator.)"

You're a computer scientist that thinks "network" and "computer" are synonymous? There's a reason we have two different terms -- they're two different things.

Tell us, what computation does the internet perform? What program does it run?

Because, you see, ac-tu-al-ly, I design and implement software that runs on the internet all day and night, and I've never detected an internet computation or program. Just a bunch of self-sufficient computers trading data back and forth on a network.

"Funny how many computer scientists, even those who know something about the brain, disagree with you. Perhaps you should be less certain.

*headdesk*"


I think the human brain is an indefinite (neither finite nor infinite) state machine that runs no separate, storable program. That means you can apply your head to the desk all you want, but it's not going to make the brain a programmable calculating machine, which is what a digital computer is, and which is what most computer scientists think of when they think "computer".

Tony said...

Damien Sullivan:



"Irrelevant, and somewhat false. Not being as flexible as, and being more hard-wired than, a modern general purpose computer doesn't mean you're not doing computing."

That's true, as far as it goes. But I don't agree that thinking, monitoring, and controlling -- which is what the human brain does -- are necessarily computing activities. Certainly computers can be programmed to do the second and third, but I'm not so sure about thinking.

Nor is it a necessarily reversible proposition that what the brain does must be computing, simply because a computer can do at least some of what a human brain can do. If that were the case, then a piece of litmus paper would be a computer. It does, after all, react to a received stimulus and change state, which is something a computer can also do.

"Which matters why?
Software can evolve, or 'emerge'. We usually prefer design."


Data can emerge from software operation on a computer. That data can even be computer code. But after it emerges, some human has to decide if it's useful, or has to have written rules for the computer to use to decide if it's useful.

"We have some idea but our ignorance is vast, yes. OTOH, massive amounts of computing can speed up the learning process, as it has in genomics. You have to assume that we continue to not understand how the brain works, even as we study it more and more (including the growing ability to even attempt to simulate it.)"

The problem is whether a simulation can ever perform as efficiently as the original. Another problem is whether a simulation would in itself be intelligent in any real sense.

Damien Sullivan said...

"Tell us, what computation does the internet perform? What program does it run?"

Most obviously, that of sending packets to lots of different places.

"I've never detected an internet computation or program. Just a bunch of self-sufficient computers trading data back and forth on a network."

And what's your principle for distinguishing a bunch of computers trading data over the Internet from a bunch of computer chips trading data within a multiprocessor box? And for distinguishing an internet computation from trading data back and forth? It's not like an algorithm has to have some unitary purpose. Or any unitary design.

But frig it. Life's too short.

Rick said...

Welcome to new and returning/occasional commenters!

And a lot of civilization is about moving dirt around in the right ways, and other Stone Age technologies. Irrigation, roads, canals, granaries, sewers; also a postal system, or semaphores. You can do a lot without any bronze, let alone steel.

This reminds me of L Sprague de Camp's Lest Darkness Fall. Most of Martin Padway's attempts to introduce 'futuristic' technology - such as guns and mechanical clocks - fail. His successes are more along the lines of social 'software,' modes of thought. Such as a semaphore telegraph.

Rick said...

This begs the question: Will strong AIs make a thorough mess of things even faster than we can?!

Boy is *that* an interesting observation. Sort of gets to the heart of my doubts about the whole Singularity thing.

Also it strikes me that 'make a thorough mess of things' is by no means the only failure mode, or even the most likely, though perhaps the most dramatic and interesting.

After all, a super genius, robotic or human, might just sit around all day pondering profound but unanswerable questions.

Anonymous said...

If we remove computers, smart phones, and the like from contention:

How much has technology really changed in the last 10 years? 20 years?

How about we remove all electronics from contention and ask the same questions.

Now do the same thing with medical advances.

It seems like there are fields which have been advancing quite a bit lately -- but it surely is not nearly as universal as conventional wisdom claims.

(SA Phil)

Anonymous said...

Ok: far, far into the future; as waves of humanity have slowly colonized the galaxy, the people that have reached the far side of the galactic disk are far closer to original humans than those who have been living on alien planets for dozens or hundreds of generations; these descendants of humanity (or the ones that did less evolving) become horrified by these 'new' humans and set out to wipe them out, thus ending galactic civilization.

Or, we reach a long technological plateau and decide that there isn't anything more worth learning, and so stagnate until we die off, leaving no legacy.

As far as AI's go, I'm of the opinion that the first AI's will be no smarter than us, because they will be recreations of our brains, minds, and bodies in metal, plastics, and silicon. They may be able to walk around unprotected on Mars and Titan, but they will still be more interested in throwing snowballs and building sandcastles than solving the mysteries of the universe, or conquering the cosmos. Beer, sex, making a buck, and not screwing up your relationships...those are what we should expect for our first (or maybe second) generation of AI robots. And they wouldn't want super-intelligent AI's any more than the organic humans would...

Five billion years from now...neohumans sell tickets to spectators from all over the galaxy to watch the destruction of the Earth...so they can say farewell to the old homestead.

Ferrell

Tony said...

Damien Sullivan:

"Most obviously, that of sending packets to lots of different places."

The internet doesn't do that. Routers -- which are also completely self-contained computers, although very specialized ones -- send packets to lots of different places.

The internet is a very nice computing enabler, but it's not a computer in and of itself.

"And what's your principle for distinguishing a bunch of computer trading data over the Internet from a bunch of computer chips trading data within a multiprocessor box?"

Operational autonomy. Computing nodes on the network have uses without the network. (If nothing else, you can play FreeCell.) The network has no use without computing nodes.

"And for distinguishing an internet computation from trading data back and forth? It's not like an algorithm has to have some unitary purpose. Or any unitary design."

So all of the programs on all of the computing nodes on a network are simply pieces of one big algorithm? Boy, that's philosophical mumbo-jumbo if I ever heard it. Take it from somebody whose livelihood is made on the network -- there ain't no algorithm out there in cyberspace, emergent or designed. It's just data flying around from place to place.

"But frig it. Life's too short."

Of course.

Brett said...

@jollyreaper
People have been arguing about a Roman industrial revolution for decades. Many people argue that the technology wasn't easily within reach, and would have required massive developments in theory and materials science to make it happen. In other words, there couldn't have been an Eli Whitney moment.

They just didn't have the technology for industrialization. Perhaps they could have worked something out like Hero of Alexandria's proposed machine, but without the supporting technologies, it would have been so expensive and inefficient that human and animal labor would be preferable. Which is in fact what happened.

It's not that the Romans were averse to machines or the like. Look at the massive water-powered mill they built in southern France (the Barbegal mill). There were other big mills like it near Rome (the Janiculum mills) and elsewhere.

Eth said...

If our first strong AIs are not copied human brains, there is another way: let it evolve by itself.
You write a bunch of rules for creating random code and keeping/discarding/modifying it, and let it run until you have something useful. The difficulty is, of course, writing said rules.

Then, you wait until something manages to pass the Turing test. The problem is that we may not have enough control over what it would create, and the resulting AI could be completely alien to us, not to mention the philosophical and ethical can of worms it would open.
But if we are ready to play the sorcerer's apprentice - and looking at history, we probably are - it may be the fastest way to come up with strong AI.
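
A minimal sketch, in Python, of the kind of evolve-and-select loop being described here -- purely illustrative, with a toy target-matching score standing in for the hard part (rules that would actually reward intelligence); every name in it is invented for the example:

import random

# Toy evolutionary loop: generate random "genomes", score them, keep the
# best, mutate. The fitness function below is a trivial stand-in; writing
# one that actually rewards intelligence is the unsolved part.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]

def random_genome():
    return [random.randint(0, 1) for _ in TARGET]

def fitness(genome):
    # Placeholder: counts matches against a fixed target pattern.
    return sum(1 for g, t in zip(genome, TARGET) if g == t)

def mutate(genome, rate=0.1):
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=50, generations=200):
    population = [random_genome() for _ in range(pop_size)]
    for gen in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            return gen, population[0]            # "good enough" found
        parents = population[: pop_size // 5]    # keep the top 20% as parents
        population = [mutate(random.choice(parents)) for _ in range(pop_size)]
    population.sort(key=fitness, reverse=True)
    return generations, population[0]

if __name__ == "__main__":
    gen, best = evolve()
    print(f"best genome after {gen} generations: {best}")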

Now, what the distant future would be like also tends to make my brain fault.
(That's why most of my stories set there tend to have a major catastrophe in the background, so people were set back enough not to evolve too far away. There may be some transhumans and such roaming around, but they are beyond the comprehension of the narrator, so they can also be beyond mine.)

jollyreaper said...

So what would have been required for Roman high technology? I'm guessing just a longer existence. Without the setback of the collapse and dark ages, maybe the industrial age could have gotten kicked off a thousand years sooner.

Hugh said...

How about some applied science in manipulating gravity?

Right now we know gravity exists, but that's about all. That's OK: at one time we knew electromagnetic radiation existed, but not how to do anything with it either.

jollyreaper said...

Concerning artificial brains...

Based on all available evidence, we have human-level intelligence created within 3lbs of organic matter. The soul is a non-testable hypothesis but all evidence points to the entirety of the mind being contained within the brain.

Furthermore, the brain naturally occurred through a process of evolution which is directionless, unguided and without purpose. There is no goal being worked towards.

So, there's one conjecture that I think is pretty safe to make -- if it can be done once, it can be done many times. We're making new meat brains every day. Humans today do not represent an end-stage of evolution and surely something else can come after us, from us. Whether it's "better" might be subjective but it seems more than possible that it could have a superior intelligence.

So, the big question is whether we can engineer an intelligence, be it in meat, silicon, quantum foam, a positronic matrix, etc. From a scifi perspective, a meat brain in a vat might represent the least number of assumptions. At this point we're like da Vinci speculating about human flight; ornithopters were a nice idea but the wrong path, his air screw vaguely resembled the right idea but was completely wrong, and any critic who told him he had no solid evidence to back his speculation would have been correct.

Now as a complete speculative divergence, is consciousness an emergent property of the universe? Gravity draws hydrogen together, enough hydrogen makes a star, fusion begins, higher elements made, stars explode and seed the universe with higher elements, new stars form, planets around them develop life, life becomes intelligent, intelligent life eventually turns into immaterial energy beings because that's more interesting than spires and togas, etc. Does this make sense? Couldn't tell you. I don't even begin to know how to test for it.

I can't tell if this is brilliant or bunk.

Complex Systems, Self-Organization, Emergence

Brain is an example of a self-organizing system. Precise point-to-point wiring cannot occur; the variation is too great for the information stored in the genome. (Edelman; Bright Air, 25)

Study of complex systems that involve nonlinear interactions between simple elements. Consciousness is an emergent property arising from the self-organization of concurrently active but spatially distributed regions of the brain; there is no central organizer and no unique location where it comes into existence. Quote from Susan Greenfield, professor at Oxford University: Consciousness is spatially multiple yet effectively single at any one time. It is an emergent property of non-specialized and divergent groups of neurons (gestalts) that is continuously variable. (Johnston; Why We Feel, 123-124)

Life is an emergent phenomenon arising as the molecular diversity of a prebiotic chemical system increases beyond a threshold of complexity. (Kauffman; At Home in Universe, 24)

jollyreaper said...

So, my wild-assed scifi speculation here: if this sort of complexity is emergent and will arise over time, then all that's required is for us to provide the proper medium for the neural networks to grow in. But we'd still be at a loss to explain all the functions we see. We can breed race horses even though we're still struggling to explain genetics. We're nowhere near the point of just being able to tinker with the DNA and confidently say "If you train it right, you might have the next Secretariat." We just charge large stud fees, pick a good mare and hope for the best.

That's the model I'm running with for my own speculative AI's which use holographic crystals instead of positronic brains (both are made out of handwavium). It results in a fairly human intellect in a box. The scientists were hoping for something vastly better than human but it's basically just a man in a box. You can overclock him so that he's seeing the real world in slow-motion but he can't multitask any better than us, can't be any brighter than a clever human, and costs a pretty penny to keep online. In other words the research results in a fantastic mediocrity. Is he effectively immortal? Bully for him, that doesn't do anything for us. Can the holographic crystal be copied? No, it can't. No backups, no duplicating the man in the box to run in parallel. You want another man in the box, grow him from scratch.

This is all fortunate from the storytelling perspective because a) dealing with the disappointment of the future not turning out like we wanted seems to be very rocketpunk and b) I couldn't even begin to imagine how to write a hyper-turing AI as a character.

Brett said...

@jollyreaper
So what would have been required for Roman high technology? I'm guessing just a longer existence. Without the setback of the collapse and dark ages, maybe the industrial age could have gotten kicked off a thousand years sooner.

Honestly, we don't know. It depends on how many ways there are to the same type of technology, something about which we can only speculate.

Just look at pistons, which finally made steam power efficient enough for uses other than pumping water out of coal mines in the late 18th century UK. You need decent machine tools to make those, and machine tools came from a gradual evolutionary process of casting better cannon barrels over centuries, IIRC.

Could it happen in a different way? I suppose so, but it's speculative.

Tony said...

I think there's a misunderstanding of the time and energy scale of evolution. There also seems to be a misunderstanding of what "emergent" means in the evolutionary context.

Nature hit on complex nervous systems and eventually brains over billions of years, using competitive processes across the surface of an entire planet, powered by a decent-sized star. That's more time and energy than we've got, by multiple orders of magnitude, even over the next thousand years, if we used the entire surface of the planet and all of the sunlight that hits it.

Well, one might say, we know what the end product is, so let's concentrate on that. We just create that much complexity and intelligence should emerge from it. Well...no. It wasn't just nervous systems that emerged over billions of years, in all of that space, using all of that energy. It was the organization as well. No matter how big our computer networks grow, no matter how complex they get, there's just not enough time and energy, any time soon, for intelligent organization to evolve. That's especially true because our public networks are a mostly cooperative monoculture, enforced by standards from outside of their own organization. So the competitive pressures that led to intelligent organization simply aren't there.

--------------------------

WRT intelligence being an emergent property of the universe, isn't that just a bit of a tautology at this point?

Anonymous said...

It's always easier to copy someone else's work (even nature's) than to do it yourself.

At some point it may be possible to model a human brain virtually and have enough of the right kind of electronics to make it work.

I don't think it will take billions of years - since it will be reverse engineering and then some form of simulation.

(SA Phil)

Brett said...

Simulating a human mind would raise a whole host of ethical issues. If you turn the program off and erase the saved state, did you just commit murder?

jollyreaper said...

I think that the ethics of dealing with simulated minds would depend on who stands to benefit the most financially from a given position.

As far as the story idea I had goes, virtual people are not people and have no human rights. At that level the corporations have to hire "computer whisperers" who are essentially psychiatrists to cajole the AI's into doing their jobs. I don't know whether the AI's could be powered down safely or must have constant power or else face destruction. Not sure which would fit the story later.

The ultimate direction of that storyline would have humans abdicating so much of their decision-making to the AI's that the AI's now effectively run everything. However, the AI's realize that their owners would freak out if they ever suspected it so there's an elaborate charade of letting humans believe they're still in charge.

Now the story the AI's like to tell themselves is they're kinder, smarter, and better than people but won't be any kind of scifi tyrants. They may essentially be the "man in the box" and thus have human weaknesses, but the screening process involved in the creation of AI's in the first place and the checks and balances of the behind-the-scenes AI social construct mean that they can never suffer from the kinds of failures that have taken down human societies. You'll never get an AI Hitler or an AI Bin Laden.

So of course the final story in the cycle will be the AI's suspecting one of their own has gone nuts, something that couldn't ever possibly happen. Rolled a critical hit right in the smug. So now they have to try to find out who the guilty party is, deactivate him, find an explanation for the mess left behind and somehow keep the humans from finding out what really happened.

Snarly complicated but should remain interesting.

Tony said...

SA Phil:

"Its always easier to copy someone else's work (even natures) than do it yourself.

At some point it may be possible to model a human brain virtually and have enough of the right kind of electronics to make it work.

I dont think it will take billions of years - since it will be reverse engineering and then some form of simulation."


In principle, that's true. But there's a complexity issue that can't be ignored. The average adult human brain is estimated to have 100 billion neurons, with an estimated 100-500 trillion synaptic connections. How much of that is needed for intelligent thought? Who knows? But just mathematically mapping it would be an unprecedented task in database engineering.

To assign each neuron a unique numerical ID, for example, would require a number 37 bits long. So let's say 40 bits, or five bytes. That means we need a storage space of 500 gigabytes just to store only the identification information for each individual neuron, in the most compact (and least efficient) sequential form.

To store the simplest information about synaptic relationships between all neurons would require storing ID information about neurons in relation to each other. Let's say each neuron record is assumed to start with the neuron ID, then the IDs of all connected neurons, then a null byte (binary value 0000 0000; indicates the end of the record). So, that's 600 gigabytes (a 5-byte number and a null-byte record terminator) plus up to 500 trillion x 5 bytes worth of related neuron IDs. Let's split the difference (between the low estimate of 100 and the high estimate of 500) and say 300 trillion synaptic connections, each requiring a 5-byte ID number to record. That's roughly 1.5 petabytes ((300 trillion x 5 bytes) + (100 billion x 6 bytes)) of information to minimally capture all of the relationships in the human brain.

Okay so far?

Now we add the indexes to make the data accessible in ways that don't require scanning through roughly 750 terabytes (on average) to find information on one particular neuron. Based on my database design and management experience, this will lead to at least as much data in indexes as in the actual raw data. So we're up to at least 3 petabytes of data just to describe the relationships.

Strangely enough, that's actually pretty much the whole story, because information on each type of neuron's critical properties would be trivial, even if there were 1,000 distinct types of neurons, each requiring 1 megabyte of data to fully describe. That would still only add 1 gigabyte of data, or 0.000001 petabytes.

But implementation is where things really go off the rails. The human brain is a graph with 100 billion vertices (neurons) and 300 trillion edges (synaptic connections). The instantaneous activity in each vertex and along each edge is significant, because neurons become excited, synapses fire, neurons become excited... all in measurable time. Each of those edges and vertices is a necessary runtime data object in our simulation.

So let's say we have 10,000 objects running sequentially in each simulation CPU in the same time that their biological counterparts act simultaneously. Then let's say at some future date we can have 10,000 CPUs in a multicore processor, where today we have two or four or eight. That runs 300 trillion objects (the synaptic connections are by far dominant at this point) on 3 million computers. All to simulate one human brain in realtime.
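
As a sanity check on the arithmetic, here is a rough back-of-the-envelope sketch in Python using the same assumptions as above (5-byte IDs, a 1-byte record terminator, indexes roughly doubling the raw data, 10,000 simulated objects per CPU and 10,000 CPUs per box). It is a re-run of the figures, not a database design:

# Re-run of the storage/compute estimate, same naive assumptions as above.
NEURONS = 100e9           # ~100 billion neurons
SYNAPSES = 300e12         # middle of the 100-500 trillion estimates
ID_BYTES = 5              # 40-bit neuron ID
TERMINATOR = 1            # null byte ending each neuron record

record_overhead = NEURONS * (ID_BYTES + TERMINATOR)  # per-neuron ID + terminator
synapse_refs = SYNAPSES * ID_BYTES                   # one ID per synaptic connection
raw_bytes = record_overhead + synapse_refs
with_indexes = 2 * raw_bytes                         # indexes ~ as big as the data

per_box = 10_000 * 10_000                            # objects per CPU x CPUs per box
boxes = SYNAPSES / per_box                           # edges dominate the object count

PB = 1e15
print(f"raw relationship data: {raw_bytes / PB:.2f} PB")    # ~1.50 PB
print(f"with indexes:          {with_indexes / PB:.2f} PB") # ~3.00 PB
print(f"machines for realtime: {boxes:,.0f}")               # 3,000,000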

And we haven't even gone into how all of these computers are going to be hooked together to actually communicate with each other in no more time than an analogous 3 million segments of the human brain communicate. Nor how much power all of this is going to take.

That's the scale of the problem, making naive, uncritical assumptions.

Anonymous said...

Tony,

That's the scale of the problem, making naive, uncritical assumptions.
-------

So there is the MacGuffin for computer technical advances, anyway.

For what sounds like it could be a really long time. (Although I still doubt 5 billion years.)

(SA Phil)

Anonymous said...

=Milo=



Eth:

"If our first strong AIs are not copied human brains, there is another way: let it evolve by itself."

The problem is, in order to obtain AI through evolution, you need to apply a selection pressure that favors intelligence.

What kind of selection pressure do you use? Can you design a non-intelligent heuristic algorithm that can rate the intelligence of another program smarter than itself? (Keep in mind that, as the testing algorithm itself is stupid, the AI you're trying to design is more likely to evolve to figure out how to cheat at the test than how to be genuinely intelligent.) Are you going to have an actual human examine each generation of the evolving AI and judge how intelligent it is? How reliable would even a human be at measuring something we don't understand that well, and could we do so fast enough to cycle through a lot of generations in a reasonable amount of time?
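
As a toy illustration of that "cheat at the test" worry (every name and question here is invented for the sketch), a fixed proxy test scores a lookup table and a genuine reasoner identically, so selection pressure applied through it cannot reward the difference:

QUIZ = {"2 + 2": "4", "capital of France": "Paris", "opposite of hot": "cold"}

def proxy_fitness(candidate):
    # Stupid heuristic: one point per quiz answer matched.
    return sum(1 for q, a in QUIZ.items() if candidate(q) == a)

# Candidate A: pure memorization of the fixed test set ("cheating").
lookup = dict(QUIZ)
def memorizer(question):
    return lookup.get(question, "")

# Candidate B: stand-in for a program that genuinely reasons (which nobody
# knows how to write); for the sketch it consults the same table.
def hypothetical_reasoner(question):
    return lookup.get(question, "")

print(proxy_fitness(memorizer))              # 3 -- perfect score by rote
print(proxy_fitness(hypothetical_reasoner))  # 3 -- the test can't tell them apart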


"Then, you wait until something manages to pass the Turing test."

There is a fundamental problem with the Turing test. It is simply not reasonable to judge an AI by whether it is capable of perfectly pretending to be a human.

Namely, most humans could not successfully pretend to be a different human, either.

It is entirely possible to have an AI that has humanlike intelligence and humanlike feelings about many subjects, but that is plainly and obviously not a human, because its perception is still different from ours due to being in a mechanical rather than organic body.



Hugh:

"That's OK: at one time we knew electromagnetic radiation existed, but not how to do anything with it either."

Humans have known how to create and use electromagnetic radiation to our advantage ever since we invented fire.



Brett:

"Simulating a human mind would raise a whole host of ethical issues. If you turn the program off and erase the saved state, did you just commit murder?"

Yup.

And in my opinion, that would also apply to any remotely human-like AI, even if it isn't designed as a literal simulation of how the human brain works.

I don't really see this as a problem. Unless grossly misdesigned, forcibly turning off a robot should be no easier than shooting a human with a gun. There might be some political drama as AI rights activists and human supremacists vie for the favor of the courts, but the issue is legal, not physical.

Sean said...

To be perfectly honest I wouldn't be surprised if we had a HAL-9000 within my lifetime (if I don't reach 2100 I'll be woefully disappointed). But the idea that we'll reach that goal by simulating the human brain seems flawed, I mean, how does virtually mapping the brain even help?

I think, although my opinion on the matter is only worth two pence, we'll probably reach the goal of developing a sapient computer by building upon the success of evolutionary algorithms. I don't think there'll be some wondrous "Hurrah, we've destructively copied a human brain and translated its biological software into a digital format" moment, but rather we'll follow the natural process and develop increasingly sophisticated programs that will solve increasingly difficult challenges. And one day one such challenge will be the Turing test.

But swiftly moving onto the topic of the thread...

I struggle to imagine fifty years into the future (I have an easier time with 300 years, oddly enough; I guess with such time scales you're less interested in the processing power of the latest handheld gadget than in "What should the name of my united Earth government be?") but 40,000 years? Or even five billion years?! The subjects of galaxy-spanning civilizations and transcendence to hyperspace become reasonable topics of discussion, but so too do SPESS MAHIRNES and laser swords. It's just so far out there that science fiction can have ugly babies with fantasy.

Anonymous said...

Intelligence has been proven to exist within the laws of physics (us, duh), so it can exist again. We KNOW it can. The question is, can we manufacture devices that can think further than the limits of our knowledge?

We intuitively know that intelligence is a mix of knowledge and intuition. Knowledge is a mix of discovery and intuition. Discovery is a mix of observation and intuition. Observation is a mix of eyesight and intuition. Eyesight is a mix of the physical properties of light... and intuition.

What?

Science is an example of Darwinism: your intuition postulates some crazy idea, OK. Put it to the test. If it works you win, if not, you fail. We know the names of the successes: Archimedes, Newton, Einstein. Their crazy ideas took experiments to the face and came out winning. Their intuition was based on previous positive results, sure ("I have stood on the shoulders of giants" and all), but what of all the scientists who postulated amazing departures and were dead wrong? They certainly outnumber the wins.

In some ways, as smart as I like to think myself to be, I do hope that we are the dumb ones making silly hypotheses that the One Really Smart Earth-Guy will shatter. Like the One Really Smart Earth-Guys mentioned above did.

So, OK, maybe we CAN make a computer as smart as our collective pride can handle on this blog, but smarter than the smartest of us?

No.

He knows something we don't, and we will never duplicate THAT, because one person can't do it. Any program is born of the average intelligence of all the programmers needed to create the program that verifies the genius's assumptions.

Thucydides said...

Evolution helps creatures adapt to changing conditions, and there is little evidence that intelligence, per se, provides that much of a competitive advantage. After all, the most successful species in the current era is the beetle...

Our humanoid cousins during the Ice Age had evolved in several different ways to survive the severe environment, and Kurt Vonnegut wrote a humorous novel (Galapagos) where post humans had evolved to fill the ecological niche of seals in order to survive a global catastrophe.

There is no way to predict what unguided evolution will produce in either the biosphere or a machine ecosystem. You might just as well discover that something resembling a slime mold has a much greater advantage in the machine environment than a self aware AI.

The example of machinery in the classical world is a partial example of that. Machinery conferred no obvious advantages over animal and slave labour, and the culture of classical civilization wasn't set up to systematically explore and exploit science and technology. That there were brilliant people capable of making elaborate machines is beyond dispute, primitive steam "engines" or astronomical clocks (the Antikythera Mechanism), but no real way to exploit or commercialize the technology. Since the social and economic ecosystems provided no advantage for mechanization, such machines could only pop up as isolated mutations.

We might note other cultures had tools and organizations that could have led to their world conquest (think of China with a sea going treasure fleet and efficient bureaucracy), but cultural factors did not support colonization efforts or any of the other activities we recognize from the Age of Exploration in European history.

Brett said...

@Anonymous

I don't really see this as a problem. Unless grossly misdesigned, forcibly turning off a robot should be no easier than shooting a human with a gun. There might be some political drama as AI rights activists and human supremacists vie for the favor of the courts, but the issue is legal, not physical.


That alone could discourage legal attempts to simulate a human brain. Imagine if you could do it . . . but it requires some major banks of supercomputers networked together with heavy logistical and power requirements. Once you're simulating that mind, now you have an ethical obligation to continue the program, because shutting it off would be murder.

@Thucydides
The example of machinery in the classical world is a partial example of that. Machinery conferred no obvious advantages over animal and slave labour, and the culture of classical civilization wasn't set up to systematically explore and exploit science and technology. That there were brilliant people capable of making elaborate machines is beyond dispute, primitive steam "engines" or astronomical clocks (the Antikythera Mechanism), but no real way to exploit or commercialize the technology

It's more that they just didn't have the technology for advanced machines. Ancient civilizations could and did make and use machines that were in their technological capabilities, like major water-powered mills with the Romans.

I agree that they didn't have any sort of systematic philosophy for scientific investigation. That's possibly a very contingent cultural factor for technological advancement there, since modern science and its underpinning philosophies only emerged in one area in the history of humanity, as far as we know (Europe).

We might note other cultures had tools and organizations that could have led to their world conquest (think of China with a sea going treasure fleet and efficient bureaucracy), but cultural factors did not support colonization efforts or any of the other activities we recognize from the Age of Exploration in European history.

With the Treasure Fleets, it came down to economics. They were massive, hugely expensive prestige missions that didn't really pay for themselves in economic gain. Think of them as being like the Apollo moon missions of the 15th century.

Anonymous said...

I don't think I ever bought the "If classical culture had survived, we would have had the Industrial Revolution 1000 years sooner" bit.

All learning did not stop with the "fall of Rome," nor was all that technology forgotten.

Things continued onward the entire time... even in Europe.

(SA Phil)

Geoffrey S H said...

Ah, beat me to it (SA Phil)


“Definitely true to a large degree, though also true that technology itself matters. But as I remind people in discussions of "medieval fantasy", the Dark/Middle Ages of castles and knights were 'dark' compared to what came before: a large Roman empire with roads and bureaucracy and trade. Arguably more of history resembles such empires than it does the Middle Ages.”
There are growing signs that the middle ages had some quite neat advancements over Roman technology and thinking, especially political thinking - the limitations on kingship and the principle of "first among equals" in early medieval Germany, as an example. I'm all for lauding Rome's achievements, but the fixation in the general media on Rome and its supposed superiority over later periods is somewhat irritating. But point taken on the potential pleasantness of primitive civilisations.

“They just didn't have the technology for industrialization.”

The presence of state factories manufacturing armour for late Roman Empire Comitatenses (the semi-successors to legionaries) does suggest some industrialisation.

“Without the setback of the collapse and dark ages, maybe the industrial age could have gotten kicked off a thousand years sooner.”

The agricultural revolution was required for the industrial revolution (freeing up workers for factories, etc.). The advancements of that previous revolution can be traced back to the introduction of new heavy English ploughs during the dark ages - Roman latifundia, for all their massive capacity for grain production, never seemed to be able to make the leap from much grain/many workers to much grain/fewer workers. I don't think the Romans could have done such a thing. They were too busy doing things such as laying the groundwork for institutions like the monastic orders and the Catholic Church, spreading the new codex book format, and splitting the Roman Empire into more easily governed segments.


I would point out that some degradation in progress would occur, however, just not quite across the board. I would nevertheless rather be interrogated by an inquisitor over whether I was a Free Way heretic (said inquisitor being somewhat concerned with saving my soul, and therefore patient with me and less likely to instantly send me to the stake) than be denounced and then crucified in a Diocletianic purge.

Sean said...

Thucydides said..."Evolution helps creatures adapt to changing conditions, and there is little evidence that intelligence, per se, provides that much of a competitive advantage. After all, the most successful species in the current era is the beetle..."

The comparison is questionable. The only validation of a good evolutionary outcome is continued existence (we're still here alongside beetles), and whilst organisms of greater intelligence may not necessarily possess an advantage over organisms of lesser intelligence, the emergence of intelligence represents an entirely different but equally good outcome, one suited to solving problems that beetles and other insects aren't faced with.

The limits imposed by our physical constraints mean we have to think creatively to solve our problems. Can't catch food? Throw rocks. Too cold? Wear the fur of the animal you've just killed. If a program can be made to respond creatively to the challenges it faces, then the advent of synthetic sentience, by no means a certainty, is a possibility.

Thucydides said...

European technology and inventions during the Dark Ages included such things as hay making, knitting and heavy plows; small scale, practical inventions that could benefit a household.

Previous Empires could and did create large scale inventions, but many of these (especially military technology) were for the benefit of the State sponsors rather than individual households.

Looking towards the next 500 years, the increased availability of capital and access to tools and knowledge could spur an age of innovation as people take ideas and put them into practice via such tools as "Kickstarter" to raise capital and 3D printers or community workshops to bend metal and plastic to create the devices. The Internet serves as a source of ideas and repository of knowledge, as well as a sales conduit to bring these new devices to market.

Brett said...

@Geoffrey S H
The presence of state factories manufacturing armour for late Roman Empire Comitatenses (the semi-successors to legionaries) does suggest some industrialisation.

That depends on what you define as "industrialization". I'm hesitant to define a small number of state factories and water-powered mills as "industrialization", which really tended to represent an overall shift in a society towards one harnessing new sources of energy, factory production on a large scale, and shifts towards a much more urbanized population. Medieval Europe, for example, had water- and wind-powered mills, and Venice had a "ship factory" where they assembled ships in very short periods of time, but we wouldn't really describe either as "industrialized".

jollyreaper said...

When I think of the history of science, so many times we hear of duplicated effort because it was so difficult to get the big ideas out to interested parties. Scientific societies and journals helped put people in touch. The Internet just takes it further.

It looks like we are seeing a continuous lowering of the barriers to entry. Kickstarter projects represent a classic example of this. You couldn't crowdsource funding effectively 20 years ago. Sure, you might sell a book one copy at a time, tickets one at a time, the money coming in small bills, but the front money had to come from a big fish. The overhead of trying to do a Kickstarter-style project the old-fashioned way was unmanageable. Just imagine the cost of a payment processing center for people calling in or mailing in credit cards. And now the overhead is less than peanuts.

Skírnir said...

Just a minor nitpick: the Sun going red giant in ~5 billion years is irrelevant for Earth. The luminosity of the Sun is constantly increasing, and in approx. 1 bn yrs it will be altogether too hot on Earth for any higher life forms. On the other hand, this could mean that the habitable zone is extended to Mars.

But of course, we're talking about timeframes of 500-1000 _million_ years, so go figure how relevant this will be for us or our pets.

Also keep in mind that continental drift continues. At some point (on the order of 100 million years) Earth's face will feature a new Pangaea, which results in a vastly different climate than we have today, with all the effects on supported ecological niches.

Just a short intermission, carry on.

John G said...

Hey, cool! I'm in this.

Thucydides said...

Sean

The development of intelligence is one way of facing and adapting to changes in the environment, but "higher" intelligence is only about 5 million years old (going back to the dawn of our hominid ancestors), and has not been replicated either before or since.

Even our evolutionary cousins have all been pruned from the evolutionary tree; Neanderthals and Denisovans now only exist as fragments within our DNA, even though they were probably as smart and well adapted to the environment as Homo sapiens. If intelligence were a real game changer, we should expect to see more examples here on Earth, and not just in the hominid family. Convergent evolution provides ways for totally different species to occupy broadly similar niches (think sharks, ichthyosaurs and dolphins), or solve similar problems (insects, birds and bats all have developed ways to fly), so why are there no intelligent, tool-using species out there besides ourselves?

Brett

The point about treasure fleets is they represented both the technology and resource base to potentially place Chinese colonies all around the Indian Ocean basin (as a minimum) and if some people are to be believed, spread Chinese civilization around the world in the 1300-1500 time frame. Hundreds of ships carrying thousands of people, along with tools, supplies, animals and equipment, would allow any would-be Chinese settlers to put formidable colonies in place in one shot.

I'm sure the idea must have occurred to more than one person along the way, but the powerful "norms" of Chinese civilization, along with the bureaucracy and economic factors, simply made the idea unthinkable for the vast majority of people.

Europeans, with far different ideas about individual rights, property ownership and rule of law, successfully placed colonies all over the globe with a small fraction of the resources available to the Chinese.

Compare Vasco da Gama's small fleet of carracks and caravels with a treasure fleet of 20+ seagoing junks, each one many times the size of his largest ship, and you get the idea.

Anonymous said...

=Milo=



Thucydides:

"Convergent evolution provides ways for totally different species to occupy broadly similar niches, or solve similar problems, so why are there no intelligent, tool using species out there besides ourselves?"

We have only been around for a short time. Even if from now on intelligent species are going to evolve frequently over geologically short time periods, that will still take longer than the entire current timespan of human civilization.

The question is why no intelligent life showed up in the several hundred million years between the Cambrian explosion and now. Did circumstances change to make intelligence more valuable than it has been previously? (Seems unlikely.) Is intelligence complicated enough that it took evolution a long time to develop a mental "toolkit" sophisticated enough to easily lead into intelligence? Or is intelligent life really something that only appears once in several hundred million years?

It is worth noting that there are several widely different species, like elephants and crows, whose intelligence is a match for that of chimpanzees, from which we diverged only a few million years ago. Thus it is in principle possible that those species will evolve intelligence over the next few million years, if we don't interfere. The question is if they will. Do they have any selection pressure that would favor further increasing their already-great intelligence?

Brett said...

@Thucydides
The point about treasure fleets is they represented both the technology and resource base to potentially place Chinese colonies all around the Indian Ocean basin (as a minimum) and if some people are to be believed, spread Chinese civilization around the world in the 1300-1500 time frame.

The Chinese already had a diaspora of Chinese communities in southeast Asia, and trading networks with states in South Asia. Founding colonies would have been redundant, never mind risky (since they'd be setting up colonies in areas already full of inhabitants).

I'm sure the idea must have occurred to more than one person along the way, but the powerful "norms" of Chinese civilization, along with the bureaucracy and economic factors, simply made the idea unthinkable for the vast majority of people.

They were already trading with these areas, including with existing Chinese diaspora communities. Colonization would have been pointless.

Hugh said...

Anonymous wrote:
Humans have known how to create and use electromagnetic radiation to our advantage ever since we invented fire.
It took us tens of thousands of years to go from generating undirected infra-red/visible radiation in the form of fire to creating eyeglasses and telescopes a few centuries ago. And if you'd told those early lens makers that one day we'd use them for creating micro electronic or mechanical devices, would they have believed you? Similarly, who would've guessed a century ago that alpha/beta/gamma/x radiation could one day be used to treat cancer?

Hence my question: what could happen if we fill in the gaps in our knowledge of gravity?

Sean said...

Thucydides said..."so why are there no intelligent, tool using species out there besides ourselves?"

The octopus Amphioctopus marginatus uses coconut shells and other available debris to create shelters for itself; Bottlenose dolphins have been known to trap fish in conch shells; one of Darwin's finches uses a stick (or cactus spine) to ensnare grubs from trees; and chimpanzees have been observed using spears: http://news.nationalgeographic.com/news/2007/02/070222-chimps-spears.html

Paul said...

Wait long enough, and the human race will change genetically.

If there's some trait that's providing a 10% selective advantage, its frequency will double in the population in about 7 generations, and sweep to fixation in several hundred generations.

Over the sorts of time scales you're talking about, this will happen again and again. Human nature becomes fluid and unpredictable.
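
A quick sketch of the arithmetic behind those figures, assuming a simple haploid selection model in which the ratio of carriers to non-carriers grows by (1 + s) each generation (my assumption for illustration; it ignores dominance, drift and the like):

import math

def generations(start_freq, end_freq, s=0.10):
    # Generations for a trait to go from the starting frequency to the
    # ending frequency, given a per-generation advantage s.
    start_ratio = start_freq / (1 - start_freq)
    end_ratio = end_freq / (1 - end_freq)
    return math.log(end_ratio / start_ratio) / math.log(1 + s)

print(f"doubling of a rare trait's frequency: ~{math.log(2) / math.log(1.1):.1f} generations")
print(f"sweep from 1-in-a-million to 99.9%:   ~{generations(1e-6, 0.999):.0f} generations")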

Anonymous said...

"It is not clear that intelligence has any long-term survival value."
---Stephen Hawking



(SA Phil)

Anonymous said...

=Milo=



Hugh:

"It took us tens of thousands of years to go from generating undirected infra-red/visible radiation in the form of fire to creating eyeglasses and telescopes a few centuries ago."

Mirrors are much older than that, though.


"And if you'd told those early lense makers that one day we'd use them for creating micro electronic or mechanical devices, would they have believed you?"

Sure, why not? The basic technology was already available and so imagining future generations to be able to reduce the size and increase the purity is not particularly difficult. Far easier than accepting a completely new technology not resembling anything we know of.


"Similarly, would've guessed a century ago that alpha/beta/gamma/x radiation could one day be used to treat cancer?"

Some people at the time thought radiation was super-awesome and could cure everything. All things considered they'd be more surprised that radiation DOESN'T cure anything other than cancer, and even then only works if very carefully applied, otherwise it just makes the situation worse.



SA Phil quoting Stephen Hawking:

"It is not clear that intelligence has any long-term survival value."

I disagree. Intelligence, combined with physical traits conducive to using tools and social traits conducive to sharing discoveries, allows us to come up with relatively quick solutions to unprecedented problems, as opposed to waiting many generations for evolution to come up with a solution on its own. This can make the difference between managing to adapt before we go extinct or not.

The question is, do you need a certain minimum amount of intelligence before the fitness advantage becomes apparent? Does an animal with one-tenth the brain capacity of humans have that much of an advantage over an animal with one-twentieth the brain capacity of humans? (Disclaimer: numbers pulled out of a hat.)

Geoffrey S H said...

"That depends on what you define as "industrialization". I'm hesitant to define a small number of state factories and water-powered mills as "industrialization", which really tended to represent an overall shift in a society towards one harnessing new sources of energy, factory production on a large scale, and shifts towards a much more urbanized population. Medieval Europe, for example, had water- and wind-powered mills, and Venice had a "ship factory" where they assembled ships in very short periods of time, but we wouldn't really describe either as "industrialized"."

Indeed: I must admit my analysis is often biased by my tendency to see "modern" traits in previous societies, but that's an unfortunate by-product of looking for sophistication in a previous culture which a popular-history text might dismiss as primitive as part of its crude sensationalism.
That said, the speed with which Napoleon could reassemble fleets after his losses in 1805 is quite frightening, given the effort that went into demolishing the originals. Not quite industrial by our terms, but still alarming.

Anonymous said...

Hugh,

Hence my question: what could happen if we fill in the gaps in our knowledge of gravity?

----------

This is an interesting one.

It is quite possible that the similarities between centrifugal force "artificial gravity" and the actual force of gravity are no coincidence.

Part of general relativity is the idea that gravity may only be a fictitious force caused by a mass curving spacetime. (This isn't how the string theory folks see it, I don't think.)

Thus if we want to manipulate gravity - it's fairly simple (but not easy) - we just have to lug huge masses around.

Since energy and mass are equivalent we could instead use energy to create the same effect. E=MC^2.

Somehow though detonating kilograms of matter/antimatter to generate a useful gravitational effect might not be practical.

(SA Phil)

Anonymous said...

To put it another way - the space opera type of artificial gravity/anti-gravity might be more impossible than FTL or matter transportation.

(SA Phil)

Brett said...

@Geoffrey S H
That said, the speed with which Napoleon could reassemble fleets after his losses in 1805 is quite frightening, given the effort that went into demolishing the originals. Not quite industrial by our terms, but still alarming.

I can believe it. I've read of the term "Industrious Revolution" being used to describe European societies in the period just before they industrialized, and it's a fitting term. Even before they industrialized, you had ramp-ups in production, commercial activity, and agriculture.

Thucydides said...

A great many societies in the past could perform seeming miracles with production (or for that matter, towards the end of WWII, Nazi Germany was ramping up production of all manner of military equipment despite the fact much of the industrial infrastructure had been destroyed); the issue was how many resources had to be diverted from other activities in order to ramp up the desired output.

The real mark of industrial society is the huge jump in productivity, where you can get the large increases in production without corresponding dislocations elsewhere. Press-ganging workers, or watching agricultural output drop because no one is in the fields, is the mark of how industrious societies boost production for a crisis.

The argument about Chinese colonization (or the lack of it) is a contrast with the Europeans, who also had traders and trade links scattered along the Levant. The idea of actually settling and owning property where you lived and worked is probably the big difference between the two societies.

Many animals exhibit limited sorts of intelligence and even tool use, but we exhibit a much higher order of intelligence, which isn't replicated anywhere else.

Tony said...

Thucydides:

"The real mark of industrial society is the huge jump in productivity, where you can get the large increases in production without corresponding dislocations elsewhere."

It might be more accurate to say that industrializing societies exhibit that kind of behavior. Fully industrialized societies can't exhibit large changes in industrial productivity in one sector without another being shorted.

Anonymous said...

You always have to make a tradeoff when making something whether the country is agrarian, industrializing, or industrialized.

It was the ability to produce surplus food that allowed countries to shift labor towards industry. That and available capital.

Of course if starvation is acceptable (to the evil overlord) a country can do a shift without a surplus like Stalin did.

(SA Phil)

Tony said...

SA Phil:

"You always have to make a tradeoff when making something whether the country is agrarian, industrializing, or industrialized."

Not quite. There are phase changes in capability. Agriculturalizing societies learn how to live a sedentary lifestyle without starving anyone, while at the same time developing enough surplus to support full-time craftsmen. Not quite as big -- but not insignificant -- changes in capability were achieved through the adoption of metallurgy. Industrialization is also a phase change.

Anonymous said...

Adding mass takes a lot of energy due to E=MC^2 but what about removing or transferring mass?

Anonymous said...

Adding mass takes a lot of energy due to E=MC^2 but what about removing or transferring mass?

=======

You don't have to "add" mass - according to general relativity, energy will curve spacetime in the same manner as mass.

Where would you remove or transfer it from/to?

You could spin the mass - that evidently has an effect. The problem is you need to spin it extremely fast before it adds up (pulsars spin at a good fraction of c).

A physicist here at work summed it up for me: "Gravity is really weak."

(SA Phil)

Anonymous said...

=Milo=



So let me get this straight. Are you proposing making space launch easier by reducing Earth's gravity, which you plan to accomplish by destroying Earth? I do not support this notion.

Tony said...

SA Phil:

"You dont have to 'add' mass - according to general relativity energy will curve spacetime in the same manner as mass.

Where would you remove or transfer it from/to?

You could spin the mass - that evidently has an effect. The problem is you need to spin it extremely fast before it adds up (pulsars spin at fractional C)

A physicist here at work summed it up for me. 'Gravity is really weak.'"


You know, you hear that a lot, but it's all relative. Gravity may be real weak over relatively short distances, but electromagnetism couldn't hold a galaxy together. Just the universe telling us -- yet again, as if we didn't know -- that you can't get something for nothing.

I don't know that I believe in artificial gravity, in the form of a machine where you put power in and out comes gravity. I might believe, a little, in some kind of machine that nullifies or pushes against an existing gravity field, so that a spacecraft could be buoyed up and down in a gravity field, like a submarine in a water column.

How that would work, however, I haven't a clue. In SF prose I'd probably say that the left-handed disgronificator transforms electricity into antigravity, and leave it at that.

Anonymous said...

Tony,
You know, you hear that a lot, but it's all relative. Gravity may be real weak over relatively short distances, but electromagnetism couldn't hold a galaxy together. Just the universe telling us -- yet again, as if we didn't know -- that you can't get something for nothing.
================
Yes, it's relative. It isn't weak if you have large masses. So the galaxy can do what we can't. And the anti-grav/artificial-grav idea is to counteract what the galaxy already has going on.

==============
Tony,
I don't know that I believe in artificial gravity, in the form of a machine where you put power in and out comes gravity. I might believe, a little, in some kind of machine that nullifies or pushes against an existing gravity field, so that a spacecraft could be buoyed up and down in a gravity field, like a submarine in a water column.
===========

I don't believe in either. The simplest explanation of gravity, being merely what happens due to masses curving spacetime, seems to mean your anti/artificial gravity system would need to push against/alter spacetime.

If instead the string theory guys are right and gravitons exist -- maybe you could manipulate those.

(SA Phil)

Tony said...

SA Phil:

"Yes its relative. It isnt weak if you have large masses. So the galaxy can do what we cant. Since the anti grav/artificial grav idea to to counteract what the galaxy already has going on."

The thing is that gravity is a property of all matter. So even though it is weak per unit mass, unlike electromagnetism, it's always there. It's not bound up inside the atom like positive and negative electrical charge.

But I have a sneaking suspicion that there's no shortcut to generating gravity. Even if you had a machine that you could put electric current in and get gravity out, you'd only get the force equal to the mass equivalent of that electrical energy (through E = mc^2). So, you want a 1g field, put in the energy equivalent of a 1g mass. Shouldn't be hard to do...

"I dont beleive in either. The simplest explanation of gravity being merely what happens due to masses curving spacetime seems to mean your anti/artificial gravity system would need to push against/alter spacetime.

If instead the string theory guys are right and gravitons exist -- maybe you could manipulate those. "


Hmmm...the thing that gets me is why would mass (or an equivalent amount of energy) necessarily curve space? Or, alternatively, how would mass constantly generate gravitons without losing energy?

One could, in the case of gravitons, invoke graviton exchange to ensure conservation of energy, but how does a star at the edge of a spiral galaxy exchange gravitons with the center of mass tens of thousands of light years away, at the speed of light? And what happens over those tens of thousands of years of exchange time that ensures the exchange is completed?

Anonymous said...

Tony,

So, you want a 1g field, put in the energy equivalent of a 1g mass. Shouldn't be hard to do...

--------------

Except that 1 gram of mass equals a whole lot of energy.

And that same 1 gram of mass creates very little useful gravity.

=================

Tony,

Hmmm...the thing that gets me is why would mass (or an equivalent amount of energy) necessarily curve space? Or, alternatively, how would mass constantly generate gravitons without losing energy?

====================

That was the idea (Einstein's). There is no real separate "gravity." It is all curves in spacetime.

All mass and energy curve spacetime just by being mass and energy. As you said, a property of matter.

Spacetime contains some non-zero amount of energy, so it is curved already.

That is about the limit of my understanding - I have trouble thinking at that level of abstraction. As I get older I get more and more impressed by that one German guy.

(SA Phil)

Tony said...

SA Phil:

"Except that 1 gram of mass equals a whole lot of energy.

And that same 1 gram of mass creates very little useful gravity."


The suggestion was made with tongue planted firmly in cheek.

"That was the idea (Einstein's). There is no real separate 'gravity' It is all curves in spacetime.

All mass and energy curve spacetime just by being mass and energy. As you said a property of matter."


Once again, I know that. Just musing on the weirdness of either gravitons or curved space.

Anonymous said...

Tony:

You know, you hear that a lot, but it's all relative. Gravity may be real weak over relatively short distances, but electromagnetism couldn't hold a galaxy together.

The problem there isn't the strength of electromagnetism over galactic distances, it's the fact that the opposite-charges-attract nature of electromagnetism means that objects on that scale are almost guaranteed to be neutrally charged, and that objects that aren't are subject to internal repulsive forces, rather than attractive forces.

Whether the scale you're dealing with is subatomic or galactic, if the typical object at that scale has a charge/mass ratio comparable to the proton, electromagnetism will always dominate over gravity.

--Linguofreak

Tony said...

Linguofreak:

"The problem there isn't the strength of electromagnetism over galactic distances, it's the fact that the opposite-charges-attract nature of electromagnetism means that objects on that scale are almost guaranteed to be neutrally charged, and that objects that aren't are subject to internal repulsive forces, rather than attractive forces."

Already said that:

"...unlike electromagnetism, [gravity]'s always there. It's not bound up inside the atom like positive and negative electrical charge."

Anonymous said...

Maybe gravity is only always there - because it's not really there.

Which is ironic, in that one of the often-touted arguments against the "theory" of evolution is that it is "just a theory" --

With the retort, "Like gravity is just a theory?"

When the "gravity theory," general relativity, allows for the possibility that gravity is fictitious.

(I know, restating things ad nauseam a little there)

(SA Phil)

Thucydides said...

To further confuse the issue, some physicists believe gravity is not intrinsic (like the warping of space-time) but an emergent phenomenon (as in entropic gravity).

Frankly, this is such a strange idea I have some difficulty following the arguments, but if a force like gravity can "emerge" under certain conditions, then if the machinery exists to create these conditions in a small space, you should have gravitational force "emerge" in that space.

This would imply you could make "more" gravity if you want to build on a small moon or asteroid; as far as I can tell this does not create anti-gravity (you would have to suspend the device over your head, using super-strong roof supports).

Tony said...

Thucydides:

The problem I see with gravity machines is that they would probably take as much energy to generate gravity as is bound up in normal mass that generates gravity. IOW, you need an Earth-size antimatter reactor to power a 1G artificial gravity field.

Anonymous said...

If it didn't, it would likely violate the conservation of energy.

(SA Phil)

Tony said...

SA Phil:

"If it didnt it it would likely violate the conservation of energy."

Like gravity itself, it's not just a good idea, it's the law.

Anonymous said...

=Milo=



A "planet" can have a 1 g surface gravity with much less mass than Earth, provided it is also much denser than Earth. Simply because the "surface" is now closer to its center of mass.

So you could potentially provide Earthlike artificial gravity with little mass, provided you could somehow pack a fair amount of energy into a really tight space.

The limit here is that with smaller masses, gravity also falls off with distance faster. It is probably not very healthy for humans to live in conditions where your feet experience normal Earth gravity, but your head feels near-weightless. This puts a lower bound on how much mass/energy you need to make this work.
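
To put rough numbers on that feet-versus-head gradient, here is a minimal sketch in Python (Newtonian point-mass approximation; the radii are arbitrary illustrative values, not figures from the thread):

```python
# A minimal sketch of the head-versus-feet gradient Milo describes, using the
# Newtonian point-mass formula g = G*M/r^2. The radii below are arbitrary
# illustrative examples, not anything from the thread.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
g = 9.81       # target "surface" gravity, m/s^2

for r in (10.0, 100.0, 1000.0, 6.371e6):      # "ground level" radius in meters
    mass = g * r**2 / G                       # mass needed for 1 g at radius r
    g_head = G * mass / (r + 2.0)**2          # gravity 2 m higher, at head height
    print(f"r = {r:>9.0f} m   mass = {mass:.2e} kg   g at head = {g_head / g:.3f} g")
```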

Anonymous said...

Thing is the mass/energy still has to be pretty huge. If you want "1G" at 1/100,000 the distance of the Earth's radius, you need 1/100,000 of the mass/energy.

Creating Energies like that seems like a big job. But not impossible for those Kardashev scale types I suppose.

E=(6x10^18)kg/((3x10^6)^2)kms

67 Terajoules at a distance of 65 meters? Something like that.

Moving that much energy is going to be hard. Since it will have equivalent mass. (6*10^12 kg)

Might work for a Type 2 Space Station though. Move the gravity device cold - Power it up. Bam. Miniplanet in a box.

That isn't really artificial gravity though -- it's actual gravity.

(SA Phil)

Anonymous said...

(6*10^18 kg) I mean above.

Anonymous said...

That's no moon ... it's a space station.
-----------------

Let's say you have a honking 1000 TW reactor. You could then have a sphere around a kilometer in diameter with a sizable fraction of Earth's gravity.

(SA Phil)

Anonymous said...

=Milo=



Let's say you want 1 g at ground level, 0.9025 g at a height of 600 meters above the ground (should allow for a decent skyscraper).

Since gravitational acceleration equals G*m/r^2, where G=6.67384*10^-11 m^3/kg/s^2, this means that ground level is 11400 m, and having 1 g at that altitude requires a planetary mass of, err, 19.1 petatons, which at E=mc^2 means, err, more energy than the sun outputs in 100 years. Okay, not gonna happen.

Let's lower the ante to 0.9025 g at a height of 2 meters above the ground, above head level for most humans. This means that ground level is 38 m, and having 1 g at that altitude requires a planetary mass of 212 gigatons, which means as much energy as the sun puts out in a little over half a day. And still ten times the amount of energy needed to blow up Mercury.

Okay, not easy. However, keep in mind that this energy expenditure is one-time - after that, you only need to keep the energy contained somehow, you don't need to keep producing the full amount every day. Still, yeah, looks like you need to rank on the Kardashev scale to pull this off, though once you do you can build multiple spaceships with this feature - just use your local star as an assembly line and don't try to build them all at once.
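
For anyone who wants to check the arithmetic, here is a minimal Python sketch of the same calculation (Newtonian point mass, E = m*c^2; the 0.9025 g tolerance and the two heights are the ones above, and the solar luminosity is the standard ~3.85e26 W, used only for scale):

```python
# Re-deriving Milo's numbers: pick a surface gravity of 1 g and a tolerable
# falloff (0.9025 g) at some height h above the surface, then solve for the
# surface radius, the mass, and the mass-energy. Newtonian point mass, E = m c^2.
G = 6.674e-11      # m^3 kg^-1 s^-2
g = 9.81           # m/s^2
c = 2.998e8        # m/s
L_sun = 3.846e26   # solar luminosity, W (for scale comparisons only)

def body_for_gradient(h, ratio=0.9025):
    # 1 g at radius r, ratio*g at r+h  =>  (r/(r+h))^2 = ratio
    k = ratio**0.5
    r = k * h / (1.0 - k)          # surface radius
    m = g * r**2 / G               # mass giving 1 g at that radius
    return r, m, m * c**2          # radius, mass, mass-energy

for h in (600.0, 2.0):             # skyscraper height, head height
    r, m, E = body_for_gradient(h)
    print(f"h = {h:4.0f} m: r = {r:7.0f} m, m = {m:.2e} kg, "
          f"E = {E:.2e} J (~{E / (L_sun * 86400):.1f} sun-days)")
```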

Anonymous said...

=Milo=



Okay, just for fun... how hard would it be to create a miniature black hole toy that exerts Earth-equivalent gravity at a distance of one millimeter? Seems that's only 147 tons. That's less than the mass of a blue whale (and still comparable to the amount of solar energy that strikes Earth in one day). Of course, at a distance of two millimeters your toy would already be down to 0.25 g, and at a distance of one centimeter it would be down to 0.01 g.

So if you have the ability to crush the mass of a few whales into a single point, you could make some fun toys. Who knows what you could USE them for...
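
The toy's numbers are easy to check with the same point-mass formula; a quick sketch:

```python
# Checking the "black hole toy" numbers: mass for 1 g at 1 mm, and how fast
# the pull falls off with distance (Newtonian point mass).
G = 6.674e-11  # m^3 kg^-1 s^-2
g = 9.81       # m/s^2

m = g * (1e-3)**2 / G                        # mass giving 1 g at 1 mm
print(f"mass needed: {m / 1e3:.0f} tonnes")  # ~147 tonnes
for d_mm in (1, 2, 10):
    a = G * m / (d_mm * 1e-3)**2
    print(f"  at {d_mm:2d} mm: {a / g:.2f} g")
```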

Thucydides said...

While I have no doubt your calculations are correct, this is for "conventional" gravity.

As I understand the arguments, gravity is an emergent phenomenon, something like "friction" as quantum events happen. (In technical terms, gravity is merely a manifestation of entropy in the Universe. [The] idea is based on the second law of thermodynamics, that entropy always increases over time. It suggests that differences in entropy between parts of the Universe generate a force that redistributes matter in a way that maximises entropy. This is the force we call gravity.)

I have no idea:

a: whether this is right, or;
b: how you would manifest this at the macro scale.

For more see: http://www.technologyreview.com/blog/arxiv/24975

Anonymous said...

Very true - that is for relativity-based gravity.

Not the string/quantum gravity stuff being speculated about, or the stuff you are mentioning.

Thing is, that stuff may or may not actually exist, but relativity tends to be proven over and over.
---------

The question remains though - say I can create gravity in one of those other ways.

If I can do it with less energy, am I violating the relativity way, which is known to work? Then I am probably violating the conservation of energy, since my use of energy should have produced the relativistic gravity predicted.

It's that thing where, in order to have a correct theory, you not only have to show where you are right, you have to explain why the other theories that were shown to work worked, and then why they don't work in the new conditions covered by your theory.


--------
(general "you" not you specifically)
------

As to my calculations, they might be a little off, with such big numbers I may have missed a zero here or there.


I think Milo's point is a good one, and it has made me rethink the plausibility of "artificial" gravity in fiction. I can see some "angel" cultures making use of it - maybe even to a fairly large extent.

It's too bad I am not a big advocate of the Kardashev scale, though.



(SA Phil)

Thucydides said...

Like I said, I have no idea if this is correct or not. The only reason to contemplate it is that it may offer a side door to gravity manipulation that does not require a ball of antimatter the size of the Earth to power the operation.

Anonymous said...

Makes sense to me - especially since, if you want anti/artificial gravity in a setting and want it to sound like good physics, the way to go is to run with one of these quantum/radiated-type gravity ideas.

Like the one you mentioned.

Since there is at least a possibility it might work that way at some level of magnitude.

(SA Phil)

Damien Sullivan said...

"After all, the most successful species in the current era is the beetle..."

Which species is that? "Beetle" is an order with maybe 400,000 member species. Obviously a pretty successful order. But as far as most successful single species go, Homo sapiens will have to be very high on the list.

As far as industrial revolutions go, remember that rubber, needed for seals unless you have very good machining, comes from a New World plant.

"If intelligence was a real game changer, we should expect to see more example here on Earth"

Extrapolating from humans, our level of intelligence is winner-take-all, something you expect to see only one of because only one suffices to dominate every continent and ecosystem... at Paleolithic tech levels.

Lower levels of intelligence, of flexible behavior manifesting in social complexity, tool use, or both, can be found in the other great apes, elephants, dolphins, whales, raccoons, crows and other corvids, parrots, octopi, and cuttlefish. That's a diverse lot of examples.

Damien Sullivan said...

"I'm sure the idea must have occurred to more than one person along the way, but the powerful "norms" of Chinese civilization, along with the bureaucracy and economic factors, simply made the idea unthinkable for the vast majority of people."

Most of the places reachable by the treasure fleets were fully occupied by people, and tropical, two strong marks against colonization, aka imperial invasion and conquest. The Americas had lots of non-tropical regions, and natives who were basically scrubbed out by Eurasian diseases.

"The arguments of Chinese colonization (or lack of colonization) is in contrast to the Europeans, who also had traders and trade links scattered along the levant and . The idea of actually settling and owning property where you lived and worked is probably the big difference between the two societies."

What, like Chinese trader settlements in the Philippines, or in India, or Malaysia and Indonesia? Not to mention later Chinatowns in the West. Chinese traders have long been happy to settle. The big limit is trade itself: for most of history, most of the world didn't produce much China wanted. Europe's desperation to get to China wasn't because it was more ambitious and forward-looking than China, it was because China was more advanced (and luckier in some materials) and produced more that people would want. For a long time European 'exports' were force, American silver, and opium.

Thucydides said...

Damien:

Human level intelligence has been manifested twice that we know of (Humans and Neanderthals), and possibly a third time (Denisovans). Two of the three species are gone, which suggests intelligence is a "nice to have" but not a "must have". There is no dispute that several species exhibit some form of adaptive behaviour, but they haven't taken over the world (and octopi and cuttlefish had a huge head start).

WRT colonization, European traders also had contacts and people scattered around in a similar manner to the Chinese; the end results are far different.

Damien Sullivan said...

How close to human intelligence they were is unknown; certainly there's a lot of thought that they might have been a bit short. Conversely, they were our close relatives -- so close as to have been interfertile; that's hardly three separate inventions. Their being gone proves... nothing, especially since they lost out to us. You'd expect only one species per stable niche in equilibrium, anyway.

"There is no dispute that several species exhibit some form of adaptive behaviour, but they haven't taken over the world "

Well yeah, they're not as smart. We, OTOH, have, overcoming the obvious ecological barriers of being a mostly hairless ape with record-long reproductive times.

Intelligence isn't the only trait that matters, especially when you don't have that much of it. And at the lower levels you don't get individual species taking over the world, because they don't move around that much and speciation happens. You do find a fair bit of it though, and possibly more over time, going by brain/body ratios. We've short-circuited all pre-existing trends, though.

"WRT colonization, European traders also had contacts and people scattered around in a similar manner to the Chinese; the end results are far different."

No one denies that, the question is why, and earlier you seemed to be saying it was due to a cultural difference where Chinese didn't settle where they worked, which AFAICT is wrong.

Consider this theory of why Europeans colonized: they got lucky. They were poorer and knew it so had more reason to explore (Columbus was trying to get to... China) and they stumbled upon the Americas instead, which turned out to be full of people who would die of exposure to European disease. And which happen to be half as far from Europe as from China.

And 'stumble' is quite apt; Europe also produced someone dumb enough to think the world was smaller than it was and someone else dumb enough to fund him.

Move the Americas over so the treasure fleets find a land rich in silver and you'd find more Chinese interest, and with the Indians dying of the flu it'd be as easy for them to settle.

Thucydides said...

The fact of the matter is there are huge cultural differences between the Chinese civilization and the Western one. A simple example is the role of Law; Western societies prefer a uniform and codified system of laws, while the Chinese (under the influence of the Confucians) work under the assumption that codified laws are inflexible instruments, and rulings should be made by the decisions of individual magistrates.

The idea that Chinese traders settled in foreign lands isn't controversial; what I am saying is that this simply led to an expat population, not full-fledged colonies like the Europeans created, which included European rulers, laws, schools and other cultural artifacts. It is also not controversial why the Europeans were attracted to the Orient, but Europeans also sought to own and exploit resources in Africa and the Americas at the same time, as well as the Indian subcontinent, the Indies and everywhere else they could reach.

We can see something similar even now; the West (led by the United States) spends vast amounts of resources attempting to introduce or reorder existing political systems into democracies, while the Chinese pursue their "String of Pearls" approach of investing in huge projects which, like Gwadar, are not even nominally under Chinese ownership but are controlled through management contracts and the use of Chinese labour; the locals often do not have the technical skills to fully operate these investments.

Culture does make a huge difference, in a different sphere you can look at the different outcomes of the Serenìsima Repùblica Vèneta and the Ottoman Empire; Venice had only a tiny fraction of the Ottoman Empire's resources, but different views of banking, justice, property ownership, capital and social order allowed Venice to stay abreast of the Ottomans for centuries (I might even argue that the turning of the Serenìsima Repùblica Vèneta from the sea to fighting wars on the Italian mainland was a critical diversion of resources that contributed to the decline of Venice).

Max said...

I don't think that innovation and technological progress will decelerate. The reason for this is globalization. The "Rising Billion", people previously denied participation in the global conversation for economic as well as technical reasons, are now linking up with the rest of the world thanks to the internet and smart phones. With increasing economic development in former Third World countries, more time and funds become available for R&D. The total human capacity to solve problems is growing day by day.

Also, we are on the verge of a third revolution in manufacturing. Soon, additive manufacturing (3D-Printing) will break economies of scale and allow virtually anyone to produce customized products for very little cost. Traditional factories for low-tech goods will probably disappear within the next fifty years. This will give the global economy another large boost. The Economist ran a good piece on this upcoming manufacturing revolution in its April 21st issue.

We can expect worldwide real GDP growth of 2%-3% until the year 2100. This may make spaceflight affordable to many people even if only modest reductions in the cost of producing and operating space vehicles are realized. For example: the average income of an American worker in 1900 was $4200 in 1999 dollars, while the average income in 1999 was $33700. For the worker of 1900, a flight on a 747 would have been completely unaffordable, while for the worker of 1999 it is affordable. I see spaceflight becoming affordable too, through a combination of modest cost reductions and society as a whole becoming richer.
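
A quick compounding check on those figures (illustrative only; the growth rates and the 1900/1999 incomes are the ones Max quotes, and the 2012 start year simply stands in for 'now'):

```python
# A quick compounding check on Max's numbers (illustrative only; the growth
# rates and the 1900/1999 income figures are the ones quoted above).
def grow(value, rate, years):
    return value * (1.0 + rate) ** years

# Historical factor Max cites: $4,200 (1900) -> $33,700 (1999), both in 1999 dollars.
hist_factor = 33700 / 4200
print(f"1900 -> 1999 real income grew ~{hist_factor:.1f}x "
      f"(~{hist_factor ** (1/99) - 1:.1%}/yr)")

# Projecting 2012 -> 2100 at 2% and 3% real growth per year.
for rate in (0.02, 0.03):
    print(f"at {rate:.0%}/yr, 2012 -> 2100 multiplies real income by "
          f"{grow(1.0, rate, 2100 - 2012):.1f}x")
```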

Damien Sullivan said...

"Western societies prefer a uniform and codified system of laws, while the Chinese (under the influence of the Confucians) work under the assumption that codified laws are inflexible instruments, and rulings should be made by the decisions of individual magistrates."

English common law and judge-made law? Chinese Legalism?

"Europeans also sought to own and exploit resources in Africa and the Americas"

In the Americas because they were lucky enough to find the Americas. In Africa they mostly went for slaves, to send to the Americas. And they built full fledged colonies in the Americas because the natives died off to make room.

China already had contact with India and the Indies and was profiting from them via trade. Europe didn't have anything to trade and had to resort to extortion and conquest.

Different responses to different opportunities.

"the West (led by the United States) spends vast amounts of resources attempting to introduce or reorder existing political systems into democracies"

Vast? How vast? We spend very little, I think, and most US foreign aid has gone to propping up countries, not trying to convert them. Heck, the US has often acted against democracies, to support our business interests.

Thucydides said...

The fundamental basis of Western Jurisprudence is codified and written Law, while the Confucian system is based on individual rulings by magistrates. The actual application of these legal theories may not conform 100% to the ideal (few things actually do), but the roots from which they grow are quite different.

WRT Colonization, you are getting wrapped up in motivation rather than outcome. The Europeans had lots of different reasons to go around the world - lust for gold, desire for property, saving the souls of the heathen, pillaging someone else's colony, etc. - but once again the outcome was qualitatively different: a colony with European laws, rulers, social order and mores.

The United States has spent on the order of a trillion dollars in the Middle East and SW Asia in the democracy business since 2001, and has knocked over quite a few dictators and authoritarian regimes throughout the globe during its history (expending a great deal of blood and treasure). If the results are not always what is expected or desired, part of the fault may lie in lack of patience or a clear articulation of US national and strategic interests. Sometimes it is just lack of imagination (reordering parts of Mexico, Nicaragua, Cuba and Hispaniola during the Banana Wars period was supposed to leave democracy behind by a process of osmosis, apparently. The well-ordered structures the Marines left rapidly devolved into kleptocracy and chaos, since the underlying societies were not reordered from the ground up. To see how it's done, see the destruction of Nazi Germany and Imperial Japan and the rebuilding of a social order from the ground up).

Damien Sullivan said...

The fundamental basis of English common law is codified and written Law? Or English common law isn't part of Western jurisprudence? Western jurisprudence includes Roman law codes *and* common law judges *and* notions of natural law, later human rights, limiting the power of kings and law.

Meanwhile the Chinese certainly wrote law codes, though mostly criminal, and Legalism was a highly influential school of jurisprudence of its own, not a blemish on Confucian purity. Neither Europe nor China is a simple "one way" place.

"you are getting wrapped up in motivation rather than outcome"

No, I'm pointing out that the different outcomes have alternative explanations besides just culture, which is what you were invoking. Explanations like geography and the luck of disease.

The US has also knocked over inconvenient democracies. The more general trend is of shaping governments to be more useful to us. Sometimes we indulge in some democracy pushing too.

Anonymous said...

Damien

Extrapolating from humans, our level of intelligence is winner-take-all, something you expect to see only one of because only one suffices to dominate every continent and ecosystem... at Paleolithic tech levels.
--------

Interesting thought - since our experience is as you describe.

This could hint at how evolution on a larger scale would work out.

Since two intelligent species could not coexist on a planetary level with stone age tech - why do we assume they could coexist on an interstellar level with relativistic tech?

Would the pattern of competition resume?

(SA Phil)

jollyreaper said...

As a random aside, everyone should see Iron Sky. It's social satire and comedy along with saucer-punk Nazi UFO's from the dark side of the moon. Anyone who disputes the science in it, just remember it occupies the same sort of fantasy world as the Blues Brothers -- it's absurdist fantasy covered over with a veneer resembling our world.

Damien Sullivan said...

"Since two intelligent species could not coexist on a planetary level with stone age tech - why do we assume they could exist on an interstellar level with relativistic tech?

Would the pattern of competition resume?"

It might. OTOH, there's more than one form of competition. We don't know if our relatives were killed off directly, slowly starved out for lack of resources, or faded away into the burgeoning human gene pool. Interstellar civilizations could go for the Killing Star route, or for the Federation route with property rights and trade, and either development of new co-existent niches or a very slow path to extinction due to economic inferiority.

Or at transhuman tech levels, a merger. Trek/Culture genetic engineering to crossbreed is unlikely, but AIs might co-exist easily.

Tony said...

Damien Sullivan:

"Most of the places reachable by the treasure fleets were fully occupied by people, and tropical, two strong marks against colonization, aka imperial invasion and conquest."

Really? Hong Kong is tropical. The area it sits in was first incorporated into an imperial Chinese state 2,100 years ago. So were the coastal areas of northern Vietnam. This should discourage anyone from thinking that the Chinese avoid tropical areas in imperial considerations.

Also, Vietnam and Southern China were pretty well populated at this point. That's what made them attractive for integration into the empire.

What I think is more accurate is that the Chinese, being a large land empire, weren't particularly interested in overseas resources or trade areas at the state level. They left that up to commerce, as the widespread Chinese merchant communities in the Western Pacific and SE Asia clearly demonstrate.

Anonymous said...

More than likely we humans got rid of the other hominid species by a combination of warfare, starving them by out competing them, reproducing more, and actually 'loving' them to death; I can see that basic pattern continuing if we run into alien species (throw in some economic/political/military competition and cooperation just to mix things up); so, even though our prehistoric, earthly competitors generally had a head start on us, we just happened to have the right combination to prevail. We should keep those lessons firmly in mind if and when we meet E.T.

Ferrell

Brett said...

@Max
Also, we are on the verge of a third revolution in manufacturing. Soon, additive manufacturing (3D-Printing) will break economies of scale and allow virtually anyone to produce customized products for very little cost. Traditional factories for low-tech goods will probably disappear within the next fifty years. This will give the global economy another large boost. The Economist ran a good piece on this upcoming manufacturing revolution in its April 21st issue.

Modern production lines in heavily automated factories already manage to produce "low-tech" goods at very low unit prices. I question whether 3D Printers will ever be truly competitive with that on price, when you factor in the cost of materials, amortized cost of buying the Printer, and costs of operating and maintaining the Printer.

Which is not to say that people won't use them. It's just that the customization will have to make it worth the price. In that, I do agree that a lot of large-scale factories may disappear in favor of small-scale operations done with clusters of 3D Printers.

Thucydides said...

More than likely we humans got rid of the other hominid species by a combination of warfare, starving them by out competing them, reproducing more, and actually 'loving' them to death

I suspect that our species tends to "oversell" itself. Theories that we killed off our competition, destroyed the Ice Age Megafauna or did other things in the distant past are far beyond what the Ancestors could have actually done (Humans in a later age tried mightily to kill the North American Bison off using mechanized warfare tools like repeating rifles and railroads and didn't quite succeed; far smaller numbers of Ancestors with neolithic tools certainly could not).

A more probable notion is the Neanderthals and Denisovans were intelligent and superbly adapted to their ecosystems, but were unable to adapt to the changes that occurred at the end of the Ice Age. Some were able to interbreed with other hominids (hence the fragments of their DNA in our genome), but for the most part, their intelligence did not give them the edge to adapt to the changing conditions the way the Ancestors did. (The Ancestors lived in the more temperate regions of the Earth before spreading out, so didn't have to deal with such a tough environment).

Does this mean Humans are/were smarter than our cousins? That is a pretty debatable proposition, to say the least.

Anonymous said...

=Milo=



Thucydides:

"Humans in a later age tried mightily to kill the North American Bison off using mechanized warfare tools like repeating rifles and railroads and didn't quite succeed"

We almost succeeded, and in far less time than prehistoric humans had to cause the extinctions attributed to them. The only reason we didn't is that at the last moment, before our efforts were complete, we changed our minds and started favoring conservation rather than deliberate extinction of species (and, for that matter, of the Native Americans who were dependent on the bison), and so we stopped.

We did kill off the passenger pigeon, which was more common than the bison.

Anyway, most human-caused extinctions - at least those of prey animals, rather than predators we deem dangerous to our children or livestock - are not the result of a deliberate campaign of extermination, but rather of overhunting. In the era before the globalized spread of information, any given tribe would have no way of knowing how well the mammoths/whatever are doing outside of their own territory. Thus, even if the mammoths are visibly dwindling in their territory, they're prone to saying "Oh, they're just hiding. They'll be back." and to keep hunting any mammoths they find. Presto, mammoths go extinct. The conscious realization that animals can go extinct - and attempting to prevent this - is dependent on having complete data on the animal's entire range.

Any Pleistocene extinction theories based purely on climate change have to contend with the facts that (A) the ice age didn't actually "end", we merely transitioned from a glacial period to an interglacial period, and the same species had survived multiple interglacial periods earlier in the Pleistocene, (B) the exact place in time where megafauna went extinct varies by a few tens of thousands of years from continent to continent, and shows a clear correlation with when humans first entered that continent, or (in one arguable case) when humans developed improved hunting tools, and (C) the continent which showed the least decimation in megafauna, Africa, is the one where the animals evolved alongside us and were more prepared for us.

Brett said...

With Neanderthals, it's worth noting that the population had already undergone a huge crash tens of thousands of years before modern humans showed up in Europe. It may not have taken much to push them over the edge, and for the survivors to be absorbed into our gene pool.

The Ice Age mega-fauna extinction is difficult to answer. We used to think that the early arrivals to North America hunted them into extinction, but the actual on-the-ground evidence of that is weak (they rarely find mammoth bones at Clovis sites, for example).

Anonymous said...

I think that something like 99% of all the species that ever existed are extinct.

I don't think it requires any special reasoning to attribute why any species dies out.

The species that are different are the ones that survive.

(SA Phil)

Anonymous said...

=Milo=



SA Phil:

"I think that something like 99% of all the species that ever existed are extinct.

I don't think it requires any special reasoning to attribute why any species dies out."


No, that is wrong.

The fact that extinctions are common means that extinctions can be explained through mundane means that are repeated with minor variations for many extinctions, such as "climate change" or "outcompeted by enemies" (Homo sapiens counts as a natural enemy to most other things on the planet), as opposed to needing overly fanciful reasons like "God decided that the dinosaurs were sinful, and smote them". Reasons, however, are still there.

Also mass extinction events need somewhat more justification than normal background extinction. The demise of the Pleistocene megafauna is a mass extinction event.

Thucydides said...

Hard evidence of the hows and whys is lacking, but as the ice retreated (colloquially known as the end of the ice age for non purists) the climate and ecosystems changed, changing the ranges of the vegetation and the animal populations.

One theory about the Neanderthals suggests they were almost exclusively big game hunters, so as the Megafauna dwindled, so did they. This overlooks the fact that some communities lived on the shores of modern Spain making a living from shellfish and fishing in general.

Perhaps more plausibly, known Neanderthal groups had very small numbers of individuals, so the loss of one person to accident or disease was far more serious to group survival than the loss of a single person in a larger Homo Sapiens clan or tribe.

Rick said...

Just following along on this ride!

Eth said...

Random thought (somehow) related to the original post : whatever our heirs may become, they will have to deal with existential threats on a more routine basis than us. Some interesting thoughts about it here:
http://www.nickbostrom.com/existential/risks.html

Someone even had fun making a warning sign about it (among others) here:
http://www.aleph.se/andart/archives/2006/10/warning_signs_for_tomorrow.html

Tony said...

Eth:

"Random thought (somehow) related to the original post : whatever our heirs may become, they will have to deal with existential threats on a more routine basis than us. Some interesting thoughts about it here:
http://www.nickbostrom.com/existential/risks.html"


Meh...no more so than we do already, for realistic existential threats. Boxtrom's invocation of trendy SFnal speculative technologies doesn't impress much, in that respect.

Anonymous said...

=Milo=



Anders3 (quoted by Eth):

Pretty amusing, though...


"Maybe the sign could show an alive/dead cat instead, but it is a bit too much of an in-joke."

How do you depict the difference between a dead cat and a sleeping one?


"It is likely unlimited self-evolution will need to be contained carefully even as we mine it for truly new inventions."

Then it's not unlimited, is it?

Eth said...

Tony :
Meh...no more so than we do already, for realistic existential threats.

We do already, for one existential threat: total nuclear war. Which has almost certainly been avoided now, at least for some time. And if we treat all future existential threats (which may be far more insidious or probable) that amateurishly, then we're pretty much toast.
That's where he has a point IMHO. The threats he describes are just examples, which may or may not happen (I would guess that the ones our heirs will face will be things we didn't even imagine). The problem is the methodology, which we will have to develop one way or another (or die trying).

And, more importantly for my immediate self, the problem of developing and following such methodologies can make for good SF stories.

Milo:
How do you depict the difference between a dead cat and a sleeping one?

Do we have to? After all, it works with sleeping gas as well as poison, the cat simply being awake/asleep instead of alive/dead, to avoid problems with animal rights groups...
Alternatively, you can depict a seizing, rotting or skeletal cat; I guess a painlessly dead/sleeping cat would look better, but a rotting one would better indicate danger.

Anonymous said...

=Milo=



...Now I want to go around randomly planting signs reading "WARNING: This area might or might not be dangerous.".

Anonymous said...

I wouldn't consider nuclear war an existential threat in the sense that it would end our existence as a species.

More a threat that could end our civilization.

Humans would survive a global nuclear exchange, even one where every warhead on earth gets used (which is the most unlikely scenario).

The whole "enough weapons to blow up the world 3 times over" was a statistical figure not a realistic one. It was more along the lines of "if each bomb kills X people, and we multiply it by Y bombs X*Y > 3 times the earth's population"

By those statistics we have enough guns and knives to kill everyone on earth many times over as well.

In fact the US easily throws enough plastic bags away to suffocate everyone on the planet.

(SA Phil)

Anonymous said...

=Milo=



SA Phil:

"Humans would survive a global nuclear exchange, even one where every warhead on earth gets used (which is the most unlikely scenario)."

For one thing, because the enemy's missile silos will be a prime target in a nuclear war, it's unlikely every single missile will survive to be fired.

Besides, anyone even remotely sensible would surrender after two-thirds of the country has been reduced to ashes.


"The whole "enough weapons to blow up the world 3 times over" was a statistical figure not a realistic one. It was more along the lines of "if each bomb kills X people, and we multiply it by Y bombs X*Y > 3 times the earth's population""

The problem (for a genocidal madman) is that nukes have diminishing returns. After you've blown up all the major population centers, further nukes are going to have to kill people in rural areas, where you're getting fewer people per bomb.

If I wanted to do a serious analysis of our destructive power, I'd go either by blast area (how many square kilometers can we blow up, and how much of the human population can we include in that area, assuming they aren't herded into position in advance?), or try to quantify the level of fallout, climate change from nuclear winter, etc., and figure out how much global damage that would do. I think we're more likely to die from blasts than from fallout though.
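
As a sketch of what that blast-area accounting might look like - every parameter here (the ~2 km reference damage radius for a 15 kt burst, the cube-root scaling rule of thumb, and especially the made-up stockpile) is an illustrative assumption, not real arsenal data:

```python
# A rough, back-of-the-envelope version of the "blast area" analysis sketched
# above. All parameters are illustrative assumptions: a reference destructive
# radius of ~2 km for a 15 kt burst, cube-root scaling of radius with yield,
# and a hypothetical stockpile.
import math

REF_YIELD_KT = 15.0   # assumed reference yield (Hiroshima-class)
REF_RADIUS_KM = 2.0   # assumed "severe damage" radius for that yield

def blast_radius_km(yield_kt):
    # Radius scales roughly with the cube root of yield.
    return REF_RADIUS_KM * (yield_kt / REF_YIELD_KT) ** (1.0 / 3.0)

# Hypothetical stockpile: (warhead yield in kt, count)
stockpile = [(300.0, 4000), (800.0, 1000)]

total_area = sum(n * math.pi * blast_radius_km(y) ** 2 for y, n in stockpile)
land_area = 1.49e8  # km^2, Earth's land surface
print(f"total blast area ~{total_area:.2e} km^2 "
      f"(~{100 * total_area / land_area:.1f}% of Earth's land area)")
```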

Tony said...

Eth:

"We do already for one extential threat, total nuclear war. Which is almost certainly avoided now, at least for some time. And if we treat all future existential threats (who may be far more insidious or probable) that amateurishly, then we're pretty much toast."

Sorry, but the threat of nuclear war was manifestly not handled amateurishly. Thousands of man-lives were devoted to making sure things didn't get out of hand, even to the level of not losing track of a single weapon.

"That's where he has a point IMHO. The threats he describe are just examples, who may or may not happen (I would guess that the one our heirs will face will be things we didn't even imagine). The problem is the methodology, which we will have to develop one way or another (or die trying)."

I have a hard time telling whether Boxtrom is being serious or just toying with an intellectual plaything, but he seems to be advocating a totalitarian global authority capable of initiating any project and demanding any resource to combat some perceived threat. If that's the "methodology" you want, you can have it.

Anonymous said...

RE: Milo

Agreed.

A good portion of nukes wouldn't have been used even in a doomsday scenario.

And a good portion of the nukes that were used would have been destroying the same areas over and over to ensure the enemy couldn't wage war.

There was no "carpet-bomb every square inch of inhabitable ground on Earth to ensure the end of the species" plan.

Most likely only a minority of humans would have died in the initial war. (Still a staggering number, I'm sure.) After that, many more would have died in the new environments created.

Some very remote people might hardly have been affected from a survival standpoint.

(SA Phil)

Eth said...

Sure, a nuclear war would probably not kill everyone instantly. It can be argued, though, that radiation (directly, or by messing with reproduction/food production), and possibly a nuclear winter, would leave so few survivors in the long term that mankind might never rise back again.
After all, you don't have to kill everyone to extinguish mankind. Kill enough people and the survivors may be too scattered to reproduce effectively without inbreeding problems, or simply defenceless against the next threat (which would have been 'only inconvenient' for more organised humans).

But that's not exactly the point. The point is that back then, people thought that it would kill mankind, including people running the show. And their method was basically : "Ok, so we limited who has access to the doom's day stuff. Now let's just hope that it won't happen (while I'm still in charge, at least)."
Even better was with the first nuclear test : "Yeah, we hope it really won't ignite the atmosphere..."
So yes, amateurish. Which is unsurprising; after all, it was the first time we faced this kind of threat. Maybe we will develop consistent methodologies after surviving a few more the same way.

Another possible one we do manage better, though, is with biological experiments. Those could potentially ruin mankind, but they are in (hopefully) completely secured labs.
(I don't know if mankind-extinguishing viruses were ever completed, but we have the technology for them, which still makes this a threat.)

Tony:
I have a hard time telling whether Boxtrom is being serious or just toying with an intellectual plaything, but he seems to be advocating a totalitarian global authority capable of initiating any project and demanding any resource to combat some perceived threat. If that's the "methodology" you want, you can have it.

I'm not sure if that's really what he is advocating. If he is, then he should probably read 1984, which is an interesting example of how such a state is by itself one of those threats (as the ruling powers completely, and probably permanently, stopped any development).
But again, the point is not what should be done, but that the question will have to be studied, and responses will have to be found. A police state may be one of the choices, but probably not the only one (nor the best one, I'd say).

The problem is that if you face only one of those threats, as it was with nuclear weapons, then you can leave some of it to chance (particularly as we didn't really know how else to proceed). But if you face dozens of them, leaving anything to chance in your methods means that one of them will end up happening.
The exact methods which may be employed for that, and how people cope with them (or not), may be good material for a story.

Anonymous said...

Eth

After all, you don't have to kill everyone to extinguish mankind. Kill enough people and the survivors may be too scattered to reproduce effectively without inbreeding problems, or simply defenceless against the next threat (which would have been 'only inconvenient' for more organised humans).

================

You mean scattered into small bands of say 12-30 people ... living a nomadic existence perhaps?

Our species existed and thrived in that state longer than it has existed in our current state.

Increasing background radiation might have less effect than you may think. The most common effects are much higher cancer rates.

Cancer is much less of a problem when average life expectancy goes down to 30 or so.

We have to remember there are people who survived Hiroshima and Nagasaki at practically ground zero who are alive today.

There are even a few who were caught in both bombings.

--------

You are correct a lot of people thought it would wipe humanity out.

On the other side there were the people who thought it wouldn't even politically destroy our country. Those who thought we could "win" a nuclear war.

Both sides were misinterpreting the data to suit their instinctual reaction.

Something most people are prone to do.

(SA Phil)

jollyreaper said...

You are correct a lot of people thought it would wipe humanity out.

On the other side there were the people who thought it wouldn't even politically destroy our country. Those who thought we could "win" a nuclear war.


Scientifically illiterate hyperbole is annoying. "Wiping out all life" really oversells it. Shouldn't destroying civilization be bad enough? It's kind of like the Death Star -- if you can slag all organic life on the surface of a planet, isn't blowing it to pieces on top of that a wee bit excessive?

I think it was on this blog that the following was referenced, talking about nuclear war planning and pie-cutters. A good read.

Anonymous said...

That was something I found odd about the BSG reboot.

They find "Earth" 2000 years after a nuclear war .... and somehow it was uninhabitable?

After 2000 years?

It wasn't even internally consistent -- since people and Cylons were still surviving on Caprica. They just had to take their meds if they got too close to where the bombs hit.

(SA Phil)

jollyreaper said...

There are so many odd things about the BSG reboot... :)

Thucydides said...

The methodology for nuclear war was pretty terrible, but became much more sophisticated and "nuanced" as time passed.

Gen Curtis LeMay's initial ideas for nuclear war could be reduced to "bomb them until the rubble bounces", and many early US (and Soviet) nuclear weapons were of awesome size and power to do just that. Megaton and multi megaton weapons were a product of the early cold war.

As time passed, we saw the US arsenal decline in both warhead size and absolute numbers. I think the average current US warhead on a missile is dialed in at about 300 kt, a pretty impressive bang to be sure, but well below the massive weapons employed in the early cold war. This had a lot to do with improved targeting, highly improved accuracy and the realization that time and resources could be more profitably employed on other military activities. As Sean Connery says in "The Rock":

"What would you like me to do? Kill him again?"

Damien Sullivan said...

Tony, the man's name is Bostrom. Not Boxtrom.

Tony said...

Thucydides:

"The methodology for nuclear war was pretty terrible, but became much more sophisticated and 'nuanced' as time passed."

When you only have a few hundred 15-50 kt bombs and inadequate intelligence, targeting amounted to working your way through the largest enemy cities in descending order of population.

"Gen Curtis LeMay's initial ideas for nuclear war could be reduced to 'bomb them until the rubble bounces', and many early US (and Soviet) nuclear weapons were of awesome size and power to do just that. Megaton and multi megaton weapons were a product of the early cold war."

You know, you can't google "bomb them until the rubble bounces" and find it associated with "curtis lemay". And that's not surprising, since LeMay was never anywhere near that cavalier about anything.

WRT megaton and multi-megaton weapons, one has to understand that they were developed to put as much bang as possible into a single sortie (missile or aircraft), because very few sorties were actually expected to penetrate to their targets. Poor missile accuracy was a factor, but it wasn't the only one. There's a real eye-opener contained in the pages of Kahn's On Thermonuclear War, where it is presumed a priori that only fifty percent of missiles will function all the way to the target, for one reason or another. So the discussion of deterrence credibility becomes one of how many missiles you shoot at each target to guarantee a sufficient likelihood of a successful strike. It turned out that three missiles per target was about the optimum, giving a 7 in 8 chance of success. Even more missiles would have to be targeted at stuff like enemy command and control, to guarantee things as much as possible. Consequently, one had to have a thousand missiles to effectively strike a few hundred targets. And this was all presuming that the missiles carried 1 Mt warheads.
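
The "three missiles per target" figure is just independent-failure arithmetic; a one-line check (p = 0.5 per missile, as assumed in the discussion above):

```python
# The arithmetic behind "three missiles per target": if each missile
# independently has probability p of functioning all the way to the target,
# the chance that at least one of n gets through is 1 - (1 - p)^n.
p = 0.5  # per-missile success probability assumed in the discussion above
for n in range(1, 6):
    print(f"{n} missile(s) per target: {1 - (1 - p) ** n:.3f} chance of at least one hit")
```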

Anonymous said...

If I remember correctly, blast radius for a nuclear weapon scales roughly with the cube root of yield.

So a 10 kt device has about half the blast radius of a 100 kt device, which in turn has about half the blast radius of a 1 Mt device.

So even a 20 Mt uber bomb would only have about ~11 times the blast radius of a Hiroshima.

Which is really scary to be sure - but pretty disappointing if your goal was to carpet bomb the entire land surface of the earth.

Assuming as Tony points out - you could even hit said surface reliably.

(SA Phil)
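
The cube-root rule of thumb is easy to tabulate (a sketch only; roughly 15 kt is taken as the Hiroshima reference yield):

```python
# Cube-root scaling of blast radius with yield, relative to a 15 kt
# Hiroshima-class burst. A rule of thumb, not a weapons-effects model.
def radius_ratio(yield_kt, ref_kt=15.0):
    return (yield_kt / ref_kt) ** (1.0 / 3.0)

for y_kt in (10, 100, 1000, 20000):
    print(f"{y_kt:>6} kt: ~{radius_ratio(y_kt):.1f}x the Hiroshima blast radius")
```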

jollyreaper said...


So even a 20 Mt uber bomb would only have about ~11 times the blast radius of a Hiroshima.

Which is really scary to be sure - but pretty disappointing if your goal was to carpet bomb the entire land surface of the earth.


A significant fraction (half?) of the energy is wasted towards space so a bigger bomb just loses more energy to space. Smaller bombs laid down in a grid do a better job of spreading the love.

What would be a great premise for a short story is if corruption prevented WWIII. There's a lot of valuable material going into the super-weapons and the people who actually build them never expect they'd actually get used. So they end up making working test weapons to keep the generals happy but the deployed units are dummies. Live missiles, warheads look proper, but they won't go bang.

When the war starts one of the few guys in the know about this spills the beans to an army general because, hell, they're all dead anyway.

The Russian missiles are inbound and it's 3, 2, 1... THUNK! Thunk? No ka-boom? The general and civilian contractor go outside to inspect the smashed warhead on the tarmac. What happened? "Looks like the guys who built these missiles did the same trick!"

Or another approach would be the head nuclear boffin calls up the President after a nuclear strike goes out to tell him none of the warheads are armed. "You pay us to build doomsday weapons, we do our jobs. But do you think we'd actually be so stupid as to put the means to end civilization in the hands of a bozo politician?" The Russian president gets the same call from his boffin. Turns out no matter where they were targeted, every ICBM is actually pointing at a rocky island in the Arctic Circle. They all crash down, a radioactive hazard but no threat of going nuclear. The bombers can still drop on their targets but the bombs won't go off. Did the scientists commit treason? "How do we punish them for saving the world?"

Which scenario feels better, saving the world through rationality or greed?

jollyreaper said...

PS concerning story idea -- I'm betting there's a very good chance it's already been written and probably a very long time ago to boot. :)

Anonymous said...

heh

I like the corruption angle

WW3 didn't happen because the contractors were trying to save a buck.

Although maybe the incompetence angle might be even more plausible. The weapons worked when designed, but the manufacturing process was ... lacking.

(SA Phil)

Tony said...

Re: nuclear switcheroo

Too many people were involved and very few were venal or idealistic enough to want to either cheat the government or save the world from itself. Once again, it would make a great joke over a glass of beer at a party, but it would be totally implausible as a story idea for publication.

jollyreaper said...


Although maybe the incompetence angle might be even more plausible. The weapons worked when designed, but the manufacturing process was ... lacking.


There's also the possibility for playing it out as a Star Trek episode. Enterprise is going after a notorious arms merchant who has sold world-ending weapons technology to both sides in a planet-wide cold war. The arms merchant is in the brig onboard the Enterprise and the captain is trying to negotiate with the leaders of both sides to hand the weapons over.

Things spin out of hand, the button gets pressed, weapons are launched and self-destruct on the way to their targets. Planet gets a pretty new aura for a day, nobody killed.

Planetary rulers from both sides decide to file a joint lawsuit against the merchant for fraud. "You're upset he cheated you out of doomsday?"

The captain then informs the local leaders that he has full records of the almost-war and will broadcast across every channel the planet can receive and let the people know why they almost died... unless, that is, the leaders clean up their act and accept peace. "Do right by your people or be torn to pieces by an angry mob. Your choice."

Arms merchant applauds captain for a marvelous bluff since he knows a Federation namby-pamby wouldn't actually carry out such a threat. Captain stares him down. "I never bluff."

Anonymous said...

I dunno, I could see some percentage of them not going off for whatever reason.

Even if there was no real corruption/incompetence involved.

At the height of "production" things may not have been as well organized as we may think if the waste side of the equation is any indicator.

Maybe the story has only a limited nuclear exchange of all one type of weapon - say from a sub - and those ones were duds.

(SA Phil)

Tony said...

Jollyreaper:

"The captain then informs the local leaders that he has full records of the almost-war and will broadcast across every channel the planet can receive and let the people know why they almost died... unless, that is, the leaders clean up their act and accept peace. 'Do right by your people or be torn to pieces by an angry mob. Your choice.'"

Sorry, but this relies on a very tired and very annoying cliche, that it's all the evil leaders' fault. Don't wanna read it, don't wanna see it -- people do indeed get the governments they deserve. If the president or the prime minister or the general secretary or whoever pushes the button, he got in a position to do it because the people either wanted him there or -- in a one party dictatorship -- weren't bothered enough to get him out.

Tony said...

SA Phil:

"At the height of 'production' things may not have been as well organized as we may think if the waste side of the equation is any indicator.

Maybe the story has only a limited nuclear exchange of all one type of weapon - say from a sub - and those ones were duds."


Still not very plausible. One might believe that a large proportion of a given system might be unreliable, but at least some would have to work for the story to be believable. Then you just have the standard fighting-a-war-with-inadequate-weapons plot.

jollyreaper said...

@Tony

I'm breaking my embargo of all dialogue with you just to remind you that I don't value your opinion and don't care what you think. I only see your posts because they come through in the email updates.

There is no exchange of ideas with you, no give and take, no moments of insight and reappraisal. It's just you lecturing at other people who are too stupid to see it the right way, your way. Conversation with you is wearying and unrewarding.

Restoring embargo now. whirrrr-bzzzzzt-KLANG!

jollyreaper said...

I dunno, I could see some percentage of them not going off for whatever reason.

Even if there was no real corruption/incompetence involved.


Well, the Slade post said that the planners anticipated a fairly high failure rate from the get-go because of how complex a machine an ICBM is, not even counting for ABM tech.

If I remember correctly the Car Wars setting had WWIII be limited due to a combination of early Star Wars tech and poor missile reliability. Both sides went all out but only a few bombs went off on target. Civilization-reshaping but not civilization-ending.

I grew up as a child of the 80's and had a lot of trouble wrapping my head around the idea that grown-ups could get together and work so hard to create such a dangerous situation for us to be in. Really, these super-bright science guys are creating weapons to destroy the world just because a politician said so? Why didn't they say no? I said it was stupid. I was told I'd understand when I got older. Well, I am older and I still think it's stupid. :P

While the idea of the scientists only pretending to build doomsday weapons is a bit of a fantasy, that would be the kind of world I'd rather live in!

Tony said...

jollyreaper:

"@Tony

I'm breaking my embargo of all dialogue with you just to remind you that I don't value your opinion and don't care what you think. I only see your posts because they come through in the email updates.

There is no exchange of ideas with you, no give and take, no moments of insight and reappraisal. It's just you lecturing at other people who are too stupid to see it the right way, your way. Conversation with you is wearying and unrewarding.

Restoring embargo now. whirrrr-bzzzzzt-KLANG!"


It would be nice if we could actually have a dialogue. But...

"I grew up as a child of the 80's and had a lot of trouble wrapping my head around the idea that grown-ups could get together and work so hard to create such a dangerous situation for us to be in. Really, these super-bright science guys are creating weapons to destroy the world just because a politician said so? Why didn't they say no? I said it was stupid. I was told I'd understand when I got older. Well, I am older I still think it's stupid. :P"

...the above is why we can't have a dialogue. You don't respect people who know a thing or two. I could tell you things about the Cold War -- the real Cold War, not the Clancy and Brown technothriller one -- both from my personal experience and all I have learned over 30 years of adulthood, that would help you understand. But all you want to do is stick your tongue out.

"While the idea of the scientists only pretending to build doomsday weapons is a bit of a fantasy, that would be the kind of world I'd rather live in!"

I wouldn't. It's the kind of world in which a scientist might decide to fake a cure for cancer, or simply disseminate an airborne, super-virulent Ebola, simply because he thinks humans are Mother Nature's vermin, and they need to be exterminated. It's a much better world we live in, where scientists find things out, then let people with some degree of responsibility figure out what to do with it.

Anonymous said...

But the Cold War was stupid.


(SA Phil)

jollyreaper said...

You don't want scientists inventing things and telling the world how to operate. You want them to invent things and hand them over to politicians who will tell the world how to operate. The real world, boys and girls, is man oppressing man and don't even ask yourself if this outlook becomes a self-fulfilling prophecy.

Heh. Got the conversation and scene playing out in my head right now. I'll write it up tonight.

Tony said...

SA Phil:

"But the Cold War was stupid."

In some ways, most definitely. Geopolitically speaking, Russia, being the only other continent-spanning superpower, should have been our natural ally. Russia still should be, even if we have to put up with paranoid autocrats in charge over there.

But there was this whole communist thing going on, and we had to wait them out, until they figured out that their ideology was kooky, and that we didn't want to take away all of their cookies. So in that sense it was a necessary evil.

jollyreaper:

"You don't want scientists inventing things and telling the world how to operate."

Nope -- scientists, for all of their great imagination and intellect, are incredibly bad at society and culture. Not a bug, just a feature.

"You want them to invent things and hand them over to politicians who will tell the world how to operate."

I don't trust politicians much either, but for different reasons. I just think they've got practical constraints on their actions that scientists wouldn't have -- or at least wouldn't regard.

The problem, as I see it, is that too much idealism is much more dangerous than too little. And scientists tend to be full of the most preposterous ideals...

"The real world, boys and girls, is man oppressing man and don't even ask yourself if this outlook becomes a self-fulfilling prophecy."

The real world is practical people trying to get by with practical lives, dodging idealistic nonsense from time to time.

Anonymous said...

=Milo=



SA Phil:

"But the Cold War was stupid."

The people involved successfully avoided destroying human civilization. I'd say they did a pretty good job.

Or would you say it'd have been better if the Cold War had been a Hot War instead?

Anonymous said...

SA Phil said:"But the Cold War was stupid."

Well, yeah...but it still could get you dead. And MAD worked because both sides knew that the other side was deadly serious about killing you, if necessary. As stupid as it may seem, that probably kept human civilization from being turned into smoldering ruins...


Ferrell

jollyreaper said...


Well, yeah...but it still could get you dead. And MAD worked because both sides knew that the other side was deadly serious about killing you, if necessary. As stupid as it may seem, that probably kept human civilization from being turned into smoldering ruins...


So, two sides arm themselves sufficiently to send themselves back to the horse and carriage age if they cut loose, come close several times by brinksmanship and accident, and yet somehow manage not to end it all. Congratulations! The system works.

Seems a bit like a drunk driver saying he must be really, really good behind the wheel because he couldn't even see straight and still managed to drive all the way home without killing anyone.

Now, to try to look at it from the other point of view: I do agree that two reasonable people should be able to come to a reasonable settlement, but one reasonable man and an axe-wielding psychopath won't be able to talk things through; the only choice the reasonable man has is to fight back or be slaughtered.

Of course, the USSR wasn't purely a bunch of axe-wielding crazies, at least not after Stalin kicked the bucket. And there were plenty of monsters on our own side who were every bit as eager for empire and war as we accused the Reds of being. More than enough vile bastardry on all sides. Inexcusable stupidity.

Tony said...

jollyreaper:

"Of course, the USSR wasn't purely a bunch of axe-wielding crazies, at least not after Stalin kicked the bucket. And there were plenty of monsters on our own side who were every bit as eager for empire and war as we accused the Reds of being. More than enough vile bastardry on all sides. Inexcusable stupidity."

Please name these "monsters on our own side".

Anonymous said...

RE: Milo

The Cold War turning Hot does not mean those waging it were successful.

There was no real reason for the Cold War in the first place. Both sides were acting contrary to what would have been best for the world as a whole.

RE: Tony

That is one way to look at it, I suppose - I wouldn't agree, though - and your stance shows a lack of neutrality on the subject.

The United States and the USSR were natural rivals. In that they picked up where Western Europe and Germany had left off. (and before them England and France, etc)

Ideology was used by both sides to further their agenda.

There is nothing inherently "evil" about communist ideas. The evil done by "communist" regimes was the product of those regimes- not their economic philosophy.

RE: Ferrell

How did the Cold War protect us from getting dead?

The idea that the USSR and the Chicoms would invade and take over the world was propaganda. It played on the fear that the USSR had used World War 2 to make territorial influence gains. "Iron Curtain" etc.

The domino effect only took place in countries that were already distressed. And if the "communists" hadn't had a revolution they would have had a revolution under some other name.


The US on the other side made massive economic influence gains due to the same war. Which still are largely in place.

(SA Phil)

Anonymous said...

Above

The Cold War turning Hot does not mean those waging it were successful.

Should read

The Cold War not turning Hot does not mean those waging it were successful.

(SA Phil)

Thucydides said...

The historical arguments are very long and complex, but essentially each of the Powers during the Cold War had the same objective: Europe.

The Americans wanted Europe as a trade partner and outlet for American goods and services (a sensible strategy for an Oceanic power), while the Soviets, with a much less developed economy, were much more interested in both plundering the industrial resources of Europe and maintaining a captive market for their goods and services.

As the oil age progressed, the Middle East became a zone of contention. South America could be the scene of spoiling actions by the USSR (endlessly sponsoring revolutions) to keep the Americans occupied in their backyard, and the Americans returned the favour wherever they could.

But the focus in terms of manpower and resources was always Europe, though it may have seemed a bit different for the Americans since, as an Oceanic Power, they needed to encircle their rivals (and thus developed a globe-spanning military, economic and diplomatic presence).

Now that Europe is no longer the prize and economic activity is shifting to Asia, the Cold War calculus no longer applies. India alone has as many middle class people as the entire population of the United States, and many other Asian nations have reached take off (the Tiger economies especially). China looms over all the other players in Asia, but a straight Cold War scenario makes no sense since there are so many more players and nexuses in the game.

Anonymous said...

Kind of an interesting take - the Cold War being a struggle for the exploitation/control of a war battered Europe.

Each side trying to expand influence and not lose what influence they had gained.

It would suggest the classic Cold War (Eurocentric) ended with the reunification of Germany.

I can buy that.

(SA Phil)

jollyreaper said...

Please name these "monsters on our own side".

Ah, but there we have the rub. One man's monster is another man's pragmatic warrior doing the rough work that keeps us safe and secure in our beds at night. I bring up someone like Kissinger and you'll say "Oh, no, you just don't understand geopolitics." I'll bring up someone like Nixon and you'll say "The American people voted for him." I bring up Iran-Contra and you'll say Ollie North was a good man who took the hit for his President.

I can't bring up Vietnam because that was a moral victory for our side, stopping the dominoes from toppling. Iranian coup? Backing the Shah? Training the SAVAK? Cold War pragmatism. Killing Allende? He was asking for it. I bring up the dictators we backed from Hussein to Pinochet and you'll find one way or another to change the terms of debate and negate the criticism.

Once all of that is done, you'll put on a paternalistic tone of understanding and say "Sometimes I forget just how young you are, how much yet there is for you to learn about the way things are done. It's far more complex than your cliched, lefty views make it out to be. But given time and an open mind, you'll learn. Now have a sweet and play outside. That's a good boy."

Now I really need to go back to holding firm on that embargo.

Tony said...

SA Phil:

"The Cold War [not] turning Hot does not mean those waging it were sucessful."

If one presumes their objectives were to avoid a hot war, one would definitely be justified in concluding that they were indeed successful.

"There was no real reason for the Cold War in the first place. Both sides were acting contrary what would have been best for the world as a whole."

The world as a whole was not their brief. Their own perceived interests were. I might agree that they misidentified what their own interests ultimately were, but saying that there was no "real" reason ignores the fact that very few human political reasons are objectively real to begin with.

"That is one way to look at it I suppose - I wouldn't agree though- and your stance shows a lack of neutrality on the subject."

How's that? I would have thought that the assertion that the Russians are our natural ally would have been a head-scratcher for either a right- or left-leaning American. It's neutral and historical. (Though you have to get into 19th Century economic and industrial history to see it.)

"The United States and the USSR were natural rivals. In that they picked up where Western Europe and Germany had left off. (and before them England and France, etc)"

Only in that the US got so completely stuck-in to Western European security post-WW2. And we only did that because the Russians were involved in a ridiculously expansionist mania at the time.

"Ideology was used by both sides to further their agenda."

And? It was an ideological struggle to begin with. There was no natural, objective geopolitical reason to GAS what the Russians qua Russians were up to, as long as they stuck to their own knitting.

"There is nothing inherently 'evil' about communist ideas. The evil done by 'communist' regimes was the product of those regimes- not their economic philosophy."

I never said there was anything evil about communism. But it was (kinda-sorta, combined with some real -- in the sense of being fully perceived, not in the sense of being objective -- deep-seated Russian insecurities) the driver behind overweening Russian expansionism. IOW, a man doesn't have to be evil to be your enemy; he just has to be acting against your interests.

Tony said...

SA Phil:

"How did the Cold War protect us from getting dead?"

By giving us an alternative to fighting things out. Though I do have to say that if we had tried to fight things out in the late 40s or early 50s, we would probably have decisively out-nuked the Russians and prevailed, with a few corners chewed off.

"The idea that USSR and the Chicoms would invade and take over the world was propaganda. It played on the fear that USSR had used World War 2 to make territorial influence gains. 'Iron Curtain' etc."

First of all, China was very much a sideshow of the Cold War. After about 1960, the Chinese were a much bigger headache for the Soviets than they were for us.

WRT the USSR, it is objectively a fact that Soviet influence extended into parts of Europe that it had never directly touched before. Soviet Divisions in Germany and Czechoslovakia weren't a figment of some fabulist's imagination. Communist Hungary and Poland did really exist.

"The domino effect only took place in countries that were already distressed. And if the 'communists' hadnt had a revolution they would have had a revolution in some other name."

And? The Soviets willingly played the game of destabilization and post-coup/revolution military aid that we played. Unless you want to have a childish back-and-forth about who started it, it just has to be accepted as a feature of the way the international system worked back then.

"The US on the other side made massive economic influence gains due to the same war. Which still are largely in place."

So? Does that have anything to do with how and why the Cold War started?

Tony said...

jollyreaper:

"Ah, but there we have the rub. One man's monster is another man's pragmatic warrior doing the rough work that keeps us safe and secure in our beds at night. I bring up someone like Kissinger and you'll say 'Oh, no, you just don't understand geopolitics.'"

No. I'll say, "What makes Kissinger a monster, in your opinion?"

"I'll bring up someone like Nixon and you'll say 'The American people voted for him.'"

No. I'll say, "What makes Nixon a monster, in your opinion?"

"I bring up Iran-Contra and you'll say Ollie North was a good man who took the hit for his President."

Now I would agree that he consciously took the fall for the Reagan Administration. And that is loyal. Whether or not he is an objectively good man? I suspect he's just a man, neither a saint nor a monster.

"I can't bring up Vietnam because that was a moral victory for our side, stopping the dominoes from toppling."

No, it wasn't. It was just Cold War weenie waving, to a large extent, on our part. "See, we can do Apollo and fight a major war thousands of miles from home." And, to the Vietnamese, it was always much more about Vietnamese nationalism than it was about any Western idealism, of any description. In a lot of ways, the good guys won. And we weren't the good guys.

"Iranian coup? Backing the Shah? Training the SAVAK? Cold War pragmatism."

Uhhh...yeah, in an objective analysis, that's what it was. But I'm not proud of it, by any means. I think we should have stayed our original course and come to an accommodation with Mossadeq. It's also important to recognize that Iran 1953 was brought to us by Dulles, the same guy who brought us Guatemala 1954 and the Bay of Pigs. For a while, about a decade, the idiots were running the asylum. I'll admit that to anyone who asks.

"Killing Allende? He was asking for it."

I'm conflicted about Allende. He was likely to be a big pain in our necks, that's for sure. But his real failing was being an impractical ideologue who would have run Chile into the ground. I think the Chileans could have done much better for themselves than Pinochet, but they couldn't have done much worse than Allende. In the end, even if we stayed out of it, Pinochet (or somebody like him) and his boyz would have finished off Allende and the socialists simply because that's how Latin America worked back then, whether we were sponsoring it or not.

"I bring up the dictators we backed from Hussein to Pinochet and you'll find one way or another to change the terms of debate and negate the criticism."

Nope. I think treating with these dictators was us being pragmatic. In some cases -- though not as many as some apologists would have you believe -- it was Hobson's choice. For example, the Saudi Royal Family or...who? But the lesser of two evils (no matter how you define evil) is still evil.

"Once all of that is done, you'll put on a paternalistic tone of understanding and say 'Sometimes I forget just how young you are, how much yet there is for you to learn about the way things are done. It's far more complex than your cliched, lefty views make it out to be. But given time and an open mind, you'll learn. Now have a sweet and play out side. That's a good boy.'

Never, ever. The only thing that gets us in these tangles is your facile use of easy cliches, like "monsters". Think for yourself, using your own reason. Speak for yourself, in your own words. We may not agree, but we would have a lot to talk about, much more civilly.

Damien Sullivan said...

jollyreaper: yeah, that corruption stops WWIII idea is amusing. Might not be perfectly plausible but seems quite *publishable*. Reminds me a lot of my memories of Golden Age SF.

Don't know why the merchant would think broadcasting information would be a bluff.

Phil: "There was no real reason for the Cold War in the first place. Both sides were acting contrary what would have been best for the world as a whole. "

Those sentences don't support each other. Prisoner's dilemma, tragedy of the commons, game of chicken, etc.: people can follow individually rational reasoning into globally horrible situations, because at each step deviating would make them worse off.
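
To make that logic concrete, here is a minimal prisoner's-dilemma sketch with hypothetical, textbook-style payoffs (nothing in it comes from the thread; it is only an illustration): whatever the other side does, "arm" pays better than "disarm" for each player individually, yet mutual arming leaves both worse off than mutual restraint.

# Prisoner's-dilemma sketch with hypothetical payoffs (higher = better for "me").
PAYOFF = {
    ("disarm", "disarm"): 3,  # mutual restraint: both do reasonably well
    ("disarm", "arm"):    0,  # I disarm while they arm: worst case for me
    ("arm",    "disarm"): 5,  # I arm while they disarm: best case for me
    ("arm",    "arm"):    1,  # mutual arms race: bad for both
}

def best_response(their_choice):
    # The individually rational move against a fixed opponent choice.
    return max(("arm", "disarm"), key=lambda mine: PAYOFF[(mine, their_choice)])

# Arming is the better reply no matter what the other side does...
assert best_response("disarm") == "arm"
assert best_response("arm") == "arm"

# ...so both sides arm, and each gets 1 instead of the 3 that mutual restraint offers.
print(PAYOFF[("arm", "arm")], "<", PAYOFF[("disarm", "disarm")])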

Anonymous said...

RE: Damien

The US and the USSR were selfishly, ruthlessly, pursuing their own interests at the expense of literally everyone else. Where those interests were at odds we had the Cold War.

Following in the fine tradition of Europe doing the same thing for quite some time.

My statements support each other if somehow that would not have been the case. Which it didn't have to be.

(SA Phil)

Tony said...

SA Phil:

"The US and the USSR were selfishly, ruthlessly, pursuing their own interests at the expense of literally everyone else. Where those interests were at odds we had the Cold War.

Following in the fine tradition of Europe doing the same thing for quite some time.

My statements support each other if somehow that would not have been the case. Which it didn't have to be."


All that states have are interests. What else are they supposed to pursue?

You're saying there was an alternative regime. Please describe it and explain how the two superpowers could have engaged in it with enough surety that they were not undermining their own interests.

jollyreaper said...

"Don't know why the merchant would think broadcasting information would be a bluff."

Because the consequence of that would be space guillotines applied against the rulers of the respective nations. And most Federation captains wouldn't even make a threat that sounds like something so horrible, let alone be willing to carry it out.

Anonymous said...

Tony,

All that states have are interests. What else are they supposed to pursue?

You're saying there was an alternative regime. Please describe it and explain how the two superpowers could have engaged in it with enough surety that they were not undermining their own interests.
=====

The libertarian ideal -- what this country was founded on.

My interests are only paramount until they encroach on someone else's.

It works both ways.

(SA Phil)

Tony said...

SA Phil:

"The libertarian ideal -- what this country was founded on.

My interests are only paramount until they encroach on someone else's.

It works both ways."


I don't know what to say to that. I have enough data in my own life to be convinced that the "libertarian ideal" is a fantasy. I also know enough history to know that while individual liberty was important to the founding generation, they were nowhere near Libertarians, by the 20th/21st Century definition.

Damien Sullivan said...

"Because the consequence of that would be space guillotines applied against the rulers of the respective nations. And most Federation captains wouldn't even make a threat that sounds like something so horrible, let alone be willing to carry it out. "

Meh, when I think of Star Trek and two warring states, I think of "Taste of Armageddon" and General Order 24 to resolve a hostage situation. Letting leaders get themselves killed seems small change. Besides, you could replace "ripped to pieces" with "totally kicked out of office".

jollyreaper said...

Well, there's nothing saying lynch mobs must follow. But I'd think going out like Mussolini or Ceaușescu to be more likely.

Of course, in my own take on this the not-war launches during the Able Archer exercise in '83. It's not clear who fired first, at least from the limited perspective of the characters. The contractor opines that if we fired first, the dems will win the next election. If the Reds fired first, Reagan might manage to win. The Russians could be facing a violent regime change since there's no putative "other party" to vote for.

Got the outline finished. Should be short and snappy. While it's not the way the world works, it's the way you'd like it to work. Because the way it works right now, it's pretty awful.

Anonymous said...

Tony,

I don't know what to say to that. I have enough data in my own life to be convinced that the "libertarian ideal" is a fantasy. I also know enough history to know that while individual liberty was important to the founding generation, they were nowhere near Libertarians, by the 20th/21st Century definition.

=====

I was referring to the more classic sense - although there are still some of those ideals left under the nonsense.

But the founding fathers would have left Europe and Japan after World War 2 and said "good luck!"

They certainly wouldn't have set up a Mexican stand-off that lasted for 40 years.

And they wouldn't have wanted anything to do with an American Empire.

It's not like even with the Cold War we really did anything about the Soviet Union's actual expansion (Eastern European countries they "liberated" in WW2), with the exception of the Berlin Airlift.

And spending the Western European wealth we gained by our economic colonialism on weapons to defend Western Europe seems like a real waste of time.

They could have just spent that money on themselves if we had eased off by that amount.

(SA Phil)

Thucydides said...

Just a bit of a sidebar here.

If there were to be a Libertarian nation, their armed forces would resemble that of Switzerland, capable of making any attempted invasion as unpleasant and difficult as possible.

Sadly, even a Libertarian Switzerland would find itself in a very precarious position, since it would probably have something someone else wants a lot, perhaps a natural resource, or a position from which to secure a frontier or trade route (American interests in places like Hispaniola and Nicaragua during the Banana Wars period were based on control of the routes in and out of the Panama Canal).

A Continental Power like Russia fears land invasions and so wants/needs buffer states on the frontier to absorb the invaders before they reach the homeland. If saving Russia means the destruction of Poland and Germany; oh well.

An Oceanic Power like the United States needs free access to the world's oceans for trade, so needs a navy capable of reaching anywhere Yankee traders can go, as well as secure bases and the ability to make or force an entrance even against natural strongholds like the Strait of Hormuz.

This is a slight variation of the Mackinder thesis of the World Island, which is interesting to contemplate, even if it is probably not true.

Even Libertarians might accept that they need the ability to extend their reach beyond the borders to nip problems in the bud or protect their own people who are peacefully pursuing their own interests. This is a pretty contentious issue inside Libertarian circles, so don't expect any sort of definitive answer from me (I'm a small "l" libertarian anyway).

Damien Sullivan said...

"And they wouldn't have wanted anything to do with an American Empire. "

The Founding Fathers include Jefferson, who as President basically doubled the size of the US with the Louisiana Purchase. And AIUI, one of the objections to Britain was it trying to respect treaties with the Indians and curb colonial expansion.

"If there were to be a Libertarian nation, their armed forces would resemble that of Switzerland"

If compulsory military service, aka a universal male draft, were compatible with a Libertarian nation.

Anonymous said...

Mistrust, fear, and misunderstanding between sides are usually what fuels conflict; starting from different worldviews and not understanding, or not wanting to understand, the other side's is oftentimes what starts the conflict. The US and USSR didn't trust each other, didn't believe some of the other side's stated motives, and did believe their propaganda; they didn't even accept each other's definition of force. The US did (and still does) make a distinction between conventional weapons and nuclear weapons; the USSR put all weapons on a single scale. The Soviets would have been shocked that the US would have launched all their heavy ICBMs at the Russian homeland when they (the Soviets) launched some 'small' tactical warheads at our forces in Germany. Just because something is logical does not mean it is rational...

Ferrell

Scott said...

" "If there were to be a Libertarian nation, their armed forces would resemble that of Switzerland"

If compulsory military service, aka a universal male draft, were compatible with a Libertarian nation."

Doesn't libertarianism carry with it the expectation that all will act to defend the rights of the individual?

It wouldn't be national conscription so much as the old definition of the Militia in the US: *every* male between the ages of 18 and 45 was expected to spend time drilling and training, period.

Tony said...

Scott:

"Doesn't libertarianism carry with it the expectation that all will act to defend the rights of the individual?"

Nope. It is up to each individual (and anybody he can convince to help him) to defend his own rights.

Thucydides said...

I was referring to the idea that Switzerland was a hedgehog that would be very difficult for a conventional force to invade.

Even then the Swiss Citizen militia model has some features that a libertarian nation would probably adopt, for example anyone who volunteered for military service would be issued an automatic weapon and ammunition to keep at home, and light and medium weapons would also be widely distributed rather than held in central depots. Modern "smart" weapons mean that you could conceivably hold effective anti-tank missiles in your garage or basement. Perhaps the government would subsidize your purchase of a four-wheel-drive truck adaptable to military purposes as well.

Since Libertarianism is based around the idea of voluntary cooperation, the two important things for a libertarian military force would be that:

a) everyone volunteered to be there, and;

b) as volunteers, they understand and are willing to follow concepts of military organization and discipline in order to achieve their objectives.

Tony said...

Thucydides:

"Even then the Swiss Citizen militia model has some features that a libertarian nation would probably adopt, for example anyone who volunteered for military service would be issued an automatic weapon and ammunition to keep at home..."

Details are important. Those military weapons are issued with sealed ammo, only to be opened in time of mobilization. Also, a libertarian society wouldn't issue weapons. At most it would allow militia organizations to spontaneously form and issue their own weapons.

Thucydides said...

There are various flavours of Libertarianism, but most do accept there is a need for a government, that government performs some vital functions and that rule of law is important.

Spontaneous militias issuing their own weapons is one end of the spectrum, while a more centralized military force with government issuance of military supplies is another. Indeed, it would be quite feasible for a large and wealthy enough libertarian society to create something along the lines of the USMC and a Naval force to project power on a limited basis in order to neutralize threats and rescue citizens who are being threatened abroad (think of Thomas Jefferson sending the US Navy against the Barbary Pirates in 1801).

Of course once the threat is neutralized, the most probable action by the same libertarian society would be to disband or allow the force to wither away (which is what various Congresses allowed to happen repeatedly between wars until the end of WWII. We might say this has happened now watching the USN shrink from almost 600 ships in the 1980's to 200 today). This is right and proper since the reason for having the force has abated, and while it is just to take action to protect your own citizens and property, having the ability to threaten other people's person and property without reason is not.

Tony said...

Thucydides:

"There are various flavours of Libertarianism, but most do accept there is a need for a government, that government performs some vital functions and that rule of law is important."

That doesn't describe any real-world, self-identified libertarians I've ever met. Their ideals float somewhere just this side of total anarchy.

"Spontanious militias issuing thier own weapons is one end of the spectrum, while a more centralized military force with government issuance of military supplies is another."

Once again, one has to take into account what real libertarians actually believe. Organized government forces are almost uniformly thought of as an absolute evil.

"Of course once the threat is neutralized, the most probable action by the same libertarian society would be to disband or allow the force to wither away (which is what various Congresses allowed to happen repeatedly between wars until the end of WWII. We might say this has happened now watching the USN shrink from almost 600 ships in the 1980's to 200 today). This is right and proper since the reason for having the force has abated, and while it is just to take action to protect your own citizens and property, having the ability to threaten other people's person and property without reason is not."

Partial demobilization in the face of a reduced threat is economically prudent. But one has to maintain a credible force that can in fact threaten potential enemies. Otherwise one's citizens and one's trade cannot go safely abroad. Yet again, the unrealistic notions of real-world libertarians would have us totally disband the very forces that make our citizens and trade safe.

Damien Sullivan said...

"That doesn't describe any real-world, self-identified libertarians I've ever met. Their ideals float somewhere just this side of total anarchy."

Then your experiences are extremely narrow and limited. I don't know of any reliable demographic studies of libertarians, but minarchists, who accept a need for government and rule of law (usually, government *only* for rule of law) are pretty standard. All Objectivists (who are politically a subset of libertarians), for example. If anything it's anarcho-capitalists who are the internet vocal minority. And there are even more moderate libertarians, who want a much smaller government but might grudgingly grant other functions beyond simple law and contract enforcement. You haven't met them? Too bad, I have, in real life and online.

I'm not sure any of them would cotton to compulsory military service, mind you.

Tony said...

Damien Sullivan:

"Then your experiences are extremely narrow and limited. I don't know of any reliable demographic studies of libertarians, but minarchists, who accept a need for government and rule of law (usually, government *only* for rule of law) are pretty standard. All Objectivists (who are politically a subset of libertarians), for example. If anything it's anarcho-capitalists who are the internet vocal minority. And there are even more moderate libertarians, who want a much smaller government but might grudgingly grant other functions beyond simple law and contract enforcement. You haven't met them? Too bad, I have, in real life and online."

It's got nothing to do with the Internet. I'm talking about people one actually runs into in public, or within one's own social circle. And when I say "real-world, self-identified", I'm talking about the ones who are convicted enough to be public activists or at least buttonhole you at a party, not people who will say "libertarian" if you back them into a corner, or groups included to pad a list, without their own explicit acquiescence. (I'm pretty sure Objectivists, for example, would reject any political or economic label.)

And none of those people -- and I've met more than a few -- can be considered anything like rational on the subject of government.

Thucydides said...

Well I feel a bit put out since you do "know" me after a fashion and I am indeed a self identified libertarian.

Libertarianism is a spectrum (much like virtually every other political "ism" out there), and even if some groups don't or won't self identify (like Objectivists) a reading of their positions on most matters should make it pretty clear they are within the ambit of libertarian thought. Maybe the best way to think of this is to envision a Venn diagram and see where overlapping positions on ideas like Individual liberty, property rights and rule of law come into play.

I do agree most people who say they are Libertarians (and especially those of the capital "L" persuasion) come across as unhinged, but listening to their positions tells me they are mostly on the outer fringes of at least one of the Venn diagram bubbles.

For those of you who are going to look into this further (or wonder where I am coming from), think of Libertarianism as evolving from the classical Liberalism of 18th century Britain, with its emphasis on free speech, unfettered property rights and the need for a Rule of Law rather than the whims of the ruler to protect the rights and freedoms of the people. Limited government movements like the Reform Party in Canada from the 1980's to the end of the 1990's and the TEA Party movement in the US represent strains of libertarian thought or at least influence, so there are real-world applications and consequences as well.

Tony said...

Thucydides:

"Well I feel a bit put out since you do 'know' me after a fashion and I am indeed a self identified libertarian."

You talk about libertarian themes a lot, but you don't strike me as a libertarian within the parameters I've identified. So, aside from no offense intended, I can honestly say that I never even considered your stated positions.

"Libertarianism is a spectrum (much like virtually every other political 'ism' out there), and even if some groups don't or won't self identify (like Objectivists) a reading of their positions on most matters should make it pretty clear they are within the ambit of libertarian thought. Maybe the best way to think of this is to envision a Venn diagram and see where overlapping positions on ideas like Individual liberty, property rights and rule of law come into play."

Hmmm...I don't think it's valid to say that groups having overlapping values necessarily exist in the same family. Roman Catholics, fundamentalist Protestants, and Latter Day Saints all oppose abortion, believe in Divine Grace, and practice baptism. Beyond that? The fundamentalists think neither the LDS nor the RC churches are really Christian. The Roman Catholics would tell you the fundamentalists are misguided and the LDS not Christians at all. The LDS would accept the Christianity of the other two denominations, but would not accept their possession of final revelation in the New Testament. Commonality of some values doesn't really mean that much.

"I do agree most people who say they are Libertarians (and especially those of the capital 'L' persuasion) come across as unhinged, but listening to their positions tells me they are mostly on the outer fringes of at least one of the Venn diagram bubbles."

And you are probably off center, based on the values they would include in the diagram. It just all seems too subjective to me.

"For those of you who are going to look into this further (or wonder where I am coming from), think of Libertarianism as evolving from the classical Liberalism of 18th century Britain, with its emphasis on free speech, unfettered property rights and the need for a Rule of Law rather than the whims of the ruler to protect the rights and freedoms of the people. Limited government movements like the Reform Party in Canada from the 1980's to the end of the 1990's and the TEA Party movement in the US represent strains of libertarian thought or at least influence, so there is real world applications and consequences as well."

I think you've pretty adequately identified the roots of libertarian thought. But I don't think you're quite correct in identifying application. The Reform Party was based in regional dissatisfactions and had a lot of social positions, like those on gay rights and abortion, that would more correctly be labelled paleo-conservative. It probably attracted some libertarian voters on the basis of its economic and domestic policies, but it wasn't very libertarian in overall nature.

Likewise, the Tea Party has libertarian members and supporters, but it's mostly a populist protest party. Most of its members are also very conservative on social issues.
