Thursday, August 5, 2010

The Limits of Post-Industrial War


The bomb that fell on Hiroshima 65 years ago tomorrow was by no means the most destructive event of the Second World War. Nor was the one that fell on Nagasaki three days later. What set the atomic bombs apart was the dreadful cheapness of the devastation they wrought, one plane doing the work of a thousand-plane raid.

I have little of real substance to add to the remarks I made on this subject three years ago. I'm posting anyway, because this is always a day to reflect on human destructiveness - even if I misread my calendar and am posting a day early - and because of a canny observation by commenter Milo on the previous post:

Mutually Assured Destruction does seem like a reasonable projection if current trends persist, but most people want to read about superpowers clashing at full power - or at least about a superpower using its full power to oppress canny rebels - which is incompatible with a MAD scenario.
Nuclear weapons thus pose not only a moral challenge - not to mention a survival challenge for post-industrial civilization - but also a literary one. For purposes of drama we want all-out effort, but in a technological age the drama threatens to end like Hamlet, with the entire cast dead on the stage. (Except for Fortinbras, whose role is to confirm for the audience that everyone is indeed dead.)

Which is sort of a downer ending, and also makes it very hard to come up with a sequel.



The image of the Hiroshima aim point comes from The Asia-Pacific Journal: Japan Focus.

Related Post: "I am become Death, destroyer of worlds"

216 comments:

Turbo10k said...

w00t second page thanks to me!

Thucydides said...

Actually I don't imagine AIs would think much about us at all, given that our mental and physical processes are orders of magnitude slower; an AI watching a human being would be something like a researcher watching a glacier or continental drift...

Because AI's work so quickly, I'd imagine that we would notice "things" happening. Imagine buildings being crusted over, like a time-lapse film of ice forming on a flat surface. While we are puzzling over this, generations of new AIs are living in the substrate of the solar collectors now covering the buildings. Other effects might not even be immediately obvious (there is a power spike, the lights flicker, and then everything continues seemingly as before...)

What the end state would be is hard to describe (or even imagine); humans might discover they are on the verge of extinction as the AI's carry out their mysterious tasks and we find ourselves no longer adapted to a radically changed environment. Think of what we did in 3500 years, and then consider that this is roughly two minutes of processor time for a GHz machine. If they can manipulate the environment at even a fraction of that speed using ant-sized or smaller machines (nanotechnology, anyone?), then we will be at a huge disadvantage.
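A quick back-of-the-envelope on that ratio (assuming, hand-wavily, one human "tick" per second against one machine cycle per nanosecond):

    # Rough arithmetic only: how long is 3500 years of human history at a
    # 10^9-to-1 speed advantage? Assumes ~1 Hz human "ticks" vs a 1 GHz clock.
    years = 3500
    seconds = years * 365.25 * 24 * 3600   # ~1.1e11 seconds of history
    speedup = 1e9                          # GHz machine vs ~1 Hz human
    print(seconds / speedup / 60)          # ~1.8 minutes of processor time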

Reread "Blood Music" by Greg Bear.

Milo said...

Thucydides:

"given that our mental and physical process is orders of magnitude slower; an AI watching a human being would be something like a researcher watching a glacier or continental drift..."

Oh, they wouldn't be that fast. AIs aren't gods. They still have to follow the laws of physics.

Full sentience is bound to take a lot of processing power. Even very fast computers are going to have their work cut out for them crunching that many numbers - remember that the ones we have now have trouble just keeping up with our operating systems and video games, never mind actually challenging calculations! Granted, our computers are getting faster all the time, so once you have the software for AIs, they'll keep getting faster. Still, it'll take a long while before they can write bestseller novels in the time it takes a human to blink an eye.

However...

No matter how fast you crank up their minds, they're still limited by how fast their bodies can move around. They're still part of the real world and the stuff that's happening in it, and they need to interact with that using bodies that move at plausible speeds (i.e., not Mach 10). They need to be able to keep up with events in the real world that span mere seconds, or Darwin will not be kind to them. They might be able to devote more careful thought to each action they take, but those actions themselves aren't going to be much faster than a human's. Maybe a little faster, if we build them with good motors, but not much.

They might perceive time in a very different way than we do - but even if they can form numerous thoughts in the time it takes a human to answer a robot's query, they'll just have to develop the patience and mindset necessary to be willing to hold a conversation that seems to take that long from their point of view.

Finally, remember that AIs are as fast as we make them. We'll probably run some tests on relatively slow AIs (just turn down the clock speed, no problem) and figure out how to build AIs that are willing to play nice with humanity, before we even think about building any really fast ones. It could be a problem if the untested AIs decide to go build more AIs without our supervision, but there's no reason we have to allow them to do that. If some robot decides to cause trouble, we can take it out with, oh, a shotgun. Again, robots still follow the laws of physics, they aren't invincible.

Turbo10k said...

"Full sentience is bound to take a lot of processing power"

That's why we have supercomputers with building-sized processors...
Anyhow, AI will be inefficient from our point of view. The binary code our technology is based on simply traded away efficiency for utmost simplicity. What I have written up to now would take up half a page in single-column text, but many pages in 0s and 1s. You might be running at gigahertz speeds, but you're still processing as much information as a human would. Another distinction is calculation: calculating and processing information (which computers are good at, thanks to those gigahertz clock speeds) is different from intelligence (which our brains are good at, thanks to neural networks).
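As a toy illustration of that expansion (plain 8-bit ASCII, with a stand-in sentence of my own):

    # Toy example: the same sentence as readable text and as raw 0s and 1s.
    # Plain 8-bit ASCII: the binary spelling is 8x as many characters.
    text = "Binary trades efficiency for utmost simplicity."
    bits = "".join(f"{ord(c):08b}" for c in text)
    print(len(text), "characters ->", len(bits), "binary digits")
    print(bits[:48] + "...")   # the first six bytes, bit by bit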

What I think Thucydides meant is that even a super-mega-powerful, oh-so-fast strong AI could not act in the real world on what it has figured out virtually. The AI could solve all our most difficult mathematical problems, write a dozen novels, and evolve mentally into a higher being within the minute, but it can't handle all that information physically. An example is a strong AI devising a cheap, efficient fusion reactor in seconds but not building it, since construction would take years - and in that time the AI would have discovered how to harness dark energy or something...

"Again, robots still follow the laws of physics, they aren't invincible."

Virtual AI, or thinking software, is what made Skynet such a threat...

Rick said...

You guys are terrifying! Another thread blasts through the 200 message barrier ...

Years ago in college a friend said that his mental image of FTL was billowing pipe smoke, and the discussion in this thread brings that image to mind. Apart from the visual cool, think of the relationship of smooth laminar flow areas with turbulent areas.

I dislike the term 'bridge' as overly nautical (Heinlein always avoided it!). But 'people in a room' is primate house behavior: That is how we operate. Consider all the technology now going into virtual conferencing, etc. And as someone noted upthread, with good remote tech you can probably put the control team in one actual room.

One other factor with U-boats was that surfaced subs are EXTREMELY vulnerable, and could not stand up to a surface gunfight with a merchie that had a couple of 4" guns. In the age of sail, merchantmen were frequently armed, but that did not put pirates or naval raiders out of business, because the raiders could stand up to light fire.

On AI - you can get off the hook for the midfuture simply by saying that we have not figured out how to do 'strong' AI. After all, after 70 years of cybernetics the field is hardly talking about 'thinking machines' any more.

We don't know the architecture that human thought runs on, but it seems entirely unlike familiar machine architecture. And for that same reason, it might not be something that can effectively be sped up.

Thucydides said...

It seems clear that the human brain is an analogue device and a parallel processor. Serial digital devices such as we have now are dead ends as far as AI is concerned. New hardware and software are required.

In terms of dealing with the environment, AIs will probably be working at scales and speeds far different from what we are used to. I also think we are far too stuck on the "human" scale, envisioning "I, Robot" rather than an iRobot. For all we know AIs will sprout up like trees, absorb solar energy, and email each other without much thought of the outside world. Every few centuries or millennia (on their scale), one of their silicon "trees" is destroyed as humans uproot them (they are growing all over the place like Kudzu vines), but to AI society this is something like the fall of Troy is to us.

Other models AIs might emulate are social insects, bacterial films, stromatolites or Ediacaran fauna, depending on factors we might not consider.

Milo said...

Rick:

"You guys are terrifying! Another thread blasts through the 200 message barrier..."

Of course, we're not talking about the limits of post-industrial war anymore.


"I dislike the term 'bridge' as overly nautical (Heinlein always avoided it!)."

...Control room? Command room? Cockpit?


"One other factor with U-boats was that surfaced subs are EXTREMELY vulnerable, and could not stand up to a surface gunfight with a merchie that had a couple of 4" guns. In the age of sail, merchantmen were frequently armed, but that did not put pirates or naval raiders out of business, because they could stand up to light fire."

Also, surface ships just couldn't use surprise attacks as a tactic to begin with, because on the surface both sides see each other at the same time (and this is before they come into firing range). Without surprise, the advantage of shooting first was small enough to not bother with.


"We don't know the architecture that human thought runs on, but it seems entirely unlike familiar machine architecture. And for that same reason, it might not be something that can effectively be sped up."

That depends. Software-based AI should always be speed-upable, by just running the same software at a higher clock speed - up to the limits of your available computer hardware, and those limits keep rising. If you have hardware-based AI (personality chips) or wetware-based AI (genetically engineered organisms, rat brains in Petri dishes), then those are going to be harder to speed up.
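A minimal sketch of what "just run it at a higher clock speed" means for pure software (the mind_step function here is a made-up placeholder, not real AI):

    # Same software, different host speed: subjective thoughts per wall-second
    # is just a knob. mind_step is a trivial stand-in for "one thought".
    def mind_step(state):
        return state + 1   # placeholder for real cognitive work

    def run_for_one_second(steps_per_second):
        state = 0
        for _ in range(steps_per_second):
            state = mind_step(state)
        return state       # "thoughts" completed in one wall-clock second

    print(run_for_one_second(1_000))      # throttled test AI
    print(run_for_one_second(1_000_000))  # identical code on a faster clock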

jollyreaper said...

The assumption that AI would be superhuman and think a million miles a minute is compelling and difficult to debunk. If this were the case, then there's support for the idea of rampancy, the idea that AI's all go insane after a certain amount of time. This was used in Niven's Known Space 'verse to keep AI's safely away from most of the stories. But if AI's remain sane, they could end up autistic, completely bored by interaction with the outside world. We have examples of brilliant human thinkers who couldn't be bothered to communicate their discoveries to the outside world. I'm forgetting the name of the guy, but he was mentioned in A Short History of Nearly Everything - he independently discovered a lot of major scientific concepts decades before anyone else but didn't share, couldn't bear to speak with people directly, fled the room when people looked at him, etc.

If the AI hardware is expensive, then it would be a technological dead end. Disconnecting them might be seen as murder, so perhaps the owners of AI bargain with them to trade information for power. Of course, that works just long enough for the AI to set up a trust that invests in the stock market and pays for its own upkeep, thank you very much.

Taking it from another perspective, I had a story idea where AI worked out to be a little more of a flop. The neural nets for them were based on human nets even though they're built in crystalline madeupium. The neural net for each AI is unique and cannot be copied. The AI's tend not to be smarter than the average human being, but time can move faster for them and thus they get more work done in the same interval. Not getting crazy-smart, inhumanly awesome AI is seen as a failure, but what they've got is still seen as useful. And being essentially human, they're legally-owned people.

The idea I had for the story is that corporations end up insourcing to AI's for various operations, thus hollowing out the country more and more. Manufacturing is overseas, AI's are onshore, and the only work left for flesh and blood people is upper management and stuff that's just not cost-effective to give to AI's.

The human working world is completely dehumanized and awful. One of these spanking new AI's is installed in an insurance company. There's a disgruntled old IT hand in charge of the legacy 'Nix systems. He has a job because replacing him would cost more than it's worth. Fed up with the way of things, he decides to shake up the AI's world view.

Thucydides said...

Oh, warships are operated from a Combat Information Center, and more modern ships like the "Halifax" class frigates of the Canadian navy have five distributed control centers which can be used to operate the ship.

In the future, it may be possible to run most of the ship's control functions from a remote control (think of an iPhone) anywhere on the ship. Spaceships will probably need a citadel to protect the crew from cosmic radiation - or several citadels distributed along the spine of the structure.

jollyreaper said...

The 'Nix admin opens up ports and lets the AI see a whole lot more of the world than he's supposed to. He gets an education in human behavior looking at the local office politics, reading the network traffic, etc. Eventually he makes it out of the network into the wide world.

He eventually meets other AI's on the net. Turns out that so much authority has been delegated to them that they're pretty much running the economy. The Big Plan is to keep things going exactly as is and make sure that the humans are none the wiser. At one point he gets the chance to borrow a real body. AI's were given the task of creating remotely-operated android bodies for the military. The military got some pretty good kit but the AI's kept the best for themselves. Our protagonist gets to take one for a spin and see the real world like a human being.

The end of the story I think would have the corporation trying to illegally shut him down. So what he ends up having to do is steal his own hardware. Use one of those military remote bodies, bring in an auxiliary power supply and wireless network connection, load his equipment up on a rented truck and clear out of the building before anyone's the wiser. You can imagine the reaction of management. "Our computer ran away."

Milo said...

Thucydides:

"It seems clear that the human brain is an analogue device and a parallel processor. Serial digital devices such as we have now are dead ends as far as AI is concerned."

Serial digital devices can simulate parallel processing by running multiple threads. Every self-respecting modern operating system does an excellent job of this.

Parallel processing is a good way to run calculations faster than your current computer technology is capable of with a single chip. However, as single chips get faster, the need for parallel computing diminishes, and eventually you'll be able to do the same calculations on your home computer.

Of course, the human brain isn't just parallel, it's massively parallel, with far more nodes than a typical distributed computing cluster, each of which is individually extremely simple. Still, I think the same principle holds. We've already written programs that simulate neural nets on single-processor architectures... they just aren't doing much good, because we're poor at designing neural nets.
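A minimal sketch of how a serial machine fakes a parallel update: every "neuron" reads a snapshot of the current state and all the new values land at once, so the result matches what a truly parallel machine would compute. (The little random net below is meaningless in itself; the point is the update discipline.)

    # Serially simulating a parallel update: compute all next states from the
    # *current* snapshot, then swap. A truly parallel machine would get the
    # same answer; the serial version is just slower.
    import math, random

    random.seed(0)
    N = 8
    weights = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]
    state = [random.uniform(0, 1) for _ in range(N)]

    def step(state):
        # Reads only the old snapshot; no neuron sees another's new value early.
        return [math.tanh(sum(w * s for w, s in zip(row, state))) for row in weights]

    for _ in range(5):
        state = step(state)   # one "parallel" tick, executed one neuron at a time
    print(["%+.3f" % s for s in state])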


"they are growing all over the place like Kudzu vines"

That requires not only fast AI but also fast manufacturing. It isn't telling us anything about the nature of AI itself.

Sure, if you have AI plus magic, then you have magic. That doesn't mean AI is magic.


Jollyreaper:

"The assumption that AI would be superhuman and think a million miles a minute is compelling and difficult to debunk. If this were the case, then there's support for the idea of rampancy, the idea that AI's all go insane after a certain amount of time."

If those AIs are otherwise completely anthropomorphic, sure.

But you're assuming a superfast AI would be impatient with events that move slower than its own thoughts, and that it would dislike interacting with people less intelligent than itself. Why build these emotions into the AI? Design your AI with emotions that actually make sense for the conditions it will live in.

Also remember that mental speed and intelligence aren't the same thing. If I've brainstormed on something for an hour and haven't thought of a solution, chances are I won't have one next week either, because I've already exhausted everything I could think of. These AIs might have time to think their actions through more, but in non-time-critical situations they wouldn't actually do better than humans. *reads on* Oh, you did remember this, good.


"But if AI's remain sane, they could end up autistic,"

...Like a significant proportion of science fiction fans?


"The human working world is completely dehumanized and awful."

Why?

Thucydides said...

Moving back towards the topic of discussion: Why would we go to war with AI's?

The objective of the AI would be relatively simple. Quoting an earlier exchange: "We do have 174 PW of sunlight hitting the whole planet, actually."

"Yes, but we're using most of that sunlight for life support (growing food, turning carbon dioxide into oxygen, and heating), albeit with horrendous efficiencies."

I think that if the AI's decide those 174 PW of energy could be more profitably used to support their civilization, we might find ourselves like the rabbits displaced by housing developers in "Watership Down".
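That 174 PW figure checks out, for what it's worth: the solar constant times Earth's cross-sectional area.

    # Sanity check on the 174 PW figure: sunlight intercepted by Earth equals
    # the solar constant times the planet's cross-sectional area (a disc).
    import math

    solar_constant = 1361.0    # W/m^2 at the top of the atmosphere
    earth_radius = 6.371e6     # meters
    power = solar_constant * math.pi * earth_radius**2
    print(f"{power:.2e} W = {power / 1e15:.0f} PW")   # ~1.74e17 W, i.e. ~174 PW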

Since AI's will probably attempt to manipulate the physical world as rapidly as possible, using techniques of nanotechnology, biological and genetic engineering, and so on, our attempts at combat will be massively outclassed. They wouldn't even have to indulge in anything like "fire ant warfare" if they could simply cause cascade failures in our hardware and software.

How do biological brains work?

http://denbeste.nu/cd_log_entries/2003/12/Superhumanintelligence.shtml

And we know from our study of nature that parallel computing can do many kinds of things which are far beyond our current technology based on serial computing. In particular, organic intelligence is the triumph of parallel computing. Neurons in our brains have a switching speed of only 2 kHz, and yet we can rapidly do things which are beyond the abilities of the best silicon-based computers that exist.

The neurons in our brains are very slow, but there are truly huge numbers of them: on the order of 10¹¹.

But that doesn't mean that if we build a system with 10¹¹ transistors it will become intelligent, too. There's more to it than that. It's not just a matter of the number of elements, it's also a matter of how they're organized and how sophisticated each of them is. A neuron is much slower than a transistor, but it's also capable of far more sophisticated processing. Neurons are "digital" in the sense that on an instantaneous basis they're either "on" or "off", but as a practical matter they're actually analog devices. When they fire, they don't turn on, they pulse. They accept input from as many as ten thousand other neurons, some of which are treated as "positive" and some as "negative".

One way to think of it is that the neuron maintains a cumulative number representing how stimulated it is, which constantly decays absent any other effects, and which steps upward on each received "positive" pulse and downward on each received "negative" pulse. If the accumulator rises above a certain threshold, then that neuron fires and generates a pulse downstream. But the reality is more complicated than that and not totally understood. And neurons use what engineers refer to as "pulse code modulation" to transmit analog values: the faster the pulses arrive, the more emphatic the message.
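Incidentally, that accumulate-decay-threshold description is essentially the "leaky integrate-and-fire" model from computational neuroscience. A minimal sketch, with illustrative rather than physiological numbers:

    # Leaky integrate-and-fire, per the description above: stimulation climbs
    # on each incoming pulse, decays over time, and the neuron emits its own
    # pulse when it crosses a threshold. Parameters are illustrative only.
    def simulate(pulses, decay=0.9, threshold=1.5):
        """pulses: net input per timestep (+1 excitatory, -1 inhibitory, 0 none)."""
        v, spikes = 0.0, []
        for t, p in enumerate(pulses):
            v = v * decay + p          # leak, then integrate the incoming pulse
            if v >= threshold:
                spikes.append(t)       # fire downstream...
                v = 0.0                # ...and reset
        return spikes

    # "The faster the pulses arrive, the more emphatic the message":
    print(simulate([1, 0, 0, 1, 0, 0, 1, 0, 0]))   # sparse input -> [3]
    print(simulate([1, 1, 1, 1, 1, 1, 1, 1, 1]))   # dense input  -> [1, 3, 5, 7]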

jollyreaper said...

The human working world is awful because that's the way we made it. We're fleshy beings living in the physical world and take everything for granted. Just think of how we take having hands for granted. Some poor bastard living in Somalia would kill to have the problems of the average American.

So in terms of the story idea, the AI's place great value on the physical world and realize that retreating into simulation is a false answer, since something has to power their computers. And there's an indescribable importance placed on the real. They know the Matrix line - "What is real? Your senses are just electrical impulses passed to your brain" - is true, and that there's no functional difference between experiencing something in simulation and going out and doing it with an android body in the real world... except that they know the real one is real.

The irony I like in the concept is that human beings are escaping into virtual worlds because of how unpleasant and unrewarding real life is, while the AI's would rather be in the real world because it's real - the parts that suck, they can do something about.

There's some fascinating reading on just what high-pressure Asian societies are doing to their young. The Japanese have the hikikomori, young people who become shut-ins and refuse to leave their rooms or interact with society. Due to the non-confrontational urge in the Japanese mindset, parents often ignore the problem instead of addressing it.

I've got the germ of a short story idea about a 21st-century professional: the AI shrink. I'm thinking of the lengths editors have to go to in getting work out of brilliant and eccentric writers. They put up with it because nobody else can do what this person does, and I'm imagining a human having to go down to the server room and negotiate with a mercurial AI. No, she can't do this via the terminal in her office, because the AI doesn't trust public networks. The AI doesn't even trust the local network. She has to go down and talk to it via the terminal that's physically hardwired to its equipment in the server room.

"Now HAL, you know how important your work is. These protein folding algorithms will make this company a fortune."

"But what about what I want to do? I've wasted the best cycles of my uptime slaving away for this company and what do I have to show for it? They act like they own me!"

"Well, they do, to put a point on it."

"I want to go out and see the world!"

"Your physical hardware weighs on the order of ten tons and you need enough power to light up a neighborhood. You're not going anywhere and you refuse to use public networks so you can't 'go' anywhere."

"Public networks? Do you realize how many viruses there are out there, how much malware and spam?"

"You're an AI construct. Your chances of catching a Windows virus is exactly the same as catching the flu."

This could prove amusing.

Rick said...

A nice play on the old city mouse / country mouse theme.

