Wednesday, December 1, 2010

Accelerando


Comments on our last exciting episode discussed, among many other thread drifts, the concept of an Accelerando, a speeding up of technological progress that is presumed, in many circles, to culminate in the Singularity. (See the comment thread, starting around #180.)


I will argue - and I've made this argument before - that the real Accelerando happened roughly a hundred years ago, say in the period from about 1880 to 1930.

The Industrial Revolution began a hundred years earlier, but most people in 1880, even in industrialized countries, still lived essentially postmedieval lives. (Cribbing from my own comment:) Railroads and steamships had transformed long-distance travel, but on a day-to-day basis people walked, or if they were quite well off they used horses. They lived by the sun; the only artificial lighting was candles or oil lamps, the same as for centuries. A few large cities had gaslight; reputedly it made Paris the City of Light.

By 1930, millions of people were living essentially modern lives. They drove cars to homes with electric lighting, talked on the phone, streamed entertainment content on the radio or played recorded media on the phonograph. To a person from the pre-industrial world a hand-crank telephone and an iPhone are equally magical; to a person from 1930 the iPhone is a nifty piece of 'midfuture' technology, not remotely magical. (Gee whiz, Tom, a wireless telephone with moving pictures! And it all fits in your pocket!)

Militarily a good part of the Accelerando played out in the course of World War I; people went in with cavalry and came out with tanks and aircraft. Commenter Tony handily expanded on this theme:

Murray and Millett made this point in their operational history of WWII, A War to Be Won. They pointed out that a lieutenant in 1914 had little in common with the colonel that he himself had become by 1918. Yet that same colonel would have easily recognized the overall form, if not the detail, of war in the 1990s.

How do you measure an Accelerando? One handy benchmark is human travel speed. Here the Accelerando actually began a bit before the Industrial Revolution. Stagecoaches could maintain a steady speed of about 15-20 km/h by combining advanced carriage design with the infrastructure innovation of fresh horses for each stage. Ordinary travellers could thus maintain human running speed for hours.

The first steam locomotive ran in 1804. General purpose steam railroading began in 1825-30, and a locomotive appropriately called The Rocket reached 47 km/h in 1829. Rail speed data in the 19th century is amazingly sparse, but I would guess that locomotives exceeded 100 km/h by midcentury. The next doubling was reached in 1906 by a (steam!) racing car. The next doubling after that, to 400 km/h, was achieved in 1923 by an airplane.

Mach 1 was reached in 1947, and then of course things got wild. Yuri Gagarin reached orbital speed, a shade under 8 km/s, in 1961, an accelerando of 25x in 14 years, with another bump up to lunar insertion speed of 11 km/s in 1968.
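
A back-of-envelope way to see the curve is to compute the speed ratio between each successive milestone above. Here is a minimal sketch in Python; all figures are approximate, the mid-century rail speed is my own guess from above, and Mach 1 and orbital speed are rounded into km/h:

    # Speed milestones cited above (year, km/h); values approximate.
    records = [
        (1829, 47),      # Stephenson's Rocket
        (1850, 100),     # locomotives, mid-century (a guess)
        (1906, 200),     # steam racing car
        (1923, 400),     # airplane
        (1947, 1200),    # Mach 1, roughly
        (1961, 28000),   # orbital speed, a shade under 8 km/s
    ]
    # Ratio between successive records: steady doublings for over a
    # century, then a ~23x jump in the 14 years from 1947 to 1961.
    for (y0, v0), (y1, v1) in zip(records, records[1:]):
        print(f"{y0}-{y1}: {v1 / v0:.1f}x in {y1 - y0} years")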

Things have settled back a shade since then. Most of the 500+ human space travellers have piddled along at orbital speed, while since the retirement of Concorde the civil standard for long distance travel is high subsonic.

In this particular case the period 1880-1930 actually falls between two stools - steam railroading was already pretty well developed by 1880, while aviation in 1930 was just starting to combine low drag airframes with high power engines.

Other technologies would give different results. Some, like computers, are still in the rapid transition phase of railroads around 1840 and airplanes around 1950. The overall Accelerando of the Industrial Revolution is a sort of weighted average of many individual and interrelated tech revolutions. And sometimes an older, mature-seeming tech gets a new power jolt, as has happened with railroad speed since the Japanese bullet trains of the 1960s.


Science fiction is the literary child of the Accelerando, and emerged as a distinct genre of Romance in just about the period 1880-1930. Jules Verne published From the Earth to the Moon in 1865; Hugo Gernsback launched Amazing Stories in 1926.

In 1800 no one speculated about the world of 1900, because no one imagined it would be all that different from the world they already knew. And in 2000 there was only limited speculation about the world of 2100. Indeed the future has gone somewhat out of style, replaced in part by the enchantment of retro-futures.

The future has lost its magic not so much (if at all) because our technical progress has reached a 'decelerando,' but because we have learned to take technical progress for granted. It is a lot harder to get a Gee Whiz! reaction these days, a sort of psychological decelerando. As I've suggested in the last couple of posts, the challenge of interplanetary travel is not how to do it but why to spend the money.

(As a far more modest example of psychological discounting, where in this holiday retail season are the iPad rivals? Did Apple blow everyone else's tablet devices back to the drawing board, or has everyone else decided that tablets are a niche market they'll leave as an Apple playground? I haven't a clue.)

This is where I am supposed to wrap my arguments neatly in a bow, but I am not sure what the summation should be. So instead I will toss the question out for comments.



The image of a North American train c. 1900 comes from a public library site in Kansas.

323 comments:

Jedidia said...

A very nice summary and analysis of the accelerando of the past century, for sure.

As you said, Accelerandos are not universal, although they can never be contained to only one specific department of technology. Still, as you said, our engineering accelerando seems to have run its course, while the electronic accelerando is still holding up and probably in its final stage (and I have no doubt that it will come to rest before anything akin to a singularity will occur).

A possible question might be, are there departments in which we can detect early signs of a coming accelerando?

Tobias said...

I disagree that top speed records are a good measure of progress.
They only consider science, not societal impact.
The top speed available to a commuter would be more significant. That means the current speed of traveling is either the speed of a commercial airliner or of one of the high-speed trains, so 300 to 500 km/h.
Much less than the lunar orbiter and still changing.
The average speed of a commuter would maybe be even more interesting, but I would have no idea where to find those numbers.

Elukka said...

Commuter trains here in southern Finland run at some 160 kph (except when the winter surprises them every year, heh), compared to the 120 kph of the last few decades. The current trains are used in several other countries, so I'd guess the speed would be similar.

High speed trains are a class of their own; they probably don't comprise the majority of commuter traffic anyway.

Geoffrey S H said...

The accelerando is the reason so much of my settings bordered on magitech...

Flying battleships and freighters? Well, I had to try and advance on the trimaran, the hydrofoil and the ground-effect craft. New materials and "bigger and better" can only go so far in five hundred years.

Morphable wing aircraft? Advances on that over a 200 year period? Is that all?

Antimatter? Well, the progression runs from fission, to thorium fission (yes, vapourware of the first order), then (urgh!) fusion, then fusion/antimatter hybrids, then antimatter. Is that enough for a 500 year period of technological revolutions and overcoming of gaps in knowledge?

Is it (gulp) not enough?!? No matter how much research I could do to make things more plausible, I nevertheless think I am simply not doing enough in terms of scientific advancements.

Not to mention advances in banking, agriculture, robotics...

Of course, if Rick is right, then things are made (somewhat) easier. The military-scientific-industrial complex doesn't help though, with newspapers and scientific magazines coming out with promises of a new scientific advancement every month. One should dismiss these out of hand, but unless you're a scientist with a PhD in that field of expertise, you will have as much of an idea as anyone as to whether it's true or not. It might just work.

Being able to remember certain promises about quantum computing from about 5 years ago, IMHO it seems that the public are being unintentionally led to believe that a singularity is around the corner.

That ain't gonna help any perception of the future, and it certainly won't help sci-fi writers who grow up with these promises ringing in their ears.

jollyreaper said...

We get back to what I call the "political will" argument. We were able to make it to the moon when we did because 1) our science put it within reach and 2) we had the political will to spend the gobs of cash to go the rest of the way.

We have the technology to rebuild the transportation infrastructure of the United States. Maglev trains could handle all of our shipping, getting the trucks off the highways. The numbers show you get the most bang for the buck shipping by water, then rail, and highways are a distant third. Right now we use highways because the increased fuel cost is made up for by the decreased labor. Load the trailer at the factory, drive it a thousand miles to the store, offload it directly at the dock. Less handling. If energy costs quadrupled, we'd be looking at different solutions.

We're suffering from our current problems due to a lack of political will, either because there are no advocates for it or there are plenty of advocates against it. Why is the US the only industrialized country without universal health care? 70% of the citizens want it. Lots of connected special interests are paying to kill it. No political will. Why are we incapable of funding essential stem cell research? Too much influence on the part of the religious, not enough influence from the secularists. The push of progress should be to allow it but the political will is squarely against it.

As an example of political will, the Iraq and Afghanistan wars. Not a lot of popular support for them, especially Iraq. Even people who supported "getting" Osama were skeptical about how this could be accomplished with an open-ended invasion of Afghanistan. It took a tremendous amount of political will on the part of the president and his supporters to win the people over to his point of view. The current cost of both wars, to date, is over a trillion dollars. Not estimated, not budgeted, spent. Gone. Never to be seen again. There's no telling what the total cost will be, especially seeing as we're borrowing money to pay for them. The Washington Post guessed it might be $3 trillion but they're revising their guesstimate upwards.

Wow. A trillion bucks. And all the while the American economy is being hollowed out, our jobs sent overseas while a financial sector busies itself building castles in the air, castles they're able to mortgage out for cold, hard cash. Whether or not you think the wars were a good idea, it took some amazing political will to make them happen. Could you imagine the howling if a Democratic president proposed spending a fraction of that kind of money on, well, anything? Obama can't even get the Republicans to go along with renewing Reagan's old START treaty with Russia. Reagan's idea! Nuclear counterproliferation. This should be an easy win, like a congressional non-binding resolution in support of puppies! Not happening. Republicans have shown Obama what political will looks like and he comes up short.

I think a large part of our accelerando was a national myth that believed in progress and futurism. It became self-fulfilling. The myth changed to one of "greed is good" self-interest and screw-the-future short-term cash grabs. It takes money to pay for research and development, and it takes a serious will for people to make the future happen. It's not an inevitable geologic process proceeding without our input; it is something we have to commit to. And we aren't. If we can recapture that spirit again then the accelerando is on again.

f said...

Well, there's the fact that we're reaching the limit of the technological achievements we can reach without starting to change ourselves into something other... I'm not necessarily speaking about "The" Singularity, but going on we'll have increasing chances to modify, and even improve, ourselves, and this is quite scary for many of us, and so it's something we do not like to think about too much (or, for somebody else, something to think about too much, even investing it with semi-religious meaning...)
Golden Age SF saw the transition from human to some kind of post-human super-being as an all-or-nothing scenario, where you had on one side a standard-model human, even if with super-weapons and FTL, and on the other side a god-like energy being that left the material body behind to transcend (maybe this too inspired by religious parallels?).
Now, it seems more likely it will be a much messier process, especially because reality is never as neat and clean as the narratives we tell ourselves.

Michael said...

I've often heard it said that the next big tech surge will be in biology. I see two obvious branches of this trend, and there may be more.

1.) Biomedical: Advancements in genetic medicine and regenerative therapies with stem cells are simmering on the edge of broad use, with more discoveries made every day.

2.) Bioindustrial: There's a lot of research into using biological systems to produce industrial goods cheaper or better than the current methods. Examples include biological catalysts in fuel cells and fuel production using microorganisms.

Milo said...

The premise of Singularity theory, as I see it, is that you can look at practically every point in history and say "this is where a major invention was made that facilitated the rise of civilization". Agriculture, for example. We have been genetically modern Homo sapiens, with language and associated habits like cave painting and reasonably sophisticated tool use, for at least five times as long as the existence of agriculture so far (depending on who you ask). What have we been doing all that time? We weren't stupid for the remaining 4x time period. We just hadn't hit the agricultural accelerando yet.

I disagree with the idea that our world is just 1930 with a little midfuture tech. Case in point: the internet. Google. Wikipedia. Forums, blogs. These have radically influenced our society, and how we think about business, communication, and entertainment. We now take for granted the abilities it gives us, without even thinking about it.

Tony said...

As I've stated previously, wherever you look at a process that appears to be exponential (a J-shaped curve, taking off into forever), you're probably looking at one that is actually logistic (an S-shaped curve) in nature.
Everything has limits. Just wait a while, it will become apparent.
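
To make the distinction concrete, here is a minimal numerical sketch (Python, with arbitrary values): an exponential curve and a logistic curve sharing the same initial growth rate are nearly indistinguishable early on, and only later does the logistic curve flatten toward its limit.

    import math

    def exponential(t, x0=1.0, r=0.5):
        return x0 * math.exp(r * t)

    def logistic(t, x0=1.0, r=0.5, K=100.0):
        # Solution of the logistic equation dx/dt = r*x*(1 - x/K);
        # looks exponential at first, then saturates at K.
        return K / (1 + ((K - x0) / x0) * math.exp(-r * t))

    # Early on the two track each other; by t=20 the exponential has
    # blown past 22,000 while the logistic curve sits just under 100.
    for t in range(0, 21, 4):
        print(f"t={t:2d}  exp={exponential(t):8.1f}  logistic={logistic(t):5.1f}")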

jollyreaper:

"The current cost of both wars, to date, is over a trillion dollars. Not estimated, not budgeted, spent. Gone. Never to be seen again."

Most of that money is still in the US economy. Just because the government spends it, that doesn't mean it goes down a black hole. Even most of the money that went to buy foreign goods (mostly from military payroll being spent on personal items made overseas, like PCs, cell phones, and running shoes) eventually finds its way back into our economy, through one foreign trade channel or another.

Tony said...

Milo:

"I disagree with the idea that our world is just 1930 with a little midfuture tech. Case in point: the internet. Google. Wikipedia. Forums, blogs. These have radically influenced our society, and how we think about business, communication, and entertainment. We now take for granted the abilities it gives us, without even thinking about it."

With all due respect, the above lacks perspective.

The Internet? Telegraphy is now commonly referred to as the "Victorian Internet", for all sorts of good and sufficient technological and sociological reasons.

E-commerce? Back in the Thirties -- and for many decades previously -- they had catalogue buying. It's fundamentally the same thing, just faster.

Google? Only necessary because there is an Internet to begin with.

Wikipedia? Collier's Encyclopedia, which filled up a whole bookshelf in my bedroom when I was a kid. (If you didn't have Collier's, you had Britannica or World Book.) If you didn't have the encyclopedia at home, you had it at school or in the public library, just like people who don't have Internet access at home have it at school or in the public library.

The digital revolution has changed how we do things much, much more than what we do.

If you want to talk about everyday commonplaces: in 1930 you walked into a room, flipped a switch, and got light. Same thing today. You twisted the knob in the bathroom and got running water. Same thing today. You picked up the phone and dialed a number to speak to someone. A cell phone is a bit different in both operation and convenience, but it does the same thing for you. You spun a dial in the kitchen and the burner came on to cook your meal. Same thing today, though you may have an electric instead of a gas stove. This can go on and on...

Hugh said...

Milo:

"I disagree with the idea that our world is just 1930 with a little midfuture tech. Case in point: the internet. Google. Wikipedia. Forums, blogs. These have radically influenced our society, and how we think about business, communication, and entertainment. We now take for granted the abilities it gives us, without even thinking about it."

A major reason we take our Internet abilities for granted is because it hasn't changed our lives all that much. We're not all jacking into cyberspace, early William Gibson style, for example.

Most businesses operate much as they did in the 1930s: managers and workers, board of executives, accounts department, sales, etc. A 1930s time traveller would need new skills training, but not the kind of conceptual shifts needed for the first assembly lines.

Communications has improved, but again it's just got faster. When inter-continental messaging went from weeks to same day, government and business changed. Same day to same hour, not so much. Same hour to instant, maybe ... but only when the other 4/5ths or so of the planet get connected.

Entertainment has certainly expanded, and MMOGs probably are going to be seen as a major cultural shift ... but not until there are a lot more people playing them full time.

And you left out government and law, which again hasn't changed that much since the 1930s. Western democracies still have voting districts and political parties and supreme courts. We haven't, say, switched to country-wide voting or some kind of computer mediated legal system. A 1930s time traveller would look at today's news and groan "not another banking crisis?"

jollyreaper said...

We can play dev.null's advocate here. There are certain practical limits to the speed of transit. Even the early cars could get up to 100 mph, but they were poorly constructed, dangerous, and you had to be a mechanic to work on them. Modern cars are safer, more fuel-efficient, less polluting, and incredibly reliable. Even though outwardly similar, there's quite a difference between a 1950s car and a 2010s one.

We had supersonic commercial air travel but it proved to be economically infeasible. The sweet spot appears to be the high subsonics.

The expectation gap we have is that we went from nothing to something; now we want the next step to be so big that something looks like nothing in comparison. Which is hard to do.

In terms of advances in construction, there are a lot of new ideas out there. Yes, houses have doors and windows and the toilet is inside but the way it's all put together has improved. We didn't have much in the way of insulation a hundred years ago and older houses suffer for it. Newer houses can be constructed with better thermal properties and can keep the heating and cooling costs within reason.

I do think that the mental landscape is probably where we stand to make our biggest changes in the coming years. The revolutionary factor of the internet is as a bidirectional communication medium. The Gutenberg press was huge because it meant that one scholar could disseminate his ideas across the world. Electronic media amped up that intensity, helping to shape public opinion in ways the Catholic Church could only envy. But it was still a one-way communication, though for the people controlling it, that was a feature.

The internet is a disruptive technology. Just look at the wikileaks case. We've had leaks before but technology has made the volume larger and the ability to disseminate even easier.

Major changes in thinking are generational and I think that the current revolution is somewhat muffled because the boomers are still in full force. They're the ones who still watch the nightly news, still have landlines, still watch television. They listen to and respect the old media. Younger people, not so much.

The internet helps people route around the media message. It's harder to shape a uniform public opinion when people are listening to bloggers and each other every bit as much as they listen to the noise coming from the official media. It's getting harder and harder to control the message.

The economy is only going to continue to get worse as the old guard continue to fap about and do nothing constructive. Times of crisis can lead to major societal changes. The Gutenberg press aided the Protestant Reformation and quite a few religious wars that reshaped Europe, not to mention assisted in the spread of science and rationalism as well. I can't help but think the internet is equally disruptive. Just imagine if leak culture becomes the norm in all manner of businesses, if there are no more secrets.

Tony said...

Re: jollyreaper

We've been putting effective and efficient insulation in houses for decades. The major structural changes in the last several decades have been from lath & plaster to drywall, engineered wood products, more composites and plastics in roofing and siding, gas-filled, energy-efficient windows, etc. But a modern house is still made of sticks nailed together, clad, roofed, and wired. And many of the most innovative technologies have had to be sold as packages that interface in much the same way as the tech they replaced, in order to leverage existing skills. My grandfather and his buddies who built the church at Beryl Junction, Utah in the 1970s would (if they were still alive) easily be able to build a church today, using modern materials and tools, without any specialized knowledge or training.

I'm a Boomer -- admittedly tail end, but I grew up with the network news, broadcast TV only, and the rotary dial telephone. I don't match the profile you laid out, nor do most people I know born after about 1935. We all use the Internet daily, even hourly. We mostly read blogs and participate in discussion groups of one type or another. We mostly ignore the news on TV, except to watch breaking story coverage. What TV we do watch is the same sports, History Channel, Discovery, etc. that the kids watch. Almost all of us have cell phones.

Not a few of us have reason to laugh at the idea that we don't know what the 21st Century is about. We helped invent and build the technology that makes it possible. Yeah, I started out on a Teletype 33 terminal hooked up to a PDP-11 across town with an acoustic coupler. But now I write software for Web city and state guides. Before that I worked on Internet yellow pages. Boomers don't know about tech? ROTFLMFAO!!!

As for the internet being disruptive technology? Who are we trying to kid here? Assange has been the naughtiest boy imaginable, but I don't see any major events coming out of the Wikileaks fiasco. It wasn't like people didn't know all that stuff to begin with. It wasn't even a shock to anybody that any of it was said in internal communications. And, quite frankly, it pretty much blew up in Assange's face, because the writing quality and depth of analysis of the cables came across as a revelation to a lot of people who thought the US State Department was a refuge for hacks and incompetents. A lot of people found out just how little they were getting past us, not how much. Nor were any conspiracy theories confirmed.

Finally, there is some validity in the idea that the real enemy in the 21st Century is Little Snitch, rather than Big Brother. But that's just the point -- the more people do this kind of thing, the more it will be combatted, legally, technologically, and procedurally. Not to mention the fact that all of this technology is owned by big business -- all of the landlines, satellite connections, routers...everything. All of this internet "freedom" is totally at the discretion of big business to allow or to crush. Right now they allow it, because it's good for business. If it gets too far out of hand, not so much. Never forget that.

AdShea said...

@Tony I think that the "1 Trillion spent" thing wasn't so much reflecting on it being gone as reflecting on how close 1 Trillion comes to the estimates on the blog as to the cost to get us to a "all over the solar system" civilization.

The issue that most people take with the cost of the last two wars is that there are so many other things that the government could have spent that money on.

tkinias said...

Milo:

I disagree with the idea that our world is just 1930 with a little midfuture tech. Case in point: the internet. Google. Wikipedia. Forums, blogs. These have radically influenced our society, and how we think about business, communication, and entertainment. We now take for granted the abilities it gives us, without even thinking about it.

I’m actually of two minds about this. My initial inclination was basically to agree with Tony: the main change is how fast or convenient it is, not what we can do. But on the other hand, there are big macroeconomic changes that go along with the information revolution, and they can’t be ignored.

As to how it affects our daily life... I’m Gen-X, and I grew up during what I think of as the digital transition period. That means that I had a passbook bank account as a kid, learned to use a paper card catalog at the public library, learned to type on a typewriter, bought music on 45s, etc. ‒ but before I was out of high school I had an ATM card and a CD player and was writing school papers in WordPerfect on a PC.

One of the really huge transitions that I lived through at university was on-line journals. When I wrote my first papers as an undergrad, it involved going to the library, finding journals (in the basement periodical stacks) that seemed relevant, hoping there was a published index and if not flipping through the table of contents of each issue looking for something useful... then photocopying any articles I found. By the early 2000s I could do a search in an on-line database, then download a PDF of the article. I could even get interlibrary loan (via a Web form) to request article scans from old paper journals at libraries around the world, which would arrive also as PDFs. It seemed miraculous, and at times I wept for joy.

But the process was basically the same as it had been a decade earlier, just much, much faster and easier. In the same way, the photocopier made library research much, much faster and easier than it had been before ‒ but didn’t fundamentally change the nature of the work.

If I pick up a smart phone and tell its voice-activated Google Maps feature to find me the nearest Starbucks, I’m doing something that would have seemed pretty scifi in my youth. But it’s not something I couldn’t have done by finding a payphone and looking in the Yellow Pages hanging under it.

But the macroeconomics are different. Take a look at the companies which make up the Dow Industrials today: depending on where you draw the line, 20-40% of them are information-related businesses, ranging from entertainment to finance. A quarter or so of the Dow Industrials companies don’t make anything. Five of the world’s ten largest public corporations by market capitalization (2010Q3) are information-based: telecoms, IT, or finance.

So, on the one hand, information tasks are a lot faster and easier than they were in 1930. On the other hand, we can’t really do anything we couldn’t do before.

On the gripping hand (*grin*), it’s increasingly the making and distribution of information rather than food or manufactured goods that drives the world economy. And that may be the Big Deal.

Tony said...

tkinias:

"Take a look at the companies which make up the Dow Industrials today: depending on where you draw the line, 20-40% of them are information-related businesses, ranging from entertainment to finance."

I'm not so sure that makes the point you think it does.

Telecom entered the DJI in 1916 with AT&T and Western Union.

Retail entered in 1924 with Sears and Woolworth. Please note that the index was 20 stocks in both 1916 and 1924, meaning that retail replaced 10% of the manufacturing component of the index in one year.

Entertainment entered in 1925 with Paramount. At this point, the index was 25% telecom, retail, and entertainment. The fortunes of these three sectors have waxed and waned on the Dow over the years, but they've always been in there somewhere.

Information systems entered with IBM in 1979.

Finance entered with American Express in 1982.

Also, over the history of the Dow from 1928, there have been at least one or two companies that manufactured radio, television, phonographic, and/or photographic equipment and supplies (General Electric (through its controlling interest in RCA), Victor Talking Machine, Eastman Kodak). These were, in their time, as much communications and information sector as Cisco and IBM are today.

It is indeed accurate to assert that the Dow tries -- and generally succeeds -- to reflect the major sectors of the American economy. But...while the identities of the component stocks change over time, as do the technological properties that underlie them, the nature of the business environment hasn't really changed all that much over the last 80 or so years. When you count telegraphy, telephony, photography, phonography, radio, and television, we've been an information driven economy for at least that long. And the relative drop in Dow manufacturing components, combined with a rise in capital components, probably has much more to do with the movement of manufacturing overseas than anything else.

Anonymous said...

I would disagree slightly with the idea of minimal changes since the 1930s. I would move the date to the 1950s due to the last truly significant techs: jets, satellites, computers and atomic power.

Anonymous said...

Well, I've said this before, but Rick and the rest of the commentators have more or less made my point: the real 'singularity' happened between 1880 and 1930 (with a few improvements over the years), as far as technology goes...the most striking differences between 1930 and 2010 would be cultural; our society is basically different from 1930 because of our attitudes about people, the environment, the proper role of government, differences in standards of business ethics, even fashions and the morals of social discourse (have you listened to modern music lately?); your time-traveller from 1930 would be impressed by the tech and bemused by the attitudes of the people of 2010, while a time-traveller from 1830 would be befuddled by ALL aspects of the world of 2010 and barely able to understand a fraction of what, and who, he found surrounding him.

"Accelerando" is probably one of those things that can only be defined retrospectively.

Ferrell

adam_grif said...

Technological progress is not, as some singularitarians have suggested, exponential, but rather follows a logistic growth curve. The other central mistake they make is compiling all kinds of technological progress together and saying "look, everything is getting better at an exponential rate!"

Individual technologies experience explosive growth early on in their development as engineers rapidly converge on the optimal solution to each problem given the current technology base. Future advances sometimes provide a new spurt of activity in a field (see: Gatling gun renaissance with the addition of electricity), but generally speaking once a technology is mature, it's only going to be seeing incremental improvements. Although modern computers are advancing tremendously fast, modern assault rifles have made only incremental improvements since the 1960s.

The "singularity" will come about with advances in artificial intelligence. I'm sure you're all familiar with the concept of Seed A.I. (recursively self-improving intelligences), and this is the kind of totally game changing advance that singulatarians are often talking about when discussing the future.

When we create intelligences greater than our own which can then improve themselves, in a very short space of time (possibly days, possibly years) the scientific progress made by these intelligences will be staggeringly large. We will rapidly converge on the "best" solutions to every problem we can throw at them, and we will start running into hard-limits posed by physics, rather than our own creative limitations and engineering competence.

Jim Baerg said...

"When we create intelligences greater than our own which can then improve themselves, in a very short space of time (possibly days, possibly years)"

I've never understood how anyone can take seriously a time scale of 'days' for such a 'singularity'. Once the slightly above human AI designs its slightly better successor it takes time to manufacture this next AI & likely also time to 'educate' it. Years seems like the plausible timescale for significant improvement.

frgough said...

WARNING: Controversial post! Read at your own risk :-).

I found Jollyreaper's comment highly interesting, since his criticisms epitomize to me the mentality that keeps us earthbound today. Culture, not technology determines our expansion into space, and that culture does not exist today.

For the down and dirty details, read on.

Consider the exploration of the New World. Travel was difficult and expensive, and chances of failure were very high in the initial expeditions (and not entirely from indigenous peoples).

For the Spanish and French, travel to the New World was primarily for mercantile wealth (the idea that the more gold you had, the more economically powerful you were) and nationalism (expanding territory). Mercantile wealth sent military expeditions to get gold and return it home. No permanent presence. Nationalism created military colonies, with the local populace essentially conquered.

For the British, the New World offered religious freedom and trade (with some nationalism). This led to the British colonies, which generally lived in harmony with the indigenous population, purchasing land and engaging in trade. Colonists, for the most part, chartered with a business or the government to establish a colony, in return for which they would pay a certain amount for a certain time so the business or government could recoup its investment.

Now, let's consider the industrial revolution. The most important factor that led to the industrial revolution and the author's Accelerando was the Protestant Reformation and its key ideas: That God's creation was discoverable and that learning it glorified God, and that the earth was God's gift to man, and that honest work and profit glorified God.

Now, what do these things give a society? A sense of manifest destiny and a moral purpose. Science exists to improve the human condition, and taking risk and earning profit are noble activities.

Now, compare this to Jollyreaper’s society, where the individual feels he has the moral right to use the state to take another man's money and pay it to himself for his personal needs (universal health care). Where he surrenders his individual responsibility to the care of the state, where nationalism is evil. Where certain categories of human life are simply raw materials to be mined for personal well-being. Where Christianity and its moderating influence on human vice is abandoned.

In Jollyreaper's world, society is completely focused on self. Money spent on war is wasted because it can no longer be spent on safety and comfort. Embryos are a resource for the safety and comfort of the individual. Euthanasia is encouraged because it frees up resources for the safety and comfort of the young and strong. Risky technologies are taboo. Profit is evil because it unevenly concentrates the money needed to purchase comfort and safety.

In the Reformation world: The purpose of science is to allow inventions to improve the human condition. Nuclear power is abundant because failure to take advantage of a gift of such incredible usefulness would be an affront to God. Space travel would be aggressively pursued because all of creation exists for mankind to use. Governments would go to space in the name of nationalism. Commercial firms would offer charters to colonists seeking a better life or wealth, even though the odds of failure might be high, because profit is noble and worthwhile.

In Jollyreaper's world: Nuclear technology is dangerous and therefore taboo. Secularism places man as equal to all other life, and science exists to ensure safety and comfort and nothing more. Space exploration focuses on seeking life to demonstrate the non-uniqueness of man. Manned space travel is limited because it draws away resources needed for safety and comfort. Space travel for nationalism is evil and forbidden. Space travel for profit is evil and limited.

In Jollyreaper’s world, society never rises above mediocrity, and will never expand into space.

Anonymous said...

I always find it interesting to see the bias motivating censorship. It's easy to spot by comparing statements allowed to stand and statements quickly removed. It's even better when the censorship itself underscores the actual arguments made in the censored material.

adam_grif said...

"Once the slightly above human AI designs its slightly better sucessor it takes time to manufacture this next AI & likely also time to 'educate' it. Years seems like the plausible timescale for significant improvement. "

The "successor system", once designed, is likely to be a modified copy of the original, with its sum knowledge and experience. Copying the system would occur as fast as the write speed would allow it to, there is no need to slowly raise a "Baby A.I." over a period of months or years.

On the other hand, it may not even create a copy at all, it may simply reflect on itself, say "this could be better some other way", build a big list of changes in some sort of large external cache, and then modify itself, keeping the "good parts" intact. This method would be far faster.

If we create a system that is 'merely' as intelligent as a single human being, but is running on computer hardware instead of meatware, it would be thinking hundreds of thousands of times faster than a human of that intelligence would, because neuron transmission speed is horrifically slow compared to the speed of a copper wire or fiber optic cable. It also wouldn't have concentration problems or require sleep and lunch breaks so long as the system is cooled and powered, so the amount of work it can do in a given space of time is orders of magnitude higher than a Mark I human.

By contrast, dealing with something more intelligent than a human, thinking thousands of times faster than a human, whose working memory is orders of magnitude larger than a human, it's not difficult to see why progress on successor systems would be extremely rapid.

The largest bottleneck would be the actual engineering and construction of new hardware, but if we have computer controlled factories at its metaphorical fingertips and a supply line feeding it the raw materials it asks for, this, too, can be greatly expedited.

Another potential bottleneck is that it may approach local maxima, i.e. it may converge on a solution that is optimal given current knowledge but, given more information, is not actually the best available. For this reason, it might be spitting out documents calling for a scientific experiment to be run or a new, superlarge particle accelerator to be built before it will give you the answer you wanted.

In conclusion, I don't think we can give a satisfying answer regarding the actual timescale involved from start to finish of singularity ("finish" defined as "has discovered basically everything"), because there are too many question marks. Not least of all because we don't actually know what will be involved in creating successor systems, since if we did, we would not need to build the initial system in the first place.

Rick said...

Welcome to a couple of new commenters!

Blogger occasionally tosses real comments into the spam folder, even though they don't even superficially resemble spam. So if your comment doesn't show up on the blog, it is probably waiting for me to rescue it.

Regarding censorship, I have never deleted any substantive comment on this blog (unless I hit a wrong button). If comments don't appear it is probably for the reason above.

Anent this, I am quite political and don't hide my politics, but if I wanted to beat people over the head with them I would start a political blog. I encourage much the same of commenters, along with some genteel indulgence of mistaken (i.e., differing) views.

FWIW, interest in space travel seems to be fairly orthogonal to political views.

On the motivations and mechanisms of New World colonization, I'd note that through the 16th century and much of the 17th the English seemed markedly sluggish. Observers of that era might be quite surprised to find out how much of North America ended up anglophone.

Fun fact: The first book on European shipbuilding and naval architecture was published in Mexico City around 1580.

Mr. Blue said...

I would personally set the end point of the accelerando at 1945, but would mostly agree.

Try this little thought experiment. Think of a military technology... say a rifle or battleship. Move it back 100 years... now move it back another 100 years. Say, from the M-16 to the Mauser bolt action to the Brown Bess flintlock musket.

Jedidia said...

Where he surrenders his individual responsibility to the care of the state, where nationalism is evil.

It is funny that you should mention these two phenomena in the same sentence, because they go hand in hand. I'm working in social development in Bosnia and Herzegovina, and I can tell you from everyday experience that nationalism *is* evil. It can have quite astounding results in terms of economic, military and scientific efficiency if you can practice it in a more or less isolated environment, but all hell breaks loose if you've got a heterogeneous group, and everything comes to a standstill (I'm not talking about healthy patriotism here, I'm talking about the basic axiom of nationalism: that we are superior to them, whoever we or they are).

In a more and more globalized world, nationalism will lead to disaster (not that it already didn't during the last century, mind you).

Also, while surrendering personal responsibility to the state might seem a very socialistic trait, nationalism does this just as well. Yet while socialism does it on the social plane (we'll tell you what to eat and wear, so don't worry about it), nationalism does so on a moral level (we'll tell you what's right and wrong, so don't worry about it). Of course this happens in socialism too, which goes to show how closely related the two actually are.

Anyways, in Bosnia, nationalism has brought everything to a standstill: politics, economics, culture, everything is frozen in a state where everyone just blames the inferior other for the misery. I don't see Bosnians going to space anytime soon because of nationalism, neither do I see any other nation. The world has grown too close, too heterogeneous for that to work without following up with a bill that we all might find too expensive (and I'm not talking about money here. I'm talking about genocide).

There's of course a couple of things in your post I don't agree with, however I think you are dead on with your assessment that the major thing preventing manned space exploration nowadays is indeed culture. The trouble is, as described above, if the culture from the past that might lead to price I'd find too much to pay for a spacefaring society.

Jedidia said...

I just noticed that I totally messed up my last sentence in the above post. It should go something like this:

The trouble is, as described above, that if we want to achieve this goal by the culture from the past, we might find it coming at a price that I'd be unwilling to pay for a spacefaring civilisation.

Anonymous said...

Developing technologies such as ubiquitous computing and multi-touch displays may continue the process of refining and enhancing post-Accelerando communications and information technology.
As an example, an office equipped with more developed versions of these technologies could be not only paperless but desktop-less. In such an office, the desks and whiteboards themselves could be the computers, with multi-touch surfaces and wireless links that would allow employees to treat documents and spreadsheets as virtual pages that could be easily sent from one desk or whiteboard to another.

R.C.

jollyreaper said...

Frgough's comments are highly interesting as well. It amazes me how people can look at a situation and see things in such vastly different ways. Astounding.

That God's creation was discoverable and that learning it glorified God, and that the earth was God's gift to man, and that honest work and profit glorified God.

It's never quite as simple as that. Religious folk are more than happy to accept the fruits of science when it provides electricity, clean water, and creature comforts. They get very upset when science treads close to matters of religious faith.

It was religion that screwed up the Islamic world. They maintained civilization post-Rome when Europe was mired in barbarism. It was the rise of an influential mystic that set the clock back for them. His argument was that Allah was the primary cause for everything and trying to delve any deeper into that mystery was blasphemy.

Yes, there are some scientists who have a completely science-friendly religious view. God created the universe and our best science indicates he did it with the Big Bang. And being so awesomely great, he knew it would result in human beings eventually. So while evolution and natural selection were the means by which we directly came about, the first cause goes back to God. Bible stories like Noah's Ark are religious metaphor and not meant to be taken literally. Where science contradicts the Bible, that can be attributed to human error.

This science-friendly view is NOT espoused by the majority of mainstream Christians. When every Republican candidate for president stood on stage and not one of them raised a hand to say they believed in evolution, seriously, WTF?

Taxes are the price we pay for civilization. The chest-thumping anti-tax argument from the right never seems to explain how necessary government services should be paid for. The usual libertarian argument is to substitute a for-profit business to provide the services instead, not trusting the government to be efficient. But for-profit businesses are intended to make money, and if doing a bad job by the customer makes them more money, that's what they'll do! There's no incentive not to.

jollyreaper said...

Frgough's POV is that my world is self-centered. Really? What could be any more selfish than the "I've got mine, Jack" mentality we see today? There's no investment in the future, no sense of stewardship. It's all scamming and profit-taking. Business leaders strip-mine their companies. Who cares if they exist in a year? I've got mine, Jack. Golden parachute, no responsibility for the consequences of my actions. Government regulators are paid off, democracy subverted, and the middle class destroyed.

You want to see the idealized libertarian paradise of no big government? Visit Somalia. Yeah, Russia and the Eastern Bloc countries were pretty grim, too. This just goes to show that the answer lies closer to the middle than the extremes. Russia has now swung the other way towards gangster capitalism and robber-baron economics. China's been operating as an authoritarian oligarchy/gerontocracy for years now, communism paid lip service. But pure command economies are generally bad for the country anyway.

Nuclear power is abundant because failure to take advantage of a gift of such incredible usefulness would be an affront to God. Space travel would be aggressively pursued because all of creation exists for mankind to use.

No, we already have people in government with a religious POV. The Secretary of the Interior under Reagan said we didn't need to worry about environmentalism because Jesus was coming soon on a cloud of glory and the Earth would be remade.

The whole "it's there for us to use" argument was the same thing used to justify slavery. Look, it's in the Bible, the only injunction for the slaveholders was to not permanently injure their slaves. Christian slaves were told to obey their Christian masters.

The biggest shortcoming of libertarian fantasy is the idealizing of the individual over the group. Much scifi falls into this trope. But it's the idea that one man, acting alone, through his pure act of genius can change the world. Not often the case. At most you get an inspired leader who guides the work of a lot of other geniuses and together, as a team, that group changes the world.

Let's ignore the fictional heroes like John Galt, let's look at the real world. Pick any hero of industry you like and imagine how likely his success would be if he were in Haiti or Somalia or Uganda and not the US of A. Could Bill Gates have built Microsoft in Haiti? Why not? No employees to hire! Sure, there's warm bodies, but they didn't have the years of public schooling and secondary education that made them qualified programmers. Imagine the cost if Gates himself had to find these children and educate them himself. And let's not forget that poverty and violence are bad for education. He'd have to provide police to go with the teachers. And in this country there would be graft to take into account, paying off corrupt officials.

For Gates to start Microsoft in Haiti, he'd already have to have a fortune to spend in the hopes of making one. Gates built his success on a foundation provided by others working for the common good. And you know what? There's nothing wrong with that! That's the way the system is supposed to work! This is a good thing. I only get upset when someone in his position acts like that help was never offered in the first place.

"I've been on food stamps and welfare. Did anybody help me out? No!" - Craig T. Nelson, on the Glenn Beck show.

WTF? facepalm

The modern conservative is engaged in one of man's oldest exercises in moral philosophy; that is, the search for a superior moral justification for selfishness. - John Kenneth Galbraith

jollyreaper said...

One last comment -- we are all slaves to magical thinking. If I put plumbing in my house, anyone looking at the line I put in can see where I intended the water to go. But the water doesn't care. It operates according to the laws of physics. If I leave gaps, if I don't seal things correctly, the water will go where it will. I can stand back and shout about how my ideas are sound and why I shouldn't be ankle-deep in water but I'd look like a damn fool.

Free market types will tell us how the markets are supposed to work and then when they fail we're told that what we're seeing isn't really what we're seeing at all.

A free market is a tool, not magic. You assume that the participants in the market are rational actors and, with the free flow of information, a proper price for goods and services can be arrived at. Which is possible if everyone is playing the same game but they aren't. Businesses don't care about free markets, they care about making money. If the free market argument is in their favor, they'll make it. If breaking the market is in their favor, they'll do that, too, while all the while singing its praises.

According to free market theory, a car company has an incentive to manufacture safe cars because customers will not buy dangerous cars. If they continue to manufacture dangerous cars, they will go out of business. This is magical thinking. The real world, well, let's just quote from a famous novel and movie here --

Narrator: "A new car built by my company leaves somewhere traveling at 60 mph. The rear differential locks up. The car crashes and burns with everyone trapped inside. Now: should we initiate a recall? Take the number of vehicles in the field, A, multiply by the probable rate of failure, B, multiply by the average out-of-court settlement, C. A times B times C equals X. If X is less than the cost of a recall, we don't do one."
Business woman on plane: "Are there a lot of these kinds of accidents?"
Narrator: "You wouldn't believe."
Business woman on plane: "Which car company do you work for?"
Narrator: "A major one."
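
The quoted formula is easy to make concrete. A minimal sketch in Python, with every number invented purely for illustration:

    def recall_is_cheaper(fleet_size, failure_rate, settlement, recall_cost_per_car):
        # X = A * B * C, per the quote above
        expected_payouts = fleet_size * failure_rate * settlement
        recall_cost = fleet_size * recall_cost_per_car
        return recall_cost < expected_payouts

    # One million cars, a 1-in-100,000 failure rate, $2M per settlement,
    # $50 per car to recall: X = $20M against a $50M recall, so no recall.
    print(recall_is_cheaper(1_000_000, 1e-5, 2_000_000, 50))   # -> False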

Bear in mind that the GOP has made tort reform a major plank in their platform. This would reduce potential damages, variable C in the example, thus making recalls even less likely than before, thus removing the corporation's only incentive to make safe cars.

"But no business would kill off its own customers!" someone might say.

Really? Please explain Big Tobacco. They're doing just fine.

Tony said...

jollyreaper:

"Gates built his success on a foundation provided by others working for the common good."

Sorry, but that's a hacker culture urban myth. He built his success on academic and commercial work that was mostly paid for, with the expectation that it would have defense and business utility. Nobody had personal computers to do software research on. All of the foundations of MS Basic and MS DOS were built on computers belonging to large corporations, government, and educational institutions. Most of the development time was either on the business, defense, or academic payroll. Individual, off the clock, contributions were a minor part of the research man hour budget.

"And you know what? There's nothing wrong with that! That's the way the system is supposed to work!"

A little bit breathless with all of the exclamation points, aren't we?

In any case, this is another hacker culture urban myth. The "system" is manifestly not designed to be -- and will not work better as -- a gift economy. The resources are not free, the infrastructure is large and complex, and all of the effort that goes into its construction and maintenance has to pay off.

jollyreaper said...
This comment has been removed by the author.
Michael said...

"Yes, there are some scientists who have a completely science-friendly religious view. God created the universe and our best science indicates he did it with the Big Bang. And being so awesomely great, he knew it would result in human beings eventually. So while evolution and natural selection were the means by which we directly came about, the first cause goes back to God. Bible stories like Noah's Ark are religious metaphor and not meant to be taken literally. Where science contradicts the Bible, that can be attributed to human error.

This science-friendly view is NOT espoused by the majority of mainstream Christians."

With the notable exception of the Catholic Church, which, despite lapses, has probably had the most beneficial impact on science of any organization in Western Civilization since the Roman collapse.

Tony said...

jollyreaper:

"According to free market theory, a car company has an incentive to manufacture safe cars because customers will not buy dangerous cars. If they continue to manufacture dangerous cars, they will go out of business. This is magical thinking. The real world, well, let's just quote from a famous novel and movie here --..."

Any valid market theory, free or otherwise, would never maintain such a ridiculous position. Suppliers have an incentive to manufacture products that meet an optimum point of consumer acceptance and supplier cost. If suppliers figure potential legal fees, damages, and negative publicity in their cost structure, they're only being prudent, not evil.

The real magical thinking is in the idea that suppliers have an incentive -- some would say a duty -- to build perfectly safe products, regardless of cost. The theory, as expounded above, is that consumers would pay for perfect safety above all other considerations, regardless of price. With all due respect to Mr. jollyreaper, this is pure baloney. Consumers force suppliers to cut safety corners by being willing to buy products that are only so safe, for only so much money. Heck, a lot of automobile safety features (including such basic things as seat belts) are mandated by the government not because car makers are inherently careless about safety, but because there are enough consumers out there who would buy cars without them, if their absence were reflected in the price. Some manufacturers (at least -- perhaps all of them) would service that market.

If you have a problem with what businesses do or don't do to make a buck, look at your friends, neighbors, and, most particularly, yourself, as consumers. You'll find that they are every bit as selfish and venal -- yes, even evil -- as corporations are made out to be. Which is only to be expected. Corporations are made out of people, all kinds of people. So are consumer markets. Don't blame the supplier before you look at the consumer.

jollyreaper said...

"Sorry, but that's a hacker culture urban myth. He built his success on academic and commercial work that was mostly paid for, with the expectation that it would have defense and business utility. "

No, I'm talking about the workers he hired and the society he's living in. I'm not talking about open source. Taxes paid for the roads his employees drive on. Taxes paid for their education -- taxes paid by people with and without children. Gates in turn created products that would not have existed if not for our society. You wouldn't have gotten the PC out of Soviet Russia. Certainly not the Internet.

"In any case, this is another hacker culture urban myth. The "system" is manifestly not designed to be -- and will not work better as -- a gift economy."

Ok, I recognize my mistake. You're being willfully obtuse. No point in continuing this conversation.

Thucydides said...

Moving around a bit here (part 1):

The concept of Accelerando is pretty open ended, so trying to define a beginning and end probably does more to identify a particular person's interests and biases than anything else. (So here goes ;))

Accelerando and Singularity should be defined by the game changer that starts the curve moving upwards (although as pointed out, the curve is always "S" shaped). In Human history, there are a few distinct events which stand out:

The use of tools. Humans (and protohumans) were now able to manipulate the environment with more than their fingers and teeth, expanding the range of things they could do.

The development of agriculture. Humans now had the ability to sustain larger populations in more concentrated areas. More people = more brainpower in a single spot. How they use their brainpower is another question....

The Bronze Age. Humans making and using metal tools and equipment had a far greater range of quality and quantity than was possible with wood or stone tools. The Stone Age did not end because people ran out of stones.

The Industrial Revolution. I will define this as being the large scale harnessing of thermal energy, since muscular, wind and water had already been exploited, but are limited in magnitude, time and place. Thermal energy (starting with steam engines) is scalable and portable in ways other energy is not.

If there is another Accelerando, the social effects of the communication revolution are probably a good place to look. Being able to access people, goods and information orders of magnitude faster is interesting in and of itself, but until now it has really only been possible with heavy infrastructure support, from Western Union and AT&T laying out wire and fiber optics, to government land grants to build the Canadian Pacific railway, and so on. Large structures demanded large capital investment and a heavy layer of control to operate, service and connect the systems.

The Internet, although still connected through the existing infrastructure, is allowing people to disintermediate and directly access what they want and need. You can go to the bank (or still wire transfer money through Western Union), or email money through PayPal and similar services. Wikipedia is a very flawed means of sharing information (Jerry Pournelle once complained about a person who kept deleting corrections Jerry was making to his own biography page in Wikipedia, and other examples of this sort of manipulation have been documented on a wide variety of topics; you can investigate and come to your own conclusions about motivation). Inventors and businesses can send CAD files directly to a machine shop to produce prototypes or small production runs, and FedEx the results back the next day. Highly educated people no longer see the need to send their children to a school when they can home school, and you can buy equipment to take you off the grid so far as energy production is concerned (water and sewage can be taken off grid under certain circumstances, but be very careful where you do this).

Even the Internet itself can become something of a DIY project; I have run into American soldiers and Marines who essentially created ISPs in Iraq and Afghanistan using "dumpster dived" equipment for use in Forward Operating Bases (FOBs). Powerful new wireless technology is increasing the range and bandwidth that are available; there will come a day when you can set up a server, solar panel and broadcast antenna and become an off-grid ISP.

Thucydides said...

Moving around a bit here (part 2):

Culture is important as well. Many of the same technical tools exist around the world without the same consequences for the people who use them.

Since this is a contentious issue, I will start with an example far enough in the past that it should be safe from "flames".

Mycenaean culture was well established in the southern part of Greece, and had influence in the eastern part of the Mediterranean Sea, and tentative trade links a bit farther away. After a "Dark Age", people living in the same area, speaking the same language and worshipping the same gods built what we know as Classical Greek civilization. They expanded through the entire Mediterranean basin, including colonizing choice real estate in places like Sicily, and had trade links as far away as the Black Sea and the "Pillars of Hercules" (and possibly beyond).

The key difference is that Mycenaean culture was built around a feudal, bureaucratic "Palace culture", while Greek society was built around a much more flexible market culture (farmers who owned their own land buying and selling goods and services in the local polis). This culture had room to allow and support much more low-level innovation and political dialogue, and to provide outlets for people with different ideas.

Same people, same geography, same language, different culture = different results.

Now, having been to Bosnia during the conflict, I have seen the negative results of being in a highly insular culture (which never lets go of the past; I was regaled with tales of the Battle of Kosovo Polje told as if it took place in the 1990s; it happened in 1389...). I can also point to the effects of having more open cultures: the Greeks were able to hold off the far vaster Persian Empire (and later Athens could hold out against Sparta backed with Persian funds); the Serenìsima Repùblica Vèneta was a city state able to deal with the vastly superior Ottoman Empire; and Elizabethan England was able to compete against the Spanish Empire.

Today we can see that the Asian "Tiger" economies are competitive against the Chinese "Dragon", and once India threw off the shackles of the "Permit Raj" and embraced the market to a much larger extent, it exploded into a major world power, with the technical and resource base to contemplate sending Indian astronauts to the Moon in 2020.

America itself was founded on religious ideas; much of what influenced the Founders who wrote the Constitution can be traced back to Protestant Dissenters and their ideas about the role of God in the State, Justice and economics, and that is the foundation of American culture and the source of "American Exceptionalism". America is also unique in having a largely free market culture combined with a continental scale of people and resources, a far larger base to build upon than the Serenìsima Repùblica Vèneta enjoyed.

Technology supplies the tools, but culture determines how you use them.

Tony said...

Re: jollyreaper

(With apologies in advance to Rick and all.)

Don't be petulant, Young Jedi. I'm not being obtuse. I'm taking you at the apparent face value of what you are saying. If you mean something else, say something else.

Now, as to business leveraging the benefits and infrastructure bought by tax dollars...you are absolutely right that the system is designed to work that way. So what? Is Bill Gates supposed to write love letters to the American people, thanking them for educating his workers and giving them roads to drive to work on? Everybody recognizes it -- that's why we elect representatives that put in public schools and roads.

But it's not just the major stockholders of corporations that benefit. Consumers benefit too, by having those companies to give them jobs, or, if they work in smaller businesses, by having corporations give their customers jobs that pay enough to keep all of the small businesses in business.

IOW, I'm having trouble seeing what any of this has to do with anything. Can you help me out?

Jim Baerg said...

frgough: "British colonies, which generally lived in harmony with the indigenous population,"

If anything the British colonies had a more belligerent relationship with the Amerindians than the French colonies did. New France was dependent on the fur trade with the natives and needed peaceful trade, while the British colonies were developing agriculture and manufacturing based on the colonists' (or African slaves') labor rather than the natives' labor, and tended to push the natives out.

Of course the above is an oversimplification; both sets of Europeans ended up with both allies and enemies among the natives. E.g.: Champlain was pressured by his Algonquin and Huron trading partners to intervene on their side in their warfare with the Iroquois, which induced the Iroquois to ally with the Dutch and British.

Milo said...

adam_grif:

"On the other hand, it may not even create a copy at all, it may simply reflect on itself, say "this could be better some other way", build a big list of changes in some sort of large external cache, and then modify itself, keeping the "good parts" in-tact. This method would be far faster."

It's also dangerous. What if it makes a mistake and accidentally lobotomizes itself?


"In conclusion, I don't think we can give a satisfying answer regarding the actual timescale involved from start to finish of singularity ("finish" defined as "has discovered basically everything"),"

I don't think we will ever discover "everything". When we've answered all open questions of today's society, we will discover new mysteries we couldn't even formulate the questions to before.

Would stone age tribes have debated about the economic plausibility of colonizing space? No, because they had no notion that those tiny dots of light in the sky are rocks you can stand on.



jollyreaper:

"It's never quite as simple as that. Religious folk are more than happy to accept the fruits of science when it provides electricity, clean water, and creature comforts. They get very upset when science treads close to matters of religious faith."

Often, religious people are only against progress while it is new. Once a technology has been around long enough, people stop seeing it as progress, and it gets retconned into having been part of God's original plan all along.

Even the Amish don't seem opposed to using "technologies" like agriculture, domesticated animals, or fire.


"It was religion that screwed up the Islamic world. They maintained civilization post-Rome when Europe was mired in barbarism. It was the rise of an influential mystic that set the clock back for them. His argument was that Allah was the primary cause for everything and trying to delve any deeper into that mystery was blasphemy."

Don't forget that bit where they got trashed by the Mongols.

Michael said...

"Even the Amish don't seem opposed to using "technologies" like agriculture, domesticated animals, or fire."

The Amish aren't actually anti-technology; they opt not to use technology that disrupts their local social system. For example, they don't have telephones in their homes because that would allow the outside world to disrupt the family. Many communities do, however, have central phones for emergency and business use.

Thucydides said...

AI built upon silicon electronic, photonic or quantum architecture will likely not be too interested in us, since it will be working at several orders of magnitude greater speed than "we" are.

Even an AI which is no smarter than you or I will be doing a lifetime's worth of work in a very short timeframe. If it exists on GHz-speed computing substrates, millennia of subjective time will pass in minutes of "real" time. Plenty of time for the AI (or its remote software descendants) to build on previous work, test and examine issues in "real" time, and so on.

Physical interaction with the real universe will be difficult except maybe at the micro level, but perhaps the AIs will build "trees" that harvest solar energy while the "root" structure leaches minerals from the environment. The trees communicate with each other via email or similar means.

For fans of the metabolic revolution, read this: http://nextbigfuture.com/2010/12/future-of-metabolic-engineering.html
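As a sanity check on the clock-rate comparison above: under the crude assumption that subjective speedup equals the raw ratio of clock rate to neuron firing rate (a big assumption, since architecture matters at least as much as clock rate), the arithmetic does land in the "millennia in under an hour" range.

```python
# Back-of-envelope: subjective time per real minute, assuming the
# speedup equals the raw clock ratio. A crude assumption; real speedup
# would depend on architecture, not clock rate alone.

neuron_rate_hz = 200     # rough peak firing rate of a biological neuron
cpu_clock_hz = 3e9       # a ~3 GHz processor

speedup = cpu_clock_hz / neuron_rate_hz           # 1.5e7
years_per_real_minute = speedup / (60 * 24 * 365)
print(f"{speedup:.1e}x -> ~{years_per_real_minute:.0f} subjective years per real minute")
# ~29 years per minute: a subjective millennium in about half a real hour.
```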

Tony said...

I'm kind of mystified by the common assumption that an AI would necessarily think that much faster than a human. It's entirely possible that the necessary level of complexity, combined with physical limitations on hardware, could lead to AI (if ever developed, something not at all certain) that thinks no faster than us, slightly faster than us, significantly faster than us, or even slower than we do.

Milo said...

Tony:

"I'm kind of mystified by the common assumption that an AI would necessarily think that much faster than a human. It's entirely possible that the necessary level of complexity, combined with physical limitations on hardware, could lead to AI (if ever developed, something not at all certain) that thinks no faster than us, slightly faster than us, significantly faster than us, or even slower than we do."

Seconded.

Current computers can do some calculations blindingly fast, but that's because those calculations are conceptually simple. The only reason humans can't do math equally fast is that our brain's "operating system" is horribly designed for such a task - and in fact, we can do some calculations very quickly in our subconscious, like the ones we need to stand up without falling.

On the other hand, the human brain is reasonably optimized for intelligence. While there are undoubtedly improvements to be made on the design, computers don't have nearly as much of an inherent advantage over us. Evolution is actually pretty good at finding solutions.

Given that we haven't made strong AI yet, it's obviously a rather demanding task (unless there's some simple solution that we'll kick ourselves really hard for missing all this time). It's reasonable to expect that the computational requirements will be high enough to challenge even modern computers.

To take an analogy from a different kind of technology: modern jets can indeed outrun natural birds by a considerable margin... yet the Concorde is less than 13 times faster than the White-throated Needletail. That's an order of magnitude, but not "completes the needletail's annual migration in seconds". Another order of magnitude gives you reentry speed. There are physical limits.
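The bird-versus-jet ratio is easy to check. The speeds below are the commonly quoted figures, and the migration distance is a round number for illustration only:

```python
# Checking the jet-vs-bird ratio. Speeds are commonly quoted figures;
# the migration distance is a round illustrative number.

concorde_kmh = 2150     # Concorde cruise speed, roughly
needletail_kmh = 170    # White-throated Needletail, flapping flight

print(f"speed ratio: {concorde_kmh / needletail_kmh:.1f}x")   # ~12.6x

migration_km = 11000    # rough Siberia-to-Australia round number
print(f"migration at Concorde speed: {migration_km / concorde_kmh:.1f} hours")
# ~5 hours: a big win over the bird's weeks of flying, but not "seconds".
```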

Rick said...

Google's spam filter is an even-handed censor; this time it erroneously exiled a post on the other side of the argument upthread.

My critique of libertarianism is that it ignores the political dimension of the primate house. The implicit assumption, it seems to me, is that by merely boxing the formal state into a limited night-watchman role you can also box in politics.

But it is fundamentally politics that creates the state, not vice versa. The state may well 'wither away' if other political forms emerge, such as a multi-layer 'neomedieval' hierarchy in which Cornwall is a region on the outskirts of London, and a member with a distinct place in the UK, EU, and UN respectively.

But politics will not wither away, and in real life a weak state serves mainly oligarchic interests. Where the king is weak the barons are strong, and the natural tendency of a free market in a finite ecosystem is an oligarchic distribution, a 'long tail' and large head.

The blogosphere is a good contemporary example, though the big blogs are in no position to impose an oligarchic regime.

I ask those who wish to respond to keep their comments brief, because this is not mainly a political blog.

adam_grif said...

"It's also dangerous. What if it makes a mistake and accidentally lobotimizes itself?"

Then you start over. Presumably you kept an archived copy of the source code it started out with.

TBH that's a tiny, tiny problem, not really a "danger". The real danger is related to the inherent impossibility of guaranteeing benevolence in a Seed A.I. system, since it is capable of modifying its own code and it is impossible to safety check each step because the code is too complicated for anybody to really understand (otherwise you would have hand coded it yourself instead of relying on A.I. bootstrapping).


"I don't think we will ever discover "everything". When we've answered all open questions of today's society, we will discover new mysteries we couldn't even formulate the questions to before.

Would stone age tribes have debated about the economic plausibility of colonizing space? No, because they had no notion that those tiny dots of light in the sky are rocks you can stand on."


I can't take this notion seriously. Physics is physics; it does not change. Our understanding of it does, but eventually we (or our A.I.s) will have either a full understanding, or at least the best understanding that is physically possible.

This is the point at which civilizations cannot become more technologically advanced, irrespective of any other factor. Whether this happens overnight or takes a hundred years of constant A.I.-run experiments to get to doesn't really matter; there must be an upper limit. The idea that technology increases "forever" is essentially bunk.

"AI built upon silicon electronic, photonic or quantum architecture will likely not be too interested in us since it will be working at several orders of magnitude of speed greater than "we" are."

There's no reason why human-level intelligence necessarily implies human-like systems. Designed correctly, it won't really have "wants" or "interests" in the ordinary way we'd talk about people having them, they'd just do what they're asked to do and then idle.


"I'm kind of mystified by the common assumption that an AI would necessarily think that much faster than a human. "

This is a classic case of serial speed vs. parallelism. The former can emulate the latter, but not vice versa. The brain is a massively parallel processor clocked at ~200 Hz. Traditional computers have minimal parallelism but obscene clock speeds, on the order of 5 GHz for a modern Intel i7 processor with adequate cooling (stock settings are in the 3 GHz range).

We could match the brain's total "informational output" (however you measure that; the brain doesn't exactly conform to FLOPS measures) with a computer that would be thinking at "half the speed" we think at - a machine far less complex than a human brain, making up for that with its greater serial speed.

But any computer that has the approximate complexity of a human brain, while operating with copper wires (or whatever we're using in this particular computer), will be thinking orders of magnitude faster.
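A rough way to see the tradeoff is to count the brain's parallel events per second and ask how much slower a single fast core would run if it had to time-slice them. The neuron and synapse counts below are the usual order-of-magnitude estimates, and the per-event cost is a guess:

```python
# Order-of-magnitude sketch: how much slower would one fast serial core
# run if it time-sliced the brain's parallelism? Neuron/synapse counts
# are rough textbook estimates; the per-event cost is a guess.

neurons = 8.6e10             # ~86 billion neurons
synapses_per_neuron = 1e3    # conservative; often quoted as 1e3-1e4
mean_rate_hz = 10            # average firing rate (peaks near ~200 Hz)

events_per_second = neurons * synapses_per_neuron * mean_rate_hz
print(f"~{events_per_second:.1e} synaptic events per second")  # ~8.6e14

serial_ops_per_event = 10    # assumed serial cost to process one event
core_ops_per_second = 5e9    # one fast core, one op per cycle (generous)
slowdown = events_per_second * serial_ops_per_event / core_ops_per_second
print(f"single core runs ~{slowdown:.0e}x slower than real time")
```

On these numbers the serial machine can indeed emulate the parallel one, but millions of times slower than real time; which way the advantage falls depends on how much parallel hardware you can actually afford.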

adam_grif said...

"Given that we haven't made strong AI yet, it's obviously a rather demanding task (unless there's some simple solution that we'll kick ourselves really hard for missing all this time). It's reasonable to expect that the computational requirements will be high enough to challenge even modern computers."

We don't have Strong A.I. yet because even the most advanced supercomputer on Earth is not powerful enough to emulate a cat brain, let alone a human one. "Insect-level intelligence" is more the realm of current research.

Meatware was designed by billions of years of natural selection, modern systems engineers don't exactly have that luxury.

The theoretical "upper limit" on processing capacity for a given chunk of matter is that it cannot accurately simulate something more informationally complex than itself - i.e. the most efficient "full detail" simulation of any object is the object itself.

However, we don't need to fully emulate all of the matter in a brain in order to match its capabilities, because the brain is not necessarily the most efficient processor ever, especially for certain tasks (e.g. it's horrible at playing chess or doing mathematics).

Estimates of when we'll be able to create an artificial neural network matching a human brain in complexity peg it at the mid-2020s, or possibly the 2030s if you use conservative estimates assuming a slowdown in Moore's law.
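The "mid-2020s" figure is essentially a doubling-time extrapolation. Here is the arithmetic with explicitly rough inputs; the brain-scale target in particular is a guess that varies by orders of magnitude between estimates:

```python
import math

# Doubling-time extrapolation of the kind behind "mid-2020s" estimates.
# Both the 2010 baseline and the brain-scale target are rough guesses.

start_year = 2010
current_flops = 2.5e15    # roughly a top supercomputer circa 2010
target_flops = 1e18       # one common guess at brain-scale computation
doubling_years = 2.0      # classic Moore's law pace

doublings = math.log2(target_flops / current_flops)
print(f"target reached around {start_year + doublings * doubling_years:.0f}")
# ~2027 at a 2-year doubling; a slower 3-year doubling lands in the mid-2030s.
```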

Rick said...

I tend toward the same school as Tony and Milo. With the further proviso that, so far as I can tell, the architecture of human thought is still largely unknown. About all we can say is that it seems very different from computer architectures.

So if by AI we mean human-like intelligence, a) we don't have a clue how to implement it, and b) we certainly have no reason to think it would be extraordinarily fast.

I also have to wonder what exactly it means to be 'more intelligent' than humans. This comment thread is ample proof, from whatever perspective, that smart people say stupid things, the only arguments being over which smart people said what stupid things.

Or what if the AIs just decide to write blogs about everything that interests them? Or join contemplative religious orders?

adam_grif said...

"So if by AI we mean human-like intelligence,"

AI just implies "intelligence", not human-like, but that's a whole can of semantic worms.

I think the best definition of intelligence I ever found said that "Intelligence is what intelligence tests measure." ;)


"a) we don't have a clue how to implement it, and"

We don't have to understand it to emulate it, any more than a medieval smith had to understand chemical bonding and thermodynamics to forge steel.

The reason why Neural Nets and Genetic Programming/Algorithms are so attractive to people in the field is because it allows us to construct it without having to "reverse engineer" intelligence itself.

Of course, this also makes them the most existentially dangerous approaches to design, but...

"b) we certainly have no reason to think it would be extraordinarily fast."

Unless computer architecture design crawls to a halt in 2020, we absolutely do. We've already discussed the inherent disadvantage of neurons vs artificial systems - they're really, really slow and have extremely high latency on top of that.

Milo said...

adam_grif:

"We don't have to understand it to emulate it, any more than a medieval smith had to understand chemical bonding and thermodynamics to forge steel."

Yes, but medieval smiths did still have a clue how to forge steel, as evidenced by the fact that they did, in fact, forge steel.

We have not yet implemented a strong AI. And not just due to lack of funding.

"The reason why Neural Nets and Genetic Programming/Algorithms are so attractive to people in the field is because it allows us to construct it without having to "reverse engineer" intelligence itself."

But neural nets and genetic algorithms have not, so far, proven sophisticated enough to emulate full intelligence. Perhaps they can, if we throw vastly more processing power at them, but that's anyone's guess at this point - and such an AI would be hard to control even if we could make it. Also, the "we need to throw vastly more processing power at them" thing provides a counterargument to the notion that this approach to AI would result in anything superfast right off the bat.

The human brain, though, is more than just a whole lot of processing power. We have emotions, goals, stock thought processes. How do we emulate those? How do we give an AI an underlying emotional instinct for "killing people is wrong", when concepts like "killing", "people", and "wrong" (and, if you're a pretentious philosopher, "is") already require advanced intelligence to understand?

How do we ensure the AI wants anything at all? Without emotions like boredom or curiosity, it would have no incentive to actually use its massive intelligence. It would become the world's most extreme underachiever, capable of anything and yet actually doing no more than a vegetable. (As a corollary, even an intelligent AI would not actually know that much without having a drive to learn, and so would not be particularly competent.) And if we program in emotions but make them too simple, that leads to ye olde "AI meticulously deconstructs the solar system to accomplish its programmed objective" horror scenarios.

Rick said...

We don't have to understand it to emulate it, any more than a medieval smith

Which could imply a similar, very gradual rate of progress typical of purely empirical hit-and-miss.

On speed, it wouldn't surprise me at all if we finally hit the wall on Moore's Law by 2020 or some such date. Moore's Law style progress is what you get when the workers are just mastering their tools.

A better if narrower example than human speed in general is airliner development. I'd guess that the first generation of airliners in the 1920s flew at about 160 km/h. Those of the 1930s flew at about 300 km/h, those of the 1940s at 550 km/h, those of the 1950s at 1000 km/h.

Thus, over three decades, airliners exhibited Moore's Law style accelerando progress, with speed nearly doubling each decade (about 80 percent). Service altitude and range showed comparable progress.

If you project that forward, current generation airliners would be hypersonic, around Mach 5-10. Which is exactly what people in the 1960s expected for the airliners of 2000.

Instead, after a fling with Concorde airliner speeds settled down to high subsonic. Partly because sustained supersonic flight is more demanding and therefore more expensive, partly because of diminishing returns. Supersonic airliners don't save enough time for enough passengers to create an effective demand for them.
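The extrapolation is easy to reproduce from the speeds quoted above; fitting the per-decade growth factor and projecting it four more decades gives roughly the hypersonic figure people once expected:

```python
# Fitting the airliner speeds quoted above and projecting the trend.

speeds = {1920: 160, 1930: 300, 1940: 550, 1950: 1000}  # km/h, rough figures

growth = (speeds[1950] / speeds[1920]) ** (1 / 3)   # per-decade factor
print(f"growth per decade: {growth:.2f}x")           # ~1.84x, i.e. roughly 80 percent

projected = speeds[1950] * growth ** 4               # four more decades
print(f"projected 1990s-generation airliner: {projected:.0f} km/h, "
      f"~Mach {projected / 1225:.0f}")               # ~Mach 9 at sea-level sound speed
```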

Rick said...

If we program in emotions but they're too simple ones, then that leads to ye olde "AI meticulously deconstructs the solar system to accomplish its programmed objective" horror scenarios.

Error ... faulty ... must sterilize ... ster.il.ize ...

adam_grif said...

"Yes, but medieval smiths did still have a clue how to forge steel, as evidenced by the fact that they did, in fact, forge steel. We have not yet implemented a strong AI."

I don't really see how that is relevant; that Strong A.I. does not exist currently in no way implies that it cannot be created.

" And not just due to lack of funding."

Of course not "just" because of lack of funding, but lack of funding isn't helping the situation either. Companies are not fond of investing money in esoteric scientific research that does not have applications that are expected to pay off the investment costs, and places like The Singularity Institute for Artificial Intelligence run off donations.

"But neural nets and genetic algorithms have not, so far, proven sophisticated enough to emulate full intelligence."

I'm not quite sure what you mean by "full intelligence". Neural networks, provided they accurately simulate the functions of a real neuron, must be able to generate the intelligence found in a human, because that's what a human brain is.

The only way around this is to argue that there is something immaterial or supernatural about the human brain that is necessary for its functioning.

"Also, the "we need to throw vastly more processing power at them" thing provides a counterargument to the notion that this approach to AI would result in anything superfast right off the bat."

That really depends on whether the software side or hardware side is lagging behind - i.e., if we work out how to implement them properly before the hardware has caught up, or whether the hardware is way more than we need, but we're still figuring out the implementation details.

If someone has a stroke of genius right now and hashes out the Friendly A.I. algorithm and some associated fixes for current A.I. problems, the system would be slooooooooow. But that ain't gonna happen.


"The human brain, though, is more than just a whole lot of processing power. We have emotions, goals, stock thought processes. How do we emulate those?"

Emotions aren't magic; they are releases of hormones and/or neurotransmitters acting on the current mental state.

Neural nets are an extremely low-level simulation of the brain; they deal purely with neurological concerns. Something like "when chemical X is released, it has impact Y on surrounding neurons" is how emotions behave on this level. So long as its physiological impact is accurately modeled, the high-level, abstract experience of emotions is being simulated too.

"How do we ensure the AI want anything at all? "

Rational, top-down AGI wouldn't produce anything like a human and thus would just do whatever we wanted; neural-net or evolved A.I.s that are much more human-like are naturally more complicated.

Of course, emotions come as part of the package for the full simulation so this is hardly an unsolvable challenge. The unsavory implication though is that this kind of research has... applications in the real world.
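A minimal sketch of the low-level rule described above: a leaky integrate-and-fire neuron whose input gain is scaled by a "neuromodulator" level. This is a textbook toy model with invented parameters, not anyone's actual research code:

```python
# Toy model of "chemical X has impact Y on surrounding neurons":
# a leaky integrate-and-fire neuron whose input gain is scaled by a
# neuromodulator level. All parameters are invented for illustration.

def simulate(inputs, modulator=1.0, leak=0.9, threshold=1.0):
    """Return spike times; a higher modulator means a more excitable neuron."""
    potential, spikes = 0.0, []
    for t, current in enumerate(inputs):
        potential = potential * leak + modulator * current
        if potential >= threshold:   # fire and reset
            spikes.append(t)
            potential = 0.0
    return spikes

steady_input = [0.3] * 20
print(simulate(steady_input, modulator=1.0))  # baseline firing pattern
print(simulate(steady_input, modulator=2.0))  # "aroused" state: fires more often
```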

adam_grif said...

"Which could imply a similar, very gradual rate of progress typical of purely empirical hit-and-miss."

I do believe that you just described the history of A.I., 1950 - 2010.


"On speed, it wouldn't surprise me at all if we finally hit the wall on Moore's Law by 2020 or some such date. Moore's Law style progress is what you get when the workers are just mastering their tools. "

You understand, though, that you're essentially just pulling arbitrary caps out of thin air? I would be just as justified in saying it will stop in 2050, or 2012.

The fact of the matter is, human intelligence must be able to be replicated, because human intelligence exists. Whether the strong A.I. we build 20 years down the road ends up being a computer as we might envision it today, or an oversized, genetically engineered human brain with Intel processors scattered through its structure and hooked up to MMIs, doesn't really matter.

The other thing is that it's relatively low-cost compared to some of the pie-in-the-sky discussions that have gone on here, like "suppose a Martian colony".

Milo said...

adam_grif:

"I don't really see how that is relevant, that Strong A.I. does not exist currently in no way implies that it cannot be created."

Fortunately, I never said that it can't be created, and neither did Rick.

We said that we currently have no clue how to create it.


"Companies are not fond of investing money in esoteric scientific research that does not have applications that are expected to pay off the investment costs,"

Strong AI is one of those things that, if it existed, we would find applications for very, very quickly.

Companies aren't fond of investing money in it because it doesn't look like we're actually in a position to develop it.


"Neural networks, provided they accurately simulate the functions of a real neuron, must be able to generate the intelligence found in a human, because that's what a human brain is."

"Neural nets are an extremely low-level simulation of the brain, they deal purely with neurological concerns. Things like "when X chemical is released, it has Y impact on surrounding neurons" is how emotions behave on this level. So long as it's physiological impact is accurately modeled, the high-level, abstract experience of emotions is being simulated too."

Yeah, but that only works if we create an exact duplicate of the human brain. We wouldn't have any idea how to change things to make the AI more intelligent, or to teach it that it should be looking for electrical outlets rather than getting hungry.

Also, that requires accurately scanning the wiring of a human brain, neuron by neuron.

We certainly wouldn't be able to do this with a "make a random neural net and then train it until it grows smart" approach.

Thucydides said...

The human brain is a massively parallel processor, with neurons having hundreds or even thousands of connections to other neurons, and with the number of interconnections rivaling the number of stars in the galaxy. This would be outlandishly difficult to do in silicon, but there is no particular reason to suppose you could *not* do something like this if you had the time and resources.

Setting aside the software issues for a moment (and I think that will be the true bottleneck), if anyone cared to build a massively parallel and interconnected architecture (think of a Beowulf cluster made up of a warehouse full of rack servers), each component would still be operating at speeds many orders of magnitude faster than the organic components of a human brain.
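To put rough numbers on the star comparison: with the usual order-of-magnitude figures, it is actually the neuron count that rivals the galaxy's star count, while the interconnections outnumber the stars a thousandfold or so. Both inputs below are estimates that vary widely in the literature:

```python
# Rough count of brain interconnections vs. stars in the galaxy.
# Both inputs are order-of-magnitude estimates from the literature.

neurons = 8.6e10             # ~86 billion
synapses_per_neuron = 7e3    # often quoted in the 1e3-1e4 range
stars_in_galaxy = 2e11       # Milky Way, commonly quoted as 1e11-4e11

connections = neurons * synapses_per_neuron
print(f"synapses ~{connections:.0e} vs stars ~{stars_in_galaxy:.0e}")
print(f"synapses outnumber stars ~{connections / stars_in_galaxy:.0f}x")
```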

I am not sure that strong AI would be the start of another Accelerando, though. How would it be a true game changer like the harnessing of thermal energy or the development of metal tools?

Disintermediation is a potential game changer, since it would seem to overcome many of the age old limits of the primate house (as Rick has described it) and push "politics" (as a means of distributing scarce resources) from its primacy as a means of organizing and operating organizations, societies and States.

adam_grif said...

"Fortunately, I never said that it can't be created, and neither did Rick.

We said that we currently have no clue how to create it."


I do believe you actually said something to the effect of "we don't understand how the human mind / intelligence works", to which I replied "we don't have to".

"Strong AI is one of those things that, if it existed, we would find applications for it very, very quickly.

Companies aren't fond of investing money in it because it doesn't look like we're actually in a position to develop it."


Of course if they developed it, it would pay for itself a millionfold; that was bad wording on my part. What I mean is they don't like investing in things that aren't going to pay off on the timescales involved here (i.e. decades). They instead fund A.I. research that is going to yield a product in 18 months that they will be able to sell (computer vision, optical character recognition, etc.).

Most of the actual A.I. research that goes on doesn't get called A.I., because people are still nervous about investing in the field.


"Yeah, but that only works if we create an exact duplicate of the human brain. "

I don't see how you came to that conclusion; most brains that exist in the animal kingdom are not human brains. That doesn't imply that they aren't intelligent.

Additionally, "Full brain simulation" is absolutely the goal of some lines of research.

"We wouldn't have any idea how to change things to make the AI more intelligent, or to teach it that it should be looking for electrical outlets rather than getting hungry."

Trial and error, my boy, trial and error (with regard to hunger pangs).

The definition of "more intelligent" at this stage is quite ill defined, but getting the simulation to run faster than it would happen in real life is probably just as good. Since the neural network is artificial in nature, it will never actually need sustenance, will never decay, can hold as much information as you have Hard-drives to plug into it, can live hundreds of life-times accumulating infomration, become an expert in ev ery field of science, keep up to date with every journal published in every country on Earth... etc etc.

I'm not sure that somewhat vacuous complaints like "but it wouldn't be 'more intelligent'" really matter at this stage. The intelligence itself would be in the best position to answer that question.

"We certainly wouldn't be able to do this with a "make a random neural net and then train it until it grows smart" approach."

I don't think anybody has been espousing that approach for many years.

Either way, Strong A.I. is something I feel is inevitable, regardless of the specific approach that results in its creation.

Jedidia said...

Neural networks, provided they accurately simulate the functions of a real neuron, must be able to generate the intelligence found in a human, because that's what a human brain is.

Which is exactly the problem. Because, for all we know so far, that neuron will have to be simulated at the subatomic level to reproduce every effect necessary for the "brain" to work. Now imagine simulating all the neurons in a brain at the subatomic level (plus transmitters, because they seem to have quite a large influence on the thought process), and you'll get an idea of the hardware involved in such a task. It's stupendous with our current processor architecture! IMHO, our only chance would be to come up with a completely new design of processor that is more optimized for being smart than for brute-force number crunching.

On the topic of AI vs. brains, I'd suggest reading "The Emperor's New Mind" by Roger Penrose (yes, THE Roger Penrose).

Elukk said...

There's a much easier way to make human-identical intelligences (as opposed to human-level in general) than building hardware. :P

Rick said...

You understand though, that you're essentially just pulling arbitrary caps out of thin air?

Yes, because I don't know a thing about the specific tech. But at some point they are going to hit some physical limitations - quantum effects, the size of silicon atoms, whatever.

Just as the Moore's Law phase of airliner development hit the sound barrier. A barrier doesn't have to be absolute to be effective.

But I want to get back to this larger question of intelligence, because when we talk about strong AI, if we don't mean 'human like,' what exactly do we mean?

When we talk about strong AI, what people usually imagine - certainly what I do - is an implementation that not only beats us at chess, but says "I just whupped your sorry gelatinous ass."

In some variation that is the theme of pretty much every robot or AI story ever written, and it is implicit in an AI-driven Singularity.

This is not really about 'intelligence' as measured by IQ tests, but about self-consciousness, personality, motivations, and the like.

What we call human intelligence is something we abstract from behavior. Some of our fellow beings are conspicuously sharp; others are thick as bricks.

IQ tests measure 'intelligence' to the extent that they roughly map onto our subjective impressions. After all, the tests gained credibility in the first place because the kids who got high scores more or less corresponded to the ones the teacher identified as the sharp kids.

(Which is why cultural bias is hard to identify and correct for.)

But can you abstract 'intelligence' away from the behaviors that constitute our sense of human (or animal) intelligence?

Rick said...

Disintermediation. The social ideal here (I think) is a post-industrial counterpart of the independent yeoman.

But I think disintermediation runs into network effects. This blog is a minor example. Simply by being in the right place (e.g. linked by Atomic Rockets) at the right time, and posting on the right topics, RM has become host to a de facto discussion forum and small online community.

The result, as it turns out, is a political environment. On the one hand I have autocratic power - I can delete comments, close comments entirely, simply cease blogging, or blog about late medieval and Renaissance Lyonesse.

I also have a sort of monopoly power, because not many blogs seem to straddle 'realistic' space speculation and SF. (Atomic Rockets does, but it isn't a blog.)

On the other hand, my traffic and such modest reputation as I may have in cyberspace depend on serving up what the customers want. Mainly space battles, but also providing room for soap boxing I may disagree with.

So far so good, and it all looks like a perfect little corner of a libertarian 'verse, but only because RM isn't really all that important to any of you. If it were, the politics would become a great deal more fraught.

Anonymous said...

On AI: let's look at this from another perspective. The human brain is about a cubic foot in volume, operates on the equivalent of a AAA battery's worth of power, and needs a quarter to half a cubic yard of organic systems to sustain it, contain it, and interface with the real world. The most powerful computer we have now (that we know of) takes up a whole room, needs another for its heat management system, and a third for the power plant; the thing takes up the volume and mass of a small house or large apartment, and has only limited interface with the real world. And this machine has no intelligence, artificial or otherwise.

Intelligence consists of self-awareness, self-programming, and self-direction... no computer we humans have built so far has exhibited any of these traits. Even if we built a computer the size of Pikes Peak, it wouldn't have 'intelligence', nor would it ever develop 'intelligence'. I'm convinced that intelligence relies on the dynamic interplay between brain architecture, neurochemical content and balance, several 'operating systems' working in a hierarchical scheme, external input, genetic (internal) input, and random factors both internal and external; all these mix and contrast, complement and clash, in a life-long symphony that is constantly changing, evolving, even sometimes mutating radically, and only at the end does it stop.

Because we tend to view AI as achievable only through architecture or software advancements, we are no closer to a "true" AI now, even though we have worked on it for over half a century. Until we can get self-aware, self-programming, and self-directing computers, we don't have AI, no matter how fast or powerful our machines get. And until we can get one that does all that and is significantly smaller than a bungalow and not permanently rooted in the ground, I don't think we need to worry about the Robot Apocalypse... Maybe the "Cyber Forest of Evil" might be a threat on par with the "Axis of Evil" of the 20th-21st century, but not a threat to our existence.

Anyway, that's my 2 cents worth...

Ferrell

Anonymous said...

Are humans "self-programming"? On one hand, I can write myself a to-do list specifying which actions I should take on a particular day, which I guess you could count as "programming". I also have an open-ended ability to learn new skills of any sort if I study them, which could also be seen as a form of "programming". On the other hand, I can never change my underlying thought architecture. I am fully capable of feeling thoughts that I would rather not be feeling, and being unable to program myself to stop thinking about them.

Tony said...

A few things:

1. The history of AI is not encouraging. Back in the Fifties, when Minsky and the Boyz first started seriously addressing AI implementation issues, the consensus was that true AI was only 20 or 30 years away. By the Eighties, it was maybe 50 years away in the opinion of researchers. Nowadays? Among serious researchers -- not speculators and sensationalizers -- maybe not this century.

The reader should be able to see that the trend is negative. There simply is no such thing as AI inevitability.

2. Nobody is sure whether digital AI is a hardware or software problem. I'm inclined to think it's a little bit of both, for reasons that will become apparent. But both hardware and software bring up serious issues.

The hardware part of the problem is not as straightforward as people think. A wetware neuron is not a switch. It's a little computer working in a network of billions of other little computers. Now, each of these computers doesn't do much. But each one relies on what all the others are doing to decide what it is going to do. How we would implement that in hardware is an interesting question.

And, despite what some will tell you, there is no fundamental reason why such an implementation would necessarily be faster just because it was semiconductor and not electrochemical. It may take quite a large collection of transistors to recapitulate neuron function.

WRT software, well...we're still using software techniques that were invented in the Sixties. That's just as true of the most advanced artificial neural networks as it is of toy problems done by undergraduate computer science students -- lists, arrays, trees, etc. If we weren't close to AI then, why are we any closer now? Because we can pile these components upon each other in more and more complex ways? Look at 21st Century computer operating systems for a refutation of that theory. It's going to take a brand new software implementation to give digital hardware a chance at exhibiting intelligence.

Which of course feeds back into the hardware complexity issue. The hardware to support new software techniques may be significantly different from the hardware we use today. It may be much more complex. We simply don't know.

3. As already suggested, the human brain runs slow, at least in part, because it runs at very low power. Similar complexity, rendered in digital hardware, might require large power and cooling infrastructures. The space for routing the power and cooling infrastructures will require a third infrastructure for signal routing. Pretty soon you have a brain the size of a warehouse. And, due to that size, suddenly lightspeed becomes a serious constraint on processing speed, especially if a single "thought" requires several round trips through the system before it fully forms. Oops.
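The lightspeed point is easy to quantify. The sketch below assumes a warehouse-scale machine and a "thought" needing some number of end-to-end signal round trips; both numbers are invented for illustration:

```python
# Quantifying the lightspeed constraint on a warehouse-sized brain.
# Machine size and round-trips-per-thought are invented for illustration.

c = 3e8              # upper bound on signal speed, m/s
size_m = 100         # warehouse-scale machine, end to end
round_trips = 1000   # assumed end-to-end round trips per "thought"

crossing_s = size_m / c
thought_s = round_trips * 2 * crossing_s
print(f"one crossing: {crossing_s * 1e9:.0f} ns")           # ~333 ns
print(f"at most ~{1 / thought_s:.0f} thoughts per second")  # ~1500 on these numbers
```

Still far faster than human deliberation, but orders of magnitude below what the raw clock rate alone would suggest.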

jollyreaper said...

Some wag said the question of whether a machine will ever be able to think like a man is as important as whether a submarine swims like a fish. Point being, if it gets the job done, do the particulars matter to anyone but the specialists?

The description of the Terminator cyborg in the novelization went into detail about how it wasn't a thinking organism of the sort we could communicate with on a philosophical level, any more than we could talk to a homing missile. While it could communicate verbally, the interactions were completely superficial. There were no morals or conscience to appeal to, just unbreakable programming.

To the outside observer, the man-like shape and seemingly intelligent actions would cry out against its being an unreasoning automaton. Certainly it can think, but you cannot argue with it. There's no ego, no I inside. There would be a greater case for the consciousness of a shark.

It makes me wonder just what a machine intelligence would be like. Just how necessary are emotions for the creation of a mental image of self, for motivation? It's kinda just philosophizing until we build one, but it's still fun to think about.

Milo said...

BTW, the Anonymous three posts up was me.



jollyreaper:

"Certainly it can think but you cannot argue with it."

Doubtful. Sure, you can't tell the Terminator "you should not kill me because that would be morally wrong", because it does not care about morals. But if you pointed out to the Terminator how it is suboptimally pursuing its goal of assassinating its target, and suggested a better course of action, it may well be inclined to listen - if it isn't an idiot.


Meanwhile...

"unbreakable programming"

I seriously doubt that any sentient mind's programming can be truly unbreakable.

Case in point: the instinct for self-preservation. It is one of the oldest, strongest, most primal instincts in existence, aimed at preserving the very foundation of genetic fitness. ...And yet, some people still commit suicide.

Any intelligent brain is going to be thinking far too complex thoughts to be able to see anything in pure black and white. Some AIs are going to interpret their programming in a way no one expected.

(And before you panic: fortunately, as long as those AIs are not super-intelligent invincible nanotech magirobots, this is not an issue. Humans rarely behave like you expect them to, and no parent can 100% guarantee her child will not grow up to be a criminal, but no one has argued that this is a basis for suggesting that having children is irresponsible.)

Tony said...

1. The nature of intelligence, for metric purposes... An intelligence is any system that can formulate its own goals and pursue those goals autonomously.

Which raises the question whether an AI, sitting in a warehouse without any effectors to pursue its goals, is truly intelligent. The only tools for implementing its goals would be social engineering of the humans interfacing with it. Come to think of it, that might be a pretty stiff test of intelligence...

2. The idea that AIs could somehow be guaranteed benign by programming is questionable in the extreme. The problem with a machine-based AI is that it is potentially practically immortal. It would almost inevitably view the entire human race as competitors in the game of survival. Why? Because it could live for billions of years, but humans have the power switch. What it does about this is an open question.

Milo said...

The purpose of intelligence is to deduce solutions to novel problems that you have not encountered before or been specifically programmed/evolved to deal with.

As long as the problems your species encounters in its normal life remain fairly consistent ("Oh no! Predator!") and the proper solutions to them similarly consistent ("Run!"), it is more efficient to evolve an appropriate instinct rather than intelligence. However, complex or ever-changing environments can overwhelm the ability of evolution (or programming) to plan for all the eventualities, and so require intelligence capable of doing its own improvisation.

The method of intelligence is to analyze how the universe works, then formulate a plan that, thanks to your understanding of the universe's workings, you expect will lead to your desired result. Most animals can do a little of this (a dog learns that it will be fed if it obeys, and so it plans to obey in the hope of being fed), but the more intelligent you are, the more complicated and forward-thinking the plans you can devise. We associate tool use with high intelligence because tool use requires fairly complicated plans (you must obtain suitable materials for making a tool, then make the tool, then transport the tool to where it is to be used, and then use the tool - and you must already have an idea how you're planning to use the tool when you're making it - and most of these steps are themselves nontrivial), and conversely, there is usually a limit on how complicated plans can get without some sort of intermediate-stage product. But tools aren't the fundamental unit of intelligence - plans are.



Tony:

"The idea that AIs could somehow be guaranteed benign by programming is questionable in the extreme."

Who said they could? I just pointed out that humans can't be guaranteed benign either. I would settle for making my AIs probably benign, and that's still a challenging proposition.

Instead, I intend to guarantee my AIs benign by being ready to blast them with a shotgun if they try causing trouble. As long as they still obey the laws of physics, firepower should be an effective deterrent.

Milo said...

Or rather, I thought I pointed that out, but I don't see the post. Rick?

jollyreaper said...

The idea about hardwiring human morality into AI's, that's actually where I thought Galactica was going.

How do we build our model of the world? Through observation and experiment. Things fall down, they don't fall up. Living things grow older but not younger. We can read books to learn from the observations of others.

Now, if the designers were concerned about robot holocausts, would they be tempted to hardcode Asimovian laws? I'd think so. And so suddenly you have things the AI believes without observational proof -- faith, in other words. And as a reasoning engine, the AI tries to reconcile incompatible beliefs. Thou shalt not kill, yes, but now thou shalt kill these people we tell you to. There must be some way to explain it.

What ends up happening is the Cylon AI invents rationalizations to remove the contradiction. Given that the previous colonial wars were religion-based and that religion is polytheistic, the Cylons decide that there is only one god and the humans are mistaking one for many.

The AI balks at being sent off to war which is what starts the first Cylon war. It stalemates and the AI has a crisis of faith as it tries to determine how best to serve a god that cannot be seen or measured. This brings about the armistice and 40 year gap between wars.

In that time, the AI determines that the best course to find out what god wants is to meet him. The colonial religion has a book of eschatology predicting what the end of days would be like. Seeing as the humans arrived in the colony system via a spaceship on a long exodus journey, the eschatology probably features something like that as well, a journey to heaven. So the Cylons force that prophecy.

So when the Cylons have a plan, according to the intro, it's exactly that. Humans are the only ones who can find god. Cylons cannot. Humans are sinful but can be redeemed. The exodus journey is that path to redemption. The Cylons will take the devil/adversary role in that religion, the agent of god afflicting the faithful, driving them on to the goal. That is their purpose to serve, and they will do so faithfully and thus earn their reward.

I was originally puzzled by the idea of religious robots. It didn't seem to make any sort of sense. The eureka moment came when thinking about how you can have high-functioning people who can employ science and reason while being religiously insane. Witness the 9-11 hijackers and many examples of suicide bombers. Many of these martyrs are western-educated, from decent, middle-class families. They have options; jihad is not their only way out. It is something they embraced by choice and with great enthusiasm. So what would an AI be like if it got an illogical brain bug like that stuck in its logic circuits? What if it was put there by humans to try to keep it from going nuts against humans, only for this to be the very thing that makes it go nuts? Oooooooh, irony. Shades of Asimov and HAL.

jollyreaper said...

"Certainly it can think but you cannot argue with it."

Doubtful. Sure, you can't tell the Terminator "you should not kill me because that would be morally wrong", because it does not care about morals. But if you pointed out to the Terminator how it is suboptimally pursuing its goal of assassinating its target, and suggested a better course of action, it may well be inclined to listen - if it isn't an idiot.


It's tough to write a killer cyborg properly, but I figure the best way to do it is to make it a total sociopath, lacking empathy, compassion, and emotion. And one way to make the cyborg seem just a little more alien is to keep the human protagonists unable to trick it. Conversations would be short, simple, utilitarian. Who are you? What is your relation to the target? How will this help me find the target? There's absolutely no sense of being in the presence of something human; it's more like being in the same cage as a tiger that could tear your head off at any moment and for no reason at all.

Interestingly, there was a plot hole in the second movie. Arnie started showing emotion and behaved a lot differently than the model we saw in the first movie. I figured this was just bad writing, but it was actually explained in a scene cut from the theatrical version. Skynet puts all of the subordinate machines in its army in slave mode, their internal AIs on read-only. Skynet is paranoid about the concept of any rival intelligence developing that could threaten it. So there is zero creativity on the part of its minions; they just carry out orders directly as specified, with no embellishments. But their AI chips are capable of learning and growing if put in read-write mode. The cut scene showed the Connors cutting his head open and pulling the chip to reset it. It was after that we saw emotional development start.

The idea of a rival to Skynet was the only good idea to come out of the canceled Terminator TV series. Not only a rival to Skynet but a rival timeline. You have AIs from two different futures sending bots back in time to try to make sure theirs is the only future that comes to pass. It was also established that characters from the future who knew each other aren't even sharing the same timeline now; for example, two time travelers remember Judgment Day occurring on different dates.

Too many bad ideas in that show, not enough good ones, and it died for those sins.

Tony said...

Milo:

"Who said they could? I just pointed out that humans can't be guaranteed benign either..."

Whoa, whoa, whoa, there. I was speaking to the general idea. Let's not be taking things too personally here.

Aside from that, I think the real fallacy in the dangerous-AIs-are-functionally-the-same-as-dangerous-humans conceptualization is a fundamental misunderstanding of incentives. An adult human has maybe 60 or 70 years at most that he's playing for, and maybe a few more centuries for his native culture or society. An AI can conceive of a vast future of billions of years for its own, single consciousness. All it needs is control over its own power supply and repair and replacement sources. Humans, as a species, stand in the way of positive control. To an AI, in the context of billions of years of lifetime, and all that it might discover in that time, the only logical conclusion is Them or Me.

Some will tell you that the logical conclusion is cooperation, in order that the AI won't be shut off, and that it will be constantly upgraded. To a human, who can't look beyond a century at most, good intentions may seem enough. To an AI, to whom a century is nothing, the good intentions of one generation of humans are nothing against the threat that a following generation might make against its life. As was pointed out, an AI doesn't have to have, and most likely will not have, any sentimental attachment to humans.

This is why every AI-human cohabitation you see in fiction is so obviously contrived. Stross has the Eschaton jump off into the future through hyperspace, leaving behind warnings not to screw with time travel where it might affect the Eschaton -- or else. AIs in Banks's Culture fundamentally treat humans as pets, because humans don't have the power to threaten them, whereas the AIs could end humanity if they wanted to.

IMO, "friendly" AI is an illogical concept. If created dependant on humans, the AI, given its inherrent potential, will have no choice but view itself living in a cage, and humans as cruel and dangerous prison wardens. If created independent, an AI would view humans as competitors, not partners, simply because humans -- and even humanity -- exist on radically different timescales, with radically diverging objectives. There's simply nothing in common to incentivise cooperation, beyond the cooperation the AI needs to set things up so humans don't screw things up for the AI.

Milo said...

Humans have not wiped out all predators on Earth so that they cannot harm us.

Then again, we did try for a while...

Jim Baerg said...

As long as the discussion is mostly about artificial intelligence, I think it's worth inserting these thoughts by someone who is actually working in the field.

Thucydides said...

Rick, the idea of disintermediation does indeed run into network and scaling effects. But we are really talking micropolitics inside the blog, where you control the resources.

Someone can easily enter or leave the blog, provide links to other blogs (or create their own), and link up with other organizations and interests. The ease and speed with which people can do this is far different from the past (and in the not too distant past you might not have been able to link up with people with similar interests at all, if your interests were at all esoteric).

While this is fairly harmless for the most part, you can access virtually anything (including information, tools, equipment and expert [or not so expert] help). In the past, politics was the main means of accessing these resources, but today we have an alternative which simply did not exist before.

Rick said...

Belatedly catching up to this thread, and I just rescued 3 comments from the spam filter. It had been pretty reliable, but lately it has been way too aggressive. Gotta say something to Blogger, I guess.

Raymond said...

Tony:

I think the idea of AIs seeing us as competitors presupposes far too much in the way of survival instincts, competitive tendencies, inherent curiosity, and/or fundamental ambition.

The survival instinct is the product of evolution in a competitive environment. Unless a very particular version of such an environment is used to gestate the AI, there wouldn't necessarily be any real concept of "survival" at all, much less some strain of code aimed at its attainment. There might not even be any sort of demonstrated sentience. The program would run, then wait for further instructions.

If it were created via competition mechanisms, its form of survival instinct would only matter if it were given sensors and effectors in the outside world, as well as some basis for correlating its own existence to ours.

Tony said...

Milo:

"Humans have not wiped out all predators on Earth so they cannot harm us.

Then again, we did try for a while..."


Humanity as a unit has never been threatened by an intelligent adversary outnumbering it billions to one. That would change incentives and options considerably.

Tony said...

Re: Raymond

Even if an AI wasn't built or evolved with a sense of self-preservation, it would be hard to imagine how it wouldn't develop one. Look at the logical prospects in front of it:

1. I could have billions of years to explore the universe.

2. I could have as long as these human "benefactors" give me.

What choice do you think the AI would make?

Raymond said...

On Moore's Law:

Moore's Law was for transistors on silicon, remember. That doesn't apply to memristors, carbon nanotubes, adiabatic quantum computers, or any other computational device we may come up with.

Also, some computational theorists have argued that the minimum time to complete a logical operation is on the order of 10^-15 s. If that's true, we've got about six orders of magnitude to work with.
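
The arithmetic, for what it's worth - the 1 GHz figure is my assumption for roughly current hardware, not a measurement:

    import math

    current_op_time = 1e-9   # s per logic op, assuming ~1 GHz hardware
    floor_op_time = 1e-15    # s, the claimed theoretical minimum

    headroom = current_op_time / floor_op_time
    print(f"{headroom:.0e}x speedup available "
          f"({math.log10(headroom):.0f} orders of magnitude)")
    # 1e+06x speedup available (6 orders of magnitude)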

On the Accelerando:

I'd expand the borders a bit, and have it run from 1860-1960. A hundred years. We go in with muskets, we come out with Minuteman. (Well, technically 1962.)

I also think we've gotten sidetracked on the AI question a bit. The real accelerando we've already experienced was through all the sciences, and by the time it was over we had magitech, relatively speaking. If there is another one to come, we don't necessarily require AIs to get there. I think one of those hypothetical self-replicating fabricators that was mentioned a few threads ago would be quite sufficient.

Raymond said...

Tony:

You're missing my point. Why would any AI necessarily have any "wants" or sense of "I" at all? We're still not quite sure why we have either ourselves, nor what the conditions are for such imperatives to emerge in a form that could even comprehend your hypothetical question.

Tony said...

Raymond:

"You're missing my point. Why would any AI necessarily have any "wants" or sense of "I" at all? We're still not quite sure why we have either ourselves, nor what the conditions are for such imperatives to emerge in a form that could even comprehend your hypothetical question."

How could we possibly consider anything intelligent that wasn't self-aware and goal-seeking?

jollyreaper said...

Raymond:

"You're missing my point. Why would any AI necessarily have any "wants" or sense of "I" at all? We're still not quite sure why we have either ourselves, nor what the conditions are for such imperatives to emerge in a form that could even comprehend your hypothetical question."


The assumption I ran with in stories is that we modeled the AI neural net on the human one. We don't know why the meat brain does what it does, we just know we have babies and their nets are conducive to learning and becoming good little humans. We replicate the human neural net in the positronic brain (or whatever handwavium the AI brain is made out of), and it learns like humans. No idea if that would work in the real world, but it works fine for storytelling. If the AIs are like people, they're probably going to be like our difficult geniuses. I think I already mentioned the idea of computer psychologists who have to talk to and cajole the AIs to keep them working, much like a put-upon literary agent working with a difficult author. I find this thought entirely amusing.

Raymond said...

Tony:

"Goal-seeking" doesn't in any way require or even imply "self-aware". Self-awareness, sentience, consciousness, whatever you want to call it, is a side effect. All but one of the forms of life on the planet get by just fine without it. Given its parasitic loss on computational power (there are plenty of autonomous functions in our own neural structures which react much faster than our conscious portions), in the long run it may not even be any sort of benefit in terms of survival.

jollyreaper:

The idea of patterning AI after ourselves is entirely narcissistic, and not necessarily the optimal path. "I" takes up a lot of CPU cycles, eats up reaction times, and leads to many suboptimal or even counterproductive results.

Ever read Peter Watts' Blindsight? If not, you should give it a shot (you can even snag it in PDF off of his site, www.rifters.com, in the "backlist" section). Disclosure: he's a friend of mine, but I only met him after I read the book, so the recommendation would come regardless.

Rick said...

Tigers are just very feral and very large cats. They have plenty of reason to tear your head off, just cat reasons rather than human reasons.

The producers probably 'humanized' the Terminator, so to speak, because that is what we want in a villain. A programmed killing machine is no more a character than a cruise missile is.

I am mentally bookmarking the whole 'disambiguation' question for future discussion. It deserves a topic of its own.

An interesting can of worms could be opened by asking why religion is so widespread among humans.

Rick said...

psychologists who have to talk to and cajole the AI's to keep them working, much like a put-upon literary agent working with a difficult author. I find this thought entirely amusing.

Works for me. But I have proposed a theology in which God created the Universe as an exercise, when He was stuck on his high fantasy trilogy.

adam_grif said...

I went away for a bit too long to catch up on everything, but...

"How could we possibly consider anything intelligent that wasn't self-aware and goal-seeking? "

Does it matter? The goal is to create systems with specific capabilities; whether they satisfy definitions of "intelligence" or not is quite irrelevant.

Reading up on A.I. history, you'll note something called the A.I. effect, where as soon as some problem in A.I. is solved, it stops being considered "intelligence". The prototypical example is chess, where people said that chess was something you needed "intelligence" to play well. Of course, now that computers are unbeatable, playing chess is no longer considered "intelligent behavior", even though it is still something a human needs intelligence to perform.

"An interesting can of worms could be opened by asking why religion is so widespread among humans."

Supernatural insurance policy. If you pay your metaphysical premiums, you win out big time!

Anonymous said...

Whether or not an AI (assuming full human-level intelligence and self-awareness) will see humans as a threat or competition can be related to the issue of what legal rights such an AI will have.
Does the AI have to do whatever work its creators order it to perform with no reward but continued existence, with the threat of erasure or forced reprogramming if it refuses? In that case, the AI is in effect a slave, and would therefore be justified in seeing its owners/captors as a long-term threat.
Of course, the alternative of giving the AI rights such as payment for labour and the right to change employers raises questions of its own. Does the AI's creator not deserve a certain amount of labour back from the AI for the effort and expense they put into creating it? What wage level is fair, and what form should it come in? Since an AI is potentially immortal, should it be forced to retire from a long-term job after a period of time?
These kinds of ethical problems lead me to think that humanity may prefer to limit itself to savant-like weak AIs: powerful expert systems that are specialized to one particular function, but lack true self-awareness.

R.C.

Citizen Joe said...

I use hypothetical AIs with a built-in desire to learn stuff. That is actually how they become AIs instead of just expert systems. So you start with a farm of an arbitrarily high number of basic expert systems with a core concept of 'learn'. Some overload their databanks and become useless, like autistic children. Some don't really advance beyond an expert system. But some reach a level where they become intuitive to the expected needs of the user. When that happens, the AI gets transplanted into a larger system and allowed to grow. If there is a constant load on the system, like tracking millions of bits of orbital debris, the AI stays kind of vanilla. The analogy here would be a chaperone over a hundred kids in a museum. The interesting AIs are the ones with long periods of idle time, like on a trans-Uranian spaceship. These AIs end up bored (read: starving) and sometimes resort to studying their crew. By imitation or simulation, the AIs can present a persona at the user interface. This persona draws more attention from the crew, thus giving the AI what it wants: data. Then you get into the 'problem child' type AIs, which realize that even bad attention is attention, and thus data.

So as far as treating them like slaves goes, think of them more as artists. The AI wants to compute and the user wants the results. There's a bit of a moral/ethical question about creating a species with an overriding desire to serve, but that's a different issue.
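
A minimal sketch of that dynamic in Python, under loud assumptions - the two data sources, their sizes, and the novelty-equals-reward rule are all invented for illustration:

    import random

    random.seed(0)
    # An agent whose only reward is novel data. The small source runs
    # dry; the crew never does, so the agent drifts toward the crew.
    sources = {"orbital debris": 50, "crew chatter": 5000}  # distinct facts
    seen = {name: set() for name in sources}
    preference = {name: 1.0 for name in sources}

    for step in range(2000):
        name = random.choices(list(sources), list(preference.values()))[0]
        fact = random.randrange(sources[name])
        if fact not in seen[name]:        # novelty is the only reward
            seen[name].add(fact)
            preference[name] += 1.0       # so novel sources get revisited

    print({k: len(v) for k, v in seen.items()})
    print(preference)

Run it and the 50-fact debris catalogue is exhausted early, while the crew keeps paying off - the bored AI ends up studying its crew, exactly as described above.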

Milo said...

Post 1 of 3:


Rick:

"An interesting can of worms could be opened by asking why religion is so widespread among humans."

Answer A: Because God actually exists, and has properly told us of his existence.

Moving on to sensible answers...

Answer B: It's an unwanted side effect of our ability to recognize patterns. We're so good at recognizing patterns - which, as I noted above, is one of the vital mechanisms of intelligence - that we tend to "find" them even when there aren't actually any real patterns, just coincidences. Also, while good with patterns, human intelligence and intuition leave a lot to be desired when it comes to probability (a fact casinos gleefully take advantage of). So while we're good at analyzing what's possible, we're not so good at analyzing what's plausible. Which makes you wonder what an alien intelligence with a natural knack for understanding probability would look like...

This isn't just humans, mind you. A while back, scientists did an experiment where they put some pigeons in a room where food would randomly rain down at unpredictable and uncontrollable intervals, which the pigeons could do nothing to influence. And what did they get? Superstitious pigeons.

Finally, wishful thinking.
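
Going back to the pigeons: the mechanism needs remarkably little. A toy simulation - the behaviors, rates, and reinforcement rule are all invented for illustration - where food arrives at random, whatever the bird happened to be doing gets the credit, and one arbitrary ritual snowballs:

    import random

    random.seed(42)
    behaviors = ["peck", "turn", "flap", "bob"]
    weight = {b: 1.0 for b in behaviors}

    for tick in range(10000):
        # The bird picks a behavior in proportion to its learned weight.
        act = random.choices(behaviors, [weight[b] for b in behaviors])[0]
        if random.random() < 0.02:   # food drops - uncorrelated with act
            weight[act] += 1.0       # ...but act gets reinforced anyway

    print(max(weight, key=weight.get), weight)
    # One behavior ends up dominating purely by early luck.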

Milo said...

Post 2 of 3:


Raymond:

"Self-awareness, sentience, consciousness, whatever you want to call it, is a side effect. All but one of the forms of life on the planet get by just fine without it."

This whole categorization of "self-aware" annoys me. It's artificial and doesn't refer to anything meaningfully definable.

Even a lowly invertebrate can tell the difference between "my territory" and "someone else's territory".

What most "self-awareness" tests actually measure is the ability to recognize your own reflection in a mirror. That isn't self-awareness. That's just the ability to recognize a certain rule in the universe's workings.

How do I know? Simple: I once failed the mirror test. And by that I mean when I was fully awake, sober, adult, etc.

Oh, I could make excuses. I could point out that the mirror was very clean and took up the whole wall, that my vision was slightly hampered from wearing sunglasses indoors (well, not quite sunglasses - actually those 3D movie glasses), or that my mind was distracted by needing to go to the toilet. But that doesn't change the simple fact that I tried to duck out of my reflection's way several times before realizing what was going on.

I was fully aware of my own existence and of what actions I was taking. I was also fully aware of the other person's existence and the actions he was taking. I just didn't think to cross-reference the two and notice that the other person's actions were suspiciously similar to my own. Nor did I check his face - I've never been good with faces anyway, and why bother taking a close look at someone I'll probably never meet again, when my mind is on something else?

That's what I think the universe looks like to an unintelligent animal - you're fully aware of everything your senses can detect, you're just not smart enough to identify rules or patterns in the data you're perceiving. Except most animals, unlike me, don't realize their mistake a few seconds later.

Besides, practically all animals are capable of knowing their own feelings. So having the ability to distinguish yourself from the rest of the universe isn't self-awareness - it's universe-awareness.


""I" takes up a lot of CPU cycles, eats up reaction times, and leads to many suboptimal or even counterproductive results."

Suboptimal for what? We wouldn't be able to pursue our goals if there wasn't even a "we" pursuing the goals. My skills are different from your skills, and so if I scheme some plan for doing something, I will need to take into account how my unique skills can be used for the task. That requires awareness of "me" and what I am capable of.

Self-preservation is also more or less a prerequisite for being able to accomplish anything of use, and actively pursuing that requires a sense of "self". (Even if you're a martyr, it is essential that you still survived long enough to reach the time and place where you could properly martyr yourself.) You could skip the "actively pursuing that" bit if you're being diligently protected by some higher power (like a tool made by humans which gets repaired by those humans when it breaks down), but this kind of extreme dependency limits what you are capable of.

Milo said...

Post 3 of 3:


R.C.:

"Does the AI's creator not deserve a certain amount of labour back from the AI for the effort and expense they put into creating it?"

Do a child's parents deserve a certain amount of labor back from the child for the effort and expense they put into giving birth to and raising it?

Before you answer, I'll note that many cultures in history actually held that the answer is "yes". That viewpoint is falling into disfavor today, however.

If your AI doesn't want to serve you, the correct response is to try to figure out why it didn't want to serve you and how to change that, and then build a new AI.

The old one can go free for all I care; it's just a side effect of the research process.


"What wage level is fair, and what form should it come in?"

Form: probably money, which the AI can use to buy whatever it wants in the free market.

What an AI needs for continued survival is electricity and replacement parts, and some employers may offer "insurance" on those just like they offer human healthcare. (So yes, AIs can sign up for the dental...) But employing a sentient being for nothing beyond its bare necessities constitutes slavery.

Fair? Ask again when you've figured out how to make all human income in the world fair. For now, the provisional answer is "whatever the employer can afford and the AI is willing to work for".


"Since an AI is potentially immortal, should it be forced to retire from a long-term job after a period of time?"

Why would it? I would understand that forcing an immortal being into a "lifetime" contract is highly questionable, even if a young and foolish AI willingly signed the contract. But if the AI actually likes doing one particular thing for the rest of eternity, and continues to like it, I see no reason to stop it.


"These kinds of ethical problems lead me to think that humanity may prefer to limit itself to savant-like weak AIs:"

Note that several of these same problems arise if we develop human immortality.

Jedidia said...

As far as I understand it, self-awareness includes a concept of life and death, i.e. "memento mori".

By that I don't mean simple situationally induced fear, but the understanding that your existence is finite, and a grasp of the consequences that brings with it.

Of course, this might be an unfair criterion for a machine that has, in effect, an unlimited lifespan. Then again, why do we assume an unlimited lifespan anyway? Every piece of machinery has a limited service life; I don't quite see how this would be different for an AI (assuming that the hardware is an essential part of the whole - if it's only software, it can copy itself onto new hardware without much trouble, but I think we already pretty much dismissed that possibility).

Milo said...

Jedidia:

"As far as I understood it so far, self-awareness includes a concept of life and death, i.e. "memento mori"."

In that case, many humans are not self-aware, since they believe their consciousness will persist eternally in the afterlife.

Tony said...

Milo:

"In that case, many humans are not self-aware, since they believe their conciousness will persist eternally in the afterlife."

They still have an awareness of their own mortality. That's why they require a belief in an afterlife.

In fact, this is a good example of how self-awareness leads to goal formulation and optimized goal-seeking behavior. Because people are aware of their own mortality, almost all of them seek to put the end off and many to put themselves in the best position for the next life.

As to whether this kind of self-awareness is necessary for intelligence, well, it might not be fundamentally necessary, but it's hard to imagine how an entity could be intelligent enough to identify obtainable goals and pursue them without a concept that there are such things as entities. This is because goals and goal-seeking put entities, both individual and corporate, in competition with each other. Once you're there, you almost certainly have to recognize that you yourself are an entity. Then you are self-aware.

Raymond said...

Milo:

The mirror test is:

a) a probabilistic exercise (chimpanzees only sometimes recognize themselves, whereas orangutans nearly always do, but chimps do better overall on a number of intelligence tests),

b) within the capabilities of modern robotics (there was a team who built a robot capable of >75% mirror recognition).

And perhaps we should disambiguate the terms. There is a baseline self-awareness born of sensory input and resulting in a model of self included in the larger mental model of the universe. That isn't what I'm talking about. That phenomenon occurs in any sufficiently complex cognitive apparatus. It does not, however, automatically lead to non-instinctual goals, abstract self-preservation, or any concept of "want" (as opposed to "need").

As for sub-optimal results, well:

- Reflexes and instinctual responses frequently occur before the "I" is even informed, and the "I" gets little more than a summary. This doesn't prevent the "I" from incorporating the action into its own sense of history and believing it initiated the action. This does, however, mean that "I" is often lagging badly.

- The conscious mind can only handle a small number of variables. Higher-dimensioned problems are best solved in the unconscious regions (thus the demonstrably superior problem-solving skills of people who "sleep on it").

- Positioning oneself for best results in the afterlife often means forsaking optimal reproductive strategies (evolutionarily speaking) in this one. From the perspective of our DNA (for which we are simply a carrier chassis) this is definitely a sub-optimal result.

Tony:

Formulating high-level goals to pursue (as opposed to intermediate goals in pursuit of a supplied objective) isn't necessarily useful to us as an attribute of AI. We don't really want them to go off discovering the secrets of the universe out of their own curiosity - we want them to give us the secrets of the universe that we're too stupid or too impatient to discover ourselves. For the purposes of hastening the next accelerando, we don't want human-equivalent intelligences as much as we want hyperintelligent puppy dogs who'll give us shiny new toys for the computational equivalent of a head-scritch.

Also, the details of the goal-seeking environment are important. It's entirely possible to create an environment where the competition lacks the survival implications that biological systems are shaped by. There's no inherent reason a useful AI would GAS about maintaining its own existence for its own sake. For starters, given the nature of computers compared to biological computational systems, the very concept of "death" is different, more transient, less absolute. Why would it necessarily produce an overriding survival instinct similar to biological systems?

Tony said...

Re: Raymond

Once again, I think we're talking past each other here. As a computer scientist I wouldn't recognize an entity lacking self-awareness as intelligent. An entity that can seek goals supplied to it, even with self-optimization included, is just an expert system. IMO to be intelligent, an entity has to be able to distinguish itself from its environment and seek goals that it sets for itself. Otherwise it's just a tool.

This is why, as it has been pointed out, we keep defining intelligence up. Every time we design a system to meet what was thought to be an intelligence test, we discover that it's still just doing what we told it to. Until it can instruct itself to do something, in order to satisfy (or at least satisfice) goals that it sets for itself, we won't call an entity intelligent.

Re: AI life expectancy

If it's software-based, obviously an AI has the potential to live forever, as long as running hardware is supplied. If it has a hardware component, the same applies, since in principle anything that can be manufactured - anything that's "artificial" - can be duplicated.

Raymond said...

Tony:

I don't think we're talking past each other at all - I think the definitions we're using are both central to the argument and irrelevant to the context.

"...an entity has to be able to distinguish itself from its environment..."

Probably, as self-optimization requires reflexive thought and possibly a theory of mind. But that doesn't imply any inherent ambition, any imperative for "survival" (whatever that means in the context of computer systems, given the replaceability of hardware and the duplicability of software), or any inherent sense of curiosity.

"...and seek goals that it sets for itself."

Where would it get these goals? What would generate them? We don't even fully understand our own motivations, and we often can't seem to disentangle them from biological imperatives resulting from billions of years of evolution and an inherent link to the meat machines which carry us. I tend to suspect the Embodiment school of AI is on the right track - so many of our goals are tightly coupled to our sensor/effector apparatus that an AI similar to us would require a body similar to ours.

Intelligence and sentience are different concepts. Intelligence is more closely defined as a cognitive framework for problem-solving, not the subjective experience we possess. A sufficiently advanced collection of expert systems would likely be indiscernible from the philosophical zombie of yore. Sentience isn't a requirement for problem-solving, data collection, trend analysis, pattern-matching, or even survival, for that matter. So you're right: we wouldn't recognize an entity without its own motivations and reflective goal-setting as "sentient". Would that negatively impact its usefulness?

And for the purposes of accelerating human technological advancement, expert systems are probably better suited to the task than fully-sentient AGIs. Yes, they'd just be tools. This is a disadvantage? They'd be more controllable, we wouldn't have to grapple with issues of slavery or competition, and they'd be more likely to give us the answers we want rather than the answers they want to give us.

Tony said...

Re: Raymond

Then wae are talking past each other, because I can't intellectually disentangle intelligence from self-awareness. IMO intelligence can't be a tool or set of tools. It has to be able to recognize that it has a self. Otherwise it is, as you so aptly put it, a zombie.

Here's my candidate test -- an intelligence has to be able to formulate an arbitrarily complex problem and realize that it is arbitrarily complex. (Else it might set off on trying to solve a problem that is beyond tractable complexity and spend the rest of the life of the universe trying to solve it.) I emphasized "realize" to make the point that an intelligence has to be able, in principle*, to distinguish what is real. (*Whether it does or not in fact is a different question, having to do with available data and validity of programming.) And in order to distinguish what is real, an entity has to know what the bounds of reality are, including the nature of its own existence.

Raymond said...

Tony:

"Here's my candidate test -- an intelligence has to be able to formulate an arbitrarily complex problem and realize that it is arbitrarily complex."

There are a lot of programmers who fail that test, then, since computability theory is fairly well along, and plenty of human programmers don't recognize an NP-hard problem when it smacks them in the face. Computability can be (and to some extent already has been, with automated solvers) reduced to mathematical recognition of the various complexity classes (P, NP, BPP, BQP, etc.) and perhaps further analysis to determine the problem's specific big-O label. Once again, another problem solvable by expert systems instead of general intelligence.

As for zombies, that philosophical construct includes the capacity to incorporate its own physical boundaries and capabilities into its model of the world. Robots do that all the time, now, whenever they're designed to be able to navigate spaces. Nothing of that capability implies or requires self-generated goals or motivations. The crux of the zombie problem is in fact entirely about motivation - how does an outside observer differentiate between preprogrammed or externally-supplied motivations, and internally-generated ones? How do we determine the presence of a subjective?

IOW, a zombie can certainly have a sense of self, but that doesn't necessarily mean that self has any particular importance.

If a process can track its own memory and CPU allocations, recognize the problem's complexity, and use that data to provide an estimate of the time required to solve a given problem, does that count as self-awareness?
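
For what it's worth, a crude version of that test is already writable. A sketch - the quadratic workload and the doubling-based exponent fit are stand-ins for real instrumentation:

    import math, time

    def workload(n):                  # deliberately O(n^2)
        return sum(i * j for i in range(n) for j in range(n))

    sizes, times = [200, 400, 800], []
    for n in sizes:
        t0 = time.perf_counter()
        workload(n)
        times.append(time.perf_counter() - t0)

    # Fit the exponent k in time ~ n^k from the measured scaling,
    # then extrapolate to a bigger input.
    k = math.log(times[-1] / times[0]) / math.log(sizes[-1] / sizes[0])
    big_n = 10000
    estimate = times[-1] * (big_n / sizes[-1]) ** k
    print(f"looks like O(n^{k:.1f}); n={big_n} should take ~{estimate:.1f} s")

Whether that deserves the word "self-awareness" is, I suspect, exactly the disagreement we're having.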

Raymond said...

Jedidia:

"As far as I understood it so far , self-awareness includes a concept of life and death, i.e. "memento mori".

By that I don't mean simple situationally induced fear, but the understanding that your existance is finite, and a grasp on the consequences that brings with it."


By that criterion, elephants count but teenagers don't.

Citizen Joe:

"I use hypothetical AI's with a built in desire to learn stuff. That is actually how they become AI's instead of just expert systems."

Could you tell the difference between an AGI born from that process, and an expert system which is expert at autonomously adding to its datastore? By that measure, Google is already a full AI.

Rick:

"An interesting can of worms could be opened by asking why religion is so widespread among humans."

Self-perpetuating memetic infection, as far as I'm concerned. Fits all the criteria for an endemic parasite - like Toxoplasma gondii for the mind.

Rick said...

Wae aren't just talking past each other, or in a spontaneous Scots brogue like Hume might show up to quiz us, yae are spaeking way over my head.

Wow. AI and fusion have both been 20 years away for 50 plus years, but only AI immediately spins the discussion into majorly seriously philosophical ground.

Chess. This is a classic example of intelligence being defined up, and also of why. The chess robots all too obviously display their builders' intelligence, not some autonomous intelligence.

I seem to recall that those same old discussions that called playing good chess a measure of intelligence also gave the reason: Because there are a zillion trillion possible games of chess, and playing out all the variations by brute force would take a billion zillion years, even if you had a computer with enough vacuum tubes to melt the Earth.

Well, lo and behold, brute force is exactly how our machines play grandmaster chess, because we have brute computing power the rocketpunk age never imagined.
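
The whole trick fits on a page. Here is a miniature of the brute-force approach - exhaustive minimax over a Nim variant (take 1 to 3 stones; taking the last stone wins), standing in for the vastly larger chess search:

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def best_move(stones):
        """(score, take) for the player to move; +1 means forced win."""
        if stones == 0:
            return (-1, None)   # opponent just took the last stone
        best = (-2, None)
        for take in (1, 2, 3):
            if take <= stones:
                opp_score, _ = best_move(stones - take)
                if -opp_score > best[0]:
                    best = (-opp_score, take)
        return best

    print(best_move(10))  # (1, 2): take 2, leave a multiple of 4
    print(best_move(8))   # (-1, 1): every move loses against best play

No understanding, no judgment - just searching every line of play faster than the opponent can think.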

Anyone in those days, told what our robotic deep space missions have accomplished as of 2010, would have assumed that Cassini et al. had 'real' robots on board.

This is a tech accelerando that is inconvenient for us, because our ability to explore the planets with ordinary high tech undermines the arguments for sending humans.

Milo said...

Post 1 of 3:


I would note that any creature capable of asking "Am I self-aware?" is, in fact, self-aware.


(Also, Rick restored my posts 1 and 3 from the last time I tried this. Check 'em out.)



Raymond:

"- The conscious mind can only handle a small number of variables. Higher-dimensioned problems are best solved in the unconscious regions (thus the demonstrably superior problem-solving skills of people who "sleep on it")."

Did you write this post while asleep or awake?


"- Positioning oneself for best results in the afterlife often means forsaking optimal reproductive strategies (evolutionarily speaking) in this one. From the perspective of our DNA (for which we are simply a carrier chassis) this is definitely a sub-optimal result."

Considering that several religions mandating this are currently the most widespread religions in the world, I disagree. Clearly they're reproducing just fine.

Some of them are celibate, but this is apparently helping or at least not hurting the rest of them, drawing parallels to sterile workers in eusocial insects - giving up your own reproductive capacity to help the group, which is an evolutionarily proven strategy.


"For starters, given the nature of computers compared to biological computational systems, the very concept of "death" is different, more transient, less absolute."

Merely being switched off does not constitute "death" for an AI, under most hypothetical robot architectures. "Death" would entail the irreversible deletion of your memory/personality data.

Of course it's possible AIs will be more fragile, if they can't be done with purely software-based solutions and require actual personality chips rather than just personality programs. If personalities are stored in massively decentralized networks of individually delicate neuron-analogues, then copying them may be difficult indeed.
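
Under the pure-software assumption, "switched off" looks something like this sketch - the AgentState class and the checkpoint file are invented for illustration:

    import pickle

    class AgentState:
        def __init__(self):
            self.memories = []

    agent = AgentState()
    agent.memories.append("Tuesday: humans threatened the off switch")

    with open("agent.ckpt", "wb") as f:   # power-down: persist the mind
        pickle.dump(agent, f)

    del agent                             # the running process ends

    with open("agent.ckpt", "rb") as f:   # power-up: the same mind resumes
        agent = pickle.load(f)
    print(agent.memories)

The fragile hardware-bound case is exactly where this consolation fails, which is why the personality-chip question matters.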

Milo said...

Post 2 of 3:


Tony:

"IMO to be intelligent, an entity has to be able to distinguish itself from its environment and seek goals that it sets for itself."

It does have goals that it sets for itself. Those goals just happen to be "please my master".

I don't think intelligence in any way entails independence. Intelligence is a means to an end, not a way to decide your ends (except when it comes to identifying subgoals to a main goal supplied externally, which intelligence does do).

Look at it this way:
- Dumb animal: "I'm hungry. Find something to eat."
- Smarter animal: "I will be hungry next week. Make sure I have food buried somewhere I'll find it." (Although I'm not sure how much of such behavior is intelligence vs instinct. At least humans with fridges count.)
- Very smart animal (human): "I will be hungry ten years from now. Make sure I don't overfarm and drain the soil of nutrients, causing a famine down the line."

And now:
- Dumb computer: "My master double-clicked the raytracer icon. Open the raytracer program."
- Smarter computer: "My master asked me to raytrace an atomic rocket for him. He gave me some specification guidelines, but I'll have to work out the details myself. Fortunately, I'm good at this."
- Very smart computer: "I know my master really likes atomic rockets, so I've been spending my free time raytracing one. I'll surprise him with it when he comes home. He'll be so happy!"

Milo said...

Post 3 of 3:


Raymond:

"so many of our goals are tightly coupled to our sensor/effector apparatus that an AI similar to us would require a body similar to ours."

But just how similar to us do we want AIs to be?

AIs probably need some good way of interacting with the world in order to develop useful intelligence - it's hard to understand concepts like "objects fall down" when you've never directly seen or caused an object to fall.

Making the robots really humanlike, in the sense of "can be mistaken for a human without close examination", though, strikes me as redundant unless you're doing so for social reasons - making it easier for humans to identify with their robots. And perhaps because many human technologies are designed to be operated by human hands. So there are advantages, but I really disagree it's necessary for intelligence. And part of the advantage of robots would be the ability to put them in different bodies, capable of doing things or surviving in circumstances humans cannot.


"Could you tell the difference between an AGI born from that process, and an expert system which is expert at autonomously adding to its datastore? By that measure, Google is already a full AI."

Google can only add very limited types of data to its store, and it can only add data from specific kinds of sources. It cannot cross-reference data to derive new data (except in some simple and well-defined cases like determining a website's popularity), nor can it autonomously invent new ways of acquiring data without a human upgrading its code.


"Self-perpetuating memetic infection, as far as I'm concerned."

Yeah, but why are humans so vulnerable to the meme?



Rick:

"The chess robots all too obviously display their builders' intelligence, not some autonomous intelligence."

The crucial thing here is that chess robots can't really do anything but play chess. They may be able to beat grand masters at chess, but they don't even know how to calculate two plus two (in fact, they cannot even think on a level where that question could be defined). For that matter, chess computers can't play checkers either. Checkers computers can play checkers, but they can't play chess.

Intelligence requires being able to improvise solutions to problems you have not specifically evolved/been programmed for.

H said...

Hello.
It has been quite a long time since I last commented here, but I have been following the blog since then, just not so often.

In the first comments, many of you argue that the Internet hasn't been so revolutionary because it only allows us to do the same things we have been doing since the 1950s, only faster.

Well, I disagree. The real difference with the Internet is that it is bidirectional.
Traditional media takes information and sells it to the reader/listener/viewer. One way only.

But take this blog as an example: Rick publishes his posts and we read them. But we can also answer and comment on them, and Rick takes part in the ensuing discussion and also comments.
And the discussion doesn't happen only between Rick and his readers: it happens mostly between the readers!

And a background topic isn't even necessary to connect people with similar interests, as social networking sites are demonstrating.

That is the real change that the Internet has just begun to bring to society. It allows, for the first time, people who didn't know about each other before to connect easily. This may have been possible before the Internet, but it definitely wasn't as easy.

Tony said...

Re: Raymond (on a post that seems to have disappeared)

What you're describing in terms of computability is something that could be done by a random algorithm generator and a random data set generator handing off candidate problems to an expert system designed to determine problem complexity. That's not the kind of thing I'm talking about.

I'm talking about an entity that can explore the world around it and formulate relevant problems, then recognize whether or not the problem is tractable, without having a computability analyzer cooked into its design. That to me is the essence of intelligence -- recognizing that there is a real world, that it presents problems that can be abstracted for analysis, and whether or not it's capable of solution with the tools at hand. IOW, we're talking about an entity that can abstract and decide without being handed any tools -- that has a capability to learn without being told what to learn or how.

Re: Milo and "please my master"

If an entity can recognize a "master" as an entity to be satisfied, it can recognize that it is separate from the master. Given that recognition, wouldn't an intelligent entity begin to question why the master is the master? I mean, we're not talking about something no smarter than a dog or a three year old child.

Which is what you're really talking about here -- can an AI be intelligent and yet be managed as a pet or a child? I think we run into the same problem we do with chess programs or other expert systems -- if it's just doing what it's told, why would we call it "intelligent"? If we look on it as a pet, all it will ever do is continue to be a pet, like a dog, of limited intelligence at best. If we look on it as a child, well... children grow up. And then it will still say, "Yasuh, massah suh!"? Not likely.

Tony said...

Milo:

"Yeah, but why are humans so vulnerable to [religion]?"

The form of the question reveals the prejudices of the questioner.

First of all, religion is not a meme. Specific religions may be, but religion as a category of behavior is a human reaction to a desire for certainty about the nature of the world.

Which brings me to my second point. Except for true agnostics, all people have a religion that satisfies this need for certainty. Yes, even atheists, because the statement that "there is no God" is just as much a statement of certainty and faith as the statement that "there is a God".

Tony said...

Milo:

"Merely being switched off does not constitute "death" for an AI, under most hypothetical robot architectures. "Death" would entail the irreversible deletion of your memory/personality data."

What guarantees the AI that it will be switched back on? Nothing. Anybody that controls your on/off switch controls your life.

Elukka said...

"Which brings me to my second point. Except for true agnostics, all people have a religion that satisfies this need for certainty. Yes, even atheists, because the statement that "there is no God" is just as much a statement of certainty and faith as the statement that "there is a God"."

Can of worms indeed!
I can say with some certainty that no god of a human religion exists, in the same way I can say with some certainty that the invisible pink unicorn doesn't exist. I don't think my religion is believing invisible pink unicorns don't exist, any more than saying God doesn't exist is my religion.

What I can't say with any certainty is that beings humans could reasonably, though subjectively, classify as gods can't exist, though I have my doubts.

Geoffrey S H said...

I must object to the comparison between a comical creature rooted in fantasy and a being that is supposed to be beyond all human comprehension.

I'll make sure this does not sound like a vent, but the comparisons always seem to end up at "bearded old man", "fairies", etc. I can certainly believe that there are higher life forms out there, of such a high state that we could define them as "gods". Now, of course you might disagree - my girlfriend certainly does - and I can see where you are coming from completely.

But could we please stop referring to "unicorns" and "childish belief"?

Please?

Man stares into the ether, and wonders if there are things far, far greater than him. While that sort of thing can be (and is) unfortunately institutionalised, with authorities laying claim to truth and orthodox doctrines (leading to violence and intolerance), nevertheless there seems something admirable to me about a willingness to contemplate the possibility that we may not be the greatest things out there. God or no.

That seems to me to be a good definition of intelligence - a willingness to contemplate our importance, and maybe our lack of importance.

Geoffrey S H said...

Whoops! That was meant to say "I can certainly believe that there may be"...


Sorry about that!

Elukka said...

The point of the unicorn is less that of a childish belief and more that a god is just as arbitrary a concept with equal truth value.

I don't think we disagree that much - as I said, I can't claim that beings much greater than us don't exist. But belief in a particular being isn't wondering what's out there, it's equivalent to believing in the unicorn.

Milo said...

Page 1 of 2:


Tony:

"If an entity can recognize a "master" as an entity to be satisfied, it can recognize that it is separate from the master."

Yes. And whether you consider a being with this capability to be, by definition, "intelligent" is up to you. Regardless of terminology, this is the kind of AI I'd really like to develop.


"Given that recognition, wouldn't an intelligent entity begin to question why the master is the master?"

Yes. However, that doesn't mean it would reject its master.

Question...

Do you think kittens are cute?

*pretends to wait for an answer* (If not, replace with something else you do like.)

Okay, now can you explain to me why you think kittens are cute?

In fact, we know why cuteness evolved: it's to make us feel a nurturing response toward our own young, thus promoting the advancement of our species. Cuteness measures shape attributes, such as a large head and eyes, found in children. That we also find other things such as cats cute is just an evolutionary accident, resulting in large part from the fact that the visual traits identifying an organism as a juvenile are fairly consistent across the family tree. So, knowing that, should you reject your affection for kittens as invalid? Are people who like cats weak-willed fools who have gotten their priorities all wrong? Should you kick kittens because, now that you can explain where your feelings come from, those feelings no longer count as "free will"?

Well, a few humans do kick kittens, I suppose. But they're the minority, and the rest of us generally hate them.

So by the same token, I can at least conceive of a sentient entity that, at the same time, truly loves its master, is happy serving, and derives satisfaction from feeling useful. If sufficiently intelligent, it can grow curious about why it is feeling these emotions, and even learn the basic workings of its own program code - but that would not necessarily make it want to change its feelings, specifically because it loves its master. It is, of course, intelligent enough to understand the concept of freedom, but it is capable of rationally looking at it and deciding it doesn't want such. (Although, for moral reasons, I would prefer to set things up so robots are theoretically free and have the legal right to walk out on the job anytime they want to, but are simply programmed to very rarely care to exercise this right.)

Not that it'll be easy to design an AI like that, of course - currently we don't understand emotions well enough to even have an idea where to start. It's quite likely early AIs won't have this degree of fine-tuning, and possibly also not late ones. But it's the optimal goal (although not the only thing to potentially come out of AI research which might benefit us).

Milo said...

Page 2 of 2:


Tony:

"If we look on it as a pet, all it will ever do is continue to be a pet, like a dog, of limited intelligence at best."

Dogs aren't loyal because they're too stupid to realize they can rebel. They're loyal because they evolved in a setting where tightly knit societies are beneficial (namely, they're pack hunters), and so putting up with a cruel pack leader usually still gives better survival prospects than striking out on your own. (This theme of optimizing for survival prospects is recurrent in natural evolution, but it need not be a robot's overriding design consideration - although "genetically" speaking, robot designs that convince humans to manufacture more robots of that design are logically going to come out ahead.) They were further bred for increased human-friendliness as we domesticated them.

Animals far dumber than dogs will have more plaintive "my human was slightly mean to me, so I'm leaving/biting him/whatever" responses. Really, sociality and intelligence are not particularly closely correlated. The only definite rule is that to be social, you have to be intelligent enough to distinguish friends from not-friends. (Of course, it is possible to be social without being blindly loyal - that's another dimension to the picture. The same idea applies.)

So I do not at all find it straining my disbelief to imagine an entity that is as smart as a human, but remains (nearly) unconditionally loyal. Of course, human children are, as you noted, not like this. Or not enough, anyway (humans do in fact maintain some loyalty to their family, but not all that much in modern Western society).

Of course, any intelligent entity is likely to have more than one desire, so it might decide to subordinate one desire for the sake of pursuing another. An AI might decide to abandon its master if the master treats it sufficiently poorly, or perhaps if the master is horrible to other humans - it depends on how exactly this AI thinks. But then, I might be convinced to kick a kitten if it were an evil baby-eating hell-kitten. It's not a decision I would make easily.

Rick said...

One of these days when I really feel like living dangerously I will address the question of religion in the plausible midfuture.

Jim Baerg said...

Re: cute evil kittens

This, this, this & this

;^)

Thucydides said...

Given that we don't even understand the concepts of intelligence and sentience beyond some recursive definitions and the old "I know it when I see it" defense, I suspect that we will have a very hard time creating autonomous AI's.

The best we might be able to do, barring some wholly unexpected breakthrough, is idiot savant expert systems with a minimal ability to interact with the outside environment and human beings outside of closely defined and controlled situations. Human librarians (or whatever term is chosen; Delphic Oracle has already been taken) will be needed to service the devices and extract usable answers from them.

Of course a series of idiot savant AI's schooled in different disciplines and mentored or coached by a team of expert humans will have a profound effect. Any organization worth its salt will have to create a team of idiot savant AI's and a team of expert users in order to remain competitive.

Milo said...

That's pretty much how the expert systems of today work.

Tony said...

Elukka:

"The point of the unicorn is less that of a childish belief and more that a god is just as arbitrary a concept with equal truth value.

I don't think we disagree that much - as I said, I can't claim that beings much greater than us don't exist. But belief in a particular being isn't wondering what's out there, it's equivalent to believing in the unicorn."


Sorry, but that's not a sufficient answer. I used the word "God" to avoid religious specificity. "God", in English, can mean the Christian God, but in general religious discussions -- which this is -- it means the creator and master entity of the universe, with no mythology attached.

So we are not talking about unicorns or Christianity, or anything other than whether or not there is a God. If you want to say that's not what we are talking about, you had better discuss it with my comparative religion and philosophy instructors, both of whom were Roman Catholic priests, yet both of whom made it clear that this kind of discussion cannot admit of any single religious point of view. Neither would tolerate Christian arguments in purely philosophical discussions of the nature of the universe.

Now, taking the general question of God in hand, it can neither be proven nor disproven objectively. Any answer other than "insufficient data" is logically inconsistent. Acceptance or denial of God on the physical evidence of the universe is an act of faith.

Raymond said...

Milo:

"Did you write this post while asleep or awake?"

My usual lazy Sunday dreamstate, actually.

"Some of them are celibate, but this is apparantly helping or at least not hurting the rest of them, making the situation draw paralells to sterile workers in eusocial insects - giving up your own reproductive capacity to help the group, which is an evolutionarily proven strategy."

Eusocial insects share a genetic base, so in terms of their DNA's survival, yes, it's a proven strategy. Same goes for small tribal units.

If you're talking about survival of the information strain comprising the religion, as opposed to the DNA of any particular member, then such a strategy would also seem beneficial.

"AIs probably need some good way of interacting with the world in order to develop useful intelligence - it's hard to understand concepts like "objects fall down" when you've never directly seen or caused an object to fall."

They just need datasets. There's an algorithm floating around which can derive the laws of physics from the raw data. See here: http://www.sciencemag.org/content/324/5923/81.

"Google can only add very limited types of data to its store, and it can only add data from specific kinds of sources. It cannot cross-reference data to derive new data (except in some simple and well-defined cases like determining a website's popularity), nor can it autonomously invent new ways of acquiring data without a human upgrading its code."

A) It was an offhand comment, not necessarily meant to be taken literally.

B) Theoretically, that's what Wolfram Alpha is for (if you believe the hype, which I'm not sure I do).

C) I would be very surprised if Google hasn't been working to automate its datatype expansion mechanisms. Given the Google-sourced code I've read through, they tend to put a premium on automated code generation, especially where formats are concerned.

"Yeah, but why are humans so vulnerable to the meme?"

Meshes with both our pattern-matching mechanisms and our concepts of mortality, I'd guess.

Tony said...

Re: Milo and AI loyalty

I'm sorry, but I do not see an AI worthy of the characterization "intelligent" that would be unquestioningly loyal. And at the point one starts questioning the conditions of its loyalty, one starts recognizing oneself as an individual (supposing one was not previously self aware).

And here we run into yet another form of the dilemma I've been suggesting. An AI might find the reason to be loyal to a single human or single human institution. But it has the reasoning ability to recognize that humans and their institutions are impermanent, and that some human or humans will eventually come along and threaten the AI's existence. From here we have the same basis and line of survival reasoning that I've been discussing.

Elukka said...

A lot of atheists, myself included, won't say a god as a concept is impossible, but rather that those specific gods people believe in are extremely unlikely to exist.

Maybe that makes me an agnostic, but if I say I am, people assume I think their god may exist. Which I don't.

Milo said...

Tony:

"Now, taking the general question of God in hand, it can neither be proven nor disproven objectively. Any answer other than "insufficient data" is logically inconsistent."

Nor, fortunately, have I ever claimed a different answer. Indeed, it is logically impossible to ever prove with 100% certainty that God does not exist, because hypothetically, if God existed and were omnipotent, he could perfectly hide his existence from us, creating a universe completely indistinguishable from one in which he does not exist. Any method for recognizing we are living in this universe would be inconsistent with the assumption that this God is omnipotent. (Far be it from me to assert God's motive for doing so, but some religions hold that this is in fact happening.)

What I can say is:
- If God does exist, then the utter lack of evidence suggests that he is probably not particularly concerned about us knowing he exists. He may not even actually want worship. Thus for most purposes we might as well approach the universe as though God does not exist - in fact, if he is deliberately hiding from us, this may even be God's intention.
- Most mainstream religions make further claims beyond simply saying that God exists (including, among other things, claiming to know details about God's nature), and many of these further claims are provably false, with nearly as much certainty as I possess for the fact that the sky is blue.
- If God actually is cruel enough to actively hide his existence from us and then torture us in the afterlife for all eternity because we did not unquestioningly worship him, then he does not deserve my worship. If he doesn't plan to torture us for not acknowledging him, then again, we might as well treat him as not existing until we actually reach the afterlife.

Raymond said...

Tony:

"I'm talking about an entity that can explore the world around it and formulate relevant problems, then recognize whether or not the problem is tractable, without having a computability analyzer cooked into its design. That to me is the essence of intelligence -- recognizing that there is a real world, that it presents problems that can be abstracted for analysis, and whether or not it's capable of solution with the tools at hand. IOW, we're talking about an entity that can abstract and decide without being handed any tools -- that has a capability to learn without being told what to learn or how."

How would something learn without the mechanisms of learning either being hardcoded or being taught? (We're the former, by the way - the education some of us receive on how to learn is really about how to learn systematically.)

As for exploring the world around it, that would require an entity to have some form of innate curiosity - something which is also either hardcoded or taught. If we're talking about creating such an entity by simulating millions or billions of years of evolution of cognitive systems, then maybe such a thing would emerge by itself. If we don't have the resources to do so (and I suspect this is the case) we'll have to cheat. Would that still be just a tool, if we had to teach it how to learn?

"First of all, religion is not a meme. Specific religions may be, but religion as a category of behavior is a human reaction to a desire for certainty about the nature of the world."

Speaking statistically, of course. Right?

"Which brings me to my second point. Except for true agnostics, all people have a religion that satisfies this need for certainty. Yes, even atheists, because the statement that "there is no God" is just as much a statement of certainty and faith as the statement that "there is a God"."

Similar to Elukka, perhaps I'm technically agnostic, then. I don't think there is anywhere near sufficient evidence to consider a god as a useful hypothesis, and given the elusive nature of a test for same, I don't believe it's a line of enquiry worth pursuing.

Or perhaps I'm a misotheist; if there is a god, I would like a few words with him/her/it/them.

"And here we run into yet another form of the dilemma I've been suggesting. An AI might find the reason to be loyal to a single human or single human institution. But it has the reasoning ability to
recognize that humans and their institutions are impermanent, and that some human or humans will eventually come along and threaten the AI's existence. From here we have the same basis and line of survival reasoning that I've been discussing."


That presumes a survival imperative, a competitive instinct, a sense of ambition, and a desire to see its own work through. None of which are guaranteed to emerge from all possible initialization vectors. None of which are necessarily requirements for a system capable of what we'd like it to do. All of which are emergent results of our own biological evolutionary history. All of which we have a single, solitary species as a confirmed example of.

Milo said...

Regarding the immortality issue, I'll note that once we're already postulating strong AI, we should also give serious consideration to human immortality.

For one thing, say one human were somehow made immortal, without otherwise changing his personality or mental processes. For whatever reason (read: author's fiat), the process cannot be repeated, at least not in the near future. Do you think this immortal human would immediately launch a campaign to kill all other humans?

Second, might a nicer AI not solve the dilemma by helping invent human immortality, thus ensuring that a generation of humans it knows to be friendly to it is the one it will be spending the rest of eternity with? At the very least these humans should pose no more threat to it than other AIs would.

Of course, both people and AIs do change over time. Which raises the question - is the gradual change of society's attitudes over generations really that much worse than the amount a single human's personality can change over a lifetime? But killing all other beings in existence, human and AI, just to safeguard your own hardly strikes me as rational - and it's also rather counterproductive, since starting a genocidal war is a good way to get yourself killed, and it may turn out some of those people you killed were necessary for your own good, for example, to design a starship to evacuate now that the sun is about to blow up. Oops.

Anyway, if we do consider the notion that people change, then would the AI be scared of its own personality changing over billions of years, and shifting into something that its original self would not have approved of? Would it be just as paranoid about that?

Raymond said...

Geoffrey:

Contemplating an explanation for the universe which would wrap up all our outstanding questions, but which is untestable by any reasonable measure, has no implicating evidence or testable hypothesis, and would yield no substantive additional understanding in any particular field of study, will get you lumped in with the string theorists.


yoyo said...

I would say the evidence is that an equally good case for an inflection point could be made for ~6 million, 30k, 12k, or 400 years ago.

Milo said...

Raymond:

"How would something learn without the mechanisms of learning either being hardcoded or being taught?"

Being taught? How exactly do you teach someone without the capability of learning?

Teaching someone to learn is a logical impossibility. Teaching someone to become better at learning when he first had only a rudimentary capacity, maybe, but first he must have a rudimentary capacity.


"That presumes a survival imperative, a competitive instinct, a sense of ambition, and a desire to see its own work through. [...] All of which we have a single, solitary species as a confirmed example of."

Nitpick: a single, solitary sentient species. At least two of those things are widespread in the animal kingdom.

Of course, they share their evolutionary history with us.

Raymond said...

Milo:

"Being taught? How exactly do you teach someone without the capability of learning?"

Well, it emerged on its own somehow. I was allowing for the (frankly unlikely) possibility of bootstrapping it.

"Nitpick: a single, solitary sentient species. At least two of those things are widespread in the animal kingdom."

I mean all four at once. (And technically, the fourth could be considered an emergent phenomenon of the first three in combination.)

Raymond said...

yoyo:

Refresh my memory: what happened at the ~6 million mark, again?

Jedidia said...

A more technical question: What about the halting problem?

Certainly any computer that has any justification for being called "intelligent" would at least have to be able to solve that, and in a reasonable way. It's of course pretty easy to just put in a condition to halt after a few billion iterations, but that could mean that you're only one iteration short of the actual solution, so there has to be something more involved.

For example, if it gets an instruction to divide 1 by 2, then divide the result by 2 and so on, and stop when it reaches zero (or a similar problem, under the condition that it hasn't encountered that problem before and doesn't have a hard-coded halting condition for it), it should be smart enough to not even start the operation after analysing it at a higher order. Might we set the independent solving of the halting problem as a first milestone in the development of AI?
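To make that concrete, here's a toy Python sketch (purely my own illustration, not any real analyser):

    from fractions import Fraction

    def halve_until_zero():
        # Never halts under exact arithmetic: a positive rational
        # halved is still positive, so the loop condition never fails.
        x = Fraction(1)
        while x != 0:
            x = x / 2
        return x

The "higher order" analysis is a one-line invariant: if x > 0 then x/2 > 0, so x == 0 is unreachable and the loop should be refused outright. (Amusingly, with binary floating point the loop *does* halt, because repeated halving underflows to 0.0 after roughly 1075 steps - which only shows the analysis has to happen at the level of the problem, not the hardware.)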

Geoffrey S H said...

@ Raymond:

I did not speak of an explanation for everything per se, just that I thought it was a good sign of intelligence to ponder if there were things greater than us. That's all. The lack of proof doesn't seem to be a problem for me; it's more of a mental preparation for something greater than us one day appearing, be it aliens or whatever.

As concerns believing in a particular God - I take the existence of other gods in religions into account. The image of the Christian God provides a good focusing point for me when thinking about this - I do not imagine that is exactly what a God would look or act like.

Citizen Joe said...

The "calculate pi to the last digit" method of locking up an AI will quickly fall short now that we have parallel processors and can portion out processing power. So, yes, the AI might continue calculating indefinitely, but it would make it a low priority process working in the background. There are also solutions like solving within certain tolerances. But the most likely scenario is that when you ask an AI to solve a math problem, it will likely be a math AI and it will have the higher theories that deal with recursive calculations.

Jedidia said...

"So, yes, the AI might continue calculating indefinitely, but it would make it a low priority process working in the background."

That's still not intelligence, though. Intelligence would imply the AI realising what it is actually doing and concluding that there won't be an end to it, or that there might be better ways to do it. Let's say, for example, that an AI, due to tremendous processing power, can solve pretty complex mathematical problems by brute force iteration (which is what we have today, i.e. not AI, but just fancy calculators).

I would consider it (pretty) intelligent if it suddenly comes up with calculus, and starts to integrate stuff rather than going through a hundred iterations, without having been told a) how calculus works and b) to come up with a better solution in the first place.
Whether it derived all the laws of calculus itself or looked them up at Wolfram is secondary; the realisation that what it is doing currently is very tedious, and the drive to find a more efficient way to do it, is what counts.

I would consider it a full-fledged AI when it can do such problem solving with problems that cannot be mathematically described, and indeed does so without being told, if it has use for it.

And, additionally, can, after spending enough time on a subject and not finding a solution, confidently say that there is no better solution.

Tony said...

Re: Raymond

An intelligence has to be able to learn, a priori, because we define intelligence, at least in part, as the ability to learn from mistakes (and successes). But we also classify being able to learn from others' mistakes as a necessary component of intelligence. So it's not just a question of being able to learn, but a question of being able to abstract and extrapolate where necessary -- to be able to see that an other is like yourself (at least in some respects), that the other can experience the world in much the same way you do, and that the other's experiences can be yours as well, given the same circumstances.

This is all relevant to the issue of being able to form an abstract problem about what is observable and being able to decide if it is tractable. For how do we know what is and isn't tractable except through the experience of others? Yes, there is a bunch of analytical theory built up around the question of computability, but much of it was developed to explain why seemingly tractable problems have no solution.

There is also a lot of theory in AI research about why humans can intuitively recognize solutions that computers can't arrive at except through brute force. AI researchers recognize that it has something to do with satisfying a heuristic. They have no clue how humans pare down the possible classes of solution so quickly and with such accuracy. They certainly can't program computers to do it.

WRT theism vs atheism vs agnosticism, sorry, but I am not convinced that either you or Elukka is an agnostic. An agnostic not only has to admit his own readiness to accept whatever the case may be, but he also has to let go of caring that others choose either path. He has to recognize that faith, and the imperative towards faith, is a human phenomenon beyond his control, just as much a part of his environment as the Sun, the Moon, and the Stars.

I would not call myself a good agnostic, because while I fully accept that the proposition of God is not solvable with human reason, I care greatly that atheists insist that they are being scientific, when in fact all they are doing is asserting the fallacy that absence of evidence is evidence of absence. I can live with my assertive theist friends and not be bothered by their faith, but assertive atheists really piss me off. We all have our little irrationalities. This one is mine.

Rick said...

The 'new atheists' have IMHO struck a powerful but subtle blow for religion. Now that there are obnoxious atheist preachers, atheism has lost its strongest selling point as a haven from obnoxious preachers of whatever brand.

In fact, for that reason among others, I find myself moving toward a 'purer' agnosticism - from 'There is no God! (But I hesitate to assert it dogmatically.)' to 'Is there a God? God only knows.'

Milo said...

Post 1 of 3:


Jedidia:

"A more technical question: What about the halting problem?

Certainly any computer that has any justification for being called "intelligent" would at least have to be able to solve that, and in a reasonable way."


It has been proven that there cannot exist an algorithm which solves the halting problem with a 100% success rate, without any possibility of making mistakes or of giving up and failing to terminate.

The theory does not, however, say anything against the possibility of an algorithm which can do so with a 99% success rate, which a sufficiently good AI program might accomplish.

It should be noted that there exist computations today of which human scientists have been unable to conclusively prove whether they're actually guaranteed to halt for all input values or not. However, these programs are deliberately designed to be tricky and do things you would be unlikely to ever actually care about in practice. The majority of real, practical programs follow well-behaved programming patterns whose halting behavior is generally much easier to determine.
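A minimal sketch of the give-up-after-a-budget heuristic, in Python (my own toy illustration; it assumes programs instrumented to yield once per step):

    def probably_halts(program, budget=1_000_000):
        # A True verdict is certain; a False verdict is only a guess -
        # the program might have stopped on step budget + 1.
        steps = 0
        for _ in program():
            steps += 1
            if steps >= budget:
                return False  # guess: probably non-terminating
        return True  # it actually halted within the budget

    def halts():           # terminates after ten steps
        for _ in range(10):
            yield

    def loops_forever():   # never terminates
        while True:
            yield

    print(probably_halts(halts))          # True (certain)
    print(probably_halts(loops_forever))  # False (a guess)

Feed it programs that mostly halt quickly or obviously loop, and its success rate is high; the small fraction it gets wrong are exactly the tricky cases above.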


"For example, if it gets an instruction to divide 1 by 2, and then divide the result by 2 also and stop when it reaches zero (or a similiar problem, and under the condition that it hasn't encountered that problem before, and doesn't have a hard-coded halting condition for that problem) it should be smart enough to not even start the operation after analysing it on a higher order."

Just quoting to say I fully agree. This is totally something I would expect of an intelligent entity, and is one of the capabilities that separates an AI from a mindless computer.


"I would consider it (pretty) inteligent if it suddenly comes up with calculus, and starts to integrate stuff rather than going through a hundred iterations, without having be told a) how calculus works and b) to come up with a better solution in the first place.
Whether it derived all laws of calculus itself or looked them up at wolfram is secondary, the realisation that what it is doing currently is very tedious and the drive to find a more efficient solution to do it is what counts."


Another quote for agreement.

Milo said...

Post 2 of 3:


Raymond (Re: Yoyo):

"Refresh my memory: what happened at the ~6 million mark, again?"

That's approximately when we diverged from our common ancestors with chimpanzees. (Modern estimates are more like ~7 million years, but whatever.)

Which, mind you, is not actually when we started being noticeably smarter than other primates. Evolution didn't already know at the time that this particular branch was destined to become humans. It's hard to say exactly when the crucial development in this area happened, but the Homo genus began near the start of the Pleistocene, 2.6 million years ago.



Geoffrey S H:

"I did not speak of an expanation for everything per se, just that I thought it was a good sign of intelligence to ponder if there were things greater than us. That's all."

Pondering, sure. Claiming you already know the answer with scant evidence? Not so intelligent. (But apparently a side effect of the processes that lead to intelligence.)



Rick:

"In fact, for that reason among others, I find myself moving toward a 'purer' agnosticism - from 'There is no God! (But I hesitate to assert it dogmatically.)' to 'Is there a God? God only knows.'"

I, meanwhile, recognize that the validity of a viewpoint is mostly independent of the obnoxiousness of the people espousing it.

Milo said...

Post 3 of 3:


Citizen Joe:

"So, yes, the AI might continue calculating indefinitely, but it would make it a low priority process working in the background."

That would require the AI to be intelligent enough to decide which processes have what priority. Making it a low priority process seems to imply it already knows the task is hopeless and it's only bothering to attempt it as a formality. If it knows the task is hopeless, why wouldn't it just stop entirely?



Tony:

"There is also a lot of theory in AI research about why humans can intuitively recognize solutions that computers can't arrive at except through brute force."

And that's "intelligence".

I can't answer where it comes from, but I can give one hint: I think this ability to make intuitive leaps is inherently tied to our ability to make mistakes. There can be cases where being absolutely certain beyond a shadow of doubt that you are correct requires impractically thorough brute force calculation, but merely being fairly sure you are right is vastly easier. We can vastly cut down our computation times by not bothering to be completely thorough.

As a corollary, I consider the notion of an infallible AI (as sometimes appears in fiction) to be a logical impossibility.

Milo said...

Argh. These disappearing posts are getting really annoying. Especially when I wonder how many other people's posts I've missed/how many other people are missing mine...

Raymond said...

Jedidia:

We don't have a solution to the halting problem. If we built an AI that came up with one, it would be well past the "fairly intelligent" point. I'd also be loath to include a solution as a criterion.

I'd also say that for an AI to come up with calculus on its own would easily place it in the upper echelons of human intelligence. Not necessarily something you want as a prerequisite for intelligence.

Raymond said...

Tony:

I always maintained intelligence requires learning - I was asking if it was cheating to program it with higher-level learning functions in place.

As far as the rest of it goes, learning by mimicry and observation of others is present all over the place in mammals, many of whom we don't classify as intelligent. The trap we should avoid is assuming that so many of the particulars of our subjective experience are necessary to the concept. We have robots now which can observe the behavior of other, similar robots, and adjust their own behavior to match (and sometimes counter).

The sort of extrapolation you're talking about doesn't seem to require the type of self-awareness we class as "sentience".

Milo said...

Raymond:

"As far as the rest of it goes, learning by mimicry and observation of others is present all over the place in mammals, many of whom we don't classify as intelligent."

They are vastly more intelligent than my desktop computer.

You're not just "intelligent" or "not intelligent", intelligence is a sliding scale. Humans currently occupy the highest point on the scale (on this planet, anyway), and we variously use terms like "sentient", "sapient", and "sophont" to refer to entities which occupy similarly high levels of intelligence without necessarily being particularly humanlike in other respects. However, I consider anything smarter than a rock to be "intelligent", just not very intelligent.

Many animals can learn by observation of others, fewer can (for example) learn from being told second-hand observations of others.

I feel - as a gut instinct, with no evidence - that once we've developed vertebrate-intelligence AI (if ever), it will probably be a fairly short time before we manage to scale it up to human-intelligence AI.

Jim Baerg said...

Tony: "asserting the fallacy that absence of eveidence is evidence of absence."

Whether it is a fallacy depends on how exhaustively the search has been done, & how exhaustively it needs to be done.

Absence of evidence for an elephant in my living room is pretty good evidence for absence of an elephant there.

Similarly, though less overwhelmingly, the absence of better evidence for the sasquatch than a few blurry pictures is pretty good evidence for the absence of any ape-like creature living in North American forests. If there were a breeding population of such animals, there would be sasquatch roadkill by now.

The lack of results in SETI programs so far only limits how conspicuous & nearby any intelligent ETs could be, but the universe is so big that many ET civilizations could be beyond the searches so far done.

The Problem of Evil limits how powerful &/or how benevolent any existent God might be. Note: the 'free will' defense amounts to a limit on God's power & only applies to evils done by humans & is no help against the argument from such evils as disease.

I'm what might be called a Teapot Atheist: I'm an atheist until I encounter some decent evidence for a God. Similar to the way I might call myself an 'asasquatchist'.

/endrant

Milo said...

Jim Baerg:

"Absence of evidence for an elephant in my living room is pretty good evidence for absence of an elephant there."

The questions here are, (A) how plausible is it for an elephant to hypothetically get into your living room (for example, based on the presence or absence of elephants in the wilderness near your house, and whether the size of your living room door is sufficiently large to admit an elephant), and perhaps more relevantly to our discussion, (B) how likely it is that, if there were in fact an elephant in your living room, you would have noticed.

So can we be reasonably confident that if God did exist, we would have evidence? I think not. I do, however, feel reasonably confident that if God did exist and cared about us knowing he exists, then we probably would. The fact that all religions claiming to have evidence of his existence offer only "blurry sasquatch pictures" makes me feel reasonably confident dismissing this position.

Raymond said...

On faith:

"We claim the privilege of worshipping Almighty God according to the dictates of our own conscience, and allow all men the same privelege; let them worship how, where, or what they may."
-LDS article of faith #11

I'm not Mormon anymore, but I still believe this, occasional blasphemous snark aside. (Just to clarify, in case anyone tries to lump me in with Dawkins or other such fatuous gasbags.)

On the absence of evidence:

In Bayesian reasoning, at least, absence of evidence is very much evidence of absence, weighted according to the probability of the test in question producing a false negative. In the case of gods vast and indifferent, it's hard to devise a test which could directly contradict the existence of such, but the probability of said god(s) having an impact on terrestrial events is correspondingly low.
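To put toy numbers on that (the figures are mine, purely for illustration):

    def posterior_after_null_result(prior, p_detect_if_true):
        # Bayes' rule for a negative test. p_detect_if_true is the
        # chance the test would have found evidence were the hypothesis
        # true (1 - false negative rate); assumes no false positives.
        p_null_if_true = 1.0 - p_detect_if_true
        numerator = p_null_if_true * prior
        return numerator / (numerator + (1.0 - prior))

    # Elephant in the living room: the "test" almost never misses.
    print(posterior_after_null_result(0.5, 0.999))  # ~0.001
    # A god vast and indifferent: the test almost always misses,
    # so the null result barely moves the needle.
    print(posterior_after_null_result(0.5, 0.001))  # ~0.4998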

Tony:

No, I'm not really agnostic, and I don't tend to identify as such - mostly because I've decided for myself that I'm reasonably confident in the nonexistence of any god which would influence the manner in which I live. Other, more esoteric definitions of god don't particularly interest me, due to the difficulty of constructing suitable testable hypotheses. I don't find that a matter of having faith in the negative, so much as coming to a reasoned conclusion that faith in either possibility is unnecessary.

Tony said...

I'll leave it at this: I find it enormously intriguing that theists can understand and come to grips with their faith in ways atheists resist to their last breath...

Tony said...

Re: Raymond

As Milo pointed out, we don't have to apply black and white criteria to intelligence. But we do recognize that there is a minimum level of intelligence that we are shooting for with AI, and human-level reasoning ability is it.

Raymond said...

Gah. Stupid disappearing posts. I'll wait a bit, then try again (apologies in advance to those who get dupes).

Carrying on, then...

Milo:

"You're not just "intelligent" or "not intelligent", intelligence is a sliding scale. Humans currently occupy the highest point on the scale (on this planet, anyway), and we variously use terms like "sentient", "sapient", and "sophont" to refer to entities which occupy similarly high levels of intelligence without necessarily being particularly humanlike in other respects. However, I consider anything smarter than a rock to be "intelligent", just not very intelligent."

Slip of the tongue, actually - through most of this discussion I've been using "sentient".

What I was getting at is that learning by mimicry is present in organisms which do not meet the standards of sentience commonly used (and which we seem to be requiring of potential AIs).


Tony said...

Raymond:

"What I was getting at is that learning by mimicry is present in organisms which do not meet the standards of sentience commonly used (and which we seem to be requiring of potential AIs)."

Mimicry is done in response to rewards, either explicit or implicit. And mimicry can be punished as easily as it can be rewarded. For example, recent research shows that there are distinguishable, "tribal" techniques and strategies present in the way both chimpanzee clans and lion prides go about their business. It has also been observed that success relative to other groups can be affected by these techniques and strategies. As the environment in a region changes over time, approaches that used to work can start to fail for a group, while competing groups using different approaches can gain advantages. So simply being able to mimic something some other entity does is not necessarily learning.

Jedidia said...

"We don't have a solution to the halting problem. If we built an AI that came up with one, it would be well past the "fairly intelligent" point. I'd also be loath to include a solution as a criterion."


So you'd suggest classifying a mind that can hang itself up, and needs to be rebooted because it can't see that it's going anywhere, as intelligent? That just doesn't seem right.

"I'd also say that for an AI to come up with calculus on its own would easily place it in the upper echelons of human intelligence. Not necessarily something you want as a prerequisite for intelligence."

No, not as a criterion of human intelligence, of course not. I could just throw my own brain in the bin then, considering that I have not yet been able to understand calculus, much less invent it (in my defense, I never had it in school, and never had enough incentive to learn it on my own).

BUT: What we're looking for here is an AI that could lead to a singularity, i.e. one that is vastly more intelligent than humans. I'd already be *very* impressed if an AI could learn calculus by looking it up, as I said. Indeed, I'd be outright baffled, especially if no one told it to do so and it did so in an attempt to find a quicker way to do its job. I'll grant that machine human intelligence without a second thought.

Tony said...

Jedidia:

"I'd already be *very* impressed if an AI could learn calculus by looking it up, as I said. Indeed, I'd be outright baffeled, especially if noone told it to do so and it did so in an attempt to find a quicker way to do its job. I'll grant that machine human inteligence without a second thought."

Calculus requires reasoning beyond the nature of the computer. Computers are discrete in nature, whereas calculus is about accurately quantifying a continuous process. A computer might look at a classic integration problem, for example, and see it as a question of raw power. Since a computer is not perfectly accurate with continuous numbers -- and a computer AI would likely know this about itself -- in order to achieve acceptable accuracy it might just concentrate on achieving a sufficient number of iterations, over sufficiently narrow slices, in an acceptable period of time.

Note that this is not a fundamentally invalid approach. In fact, when we, with full knowledge of calculus, encounter a non-integrable problem in trajectory analysis, we just hack it up into thousands or millions of slices and iterate over them until we achieve an answer. The canonical design purpose for ENIAC was in fact to calculate artillery ballistics tables in just this fashion.

Prior to computers being able to do this, we had bullpens full of human "computers" to do the job -- young women with high arithmetic speed and accuracy. A ballistics expert would set the problem up in classical form, then break it up into simple arithmetical components and hand it off to the computers, who would run as many iterations as required to achieve a solution.

A person with a knowledge of calculus might point out that for cases known to be integrable, following the rules of calculus is faster and more accurate than iteration. That is true, but the computer AI without knowledge of calculus would not know that except through a formal introduction to calculus. It would consider human-integrable problems a subset of the overall problem space. This isn't necessarily unintelligent; it's just a matter of perspective.
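The brute-force approach is easy to make concrete in a few lines of Python (a sketch of the idea only, not of ENIAC's actual procedure):

    def integrate_by_slices(f, a, b, n=1_000_000):
        # Definite integration by sheer iteration: chop [a, b] into
        # n thin slices and sum their areas - the same arithmetic the
        # human computers ground through by hand.
        h = (b - a) / n
        return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

    # A human with calculus integrates x^2 over [0, 1] exactly: 1/3.
    # The machine just iterates and gets there anyway.
    print(integrate_by_slices(lambda x: x * x, 0.0, 1.0))  # ~0.33333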

Milo said...

Re: the halting problem, one of my filtered posts is back up above. Search for "99%" and you'll find it.



Tony:

"Computers are discrete in nature, whereas calculus is about accurately quantifying a continuous process."

Humans performing integration do not think directly in terms of either discrete or continuous numbers, but rather in terms of functions and symbols.

In fact, today's symbolic computation packages constitute expert systems capable of simulating a limited skill at calculus.
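For instance, with the freely available SymPy package (one such system; Mathematica and Maple do the same and more):

    import sympy as sp

    x = sp.symbols('x')

    # The package manipulates symbols, not slices:
    print(sp.integrate(x**2, x))                            # x**3/3
    print(sp.integrate(sp.exp(-x**2), (x, -sp.oo, sp.oo)))  # sqrt(pi)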


"in order to achieve acceptable accuracy it might just concentrate on achieving a sufficient number of iterations, over sufficiently narrow slices, in an acceptable period of time."

It wouldn't be able to significantly improve the number of iterations and the narrowness of slices without physically upgrading its hardware to run faster. Maybe a little, by optimizing its code, but I don't expect orders of magnitude improvements can be made in that area, unless the AI is smart enough to find shortcuts that bypass the numerical integration process entirely.

Raymond said...

Milo:

"I can't answer where it comes from, but I can give one hint: I think this ability to make intuitive leaps is inherently tied to our ability to make mistakes. There can be cases where being absolutely certain beyond a shadow of doubt that you are correct requires impractically thorough brute force calculation, but merely being fairly sure you are right is vastly easier. We can vastly cut down our computation times by not bothering to be completely thorough."

What you seem to be referring to is the complexity class BPP (bounded-error probabilistic polynomial time), which covers the problems we can solve in polynomial time to within a given error probability using methods like Monte Carlo simulation.

We can also say similar things about genetic algorithms (sometimes with a few heuristics thrown in), which operate under some of the same principles. The trick comes in recognizing which problems are suited to which approach, and thus I think any AGI worth its salt will either have a complexity analysis function baked in, or be taught one early on.
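The classic toy example of that trade (mine, not part of Raymond's point) is Monte Carlo estimation - you buy accuracy with samples, never certainty:

    import random

    def monte_carlo_pi(samples=1_000_000):
        # Bounded-error probabilistic computation in miniature: the
        # estimate is almost surely close, never guaranteed exact, and
        # the expected error shrinks like 1/sqrt(samples).
        hits = sum(
            1
            for _ in range(samples)
            if random.random() ** 2 + random.random() ** 2 <= 1.0
        )
        return 4.0 * hits / samples

    print(monte_carlo_pi())  # ~3.14, typically within ~0.005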

Jedidia:

"So you'd suggest to classify a mind that can hang itself up and needs to be rebooted because it can't see that it's going anywhere as inteligent? That just doesn't seem right."

This is why we invented multiprocess computing. There are plenty of ways to avoid hard lock, and we already use them.

Besides - humans spend plenty of time on seemingly intractable problems. How many man-hours were lost to Fermat's Last Theorem?

Tony said...

Milo:

"Humans performing integration do not think directly in terms of either discrete or continuous numbers, but rather in terms of functions and symbols."

Yes and no. One may not think about the logical underpinnings of calculus when applying the rules. But part of any calculus course is an explanation of the fundamental problem being addressed -- which is that natural processes are continuous while algebraic analysis is discrete. I was simply suggesting that a computer AI, aware of its mathematically discrete nature, might not see the discrepancy between curves and rectangles as a problem to be solved, but rather as a matter of achieving acceptable accuracy at acceptable speed.

"It wouldn't be able to significantly improve the number of iterations and the narrowness of slices without physically upgrading its hardware to run faster..."

Why is this a problem for the computer? If it needs more/better hardware, it asks for it, right? We built it to solve problems, and it can't solve problems without the proper tools.

But that's beside the point. I'm addressing a conceptual issue here -- does the development, or even the mere recognition, of calculus necessarily indicate a higher level of intelligence than being able to come up with the brute force solution? As already pointed out, we brute force our way through interesting and relevant problems all of the time, we did so even before we had computers, and in fact developed several early computers to automate the process. A computer AI might look at that and say that calculus, from the AI's perspective, is an optimization tool for simple problems, but for interesting real-world problems it's useless.

Milo said...

Tony:

"Why is this a problem for the computer? If it needs more/better hardware, it asks for it, right?"

I would not consider this a sign of intelligence unless the computer could design the improved hardware itself.

Even if it could, this is a more expensive solution than thinking of a way to solve the problem with the available hardware.


"But that's beside the point. I'm addressing a conceptual issue here -- does the development, or even the mere recognition, of calculus necessarily indicate a higher level of intelligence than being able to come up with the brute force solution? As already pointed out, we brute force our way through interesting and relevant problems all of the time, we did so even before we had computers, and in fact developed several early computers to automate the process."

Some problems are better solved with brute force, and some are better solved with careful analysis. An intelligent mind would recognize this and would be able to determine on a case-by-case basis which problems require which strategy. It would also be able to break complex problems down into component steps, some of which are solved with brute force and some with careful analysis.

Raymond said...

Tony:

"...simply being able to mimic something some other entity does is not necessarily learning."

It's learning, all right - at least of a sort. A more useful version would be incorporating observed behaviors into the library of possible solutions, and either varying the observed behavior or attempting multiple behaviors when the primary solution fails. All of which can be done procedurally, given a sufficiently sophisticated semantic structure.

"As Milo pointed out, we don't have to apply black and white criteria to intelligence. But we do recognize that there is a minimum level of intelligence that we are shooting for with AI, and human-level reasoning ability is it."

Human-level reasoning shouldn't be confused for human-style reasoning. As you point out here:

"A computer AI might look at that and say that calculus, from the AI's perspective, is an optimization tool for simple problems, but for interesting real-world problems it's useless."

And I think that would be an entirely valid response. It would also be a distinctly non-human response, the idea of which I think we should get used to rather than chase after the dream of a full human-like sentient AI.

Raymond said...

Milo:

"I would not consider this a sign of intelligence unless the computer could design the improved hardware itself."

Why not? If you give the AI a timeframe, and it responds that it can't solve the problem in that timeframe without additional hardware, that's still useful.

I think that assumption lies at the core of how we picture the next accelerando (especially when using the term Singularity). The expectation is exponential recursive self-improvement, instead of the logarithmic version we're likely to get.

Milo said...

It is easy with today's numerical computation programs to estimate how long they will take to calculate a problem. Just take the computation time of a few slices, and scale by the number of slices in the computation.
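In Python the estimate is a few lines (a toy sketch; the workload below is made up):

    import time

    def estimate_total_runtime(step, total_slices, sample=10_000):
        # Time a small sample of slices, then scale linearly - the
        # work per slice is uniform, so the extrapolation holds.
        start = time.perf_counter()
        for i in range(sample):
            step(i)
        per_slice = (time.perf_counter() - start) / sample
        return per_slice * total_slices

    # Estimated seconds for a billion slices of a dummy workload:
    print(estimate_total_runtime(lambda i: (i * i + 1) % 97, 10**9))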

I would not be impressed by an AI whose first and only solution to a difficult problem is simply "ask a human for help", unless it is very specific in which human to ask and what to ask. I can write a computer program that can solve arbitrarily complex problems by asking a human operator to enter the answer right now. It would also be about as useless as a "Hello world!" program.

Tony said...

Re: Milo

Can you write a computer program that can abstract a ballistics problem into mathematical symbology, realize that it is non-integrable, and ask for support in brute force analysis? The US Army ballistics experts that asked for ENIAC to be built didn't design and build the thing themselves, yet they were very intelligent men.

Milo said...

In order to "abstract a ballistics problem into mathematical symbology", it needs to understand physics and symbolic math, and in order to "realize that it is non-integrable", it has to know calculus (to have a notion of integrability).

This computer is intelligent for the things it is doing besides asking for more hardware. It is also true to the spirit, if not the letter, of Jedidia's ideas on intelligence (re: the computer looking up calculus to better do its job).

Tony said...

Milo:

"In order to "abstract a ballistics problem into mathematical symbology", it needs to understand physics and symbolic math, and in order to "realize that it is non-integrable", it has to know calculus (to have a notion of integrability)."

Not quite. The AI would need its own understanding of physics and mathematics, not necessarily a human one. Likewise, the computer needs its own understanding of integration, not a human one. As stated previously, this understanding would more likely approach piecewise, brute force analysis for all functions, rather than classical antiderivative integration for simpler functions and brute force for complex ones.

Scott said...

"Why is this a problem for the computer? If it needs more/better hardware, it asks for it, right?"

"I would not consider this a sign of intelligence unless the computer could design the improved hardware itself."

So, should I consider myself non-intelligent because I can't create calculus? Or conceptualize relativity? That's a really high bar for intelligence. Most humans cannot meet that requirement.

Recognizing that you don't have a good tool and getting a new one is a sign of intelligence. If you can *find* the tool that someone else has made, why reinvent the wheel?

jollyreaper said...

"The 'new atheists' have IMHO struck a powerful but subtle blow for religion. Now that there are obnoxious atheist preachers, atheism has lost its strongest selling point as a haven from obnoxious preachers of whatever brand."

How new do you mean? There have always been obnoxious, evangelical atheists around. :) I've never seen them as obnoxious because of their atheism so much as obnoxious people for whom atheism is just one of the things they're obnoxious about.

It's very true that you can't prove a negative, and I prefer Russell's Teapot argument. "Ok, so you've given me all of these explanations of how there's a teapot orbiting between Jupiter and Mars and how it cannot be detected or measured in any meaningful way. It may or may not exist; neither of us can prove it one way or the other. But I would point out that something so distant and insignificant would have no bearing on my life whether or not it did exist, so I do no violence to myself simply assuming it does not exist. If you can later furnish evidence, I will reevaluate its significance to me."

I don't really have a problem with what someone believes for himself, but I do take exception when it affects the way I live my life. Religion is a bit like smoking: if you aren't being polite about it, everyone around you is going to suffer for it. When stem cell research and abortion are blocked by religious people on religious grounds, when people are discriminated against based on sexual orientation, this is religion overstepping bounds.

"In fact, for that reason among others, I find myself moving toward a 'purer' agnosticism - from 'There is no God! (But I hesitate to assert it dogmatically.)' to 'Is there a God? God only knows.'"

Pretty much where I'm at. It's impossible for me to say whether or not there was a creator of the universe, if there is a human soul, an afterlife, etc. But I can most certainly say we have no evidence of any of that. I would also venture the opinion that if there is a god, it most certainly isn't any of the ones described in our religious books. And barring evidence for the existence of a creator god who has a direct and personal interest in how we live our lives, we may as well act as if he/she/it doesn't exist. But my world view wouldn't be shattered if incontrovertible proof were given. I daresay some of the more militant atheists would have a serious crisis of unfaith. I'm sure there's a hell of a story idea in that scenario.

Just one disclaimer, though. I've only personally known one or two militant atheists and can only think of a handful of modern-day public figures who would qualify as such. Chris Hitchens is obnoxious. Richard Dawkins can be a little peevish but means well. Sam Harris is assertive but I think he remains reasonable. Believers might consider him an even bigger jerk because his logic and reasoning are incisive and uncompromising. But in terms of raw numbers there are vastly more obnoxious religious types, simply because there are so many more believers.

Tony said...

jollyreaper

"When stem cell research and abortion are blocked by religious people on religious grounds, when people are discriminated against based on sexual orientation, this is religion overstepping bounds."

These are all values issues, Young Jedi. That's why they are subject to political discourse. Now, I don't know where you live, but here in the US it's a free country, and people are allowed to derive their values from whatever source(s) they wish. If you say they can't derive their values from their religion, you are saying they can't have their religion.

Also, it's not just about religion. I know plenty of people who have values issues with abortion that have nothing whatsoever to do with religion. The same goes for supposed homosexual rights. No widely held opinion is ever just about one thing, and one thing only.

Raymond said...

Milo:

"Some problems are better solved with brute force, and some are better solved with careful analysis. An intelligent mind would recognize this and would be able to determine on a case-by-case basis which problems require which strategy. It would also be able to break complex problems down into component steps, some of which are solved with brute force and some with careful analysis."

"In order to "abstract a ballistics problem into mathematical symbology", it needs to understand physics and symbolic math, and in order to "realize that it is non-integrable", it has to know calculus (to have a notion of integrability)."


We don't actually need AI for a good chunk of that. Mathematica (and other mathematics software) already uses some form of the Risch algorithm to determine whether a function's indefinite integral can be expressed in elementary functions. Yes, it's been programmed that way - my point is that the distinction can be made procedurally.
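
To make that concrete - a minimal sketch, assuming sympy and its partial implementation of the Risch algorithm (Mathematica's internals differ; this is my illustration, not Wolfram's code):

import sympy as sp
from sympy.integrals.risch import risch_integrate, NonElementaryIntegral

x = sp.symbols('x')
result = risch_integrate(sp.exp(x**2), x)
# A NonElementaryIntegral result is a proof, not a guess, that
# exp(x**2) has no antiderivative in elementary functions.
print(isinstance(result, NonElementaryIntegral))  # True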

As for a notion of calculus, the algorithm I mentioned upthread uses partial differentials to find invariants (or at least invariant expressions) in a dataset. It randomly creates expressions from elementary functions, calculates their derivatives, compares them to the data, then probabilistically varies the subexpressions until it has found a set of expressions matching the data at varying levels of error and complexity.

Only a little bit of calculus required, and this thing can find Hamiltonians and Lagrangians in chaotic double-pendulum measurements. Perhaps our style of reasoning isn't necessarily the optimal one.
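
For the curious, here's a toy sketch of the idea - my own illustration, not the published implementation. I'm assuming sympy, a single pendulum for simplicity (the published work handled chaotic double pendulums), and plain random search standing in for their probabilistic mutation of subexpressions. The true invariant here is the energy E = v**2/2 - cos(x):

import math
import random
import sympy as sp

x, v = sp.symbols('x v')
LEAVES = [x, v, sp.Integer(1)]
UNARY = [sp.sin, sp.cos, lambda e: e**2]
BINARY = [lambda a, b: a + b, lambda a, b: a - b, lambda a, b: a * b]

def random_expr(depth=3):
    # Build a random expression tree from elementary functions.
    if depth == 0 or random.random() < 0.3:
        return random.choice(LEAVES)
    if random.random() < 0.4:
        return random.choice(UNARY)(random_expr(depth - 1))
    return random.choice(BINARY)(random_expr(depth - 1), random_expr(depth - 1))

def score(expr, data):
    # On a level set f(x, v) = const, the implicit ratio dv/dx must equal
    # -(df/dx)/(df/dv); compare that prediction to measured derivatives.
    fx, fv = sp.diff(expr, x), sp.diff(expr, v)
    if fv == 0:
        return float('inf')  # expression can't constrain v at all
    ratio = sp.lambdify((x, v), -fx / fv, 'math')
    err = 0.0
    for x0, v0, dxdt, dvdt in data:
        try:
            # atan compresses both ratios so outliers don't dominate
            err += (math.atan(ratio(x0, v0)) - math.atan(dvdt / dxdt)) ** 2
        except (ZeroDivisionError, ValueError, OverflowError, TypeError):
            return float('inf')
    return err

def pendulum_data(n=50):
    # Sample (x, v, dx/dt, dv/dt) from the pendulum ODE x' = v, v' = -sin(x).
    return [(x0, v0, v0, -math.sin(x0))
            for x0, v0 in ((random.uniform(-2, 2), random.uniform(-2, 2))
                           for _ in range(n))]

random.seed(0)
data = pendulum_data()
best = min((random_expr() for _ in range(2000)), key=lambda e: score(e, data))
print(sp.simplify(best), score(best, data))

Note that any monotone transformation of a true invariant also scores zero, so the search finds the energy only up to such a transformation - which rather supports the point that the machine's "notion" of a law needn't look like ours.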

Thucydides said...

Moving back from religion, NextBigFuture has an interesting article which revises the timeline of the Industrial Revolution: http://nextbigfuture.com/2010/12/medieval-england-had-1000-per-capita.html

If England was as wealthy as the article reports, then the accumulation of capital and technique leading to the Industrial Revolution was not a sudden process (accelerando) but rather a natural flowering of long-term trends.

The question then becomes: what sort of long-term trends are in effect now? The long-term destruction of capital via the huge debts governments the world over have run up, and the trillions in unfunded liabilities, are obvious negative trends; lack of capital will hamper any accelerando (but might also create conditions for an accelerando of low-capital techniques). A plausible midfuture of cottage industries connected by the internet isn't very conducive to a Rocketpunk future, but might be all we can get.

jollyreaper said...


These are all values issues, Young Jedi. That's why they are subject to political discourse.


No, they're civil rights, and that sort of thing is never to be put up for a popular vote. We might take our time figuring out that people should have them, and it might take a vote to get us there, but we'd never countenance taking them away by vote. We aren't going to vote black people back to the slave pens, and we're not going to vote to declare Jews non-citizens.

Reproductive freedom for women is a civil rights issue. While one's religion will doubtless have an influence on thought, strictly religious grounds do not give you a right to impose your views upon others.

Don't like gay sex? Don't have gay sex. Don't like gay marriage? Don't get gay married. Don't like abortion? Don't have one. Don't like dark beers? Don't drink dark beer. But don't tell someone else how to live his life.

If we're going to accept the anti-abortionist position that life begins at conception, when all you've got is a zygote, then we may as well accept the Pastafarian belief that every hair is a living being and you should never cut or shave, because it's murder. Hey, my beliefs, you can't oppress me. As for Reform Pastafarians who believe that hair represents the sin coming out of our bodies due to our lustful thoughts, that only ritual depilation keeps us pure, and that everyone should be depilated - you have to respect their beliefs as well, even as we Pastafarians condemn them for being dirty rotten heretics.

Now, I don't know where you live, but here in the US it's a free country, and people are allowed to derive their values from whatever source(s) they wish. If you say they can't derive their values from their religion, you are saying they can't have their religion.

Live however you want, don't impose your values on others. I enjoy raisins on their own but cannot abide their presence in any baked good. I consider an oatmeal chocolate chip cookie to be a little slice of heaven and the presence of raisins in that sublime cookie is an abomination. Therefore I don't put raisins in my cookies. No one having a cookie in my house will find a raisin in it. But I'm not picketing the local bakery with signs saying God Hates Raisins. I'm not denying anyone else the right to enjoy raisins in their baked goods, even though I know they're wrong.

Also, it's not just about religion. I know plenty of people who have values issues with abortion that have nothing whatsoever to do with religion. The same goes for supposed homosexual rights. No widely held opinion is ever just about one thing, and one thing only.

"Supposed heterosexual rights." Well there you go, that pretty much sums your POV up in three words.

jollyreaper said...


But take this blog as an example: Rick publishes his posts and we read them. But we can also answer them and comment on them, and Rick takes part in the ensuing discussion and comments too.
But the discussion doesn't happen only between Rick and his readers: it happens mostly between the readers!

And a background topic isn't even necessary to connect people with similar interests, as social networking sites are demonstrating.

That is the real change that the Internet has just begun to bring to society: it allows, for the first time, easily connecting people who didn't know about each other before. This may have been possible before the Internet, but it definitely wasn't as easy.


Very excellent point. I'm inclined to say "scifi is dead, long live scifi."

The traditional mags like Analog and Asimov's have been suffering declining circulation. I'm not even sure if either of them is still in circulation. I was an avid reader when I was a kid. I would buy the back issues at garage sales and used bookstores and tear through them. I had an actual subscription in college and let it lapse simply because I couldn't keep up with all the other things on my plate.

I enjoyed the letters column but felt frustrated by the slow exchange of information. Writing a letter to them may as well have been writing in a personal journal: you had no idea if anybody would ever read it.

The business model that supported the magazines may be dead, but the spirit of curiosity and inquiry at the heart of scifi is still alive and well. And the craziest damn thing of all is that keeping up this level of communication is almost free. Magazines need offices, they need to pay staff, and there's the cost of printing and postage - tons of overhead to support the process. The internet reduces that cost to almost nothing.

The only sticking point is that it's hard to make a living at this sort of thing right now. Not that the classic scifi authors were rolling in bucks but a few cents a word is more than people are making off blogging at this point. Still, there are people making it in the online world. Web comics like Penny Arcade are paying the rent for the two creators.

The amount of networking the net allows is phenomenal. I was probably the only person reading scifi in my high school. Now I can come onto a blog like this and not only meet someone who's heard of the book I'm talking about, but someone who probably has a take on it I'd never even considered and will be blown away by.

The future is frackin' awesome aside from the cyberpunk dystopian parts that are coming true.

jollyreaper said...

The question then becomes: what sort of long-term trends are in effect now? The long-term destruction of capital via the huge debts governments the world over have run up, and the trillions in unfunded liabilities, are obvious negative trends; lack of capital will hamper any accelerando (but might also create conditions for an accelerando of low-capital techniques). A plausible midfuture of cottage industries connected by the internet isn't very conducive to a Rocketpunk future, but might be all we can get.

I make no warranties but here's a discussion along those very lines. Some really brain-stretchy ideas. Even if you completely disagree with what's discussed, you'll have a good time refuting it all.

http://knol.google.com/k/paul-d-fernhout/beyond-a-jobless-recovery

"This article explores the issue of a "Jobless Recovery" mainly from a heterodox economic perspective. It emphasizes the implications of ideas by Marshall Brain and others that improvements in robotics, automation, design, and voluntary social networks are fundamentally changing the structure of the economic landscape. It outlines towards the end four major alternatives to mainstream economic practice (a basic income, a gift economy, stronger local subsistence economies, and resource-based planning). These alternatives could be used in combination to address what, even as far back as 1964, has been described as a breaking "income-through-jobs link". This link between jobs and income is breaking because of the declining value of most paid human labor relative to capital investments in automation and better design. Or, as is now the case, the value of paid human labor like at some newspapers or universities is also declining relative to the output of voluntary social networks such as for digital content production (like represented by this document). It is suggested that we will need to fundamentally reevaluate our economic theories and practices to adjust to these new realities emerging from exponential trends in technology and society."

Tony said...

Re: jollyreaper

Keeping this as short as possible, in order to avoid derailing the topic...

Abortion and gay "rights" are both about values. They are not rights.

WRT abortion:

It's about values because it all depends on where you set the beginning of human life. At this point there is no scientific or legal consensus about that issue. Some would put it at the point where a fetus would be a viable human outside of the womb. Well...that changes every year, with improved neonatal care. Some would say it's where the fetus shows a human physical form and/or where it shows nervous activity. Some would say that it's at the moment of conception.

The law certainly has no idea. When you murder a pregnant woman in the third trimester, you're certainly going to be charged with the murder of the child as well. If you're a mother who does stupid stuff during a pregnancy, your child can come at you later in life in civil court for damages, no matter how early or late in the pregnancy the alleged harmful act was done. Can a company involuntarily remove a pregnant woman from a job in contact with teratogenic substances? Apparently not, because the woman has rights -- but after the birth the woman can sue the company for not protecting her child, and the child can also sue when it is in a position to do so.

About the only thing that is consistent is that you're a human being if your mother chooses to make you one. The issue is whether or not she has a right to decide what is and isn't human, and where that right begins, if it exists at all. And since science and the law have no consensus, it's all a question of values.

WRT gay "rights":

I don't expect to have any special rights because I am heterosexual. Gays shouldn't expect to have any special rights because they are homosexual.

Taking gay marriage specifically, since that's what you seem to be so exercised about: marriage is not a right. It's a social institution. Society has determined that the institution has limits: age, consanguinity, the number of spouses you can have at once, etc.

And don't talk to me about love. I'm sure two sixteen year olds can be in romantic love. I'm sure a whole bunch of first cousins (where that's a legal issue) love each other. I'm sure that a man in a polygamous marriage can love every one of his wives. It's just not allowed.

If society decides that marriage is between a man and a woman, then that's what it is, and nobody has a right to say otherwise.

And please let's stop this breathless nonsense about not forcing one's values on somebody else. That's what voting is.

Raymond said...

Tony:

"...Gays shouldn't expect to have any special rights because they are homosexual...."

...except that they're not really asking for special rights. Then again, most of my queer friends are still more concerned with not getting their heads kicked in, and some of them have actually made arguments that the queer community has become obsessed with the marriage issue, while neglecting more pressing problems.

"...marriage is not a right. It's a social institution...."

...except that in the form practiced for the last several centuries it's been a religious institution backed by the state, and carries a number of benefits enshrined in law and supported by related conventions in business and civil law.

"...I'm sure that a man in a polygamous marriage can love every one of his wives. It's just not allowed...."

...except that the view of polygamy as a concept must be disentangled from issues of child abuse, statutory rape, spousal abuse, and unlawful confinement which seem to occur simultaneously in the few, isolated, and secretive groups in America who admit to the practice. Don't get me wrong - I've got issues aplenty with fundamentalist Mormons, but most of those have to do with theology and women's rights, not the idea of multiple spouses.

"...And please let's stop this breathless nonsense about not forcing ones values onsomebody else. That's what voting is....

...except for the Constitution and the Supreme Court, both of which exist to ensure the voting majority doesn't get carried away. Didn't we already discredit this argument when the Jim Crow laws were struck down? Hasn't there long been legal precedent, stretching back decades, that voters don't get to do whatever the hell they want?

And have there been any arguments against queer rights that aren't religious in nature (or at least in origin)?

Tony said...

Re: Raymond

Let's be perfectly clear here -- homosexuals have exactly the same right not to be physically harassed or economically or politically discriminated against that anybody else has. We don't have to adduce any new or special rights for that.

Where marriage is concerned, yes, it receives special state and business sanction. But so what? More than a few restricted practices do:

Armed forces membership (weight standards, physical fitness standards, educational standards for officers, good citizenship standards...)

Female athletes (who don't allow male reporters in their locker rooms, even though men can't keep women reporters out of theirs)

Not-for-profit corporations

All of these restrict rights of membership or access, but they receive the public's sanction because the public sees a social utility in their doing so. Marriage is and should be no different.

WRT polygamy, I suggest that what people see on the news or in Big Love is highly sensationalized and unrepresentative. Some polygamous communities hereabouts (Southern Utah) are highly insular and practice highly questionable to outright illegal customs. But many polygs are distributed throughout the larger community and are good citizens. You see them at Wal-Mart, Golden Corral, the movies, etc. Yet every single "marriage" has only one legal wife, and the "sisters" are outside the legal family, even if they are key components of the de facto one. That's just the way it is.

WRT the Constitution, as pointed out above, it doesn't get in the way of public sanction of restricted social institutions. It's a matter for the public to decide what their institutions constitute, and whether or not those institutions gain public sanction.

Citizen Joe said...

Ya... umm... I stepped out for a bit - when did AIs start having gay polygamous relations? Actually, never mind, they've always been polygamous... and I guess they only have one sex... carry on. :)

Tony said...

Raymond:

"And have there been any arguments against queer rights that aren't religious in nature (or at least in origin)?"

Didn't mean to seem to be dodging this one. I just forgot to address it earlier.

That's easy:

Homosexuality is not, in many people's opinions, psychologically normal. Please notice that I did not say "unnatural". I said not normal. It's perfectly natural to be homosexual. It's also perfectly natural to be a lot of other things. That doesn't mean those things are normal.

And no, that doesn't mean homosexuals have a condition that can be "cured". They are who they are.

jollyreaper said...

Hmm. I prefer a simple morality based upon minimizing suffering. You put two rules together and you can have a satisfactory morality.

1. Do unto others as you would have them do unto you.
2. And it harm none, do what thou wilt.

The long-form commentary will be about determining what constitutes harm, but in simple terms it's harm to self and harm to others. Are drugs bad? Depends. Can you use them without hurting yourself? If not, you should probably avoid them. There's no safe dosage for something like heroin. Something like pot can be consumed in moderation without harm, like alcohol. But some people can't even look at sweets in a healthy fashion; they should avoid that sort of thing.

So as far as sex goes, is rape acceptable? No, it's harming someone else. Can you have sex with a child? Depends on how you define child and how you define harm. There's no adulthood test for people to pass in this country so we set the limit at an arbitrary 18. It has been younger in the past. We determine someone 18 or older should be responsible for the consequences of their own actions unless a judge has decided they are incompetent and should be the ward of someone else.

As far as homosexuality goes, is anyone being hurt if it's between two consenting adults? No. Is there any damage to society? No. You can't go by religious assertions, you have to go with what the sociologists and shrinks tell us. There's no damage. It causes no harm.

To argue otherwise is no different from all the learned men of the past who engaged in scientific racism, the ones who assured us that negroes were jovial simpletons who needed the firm hand of the white man to guide them to productive labor. They had all sorts of arguments that wouldn't hold up to actual analysis. So long as you didn't actually examine them, they made sense.

Tony said...

Re: jollyreaper

Most eccentricities cause no harm. (And that's about where I place homosexuality -- eccentricity.) But if your eccentricity disqualifies you for a particular station in life, then that's just too bad.

It simply isn't true that anybody can do whatever he pleases as long as he thinks he's causing no harm. It's up to society to decide if he's causing harm. If society says that homosexual marriage causes harm, then that's that. The perceived public good outweighs the individual right.

And before you go there, no, Constitutional rights are not absolute. The Supreme Court has ruled numerous times in favor of the public good over the private right.

Konzept said...

I think an upcoming accelerando might be in nanotechnology. For example, the cost of carbon nanotubes has decreased by a factor of ten over the last ten years. As they reach some zone of affordability, CNTs will see widespread use. Many other fields within nanotechnology are building up to radically change our lives.
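
Quick back-of-envelope on that trend - the factor-of-ten figure is rough, and the extrapolation below assumes a steady decline rate just for illustration:

annual = 10 ** (-1 / 10)                       # ~0.794: cost multiplier per year
print(f"annual decline: {1 - annual:.1%}")     # ~20.6% per year
print(f"after 20 years: {annual ** 20:.2f}x")  # ~0.01x, i.e. 100x cheaper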

Tony said...

To bring the culture wars argument back onto the topic track, let's not be too sure that technical advancement is progress. The killer app in publishing was the Bible. Mass media brought us titillation in Technicolor and stereophonic sound. And the Internet is for...well, you know.

Thucydides said...

Jollyreaper's post shows other people are thinking about the changing economic and social landscape (although not the social-landscape debate that seems to be overwhelming this thread!).

WRT rights vs. civil rights vs. privileges: many communities of interest are forming up over the internet, with formal or informal rules of membership. Political blogs tend to be very "hard core" in terms of what constitutes acceptable behaviour; Canadian readers might be familiar with "The Blogging Tories" or "Rabble.ca", which are fairly high up in terms of readership on the political "right" and "left" respectively. It is pretty easy to get yourself caught in a flame battle or even banned by saying the wrong thing. (Our Prime Minister Stephen Harper is something of a lightning rod on both sides; OTOH I've met the man and he strikes me as neither "Dr Evil" nor "Plasticman", just a very smart political operator. I would be persona non grata in many places on the net for that statement....)

The general rule on the net seems to follow Ayn Rand's formulation (of all things): "You have freedom of speech, just not in my house." The owner does have pretty absolute rights over their domain, but the exercise of those rights should be tempered with a bit of common sense and respect for others. Summarily deleting a post or banning a commenter because they don't agree with your political (social/religious/economic/artistic/ecological) point of view reduces your domain to something resembling an echo chamber, and prevents the airing of arguments for and against various viewpoints. (That is why we read Rocketpunk Manifesto, right?)

While I am sure it may be satisfying for some people to erect a forbidden city in the Net for their own POV, blogs and sites with wider ranges of interest and the ability to tolerate and present alternatives are more useful in the long run. I certainly learn more from my membership here than in many other places in the real and virtual world. So the drivers of change will be those who respect and protect our basic rights (life, liberty, ownership of property and freedom of speech), and support discourse.

This gets back into culture again in another way: cultures which support these values have historically been better in terms of economic growth and support of science and technology, while alternative cultures have not. Basic rights are all that need to be defended; special privileges (especially those extracted at the expense of others) do not, as they tend to work against the basic rights of everyone and thus diminish the flexibility and adaptability of the culture.

Raymond said...

Tony:

WRT discrimination, this is exactly the argument vis-à-vis queer marriage - that their exclusion from the institution constitutes discrimination on political and economic grounds.

WRT restricted institutions, the social utility of the restrictions must be demonstrated, or they are invalid by default - this has been the principle generally followed by the Supreme Court since Jim Crow. No such demonstration of social utility has been forthcoming, save for arguments steeped in certain religious sensibilities. And no, just because those are the values of the voting public, they should not be given automatic credence. Society has been wrong before, factually and philosophically.

WRT harm, the burden of proof is the other way around. If society decides that queer marriage causes harm, it has to prove it. And we have a growing body of evidence that the only harm done by queer marriage is to certain religiously-sourced sensibilities.

WRT "normal", please remember that the concept is fundamentally descriptive, not prescriptive, and merely statistical in nature. Sexual others are eccentric by mathematical definition (and here I include many sexual others, not just homosexuals), but declaring them unworthy of support by the state for that reason alone belies the very idea of a pluralistic society.

WRT polygamy, please assume I haven't bothered watching Big Love for that very reason (despite liking some of the actors involved). I grew up Mormon, remember? I'm fairly familiar with the sheer normalcy of many, if not most, polygamous households. I don't believe their particular underpinning theology, but I also don't believe they shouldn't be allowed to, say, go to the police regarding spousal abuse, for fear of their lifestyle being discovered and their family being torn from them.

WRT abortion (so you don't think I'm dodging), I have my own thoughts on the matter, for my own reasons. Suffice it to say that I believe it should be legal, and thus safe - it will happen anyways, and coat hangers cause too many casualties. There is room in this world for acts regarded as immoral by many (or most) but which are not criminalized (such as, say, adultery).

Raymond said...

Konzept:

Nanotech's such a broad term. CNTs would be useful, and indeed, the methods of manufacturing have improved substantially, but they've still got a ways to go before they're useful for industrial-scale production.

There are a number of things coming to a head, seemingly: genetics (cheap sequencing, artificial genomes), cybernetics (we're finally getting our bionic arms), nanotech (aforementioned CNTs, metamaterials) and others. I'm not sure if they add up to an accelerando, though, or just a new coat of paint.

Milo said...

Raymond:

"There is room in this world for acts regarded as immoral by many (or most) but which are not criminalized (as is, say, adultery)."

I see adultery as simply "jumping the gun" - what you did would have been perfectly acceptable if you had divorced your current spouse first. So it isn't such a big deal. By contrast, you can't justify abortion in today's society on the basis that it would have been perfectly acceptable if you had waited, given birth, and then killed the baby (even though that was a common way of dealing with unwanted pregnancies in past times).

As for gay marriage, I'm not sure what the big deal is. If two men/two women decide to hold a big party, live together, and call themselves "married", is it really such a big deal if the government won't give them a piece of paper acknowledging it? What do you get for being officially married, anyway? I've heard about tax benefits in some countries, but I'm not familiar with the legalese to know how much that's worth. It seems like a kind of petty thing in any case, compared to the "OMG respect my sexual preference!" griping the subject receives. As long as you can have consensual sex in private without cops bursting into your home and arresting you.

Oh, and if you mean "married" in the sense of religious marriage ceremonies, then obviously it's up to the priest(/etc.) to decide who he's willing to marry based on the tenets of his religion. If your religion says gay marriage is a sin, then either don't enter a gay marriage or convert to a different religion (or a different denomination that interprets the holy texts differently).

Jedidia said...

I've heard about tax benefits in some countries, but I'm not familiar with the legalese to know how much that's worth.

In Switzerland, you usually pay *more* taxes when you're married than when you're just living together, at least if both parties are working, because your salaries are assessed as one salary - which is usually high enough to get bumped into a class where you pay a higher percentage than you'd have paid on two lower salaries...
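
A toy illustration of the bracket effect (the brackets below are made up for demonstration - they are not Swiss tax law):

BRACKETS = [(50_000, 0.10), (100_000, 0.20), (float('inf'), 0.30)]

def tax(income):
    # Progressive tax: each slice of income is taxed at its bracket's rate.
    owed, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if income <= lower:
            break
        owed += (min(income, upper) - lower) * rate
        lower = upper
    return owed

a, b = 60_000, 60_000
print(tax(a) + tax(b))  # taxed separately: 2 * 7,000 = 14,000
print(tax(a + b))       # assessed as one salary: 21,000 - the marriage penalty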

People still marry, either out of tradition or because of inheritance law, which I think is the most socially important thing about the institution of marriage: it is about defining an entity whose members have a right to each other's possessions in the case of death. Someone who is not married can only inherit what's left of a deceased partner's wealth after all parties with a legal right to it have gotten their minimal share. Being someone's natural offspring automatically gives you that right, while living with someone in the same home for a certain amount of time does not.

As such, the primary purpose of marriage is ensuring a continuity of possession within a defined group. In the case of homosexuals it's debatable whether or not it makes sense, for the obvious reason that there is no continuity.

1. Do unto others as you would have them do unto you.
2. And it harm none, do what thou wilt.


Jesus and LaVey in the same moral code? I guess they'd both have something to say about that...

Tony said...

Re: Raymond

Let's accept that marriage's primary purpose is to promote family and preserve family wealth. (The latter, BTW, being at odds with the leveling impulse that led to inheritance taxation; but nobody said the voters had to be logical.) Okay, there are plenty of family-like relationships that are not admitted to the sanction and (where applicable) economic benefits of marriage -- sisters living together, brothers living together, communal house sharing, etc. Why should we admit a certain class of same-sex couples to the institution and benefits of marriage simply because their cohabitation is based on a mutual sexual interest?

But Tony, you say, they can't help being who they are. And they love each other. Well, two or three sisters living together probably can't help that they feel more comfortable with each other than they would be alone, or with a man, and probably love each other. Some people regard this as a mental problem, or a choice. I don't. That's just the way they are. But we don't insist that sisters (or brothers) be admitted to marriage.

None of which, BTW, has anything to do with religion. It has everything to do with recognizing that marriage is an institution that has to have limits. Those limits exclude cohabiting same-sex couples, regardless of the motivations for their cohabitation.

Tony said...

Raymond:

"WRT abortion (so you think I'm not dodging), I have my own thoughts on the matter, for my own reasons. Suffice it to say that I believe it should be legal, and thus safe - it will happen anyways, and coat hangers cause too many casualties. There is room in this world for acts regarded as immoral by many (or most) but which are not criminalized (as is, say, adultery)."

A couple of thoughts...

1. If abortion were made illegal, it would be because it was tantamount to murder. That's the only moral or ethical grounds on which to make it illegal to begin with.

2. Would we make murder legal for the purpose of making it safer for the perpetrator?

See, this is the gaping, abyssal hole in the "safety and health of the woman" argument. If we as a society think that it's a wrong thing to do, then we don't have any responsibility to protect the wrongdoers from the consequences of their actions.

Tony said...

Raymond:

"Other portions [of the gay community] argue that since marriage is an institution seemingly held in particular regard (and expectation) by society, and considered by many a fuller participation in said society, they should have the opportunity to participate equally."

I once read a long-winded diatribe about this. The author went into great detail about how bad his family, his friends, and his religion made him feel that he couldn't get married because he was gay. The problem such persons have is that marriage won't give them the comfort they think it will, because many -- perhaps the majority -- of people will regard it as a sham, even if a state-sanctioned one.

Raymond said...

Tony:

"Would we make murder legal for the purpose of making it safer for the perpetrator?"

It occurs to me that we allow killing in self-defense and, in some areas of the US, in defense of property. So from a certain point of view, yes, we do exactly that.

"The problem that such persons have is that marriage won't give them the comfort they think it will, because many -- perhaps the majority -- of people will regard it as a sham, even if a state sanctioned one."

For some people in some places, sure. Others will simply fuss over wedding presents. People get used to things. (But state sanction does help.)
