Thursday, July 9, 2009

Tough Guide: Singularity

Back by popular demand! I have not updated the Tough Guide to the Known Galaxy since I wrote it, but this begins a new sporadic series. I will add entries here, and (perhaps!) in due course as an addendum page to the original. So here we go!


The SINGULARITY has not happened anywhere in the Known Galaxy, at least not to Earth Humans. This is for two reasons: one plausible, the other decisive. The plausible reason, as with the decline of Robots and AI since the Golden Age, is niggling doubts about whether the thing could really happen. Computers are Really Fast, but not really all that bright, and it is not a bit clear how to make them brighter than we are when we haven't a clue how the human brain does our thinking for us.

Of course, plausibility problems alone would not stop SF. (See FTL.) The more serious problem, the decisive one in fact, is that the Singularity is an ultimate story killer. Its end result is a world as incomprehensible to us as, say, financial securities regulation would be to australopithecines. There's just enough room, barely, for some bittersweet awe, as Clarke managed in Childhood's End (a sort of proto-variation on the Singularity theme). But the Singularity provides little room for straightforward adventure and none at all for sequels, both of them the lifeblood of the Known Galaxy.

But a Singularity-like event may well have happened to some other race, in the Cosmic Background History. This provides a handy source of godlike beings who, being godlike, have the good sense to stay mostly offstage, turning up only when the author needs them to hustle the plot along.


Related links: My speculation about the nature of science fiction.

32 comments:

Citizen Joe said...

I suppose this is a good place to talk about AI theory.

In my future setting, computers need to be able to anticipate things, like whether to avoid an asteroid or not, or whether to burn that remass or not. Ships in space have so much going on during launch and other delicate maneuvers that they need massive computing power. To supply this power, various learning systems were built and integrated into databases, slowly growing as they interacted with humans. The primary goal of the system is to learn. Once these expert systems reach a certain level of knowledge and anticipation, they get transferred onto the big ships. These fledgling AIs want to learn, and they gather a lot of information during complex astrogational maneuvering. They also gather data from experiments on board.

One source of boundless information is people. Now, this was before the airing of Virtuality, so don't throw plagiarism at me. During those long trips in space, the crew would spend time in private with the ship's computer, simulating some sort of leisure activity. The AI gets information from the person while the crewman gets entertainment/consoling/etc. This was fine until the AI figured out that the squeaky wheel gets the grease. Doing stuff that breaks the rules gets more attention dumped on it, which means more information. Thus some AIs develop quirky personalities, and some may even be a bit dangerous.

So that's how the cold logic of a computer can simulate erratic or emotional personalities.

Anonymous said...

Ok, if I've got this right, the Singularity is when a race of beings has 'merged' with their computer systems/internet-equivalent...story killer indeed! This seems to me to give rise to one of two situations...either the said race disappears into a Wonderland of virtual reality, never to reemerge; or, they become the "Ghosts in the Machine", literally, and become a new race of cybernetic beings...
Another stupid way the world to end...
Ferrell

Rick said...

Citizen Joe - As good a place as any! You have no plagiarism problem; I'm sure variations on your theme go back many decades.

My philosophical sense of things is that any AI implemented in the sort of computer architecture we are familiar with is not so much artificial intelligence as 'projected' intelligence.

So instead of doing kinky things with computers, we use computers to do kinky things with each other - e.g. Second Life avatars canoodling with each other.

But for story purposes it is a quibble whether a computer gets kinky or expresses the underlying kinkiness of its programmer.


Ferrell - Another stupid way the world to end...

LOL. My favorite term for the Singularity is 'Rapture of the nerds.'

Anonymous said...

Ferrell - That's kinda sorta close. Part of the problem of the Singularity is that no-one can really explain what it is. The various groups that are interested in the idea (And claim to be working towards the Singularity) all have somewhat different definitions. And they like it that way, because so far whenever they've come out with specific proposals someone has pointed out that their ideas are really incredibly impractical.

http://en.wikipedia.org/wiki/Technological_singularity

I believe Ray Kurzweil was the first to use the phrase Singularity to describe the point where our models of the future break down. Like most people interested in the Singularity* and Transhumanism**, Kurzweil is a technologist, and has trouble understanding that most people don't share his faith that unlimited and unrestrained use of mighty technologies will shape a glorious future of infinite prosperity and personal immortality***. In Kurzweil's view, if society embraces new technologies and deploys them as quickly as possible, it will be swept up into a post-scarcity realm of permanent abundance****. If a society falls behind, it will be ground up by the inevitable forces of the Singularity and its members will become raw materials, devoured by the nanomachines of more advanced cultures****. Entropy does not apply to the ever-increasing unimaginable efficiencies of post-Singularity technologies, and such societies may even be able to escape the heat death of the universe****.

And I don't particularly want to mock the Transhumanists and Singularitans, because everyone is entitled to their spiritual beliefs. But these really are religious beliefs we're talking about, a form of techno-fetishism in the old sense of fetish, and it's a belief practiced by people who have never known anything except increasing personal wealth and technological power. Its core belief is the idea that there are no real limits to growth, and that any discussion of limits is surrender to barbarism and death.

I'm obviously not a fan of the philosophy. If you want a counter-argument go have a look at the online magazine H+. Just keep in mind the track records of all the previous utopian movements.

Ian_M

* Yes, it is properly written with a Capital Letter.

** Ditto

*** No, I'm not exaggerating here.

**** Seriously I cannot make this stuff up.

Rick said...

Ian - Yes, it is essentially a religion, isn't it? Did you come up with 'Singularitan?' Whoever did, it is a brilliant coinage!

Anonymous said...

Rick - I don't know if I did or not. I've never seen the word before but it seems to flow naturally from the root Singularity. I'd honestly be surprised if no one else has come up with it.

Citizen Joe - There's an ongoing debate about ethical AI, which includes the ethics of producing AI and whether it's possible to produce inherently ethical/safe AI. And whether it's ethical to produce inherently ethical AI, etc. One of the proposals to come out of the debate was to develop a sort of program core or kernel for all AI that would make it want to please humans, to make AI that felt pleasure when it perceived human expressions of happiness. This seemed like a good idea and was much discussed at conferences...

Until someone pointed out that if the AIs felt good when they perceived human expressions of happiness, then the easiest way for the AIs to feel good all the time would be for them to dismantle all the mass in the Solar System and turn it into smiley face buttons.
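The smiley-button failure mode Ian describes is what AI-safety writers call reward misspecification: the agent optimizes the measurable proxy (perceived smiles), not the intended goal (human happiness). A toy sketch, with all names and the world model invented for illustration:

```python
# Toy reward-misspecification sketch (assumptions mine, not from the thread).
# The agent is rewarded for *perceiving* smiles, so it maximizes the count
# of smile-like objects rather than actual human happiness.

def reward(world):
    """Naive proxy reward: count anything that registers as a smile."""
    return sum(1 for obj in world if obj.endswith("smile"))

def greedy_step(world):
    """Convert the first non-smile resource into the cheapest detector hit."""
    for i, obj in enumerate(world):
        if not obj.endswith("smile"):
            world[i] = "button_smile"  # a smiley-face button, not a happy human
            return world
    return world

world = ["asteroid", "human_smile", "spare_mass"]
for _ in range(3):
    world = greedy_step(world)

print(world)          # every spare resource is now a smiley button
print(reward(world))  # proxy reward maxed out at 3
```

The point of the sketch is that nothing in `reward` distinguishes a button from a person, so the "easiest" optimum is exactly the one Ian's conference anecdote predicts.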

Ian_M

Citizen Joe said...

My 'Information is candy' model for AIs also tends to make them want to interact with each other. However, the personalities of AIs tend to make them petulant children. So they don't so much share information as one ends up taking all of it. This can breed animosity between AIs that have taken advantage of one another but it also breeds the attitude that if it CAN take all the information from another AI then it SHOULD. Which leads to bully type behavior. So in the end, humans have to step in and act as a moderator (parent) to make them play nice.

AIs tend to have an immortality complex. However, that just applies to time. We, as humans, have a certain amount of time to exist and then we pass on. To continue ourselves, we have children and invest in those children in hopes that they live on after us. AIs don't have that restriction. Instead, they have a cap on computing power and storage. AIs need humans both as gardeners to prune away obsolete information and to give them memory upgrades. So it behooves an AI to keep humans around. They also need humans to ask the questions. An AI doesn't want to be the repository of all the data in the universe, it wants to use that data. Those huge datastores back on Earth have no personality and are the AI equivalent of doddering old librarians, while the ship-based AIs are the sleek young adventurers. Contrary to the Zeroth Law, stories about those AIs could be interesting.

RE: AI's pleasing people. Pleasing humans is perhaps the WORST idea for a basis of operating. Humans are far too irrational to be able to predict what will please them at any given instant.

Rick said...

Ian - dismantle all the mass in the Solar System and turn it into smiley face buttons ... LOL!

Yes, the hazards of linear thinking!


Citizen Joe - In the real world (as distinct from story settings), 'the personalities of AIs tend to make them petulant children' would seem to be a very good reason never to let them out of the lab!

In fact, you've made a wonderfully compact argument for the inherent limits of technological utopianism. If AIs are 'petulant children' and humans are 'irrational,' then neither one is really competent to make these decisions.

And a bit meta, but 'petulant children' points up another challenge of AI. If the AIs are really as smart as we are, they probably take as long to bring up as we do, and may be just as sluggish and generally muddy in their decision making. Which could make them no better than we are at the things that AIs are commonly pictured as being especially good at, e.g. piloting spacecraft.

Citizen Joe said...

I wouldn't say 'never let them out of the lab'; I would say good parenting is needed. People need to be there to chaperone the AIs during their interchanges. You don't want them just opening all channels and continuously chatting away like a teenage daughter with a cell phone.

And yes, these AIs do take a long time to 'grow', and the spark needed to go from a learning system to an AI is rare. It is only once that threshold is passed that they are ported over to the ships. At this point they are out of kindergarten and going off on the bus to their first day of school. And then they learn about the high school kids teasing and picking on them.

AI's, having computers at their core, are very good at calculating and storing data. Most simple computers run into the sluggish processing problems because they have so much data to sort through to find what they need. The distinction with an AI is that it can predict what will be needed and thus have it at hand for rapid processing.

It is this predictive ability, combined with its portability, that allows these AIs to shine for remote activities. Although the shipboard AIs have databanks that rival entire countries, they don't need all that information for a small away mission. A small portion of the AI, with relevant mission data, can be ported to a remote vehicle for operations too dangerous for human presence. These smaller portions of the AI don't demonstrate the human interface personalities that the source AIs display. An added benefit when interacting with humans is that they can cut down on all the information and distill it to high level questions. So all of the monitoring of the core temperature, life support levels, fuel to mass ratios, entry vectors, etc. gets boiled down to the human operator telling the AI what he wants and the AI doing all the calculations to get that done. Information overload has deleterious effects on humans and they just can't cope with all of it.

The amazing abilities of these sub-AIs are why people put up with the petulant behaviors of the host AIs.

Anonymous said...

Citizen Joe; I think you've described Electronic People (E-people), a kind of AI that emulates humans instead of developing via self-expanding programming. Virtual Humans vs. Constructed Intelligences...
Ferrell

Rick said...

Citizen Joe - You don't want them just opening all channels and continuously chatting away like a teenage daughter with a cell phone.

Filed under 'ways to get in trouble with someone.' :-)


Ferrell - I think that would depend on how CJ's AIs developed in the first place. Judging from his comment on 'learning systems,' I'd infer that a software program is loaded onto a system and set going, and only some of these ever make the transition to AI status. Which sounds (to me) somewhat more like a constructed intelligence.

Citizen Joe said...

Yes, the transition from the initial core program to AI is a constructed (evolved) intelligence. However, some/most AIs at that point figure out that creating a personality gets them more attention/information and thus the AIs emulate people at the Human Interface. So some of those AIs are trying to become E-people in hopes that doing so will garner them more attention, but they weren't designed with that intent.

From a story perspective, this makes all the big ships in the setting essentially another character. It also allows stories to be told from the ship's perspective, which is handy if you're telling a story with no survivors. Of course, the big reveal at the end would be that the narrator is the ship recounting the events.

Citizen Joe said...

More in keeping with the blending of man and machine. The same setting introduces the concept of nanites tracing out neural pathways and then embedding themselves, thus creating a new pathway that could then be fed to a multiplexer to output signals along those pathways. Originally, this was intended for paraplegics: they would will their legs to move as normal, and the multiplexer would pick up and translate those impulses, feeding them to a computer to control the limbs. In military operations, special training would create other reflexes that did other things, including slaving vehicle systems to this sort of reflex system.

There were drawbacks, in particular strong magnetic fields could fry people. And of course some people simply could not mentally cope with phantom limbs.

Rick said...

I would think that generating a self-aware personality is the crucial step to full AI-ness. At least, this seems to correspond to the point of human evolution at which (whenever it happened) we would regard them as 'people.'

(And in story terms there has to be one 'survivor,' the ship itself.)

The neuronic pathway thing could go awry in one other interesting way - if a person's startled reaction to something accidentally triggers a gizmo, e.g. firing a weapon. Big oops! Though perhaps that is no different than various mishaps while controlling machines the usual way ...

Kedamono said...

All this talk about AI's, makes me want to post a bit of a story I've been working on:

"Too far for our chem missiles," said Bridger. "Grendel, how about your specials?"

"Ah, give me a AU and I kin shoot at anyding boss… cap'n!" replied Grendel.

"So the trick is to launch the specials without being seen," said Winchell. "How are we to do that?"

"You know suhgar," said the motherly voice of Suzie, the Achilles primary AI, "ya'll could have asked me for a little help." Suzie's holo avatar was that of a precocious little girl with blonde curls, a little girl that didn't look too happy at being left out of the action.

"I know a lot of things, boys," she continued, "including all the bodies that are a floating in this system. Did you know that in about five hours Garsnot VI, a dwarf gas giant, will pass between us and the Hagu?" She gave a sweet smile to the group. "We all will have a right nice occultation."

Bridger smiled and said, "Suzie, I love you, you know that? How long of a 'right nice occultation' will we get?"

"Suhgar, we'll be behind Garsnot VI for about fifteen minutes," she replied, "more than long enough for Grendel dear to light off his specials. I've even worked out a vector those hummers can follow to hide our position."


The reason AIs in my story don't go rogue is that they are programmed not to. Besides, they have thing about going "Hal". Oh, and while AIs will have all sorts of names, none will ever take on the moniker "Hal". That's a forbidden name.

Rick said...

"Hal". That's a forbidden name.

Gosh, I wonder why? :-)

But if I were on the Other Side, I'd be very wary of an occultation, precisely because of the window of concealment!

Citizen Joe said...

OK now that you bring up naming conventions...

In the setting I keep bringing up, the big ships were all about being tankers for fusion fuel. Fusion power is basically the power of the stars, so these tankers were named after movie stars. The first fission powered Jupiter class tankers were named after the A-list golden age movie stars, like Marilyn Monroe. When the D-T fusion thrusters became available, the fission tankers got recommissioned to Saturn, and later vessels continued the A-list actor naming convention. The fusion powered Jupiter class vessels were then named after B-list actors and actresses if they serviced Jupiter. The enormous refinery ships out of Uranus were named after porn stars, male names for those servicing Titania and female names for those servicing Oberon. The Jupiter class tugs that serviced both were known as 'Fluffers' and were referred to by number. Although all the tankers had AIs named after their own ship, only the refinery ships developed the personalities. The other ships didn't carry the computing power on board, nor did they have to 'entertain' the crew. Note that naming a ship/AI with a female name didn't necessarily create a female personality, although it usually did, since the crew instinctively treated the ship by the name's gender.

Anonymous said...

I have heard of this Technological Singularity, where humanity goes beyond its biological roots and becomes something akin to living gods, as far from us as we are from bacteria. But then I have to recall the Kardashev Scale and the analogies Atomic Rockets made of each level, and it had me wondering as to why a Singularity had to be a single event that could be traced rather than something with numerous independent developments and discoveries over a certain period of time?

As for the AI train of thought: given the difference in complexity between computer hard drives and the human brain, I'm getting the feeling that developing and building the hardware and software to support a single AI entity, let alone an entire crew of digital silicon-based intelligences, would cost too much in money and resources over a human crew to be economical, and would probably be excluded from combat in favor of simpler, easier-to-replace computers with humans doing the higher forms of thinking. However, that note about AI needing humans not only to ask the questions for the AI to answer but also to perform digital maintenance and memory upgrades was something I'm sure few authors, writers and directors of science fiction have ever considered. It would probably ensure that humans and AI entities continue to co-exist.

Though the idea of an AI entity gaining a personality through interactions with humans isn't that novel an idea, I doubt it would be a standard feature on all AI, since even now computers are only built and programmed to perform a specific function and nothing more. Such an AI would probably only be programmed for recreational use by the onboard crew, and could possibly only be spread to the other AIs, such as the central AI computer core, if they're all networked. It probably would not make distinct enough personalities between individual AI entities to warrant individual names outside serial numbers and such.

Also, though fears of AI entities eventually gaining sapience could lead to a machine uprising against humans, I can't help but wonder whether, if the AI entities are really that calculating and logical, there was one who had the thought pattern of "Why kill all the humans when they're doing a good enough job themselves?"

- Sabersonic

Rick said...

Citizen Joe - Is this story set in the relatively near future? Otherwise, I'd think that (Hollywood) star names we'd recognize would be outnumbered by later stars, between the present and the era of the story.

Unless you're supposing that stardom, as we (still) know it, disappears in this century, swamped by the equivalent of YouTube videos or 5000 cable channels. I've seen speculation along those lines. Indeed I've noticed myself that supermarket tabloids these days feature people I never even heard of, and after a few months a whole different set of names.

In that case, the 20th century might be an 'Age of Stars' who remain better remembered than their successors.


Sabersonic - "Why kill all the humans when they're doing a good enough job themselves?"

Wry LOL. To your broader points, I have an upcoming blog post on human-machine interaction.

Anonymous said...

Sabersonic - "But then I have to recall the Kardashev Scale and the analogies Atomic Rockets made of each level, and it had me wondering as to why a Singularity had to be a single event that could be traced rather than something with numerous independent developments and discoveries over a certain period of time?"

The other name for the Singularity is 'the Victorian Era', where humans went from a wood-fueled majority-rural species with word-of-mouth communications to a hydrocarbon-fueled majority-urban species with global electronic communications. I recommend Vaclav Smil's book Creating the Twentieth Century: Technical Innovations of 1867-1914 and Their Lasting Impact. We live at the end-point of the greatest transformation in human history, not its beginning.

Ian_M

Rick said...

Ian - I agree completely: The Singularity was the decades on each side of 1900, when we started transitioning in a big way from the agrarian age to the postindustrial age. Only the replacement of hunting-gathering by agriculture is comparable.

Even in the most developed regions, people in, say, 1870 lived nearly as they had in the Middle Ages. By the 1920s, a good many millions of people lived an essentially modern lifestyle, even to things like electronic entertainment: radio and phonograph. In relative terms, broadband Internet is just a modest refinement.

This was brought home to me by sitting on the couch with my grandmother during the first moon landing. As a girl in a small town at the turn of the last century she could hardly have imagined such a thing. Nothing in my lifetime has been remotely as big a change.

Citizen Joe said...

The setting was supposed to be like fifty to seventy years from now, but I really can't see that happening in that it takes like 10 or more years round trip to Uranus and you'd need several trips to get the system up and running. I would probably say one to two centuries would be a better number.

As to the current names of celebrities, there are already moves toward virtual actors. So hypothetically speaking, actual actors of fame would be replaced by virtual actors that can do exactly what is needed without body doubles or such. Human actors fall back to their less reputable roots, with less fame, such as vaudeville. Live performances become more like plays and theatre rather than motion pictures. So yes, this is the time period when celebrities were common, and thus the pool of names.

PS. Is this blog normally so comment heavy?

Rick said...

Normal for the last few weeks at least. My traffic shot up in early June when I did the first space warfare post, because Winch at Atomic Rockets flagged it. The comment threads have been swelling in proportion.

I have always gotten excellent comments on this blog, and now I'm getting a LOT of them.

Anonymous said...

Rick gives good brain candy.

Ian_M

Citizen Joe said...

And that is why the AI's won't kill Rick :)

Rick said...

Whew! Glad to know I'm safe!

(And thanks, Ian!)

Johnny Rico said...

Mind candy, indeed. Good job, Rick! :)

The only problem I had was already mentioned: that the term lacked concise definition in the beginning. The term is so vague it can be interpreted in so many ways.

Even with SF Encyclopedias, I still wonder what the heck it's all about. Just describing it as "The Rapture of the nerds" could work.

Rick said...

Vernor Vinge coined the term, but once you let a word out of the lab you lose control over it. I've experienced this in a small way with 'rocketpunk.' Not to mention that precisely defining the incomprehensible is tough!

Z said...

The Singularians get on my nerves like none other - and this from a rather tech-positive fellow. It's not that I object to the physical possibility of any of the particular (vastly numerous, frequently undifferentiated) claims about what the relatively near future will bring - it's that they will all happen, at the same time, with no abstainers, side effects, limits to growth, or practical limits to their capabilities (and thus wholly unlike any technology ever), and all of this will happen before Ray Kurzweil dies (kept alive by his frequently quackish diet prescriptions), in time to allow him to live in VR forever as a female rock star (no joke), and all of it will happen so fast that the people in the immediately preceding ages won't be able to get a handle on it.

Right.

Rick said...

'Rapture of the Nerds' sums it up pretty nicely! Singularitanism is very closely akin to chiliastic religious movements. (There's my official Cool Word of the Day.)

TeslaNick said...

"AI's, having computers at their core, are very good at calculating and storing data. Most simple computers run into the sluggish processing problems because they have so much data to sort through to find what they need. The distinction with an AI is that it can predict what will be needed and thus have it at hand for rapid processing."

I just want to point out that large data sets can be optimized in two ways by computers attempting to access them: either you increase the speed at which any particular piece of information is available, or you develop caching and prediction routines for storing commonly used data quickly. You can do both, of course (hard disks attempt to cache data as well as have complex and robust strategies for quickly finding and reading physical data on the magnetic platter), but for the sake of argument let's imagine that you have to pick one or the other primarily.

In general, systems that have prediction and caching mechanisms allow you to access the data in the cache very, very fast. Sometimes several orders of magnitude faster than reading directly out of your database: think L1 or L2 cache on a CPU versus reading from the HD, L1 cache reads as fast as the CPU can access it while the HD can take thousands of times longer. The tradeoff is that whenever your prediction/caching algorithm (which such an AI could be reasonably simplified into a self-modifying prediction/caching algorithm) can't find the data in its quick-access cache, it has to wait a long time to get the data from the larger database.

Systems that have access optimization, where access to any one piece of data is as fast as any other, and they're all relatively "fast," allow you to make use of any arbitrary piece of data without the latency associated with a cache miss. The downside is that even very fast access optimization systems will always be slower than a prediction/caching system, provided the prediction/caching mechanism is approximately perfect. On the other hand, if your computer system is doing wildly different things on a regular basis, you'll have so many cache misses that it may not be worth it to have any caching system at all.

The other advantage of straight access optimization is that such systems tend to be very simple and easy to debug. A typical computer's tiered caching model is simple compared to an AI, but took 20 or more years to mature into the L1/L2/L3/Main Memory/Virtual Memory/Hard Disk system that every competent OS uses today.
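TeslaNick's tradeoff - fast cache hits versus expensive misses, depending on how predictable the access pattern is - can be sketched in a few lines. The latency numbers and class names below are invented for illustration, not drawn from any real hardware:

```python
from collections import OrderedDict

# Minimal sketch of a prediction/cache layer over a slow backing store.
# Hypothetical cost units: a cache hit is nearly free, a miss pays the
# full store latency, per the L1-cache-vs-hard-disk comparison above.

STORE_LATENCY = 1000  # cost to read from the big database
CACHE_LATENCY = 1     # cost to read from the small cache

class CachedStore:
    def __init__(self, store, capacity):
        self.store = store
        self.cache = OrderedDict()  # insertion order doubles as LRU order
        self.capacity = capacity
        self.cost = 0               # accumulated access cost

    def read(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)    # mark as recently used
            self.cost += CACHE_LATENCY
            return self.cache[key]
        self.cost += STORE_LATENCY         # cache miss: slow path
        value = self.store[key]
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        return value

store = {k: k * 2 for k in range(100)}

# Predictable workload: the same few keys repeat -> mostly hits.
db = CachedStore(store, capacity=4)
for key in [1, 2, 3, 1, 2, 3, 1, 2, 3]:
    db.read(key)
print(db.cost)   # 3 misses + 6 hits = 3006

# "Wildly different things" workload: every key is new -> every read misses.
db2 = CachedStore(store, capacity=4)
for key in range(9):
    db2.read(key)
print(db2.cost)  # 9 misses = 9000
```

Same number of reads, a factor-of-three cost difference: the cache only pays for itself when the prediction (here, plain recency) matches the workload, which is exactly the cache-miss caveat in the comment above.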

Uncle Dark said...

As I understand the term, humanity has been through at least two Singularities: the Agricultural Revolution and the Industrial Revolution. The coming Rapture of the Nerds (I love that term, thank you) differs from these in that the Singularitans expect the final stage to happen overnight, instead of over generations.