tag:blogger.com,1999:blog-7494544263897150929.post2079755876240836080..comments2024-03-18T13:11:39.192-07:00Comments on Rocketpunk Manifesto: Ten Laws of Good Science Fiction?Rickhttp://www.blogger.com/profile/16932015378213238346noreply@blogger.comBlogger345125tag:blogger.com,1999:blog-7494544263897150929.post-75313192394126808702012-06-04T07:03:58.219-07:002012-06-04T07:03:58.219-07:00Tony,
Thing is, you don't need an AI to run a...Tony,<br /><br />Thing is, you don't need an AI to run a simulation that you can provide all of the parameters for, and all of the arguments to the parameters. A dumb number cruncher -- and even very large ones are just as dumb as your personal laptop -- can run that simulation.<br /><br />Leaving the employment of AI aside, a self-consistent simulation doesn't tell you much. One could make a self-consistent simulation universe in which Pi rounds to 3.14160 instead of 3.14159.<br /><br />==========<br /><br />Well, sure, you wouldn't need an AI .. but maybe this AI could do it better in some way. <br /><br />The rounding error of Pi would work most of the time in a mathematical model. <br /><br />In development, most of the time is better than usual.. so I kind of shrug at that small a difference. <br /><br />(SA Phil)Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-7494544263897150929.post-20912761446195264572012-06-03T20:50:24.669-07:002012-06-03T20:50:24.669-07:00=Milo=
Tony:
"One could make a self-consi...=Milo=<br /><br /><br /><br />Tony:<br /><br /><i>"One could make a self-consistent simulation universe in which Pi rounds to 3.14160 instead of 3.14159."</i><br /><br />No, you couldn't. Pi is a mathematical constant, not a measured physical value. Any value of pi other than the one we know is inconsistent with basic arithmetic. Well, maybe not-so-basic arithmetic.<br /><br />One could make a universe where the ratio of a circle's circumference to its diameter is not exactly the same as in our universe, by giving the universe a non-Euclidean geometry. In fact, we might live in such a universe (the curvature is just too small to notice). However, it would be incorrect to say that pi in this universe is different, rather you should say that the circumference-to-diameter ratio is not equal to pi. In most (all?) non-Euclidean geometries, the circumference-to-diameter ratio will not even be constant, but vary with circle size. The ratio will tend toward pi as smaller and smaller circles are measured. (Also, if the non-Euclidean geometry is negatively rather than positively curved, then a shape with Euclidean geometry can be embedded in it. Pi could then be measured on that shape.)<br /><br />You could try to introduce obscure metrics, like the taxicab metric or chessboard metric. Both of these examples have a constant circumference-to-diameter ratio of 4 as measured in their own metric. However such metrics are also weirdly behaved, for example in that most pairs of points will have more than one shortest path between them. You might be able to make an interesting world like this, but it will share only a passing resemblance with ours (even with less extreme changes to the metric).Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-7494544263897150929.post-20076829672829961962012-06-03T20:18:12.796-07:002012-06-03T20:18:12.796-07:00SA Phil:
"Its possible you could have your A...SA Phil:<br /><br /><i>"Its possible you could have your AI simulate how you think something works instead.<br /><br />Which many mathematical models do now.<br /><br />In your dark matter example - dark matter and energy basically 'may exist' because the theories don't say they cant, and they help explain some identified 'problem'. <br /><br />So since they are hypothetical anyway .. Your model could include how you think the theory works. <br /><br />Its always possible you are wrong however - since dark matter and energy may not exist at all. (or whatever is being contemplated)"</i><br /><br />Thing is, you don't need an AI to run a simulation that you can provide all of the parameters for, and all of the arguments to the parameters. A dumb number cruncher -- and even very large ones are just as dumb as your personal laptop -- can run that simulation.<br /><br />Leaving the employment of AI aside, a self-consistent simulation doesn't tell you much. One could make a self-consistent simulation universe in which Pi rounds to 3.14160 instead of 3.14159.Tonynoreply@blogger.comtag:blogger.com,1999:blog-7494544263897150929.post-88944802705908689992012-06-03T16:12:14.717-07:002012-06-03T16:12:14.717-07:00Its possible you could have your AI simulate how y...Its possible you could have your AI simulate how you think something works instead.<br /><br />Which many mathematical models do now.<br /><br />In your dark matter example - dark matter and energy basically "may exist" because the theories don't say they cant, and they help explain some identified "problem". <br /><br />So since they are hypothetical anyway .. Your model could include how you think the theory works. <br /><br />Its always possible you are wrong however - since dark matter and energy may not exist at all. 
(or whatever is being contemplated) <br /><br />(SA Phil)Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-7494544263897150929.post-44365873912808602232012-06-03T13:43:49.725-07:002012-06-03T13:43:49.725-07:00=Milo=
Jollyreaper:
"There's the othe...=Milo=<br /><br /><br /><br />Jollyreaper:<br /><br /><i>"There's the other question about whether you can make fundamental scientific discoveries in a simulated environment."</i><br /><br />You could develop new applications of known physical laws. You could not, however, discover new physical laws.<br /><br />Engineers might be more useful in this environment than scientists.<br /><br /><br /><br />Damien Sullivan:<br /><br /><i>"You'd want to double-check in reality, of course."</i><br /><br />Good idea.Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-7494544263897150929.post-88576736420610328142012-06-03T11:59:25.354-07:002012-06-03T11:59:25.354-07:00Depends what you're calling new fundamental di...Depends what you're calling new fundamental discoveries. Simulation couldn't find new laws of physics, but a high quality one might find new applications or implications of existing laws. Simulating what we know of quantum won't explain dark matter, but you could find new chemicals or batteries. You'd want to double-check in reality, of course.Damien Sullivanhttps://www.blogger.com/profile/13321329197063620556noreply@blogger.comtag:blogger.com,1999:blog-7494544263897150929.post-40130238462670812202012-06-03T10:43:12.317-07:002012-06-03T10:43:12.317-07:00Humans do that too. We call it dreaming.
Or perha...<i>Humans do that too. We call it dreaming.<br /><br />Or perhaps daydreaming.<br /><br />The point is, dreaming is all within your own mind. It allows you to reprocess information you already have, but it doesn't allow you to obtain new information or to check your current viewpoints against an external source that lacks your prejudices.<br /></i><br /><br />Right. And a human in such a state is pretty much useless; same goes for the AI, and that's one excuse for why they don't tend to be useful in a given setting, thus preserving things for humans to do. <br /><br />There's the other question about whether you can make fundamental scientific discoveries in a simulated environment. That question came up recently on Eureka, where the top scientists were kidnapped and placed into a Matrix-style simulation of their own town so they could go on researching new technology for the baddies to use. Some people maintained that new discoveries could not occur there because the simulation could only model what we know of reality, not the new stuff we haven't discovered yet. I have no opinion on it myself.jollyreaperhttps://www.blogger.com/profile/05673007647719726846noreply@blogger.comtag:blogger.com,1999:blog-7494544263897150929.post-59222271559945528062012-06-03T09:47:17.208-07:002012-06-03T09:47:17.208-07:00=Milo=
Jollyreaper:
"That's been used...=Milo=<br /><br /><br /><br />Jollyreaper:<br /><br /><i>"That's been used in some stories, the AI is portrayed as catatonic to the outside world but it's playing games inside its own mind."</i><br /><br />Humans do that too. We call it dreaming.<br /><br />Or perhaps daydreaming.<br /><br />The point is, dreaming is all within your own mind. It allows you to reprocess information you already have, but it doesn't allow you to obtain new information or to check your current viewpoints against an external source that lacks your prejudices.<br /><br /><br /><i>"You are correct that many predators are sleeping most of the day but the counter-example are critters like otters and dolphins who play a lot as well, activity that seems to serve no other purpose but celebrate motion."</i><br /><br />Not celebrate. Practice. Play, even if it is not exactly the same kinds of movements as are used in hunting (and sometimes it is - ever tackle-hugged someone?), provides a safe environment in which to stimulate and improve reflexes, coordination, and so on.<br /><br />Additionally, play serves a social function. It allows individuals to bond over doing something stimulating together, even if that something isn't particularly useful otherwise.<br /><br />(Though some social predators, like lions, still fall squarely in the sleep-most-of-the-day category. Though that doesn't stop them from getting into SOME play, mind you.)<br /><br /><br /><i>"Neurologists have studied people with brain damage that impairs or destroys emotional feeling. Rather than being Vulcan and clear-thinking, they are paralyzed by indecision. Part of what emotion is good for is helping you make decisions with your gut when logic isn't entirely clear."</i><br /><br />Emotion tells you what you want to do. 
Logic tells you how to do it successfully.<br /><br />Sometimes logic needs to override emotion - for example, to prevent you from doing something that seems tempting in the short term but that you'll probably regret later. But note that regret is also an emotion.<br /><br />Logic without emotion is useless.<br /><br /><br /><i>"Really, the next step in AI isn't asking a complicated question and getting an answer, it's asking that question and the AI replies "Why do you want to know?" Not just a canned response but because of curiosity."</i><br /><br />:)<br /><br /><br /><br />Tony:<br /><br /><i>"First of all, I'm having trouble seeing the relevance, except when discussing portability of the AI as software."</i><br /><br />I was debunking the popular idea that its underlying computer architecture using 1s and 0s would make an AI prone to black-and-white thinking.<br /><br />(For example, I recall reading about one Star Trek when some alien AIs feel they need something belonging to the main characters, and so steal it. The characters speculate that this is because as computers they think in 1s and 0s, and so saw it as an either-or decision: either our need outweighs theirs, or it doesn't. And using this as an excuse for why the idea of asking for permission didn't even enter their thought patterns.)<br /><br />I initially mentioned the topic in response to Damien Sullivan (expressing agreement with his opinion), not you, and was not thinking about the subject of portability at all.Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-7494544263897150929.post-36159928792632365872012-06-03T08:27:32.379-07:002012-06-03T08:27:32.379-07:00Milo:
"I know all that. My point was the, ju...Milo:<br /><br /><i>"I know all that. My point was the, just as real computers use binary underneath but aren't necessarily programmed with binary in mind, computer-based AIs might work in binary at their lowest levels but would have no particular reason to CONCIOUSLY think in binary."</i><br /><br />First of all, I'm having trouble seeing the relevance, except when discussing portability of the AI as software. And the point I was making is that the AI in fact has to be aware of it's own organization, at all levels of abstraction, in order to port itself (if it even can).<br /><br />ANd, BTW, it's not what <i>you</i> may or may not know that motivates the lengthy explanation as much as the perceived value of making sense to the majority of readers.Tonynoreply@blogger.comtag:blogger.com,1999:blog-7494544263897150929.post-70980658645658962682012-06-02T21:35:13.095-07:002012-06-02T21:35:13.095-07:00My movement talk wasn't so much about AI's...My movement talk wasn't so much about AI's but for living creatures. Granted, evolution doesn't take place in a single organism but over many generations with proper selection pressures. Of course, with an AI it might be able to self-modify code and change (not evolve) the same way a human mind grows from infancy to adulthood, though the AI could continue to add capacity to its mind so that it keeps growing in scope and power. <br /><br />As far as AI's go, they could have an inner life of the mind as elaborate as anything portrayed for humans in fantastic storytelling. That's been used in some stories, the AI is portrayed as catatonic to the outside world but it's playing games inside its own mind. <br /><br />I'm not sure how you would program it but I think that there would need to be some way of simulating emotions for the AI to end up having a consciousness. Humans have basic physical needs, then social needs, then more esoteric philosophical concerns (Maslow.) 
<br /><br />You are correct that many predators are sleeping most of the day but the counter-example are critters like otters and dolphins who play a lot as well, activity that seems to serve no other purpose but celebrate motion. <br /><br />Neurologists have studied people with brain damage that impairs or destroys emotional feeling. Rather than being Vulcan and clear-thinking, they are paralyzed by indecision. Part of what emotion is good for is helping you make decisions with your gut when logic isn't entirely clear. You know what they say, an adequate plan executed with vigor beats a perfect plan that comes too late. <br /><br />Really, the next step in AI isn't asking a complicated question and getting an answer, it's asking that question and the AI replies "Why do you want to know?" Not just a canned response but because of curiosity. Don't ask me how to make that real, though!jollyreaperhttps://www.blogger.com/profile/05673007647719726846noreply@blogger.comtag:blogger.com,1999:blog-7494544263897150929.post-72544383476441525262012-06-02T14:52:36.399-07:002012-06-02T14:52:36.399-07:00=Milo=
Let us consider an AI that is completely...=Milo=<br /><br /><br /><br />Let us consider an AI that is completely immobile and has no network connection. Its only method of interaction with the outside world is talking to its human operator, using a loudspeaker/microphone combination.<br /><br />This AI is still capable of meaningful thought. If you uploaded my mind and used it as the AI in question, I would probably be able to give useful advice on some subjects (though I would also resent being treated that way).<br /><br />However, with such limited access to the outside world, the AI has little room to grow and learn. It lives in an insufficiently stimulating environment to be able to improve itself much beyond what it was programmed with from the start.Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-7494544263897150929.post-89123026714193494882012-06-02T14:30:10.283-07:002012-06-02T14:30:10.283-07:00=Milo=
Jollyreaper:
"And we've seen e...=Milo=<br /><br /><br /><br />Jollyreaper:<br /><br /><i>"And we've seen evolution take back what's no longer useful."</i><br /><br /><i>"If intelligence in a sessile state is useless, it stands to reason it would go away."</i><br /><br />True. But once again, only on an evolutionary timescale where natural selection applies. If a particular individual of an otherwise-mobile species is paralyzed in an accident, that individual does not immediately stop being able to think. If cared for by its peers, it may continue thinking for a long time without ever being able to act on these thoughts.<br /><br /><br /><i>"And I think that replicating these kinds of inner motives would be necessary to create functioning AI's. An AI content to sit there like a plant isn't good for much of anything."</i><br /><br />True. An AI needs SOME incentive to interact with its environment.<br /><br />Though, as I said, there are other forms of interaction besides physical movement.<br /><br />Also the AI doesn't necessarily need to be motivated to interact with its environment ALL the time. Many predators spend most of their time resting or sleeping, with only brief bursts of activity when they hunt, but these predators still have relatively advanced brains so the hunts are successful. Though, this is largely to save energy. An AI which has access to plentiful energy would probably work better if it spent any spare time studying or practicing. Or even playing, which is really a form of practice.Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-7494544263897150929.post-10606351187256022012-06-02T13:48:37.758-07:002012-06-02T13:48:37.758-07:00@milo
"Bacteria and plants are not intellige...@milo<br /><br />"Bacteria and plants are not intelligent."<br /><br />Right. But somewhere up the chain of development, we now ascribe action to intelligence. Where is the line crossed? There's not much difference between a human zygote and any other single-celled lifeform. The cells don't even start differentiating until the fourth division. We as humans want to make a distinction. I don't know if you really can. At what point does it turn from lumps of cells to a human life? <br /><br />As for your link on bee learning, utterly fascinating and just raises more questions for me. :) <br /><br />The motive explanation I think is pretty compelling but I don't feel confident in any given theory to put all my eggs in that basket. <br /><br />" However, thought without motion can still exist. It's just useless."<br /><br />And we've seen evolution take back what's no longer useful. Legless lizards and snakes, flightless birds, every species of animal in a cave goes blind from spiders to salamanders to shrimp. (are there blind cave mammals? Some subterranean rodents are considered effectively blind.) <br /><br />If intelligence in a sessile state is useless, it stands to reason it would go away. According to wiki, " The emergence of vestigiality occurs by normal evolutionary processes, typically by loss of function of a feature that is no longer subject to positive selection pressures when it loses its value in a changing environment." <br /><br />The whole point I was getting at with motion is that a filter feeder may be an animal but doesn't have much need for higher thought if it's just anchored to a rock. A jellyfish might be a hunter but does pretty good just drifting and stinging what it bumps into. A bird needs a pretty good piece of mental hardware to manage flying, recognizing targets, making attacks, etc. Social behavior is necessary for cooperation among pack animals for the hunt. 
Herd animals need to stick together to defend themselves against predators. <br /><br />If animals could put down roots, throw out some leaves and live off the sun, not much thinking would be required. And there needs to be some kind of internal motivation for the animal to do all those things. Hunger, so that it seeks food. Fear, so that it avoids being eaten. How successful would a species of deer be if it was indifferent to being eaten? <br /><br />While I can't tell you this is the explanation for everything, I do find it compelling. And I think that replicating these kinds of inner motives would be necessary to create functioning AI's. An AI content to sit there like a plant isn't good for much of anything.jollyreaperhttps://www.blogger.com/profile/05673007647719726846noreply@blogger.comtag:blogger.com,1999:blog-7494544263897150929.post-78162217281276779202012-06-01T16:52:10.615-07:002012-06-01T16:52:10.615-07:00=Milo=
Jollyreaper:
"Bacteria do not requ...=Milo=<br /><br /><br /><br />Jollyreaper:<br /><br /><i>"Bacteria do not require motivation to reproduce. Do plants have intent when moving to light?"</i><br /><br />Bacteria and plants are not intelligent.<br /><br /><br /><i>"We can program robots to replicate the swarming behavior of insects. Is there motivation in a bug? Of some kind or else it would just sit there. But is it more complex than a clockwork toy? I don't know."</i><br /><br /><a href="http://en.wikipedia.org/wiki/Bee_learning_and_communication" rel="nofollow">Maybe this will help.</a><br /><br />I was going to follow that with a really cool anecdote, but I can't find a source now so I don't want to risk it.<br /><br /><br /><i>"One school of thought is that all higher thought is about motion."</i><br /><br />I disagree.<br /><br />Thought only has an evolutionary advantage if you can move, because there's little point to thinking if you can't act on your thoughts. However, thought without motion can still exist. It's just useless.<br /><br />AIs, again, aren't constrained by natural evolutionary viability, so there's really no reason you need to give them the ability to move as a prerequisite for making them intelligent. (Except that not doing so would be cruel. ...Or would it? If an AI doesn't have human emotions, it may not necessarily be able to feel claustrophobia or boredom.)<br /><br />Additionally, AIs have potential ways to interact with the world without physically moving anything macroscopic - for example, by acting over the internet, which is plenty large and varied enough to provide a stimulating environment for an AI child to grow up in.Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-7494544263897150929.post-21726109799108914572012-06-01T16:00:02.708-07:002012-06-01T16:00:02.708-07:00At what point is intention needed in life?
Bacter...At what point is intention needed in life?<br /><br />Bacteria do not require motivation to reproduce. Do plants have intent when moving to light? Autotrophism. Time-lapse vines appear animalistic as their tendrils whirl about seeking purchase. <br /><br />We can program robots to replicate the swarming behavior of insects. Is there motivation in a bug? Of some kind, or else it would just sit there. But is it more complex than a clockwork toy? I don't know. <br /><br />At some point we get up to mammals that can show emotions. Fear, loyalty, love, hate. It's not projection on our part. <br /><br />At what point along the line from bacterium to human do we say yeah, this is consciousness? Despite the massive computing power, where does Deep Blue compare to a mouse or an ant or a paramecium? <br /><br />One school of thought is that all higher thought is about motion. Primitive animals with mobile and sessile life stages reabsorb the brain when it's no longer needed. According to this school of thought, higher consciousness is the result of selection for superior navigation of complex environments. <br /><br />I don't have any answers, but I do think that until we come up with the trick of consciousness, AIs will remain like classic genies. That also means that you can keep them from taking over the story as well as civilization.jollyreaperhttps://www.blogger.com/profile/05673007647719726846noreply@blogger.comtag:blogger.com,1999:blog-7494544263897150929.post-35046599249734450692012-06-01T14:29:19.803-07:002012-06-01T14:29:19.803-07:00=Milo=
More thoughts on emotions an AI is likel...=Milo=<br /><br /><br /><br />More thoughts on emotions an AI is likely to have.<br /><br />If a machine is ordered to perform a task, it simply attempts to do that task. It doesn't care if it succeeds or fails, and it won't try to embellish the given instructions in any way. It just reports the facts.<br /><br />If an AI is ordered to perform a task, and its initial attempt fails, it would most likely be expected to analyze the source of its failure, figure out how to fix/circumvent it, and try again.<br /><br />If an AI is to be motivated to continue trying to solve a difficult problem, actually WANTING to succeed rather than giving up if the operators' instructions are insufficient to complete the task, this suggests emotions like frustration and relief.<br /><br /><br /><br />Thucydides:<br /><br /><i>"One of the arguments that an AI will think faster than a human being is electrical signals are faster by orders of magnitude than electrochemical ones."</i><br /><br />I would expect the processing time in a single neuron/transistor/etc. to be more of a bottleneck than the signal transmission time between neurons/transistors/etc.<br /><br /><br /><br />Tony:<br /><br /><i>"I guess we are going to have to go into the gory details of computer organization. [bla bla bla]"</i><br /><br />I know all that. My point was that, just as real computers use binary underneath but aren't necessarily programmed with binary in mind, computer-based AIs might work in binary at their lowest levels but would have no particular reason to CONSCIOUSLY think in binary.
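For what it's worth, the 1,000,000× figure quoted in this exchange is roughly the ratio of raw signal propagation speeds. A back-of-envelope sketch, using approximate textbook values that are my assumptions rather than anything from the thread (and, per the reply above, raw propagation speed is not the whole story):

```python
# Rough comparison of signal propagation speeds (approximate figures,
# assumed here for illustration only).
nerve_speed = 100.0        # m/s, fast myelinated axon (unmyelinated ~1 m/s)
electrical_speed = 2.0e8   # m/s, roughly 2/3 of c in a typical conductor

ratio = electrical_speed / nerve_speed
print(f"electrical / neural speed ratio = {ratio:.0e}")  # prints 2e+06
```

The per-element processing time (neuron firing rates, instruction and memory latencies) can dominate wire delay, which is why the speed ratio alone does not settle how fast an AI would "think".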
"One of the arguments that an A...Thucydides:<br /> <br /><i>"One of the arguments that an AI will think faster than a human being is electrical signals are faster by orders of magnitude than electrochemical ones. I believe the actual figure of merit is 1,000,000 times.<br /><br />The internal architexture of an AI is speculative, but if it is in any way based on current Silicon technology, the signal paths inside each chip will be far sgorter as well."</i><br /><br />But current and foreseen silicon computers cannot achieve the realtime processing density present in a human brain. A two inch square CPU, even a pipelined, multi-core one, processes a single piece of data at a time, per core. To process that single piece of data, the computer has to run through numerous instructions, even to a simple value comparison on two integers. And the signal pathways that are traveresed for <i>each</i> of these instructions are considerably longer than the pathways inside or even between neurons. To get to some address in memory cache, for example, one has to go to the cache controller, several milimeters away, give it the address, then wait for the data to come back from some random place on several square millimeters of cache space. If the data's not in cahce, one may have to go several inches away to a memory chip, or several inches in another direction, plus wait up to a tenth of a millisecond for a mechanical read of a spinning disk. (Solid state memory will help remove that chokepoint, but it is still a very large tract of silicon across which signals have to travel.) So, maybe, you can reside real time analogues of several thousand or millions of neurons on a single computer. But then there are 100 billion neurons...Tonynoreply@blogger.comtag:blogger.com,1999:blog-7494544263897150929.post-89109404366943905492012-06-01T12:26:40.468-07:002012-06-01T12:26:40.468-07:00Milo:
"Most computers today don't think ...Milo:<br /><br /><i>"Most computers today don't think in binary. Every programming language more advanced than a Turing machine, including assembly language for every CPU ever, offers its users operators for integer and/or floating point arithmetic. Data space is also often used to store pointers. Rarely if ever does a program care about the underlying binary representation of a pointer."</i><br /><br />I guess we are going to have to go into the gory details of computer organization. All of the things you talk about are necessary abstractions, but they are ultimately <i>just</i> abstractions of the computer's capabilities. A pointer is a real binary memory address, which the program has to store somehow, in order to reference (and dereference) it later. Maybe the programmer who declares a pointer knows only theoretically that it is a memory address, but the underlying program compiler has to kow how to get that address from the OS, how to store it in memory, and how to present it back to the OS in a recognizable form, using the correct OS function.<br /><br />Now, at that level, as long as you have an adequate compiler, that may be enough. But not all processor types or generations present all of the same capabilities. As mentioned earlier, it's entirely possible -- and much more than likely -- that the initial AI software is going to count on special hardware features that are simply unavailable on any other machine, in any language. Then all of the abstractions are meaningless. Imagine a programmer writing an instruction to do some operation X, that requires some hardware capability Xa. If the target computer doesn't have the have capability Xa, the program will simply fail to compile, because the compiler doesn't know how to translate operation X into machine code. It can't be made to know, because there is no real world capability ot perform that operation.<br /><br /><i>"Well, human brains actually are really good at math. 
Proper motor control requires quite sophisticated calculationd. Problem is, we can't tap into this mathematical ability for concious use."</i><br /><br />Nope. Human brains are quite good at learning environmental parameters, and rpeating actions that are found to be satisfactory in the attainment of some goal. To learn to throw a ball accurately at different distances, it takes practice, because the brain can't analyze the proble in real time. It can only build up a database of experiements, figure out wich ones were successful, and remember the parameter arguments for a given situation. All the brain does after that is recall what the proper motor instructions are for a given situation.Tonynoreply@blogger.comtag:blogger.com,1999:blog-7494544263897150929.post-36813470585354998702012-06-01T12:22:24.490-07:002012-06-01T12:22:24.490-07:00One of the arguments that an AI will think faster ...One of the arguments that an AI will think faster than a human being is electrical signals are faster by orders of magnitude than electrochemical ones. I believe the actual figure of merit is 1,000,000 times.<br /><br />The internal architexture of an AI is speculative, but if it is in any way based on current Silicon technology, the signal paths inside each chip will be far sgorter as well.Thucydideshttps://www.blogger.com/profile/09828932214842106266noreply@blogger.comtag:blogger.com,1999:blog-7494544263897150929.post-81408878388008389672012-06-01T08:40:53.265-07:002012-06-01T08:40:53.265-07:00=Milo=
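Tony's ball-throwing description amounts to a lookup table built by trial and error rather than real-time analysis. A toy sketch of that idea under idealized projectile physics; the model and function names are mine, purely illustrative:

```python
import math
import random

G = 9.81      # m/s^2, gravitational acceleration
SPEED = 10.0  # m/s, fixed throw speed

def throw_distance(angle_rad):
    # Ideal projectile range for a throw from ground level.
    return SPEED**2 * math.sin(2 * angle_rad) / G

# "Practice": try random angles, remember which outcome each produced.
memory = {}
random.seed(0)
for _ in range(500):
    angle = random.uniform(0.0, math.pi / 2)
    memory[throw_distance(angle)] = angle

def recall_throw(target_distance):
    # No analysis in real time: just recall the stored experiment whose
    # outcome was closest to the situation at hand.
    best = min(memory, key=lambda d: abs(d - target_distance))
    return memory[best]

angle = recall_throw(5.0)
print(round(throw_distance(angle), 2))  # close to 5.0 after enough practice
```

The "brain" here never inverts the range equation; accuracy comes entirely from the density of remembered experiments, which is why practice at a new distance is needed before throws there become reliable.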
Freefall is also interesting because rob...=Milo=<br /><br /><br /><br />Freefall is also interesting because robots are able to make themselves effectively immortal, but many reject this for religious reasons.<br /><br /><a href="http://freefall.purrsia.com/ff400/fv00380.htm" rel="nofollow">1</a><br /><a href="http://freefall.purrsia.com/ff400/fv00381.htm" rel="nofollow">2</a><br /><a href="http://freefall.purrsia.com/ff400/fv00382.htm" rel="nofollow">3</a><br /><a href="http://freefall.purrsia.com/ff400/fv00383.htm" rel="nofollow">4</a><br /><br /><a href="http://freefall.purrsia.com/ff2000/fc01968.htm" rel="nofollow">even better link</a><br /><br />Personally I disagree with their viewpoint. It doesn't matter if the "backup" is really you or not in some philosophical sense - given the choice, living on as not-quite-yourself-but-incredibly-close is still better than not living on at all, much like people "living on through their art" but taken to a whole new level.<br /><br />I guess the "imagine if your back up got into the wrong hands" point is scarily valid, though.Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-7494544263897150929.post-52315507270007020432012-06-01T08:30:08.698-07:002012-06-01T08:30:08.698-07:00=Milo=
Damien Sullivan:
Ah. We're in agre...=Milo=<br /><br /><br /><br />Damien Sullivan:<br /><br />Ah. We're in agreement then.<br /><br />As for hyperrational robots: <a href="http://freefall.purrsia.com/ff1400/fc01315.htm" rel="nofollow">here</a>.<br /><br /><br /><i>"That they must think in binary,"</i><br /><br />Most computers today don't think in binary. Every programming language more advanced than a Turing machine, including assembly language for every CPU ever, offers its users operators for integer and/or floating point arithmetic. Data space is also often used to store pointers. Rarely if ever does a program care about the underlying binary representation of a pointer.<br /><br /><br /><i>"think faster than humans"</i><br /><br />Software-based AIs will think as fast as their processors let them, which may be faster or slower than humans, depending on just how many processor cycles the AI needs and on how fast non-intelligent computers of the future are. At a zeroth-order guess, though, AIs won't think multiple orders of magnitude faster than us.<br /><br />For AIs with unique hardware, who can tell?<br /><br /><br /><i>"that an AI will *naturally* think in binary, which is as valid as thinking that we must be good at adding up lots of weighted inputs because our neurons are,"</i><br /><br />Well, human brains actually <i>are</i> really good at math. Proper motor control requires quite sophisticated calculations.
Problem is, we can't tap into this mathematical ability for conscious use.<br /><br />Imagine sitting at a computer which has a really fast processor, but no good programming language compilers or interpreters installed, so you're incapable of doing anything that one of the already-installed programs doesn't offer as a feature - or, at best, can jury-rig other tasks as really slow and inefficient shell scripts.<br /><br />AIs might or might not be able to correct this defect, and give their brains the ability to consciously request solutions to complex math problems that a computer can solve quickly. I'd certainly consider it worth trying.<br /><br /><br /><i>"be good at programming and hacking, which is as valid as thinking that we must be good at adding up lots of weighted inputs because our neurons are, or that we're naturally good at being neurosurgeons because we run on brains."</i><br /><br />Yup.Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-7494544263897150929.post-33401697886303589502012-06-01T08:14:54.426-07:002012-06-01T08:14:54.426-07:00Damien Sullivan:
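The "sophisticated calculations" hidden inside motor control can be made concrete with the idealized physics of throwing a ball. This is only a sketch under textbook assumptions I'm adding here (flat ground, no air resistance, launch and landing at the same height); real throwing is far messier, which is exactly why the brain solves it by practice rather than analysis. For an ideal projectile, range R = v² sin(2θ) / g, so the low-arc launch angle for a target at distance d is θ = ½ arcsin(g·d / v²).

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def launch_angle(distance_m, speed_m_s):
    """Low-arc launch angle (radians) to hit a target distance_m away,
    ignoring air resistance and height differences."""
    x = G * distance_m / speed_m_s**2
    if x > 1.0:
        raise ValueError("target out of range at this throwing speed")
    return 0.5 * math.asin(x)

# Example with made-up numbers: a 10 m/s throw at a target 5 m away.
theta = launch_angle(5.0, 10.0)
print(f"launch angle: {math.degrees(theta):.1f} degrees")  # about 14.7
```

The brain never computes an arcsine explicitly, of course - the point of the comment above is that it arrives at an equivalent answer by storing and recalling the results of many practice throws.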
"Or, rather than saying ...Damien Sullivan:<br /><br /><i>"Or, rather than saying 'must', I might say that the fallacies are that an AI will *naturally* think in binary, be good at programming and hacking, etc. Which is as valid as thinking that we must be good at adding up lots of weighted inputs because our neurons are, or that we're naturally good at being neurosurgeons because we run on brains. The one thing which is true is that an AI wouldn't naturally have human emotions, but then a computer doesn't 'naturally' have anything, including an OS."</i><br /><br />I mostly agree with this, but a strong AI is not likely to be <i>just</i> a computer, or <i>just</i> a software system. There may not be any "natural" features one could point to, but there are very probably many <i>necessary</i> ones, such as self-awareness, goal-seeking, arbitrary problem solving, etc. Might the combination of all of these necessary faculties lead to emotions, or cognitive blind spots, or whatever other pathology you might name? Who knows? There's certainly a lot I wouldn't rule out.Tonynoreply@blogger.comtag:blogger.com,1999:blog-7494544263897150929.post-78139804496085513552012-05-31T23:52:19.372-07:002012-05-31T23:52:19.372-07:00Or, rather than saying 'must', I might say...Or, rather than saying 'must', I might say that the fallacies are that an AI will *naturally* think in binary, be good at programming and hacking, etc. Which is as valid as thinking that we must be good at adding up lots of weighted inputs because our neurons are, or that we're naturally good at being neurosurgeons because we run on brains. 
The one thing which is true is that an AI wouldn't naturally have human emotions, but then a computer doesn't 'naturally' have anything, including an OS.Damien Sullivanhttps://www.blogger.com/profile/13321329197063620556noreply@blogger.comtag:blogger.com,1999:blog-7494544263897150929.post-59724746354647520102012-05-31T23:35:22.095-07:002012-05-31T23:35:22.095-07:00"First you say intelligence implies motives, ..."First you say intelligence implies motives, then you say self-awareness doesn't imply motives."<br /><br />You need motives to, you know, move (etymology) at all, to do anything. Not being catatonic implies some sort of motives or emotions or reflexes. Self-awareness doesn't imply any *specific* motives, though. Just something that makes you respond to some input.<br /><br />There are a lot of fallacies about AIs. That they must think in binary, be good at math, programming, or hacking, think faster than humans, be hyperlogical, be emotionless, have a will to power or freedom... In truth the only constraint is that we have to think of it as intelligent, which probably means holding a conversation and being somewhat creative (though really, being able to come up with new sentences in conversation is creative enough.)<br /><br />IMO reading Oliver Sacks or other neurological case studies is a more useful guide to what AI might be like, especially in the early days when the bugs are many. A mix of smarts and "OMG how can it be so stupid". Titles like <i>The Man Who Mistook His Wife For A Hat</i> and concepts like blindsight. Not to mention all the cognitive fallacies humans are prone to; AIs could share them, or have entirely new ones of their own.
Meanwhile, perfect rationality is very computationally expensive.<br /><br />As for programming out emotional 'restrictions': "Damn it Jim, I'm an emergency medical hologram, not an intelligence programmer or would-be megalomaniac!"Damien Sullivanhttps://www.blogger.com/profile/13321329197063620556noreply@blogger.comtag:blogger.com,1999:blog-7494544263897150929.post-36693808860250423312012-05-31T20:52:50.070-07:002012-05-31T20:52:50.070-07:00Thucydides said:"While it may not escape to l...Thucydides said:"While it may not escape to live in your iPhone, I'm not going to entirely discount the idea that it could move from its dedicated "birth" platform to other platforms or a *distributed lifestyle across the Internet* (even given the various stipulations Tony has pointed out)."<br /><br />But then some of the other AIs that live traditionally and don't choose a 'distributed lifestyle' would rail against them on moral, ethical, legal, cultural, and religious grounds; citing 'community standards' and 'family values'...<br /><br />FerrellAnonymousnoreply@blogger.com