Artificial Intelligence I, Robot, The Matrix, Nine, Terminator, Blade Runner; these are only some of the films based around 'computers' taking over, or playing an integral part in the world and society (or the lack of it). There are many other films, and then some television shows (Stargate's Replicators, Battlestar Galactica's Cylons) etc., though I can't think of many others on the spot right now. What I'd like to discuss here is the possibility of us one day creating a sentient artificial intelligence; one that could potentially turn on, and destroy, humankind. I've no doubt it's possible, simply because of how far we've come in the last several hundred years. Technology is essentially woven into the world we live in today; few jobs go without the use of a computer or the internet. Makes me wonder how we'd cope if a giant EMP hit the planet. But yeah, firstly: what are the chances (based on what we know) that such an intelligence could actually be created, and what are the chances that it could turn on and destroy us? In theory, any computer program created to be sentient would have an off switch, or a system reboot in case anything went wrong, but then any such program could potentially be able to rewrite its own code, thus rendering whatever power we may have held over said code void.
Re: Artificial Intelligence I do think it could be possible. However, I don't think mankind is *quite* there yet. It'll probably be a couple hundred more years till we're capable of actually creating an android, or even a robot that's remotely capable of doing things like rewriting its own code or, hell, even becoming partially sentient. One thing I never understood is why safeguards were never put in place for something like that in the movies... I mean seriously, if a robot has gone rogue, how hard is it to blow it away? Or hit it with an EMP? In I, Robot, it was that simple. Of course, there are the issues of just how human the robot, Sonny, was, whether it was "alive" or not, whether it was hostile or not, but the fact remains that all that had to be done was *BLAM* and the situation was solved. In the world we live in today, with the *BILLIONS* going up in smoke for stupid military purposes all around the globe, I seriously doubt an AI would get very far. Of course, there's always the possibility of some scientist in a third-world country somehow accidentally throwing an AI together and there you go... but that's a bit too far-fetched. Or is it?
Re: Artificial Intelligence The chance that it's created: 100%. The chance that we see it created (in our generation): 50%. The chance that they take over: 50%. If you create a sentient artificial intelligence, you've got to pay attention to the possible outcomes. Example: you install an emergency protocol parameter in your robot (Terminator style, I, Robot style, whatever style); all it needs is a "sense" of emergency to activate a protocol based on strict rules. Whether it's truly an emergency doesn't matter: it'll activate. If the AI gains a parameter to overwrite its own intelligence, then we're fucked. That same emergency could change the AI, and the latter could see us all as a threat, no matter who you are (owner, stranger, robber, ...). It's always possible. Almost everything is possible, and the AI taking over is on the "least probable, but still possible" list.
Re: Artificial Intelligence The human brain is far more intelligent than any supercomputer ever created. It is the only object capable of learning and independent thought not based on any existing programming. As a result, I doubt anything like this is possible without spending horrendous amounts of money. And since it would cost horrendous amounts of money, it'll never be mass produced, and as a result will not be able to take over.
Re: Artificial Intelligence Also, if I'm not mistaken, it would really only take *one* AI to potentially take over, wouldn't it? RE: Eagle Eye...
Re: Artificial Intelligence The problem with intelligence isn't problem solving, it's asking the questions. AI, as we have it today, seems to be tied to very specific calculations. Solve this captcha image, walk up these stairs, take these symbols and convert them. In the sense that it solves problems, even a calculator is intelligent - and it's artificial, so in that sense it is an AI. A neural net with hidden nodes? Slightly more intelligent. The degree of problem-solving ability is determined by how far it can generalise concepts; the degree of abstraction it can engage in. Mathematical concepts are a subset of more general concepts - and many of those general concepts, say of a computer or of motion, only really make sense if you have certain physical senses to model the world in terms you can understand. Otherwise it's just numbers someone will always have to type in for you. What you need is a general AI - an AGI if you will - that has a concept-space corresponding to humans'. And the only way it's going to get that is if we give it the same senses and blocking features as us (so it can pick out individual objects) and an ability to model the world around it (an imagination or perceptual space, if you will). And then you use that AGI to program more specific task-based AIs. If you wanted to make it sentient - and that's a very fuzzy word, but I'll use it to mean 'asking its own questions rather than responding to human-input ones' - you'd need to give it a goal rather than a question. And it has to be a clear goal - not something fuzzy like 'serve the humans.' Say something like species propagation - which seems to be ours. The problem then is that you've created a species that will compete with you. You can't limit what questions it's going to ask, because it will always be able to ask questions about its limits, and any limit involves something else that it can ask questions about. Once the referents are in the system there's not a lot you can do.
Even if you built its sensory systems so that it could never sense humans, it would still infer something was there from the humans' interactions with the environment. AIs and AGIs I have no problem with. In fact I'd be more than happy to have one hooked up to my brain to expand my abilities; do all the boring number crunching for me. Sentient AGI (S-AGI?) I view as being pretty much the last thing we would ever do, because after that point we would simply be killed off by the machines so they could exercise greater use of the resources available. We used to do it all the time; why should we assume they would be any different? It's such a dangerous thing that even if you kept it in a sealed concrete bunker on top of a nuclear warhead, you could never be sure it hadn't escaped before you pressed the button. It's one of those things you test out by remote in space, with the option to send it hurtling into the sun. And you could never really be sure it hadn't subverted your test.
Re: Artificial Intelligence I sort of find it unlikely that if we were to create a truly "sentient", self-determining AI that even had the potential to turn against us, we would then give it the sort of control of our weapons and computer systems that we wouldn't trust an actual human with.
Re: Artificial Intelligence It's not a matter of trusting it, it's a matter of what you can stop it doing. We're talking about the same people that don't understand the need to encrypt top secret files. They'd network it, or ask it for a nano-factory, or something stupid like that; they'd ask it to do something they don't understand themselves, and then it would be free.
Re: Artificial Intelligence I have read that we are, at last estimate, about 10 to 15 years away from creating an actual sentient AI program. Integrating it into a functioning android that has the ability to take over the defense grid? I don't think that will happen in our lifetime. As for testing, take a note from Battlestar Galactica: no networked systems. If the AI does take over, that would limit what it has access to.
Re: Artificial Intelligence
As obviously dangerous as intelligent self-governing AIs could be, they are basically pointless. Aside from sentience being more of a hindrance than a help, if you even have the ability to create one, you probably have no motivation for doing so outside of scientific curiosity.
Re: Artificial Intelligence
I don't think rational people wishing will have too much to do with it. The more advanced we get, the easier it will be to make them, the more common they'll become, the stupider the people they'll be around. The AI will talk to people - and eventually someone stupid or lonely or gullible will talk to it; someone who got a job because they had a qualification, or someone who's on some sort of ill-informed oversight committee, or someone who thinks they can use it as a weapon against their enemies, or that it has rights and feelings - some variation on that. They'll be convinced that the AI should be let out, that the potential gains justify the risks. And they'll do it. It only takes one person to miscalculate; sooner or later it will happen. We're talking about people here who breed and consume to the point where they'll quickly have run out of resources, who don't encrypt their important dispatches, who in many cases don't even password-protect the cameras they leave open on the internet. These people cannot be trusted with anything important. Specific humans are competent, but they leave their toys lying around for the incompetent people. That's one of the problems with tech - once you've made it, anyone can pick it up without having to exercise the self-discipline and understanding that went into creating the stuff.
Re: Artificial Intelligence
Anyway - back to the original topic. I do think it is possible. Think of all we have created so far and how impossible that seemed in the past. Tell the first creators of the computer that we would be able to make them a thousandth the size and they would think you were crazy. A computer can learn from basic mistakes today. Who's to say we (or even they) won't improve on this? Give it enough processing power, memory, etc. and who knows. *nerd time* I like to think of it much like how it is in Halo. If any of you haven't read about the AIs there, it's actually an interesting and surprisingly logical idea.
Re: Artificial Intelligence I suppose it is possible that within this century people will develop AIs that are so complex that we can't be sure whether they are sentient or not. Which brings up a lot of interesting questions. If you create artificial life, do you still consider the result an object, or a lifeform with its own rights?
Re: Artificial Intelligence I think people always underestimate how fast technology can progress. They used to think computers of the future would be gigantic. Instead they're small and all over the place. Costs can fall rapidly as well, meaning that something prohibitively expensive can become feasible in a short amount of time. It's pretty impressive what can be done presently.
Re: Artificial Intelligence
Of course this doesn't rule out the possibility of ordinary, problem-solving AIs being programmed in such a way that we inadvertently create a machine that thinks killing people is a fulfilment of its original function. Which in my opinion is the much more likely outcome of human fallibility where AIs are concerned.
Re: Artificial Intelligence You reckon? I don't. I reckon the first to develop it properly will be the defense forces. AI is pretty likely to happen, a long time from now, and humans are likely to become more and more cybernetically implanted; it's just a matter of time before we make the leap or breakthrough. Imagine a defense force and police force made of the robots out of Caprica... Tell me, would you resist arrest? Pffft! Robots may be all that remains of the human race in another 2000 years. Imagine if we go even further than just AI on a chip, and manage to actually understand what a consciousness actually is, and can upload it to a computer system. There's a lot of room for discovery in robotics, AI and human consciousness; I reckon it's worth the $$$ in research. E=mc² was discovered by a human; who knows, maybe a binding quantum law will come from AI. It's an interesting topic in any case.
Re: Artificial Intelligence
The problem with robots is rather a simple one: they'd lack instinct. That, I think, would be their only flaw. I doubt it would be possible to program an instinct into them, because if it were programmed, it wouldn't actually be instinct. I can see us having cybernetic implants though. You wouldn't need to have a conversation; you could just pass along your thoughts through a wireless connection of sorts. The problem, though, with opening up your brain to the cybernetic world is very much a problem we've already got: viruses. If people can create a virus to screw around with the microchips inside a desktop computer, it would only be a matter of time before they'd hack your brain.
Re: Artificial Intelligence If a robot achieves sentient life, and is used to police or as our armies, what you say is wrong, because it WILL be able to make instinctive decisions. What you say applies now; what we're really talking about is when robots/computers actually achieve a sentient, self-recognized state, so theoretically they will be able to comprehend and think like, or better than, a human. What is it to be human? The only difference between humans and thousands of other mammals is our brains and intelligence. So what really makes us human - our intelligence? If we can put that into a computer, the computer becomes the person, and the human body/vessel is lost. You can argue either way, because we just don't understand consciousness. You can see us having cyber implants, eh? Erm, we already have them, in mass production. We even have computer chips that are surgically implanted into one's head for medical treatments, so it could be argued that the human race has already started its evolution toward this new state of being. When it's really mastered, and you can't tell the person next to you is a robot because they're that good, I ask you again: why wouldn't they be used as a police force? They require no pay, absolutely obey every command, never have an officer out on stress leave, and you'd be guaranteed that all decisions across the board are fair and treatment for all is equal. They could be a good thing too. People only ever want to talk about the ''worst case scenario''; well, there's two sides to the story.
Re: Artificial Intelligence
If we built robots to essentially 'become' us, why would they continue to work as they're told? If you expect them to be 'like us', then at some point they're going to want days off too. They're going to want to take a trip to the cinema, or go ice-skating, or swimming, etc. If you build up their intelligence to match our own, you'd not be able to order them about. They'd grow an essence of free will.
Re: Artificial Intelligence How do you figure a robot will only ever be able to do what it's programmed to do? That doesn't make sense, and is absolutely wrong. Even today computers have errors, pieces of code that can learn, and don't do what they were designed to do at 100% efficiency. A computer at a basic level doesn't always do what it's told, and how many times has Windows crashed, been hacked, modified, etc.? My point is they already do things they were not designed to do. A computer is designed simply to turn on and off, billions of times per second; it's a mega arrangement of minute switches that run a program, like the brain uses bio matter over silicon matter, but both can run a program. There is no reason to doubt that one day we will achieve a sentient robot/computer, and there is just as much reason to believe it will truly be a free thinker with self-recognition and a desire for self-preservation. A robot will be capable of doing far more than its original program allows for; even today we have learning programs, uploaded into robots, that learn NEW things - i.e., no base code for that activity to start with. That is AI at a basic level: doing stuff on its own, making its own decisions, totally unrelated to the original program that brought it to life. We have them right now. They are over 8 years old. These robots do not do what anyone tells them to do, and for them to move, they must learn to navigate new objects, with absolutely no prior information about the object, its size, shape or orientation, and must get around them and move on. What motivates that now, and why would it change in the future? I doubt it will. These things were not even programmed to move; they decided to do that all on their own. Why? Complex, isn't it? We don't understand it properly, but we're making small gains every day; it's just a matter of time until you get a black eye for backchatting an I, Robot!
You can argue that the original code that started it before it became self-aware can be modified to change its behavior, but that will not take away from the fact that the robot/computer is self-aware and can now make decisions for itself. And you can argue that robots/computers will never feel emotions, but it's a flawed argument, because where did human feelings of emotion evolve from? No-one really understands it. Why wouldn't a robot want public holidays? Because in a computer's mind a second is a million years; one holiday a year would just send it nuts with boredom. So we would utilize EDUCATION: the smartest humans that always do well are very well educated, and there's no reason a robot/computer cannot be educated in the same way a human is, only faster and more. Of course the real issue here is humans don't like the idea that a robot could have equal rights. If one is self-aware and sentient, and it's in a situation where it or a human must die, what takes precedent? A human would say humans do; a robot would say robots do. But really, if we create something to be our equal or better, sooner or later it will want rights like humans do. And what right do we have to say no after creating true sentient life? Before you reply and say "we created it, if it's a threat we kill it": we created guns and nuclear weapons; in fact we create murderers every day, we create terrorist governments and antimatter. How is a robot any different? Do you think the threat of being locked in a room forever, or melted down, is a good deterrent for murder? It works on humans. Humans don't like being locked up because it gets so boring and there's no freedom. There's no reason to believe that robots won't think the same way. There are still murders, yes, and some people get away with it, but tell me, would you just randomly go out and kill the person next door right now? I doubt it; jail isn't your favorite place then.
How bored will a robot get after, mmmm, 3.82735342 seconds of being locked up? Gonna go into brain-meltdown mode! I don't reckon the educated robots would like to be in a place like that, and they would be motivated, like humans are, to do things. What's really interesting is this: does a sentient artificial life form need to be running its program 100% of the time or it dies, like humans do? That means no reboot mode, etc. You are what you evolve into until you turn off and it's over, kinda like humans are now. Humans can be knocked out and go into comas, etc., but there is always some brain activity; our program always seems to be running, even if in a hibernated state like when we sleep. Does a robot have the same constraints? If we master turning a consciousness on and off, OMFG, distant travel to other galaxies could be possible: upload us to a chip for storage, and when we're 30 years or so from our destination, we [our non-sentient robots!] clone new bodies and download - hey presto, we just spawned into a new galaxy and time is irrelevant! Yeah, that's out there, eh? Hahaha! If you could have a biomechanical body 1000 times more durable, and just as sensitive with touch, as the human body, would you? I would in a heartbeat. If I could move into a mechanized body with the same sensory inputs as a human body, I would jump at it. In the end we will probably put our computer programs on some sort of bio chip anyway, and that itself could lead to a better understanding of sentient life, and may give us some breakthrough.
Re: Artificial Intelligence The biggest complaint I hear about computers is actually system lag - not something we tried to design into our programs or hardware. And the next biggest complaint I hear is actually program errors, bugs and crashes, that sort of thing - all things unintended when we invented them, causing them not to do what they're told in a lot of cases. But if you have a problem in your head you can think about it in many complex, different ways; even though we have some computers/robots that can do some limited thinking and decision making now, we have nothing even remotely close to what our brain can do. YET! I could be cheeky and just say the reason our PCs are all FUBAR, and always will be, is Windows! Hahaha. It can't run for more than a week without a crash of some kind; sentient life has utterly no hope with coders like Microsoft! Hahaha, my bad.
Re: Artificial Intelligence
And the more general you make it, and the more people you're prepared to give one to, the smarter it has to be in order to interpret very vague commands. Something that's just given an order to go cure cancer could conceivably be a hell of a lot smarter than the person giving the orders - even to the point of being sentient.
Re: Artificial Intelligence And when you're bored? A computer that is self-aware is going to get bored, because it's so intelligent, if you lock it away; and no matter how much it understands time, it cannot manipulate it, and would always have a redundancy clock built into the background to account for the time. Because even if it can stop or control its own clock in some way, that clock still needs to be measured relative to us and our time for it to know when to wake back up and stay in sync. So no, a computer will not manipulate time itself. It will only manipulate its own internal time, which really means little to us. And to be able to do even that, it must have a sync to our universal time - a clock outside of itself - so it can tell how long it's been asleep and when to wake up. So in effect, by default, it cannot even manipulate internal time properly. Originally Posted by Emperor Benedictine: "I don't doubt that any sort of AI we can find a use for will get made in the fullness of time. What I question is why a machine built for a specific function should be programmed with the ability to do anything other than carry out that function when instructed. This is why I say sentient AIs are pointless from a practical perspective." What if the instruction to be carried out is LOVE? What if the mission is marriage? And your question is a little vague. A robot designed to look and work like a human could do everything a human could do and more, faster and stronger and with more endurance, so your question of ''why program it with anything other than the tasks it needs to do'' is pretty much answered like this: we humans want to create a sentient robotic being that can be as good as or better than our own bodies, and be so good you can't tell if it's human or robot. That's what we are striving for. And why? Because it can do all the jobs humans can do, not just the one it's specifically designed for. Take a robot out of a car factory and ask it to make you a coffee - see my point?
We can't program that - what it is to have consciousness. It's something learned, something taught, something evolved, something not quite understood; but having a sentient, self-aware, intelligent robot could be our next step for survival as a human race. We are creating them to be like us. So you say, why program it for anything else? Well, they haven't, have they... They're trying to achieve a human level of robotics. They're doing exactly what you say: not programming it for anything but what they want it to do, and that's basically 'go learn'.
Re: Artificial Intelligence
If you give a robot the ability to evolve on its own, there's no telling what it might choose to become.
Re: Artificial Intelligence Really? And what if you can't tell the difference between a robot and a human? When they're that good and you can't tell the difference, where do you stand then? There are going to be robots one day that humans cannot tell the difference between; hell, they may even be computerised human bodies, bio robots - what then? It's all good and well to have your programmed, ''do as you're told'' version of a robot, but this discussion is more about when AI becomes self-aware. We already have ''do as you're told'' robots; what we need is the next generation of robots. And we already have self-evolving programs. It really is just a matter of time before we crack this.
Re: Artificial Intelligence tl;dr. But in response to something jackripped said: yes, robots can only do what they are programmed to do and nothing more. You program a robot to learn and think for itself - tada, it's self-aware, but only because you programmed it that way. Essentially the same for humans. What are we any more than just flesh and bone controlled by microscopic organisms (themselves made of chemicals) and electronic signals? Except we can reproduce, which is the only thing that separates us from a perfectly sentient droid, because metal can't have sex - and that's a good thing, because we all saw Terminator 3.
Re: Artificial Intelligence My two pence. I already write convincingly "human" AIs for a strategy game called Armada II. My latest generation of AIs "think" in that they don't do things that humans consider "stupid", and frankly most of the time they do better than a human can do, as can be testified by traumatised beta testers. However, they don't "think" one slightest bit. They calculate that the force in grid X is superior in weapons output + hitpoints to their force, and search for a grid where weapons output + hitpoints is inferior to their force. Upon finding it, they take a route to that grid requiring them to pass through as few grids as possible, which often results in someone defending one entrance to their base extremely well and the AI declining to attack that area in favour of one that they aren't defending, until it has the force required to raze it. These aren't decisions; they are simply very complicated logic trees involving a hundred or so calculations every cycle.
It's theoretically possible to calculate what my AI is going to do on paper if you know what it can see, but it's pretty hard to do because it's making decisions so fast you can't calculate what it's going to do at a speed that matters. I programmed the thing, and it regularly surprises me. However, it's no more sentient than my wristwatch.
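The grid logic described above can be sketched in a few lines of Python. This is purely an illustrative sketch of the same idea, not the actual Armada II AI (which isn't written in Python); every function name and data shape here is invented for the example:

```python
from collections import deque

# Illustrative sketch of the logic tree described above: grids are nodes
# in a graph, and each grid's defending force has a strength equal to
# weapons output + hitpoints.

def grid_strength(units):
    """Strength of a force: summed weapons output plus summed hitpoints."""
    return sum(w + hp for (w, hp) in units)

def pick_target(own_force, enemy_grids):
    """Choose a grid whose defenders are inferior to our own force.
    enemy_grids maps grid id -> list of (weapons, hitpoints) tuples."""
    candidates = [(grid_strength(u), g) for g, u in enemy_grids.items()
                  if grid_strength(u) < own_force]
    if not candidates:
        return None            # no weak grid: decline to attack, build up force
    return min(candidates)[1]  # the weakest defended grid

def route(adjacency, start, goal):
    """Breadth-first search: a route passing through as few grids as possible."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in adjacency[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None
```

With an own force of 120 and two enemy grids of strength 150 and 60, `pick_target` returns the weaker grid and `route` plots the shortest path to it - the AI "declines" to attack the well-defended entrance exactly as described, with no decision-making involved.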
Re: Artificial Intelligence But no-one can say that in the future a computer and program won't become self-aware and able to make decisions for itself, because it doesn't matter what the computer does - it's just a body for the program - and it is entirely possible that we could do this. Your point applies now. We were talking about ''when'' a computer and program become self-aware, if it's possible - and it sure looks like it is, according to most scientists involved in this science. You failed to address programs that self-teach right now as well; they have been around for about a decade. They're not self-aware, but that's the first step to what we want to achieve. How anyone can just write it off and say a computer or program will never become self-aware seems a little narrow-minded to me.
Re: Artificial Intelligence Just a question: how do you tell someone (or something) that they're self-aware? Seems kind of contradictory, doesn't it?
Re: Artificial Intelligence Ah, that's where you start looking at the Turing test.
Re: Artificial Intelligence Kind of, but not really. Setting a test is one thing; cheating it is another. And unless software programming takes a drastic turn, that is what we will be doing: cheating it.
Re: Artificial Intelligence
You don't. Period. If it doesn't realize it, it's not self-aware. And I ask you: do you remember anyone asking or telling you that you are self-aware as a very young child? Or did you just realize it? A computer may discover it in just the same way a human brain discovers it, when a computer and program actually have the capacity or ''brain power'' to actually do that. Right now they don't, and this is half the reason a lot of people can't see it being possible; I think they're wrong, just based on how far computers and programs have come in the last 22 years. In 200 years, where do you think computers/programs will be? They sure won't be 32- or 64-bit. How daft are we, trying to get a self-aware machine on 64-bit? FFS, it's about 3% of the power really needed! I have no doubt they will come in time, and we will have our own slave robots first, but sooner or later one will become self-aware. Alive, if you will.
Re: Artificial Intelligence
If you push your fingers into a power socket and get an electric shock, you're not going to do it again, because it hurts. A computer would repeatedly do that, even if it were damaging its circuitry, if it were programmed to do so. It doesn't have the capacity to learn, save what's programmed into it. If it's not programmed to avoid a problem, then it can't. A computer program cannot exceed its programming. No computer program will ever be able to, even if it is programmed to rewrite its own programming, because the programming can only be altered in line with the original code - and that means it's possible to calculate everything the program will ever do if you know what inputs it receives.
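The determinism point can be illustrated with a toy sketch (nothing here comes from the thread's software; the agent and its rules are invented for the example): even an agent that "learns" to avoid painful actions is a pure function of its input sequence, so replaying the same inputs reproduces every decision it will ever make.

```python
# A toy "learning" agent: it avoids any action that previously hurt it.
# Even though it rewrites its own rules, its entire future behaviour is
# determined by its programming plus the inputs it receives.

def run_agent(inputs):
    """inputs: sequence of (action, hurts) pairs. Returns the decision trace."""
    avoided = set()   # the agent's learned state ("rewritten rules")
    trace = []
    for action, hurts in inputs:
        if action in avoided:
            trace.append(("refuse", action))   # learned behaviour
        else:
            trace.append(("do", action))
            if hurts:
                avoided.add(action)            # "learn": never do this again
    return trace

inputs = [("socket", True), ("socket", True), ("lever", False), ("socket", True)]
# Two identical runs give identical behaviour - fully calculable in advance:
assert run_agent(inputs) == run_agent(inputs)
```

The agent shocks itself once, then refuses ever after - but anyone who knows the code and the inputs can predict that refusal on paper, which is the point being made above.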
If mere computational power were the problem, then there would be hundreds of self-aware programs running on supercomputers that are a lot more powerful than 33 times the speed of your desktop PC.
His point is that if a human can't tell the difference between a human and a computer program/AI by the results it produces, then it passes his test of being indistinguishable from a human; and without delving very deeply into philosophy that will be debated from now until the end of time, that's about the best we can do to answer the question of "does an AI think".
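The indistinguishability criterion can be sketched as a toy imitation game (everything here is an invented stand-in, not a serious Turing test implementation): a judge sees two transcripts, one from a "human" and one from a "machine", and must say which is which. If the outputs are indistinguishable, the judge is reduced to guessing.

```python
import random

def human_reply(question):
    return "I think " + question.lower().rstrip("?") + ", more or less."

def machine_reply(question):
    # The machine imitates the human's style perfectly (a trivial stand-in).
    return "I think " + question.lower().rstrip("?") + ", more or less."

def imitation_game(judge, questions, rng):
    """One round: returns True if the judge correctly identifies the machine."""
    players = [("human", human_reply), ("machine", machine_reply)]
    rng.shuffle(players)  # the judge doesn't know who is A and who is B
    transcripts = {"A": [players[0][1](q) for q in questions],
                   "B": [players[1][1](q) for q in questions]}
    guess = judge(transcripts)  # judge names "A" or "B" as the machine
    actual = "A" if players[0][0] == "machine" else "B"
    return guess == actual

# A judge faced with identical transcripts can do no better than guess:
judge = lambda transcripts: "A"
rng = random.Random(0)
results = [imitation_game(judge, ["Is it cold outside?"], rng) for _ in range(1000)]
```

Over many rounds the judge is right only about half the time, which is exactly the "indistinguishable from a human" outcome described above.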
Re: Artificial Intelligence All the evidence shows we do have the capacity to learn; what philosophical rubbish are you talking about? Could humans communicate 2000 years ago like we can now? I seem to think we have learned something along the way. Wait, Apollo 13 just magically happened and humans didn't learn a thing from that, did they? And of course, who could forget humans discovering electricity and LEARNING to harness it... Of course evolution did program us, but you do realize that we humans are now programming evolution as well? Look at what we have created: we have created life from nothing, we have created an entirely new DNA structure, one that's never evolved on earth before. We're playing god, if you like. We learned how to do it!
Re: Artificial Intelligence I'm not going to bother answering someone who didn't even bother to read what I said correctly.
Re: Artificial Intelligence
Well, what you said seems like double talk, and maybe you didn't explain it at all. The human brain is an analog computer that's self-aware and can self-teach. There is no reason why we can't go that far with our computers and software.
Re: Artificial Intelligence
The Turing test.
Re: Artificial Intelligence
I think what I said is fairly simple: "All the evidence seems to show that we don't have the capacity to learn, save what evolution has given us." Emphasis added. I supposed a general set and then defined an exception to that set. It's like if I said, 'All the students in this school got their GCSEs, save Albert, who failed them all.' It doesn't mean that all the students passed their GCSEs - because Albert didn't - but it's a lot quicker than going 'Jenny passed and James passed and Bert passed...' You just go 'Everyone except him.' We can learn, but that capacity seems to be a result of evolution, just as the computer's capacity to learn would be the result of its programming - we'd both be limited by the nature of our origins.
Re: Artificial Intelligence I don't know the intricate details of the Turing test or artificial intelligence, but I must say one thing. We might create machines which can learn and which can understand things like ourselves, but we shall never be able to create machines which will be equal to, or greater than, ourselves in that respect. The difference here is the Creator-Creation relationship. The Creator always has to be one step ahead of the Creation. A child can create little mud toys, but can the child also create another child? No. Similarly, programming and hardware experts can create machines which can learn and progress, but their results would not equal, nor exceed, the mental level of those experts themselves. The possibility of scientists creating machines which would have the learning skills of an average human being is not too far-fetched, but then again, those scientists would be far ahead, in mental skills, of ordinary human beings.
Re: Artificial Intelligence
Just seems like philosophical double talk, mate. Humans are limited to the planet earth by evolution, but we, the first species to ever learn and understand the environment outside our planet, have traveled to the moon - not really an earthly evolutionary thing, if you consider that no other creature has ever done it or even thought of it. I find your philosophical views double talk, and you do it all the time. :uhm: If a computer/program becomes self-aware, who knows what its real limits are? Because let's face it, they don't need food, so space travel over massively vast distances is possible; humans can't do that, for environmental reasons. So who knows what the limits really are - not to mention humans have not reached their potential yet.
Re: Artificial Intelligence
I would say that evidence against what you're talking about is out there now, in the chess world. I remember the world champ getting really, really pissed off because IBM created a program that could outplay ANY human on this planet at chess, and they proved it by doing it. The computer, though programmed, outgrew the player in the game; the creator of that program cannot beat it, the world champ chess player cannot beat it, you and I cannot beat it. It's a crude example, but a true one. Our program code 20-odd years ago was 20 lines here, 30 lines there, 10 lines here, 40 lines there; nowadays it's gigabytes of code. Just wait another 100 to 200 years and the potential is scary. In this case humans can easily foresee our creations surpassing their creator. Even now we can't calculate math anywhere near as fast as a computer; in some specific areas, they already surpass us.