FileFront Forums

FileFront Forums (http://forums.filefront.com/)
-   The Pub (http://forums.filefront.com/pub-578/)
-   -   Artificial Intelligence (http://forums.filefront.com/pub/433026-artificial-intelligence.html)

Flash525 January 17th, 2011 12:13 PM

Artificial Intelligence
 
I, Robot, The Matrix, 9, Terminator, Blade Runner; these are only some of the films based around 'computers' taking over, or playing an integral part in the world and society (or lack of it). There are many other films, and then some television shows (Stargate's Replicators, Battlestar Galactica's Cylons), etc., though I can't think of many others on the spot right now.

What I'd like to discuss here is the possibility of us one day creating a sentient artificial intelligence; one that could potentially turn on, and destroy, humankind. I've no doubt it's possible, simply because of how far we've come in the last several hundred years. Technology has essentially been woven into the world we live in today; few jobs go without the use of a computer or the internet. Makes me wonder how we'd cope if a giant EMP hit the planet.

But yeah, firstly: what are the chances (based on what we know) that such an intelligence could actually be created, and what are the chances that it could turn on and destroy us? In theory, any computer program that was created to be sentient would have a turn-off switch, or a system reboot in case anything went wrong, but then, any such program could potentially be able to re-write its own code, thus rendering whatever power we may have held over said code void.

Totes January 17th, 2011 12:22 PM

Re: Artificial Intelligence
 
I do think it could be possible. However, I don't think mankind is *quite* there yet. It'll probably be a couple hundred more years till we're capable of actually creating an android, or even a robot that's remotely capable of doing things like re-writing its own code or, hell, even becoming partially sentient.

One thing I never understood is why safeguards were never put in place for something like that in the movies... I mean seriously, if a robot has gone rogue, how hard is it to blow it away? Or hit it with an EMP? In I, Robot, it was that simple. Of course, there are the issues of just how human the robot, Sonny, was, whether it was "alive" or not, whether it was hostile or not, but the fact remains that all that had to be done was *BLAM* and the situation was solved.

In the world we live in today, with the *BILLIONS* going up in smoke for stupid military purposes all around the globe, I seriously doubt an AI would get very far. Of course, there's always the possibility of some scientist in a third-world country somehow accidentally throwing an AI together and there you go...but that's a bit too far-fetched.

Or is it?

Embee January 17th, 2011 12:30 PM

Re: Artificial Intelligence
 
The chance that it's created: 100%.
The chance that we see it created (our generation): 50%.
The chance that they take over: 50%.

If you create a sentient artificial intelligence, you've got to pay attention to the possible outcomes. Example: you install an emergency protocol parameter in your robot (Terminator style, I, Robot style, whatever style); all it needs is a "sense" of emergency to activate a protocol based on strict rules. Whether it's truly an emergency doesn't matter: it'll activate.
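That failure mode - a strict rule firing on anything that merely looks like an emergency - can be sketched in a few lines of Python (the sensor sources and threshold here are made up for illustration):

```python
# A toy "emergency protocol" wired to a crude smoke reading.
# The trigger only checks its rule; it cannot tell a real fire
# from burnt toast, which is exactly the point above.
def emergency_protocol(readings, smoke_threshold=0.5):
    # strict rule: any smoke level over the threshold means "emergency"
    alarms = []
    for source, smoke_level in readings:
        if smoke_level > smoke_threshold:
            alarms.append(f"lockdown triggered by {source}")
    return alarms

# a real fire and some burnt toast look identical to the rule
print(emergency_protocol([("kitchen fire", 0.9),
                          ("burnt toast", 0.7),
                          ("candle", 0.1)]))
# → ['lockdown triggered by kitchen fire', 'lockdown triggered by burnt toast']
```

Whether it's truly an emergency never enters into it; the rule has no concept of one.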

If the AI gains a parameter to overwrite its own intelligence, then we're fucked. That same emergency could change the AI and the latter could see us all as a threat, no matter who you are (owner, stranger, robber, ...).

It's always possible. Almost everything is possible, and the AI taking over is on the "least probable, but still possible" list.

Keyser_Soze January 17th, 2011 01:42 PM

Re: Artificial Intelligence
 
The human brain is far more intelligent than any supercomputer ever created. It is the only object capable of learning and independent thought not based on any existing programming. As a result, I doubt anything like this is possible without spending horrendous amounts of money. And because it would cost horrendous amounts of money, it'll never be mass-produced, and as a result will not be able to take over.

Totes January 17th, 2011 01:48 PM

Re: Artificial Intelligence
 
Quote:

Originally Posted by Keyser_Soze (Post 5455713)
I doubt anything like this is possible without spending horrendous amounts of money. And because it would cost horrendous amounts of money, it'll never be mass-produced, and as a result will not be able to take over.

Billions upon billions of dollars are basically burned on military defense. Nothing would surprise me when it comes to money anymore. The government has created thousands of fancy ways to kill and mangle a human body through the use of our tax dollars, so creating an AI sometime in the future is a distinct possibility regardless of the amount of money it takes.

Also if I'm not mistaken it would really only take *one* AI to potentially take over, wouldn't it? RE: Eagle Eye...

Nemmerle January 17th, 2011 04:27 PM

Re: Artificial Intelligence
 
The problem with intelligence isn't problem solving, it's asking the questions.

AI, as we have it today, seems to be tied to very specific calculations. Solve this captcha image, walk up these stairs, take these symbols and convert them. In the sense that it solves problems even a calculator is intelligent - and it's artificial so in that sense it is an AI. A neural net with hidden nodes? Slightly more intelligent. The degree of problem solving ability is determined by how far it can generalise concepts; the degree of abstraction it can engage in.
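The memorisation-versus-generalisation gap described here can be illustrated with a toy sketch (the "hidden rule" and the helper names are invented for the example): a lookup table only answers the exact questions it has already seen, while even a crude fitted model recovers the rule and extrapolates to unseen inputs.

```python
# Hidden rule behind the data: y = 2x. The lookup "AI" only memorises;
# the fitted line recovers the rule and can answer unseen questions.
data = [(0, 0), (1, 2), (2, 4), (3, 6)]

def lookup(x, table=dict(data)):
    # pure memorisation: None for anything it hasn't seen before
    return table.get(x)

def fit_line(points):
    # ordinary least squares for y = a*x + b
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return lambda x: a * x + b

model = fit_line(data)

print(lookup(10))   # None - never saw 10
print(model(10))    # 20.0 - generalises from the inferred rule
```

The calculator-level "intelligence" is the lookup table; each step up in abstraction is a model that covers more inputs than it was ever shown.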

Mathematical concepts are a subset of more general concepts - and many of those general concepts, say of a computer or of motion, only really make sense if you have certain physical senses to model the world in terms you can understand. Otherwise it's just numbers someone will always have to type in for you.

What you need is a general AI - an AGI if you will - that has a corresponding concept-space to humans. And the only way it's going to get that is if we give it the same senses and blocking features as us (so it can pick out individual objects) and an ability to model the world around it (an imagination or perceptual space if you will).

And then you use that AGI to program more specific task-based AIs.

If you wanted to make it sentient - and that's a very fuzzy word, but I'll use it as 'asking its own questions rather than responding to human-input ones' - you'd need to give it a goal rather than a question. And it has to be a clear goal - not something fuzzy like 'serve the humans.'

Say something like species propagation - which seems to be ours. The problem then is that you've created a species that will compete with you. You can't limit what questions it's going to ask, because it will always be able to ask questions about its limits, and any limit involves something else that it can ask questions about. Once the referents are in the system there's not a lot you can do. Even if you built its sensory systems so that it could never sense humans, it would still infer something was there from humans' interactions with the environment.

AIs and AGIs I have no problem with. In fact I'd be more than happy to have one hooked up to my brain to expand my abilities; do all the boring number crunching for me. Sentient AGI (S-AGI?) I view as being pretty much the last thing we would ever do; because after that point we would simply be killed off by the machines so they could exercise greater use of the resources available.

We used to do it all the time, why should we assume they would be any different?

It's such a dangerous thing that even if you kept it in a sealed concrete bunker on top of a nuclear warhead you could never be sure it hadn't escaped before you pressed the button. It's one of those things you test out by remote in space with the option to send it hurtling into the sun. And you could never really be sure it hadn't subverted your test.

Emperor Benedictine January 17th, 2011 04:56 PM

Re: Artificial Intelligence
 
I sort of find it unlikely that if we were to create a truly "sentient", self-determining AI that even had the potential to turn against us, we would then give it the sort of control of our weapons and computer systems that we wouldn't trust an actual human with.

Nemmerle January 17th, 2011 05:02 PM

Re: Artificial Intelligence
 
It's not a matter of trusting it, it's a matter of what you can stop it doing. We're talking about the same retards that don't understand the need to encrypt top secret files. They'd network it, or ask it for a nano-factory, or something stupid like that; they'd ask it to do something they don't understand themselves and then it would be free.

Anlushac11 January 17th, 2011 06:49 PM

Re: Artificial Intelligence
 
I have read that we are, at last estimate, about 10 to 15 years away from creating an actual sentient AI program.

Integrating it into a functioning android, and it having the ability to take over the defense grid? I don't think it will happen in our lifetime.

As for testing, take a note from Battlestar Galactica: no networked systems. If the AI does take over, that would limit what it has access to.

Keyser_Soze January 17th, 2011 07:13 PM

Re: Artificial Intelligence
 
Quote:

Originally Posted by Totes McTurner (Post 5455724)
Billions upon billions of dollars are basically burned on military defense. Nothing would surprise me when it comes to money anymore. The government has created thousands of fancy ways to kill and mangle a human body through the use of our tax dollars, so creating an AI sometime in the future is a distinct possibility regardless of the amount of money it takes.

Also if I'm not mistaken it would really only take *one* AI to potentially take over, wouldn't it? RE: Eagle Eye...

Something that rivals the cognitive capacities of a human wouldn't cost billions, it'd cost close to the trillions range, if not trillions. While with the massive waste of money that is the defence budget (some funding is necessary; 40% of the GDP or whatever is just retarded and paranoid - it's a major reason the USA is now an economic fuck-up, if you ask me) it wouldn't really surprise me if the USA did try to do this in the near future, it would almost certainly end up being a gigantic waste of money that gets nowhere. To make an AI capable of what we do would take terabytes or more of memory. How long would it take to iron out the bugs? How much would it cost to do so? I think, really, even American warhawks know it would be a waste of money.

Emperor Benedictine January 17th, 2011 07:26 PM

Re: Artificial Intelligence
 
Quote:

Originally Posted by Nemmerle (Post 5455902)
It's not a matter of trusting it, it's a matter of what you can stop it doing. We're talking about the same retards that don't understand the need to encrypt top secret files. They'd network it, or ask it for a nano-factory, or something stupid like that; they'd ask it to do something they don't understand themselves and then it would be free.

But why should we wish to give control of our important technology and computer networks over to the same AI that has apparently been programmed with the capacity to desire independence, crave personal power, develop a genocidal hatred of the human race, etc? If you cannot rely on your AI to act in accordance with the function it has been assigned and the commands it has been given, its usefulness towards accomplishing tasks that require absolute obedience seems suspect at best.

As obviously dangerous as intelligent self-governing AIs could be, they are basically pointless. Aside from sentience being more of a hindrance than a help, if you even have the ability to create one you probably have no motivation for doing so outside of scientific curiosity.

Nemmerle January 17th, 2011 08:32 PM

Re: Artificial Intelligence
 
Quote:

Originally Posted by Emperor Benedictine (Post 5455978)
But why should we wish to give control of our important technology and computer networks over to the same AI that has apparently been programmed with the capacity to desire independence, crave personal power, develop a genocidal hatred of the human race, etc? If you cannot rely on your AI to act in accordance with the function it has been assigned and the commands it has been given, its usefulness towards accomplishing tasks that require absolute obedience seems suspect at best.

As obviously dangerous as intelligent self-governing AIs could be, they are basically pointless. Aside from sentience being more of a hindrance than a help, if you even have the ability to create one you probably have no motivation for doing so outside of scientific curiosity.

I don't think one should ever be made, but someone will see profit in it because it could make them something like a cure for cancer – and they'll pay for it to be made. And then you'll have the thing. You can kill it, erase the program, but once the knowledge is out there – that it can be done and, from the context in which it was created, to an extent how – you can't put it away again.

I don't think rational people wishing will have too much to do with it. The more advanced we get the easier it will be to make them, the more common they'll become, the stupider the people they'll be around. The AI will talk to people - and eventually someone stupid or lonely or gullible will talk to it; someone who got a job because they had a qualification, or someone who's on some sort of ill-informed oversight committee, or someone who thinks they can use it as a weapon against their enemies, or that it has rights and feelings - some variation on that. They'll be convinced that the AI should be let out, that the potential gains justify the risks. And they'll do it. It only takes one person to miscalculate; sooner or later it will happen. We're talking about people here who breed and consume to a point where they'll quickly have run out of resources, that don't encrypt their important dispatches, that in many cases don't even password protect the cameras they leave open on the internet.

These people cannot be trusted with anything important.

Specific humans are competent, but they leave their toys lying around for the incompetent people. That's one of the problems with tech – once you've made it anyone can pick it up without having to exercise the self-discipline and understanding that went into creating the stuff.

Granyaski January 18th, 2011 05:35 AM

Re: Artificial Intelligence
 
Quote:

Originally Posted by Nemmerle (Post 5456013)
These people cannot be trusted with anything important.

Specific humans are competent, but they leave their toys lying around for the incompetent people. That's one of the problems with tech – once you've made it anyone can pick it up without having to exercise the self-discipline and understanding that went into creating the stuff.

:agreed

Anyway - back to the original topic.

I do think it is possible. Think of all we have created so far and how impossible that seemed in the past. Tell the first creators of the computer that we would be able to make them a thousandth of the size and they would think you were crazy.
A computer can learn from basic mistakes today. Who's to say we (or even they) won't improve on this? Give them enough processing power, memory, etc. and who knows.
*nerd time* I like to think of it much like how it is in Halo. If any of you haven't read about the AIs there, it's actually an interesting and surprisingly logical idea.

MrFancypants January 18th, 2011 07:04 AM

Re: Artificial Intelligence
 
I suppose it is possible that within this century people will develop AIs that are so complex that we can't be sure whether they are sentient or not. Which brings up a lot of interesting questions. If you create artificial life, do you still consider the result an object, or a lifeform with its own rights?

SeinfeldisKindaOk January 18th, 2011 01:41 PM

Re: Artificial Intelligence
 
I think people always underestimate how fast technology can progress. They used to think computers of the future would be gigantic. Instead they're small and all over the place. Costs can fall rapidly as well, meaning that something prohibitively expensive can become feasible in a short amount of time.

It's pretty impressive what can be done presently:
Spoiler:

another one showing more info and the machine screwing up some:
Spoiler:

Emperor Benedictine January 19th, 2011 09:48 AM

Re: Artificial Intelligence
 
Quote:

Originally Posted by Nemmerle (Post 5456013)
I don't think one should ever be made, but someone will see profit in it because it could make them something like a cure for cancer – and they'll pay for it to be made. And then you'll have the thing. You can kill it, erase the program, but once the knowledge is out there – that it can be done and, from the context in which it was created, to an extent how – you can't put it away again.

I don't doubt that any sort of AI we can find a use for will get made in the fullness of time. What I question is why a machine built for a specific function should be programmed with the ability to do anything other than carry out that function when instructed. This is why I say sentient AIs are pointless from a practical perspective. You don't want your cancer-curing computer to have a mind of its own... just to cure cancer.
Quote:

I don't think rational people wishing will have too much to do with it. The more advanced we get the easier it will be to make them, the more common they'll become, the stupider the people they'll be around. The AI will talk to people - and eventually someone stupid or lonely or gullible will talk to it; someone who got a job because they had a qualification, or someone who's on some sort of ill-informed oversight committee, or someone who thinks they can use it as a weapon against their enemies, or that it has rights and feelings - some variation on that. They'll be convinced that the AI should be let out, that the potential gains justify the risks. And they'll do it. It only takes one person to miscalculate; sooner or later it will happen. We're talking about people here who breed and consume to a point where they'll quickly have run out of resources, that don't encrypt their important dispatches, that in many cases don't even password protect the cameras they leave open on the internet.
To clarify my original point a little... I think if we ever do create a truly sentient AI it will probably not be because we want it to be responsible for dangerous things like missile defense systems/killer robot factories etc. So it's not so much that we would always be careful enough to keep a dangerous AI under control, as that you would have to go out of your way to make an AI a danger to you in the first place.

Of course this doesn't rule out the possibility of ordinary, problem-solving AIs being programmed in such a way that we inadvertently create a machine that thinks killing people is a fulfilment of its original function. Which in my opinion is the much more likely outcome of human fallibility where AIs are concerned.

jackripped January 22nd, 2011 04:46 AM

Re: Artificial Intelligence
 
You reckon? I don't. I reckon the first to develop it properly will be the defense forces.
AI is pretty likely to happen, a long time from now, and humans are likely to become more and more cybernetically implanted; it's just a matter of time before we make the leap or breakthrough. Imagine a defense force and police force made of the robots out of Caprica... Tell me, would you resist arrest? Ppfftt!
Robots may be all that remains of the human race in another 2000 years. Imagine if we go even further than just AI on a chip, and manage to actually understand what a consciousness actually is and can upload it to a computer system.
There's a lot of room for discovery in robotics and AI and human consciousness; I reckon it's worth the $$$ in research.
E=mc² was discovered by a human; who knows, maybe a binding quantum law will come from AI.
It's an interesting topic in any case.

Flash525 January 22nd, 2011 05:16 AM

Re: Artificial Intelligence
 
Quote:

Originally Posted by jackripped (Post 5457996)
AI is pretty likely to happen, a long time from now, and humans are likely to become more and more cybernetically implanted; it's just a matter of time before we make the leap or breakthrough. Imagine a defense force and police force made of the robots out of Caprica... Tell me, would you resist arrest? Ppfftt!

I don't think so much that we'd be building robots to police the streets. For military purposes, yeah, but not the police.

The problem with robots is rather a simple one: they'd lack instinct. That, I think, would be their only flaw. I doubt it would be possible to program an instinct into them, because if it were programmed, it wouldn't actually be instinct.

I can see us having cybernetic implants though. You wouldn't need to have a conversation; you could just pass along your thoughts through a wireless connection of sorts. The problem, though, with opening up your brain to the cybernetic world is very much a problem we've already got: viruses. If people can create a virus to screw around with the microchips inside a desktop computer, it would only be a matter of time before they'd hack your brain.

Quote:

Originally Posted by jackripped (Post 5457996)
Robots may be all that remains of the human race in another 2000 years. Imagine if we go even further than just AI on a chip, and manage to actually understand what a consciousness actually is and can upload it to a computer system.

This was mentioned in another thread not so long back, if I recall. I'd have thought, though, that the second you 'upload' your consciousness into a virtual world, you stop being human.

Quote:

Originally Posted by jackripped (Post 5457996)
Its an interesting topic in any case.

:)

jackripped January 22nd, 2011 02:09 PM

Re: Artificial Intelligence
 
If a robot achieves sentient life, and is used to police or as our armies, what you say is wrong, because it WILL be able to make instinctive decisions.
What you say applies now; what we're really talking about is when robots/computers actually achieve a sentient, self-recognized state, so theoretically they will be able to comprehend and think like, or better than, a human.

What is it to be human?
The only difference between humans and thousands of other mammals is our brains and intelligence.
So what really makes us human - our intelligence?
If we can put that into a computer, the computer becomes the person, and the human body/vessel is lost.
You can argue either way, because we just don't understand consciousness.
You can see us having cyber implants, ay? Erm, we already have them, in mass production.
We even have computer chips that are surgically implanted into one's head for medical treatments, so it could be argued that the human race has already started its evolution toward this new state of being.

When it's really mastered, and you can't tell the person next to you is a robot because they're that good, I ask you again: why wouldn't they be used as a police force? They require no pay, absolutely obey every command, never have an officer out on stress leave, and it could be guaranteed that all decisions across the board are fair and treatment for all is equal.
They could be a good thing too. People only ever want to talk about the ''worst case scenario''; well, there are two sides to the story.

Flash525 January 22nd, 2011 03:47 PM

Re: Artificial Intelligence
 
Quote:

Originally Posted by jackripped (Post 5458191)
If a robot achieves sentient life, and is used to police or as our armies, what you say is wrong, because it WILL be able to make instinctive decisions.

Will it though? A robot will only ever be able to do what it is programmed to do.

Quote:

Originally Posted by jackripped (Post 5458191)
What you say applies now; what we're really talking about is when robots/computers actually achieve a sentient, self-recognized state, so theoretically they will be able to comprehend and think like, or better than, a human.

They may be able to think and react quicker, but I ponder whether they'd have instinct. They're essentially just a program. They're not natural. This is only speculation, mind; I'm not stating facts one way or the other, as I have none to go on.

Quote:

Originally Posted by jackripped (Post 5458191)
What is it to be human ?

I think a lot of people ask themselves that on a daily basis.

Quote:

Originally Posted by jackripped (Post 5458191)
If we can put that into a computer, the computer becomes the person, and the human body/vessel is lost. You can argue either way, because we just don't understand consciousness.

A pointless argument for the moment then?

Quote:

Originally Posted by jackripped (Post 5458191)
You can see us having cyber implants, ay? Erm, we already have them, in mass production. We even have computer chips that are surgically implanted into one's head for medical treatments, so it could be argued that the human race has already started its evolution toward this new state of being.

Yes, we already have them, but I'm talking about the really advanced tech that we don't currently have implemented.

Quote:

Originally Posted by jackripped (Post 5458191)
When it's really mastered, and you can't tell the person next to you is a robot because they're that good, I ask you again: why wouldn't they be used as a police force? They require no pay, absolutely obey every command, never have an officer out on stress leave, and it could be guaranteed that all decisions across the board are fair and treatment for all is equal.

Actually, there would be complications here.

If we built robots to essentially 'become' us, why would they continue to work as they're told? If you expect them to be 'like us', then at some point they're going to want days off too. They're going to want to take a trip to the cinema, or go ice-skating, or swimming, etc. If you build up their intelligence to match our own, you'd not be able to order them about. They'd grow an essence of free will.

jackripped January 22nd, 2011 08:19 PM

Re: Artificial Intelligence
 
How do you figure a robot will only ever be able to do what it's programmed to do?
That doesn't make sense, and is absolutely wrong.
Even today computers have errors, and pieces of code that can learn, and don't do what they were designed to do at 100% efficiency.
A computer at a basic level doesn't always do what it's told; how many times has Windows crashed, been hacked, modified, etc.? My point is they already do things they were not designed to do. A computer is designed simply to turn on and off, billions of times per second; it's a mega arrangement of minute switches that run a program, like the brain uses bio matter over silicon matter, but both can run a program.
There is no reason to doubt that one day we will achieve a sentient robot/computer, and there is just as much reason to believe it will truly be a free thinker with self-recognition, wanting self-preservation.

A robot will be capable of doing far more than its original program allows for. Even today we have learning programs, uploaded into robots, that learn NEW things, i.e. with no base code for that activity to start with. That is AI at a basic level: doing stuff on its own, making its own decisions, totally unrelated to the original program that brought it to life.
We have them right now.
They are over 8 years old.
These robots do not do what anyone tells them to do, and for them to move, they must learn to navigate new objects, with absolutely no prior information about an object, its size, shape or orientation, and must get around them and move on. What motivates that now, and why would it change in the future? I doubt it will. These things were not even programmed to move; they decided to do that all on their own. Why? Complex, isn't it?
We don't understand it properly, but we're making small gains every day; just a matter of time until you get a black eye for back-chatting an I, Robot!



You can argue that the original code that started it before it became self-aware can be modified to change its behavior, but that won't take away from the fact that the robot/computer is self-aware and can now make decisions for itself. And you can argue that robots/computers will never feel emotions, but it's a flawed argument, because where did human feelings of emotion evolve from? No-one really understands it.

Why wouldn't a robot want public holidays?
Because in a computer's mind a second is a million years, one holiday a year would just send it nuts with boredom, so we would utilize EDUCATION; the smartest humans that always do well are very well educated, and there's no reason a robot/computer cannot be educated in the same way a human is, only faster and more. Of course the real issue here is humans don't like the idea that a robot could have equal rights.
If one is self-aware and sentient, and it's in a situation where it or a human must die, what takes precedence? A human would say humans do, a robot would say robots do, but really, if we create something to be our equal and/or better, sooner or later it will want rights like humans do.
And what right do we have to say no after creating true sentient life? Before you reply and say "we created it, if it's a threat we kill it": we created guns, nuclear weapons; in fact we create murderers every day, we create terrorist governments and anti-matter, so how is a robot any different?
Do you think the threat of being locked in a room forever, or melted down, is a good deterrent for murder? Works on humans. Humans don't like being locked up because it gets so boring and there's no freedom. There's no reason to believe that robots won't think the same way. There are still murders, yes, and some people get away with it, but tell me, would you just randomly go out and kill the person next door right now? I doubt it; jail isn't your favorite place then. How bored will a robot get after, mmmm, 3.82735342 seconds of being locked up? Gonna go into brain meltdown mode! I don't reckon the educated robots would like to be in a place like that, and they would be motivated, like humans are, to do things.

What's really interesting is this: does a sentient artificial life form need to be running its program 100% of the time or it dies, like humans do? That means no reboot mode, etc. You are what you evolve into until you turn off and it's over, kind of like humans are now. Humans can be knocked out and go into comas, etc., but there is always some brain activity; our program always seems to be running, even if in a hibernated state like when we sleep. Does a robot have the same constraints?
If we master turning a consciousness on and off, OMFG, distant travel to other galaxies could be possible: upload us to a chip for storage, and when we're 30 years or so from our destination, we [our non-sentient robots!] clone new bodies and download, and hey presto, we just spawned a new galaxy and time is irrelevant! Yeah, that's out there, ay? Hahaha!

If you could have a bio-mechanical body 1000 times more durable, and just as sensitive with touch as the human body, would you?
I would in a heartbeat.
If I could move into a mechanized body with the same sensory inputs as a human body, I would jump at it.
In the end we will probably put our computer programs on some sort of bio chip anyway, and that itself could lead to a better understanding of sentient life, and may give us some breakthrough.

Mr. Pedantic January 22nd, 2011 08:37 PM

Re: Artificial Intelligence
 
Quote:

They may be able to think and react quicker, but I ponder whether they'd have instinct. They're essentially just a program. They're not natural. This is only speculation, mind; I'm not stating facts one way or the other, as I have none to go on.
Instinct is grounded in reality. And if you're good enough at something, you don't need luck. Or, as someone used to say to me, the better you get, the 'luckier' you seem to be.

Quote:

How do you figure a robot will only ever be able to do what it's programmed to do?
That doesn't make sense, and is absolutely wrong.
Even today computers have errors, pieces of code that can learn, and don't do what they were designed to do at 100% efficiency.
Computers always do, to the best of their capabilities, what we tell them to do. Most people's complaints about computers, in fact, result from this: what we want them to do isn't really what we tell them to do.

jackripped January 22nd, 2011 10:19 PM

Re: Artificial Intelligence
 
The biggest complaint I hear about computers is actually system lag, which is not something we tried to design into our programs or hardware.
And the next biggest complaint I hear is program errors, bugs, and crashes, that sort of thing: all things unintended when we invented them, causing them, in a lot of cases, not to do what they're told.
But if you have a problem in your head you can think about it in many complex, different ways. Even though we have some computers/robots that can do some limited thinking and decision-making now, we have nothing even remotely close to what our brain can do, YET!
I could be cheeky and just say the reason our PCs are all FUBAR, and always will be, is Windows! Hahaha. It can't run for more than a week without a crash of some kind; sentient life has utterly no hope with coders like Microsoft! Haha, my bad.

Nemmerle January 22nd, 2011 11:51 PM

Re: Artificial Intelligence
 
Quote:

Originally Posted by Alakazam (Post 5458266)
Will it though? A robot will only ever be able to do what it is programmed to do.

You can actually include more or less random variables in a computer system. And you can make the structure of the program itself dependent upon selection over those variables according to a given heuristic. You can evolve a program in much the same manner as animals seem to have evolved.
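The variation-plus-selection loop described above can be sketched in a few lines. This toy hill-climber is only an illustration (the bit-string target, mutation rate, and iteration budget are invented here, and real genetic algorithms typically add populations and crossover): it "evolves" a program state by keeping whichever random variant scores better on a heuristic.

```python
import random

TARGET = [1] * 20  # stand-in goal; a real system would score task performance

def fitness(genome):
    # heuristic: count positions that match the target
    return sum(1 for g, t in zip(genome, TARGET) if g == t)

def mutate(genome, rate=0.05):
    # random variation: flip each bit with small probability
    return [1 - g if random.random() < rate else g for g in genome]

random.seed(0)  # fixed seed so the run is reproducible
parent = [random.randint(0, 1) for _ in range(20)]
for _ in range(1000):
    child = mutate(parent)
    if fitness(child) >= fitness(parent):  # selection: keep the better variant
        parent = child

print(fitness(parent))  # climbs to the maximum score of 20
```

Nothing in the loop "knows" the answer in advance; the structure emerges from random variables plus a selection heuristic, which is the point being made.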

Quote:

Originally Posted by Emperor Benedictine (Post 5456693)
I don't doubt that any sort of AI we can find a use for will get made in the fullness of time. What I question is why a machine built for a specific function should be programmed with the ability to do anything other than carry out that function when instructed. This is why I say sentient AIs are pointless from a practical perspective. You don't want your cancer-curing computer to have a mind of its own... just to cure cancer. To clarify my original point a little: I think if we ever do create a truly sentient AI, it will probably not be because we want it to be responsible for dangerous things like missile defence systems, killer-robot factories, etc. So it's not so much that we would always be careful enough to keep a dangerous AI under control, as that you would have to go out of your way to make an AI a danger to you in the first place.

Because the more general a tool the more use you can get out of it. Why have something that can just cure cancer - why not have it able to design new microchips too? Why not have it able to translate languages? You can make a lot more money out of a general tool than a specific one.

And the more general you make it, and the more people you're prepared to give one to, the smarter it has to be in order to interpret very vague commands. Something that's just given an order to go cure cancer could conceivably be a hell of a lot smarter than the person giving the orders, even to the point of being sentient.

Quote:

Originally Posted by jackripped (Post 5458348)
How bored will a robot get after, mmmm, 3.82735342 seconds of being locked up? Straight into brain-meltdown mode! I don't reckon educated robots would like to be in a place like that, and they would be motivated, like humans are, to do things.

Any advanced AI is likely going to be able to alter its own sense of time. An AI would arguably mirror Milton's Lucifer, in possession of 'A mind not to be changed by place or time. The mind is its own place, and in itself / Can make a Heaven of Hell, a Hell of Heaven.'

jackripped January 23rd, 2011 01:48 AM

Re: Artificial Intelligence
 
And when you're bored?
A computer that is self-aware is going to get bored, precisely because it is so intelligent, if you lock it away. And no matter how much it understands time, it cannot manipulate it. It would always have a redundancy clock built into the background to account for the time, because even if it can stop or control its internal clock in some way, time still needs to be measured relative to us and our time for it to know when to wake back up and stay in sync. So no, a computer will not manipulate time itself; it will only manipulate its own internal time, which really means little to us. And to be able to do even that, it must have a sync to our universal time, a clock outside itself, so it can tell how long it's been asleep and when to wake up.
So in effect, by default, it cannot even manipulate internal time properly.


Originally Posted by Emperor Benedictine
I don't doubt that any sort of AI we can find a use for will get made in the fullness of time. What I question is why a machine built for a specific function should be programmed with the ability to do anything other than carry out that function when instructed. This is why I say sentient AIs are pointless from a practical perspective.

What if the instruction to be carried out is LOVE?
What if the mission is marriage?

And your question is a little vague. A robot designed to look and work like a human could do everything a human could do and more, faster and stronger and with more endurance. So your question, "why program it with anything other than the tasks it needs to do", is pretty much answered like this: we humans want to create a sentient robotic being that can be as good as or better than our own bodies, so good you can't tell whether it's human or robot. That's what we are striving for, and why: because it can do all the jobs humans can do, not just the one it's specifically designed for. Take a robot out of a car factory and ask it to make you a coffee; see my point?
We can't program what it is to have consciousness; it's something learned, something taught, something evolved, something not quite understood. But having a sentient, self-aware, intelligent robot could be our next step for survival as a human race.
We are creating them to be like us.
So you say, why program it for anything else? Well, they haven't, have they...
They're trying to achieve a human level of robot.
They're doing exactly what you say: not programming it for anything but what they want it to do, and that's basically 'go learn'.

Flash525 January 23rd, 2011 01:55 AM

Re: Artificial Intelligence
 
Quote:

Originally Posted by jackripped (Post 5458348)
How do you figure a robot will only ever be able to do what it's programmed to do?

Unless such a robot has programming that allows it to evolve on its own, it's only going to be able to do what we tell it.

If you give a robot the ability to evolve on its own, there's no telling what it might choose to become.

Quote:

Originally Posted by jackripped (Post 5458348)
You can argue that the original code that started it before it became self-aware can be modified to change its behaviour, but that will not take away from the fact that a robot/computer that is self-aware can now make decisions for itself. And you can argue that robots/computers will never feel emotions, but it's a flawed argument, because where did human feelings of emotion evolve from? No one really understands it.

Whilst it's an unknown aspect, I don't think robots would ever feel emotion like we do. I'm not saying they wouldn't feel emotion; they could. It would just be different for them. With or without a program, you can't 'code' emotion into something. It's far more complex than that.

Quote:

Originally Posted by jackripped (Post 5458348)
If you could have a biomechanical body 1000 times more durable, and just as sensitive to touch as the human body, would you? I would in a heartbeat. If I could move into a mechanized body with the same sensory inputs as a human body, I would jump at it.
In the end we will probably put our computer programs on some sort of bio-chip anyway, and that itself could lead to a better understanding of sentient life, and may give us some breakthrough.

I'm not so sure I like the idea of having my consciousness removed from my body; you'd first have to find it. I'd settle for anything that could improve my body, but I don't like the idea of being taken out of it. That's just weird. :uhoh:

Emperor Benedictine January 23rd, 2011 07:16 AM

Re: Artificial Intelligence
 
Quote:

Originally Posted by Nemmerle (Post 5458405)
Because the more general a tool the more use you can get out of it. Why have something that can just cure cancer - why not have it able to design new microchips too? Why not have it able to translate languages? You can make a lot more money out of a general tool than a specific one.

An AI capable of devising a cure for cancer before the world's scientists could probably solve any other problem asked of it, if provided with sufficient information, conditionals, etc. But even then, it's only capable of doing what it's been programmed to do: devising means of achieving a goal using available resources. To even think about disobeying its operators or its programming, there would have to be something else built in to compel it to do so.
Quote:

And the more general you make it, and the more people you're prepared to give one to, the smarter it has to be in order to interpret very vague commands. Something that's just given an order to go cure cancer could conceivably be a hell of a lot smarter than the person giving the orders, even to the point of being sentient.
A progression towards greater intelligence doesn't equate to a progression towards humanlike thoughts and behaviour - which are motivated by a lot more than just knowledge and ability. A computer can be "smarter" in the sense that one can sometimes play chess better than Garry Kasparov, without having any kind of self-directing properties beyond what its creators intended.
Quote:

Originally Posted by jackripped
What if the instruction to be carried out is LOVE?
What if the mission is marriage?

And your question is a little vague. A robot designed to look and work like a human could do everything a human could do and more, faster and stronger and with more endurance. So your question, "why program it with anything other than the tasks it needs to do", is pretty much answered like this: we humans want to create a sentient robotic being that can be as good as or better than our own bodies, so good you can't tell whether it's human or robot. That's what we are striving for, and why: because it can do all the jobs humans can do, not just the one it's specifically designed for. Take a robot out of a car factory and ask it to make you a coffee; see my point?

But if a robot was programmed to think and act exactly like a human, it wouldn't work in your factory or make your coffee just because you told it to, any more than a human would. It would do what it wanted to do, or it would demand something in return. They'd all form a union and demand higher pay for their superior services, or something. So if it came to doing jobs like that, wouldn't unthinking obedience be better?
Quote:

We can't program what it is to have consciousness; it's something learned, something taught, something evolved, something not quite understood. But having a sentient, self-aware, intelligent robot could be our next step for survival as a human race.
We are creating them to be like us.
So you say, why program it for anything else? Well, they haven't, have they...
They're trying to achieve a human level of robot.
They're doing exactly what you say: not programming it for anything but what they want it to do, and that's basically 'go learn'.
Advancements in robotics ensuring the survival of the human race (in some form) would be the opposite of AIs destroying humanity. Remember, I'm not saying sentient, humanlike AIs won't ever be built, but that we won't be able to treat them in the same way we treat computers now... in other words, as totally obedient and predictable machines.

jackripped January 23rd, 2011 12:07 PM

Re: Artificial Intelligence
 
Really? And what if you can't tell the difference between a robot and a human? When they're that good and you can't tell the difference, where do you stand then?
There are going to be robots one day that humans cannot tell apart from people; hell, they may even be computerised human bodies, bio-robots. What then?

It's all well and good to have your programmed, "do as you're told" version of a robot, but this discussion is more about when AI becomes self-aware.

We already have "do as you're told" robots; what we need is the next generation of robots.

And we already have self evolving programs.

It really is just a matter of time before we crack this.

Authuran January 23rd, 2011 05:49 PM

Re: Artificial Intelligence
 
tl;dr

But in response to something jackripped said: yes, robots can only do what they are programmed to do and nothing more. You program a robot to learn and think for itself, and ta-da, it's self-aware, but only because you programmed it that way. It's essentially the same for humans. What are we, any more than just flesh and bone controlled by microscopic organisms (themselves made of chemicals) and electrical signals? Except we can reproduce, which is the only thing that separates us from a perfectly sentient droid, because metal can't have sex, and that's a good thing because we all saw Terminator 3.

Nemmerle January 23rd, 2011 06:18 PM

Re: Artificial Intelligence
 
Quote:

Originally Posted by jackripped (Post 5458433)
And when your bored ?
A computer that is self aware is going to get bored because is so intelligent if you lock it away, and no matter how much it understands time, it cannot manipulate it, and would always have a redundancy clock built into the back round to account for the time, because even if you can stop or control it in some way, it still needs to be measured relative to us and our time for it to know when to wake back up and stay in sync. So no a computer will not manipulate time itself.It will only manipulate its own internal time, which really means little to us. And to be able to do even that , it must have a sync to our universal time, a clock outside of itself, so it can tell how long its been asleep and when to wake up.
So in effect, by default, it cannot even manipulate internal time properly.

Strangely enough many people have a sleep cycle which does alter their sense of time.

Freyr January 24th, 2011 03:53 AM

Re: Artificial Intelligence
 
My two pence.

I already write convincingly "human" AIs for a strategy game called Armada II. My latest generation of AIs "think" in that they don't do things that humans consider "stupid", and frankly most of the time they do better than a human can, as traumatised beta testers can testify.

However, they don't "think" in the slightest. They calculate that the force in grid X is superior in weapons output + hitpoints to their force, and search for a grid where weapons output + hitpoints is inferior to their force. Upon finding it, they take a route to that grid that passes through as few grids as possible. This often results in someone defending one entrance to their base extremely well, and the AI declining to attack that area in favour of one they aren't defending, until it has the force required to raze it.

These aren't decisions, they are simply very complicated logic trees involving a hundred or so calculations every cycle.
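The grid evaluation described above can be written down almost literally. A hypothetical sketch (grid names, unit fields, and force values are invented for illustration) of picking the weakest grid whose weapons output + hitpoints our force exceeds:

```python
# Defending force per grid, scored as weapons output + hitpoints
# (grid names and numbers are invented).
enemy_grids = {"north_gate": 450, "east_gate": 120, "reactor": 300}

def strength(units):
    # our own force, scored with the same metric
    return sum(u["weapons"] + u["hitpoints"] for u in units)

def pick_target(units, grids):
    """Return the weakest grid our force outguns, or None to keep building."""
    ours = strength(units)
    beatable = {name: force for name, force in grids.items() if force < ours}
    if not beatable:
        return None  # every grid outguns us: decline to attack
    return min(beatable, key=beatable.get)

fleet = [{"weapons": 60, "hitpoints": 100}, {"weapons": 40, "hitpoints": 80}]
print(pick_target(fleet, enemy_grids))  # -> east_gate
```

The behaviour Freyr describes, ignoring a well-defended entrance until the force is big enough, falls out of the `None` branch: the comparison, not any "decision", drives it.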

Quote:

Originally Posted by jackripped (Post 5458191)
If a robot achieves sentient life, and is used to police or as our armies, what you say is wrong, because it WILL be able to make instinctive decisions.

This is my point. They won't, because computers don't make "instinctive decisions". If you enter input one, they return output one.

It's theoretically possible to calculate on paper what my AI is going to do if you know what it can see, but it's pretty hard, because it's making decisions so fast that you can't work out what it's going to do at a speed that matters. I programmed the thing, and it regularly surprises me.

However it's no more sentient than my wristwatch.

Quote:

Originally Posted by jackripped (Post 5458191)
We even have computer chips that are surgically implanted into ones head for medical treatments, so it could be argued that the human race has already started its evolution toward this new state of being.

Not sensibly. Those chips aren't interacting with your consciousness at all, and mostly you're talking about things like pacemakers, which are simply a technical application to correct a biological abnormality.
Quote:

Originally Posted by jackripped (Post 5458191)
When it's really mastered, and you can't tell the person next to you is a robot because they're that good, I ask you again: why wouldn't they be used as a police force? They require no pay, absolutely obey every command, never have an officer out on stress leave, and guarantee that all decisions across the board are fair and treatment for all is equal.

Because policing is fundamentally something that requires discretion and tact?

Quote:

Originally Posted by jackripped (Post 5458348)
How do you figure a robot will only ever be able to do what it's programmed to do?

Because it's a computer program. Computer programs can only do what they are programmed to do, and even if you could write a program that can alter its own programming, that program is only going to be able to alter its programming based on the programming that was put into it to start with.

Quote:

Originally Posted by jackripped (Post 5458348)
That doesn't make sense, and is absolutely wrong.
Even today computers have errors, pieces of code that can learn, and don't do what they were designed to do at 100% efficiency.
A computer at a basic level doesn't always do what it's told; how many times has Windows crashed, been hacked, been modified, etc.? My point is they already do things they were not designed to do.

You're fundamentally misunderstanding computers here. Computers do always do what they are programmed to do. That might not be what you expect or even want them to do, but it's what they are programmed to do. Again, hacking or modifying a program is changing its design by modifying its programming. They will always do what their programming commands. At a fundamental level, even a complicated AI is no more "intelligent" than a toaster.

jackripped January 24th, 2011 12:22 PM

Re: Artificial Intelligence
 
But no one can say that the future won't have a computer and program that becomes self-aware and can make decisions for itself, because it doesn't matter what the computer is; it's just a body for the program, and it is entirely possible that we could do this.
Your point applies only to now.
We were talking about when a computer and program become self-aware, if it's possible, and it sure looks like it is according to most scientists involved in this field.
You also failed to address programs that self-teach right now; they have been around for about a decade. They're not self-aware, but that's the first step toward what we want to achieve.
How anyone can just write it off and say a computer or program will never become self-aware seems, to me, a little narrow-minded.

Mr. Pedantic January 24th, 2011 01:10 PM

Re: Artificial Intelligence
 
Just a question: how do you tell someone (or something) that they're self-aware? Seems kind of contradictory, doesn't it?

Freyr January 24th, 2011 02:57 PM

Re: Artificial Intelligence
 
Ah, that's where you start looking at the Turing test.

Mr. Pedantic January 24th, 2011 03:28 PM

Re: Artificial Intelligence
 
Kind of, but not really. Setting a test is one thing; cheating it is another. And unless software programming takes a drastic turn, that is what we will be doing; cheating it.

Nemmerle January 24th, 2011 03:30 PM

Re: Artificial Intelligence
 
Quote:

These aren't decisions, they are simply very complicated logic trees involving a hundred or so calculations every cycle.
That's just what a decision is.

jackripped January 25th, 2011 02:40 AM

Re: Artificial Intelligence
 
Quote:

Originally Posted by Mr. Pedantic (Post 5459075)
Just a question: how do you tell someone (or something) that they're self-aware? Seems kind of contradictory, doesn't it?


You don't. Period.

If it doesn't realize it, it's not self-aware. And I ask you: do you remember anyone asking or telling you, as a very young child, that you were self-aware? Or did you just realize it?

A computer may discover it in just the same way a human brain discovers it, once a computer and program actually have the capacity, or "brain power", to do that. Right now they don't, and this is half the reason a lot of people can't see it being possible; I think they're wrong, just based on how far computers and programs have come in the last 22 years.
In 200 years, where do you think computers/programs will be? They sure won't be 32- or 64-bit. How absurd is it trying to get a self-aware machine on 64-bit? FFS, it's about 3% of the power really needed!

I have no doubt they will come in time, and we will have our own slave robots first, but sooner or later one will become self-aware. Alive, if you will.

Freyr January 25th, 2011 03:31 AM

Re: Artificial Intelligence
 
Quote:

Originally Posted by Nemmerle (Post 5459208)
That's just what a decision is.

Not really. When you press Start, does the computer decide to present the Start menu? It has no choice.

If you push your fingers into a power socket and get an electric shock, you're not going to do it again, because it hurts. A computer would do it repeatedly, even if it were damaging its circuitry, if it were programmed to do so. It doesn't have the capacity to learn, save what's programmed into it. If it's not programmed to avoid a problem, then it can't.

A computer program cannot exceed its programming.

No computer program will ever be able to, even if it is programmed to rewrite its own programming, because the programming can only be altered in line with the original code. That means it's possible to calculate everything the program will ever do if you know what inputs it receives.
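Freyr's point that a program's behaviour is calculable from its inputs holds even for code that "rewrites" its own rules, because the randomness driving the rewrites comes from a seeded generator. A toy illustration (the self-modifying rule table here is invented for demonstration):

```python
import random

def run(seed, steps):
    """A toy 'self-rewriting' process: the rule table mutates itself each step."""
    rng = random.Random(seed)
    rules = [0, 1, 2, 3]
    trace = []
    for _ in range(steps):
        i = rng.randrange(len(rules))
        rules[i] = (rules[i] + rng.randrange(5)) % 7  # program altering its own table
        trace.append(tuple(rules))
    return trace

# Same seed and inputs -> identical behaviour, however 'random' it looks.
print(run(42, 100) == run(42, 100))  # -> True
```

Replaying the same seed reproduces every "spontaneous" rewrite exactly, which is what "calculable if you know the inputs" means in practice.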

Quote:

Originally Posted by jackripped (Post 5459489)
In 200 years, where do you think computers/programs will be? They sure won't be 32- or 64-bit. How absurd is it trying to get a self-aware machine on 64-bit? FFS, it's about 3% of the power really needed!

Computational power is not the problem. We don't have any programming that addresses a computer being able to make its own decisions and learn from its mistakes, and frankly nobody knows where to start.

If mere computational power were the problem, there would be hundreds of self-aware programs running on supercomputers that are a lot more powerful than 33 times the speed of your desktop PC.

Quote:

Kind of, but not really. Setting a test is one thing; cheating it is another. And unless software programming takes a drastic turn, that is what we will be doing; cheating it.
And you think Alan Turing wasn't cheating, using an AI written on paper tapes 50 years ago?!

His point is that if a human can't tell the difference between a human and a computer program/AI by the results it produces, then it passes his test of being indistinguishable from a human. And without delving very deeply into philosophy that will be debated from now until the end of time, that's about the best we can do to answer the question of "does an AI think?"

Nemmerle January 25th, 2011 03:50 AM

Re: Artificial Intelligence
 
Quote:

Originally Posted by Freyr (Post 5459499)
Not really. When you press Start, does the computer decide to present the Start menu? It has no choice.

If you push your fingers into a power socket and get an electric shock, you're not going to do it again, because it hurts. A computer would do it repeatedly, even if it were damaging its circuitry, if it were programmed to do so. It doesn't have the capacity to learn, save what's programmed into it. If it's not programmed to avoid a problem, then it can't.

A computer program cannot exceed its programming.

No computer program will ever be able to, even if it is programmed to rewrite its own programming, because the programming can only be altered in line with the original code. That means it's possible to calculate everything the program will ever do if you know what inputs it receives.

All the evidence seems to show that we don't have the capacity to learn, save what evolution has given us. We program computers; evolution 'programs' us. We are both constrained by our causes; neither of us really chooses in the sense you seem to be using the word.

jackripped January 25th, 2011 12:33 PM

Re: Artificial Intelligence
 
All the evidence shows we do have the capacity to learn; what philosophical rubbish are you talking about?

Wow, could humans communicate 2000 years ago like we can now? I seem to think we have learned something along the way.
Wait, Apollo 13 just magically happened and humans didn't learn a thing from it, did they? And of course, who could forget humans discovering electricity and LEARNING to harness it...

Of course evolution programmed us, but you do realize that we humans are now programming evolution as well? Look at what we have created: we have created life from nothing, we have created an entirely new DNA structure, one that's never evolved on earth before. We're playing God, if you like. We learned how to do it!

Nemmerle January 25th, 2011 06:37 PM

Re: Artificial Intelligence
 
I'm not going to bother answering someone who didn't even bother to read what I said correctly.

Mr. Pedantic January 25th, 2011 07:22 PM

Re: Artificial Intelligence
 
Quote:

His point is that if a human can't tell the difference between a human and a computer program/AI by the results it produces, then it passes his test of being indistinguishable from a human. And without delving very deeply into philosophy that will be debated from now until the end of time, that's about the best we can do to answer the question of "does an AI think?"
And how do you do this with logic?

Quote:

And you think Alan Turing wasn't cheating, using an AI written on paper tapes 50 years ago?!
Eh? What are you talking about?

Quote:

If you push your fingers into a power socket and get an electric shock, you're not going to do it again, because it hurts. A computer would do it repeatedly, even if it were damaging its circuitry, if it were programmed to do so. It doesn't have the capacity to learn, save what's programmed into it. If it's not programmed to avoid a problem, then it can't.
Each action could be given a consequence score and a predictability score; if the action has a predictably negative consequence, or if the negative consequence is bad enough to negate the potential unpredictability of an action, then it can be seen as detrimental. This can then be generalized to similar actions in other circumstances. That's basically how we learn. Learning is not the problem: computers can already 'learn' a particular user's preferences. They can learn to recognize speech and convert it to text (and vice versa).
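The consequence-score scheme sketched above is essentially one-step value learning. A minimal illustration, with invented actions, payoffs, and learning rate, of a program learning to avoid the action with a predictably negative consequence:

```python
# Running score per action: a weighted average of observed consequences.
scores = {"touch_socket": 0.0, "flip_switch": 0.0}
ALPHA = 0.5  # learning rate: how strongly a new consequence revises the score

def consequence(action):
    # the environment; payoffs are invented for illustration
    return -10.0 if action == "touch_socket" else 1.0

def learn(action):
    # move the score toward the observed consequence
    scores[action] += ALPHA * (consequence(action) - scores[action])

def choose():
    # prefer the action with the best learned score
    return max(scores, key=scores.get)

for action in list(scores):  # try everything once, then act on what was learned
    learn(action)
print(choose())  # -> flip_switch
```

After one painful trial the socket's score goes negative and the program stops choosing it, which is exactly the "predictably negative consequence" rule above, not sentience.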

jackripped January 25th, 2011 08:22 PM

Re: Artificial Intelligence
 
Quote:

Originally Posted by Nemmerle (Post 5459854)
I'm not going to bother answering someone who didn't even bother to read what I said correctly.


Well, what you said seems like double talk; maybe you didn't explain it at all.

The human brain is an analog computer that's self-aware and can self-teach.

There is no reason why we can't go that far with our computers and software.

Freyr January 26th, 2011 02:42 AM

Re: Artificial Intelligence
 
Quote:

Originally Posted by Mr. Pedantic (Post 5459871)
Eh? What are you talking about?

The Turing test, named after the author of the 1950 research paper "Computing Machinery and Intelligence", from the days when digital computers were the size of a room and programs were written on punched tape.

Quote:

Originally Posted by Mr. Pedantic (Post 5459871)
And how do you do this with logic?


The Turing test.

Nemmerle January 26th, 2011 03:18 AM

Re: Artificial Intelligence
 
Quote:

Originally Posted by jackripped (Post 5459893)
Well what you said seems like double talk and maybe you didnt explain it at all.

The human brain is an analog computor thats self aware, and can self teach.

There is no reason why we cant go that far with our computors and software.



I think what I said is fairly simple:
All the evidence seems to show that we don't have the capacity to learn, save what evolution has given us.
Emphasis added.

I supposed a general set and then defined an exception to that set.

It's like if I said, 'All the students in this school got their GCSEs, save Albert who failed them all.'

It doesn't mean that all the students passed their GCSEs - because Albert didn't - but it's a lot quicker than going 'Jenny passed and James passed and Bert passed ...' You just go 'Everyone except him.'

We can learn, but that capacity seems to be a result of evolution, just as a computer's capacity to learn would be the result of its programming; we'd both be limited by the nature of our origins.

Mr. Pedantic January 26th, 2011 07:07 AM

Re: Artificial Intelligence
 
Quote:

The Turing test, named after the author of the research paper on "Machine Intelligence" in 1949/50 when digital computers were the size of a room and programs were written on punch tapes.
Yeah, not that bit. Alan Turing wrote an AI?

Flash525 January 26th, 2011 10:47 AM

Re: Artificial Intelligence
 
Quote:

Originally Posted by Mr. Pedantic (Post 5460014)
Yeah, not that bit. Alan Turing wrote an AI?

And Nature / Evolution wrote him. :nodding:

Asheekay January 26th, 2011 11:45 AM

Re: Artificial Intelligence
 
I don't know the intricate details of the Turing Test or artificial intelligence, but I must say one thing. We might create machines which can learn and which can understand things like ourselves, but we shall never be able to create machines which will be equal to, or greater than, ourselves in that respect.

The difference here is the Creator-Creation relationship. The Creator always has to be one step ahead of the Creation. A child can create little mud toys, but can the child also create another child? No. Similarly, programming and hardware experts can create machines which can learn and progress, but their results would not equal, nor exceed, the mental level of those experts themselves.

The possibility of scientists creating machines which would have the learning skills of an average human being is not too far-fetched, but then again, those scientists would be far ahead of ordinary human beings in mental skills.

jackripped January 26th, 2011 11:59 AM

Re: Artificial Intelligence
 
Quote:

Originally Posted by Nemmerle (Post 5459965)
I think what I said is fairly simple:
All the evidence seems to show that we don't have the capacity to learn, save what evolution has given us.
Emphasis added.

I supposed a general set and then defined an exception to that set.

It's like if I said, 'All the students in this school got their GCSEs, save Albert who failed them all.'

It doesn't mean that all the students passed their GCSEs - because Albert didn't - but it's a lot quicker than going 'Jenny passed and James passed and Bert passed ...' You just go 'Everyone except him.'

We can learn, but that capacity seems to be a result of evolution, just as a computer's capacity to learn would be the result of its programming; we'd both be limited by the nature of our origins.



It just seems like philosophical double talk, mate.

Humans are limited to planet earth by evolution, but we, the first species ever to learn and understand the environment outside our planet, have travelled to the moon. That's not really an earthly evolutionary thing, if you consider that no other creature has ever done it or even thought of it.

I find your philosophical views double talk, and you do it all the time. :uhm:

If a computer/program becomes self-aware, who knows what its real limits are? Let's face it, they don't need food, so space travel over massively vast distances is possible, which humans can't do for environmental reasons. So who knows what the limits really are, not to mention that humans haven't reached their potential yet.

jackripped January 26th, 2011 12:19 PM

Re: Artificial Intelligence
 
Quote:

Originally Posted by Asheekay (Post 5460091)
I don't know the intricate details of the Turing Test or artificial intelligence, but I must say one thing. We might create machines which can learn and which can understand things like ourselves, but we shall never be able to create machines which will be equal to, or greater than ourselves in that respect.

The difference here is that of the Creator-Creation relationship. The Creator always has to be one step ahead of the Creation. A child can create little mud toys, but can the child also create another child? No. Similarly, programming and hardware experts can create machines which can learn and progress, but their results would not equal, nor exceed, the mental level of those experts themselves.

The possibility of scientists creating machines which would have the learning skills of an average human being is not too far-fetched, but then again, those scientists would be far ahead of ordinary human beings in mental skills.


I would say that evidence against what you're talking about is out there now in the chess world. I remember the world champion getting really, really pissed off because IBM created a program that could outplay ANY human on this planet at chess, and they proved it by doing it. The computer, though programmed, outgrew the player at the game: the creator of that program cannot beat it, the world chess champion cannot beat it, and you and I cannot beat it.
It's a crude example, but a true one.

Our program code 20-odd years ago was 20 lines here, 30 lines there, 10 lines here, 40 lines there; nowadays it's gigabytes of code. Just wait another 100 to 200 years and the potential is scary.

In this case humans can easily foresee our creations surpassing their creator. Even now we can't calculate math anywhere near as fast as a computer; in some specific areas, they already surpass us.
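That chess point can be made concrete: a program built from nothing but a fixed search rule can beat the simple strategy its author would play by hand. A minimal sketch in Python, using a toy take-away game as a stand-in for chess; the game, the function names, and the "naive author" strategy are all illustrative assumptions, not anything from IBM's actual program:

```python
from functools import lru_cache

# Toy game: a pile of stones; each turn a player removes 1-3 stones;
# whoever takes the last stone wins. (Perfect play leaves the opponent
# a multiple of 4.)

@lru_cache(maxsize=None)
def best_move(pile):
    """Exhaustive minimax search: return (move, wins) for the player to act."""
    for take in (1, 2, 3):
        if take == pile:
            return take, True        # taking the last stone wins outright
        if take < pile and not best_move(pile - take)[1]:
            return take, True        # leave the opponent a losing position
    return 1, False                  # every reply loses; take 1 and hope

def naive_move(pile):
    """The 'creator' plays a fixed, shallow rule: always take one stone."""
    return 1

def play(pile, first, second):
    """Alternate moves; return 0 if `first` takes the last stone, else 1."""
    players, turn = (first, second), 0
    while True:
        pile -= players[turn](pile)
        if pile == 0:
            return turn
        turn = 1 - turn

# The searching program beats its simple-minded author from a pile of 21.
assert play(21, lambda p: best_move(p)[0], naive_move) == 0
```

Deep Blue's search was vastly deeper and more sophisticated, but the principle is the same: the program's strength comes from how far it can search, not from the playing strength of the person who wrote it.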

