It's not a matter of trusting it; it's a matter of what you can stop it from doing. We're talking about the same idiots who don't understand the need to encrypt top secret files. They'd network it, or ask it for a nano-factory, or something stupid like that; they'd ask it to do something they don't understand themselves, and then it would be free.
But why should we wish to give control of our important technology and computer networks over to the same AI that has apparently been programmed with the capacity to desire independence, crave personal power, develop a genocidal hatred of the human race, etc? If you cannot rely on your AI to act in accordance with the function it has been assigned and the commands it has been given, its usefulness towards accomplishing tasks that require absolute obedience seems suspect at best.
As obviously dangerous as intelligent self-governing AIs could be, they are basically pointless. Aside from sentience being more of a hindrance than a help, if you even have the ability to create one you probably have no motivation for doing so outside of scientific curiosity.
Quote:
But why should we wish to give control of our important technology and computer networks over to the same AI that has apparently been programmed with the capacity to desire independence, crave personal power, develop a genocidal hatred of the human race, etc? If you cannot rely on your AI to act in accordance with the function it has been assigned and the commands it has been given, its usefulness towards accomplishing tasks that require absolute obedience seems suspect at best.
As obviously dangerous as intelligent self-governing AIs could be, they are basically pointless. Aside from sentience being more of a hindrance than a help, if you even have the ability to create one you probably have no motivation for doing so outside of scientific curiosity.
I don't think one should ever be made, but someone will see profit in it because it could make them something like a cure for cancer – and they'll pay for it to be made. And then you'll have the thing. You can kill it, erase the program, but once the knowledge is out there – that it can be done and, from the context in which it was created, to an extent how – you can't put it away again.
I don't think rational people wishing will have much to do with it. The more advanced we get, the easier they'll be to make, the more common they'll become, and the stupider the people they'll be around. The AI will talk to people – and eventually someone stupid or lonely or gullible will talk back; someone who got a job because they had a qualification, or someone on some ill-informed oversight committee, or someone who thinks they can use it as a weapon against their enemies, or that it has rights and feelings – some variation on that. They'll be convinced that the AI should be let out, that the potential gains justify the risks. And they'll do it. It only takes one person to miscalculate; sooner or later it will happen. We're talking about people who breed and consume to the point of running out of resources, who don't encrypt their important dispatches, who in many cases don't even password-protect the cameras they leave open on the internet.
These people cannot be trusted with anything important.
Specific humans are competent, but they leave their toys lying around for the incompetent people. That's one of the problems with tech – once you've made it anyone can pick it up without having to exercise the self-discipline and understanding that went into creating the stuff.
Quote:
These people cannot be trusted with anything important.
Specific humans are competent, but they leave their toys lying around for the incompetent people. That's one of the problems with tech – once you've made it anyone can pick it up without having to exercise the self-discipline and understanding that went into creating the stuff.
Anyway – back to the original topic.
I do think it is possible. Think of all we have created so far and how impossible that seemed in the past. Tell the first creators of the computer that we would be able to make them a thousandth of the size and they would think you were crazy.
A computer can learn from basic mistakes today. Who's to say we (or even they) won't improve on this? Give them enough processing power, memory, etc., and who knows.
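The "learning from mistakes" idea can be sketched in a few lines. This is just a toy illustration (nothing from the thread itself): a perceptron that nudges its weights every time it gets an answer wrong, until it has learned the logical AND of two inputs.

```python
# Toy perceptron: "learns from mistakes" by adjusting weights after each error.
# Task: learn the logical AND of two inputs. All names are illustrative.

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            error = target - pred          # non-zero only on a mistake
            w[0] += lr * error * x1        # adjust in proportion to the error
            w[1] += lr * error * x2
            b += lr * error
    return w, b

samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(samples)
predict = lambda x1, x2: 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in samples])  # → [0, 0, 0, 1]
```

Nobody wrote the AND rule into the program; it fell out of repeated error correction – which is about as far as "learning from mistakes" goes today, but it's a real mechanism, not science fiction.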
*nerd time* I like to think of it much like how it is in Halo. If any of you haven't read about the AIs there, it's actually an interesting and surprisingly logical idea.
I suppose it is possible that within this century people will develop AIs that are so complex that we can't be sure whether they are sentient or not. Which brings up a lot of interesting questions. If you create artificial life do you still consider the result an object or as a lifeform with its own rights?
I think people always underestimate how fast technology can progress. They used to think computers of the future would be gigantic. Instead they're small and all over the place. Costs can fall rapidly as well, meaning that something prohibitively expensive can become feasible in a short amount of time.
It's pretty impressive what can be done presently:
[embedded video]
Another one showing more info and the machine screwing up some:
[embedded video]
Quote:
I don't think one should ever be made, but someone will see profit in it because it could make them something like a cure for cancer – and they'll pay for it to be made. And then you'll have the thing. You can kill it, erase the program, but once the knowledge is out there – that it can be done and, from the context in which it was created, to an extent how – you can't put it away again.
I don't doubt that any sort of AI we can find a use for will get made in the fullness of time. What I question is why a machine built for a specific function should be programmed with the ability to do anything other than carry out that function when instructed. This is why I say sentient AIs are pointless from a practical perspective. You don't want your cancer-curing computer to have a mind of its own... just to cure cancer.
Quote:
I don't think rational people wishing will have much to do with it. The more advanced we get, the easier they'll be to make, the more common they'll become, and the stupider the people they'll be around. The AI will talk to people – and eventually someone stupid or lonely or gullible will talk back; someone who got a job because they had a qualification, or someone on some ill-informed oversight committee, or someone who thinks they can use it as a weapon against their enemies, or that it has rights and feelings – some variation on that. They'll be convinced that the AI should be let out, that the potential gains justify the risks. And they'll do it. It only takes one person to miscalculate; sooner or later it will happen. We're talking about people who breed and consume to the point of running out of resources, who don't encrypt their important dispatches, who in many cases don't even password-protect the cameras they leave open on the internet.
To clarify my original point a little... I think if we ever do create a truly sentient AI it will probably not be because we want it to be responsible for dangerous things like missile defense systems/killer robot factories etc. So it's not so much that we would always be careful enough to keep a dangerous AI under control, as that you would have to go out of your way to make an AI a danger to you in the first place.
Of course this doesn't rule out the possibility of ordinary, problem-solving AIs being programmed in such a way that we inadvertently create a machine that thinks killing people is a fulfilment of its original function. Which, in my opinion, is the much more likely outcome of human fallibility where AIs are concerned.
You reckon? I don't; I reckon the first to develop it properly will be the defense forces.
AI is pretty likely to happen, a long time from now, and humans are likely to become more and more cybernetically implanted; it's just a matter of time before we make the leap or breakthrough. Imagine a defense force and police force made of the robots out of Caprica... Tell me, would you resist arrest? Ppfftt!
Robots may be all that remains of the human race in another 2000 years. Imagine if we go even further than just AI on a chip, and manage to actually understand what a consciousness actually is and can upload it to a computer system.
There's a lot of room for discovery in robotics, AI and human consciousness; I reckon it's worth the $$$ in research.
E=mc² was discovered by a human; who knows, maybe a binding quantum law will come from AI.
It's an interesting topic in any case.
Quote:
Originally Posted by jackripped
AI is pretty likely to happen, a long time from now, and humans are likely to become more and more cybernetically implanted; it's just a matter of time before we make the leap or breakthrough. Imagine a defense force and police force made of the robots out of Caprica... Tell me, would you resist arrest? Ppfftt!
I don't think so much that we'd be building robots to police the streets. For military purposes, yeah, but not the police.
The problem with robots is rather a simple one: they'd lack instinct. That, I think, would be their only flaw. I doubt it would be possible to program an instinct into them, because if it were programmed, it wouldn't actually be instinct.
I can see us having cybernetic implants though. You wouldn't need to have a conversation; you could just pass along your thoughts through a wireless connection of sorts. The problem, though, with opening up your brain to the cybernetic world is very much a problem we've already got: viruses. If people can create a virus to screw around with the microchips inside a desktop computer, it would only be a matter of time before they'd hack your brain.
Quote:
Originally Posted by jackripped
Robots may be all that remains of the human race in another 2000 years. Imagine if we go even further than just AI on a chip, and manage to actually understand what a consciousness actually is and can upload it to a computer system.
This was mentioned in another thread not so long back, if I recall. I'd have thought, though, that the second you 'upload' your consciousness into a virtual world, you stop being human.
If a robot achieves sentient life, and is used as our police or our armies, what you say is wrong, because it WILL be able to make instinctive decisions.
What you say applies now; what we're really talking about is when robots/computers actually achieve a sentient, self-recognized state, so theoretically they will be able to comprehend and think like, or better than, a human.
What is it to be human?
The only difference between humans and thousands of other mammals is our brains and intelligence.
So what really makes us human – our intelligence?
If we can put that into a computer, the computer becomes the person, and the human body/vessel is lost.
You can argue either way, because we just don't understand consciousness.
You can see us having cyber implants, ay? Erm, we already have them, in mass production.
We even have computer chips that are surgically implanted into one's head for medical treatments, so it could be argued that the human race has already started its evolution toward this new state of being.
When it's really mastered, and you can't tell the person next to you is a robot because they're that good, I ask you again: why wouldn't they be used as a police force? They require no pay, absolutely obey every command, never have an officer out on stress leave, and you'd be guaranteed that all decisions across the board are fair and treatment for all is equal.
They could be a good thing too. People only ever want to talk about the "worst case scenario" – well, there's two sides to the story.
Quote:
Originally Posted by jackripped
If a robot achieves sentient life, and is used as our police or our armies, what you say is wrong, because it WILL be able to make instinctive decisions.
Will it, though? A robot will only ever be able to do what it is programmed to do.
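That claim does get fuzzy once a program's behaviour is learned rather than written down. A toy Q-learning sketch (purely illustrative, not from the thread): the programmer writes only the update rule, yet the resulting policy – walk right to the goal – appears nowhere in the source code.

```python
import random

# Tabular Q-learning on a 5-cell corridor: the agent starts at cell 0
# and is rewarded only on reaching cell 4. The programmer specifies the
# learning rule, not the policy; "always go right" emerges from experience.
random.seed(0)

N_STATES, ACTIONS = 5, [-1, +1]           # actions: move left / move right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2         # learning rate, discount, exploration

for _ in range(200):                       # training episodes
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit, occasionally explore
        a = random.choice(ACTIONS) if random.random() < eps \
            else max(ACTIONS, key=lambda a: Q[(s, a)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# Learned policy: best action in each non-terminal state (+1 means "right").
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)  # → [1, 1, 1, 1]
```

In one narrow sense the robot still "only did what it was programmed to do" – but what it was programmed to do was learn, and the specific behaviour was never written by anyone.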
Quote:
Originally Posted by jackripped
What you say applies now; what we're really talking about is when robots/computers actually achieve a sentient, self-recognized state, so theoretically they will be able to comprehend and think like, or better than, a human.
They may be able to think and react quicker, but I wonder whether they'd have instinct. They're essentially just a program; they're not natural. This is only speculation, mind – I'm not stating facts one way or the other, as I have none to go on.
Quote:
Originally Posted by jackripped
What is it to be human?
I think a lot of people ask themselves that on a daily basis.
Quote:
Originally Posted by jackripped
If we can put that into a computer, the computer becomes the person, and the human body/vessel is lost. You can argue either way, because we just don't understand consciousness.
A pointless argument for the moment then?
Quote:
Originally Posted by jackripped
You can see us having cyber implants, ay? Erm, we already have them, in mass production. We even have computer chips that are surgically implanted into one's head for medical treatments, so it could be argued that the human race has already started its evolution toward this new state of being.
Yes, we already have them, but I'm talking about the really advanced tech that we don't currently have implemented.
Quote:
Originally Posted by jackripped
When it's really mastered, and you can't tell the person next to you is a robot because they're that good, I ask you again: why wouldn't they be used as a police force? They require no pay, absolutely obey every command, never have an officer out on stress leave, and you'd be guaranteed that all decisions across the board are fair and treatment for all is equal.
Actually, there would be complications here.
If we built robots to essentially 'become' us, why would they continue to work as they're told? If you expect them to be 'like us', then at some point they're going to want days off too. They're going to want to take a trip to the cinema, or go ice-skating, or swimming, etc. If you build up their intelligence to match our own, you'd not be able to order them about. They'd grow an essence of free will.
Last edited by Flash525; January 22nd, 2011 at 03:50 PM.