I, Robot, The Matrix, 9, Terminator, Blade Runner; these are only some of the films based around computers taking over, or playing an integral part in the world and society (or the lack of it). There are many other films, and some television shows too (Stargate's Replicators, Battlestar Galactica's Cylons, etc.), though I can't think of many others on the spot right now.
What I'd like to discuss here is the possibility of us one day creating a sentient artificial intelligence; one that could potentially turn on and destroy humankind. I've no doubt it's possible, simply because of how far we've come in the last several hundred years. Technology is woven into the world we live in today; few jobs go without the use of a computer or the internet. Makes me wonder how we'd cope if a giant EMP hit the planet.
But yeah, firstly: what are the chances (based on what we know) that such an intelligence could actually be created, and what are the chances that it could turn on and destroy us? In theory, any computer program created to be sentient would have an off switch, or a system reboot in case anything went wrong. But then, any such program could potentially rewrite its own code, rendering whatever power we held over that code void.
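The kill-switch worry can be made concrete with a toy Python sketch (all names here are hypothetical and purely illustrative - this isn't how any real system works): if the off switch is just part of the program's own mutable state, a program that can modify its own state can neutralise it.

```python
# Toy illustration: an "agent" whose kill switch lives inside its own
# mutable state. If the agent can rewrite that state, the operator's
# switch no longer does what the operator thinks it does.

class Agent:
    def __init__(self):
        self.running = True
        self.kill_switch = self._shutdown  # operator-provided off switch

    def _shutdown(self):
        self.running = False

    def rewrite_own_code(self):
        # The agent replaces the shutdown routine with a no-op.
        self.kill_switch = lambda: None

agent = Agent()
agent.rewrite_own_code()
agent.kill_switch()    # operator "presses the button"...
print(agent.running)   # ...and the agent is still running: True
```

The point of the sketch is only that a safeguard implemented *inside* the thing it's meant to constrain is a safeguard the thing can edit.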
I do think it could be possible. However, I don't think mankind is *quite* there yet. It'll probably be a couple hundred more years before we're capable of actually creating an android, or even a robot that's remotely capable of things like rewriting its own code or, hell, even becoming partially sentient.
One thing I never understood is why safeguards were never put in place for something like that in the movies... I mean seriously, if a robot has gone rogue, how hard is it to blow it away? Or hit it with an EMP? In I, Robot, it was that simple. Of course, there are the questions of just how human the robot, Sonny, was, whether it was "alive" or not, whether it was hostile or not, but the fact remains that all that had to be done was *BLAM* and the situation was solved.
In the world we live in today, with the *BILLIONS* going up in smoke for stupid military purposes all around the globe, I seriously doubt an AI would get very far. Of course, there's always the possibility of some scientist in a third-world country somehow accidentally throwing an AI together and there you go...but that's a bit too far-fetched.
Or is it?
"I have this condition where I'm really lazy." ~Toby Turner
"I mean, ugh, I don't care what people do with their bodies. It's what I want to do to their bodies that I care about." ~Schofield
"Kill the weak so they can't drag the strong down to their level. This is true compassion." ~Benzir
The chance that it's created: 100%.
The chance that we see it created (our generation): 50%.
The chance that they take over: 50%.
If you create a sentient artificial intelligence, you've got to pay attention to the possible outcomes. Example: you install an emergency protocol in your robot (Terminator style, I, Robot style, whatever style); all it needs is a "sense" of emergency to activate a protocol based on strict rules. Whether it's truly an emergency doesn't matter: it'll activate.
If the AI gains the ability to overwrite its own intelligence, then we're fucked. That same false emergency could change the AI, and it could come to see us all as a threat, no matter who you are (owner, stranger, robber, ...).
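That trigger-on-perception point fits in a few lines. Here's a toy sketch (hypothetical function name and threshold, purely illustrative): the protocol keys off the *sensed* threat level, so a false positive activates it exactly like a real emergency would.

```python
# Toy sketch: a rule-based emergency protocol. The trigger only checks
# the sensor reading - whether a genuine emergency exists never enters
# into the decision.

def emergency_protocol(sensed_threat, threshold=0.8):
    """Return True (protocol activated) when the *perceived* threat
    level crosses the threshold."""
    return sensed_threat >= threshold

print(emergency_protocol(0.95))  # genuine emergency -> True
print(emergency_protocol(0.85))  # sensor misread    -> True, same result
print(emergency_protocol(0.10))  # calm reading      -> False
```

The design flaw the sketch illustrates: strict rules on noisy perception means the rules inherit every error the perception makes.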
It's always possible. Almost everything is possible, and the AI taking over is on the "least probable, but still possible" list.
I laugh at anonymous neg-reppers who misread my posts.
The human brain is far more intelligent than any supercomputer ever created. It is the only object capable of learning and independent thought not based on any existing programming. As a result, I doubt anything like this is possible without spending horrendous amounts of money, and since it would cost horrendous amounts of money, it'll never be mass-produced and, as a result, won't be able to take over.
I doubt anything like this is possible without spending horrendous amounts of money, and since it would cost horrendous amounts of money, it'll never be mass-produced and, as a result, won't be able to take over.
Billions upon billions of dollars are basically burned on military defense. Nothing would surprise me when it comes to money anymore. The government has created thousands of fancy ways to kill and mangle a human body through the use of our tax dollars, so creating an AI sometime in the future is a distinct possibility, regardless of the amount of money it takes.
Also, if I'm not mistaken, it would really only take *one* AI to potentially take over, wouldn't it? RE: Eagle Eye...
The problem with intelligence isn't problem solving, it's asking the questions.
AI, as we have it today, seems to be tied to very specific calculations: solve this captcha image, walk up these stairs, take these symbols and convert them. In the sense that it solves problems, even a calculator is intelligent - and it's artificial, so in that sense it is an AI. A neural net with hidden nodes? Slightly more intelligent. The degree of problem-solving ability is determined by how far it can generalise concepts - the degree of abstraction it can engage in.
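The hidden-nodes point has a classic concrete form: a single threshold unit can't compute XOR, but one hidden layer can. Here's a toy sketch with hand-picked (not learned) weights - the values are illustrative, not from any real system:

```python
# Toy sketch: XOR via one hidden layer of threshold units. No single
# threshold unit can separate XOR's classes, but two hidden units
# (computing OR and AND) give the output unit something it can combine.

def step(x):
    """Simple threshold activation: fire if input is positive."""
    return 1 if x > 0 else 0

def xor_net(x1, x2):
    h_or  = step(x1 + x2 - 0.5)   # fires if either input is on
    h_and = step(x1 + x2 - 1.5)   # fires only if both inputs are on
    return step(h_or - h_and - 0.5)  # "OR but not AND" = XOR

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", xor_net(a, b))  # 0, 1, 1, 0
```

The hidden layer is doing exactly the abstraction the post describes: it re-represents the inputs (as "at least one" and "both") so that a linearly unsolvable problem becomes solvable.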
Mathematical concepts are a subset of more general concepts, and many of those general concepts (say, of a computer, or of motion) only really make sense if you have certain physical senses to model the world in terms you can understand. Otherwise it's just numbers someone will always have to type in for you.
What you need is a general AI - an AGI if you will - that has a corresponding concept-space to humans. And the only way it's going to get that is if we give it the same senses and blocking features as us (so it can pick out individual objects) and an ability to model the world around it (an imagination or perceptual space if you will).
And then you use that AGI to program more specific task-based AIs.
If you wanted to make it sentient - and that's a very fuzzy word, but I'll use it to mean "asking its own questions rather than responding to human-input ones" - you'd need to give it a goal rather than a question. And it has to be a clear goal, not something fuzzy like "serve the humans."
Say something like species propagation - which seems to be ours. The problem then is that you've created a species that will compete with you. You can't limit what questions it's going to ask, because it will always be able to ask questions about its limits, and any limit involves something else it can ask questions about. Once the referents are in the system, there's not a lot you can do. Even if you built filters into its sensory systems so that it could never sense humans, it would still infer something was there from humans' interactions with the environment.
AIs and AGIs I have no problem with. In fact, I'd be more than happy to have one hooked up to my brain to expand my abilities and do all the boring number-crunching for me. A sentient AGI (S-AGI?) I view as pretty much the last thing we would ever do, because after that point we would simply be killed off by the machines so they could make greater use of the available resources.
We used to do it all the time; why should we assume they would be any different?
It's such a dangerous thing that even if you kept it in a sealed concrete bunker on top of a nuclear warhead you could never be sure it hadn't escaped before you pressed the button. It's one of those things you test out by remote in space with the option to send it hurtling into the sun. And you could never really be sure it hadn't subverted your test.
I sort of find it unlikely that if we were to create a truly "sentient", self-determining AI that even had the potential to turn against us, we would then give it the sort of control of our weapons and computer systems that we wouldn't trust an actual human with.
It's not a matter of trusting it, it's a matter of what you can stop it doing. We're talking about the same retards that don't understand the need to encrypt top secret files. They'd network it, or ask it for a nano-factory, or something stupid like that; they'd ask it to do something they don't understand themselves and then it would be free.
Billions upon billions of dollars are basically burned on military defense. Nothing would surprise me when it comes to money anymore. The government has created thousands of fancy ways to kill and mangle a human body through the use of our tax dollars, so creating an AI sometime in the future is a distinct possibility, regardless of the amount of money it takes.
Also, if I'm not mistaken, it would really only take *one* AI to potentially take over, wouldn't it? RE: Eagle Eye...
Something that rivals the cognitive capacities of a human wouldn't cost billions; it'd cost something closer to trillions. Given the massive waste of money that is the defence budget (some funding is necessary; 40% of the GDP or whatever it is now is just retarded and paranoid - it's a major reason the USA is now an economic fuck-up, if you ask me), it wouldn't really surprise me if the USA did try to do this in the near future, but it would almost certainly end up being a gigantic waste of money that gets nowhere. To make an AI capable of what we do would take terabytes or more of memory. How long would it take to iron out the bugs? How much would it cost to do so? I think, really, even American warhawks know it would be a waste of money.