FileFront Forums

FileFront Forums (http://forums.filefront.com/)
-   The Pub (http://forums.filefront.com/pub-578/)
-   -   Artificial Intelligence (http://forums.filefront.com/pub/433026-artificial-intelligence.html)

Freyr January 24th, 2011 03:53 AM

Re: Artificial Intelligence
 
My two pence.

I already write convincingly "human" AIs for a strategy game called Armada II. My latest generation of AIs "think" in the sense that they don't do things humans consider "stupid", and frankly most of the time they do better than a human can, as traumatised beta testers will testify.

However, they don't "think" in the slightest. They calculate that the force in grid X is superior in weapons output plus hitpoints to their own force, and search for a grid where weapons output plus hitpoints is inferior to their force. Upon finding it, they take the route to that grid that passes through as few grids as possible. This often results in someone defending one entrance to their base extremely well, and the AI declining to attack that area in favour of one they aren't defending, until it has the force required to raze it.

These aren't decisions; they are simply very complicated logic trees involving a hundred or so calculations every cycle.
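To show how mechanical that is, here's a minimal Python sketch of the kind of grid evaluation described above. This is illustrative only: the names, data layout, and functions are all made up for the example, not taken from the actual Armada II AI scripts.

```python
# Hypothetical sketch of grid-based target selection: compare force values
# per grid, then reach the weakest grid through as few grids as possible.
from collections import deque

def grid_strength(grid):
    """A grid's combat value: total weapons output plus total hitpoints."""
    return sum(u["weapons"] + u["hitpoints"] for u in grid["units"])

def pick_target(grids, adjacency, start, my_strength):
    """Breadth-first search outward from `start`, returning the first
    (i.e. fewest-hops) grid whose defending force is weaker than ours."""
    seen, queue = {start}, deque([start])
    while queue:
        g = queue.popleft()
        if g != start and grid_strength(grids[g]) < my_strength:
            return g  # weakest reachable target via the fewest grids
        for nb in adjacency[g]:
            if nb not in seen:
                seen.add(nb)
                queue.append(nb)
    return None  # no attackable grid yet: keep building up force
```

Because the search expands one hop at a time, a heavily defended grid is simply skipped in favour of a weaker one further along, exactly the behaviour described, with no "decision" anywhere in sight.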

Quote:

Originally Posted by jackripped (Post 5458191)
If a robot achieves sentient life and is used as our police or our armies, what you say is wrong, because it WILL be able to make instinctive decisions.

This is my point. They won't, because computers don't make "instinctive decisions". If you enter input one, they return output one.

It's theoretically possible to calculate on paper what my AI is going to do if you know what it can see, but it's pretty hard in practice because it's making decisions so fast that you can't calculate what it's going to do at a speed that matters. I programmed the thing, and it regularly surprises me.

However it's no more sentient than my wristwatch.

Quote:

Originally Posted by jackripped (Post 5458191)
We even have computer chips that are surgically implanted into one's head for medical treatments, so it could be argued that the human race has already started its evolution toward this new state of being.

Not sensibly. Those chips aren't interacting with your consciousness at all, and mostly you're talking about things like pacemakers, which are simply a technical application to correct a biological abnormality.

Quote:

Originally Posted by jackripped (Post 5458191)
When it's really mastered, and you can't tell the person next to you is a robot because they're that good, I ask you again: why wouldn't they be used as a police force? They require no pay, absolutely obey every command, never have an officer out on stress leave, and you'd be guaranteed that all decisions across the board are fair and treatment for all is equal.

Because policing is fundamentally something that requires discretion and tact?

Quote:

Originally Posted by jackripped (Post 5458348)
How do you figure a robot will only ever be able to do what it's programmed to do?

Because it's a computer program. Computer programs can only do what they are programmed to do, and even if you could write a program that can alter its own programming, that program is only going to be able to alter its programming according to the code it started with.
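A toy example makes this concrete. This is a hypothetical Python sketch, not any real system: even a program that "learns" only alters its state through an update rule fixed when it was written.

```python
# Hypothetical sketch: a "learning" program's mutable state is only ever
# changed by a rule that was fixed at write time. The state changes;
# the rule that changes it never does.
class Learner:
    def __init__(self):
        self.weights = {}  # mutable, "self-altered" state

    def respond(self, stimulus):
        return self.weights.get(stimulus, 0)

    def learn(self, stimulus, reward):
        # This update rule IS the original programming. The program can
        # rewrite `weights`, but never this line itself.
        self.weights[stimulus] = self.weights.get(stimulus, 0) + reward
```

Run the same stimuli through two copies and they end up in exactly the same state: the "learning" is itself just programming, which is the point being made above.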

Quote:

Originally Posted by jackripped (Post 5458348)
That doesn't make sense, and is absolutely wrong.
Even today computers have errors, pieces of code that can learn, and don't do what they were designed to do at 100% efficiency.
A computer at a basic level doesn't always do what it's told, and how many times has Windows crashed, been hacked, been modified, etc.? My point is they already do things they were not designed to do.

You're fundamentally misunderstanding computers here. Computers do always do what they are programmed to do. That might not be what you expect or even want them to do, but it's what they are programmed to do. Again, hacking or modifying a program is changing its design by modifying its programming. It will always do what its programming commands. At a fundamental level, even a complicated AI is no more "intelligent" than a toaster.

jackripped January 24th, 2011 12:22 PM

Re: Artificial Intelligence
 
But no-one can say that the future won't have a computer and program that becomes self-aware and can make decisions for itself; the computer is just a body for the program, and it is entirely possible that we could do this.
Your point applies now.
We were talking about ''when'' a computer and program become self-aware, if it's possible, and it sure looks like it is according to most scientists involved in this science.
You also failed to address programs that self-teach right now; they have been around for about a decade. They're not self-aware, but that's the first step toward what we want to achieve.
How anyone can just write it off and say a computer or program will never become self-aware seems a little narrow-minded to me.

Mr. Pedantic January 24th, 2011 01:10 PM

Re: Artificial Intelligence
 
Just a question: how do you tell someone (or something) that they're self-aware? Seems kind of contradictory, doesn't it?

Freyr January 24th, 2011 02:57 PM

Re: Artificial Intelligence
 
Ah, that's where you start looking at the Turing test.

Mr. Pedantic January 24th, 2011 03:28 PM

Re: Artificial Intelligence
 
Kind of, but not really. Setting a test is one thing; cheating it is another. And unless software programming takes a drastic turn, that is what we will be doing: cheating it.

Nemmerle January 24th, 2011 03:30 PM

Re: Artificial Intelligence
 
Quote:

These aren't decisions, they are simply very complicated logic trees involving a hundred or so calculations every cycle.

That's just what a decision is.

jackripped January 25th, 2011 02:40 AM

Re: Artificial Intelligence
 
Quote:

Originally Posted by Mr. Pedantic (Post 5459075)
Just a question: how do you tell someone (or something) that they're self-aware? Seems kind of contradictory, doesn't it?


You don't. Period.

If it doesn't realize it, it's not self-aware. And I ask you: do you remember anyone asking or telling you that you are self-aware as a very young child? Or did you just realize it?

A computer may discover it in just the same way a human brain discovers it, once a computer and program actually have the capacity or ''brain power'' to do that. Right now they don't, and this is half the reason a lot of people can't see it being possible; I think they're wrong, just based on how far computers and programs have come in the last 22 years.
In 200 years, where do you think computers/programs will be? They sure won't be 32- or 64-bit; how absurd is it that we're trying to get a self-aware machine on 64-bit, ffs, it's about 3% of the power really needed!

I have no doubt they will come in time, and we will have our own slave robots first, but sooner or later one will become self-aware. Alive, if you will.

Freyr January 25th, 2011 03:31 AM

Re: Artificial Intelligence
 
Quote:

Originally Posted by Nemmerle (Post 5459208)
That's just what a decision is.

Not really. When you press Start, does the computer decide to present the Start menu? It has no choice.

If you push your fingers into a power socket and get an electric shock, you're not going to do it again, because it hurts. A computer would repeatedly do that, even if it were damaging its circuitry, if it were programmed to do so. It doesn't have the capacity to learn, save what's programmed into it. If it's not programmed to avoid a problem then it can't avoid it.

A computer program cannot exceed its programming.

No computer program will ever be able to, even if it is programmed to rewrite its own programming, because the programming can only be altered in line with the original code. That means it's possible to calculate everything the program will ever do if you know what inputs it receives.
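That replayability claim can be illustrated with a hypothetical Python snippet (the program and its names are invented for the example): even a program that uses "randomness" is a fixed function of its inputs and seed, so its entire behaviour can be computed in advance.

```python
# Hypothetical sketch: a toy "AI" with internal randomness. Given the
# same inputs and the same seed, its whole output trace is reproducible.
import random

def run_program(inputs, seed=0):
    # Even the "randomness" is a fixed function of the seed.
    rng = random.Random(seed)
    trace = []
    for x in inputs:
        trace.append((x, x * 2 + rng.randrange(10)))
    return trace
```

Run it twice with identical inputs and you get byte-for-byte identical traces, every time; nothing in the run was ever undetermined.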

Quote:

Originally Posted by jackripped (Post 5459489)
In 200 years, where do you think computers/programs will be? They sure won't be 32- or 64-bit; how absurd is it that we're trying to get a self-aware machine on 64-bit, ffs, it's about 3% of the power really needed!

Computational power is not the problem. We don't have any programming that addresses a computer being able to make its own decisions and learn from its mistakes, and frankly nobody knows where to start.

If mere computational power were the problem, then there would be hundreds of self-aware programs running on supercomputers that are far more powerful than your desktop PC.

Quote:

Kind of, but not really. Setting a test is one thing; cheating it is another. And unless software programming takes a drastic turn, that is what we will be doing: cheating it.
And you think Alan Turing wasn't cheating, using an AI written on paper tapes 50 years ago?

His point is that if a human can't tell the difference between a human and a computer program/AI by the results it produces, then it passes his test of being indistinguishable from a human; and without delving very deeply into philosophy that will be debated from now until the end of time, that's about the best we can do to answer the question of "does an AI think?".

Nemmerle January 25th, 2011 03:50 AM

Re: Artificial Intelligence
 
Quote:

Originally Posted by Freyr (Post 5459499)
Not really. When you press Start, does the computer decide to present the Start menu? It has no choice.

If you push your fingers into a power socket and get an electric shock, you're not going to do it again, because it hurts. A computer would repeatedly do that, even if it were damaging its circuitry, if it were programmed to do so. It doesn't have the capacity to learn, save what's programmed into it. If it's not programmed to avoid a problem then it can't avoid it.

A computer program cannot exceed its programming.

No computer program will ever be able to, even if it is programmed to rewrite its own programming, because the programming can only be altered in line with the original code. That means it's possible to calculate everything the program will ever do if you know what inputs it receives.

All the evidence seems to show that we don't have the capacity to learn, save what evolution has given us. We program computers; evolution 'programs' us. We are both constrained by our causes; neither of us really chooses in the sense you seem to be using the word.

jackripped January 25th, 2011 12:33 PM

Re: Artificial Intelligence
 
All the evidence shows we do have the capacity to learn; what philosophical rubbish are you talking about?

How could humans communicate 2000 years ago like we can now? I seem to think we have learned something along the way.
Wait, Apollo 13 just magically happened and humans didn't learn a thing from that, did they? And of course, who could forget humans discovering electricity and LEARNING to harness it..........

Of course evolution did program us, but you do realize that we humans are now programming evolution as well. Look at what we have created: we have created life from nothing, we have created an entirely new DNA structure, one that's never evolved on earth before; we're playing god, if you like. We learned how to do it!


All times are GMT -7.
