

How do you feel about Artificial Intelligence?

I look forward to the advancements: 60.0% [ 9 ]
We can't trust machines: 20.0% [ 3 ]
(other: please post opinion): 20.0% [ 3 ]
Total Votes: [ 15 ]

An exchange between Suicidesoldier#1 and Vannak:

I haven't thought enough about, or don't know enough about, the relationship between intelligence and sentience to say that one won't eventually cause the other. It's a somewhat reasonable assumption. For instance, a type of AI we have today, the automatic stock trader, has to account for its own behavior to trade accurately and precisely. That kind of introspection, from a sophisticated enough AI, may lead to sentience.
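As a loose illustration of what "accounting for its own behavior" might mean in that setting, here is a toy sketch; the impact model, the numbers, and the function names are invented for illustration and are not taken from any real trading system.

```python
# Toy sketch: an execution loop that accounts for its own price impact.
# All constants here are made up for illustration.
PRICE_IMPACT_PER_SHARE = 0.0001   # assumed $/share the price moves per share we buy
MAX_SELF_IMPACT = 0.05            # cap on how far one slice may push the price ($)


def child_order_size(remaining_shares: int) -> int:
    """Cap each slice so the trader's own estimated impact stays small."""
    cap = int(MAX_SELF_IMPACT / PRICE_IMPACT_PER_SHARE)
    return min(remaining_shares, cap)


def execute(total_shares: int, mid_price: float) -> float:
    """Work a parent buy order in slices, folding our own impact back into the price."""
    remaining, cost = total_shares, 0.0
    while remaining > 0:
        size = child_order_size(remaining)
        mid_price += size * PRICE_IMPACT_PER_SHARE  # our own order moves the market
        cost += size * mid_price
        remaining -= size
    return cost / total_shares  # average fill price, including self-inflicted slippage


print(round(execute(5000, 100.0), 3))  # 100.275 with these toy numbers
```

Even this toy version has to model itself (its own market impact) to hit its target price, which is the sense of "accounting for its own behavior" meant above.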


Calculators are very intelligent, but are not sentient.

They can't really make decisions and they don't think.


They just do.

Input commands give an output.


Just because it's complex doesn't mean it's sentient.

As soon as we give them the ability to think about things, we're in for trouble, even if they can't act on those thoughts.


Give them free thought but not free will; why, it might as well be slavery.

Calculators are as intelligent... how?


Quick!

Perform 2,125,456,285 x 299,792,458!


And GO!
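(For what it's worth, that is exactly the kind of thing a calculator, or any arbitrary-precision integer type, settles instantly; a minimal sketch in Python, using nothing but built-in integers:)

```python
# Python integers are arbitrary precision, so this is exact rather than a float estimate.
a = 2_125_456_285
b = 299_792_458            # incidentally, the speed of light in m/s
print(f"{a * b:,}")        # 637,195,764,051,698,530
```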

You lack a decent understanding of what intelligence is.


Computation, problem solving, reasoning (as in which method to use, etc.)

I'm pretty sure a calculator fits in there quite nicely.


The point is that raw computation power would not result in sentience.

Thinking, feeling, consciousness, they are different traits all their own.


Even something as advanced as an AI, such as in a video game, could not qualify as sentient.

That would require abstract and perhaps quite arbitrary thought: a thinking machine.
How much abstract thought and decision making does a calculator actually do? Or a supercomputer, for that matter?

Just because it can do computation faster than humans doesn't mean it's smarter than us; it means that when you focus its tiny computational capacity on one thing, it can do that one thing quickly.


But it's still smarter than us within certain applications.

Yet it doesn't have sentience.


No, it's not. A calculator is not smarter than a human in any respect. Speed of thought does not in any way influence intelligence. That disqualifies calculators on a level far more superficial than the one I should really be picking up on.

But go ahead, try to find a definition of intelligence where a calculator exceeds humans.


Speed of thought totally influences intelligence, not to mention that calculators are almost never wrong.

Completely accurate, nearly instantaneous; that far exceeds human capabilities.


You'd be hard pressed to find a human who could even come close to what a standard non-scientific home calculator can do.

That being said, they obviously are not sentient.


Intelligence, knowledge, and accuracy of assessment are not the same as sentience, and I doubt they're what leads to sentience.
I'm sorry, but if you think a calculator has problem-solving abilities, you're simply wrong. A calculator is no more intelligent than an abacus, and no more intelligent than a rock is at measuring the gravitational field strength of the Earth.


Its capacities far exceed humans' when it comes to mathematical operations.

As far as problem solving goes, it works out which method to take based on the given input.


Which algorithm it uses for large- and small-scale numbers, based on how many digits it has to calculate and so on, does give it some limited degree of problem solving, but its sheer accuracy, temporary memory, and speed give it quite the computational ability.

Yet you'll rarely see it make computations on its own.


There is no point, really, to a sentient calculator, or any sentient machine.

Consciousness would have no bearing on its capacities, nor on a thermometer's ability to measure heat, nor on a robot's ability to build a car, so it is a useless endeavor that could only bring a robot pain.
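To make the algorithm-selection point above concrete, here is a toy sketch of a fixed, size-based dispatch rule. The cutoff, the routine names, and the schoolbook/Karatsuba pairing are invented for illustration; no claim is made that real calculator firmware looks like this.

```python
def schoolbook(a: int, b: int) -> int:
    """Grade-school long multiplication over decimal digits."""
    result, shift = 0, 0
    for da in reversed(str(a)):
        carry, row, place = 0, 0, 1
        for db in reversed(str(b)):
            carry, digit = divmod(int(da) * int(db) + carry, 10)
            row += digit * place
            place *= 10
        row += carry * place
        result += row * (10 ** shift)
        shift += 1
    return result


def karatsuba(a: int, b: int) -> int:
    """Split-and-recombine multiplication; pays off once operands get long."""
    if a < 10 or b < 10:
        return a * b                      # tiny operands: just multiply directly
    m = max(len(str(a)), len(str(b))) // 2
    base = 10 ** m
    a_hi, a_lo = divmod(a, base)
    b_hi, b_lo = divmod(b, base)
    z0 = karatsuba(a_lo, b_lo)
    z2 = karatsuba(a_hi, b_hi)
    z1 = karatsuba(a_lo + a_hi, b_lo + b_hi) - z0 - z2
    return z2 * base * base + z1 * base + z0


KARATSUBA_CUTOFF = 8                      # digits; purely illustrative


def multiply(a: int, b: int) -> int:
    """The 'limited problem solving': a fixed rule picks a routine by operand size."""
    if max(len(str(a)), len(str(b))) < KARATSUBA_CUTOFF:
        return schoolbook(a, b)
    return karatsuba(a, b)


assert multiply(9301, 27) == 9301 * 27                                      # small: schoolbook path
assert multiply(2_125_456_285, 299_792_458) == 2_125_456_285 * 299_792_458  # big: Karatsuba path
```

The "decision" here is just a threshold baked in by the programmer, which is essentially the limited, mechanical kind of problem solving being described.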


A calculator is a machine, not a thinking thing. It is no more intelligent for being able to do certain types of math problems than an experiment is intelligent for giving accurate data. There is little decision making that isn't built into its programming; the program may be intelligently created, but it is not intelligent in and of itself.



In that case no machine can be intelligent.

Also, "experiment" is a verb, not really a noun.


In any case, computational capacity does not result in sentience.

It would have to be specifically programmed.

No machine we have so far is intelligent, but some are closer than others. Watson, the Jeopardy! supercomputer, is closer to intelligent than a handheld calculator.

But the problem with your statement is this: the fact that computation does not equal sentience doesn't mean computational power can't lead to sentience. The ability to add isn't what makes something sentient, but we don't know whether enough added computational capacity can lead to something that can be sentient. What I'm saying is that we could build two computers identical in computational terms, and with the right software one could be labeled sentient while the other is just a machine without said programming.



Sentience isn't about raw computation power, as calculators have shown.

We already know that won't be the case, as we have seen with supercomputers.


Sentience is a matter of thinking; you have to be programmed to think.

A screwdriver can't think any more than a calculator or a supercomputer can. Thinking is something that has to be specifically programmed, or it has to be a glitch of some kind; random, arbitrary thought can't happen without exact stimuli producing it first.
Luke_DeVari

It's reasonable to assume that there is some "density" involved in one who continues to belabor a standpoint without adjusting his argument or perspective to conflicting information.
It is unlikely that he has firsthand experience with the subject we are trying to debate.

Also,
there is such a state of being as Animate Non-Cognitive.
A Venus flytrap responds reflexively to catch its prey. It's not trying to plan ahead or time it just right.
A designed social robot would be able to appear sentient with a complex pattern of reactions, even though there is no genuine thought going on.
I would argue that there are many humans who are bordering on non-cognitive because of their lack of rationality and critical thinking.

I agree with your final point that a droid cannot think without an input stimulus. This is especially true in the case of social robots; they would be unable to start a conversation unless, for example, they were set to make a random statement from a pre-made list after a certain time delay. This act of conversation-starting doesn't make them any smarter, but it makes them appear more intelligent. In a way, interactivity denotes apparent intelligence.
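A minimal sketch of that timer-plus-canned-list behaviour, as described above; the opener lines, the silence threshold, and the function name are invented here and not taken from any actual social-robot platform.

```python
import random
import time
from typing import Optional

# Hypothetical pre-made list of conversation openers.
OPENERS = [
    "Did you see the weather today?",
    "What have you been working on lately?",
    "Tell me something interesting.",
]

SILENCE_LIMIT = 5.0  # seconds of silence before the robot speaks up


def maybe_start_conversation(seconds_since_last_input: float) -> Optional[str]:
    """Return a random canned opener once the silence limit is exceeded, else stay quiet."""
    if seconds_since_last_input >= SILENCE_LIMIT:
        return random.choice(OPENERS)
    return None  # no stimulus, no canned trigger, no output


if __name__ == "__main__":
    last_input = time.monotonic()
    for _ in range(4):                     # toy loop: no microphone here, so the silence just grows
        time.sleep(2.0)
        line = maybe_start_conversation(time.monotonic() - last_input)
        if line is not None:
            print("robot:", line)
            last_input = time.monotonic()  # treat its own opener as resetting the timer
```

Nothing in there is thought; the apparent initiative is just a timer and a dice roll, which is the distinction being drawn.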

You make the prospect of programming human-level thought seem so simple with that statement. It's not something that you just sit down one day and figure out like a crossword puzzle...

Anyway, though no tools can think for themselves, a calculator gives the appearance of having more intelligence than a screwdriver. We can have non-sentient machines that appear to have emotions and feelings, like the condition of Animate Non-Cognitive I just mentioned.

Also, it's astronomically unlikely for a glitch to reprogram an AI and make it spontaneously acquire sentience. I think your basis for making such a claim comes from popular media rather than from an understanding of computer science. The vast majority of the time, glitches cause intended functioning to break down or just stop altogether.

Could you give an example of how a glitch in the code could create sentience?



I was merely entertaining the option; a sentient robot will not come about just 'cause.

There will be a reason, and it will most likely be programming, not computation power.

The suggestion of a glitch implied that a robot would possibly become sentient "just cause," so there was a bit of confusion there.

Also, to be a little more technical and go beyond the vague suppositions within "programming": I think that data compression and optimizing heuristic functions along a critical path will lead to "smarter" robots without being solely dependent on computational power. Of course, a lot of research would need to be done to make a program more efficient; in doing so, abstraction of functioning is another important factor, in order to manage the data stream and how the robot should respond to it.
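On the heuristic point, here is a small, self-contained sketch of the general idea: the same search over the same map does far less work when its heuristic function is better informed. The grid, the goal, and the two heuristics are all invented for illustration.

```python
import heapq
from typing import Callable, Tuple

Coord = Tuple[int, int]

# A tiny open 20x20 grid; the only point is to compare how much work two heuristics do.
WIDTH, HEIGHT = 20, 20
START, GOAL = (0, 0), (19, 19)


def neighbors(p: Coord):
    x, y = p
    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= nx < WIDTH and 0 <= ny < HEIGHT:
            yield (nx, ny)


def astar(h: Callable[[Coord], int]) -> int:
    """Run A* from START to GOAL and return how many nodes were expanded."""
    open_heap = [(h(START), 0, START)]        # entries are (f, -g, node)
    best_g = {START: 0}
    expanded = 0
    while open_heap:
        _f, neg_g, node = heapq.heappop(open_heap)
        g = -neg_g
        if g > best_g[node]:
            continue                           # stale queue entry
        expanded += 1
        if node == GOAL:
            return expanded
        for nxt in neighbors(node):
            ng = g + 1
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(open_heap, (ng + h(nxt), -ng, nxt))
    return expanded


print("expanded with no heuristic:       ", astar(lambda p: 0))
print("expanded with Manhattan heuristic:", astar(lambda p: abs(p[0] - GOAL[0]) + abs(p[1] - GOAL[1])))
```

Same search, same answer; the better-informed heuristic simply wastes far fewer expansions, which is one concrete sense in which "smarter" does not have to mean "more raw computation."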
I wish we had more ability to talk about at least some of the technical aspects of this subject...



You could create sentience in an AI program; there's just no point.
