Welcome to Gaia! ::


How do you feel about Artificial Intelligence?

I look forward to the advancements: 60.0% [9 votes]
We can't trust machines: 20.0% [3 votes]
(other: please post opinion): 20.0% [3 votes]
Total votes: [15]
Guys, come on.

The majority, if not the entirety, of our emotions are rooted either in evolutionary adaptation or biological chemistry. There's no reason we'd be forced to create an AI that shares our emotions or desires.

There's no reason to believe that an imperative to survive would necessarily emerge from simply making a smart machine. Remember, in our lineage, that imperative came BEFORE intelligence, not after.

If we think of ourselves as a type of "AI" built by genes in the way our AI would be built by us, and if we take into account the selfish gene idea, we realize that what we want out of an AI will be vastly different from what our genes want from us.

At that point we really have to ask ourselves what kinds of things emerge from pure intelligence and aren't simply artifacts of eons of evolutionary pressure.

And honestly, I believe that the majority of your concerns are based on the impression that AI would be just like us, and there's no reason to think that. If anything, I think the traits determined simply by our intelligence are few. For instance, we can trace our desire to express free will to a fear of not being in control of our death and thus failing our genes' imperative on us.

The ability to hold grudges is, again, part learning and part fear of failing that same imperative.

I can't easily think of any trait we have that isn't the product of this. Perhaps creative expression, enjoyment of music and such, but beyond these, I don't really see why everyone assumes machines would have our biological imperatives and desires if we don't put them there.
Suicidesoldier#1

Fanatical Zealot

Sentience will allow it to make whatever decision it chooses.

With that in mind anything can happen, including disdain for man.
Suicidesoldier#1
Sentience will allow it to make whatever decision it chooses.

With that in mind anything can happen, including disdain for man.
Yes, but we aren't sure that intelligence equals sentience or self-identity. Evolutionarily speaking, self-identity is a good trait to have. We don't know if it came from that or not, but we have reason to doubt it's a product solely of our intelligence.
Vannak

If we think of ourselves as a type of "AI" built by genes in the way our AI would be built by us, and if we take into account the selfish gene idea, we realize that what we want out of an AI will be vastly different from what our genes want from us. [...]

Thoughtful and logical words are a beautiful thing to see.

Basically, when it comes to the core construction of what makes us, we are biologically based, whilst AI would be electronically based. AI can be on the same intellectual level as humans, but a machine made from microchips and silicon cannot have the same cognitive features. Our constitution and purpose were shaped by adaptation and survivability. Though I can't say definitively what the purpose of all droids will be, we've discussed how they could range from learning language and empathy to helping people in need of assisted living.

Humanoid AI machines are useful, there are no constructs of free will in AI that we are oppressing, and everything comes down to the fact that we are trying to improve our lives with our creations.
Vannak
Yes, but we aren't sure that intelligence equals sentience or self-identity. Evolutionarily speaking, self-identity is a good trait to have. We don't know if it came from that or not, but we have reason to doubt it's a product solely of our intelligence.

Here I would like to make a distinction between sentience and self-identity.

Sentience refers to the ability to perceive and understand feelings.
Though AI cannot personally feel, they can perceive things like human facial expressions and body language that indicate one's emotions. Also, if they know enough about a person's situation, they can more or less understand that person's feelings.
Both of these would require a vast base of experience for comparison, but when the programs can recognize patterns of happiness and sadness, we don't have to worry about them asking whether they feel happy or sad themselves.
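To make the point concrete, here's a toy sketch (the features and numbers are entirely invented, stand-ins for the facial-landmark features a real vision system would extract) of how a program can label emotional cues without feeling anything itself: it just measures distance to stored patterns.

```python
import math

# Each "expression" is (mouth_curvature, brow_height) -- invented toy features.
PATTERNS = {
    "happy": (0.8, 0.6),
    "sad": (-0.7, -0.4),
}

def classify(expression):
    """Return the emotion whose stored pattern is closest to the input."""
    return min(PATTERNS, key=lambda mood: math.dist(expression, PATTERNS[mood]))

print(classify((0.7, 0.5)))    # happy
print(classify((-0.6, -0.3)))  # sad
```

The program "recognizes" happiness purely by pattern matching; nothing in it asks whether it is happy.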
I'm surprised that no one has brought up the human side of rights in regards to the new tech.

For example, it's probably not going to be considered murder to destroy another person's android, even if the victim felt the AI was part of his family. However, I think this kind of destruction of property should warrant a stricter punishment, because its human emulation means the attacker has shown little regard for the well-being of others.

Also, though it's morally bad to beat an android you own, I don't think we would have to pass legislation against that, except maybe to make it grounds to commit someone for intermittent explosive disorder or a similar anger-management issue.

Can anyone else think of legal issues in regards to droids, like perhaps the implications of marriage?
Suicidesoldier#1

Vannak
Yes, but we aren't sure that intelligence equals sentience or self-identity. Evolutionarily speaking, self-identity is a good trait to have. We don't know if it came from that or not, but we have reason to doubt it's a product solely of our intelligence.


Yeah, but this guy is talking about literally creating sentient robots. O_o

IMO AIs are good enough.
Luke_DeVari
Basically, when it comes to the core construction of what makes us, we are biologically based, whilst AI would be electronically based. AI can be on the same intellectual level as humans, but a machine made from microchips and silicon cannot have the same cognitive features. Our constitution and purpose were shaped by adaptation and survivability. [...]
I'd take it a step further than biologically based: we're reproduction based. In a general sense, that's our "purpose". Our AI are going to be problem-solving based. Just as the majority of our emotions are based around our success at reproduction, we could expect an AI trained and developed to solve problems to develop emotions around its fitness at problem solving.
Suicidesoldier#1

Vannak
I'd take it a step further than biologically based: we're reproduction based. In a general sense, that's our "purpose". Our AI are going to be problem-solving based. Just as the majority of our emotions are based around our success at reproduction, we could expect an AI trained and developed to solve problems to develop emotions around its fitness at problem solving.


But why make them sentient?

AIs are just enough to get the job done.
Suicidesoldier#1
But why make them sentient?

AIs are just enough to get the job done.

I don't know enough about the relationship between intelligence and sentience to say that one won't eventually cause the other. It's a somewhat reasonable assumption: for instance, a type of AI we have today, the automatic stock trader, has to account for its own behavior to trade accurately and precisely. That kind of introspection, in a sophisticated enough AI, may lead to sentience.
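The stock-trader point above can be sketched in a few lines. This is a hypothetical toy (all names and numbers invented, not a real trading system): an agent that estimates how its own order will move the price before deciding, a crude form of accounting for its own behavior.

```python
def expected_price(mid_price: float, my_order_size: float,
                   impact_per_share: float = 0.0001) -> float:
    """Estimate the fill price after the agent's own order moves the market."""
    return mid_price * (1 + impact_per_share * my_order_size)

def should_buy(mid_price: float, target_price: float, order_size: float) -> bool:
    # A naive agent compares the target to the current price; this one
    # compares it to the price its *own* trade is expected to create.
    return expected_price(mid_price, order_size) < target_price

print(should_buy(100.0, 101.0, 50))   # True: small order, own impact negligible
print(should_buy(100.0, 101.0, 500))  # False: large order erases its own edge
```

The interesting bit is that the model of the world has to include the agent itself, which is the seed of the introspection being discussed.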
Suicidesoldier#1

Vannak
I don't know enough about the relationship between intelligence and sentience to say that one won't eventually cause the other. It's a somewhat reasonable assumption: for instance, a type of AI we have today, the automatic stock trader, has to account for its own behavior to trade accurately and precisely. That kind of introspection, in a sophisticated enough AI, may lead to sentience.


Calculators are very intelligent, but are not sentient.

They can't really make decisions, and they don't think.

They just do.

Input commands give an output.

Just because it's complex doesn't mean it's sentient.

As soon as we give them the ability to think about things, we're in for trouble, even if they can't act on those thoughts.

Give them free thought but not free will? Why, it might as well be slavery.
Suicidesoldier#1
Calculators are very intelligent, but are not sentient.

They can't really make decisions, and they don't think.

Calculators are as intelligent... how?
Suicidesoldier#1

Vannak
Calculators are as intelligent... how?


Quick!

Perform 2,125,456,285 x 299,792,458!


And GO!
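The challenge above is exactly the kind of thing a machine answers instantly, which is the poster's point about raw computation. A one-liner settles it (Python integers are arbitrary precision, so the product is exact):

```python
# Raw computation: instant for a machine, tedious for a person.
a = 2_125_456_285
b = 299_792_458
print(a * b)  # 637195764051698530
```

Of course, as the replies note, speed at arithmetic says nothing about sentience.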
Suicidesoldier#1
Quick!

Perform 2,125,456,285 x 299,792,458!

And GO!

You lack a decent understanding of what intelligence is.
Suicidesoldier#1

Vannak
You lack a decent understanding of what intelligence is.


Computation, problem solving, reasoning (as in choosing which approach to use, etc.).

I'm pretty sure a calculator fits in there quite nicely.


The point is that raw computation power would not result in sentience.

Thinking, feeling, consciousness, they are different traits all their own.


Even something as advanced as an AI, such as in a video game, could not qualify as sentient.

That would require abstract and perhaps quite arbitrary thought: a thinking machine.
