
How do you feel about Artificial Intelligence?

I look forward to the advancements: 60.0% [9]
We can't trust machines: 20.0% [3]
(other: please post opinion): 20.0% [3]
Total Votes: [15]

Discuss why you are afraid of A.I. or why you support it

(Remember to vote in the thread poll, please!)

Hypothetically, imagine we had Androids (human-looking robots) with the computing capacity of the human brain.

A great deal of the public is afraid that if Artificial Intelligence advanced beyond a certain point, the robots would all turn against us.

On the other hand, few people understand how A.I. programming works; any such uprising is astronomically unlikely, even if we were able to let the machines alter their own programming.


As always, try to be respectful of others' opinions.
Also, there are a lot of undefined factors in this debate since we are dealing with a hypothetical, so bear that in mind.
Computing capacity isn't comparable to the human brain; it's the networks and complex structure that define the brain, not sheer capacity.

IRL Nerd

8,900 Points
  • Nerd 50
  • Forum Sophomore 300
  • Team Jacob 100
As DXnobodyX has stated, the human brain can't be compared to a computer in terms of raw capacity. A computer basically takes all the information into one place, the processor, and then cranks out the output.

The brain doesn't work in such a simplistic manner; there are numerous feedback and feed-forward mechanisms, and a computer simply cannot keep up with the sheer complexity of the human mind.

The same goes for why we can't just plug some interface into our brain and assume we can upload information and become a master of whatever overnight. That is not how the brain works. Until we unlock the various complexities of the brain, we have nothing to be afraid of. And let's be honest here, aren't we supposed to have A.I. robots already? We don't even know how attention works in the brain; if we can't answer that, A.I. is still hundreds of years in the future.
DXnobodyX
Computing capacity isn't comparable to the human brain; it's the networks and complex structure that define the brain, not sheer capacity.

I am aware of that. Thanks for clarifying, because most people don't realize that important difference. We could have a billion terabytes of RAM and still no A.I. without the proper structures within the system.

For the purposes of this discussion, we are assuming the machines' ability to process and use information is the same as human ability.
The_science_master
As DXnobodyX has stated, the human brain can't be compared to a computer in terms of raw capacity. A computer basically takes all the information into one place, the processor, and then cranks out the output.

The brain doesn't work in such a simplistic manner; there are numerous feedback and feed-forward mechanisms, and a computer simply cannot keep up with the sheer complexity of the human mind.

The same goes for why we can't just plug some interface into our brain and assume we can upload information and become a master of whatever overnight. That is not how the brain works. Until we unlock the various complexities of the brain, we have nothing to be afraid of. And let's be honest here, aren't we supposed to have A.I. robots already? We don't even know how attention works in the brain; if we can't answer that, A.I. is still hundreds of years in the future.


Just a reminder: this discussion is intended to focus on the social effects of having intelligent machines. While I realize the technical aspects are important, I don't want us to get hung up on them.

Sorry if I didn't state my intention well.

I want to acknowledge that you make some very good observations in your post.

Wouldn't we all be a bit happier if we had A.I. companions? ((Sure beats having a girlfriend))
[I know I'm going to get slammed for that comment, but let the opinions flow]

Luke_DeVari
The_science_master
As DXnobodyX has stated, the human brain can't be compared to a computer in terms of raw capacity. A computer basically takes all the information into one place, the processor, and then cranks out the output.

The brain doesn't work in such a simplistic manner; there are numerous feedback and feed-forward mechanisms, and a computer simply cannot keep up with the sheer complexity of the human mind.

The same goes for why we can't just plug some interface into our brain and assume we can upload information and become a master of whatever overnight. That is not how the brain works. Until we unlock the various complexities of the brain, we have nothing to be afraid of. And let's be honest here, aren't we supposed to have A.I. robots already? We don't even know how attention works in the brain; if we can't answer that, A.I. is still hundreds of years in the future.


Just a reminder: this discussion is intended to focus on the social effects of having intelligent machines. While I realize the technical aspects are important, I don't want us to get hung up on them.

Sorry if I didn't state my intention well.

I want to acknowledge that you make some very good observations in your post.

Wouldn't we all be a bit happier if we had A.I. companions? ((Sure beats having a girlfriend))
[I know I'm going to get slammed for that comment, but let the opinions flow]
Sorry about ignoring the intention of the thread. I should know better.

However, having a companion that is essentially not human is like having a doll for a wife or husband. These kinds of things exist already, and if you want to see an interesting documentary, just google Guys and Dolls.
However, having a companion that is essentially not human is like having a doll for a wife or husband. These kinds of things exist already

The hypothetical situation is that the androids have human-level intelligence and ability. I am assuming a situation in which they are not just lifeless dolls, or poor imitations with stuttering movements and monotone voices.
Assume there was a kind of android you could hold a conversation with, one that would listen to you and be emotionally responsive (that is, responding appropriately to your emotions, since they have no emotions of their own). Whilst their behavior would be different from humans' (almost entirely responsive, with no spontaneous action) and they aren't defined as living, would a robot with social capabilities not make a good companion?

tl;dr: If androids were capable of making a social connection with you, couldn't they also make emotional connections? (Thus justifying marriage)

By the way, Science Master, that's a great suggestion of a documentary, though it misses the central issue of this thread, which deals with sentient machines.

Fanatical Zealot

Why would we make Androids?

This endeavor would be useless.
What capacities are we positing for these machines again? We already have programs fully capable of learning to hold rudimentary conversations, or not so rudimentary depending on how much you pour into them. Nobody's going to marry Cleverbot, although that might be because Cleverbot tends to be an a*****e.
Are we positing any semblance of self-awareness? Are we positing any general learning capabilities? Are we positing any notion of conscious-subconscious processing? Is there an internal utility routine? At how many levels? How many of them are mutable?
Why are we bothering to converse with these things again? Don't they have better things to do? Or are we simply building artificial slaves here? In which case why give them the ability to think at all?
Layra-chan
What capacities are we positing for these machines again? We already have programs fully capable of learning to hold rudimentary conversations, or not so rudimentary depending on how much you pour into them. Nobody's going to marry Cleverbot, although that might be because Cleverbot tends to be an a*****e.
Are we positing any semblance of self-awareness? Are we positing any general learning capabilities? Are we positing any notion of conscious-subconscious processing? Is there an internal utility routine? At how many levels? How many of them are mutable?
Why are we bothering to converse with these things again? Don't they have better things to do? Or are we simply building artificial slaves here? In which case why give them the ability to think at all?

There are a lot of questions there, so this is going to take a few posts to respond to.

For the purposes of this discussion, the machines have all the cognitive abilities of human beings. The main difference is that they do not have emotions of their own; they rely on outside cues to trigger a social response. This means they do not have self-consciousness as we understand it, though we could say they are socially aware of others' emotions, in that the robots know the proper responses to facilitate the user's general well-being and happiness.

The robot sees that you have a sad facial expression, recognizes that you have a negative emotion and probably need to talk about it, and might ask you, "What's wrong?" The major difference from humans is that the robot will never make emotional demands of its own; a robot never has a bad day unless we program it to "feel" that way. Emotions are a lens through which behavior is filtered, so it's advantageous to program them with a positive mindset that ultimately encourages well-being.
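To make the idea concrete, here is a toy sketch of that cue-driven behavior: the android has no feelings of its own, it just maps an observed emotional cue to a canned, socially appropriate reply. The cue names and responses are my own illustrative examples, not any real system.

```python
# Toy cue-to-response table: the "android" reacts to outside emotional
# cues rather than generating emotions of its own.
RESPONSES = {
    "sad":     "What's wrong?",
    "happy":   "You seem cheerful today! What's the good news?",
    "angry":   "You look upset. Do you want to talk about it?",
    "neutral": "How has your day been?",
}

def respond(observed_cue: str) -> str:
    # Fall back to a safe, neutral prompt for cues it can't classify
    return RESPONSES.get(observed_cue, RESPONSES["neutral"])

print(respond("sad"))       # What's wrong?
print(respond("confused"))  # How has your day been?
```

A real system would of course need a far richer model than a lookup table, but the structure (observe cue, select response that promotes the user's well-being) is the same.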
(Response 2 to Layra-chan)

By the way, Cleverbot is basically a mirroring program: it takes humans' input responses and spits them back out nearly randomly. That's why the bot sounds so ridiculous and even rude at times. It's not thinking, and there isn't even a near-human way it processes conversation or recognizes meaning. The Turing test is an A.I. test where a person chats with an unseen partner without knowing whether it is a program or a human. The goal is for the bot to be indistinguishable from a human in conversation, so that the tester cannot tell the difference between the bot and a human on the line.
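The mirroring idea can be sketched in a few lines. This is a deliberately crude toy, not Cleverbot's actual design: it just replays things humans previously said to it, preferring past lines that share words with the new input.

```python
import random

class MirrorBot:
    """Toy 'mirroring' chatbot: no understanding at all, it only
    replays utterances humans previously typed at it."""

    def __init__(self):
        self.heard = []  # every human utterance ever seen

    def reply(self, message):
        self.heard.append(message)
        if len(self.heard) == 1:
            return "Hello!"  # nothing to mirror yet
        # Prefer a past human line sharing the most words with the input
        words = set(message.lower().split())
        past = self.heard[:-1]
        best = max(past, key=lambda h: len(words & set(h.lower().split())))
        if not words & set(best.lower().split()):
            best = random.choice(past)  # no overlap: answer at random
        return best

bot = MirrorBot()
print(bot.reply("Hi there"))             # Hello!
print(bot.reply("Do you like robots?"))  # mirrors an earlier human line
```

Because the bot's "knowledge" is just other people's lines fed back out of context, it inevitably produces the ridiculous (and sometimes rude) non sequiturs described above.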
No chatbot has passed the Turing test, and the bots that currently exist are extremely limited compared to humans' ability to converse. I would argue that, whilst you may be able to ask bots like Siri for information they are programmed to retrieve, there is no way to hold even a relatively simple dialogue with her. The main problem with the Turing test is that current A.I. does not understand context or reference. In a conversation, phrases with multiple meanings are hard to interpret as intended, and the bots cannot follow the holistic progression of a conversation to understand implications that humans would get without thinking twice.

For bots to converse on a human level, as this discussion's premise requires for exploring social effects, they would have to understand emotion and implication in some way. Their reference for emotion lies entirely outside themselves; think of them as purely empathic in this regard. As a final note, we can control and limit the behaviors of androids so they don't harm anyone directly, as they would never get angry and beat someone on impulse. (Though the harm a user could command them to do is another issue entirely)
Layra-chan
Are we positing any general learning capabilities? Is there an internal utility routine? At how many levels? How many of them are mutable?
Why are we bothering to converse with these things again? Don't they have better things to do? Or are we simply building artificial slaves here? In which case why give them the ability to think at all?

(Response 3 to Layra-chan)
Learning applies most prominently to social functioning. A human-level android would need some ability to learn to adapt its actions and words to benefit its user's well-being (as identified by positive facial expressions and positive social interactions, for example). Largely, this would be based on the responses the droid receives from its actions, so it matches positive responses to the actions that produced them. To acquire any such understanding, however, it would need a vast amount of experience across several contexts before it could understand social relations. The point is that the droid needs to learn the connection between its outgoing words and behaviors and the resulting emotions, as signified by observable reactions.
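That learning loop (try an action, observe the reaction, prefer what got positive responses) can be sketched as a simple bandit-style learner. Everything here is illustrative: the action names, the idea of a numeric "reaction score" from reading facial expressions, and the epsilon-greedy strategy are my assumptions, not a real android architecture.

```python
import random
from collections import defaultdict

class SocialLearner:
    """Toy sketch of learning from social feedback: track the average
    reaction score each action has received, and mostly pick the
    best-scoring action while occasionally exploring."""

    def __init__(self, actions, explore=0.1):
        self.actions = actions
        self.explore = explore                # chance of trying a random action
        self.total = defaultdict(float)       # summed reaction scores per action
        self.count = defaultdict(int)         # times each action was tried

    def choose(self):
        untried = [a for a in self.actions if self.count[a] == 0]
        if untried:
            return untried[0]                 # try everything at least once
        if random.random() < self.explore:
            return random.choice(self.actions)
        # Otherwise exploit: highest average reaction score so far
        return max(self.actions, key=lambda a: self.total[a] / self.count[a])

    def observe(self, action, reaction_score):
        # reaction_score: e.g. +1 for a smile, -1 for a frown
        self.total[action] += reaction_score
        self.count[action] += 1
```

After enough observations across contexts, the learner's choices converge toward whatever its user visibly responds well to, which is exactly the matching of actions to positive responses described above.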

Note: the vast majority of human emotions are displayed through our body language; think of the androids in this discussion as being able to read body language like experts in order to derive underlying emotions.
Also, as this applies to the real world, such a machine would be better at picking up on details and identifying what a person is feeling, making it useful to law enforcement when trying to determine whether someone is lying. A robot will eventually be able to observe and process body-language cues better than a human being.
Layra-chan
Is there an internal utility routine? At how many levels? How many of them are mutable?
Why are we bothering to converse with these things again? Don't they have better things to do? Or are we simply building artificial slaves here? In which case why give them the ability to think at all?

(Response 4) Explaining why androids are useful
First, I have to apologize to Layra-chan, because I don't understand what she means by "internal utility routine"; it's not a common phrase in my experience (and I don't get how it relates to different levels or mutability if it is a routine).
If you're talking about domestic use of androids, like daily chores, then that is certainly possible.
Since this topic covers the issue of utility, consider the example of assisted living for the elderly. It's a crummy job for a human to take care of old folks who have difficulty functioning in daily life. A droid, however, would not mind tasks that humans see as quite arduous, like helping an old person bathe and dress. On top of that, a sense of positive human contact is very important for maintaining sanity and overall well-being. Consider this: if we have made human-level androids, then we have made the cure for loneliness. That would benefit the psychological health of many who interact with the droids, the elderly in this example. They would also help socially reclusive people find much more meaning in life through that sense of social connection which all human beings need on some level in order to be healthy. Is it not reason enough to improve the well-being of these often quite distressed individuals who live in relative isolation?

As for the utility of researching A.I. in general, is it not enough that it expands our programming capabilities to make these vastly more complex computers? Also, suppose we did have computers that could think on the level of humans. Technology could then advance exponentially from the progressive work the droids do in research and productivity. Of course, humans would need to guide the development of this technology, but it's good to have humans interact with intelligent machines; we need to integrate them into society rather than seeing them as a distant entity. Otherwise we won't be able to utilize their benefits as effectively; societal integration through social machines makes for better human-robot relationships.

On the economic side, as a marketable product, it is undeniable that there is a profitable demand for lifelike dolls for domestic use when it comes to simulating feelings of companionship and sex.

Greedy Consumer

they can only do what they are programmed or taught to do.

So I don't see how you could fear it in a rational manner unless someone told it to do something stupid.
Though this thread is mainly about the social effects of advanced A.I., there are economic benefits to investment in such research. I already gave the example of how technology could advance exponentially from having intelligent machines that can work as effectively as, or more effectively than, human beings.
I hope that (along with the previous responses I've made) clears up the question of "Why make androids?"

If there is any continuation of this question, please explain why it would be useless, as I have already given examples showing that it is useful, rendering empty statements of opinion invalid.

Also, I do acknowledge that in real life there is the possibility that resources put toward A.I. research ultimately come to nothing. It may be that we are unable to make recognizably intelligent machines with our current level of technology, and I recognize the waste that would entail. However, there are mounting advancements in the field of A.I. research, especially as computer technology becomes more affordable and more capable of handling large processing loads, in line with Moore's law. I also acknowledge that raw computing ability does not directly translate to better A.I., but it still helps for handling vast amounts of information, allowing for later data compression and streamlining of program functions.
Furthermore, I am aware that our computers are finite in capability; however, the point is that we have not reached the physical limits of computation. There is room for improvement, and we cannot decisively say it is impossible if we do not try, just as we cannot call it a useless endeavor when there are demonstrable applications, as I have previously explained.

As for the physical side of the design, robotics is not exactly cheap research either, but it has come a long way in terms of mobility, limb-replacing prosthetics, and enhancement exoskeletons like the HAL suit. We even have a way of simulating touch through advanced tactile sensors that allow a robotic hand to pick up something as delicate as an egg without breaking it. Between the two fields, robotics has seen the more solid investment, which brings up the point that we could have a droid capable of looking human without necessarily thinking or acting like one.
