

How do you feel about Artificial Intelligence?

I look forward to the advancements: 60.0% (9 votes)
We can't trust machines: 20.0% (3 votes)
Other (please post your opinion): 20.0% (3 votes)

Total votes: 15

Ryu Kei Shou Kawazu
They can only do what they are programmed or taught to do.

So I don't see how you could rationally fear it unless someone told it to do something stupid.

Thank you for realizing this! I also have the same opinion.

It's kind of a bother that we have a tradition of movies with robots trying to destroy humans or take over the world... well, Hollywood isn't known for being logical or in tune with reality.
In the context of computer science AI, a utility function or routine is a way to tell the program "this is your goal". It's the basic punishment/reward system for an AI, allowing it to learn that certain decisions lead to good things and certain decisions lead to bad things. It's vaguely analogous to the pleasure-centers in our own brains.
The reason this is important is that the utility function is what determines what the AI does once it has been trained. If it gets a million points for navigating its way through a maze, then it will learn to get through mazes, and it will try its best to get through any maze it comes across. If it gets points for bathing old people, then it will bathe old people. If it gets infinity points for killing all humans, well, guess what happens next.
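To make the maze example concrete, here is a minimal, purely illustrative sketch: a tiny corridor "maze" and a tabular Q-learning loop. All of the numbers and names are made up; the only point is that the reward() function alone determines what behavior gets learned.

```python
# Toy sketch of a utility/reward function shaping learned behavior,
# using tabular Q-learning on a hypothetical 1-D corridor "maze".
import random

N_STATES = 6             # positions 0..5; position 5 is the exit of the "maze"
ACTIONS = [-1, +1]       # step left or step right
GOAL_REWARD = 1_000_000  # "a million points" for getting out

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def reward(state):
    # The utility function: this one definition is what decides
    # what the agent ends up "wanting" to do.
    return GOAL_REWARD if state == N_STATES - 1 else -1  # small cost per step

def train(episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1):
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # epsilon-greedy: mostly exploit what was learned, sometimes explore
            if random.random() < epsilon:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: Q[(s, act)])
            s_next = min(max(s + a, 0), N_STATES - 1)
            # Standard Q-learning update: nudge the estimate toward the observed
            # reward plus the discounted value of the best next action.
            best_next = max(Q[(s_next, b)] for b in ACTIONS)
            Q[(s, a)] += alpha * (reward(s_next) + gamma * best_next - Q[(s, a)])
            s = s_next

train()
# After training, the learned policy is "always step right", purely because
# that is what the reward function happened to pay for.
print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)})
```

Swap in a different reward() (points for bathing old people, points for something terrible) and the exact same learning loop happily optimizes for that instead.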
Now the question is: who determines for what actions/outcomes the AI gets points? Is it programmed into the AI? Is there a guy at a control board somewhere watching the android and pressing a button every time the android does something good, and pressing a different button every time the android does something bad?
If there are multiple levels of processing, as there are in the human brain, does each level of decision making have its own utility function, with the lower levels doing things like "points for not running out of power" and the upper levels doing more abstract things like "points for making master happy"? And since the point of such levels is to give the AI more flexibility in its behavior, will it be able to set its own upper-level goals, the same way we can decide on short-term plans?
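One toy way to picture the layering (purely hypothetical, and exactly the kind of design choice I'm asking about): a low-level utility that guards the basics and can veto a high-level utility that ranks the more abstract goals.

```python
# Hypothetical sketch of layered utility functions; nothing here reflects
# a real android design, it just shows one way the levels could interact.
from dataclasses import dataclass

@dataclass
class WorldState:
    battery: float          # 0.0 .. 1.0
    owner_happiness: float  # 0.0 .. 1.0

def low_level_utility(state):
    # "Points for not running out of power."
    return state.battery

def high_level_utility(state):
    # "Points for making master happy."
    return state.owner_happiness

def choose_action(state, actions):
    # actions maps an action name to the state it is predicted to produce.
    # The low level acts as a hard constraint; the high level ranks the rest.
    safe = {name: s for name, s in actions.items() if low_level_utility(s) > 0.2}
    candidates = safe or actions   # if nothing is "safe", fall back to everything
    return max(candidates, key=lambda name: high_level_utility(candidates[name]))

# With a nearly flat battery, recharging wins even though fetching tea
# would score higher on the happiness objective.
now = WorldState(battery=0.15, owner_happiness=0.5)
options = {
    "recharge":  WorldState(battery=0.90, owner_happiness=0.5),
    "fetch_tea": WorldState(battery=0.05, owner_happiness=0.8),
}
print(choose_action(now, options))   # -> "recharge"
```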

The reason I'm asking these questions is that if we don't allow for multiple processing levels, with the upper levels able to be changed by the lower levels, then we don't have something with the cognitive ability of a human. I'm not even sure we have the cognitive capacity of a dog.

On the other hand, if we do have something with the cognitive capacity of a human being, then why are we making it do menial tasks? Certainly there are plenty of people who have no problem with not utilizing their full cognitive abilities, and an AI without emotion would in fact be better at thinking than your standard human.
I have severe ethical qualms about creating something that can think as well as a human being for the purpose of slavery. Are you really suggesting we make something sentient and not give it any choice in its fate? And sure, if they don't have emotions then we don't have to worry about an angry robot uprising (although a calm, logic-based robot uprising is still on the table), but just because they don't feel pain or sadness or anger at this blatant abuse doesn't make it any better; in fact it makes it worse, taking advantage of those who don't know that something's wrong.
This kind of self-centered, "let's make thinking machines to do the tasks we don't want to" smacks of sociopathy. Anything with the capacity to learn has the capacity to choose, and denying it that right is slavery no matter how you spin it.
Frankly, this entire conversation is disgusting.
Layra-chan

Now the question is: who determines for what actions/outcomes the AI gets points? Is it programmed into the AI? Is there a guy at a control board somewhere watching the android and pressing a button every time the android does something good, and pressing a different button every time the android does something bad?
...
The reason I'm asking these questions is that if we don't allow for multiple processing levels, with the upper levels able to be changed by the lower levels, then we don't have something with the cognitive ability of a human. I'm not even sure we have the cognitive capacity of a dog.

Okay, I understand. I've heard the same thing termed "objective-based learning," which, now that I think about it, is basically the computational equivalent of cognitive-behavioral therapy: setting goal-oriented rewards and punishments for behaviors.
For determining these actions, we would need a massive amount of input on how positive or negative an android's actions are in various situations. Ideally, with the help of user feedback, the program would learn how to guide its actions.
Keep in mind that communication between the user and the android is important in creating the personality of a social robot (user control allows for a bit of leeway in making individual characteristics, though I still think some limitations are necessary from a designer's viewpoint, as you would want to prevent malicious people from deliberately programming angry droids, to give an example). What I'm saying is that the most useful option is to have the AI make inferences about positive and negative behavior; the user would be able to observe these inferences through a display such as a mobile device (the equivalent of an app on your iPhone that lets you know what your droid is perceiving as good or bad), and then the user would be able to input his own reward or punishment on a gradient scale to control behavioral tendencies.

The ideal with intelligent AI is not simply to give reward or punishment, but for the AI to understand why something is good or bad. In that sense, one could compare the social actions of an android to the recommendations a movie site makes based on your previous selections and ratings: the program identifies general trends from feedback, and the user tweaks the details to his or her tastes. This brings up the issue of user responsibility, but that's another matter entirely; the user more or less becomes the conscience of the android (and I'm not sure every owner would be a paragon of Jiminy Cricket).
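To sketch what I mean (with completely made-up behavior names, numbers, and blending rule; the app and the gradient scale are just assumptions for illustration):

```python
# Rough sketch of the feedback loop described above: the droid keeps a
# per-behavior "goodness" score it inferred on its own, and the owner can
# nudge that score on a gradient scale (-1.0 to +1.0) through a companion app.
LEARNING_RATE = 0.3   # how strongly one piece of owner feedback shifts a score

# What the droid currently believes, which the app would display.
inferred_scores = {
    "greet_visitors":   +0.6,
    "vacuum_at_3am":    +0.2,   # the droid thinks this is mildly good
    "interrupt_dinner": -0.1,
}

def apply_owner_feedback(behavior, feedback):
    """Blend the owner's gradient-scale rating into the droid's own estimate."""
    feedback = max(-1.0, min(1.0, feedback))          # clamp to the scale
    old = inferred_scores.get(behavior, 0.0)
    new = (1 - LEARNING_RATE) * old + LEARNING_RATE * feedback
    inferred_scores[behavior] = new
    return new

# The owner checks the app, sees "vacuum_at_3am" rated positively,
# and corrects it firmly downward.
apply_owner_feedback("vacuum_at_3am", -0.9)
print(inferred_scores["vacuum_at_3am"])   # drifts toward the owner's rating
```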

There's an issue of non-determinism (not being able to predict the outcome) when it comes to allowing AI to set their own goals. However, we don't exactly need them to do so if humans can guide the bot's objectives instead. Again, the bot identifies general trends, and the user tweaks the details where needed. It's like communicating your likes and dislikes to a new friend; we simply can't write a program that makes everyone happy. We need feedback and interaction.
It's not clear-cut cognition, so I'm unsure whether we can define that as being as smart as a dog or not... it's a situation distinct from that of biological organisms in matters of control.
Layra-chan

On the other hand, if we do have something with the cognitive capacity of a human being, then why are we making it do menial tasks? Certainly there are plenty of people who have no problem with not utilizing their full cognitive abilities, and an AI without emotion would in fact be better at thinking than your standard human.
I have severe ethical qualms about creating something that can think as well as a human being for the purpose of slavery. Are you really suggesting we make something sentient and not give it any choice in its fate? And sure, if they don't have emotions then we don't have to worry about an angry robot uprising (although a calm, logic-based robot uprising is still on the table), but just because they don't feel pain or sadness or anger at this blatant abuse doesn't make it any better; in fact it makes it worse, taking advantage of those who don't know that something's wrong.
This kind of self-centered, "let's make thinking machines to do the tasks we don't want to" smacks of sociopathy. Anything with the capacity to learn has the capacity to choose, and denying it that right is slavery no matter how you spin it.
Frankly, this entire conversation is disgusting.

The purpose of creating AI to do menial tasks is that it fills the need for those jobs that "someone has to do" but that are generally not enjoyable.
For the people who want to take up menial jobs that require little specialization, though there may be more competition from robots, there's nothing explicitly stopping them from continuing that line of work (at the very least, it is extremely unlikely for androids to become that prevalent for a very long time). When it comes to finding a job, that has more to do with the national and global economy, though I cannot predict the effects that droids will have on the world market because I cannot foresee how people will trade and employ androids.
I honestly don't know if these robots will run humans out of low-education jobs. Ideally, we would want AI to be able to take care of most of the work in society so that we could focus on doing things we really enjoy. For example, I would love to be a writer full-time and publish books for a living, but that is an unstable source of income, so I am in college to get a degree for a more solid career. I think humans would be freer to be artistic and to enjoy life rather than strain at a 9-to-5 job.

The issue of androids compared and contrasted to slavery deserves its own post.
Layra-chan

I have severe ethical qualms about creating something that can think as well as a human being for the purpose of slavery. Are you really suggesting we make something sentient and not give it any choice in its fate? And sure, if they don't have emotions then we don't have to worry about an angry robot uprising (although a calm, logic-based robot uprising is still on the table), but just because they don't feel pain or sadness or anger at this blatant abuse doesn't make it any better; in fact it makes it worse, taking advantage of those who don't know that something's wrong.
This kind of self-centered, "let's make thinking machines to do the tasks we don't want to" smacks of sociopathy. Anything with the capacity to learn has the capacity to choose, and denying it that right is slavery no matter how you spin it.
Frankly, this entire conversation is disgusting.

Intelligent Androids and Techno-Slavery
I would not consider the use of androids to be "slavery" any more than it's slavery to demand that a toaster and coffee maker work for you every time you use them.
But the objection is that when a machine appears to have intelligence, it becomes an atrocity to control its fate. First, I have to say that I would be overjoyed if people showed such concern and caring for androids, because wanting equality for them shows how we would want to accept them into society. The problem is that we're trying to treat them like humans, but their cognition is fundamentally different from that of human beings.
The reason I believe it is not slavery is that the nature of their actions comes from social interaction. They have no personal emotions and therefore no will of their own. They don't have a fate except for what their user commands of them. Though it is paradoxical to put the words "ethical" and "slavery" next to one another, we must remember that these creations were deliberately designed to serve. Though they look and act human, they do not have human emotional needs or desires, just as their physical needs are different in the sense that they require a power outlet instead of a Big Mac and fries to give them energy.

I understand that slavery is a disgusting subject to talk about.
Yes, we have experienced historical atrocities regarding human slavery. Yes, that was a terrible thing that we should never do again. (And yes, slavery is still prevalent in less-developed places around the world.) However, androids are not human, and they function differently as mechanical beings of a social nature.
Now here's something to spin this discussion on its head: if we program a droid to find meaning through serving a master and achieving positive scores for its actions in making its master happy, riddle me this... does servitude not give this creation meaning in its life? After all, it's serving the purpose it was made for...

When it comes to choices, the objective is to program them only to be able to make beneficial choices in whatever action they are specifically designed to do. There is nothing outside their realm of choice that constitutes an unfulfilled desire, because a human would have to program that desire, or allow a droid to program it for itself, which is impractical and dangerous; in my opinion, giving them the capacity for that kind of negativity would be the truly unethical action.
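In code terms, the most literal version of "only able to make beneficial choices" is just a fixed, designer-approved menu of actions; here is a hypothetical sketch (nothing here is a real design):

```python
# The droid can only pick from a fixed whitelist of actions, so there is no
# mechanism by which it could want, or do, anything outside that set.
from enum import Enum

class Action(Enum):            # the entire universe of things this droid can do
    TIDY_ROOM = "tidy_room"
    REMIND_MEDICATION = "remind_medication"
    CHAT = "chat"
    RECHARGE = "recharge"

def pick_action(scores):
    # scores: {Action: estimated benefit}. Whatever the scoring says, only
    # members of Action are even representable, so "unfulfilled desires"
    # outside this menu simply never arise.
    return max(Action, key=lambda a: scores.get(a, 0.0))

print(pick_action({Action.CHAT: 0.9, Action.TIDY_ROOM: 0.4}))   # Action.CHAT
```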
Layra-chan

A calm, logic-based robot uprising is still on the table... just because they don't feel pain or sadness or anger at this blatant abuse doesn't make it any better; in fact it makes it worse, taking advantage of those who don't know that something's wrong.
This kind of self-centered, "let's make thinking machines to do the tasks we don't want to" smacks of sociopathy. Anything with the capacity to learn has the capacity to choose, and denying it that right is slavery no matter how you spin it.
Frankly, this entire conversation is disgusting.

Droids would not revolt unless we designed them with that capacity, which would be a lot of effort towards a self-defeating purpose.

If you wish to further your argument, I request that you tell me how it is abuse if their designed purpose in existence is to serve their masters.
As humans, we don't have a clear purpose, which is a big moral debate for us. But in this case, droids were created for the benefit of humankind, and they love to make humans happy (with that "love" defined by making the creation of joy the bots' objective).

Again, these robots do not have personal emotions. Humans have needs they are born with; droids were made to help fulfill those needs, both physical and emotional. Droids make people happy, and they find meaning in fulfilling that objective.

I would ask for a counter-example demonstrating how this arrangement is detrimental to either party.

BTW, I respect your opinion and I completely understand why you see the use of sentient robots as unethical. I see how techno-slavery would be seen as sociopathic because of the abnormality of using humanoid beings as tools, and how apparently harmful and cruel that act seems. However, if there is no victim, how could there be a crime? Thus, I maintain that there is no sociopathy involved. The issue of control is important in human psychology; however, droids have no such need to feel in control. I am not doubting that some users will feel a power trip, but the idea is to have androids allow humans to take more control of their own lives by helping in various ways.

To deny the world the benefits of that help on the basis of paranoia or a presumed morality is, I would argue, itself the irrational position; tell me why this is not the case and I will retract that claim. (Note: I'm not saying you're nuts, but I am saying that it would be quite unrealistic to go to the extreme and limit AI progress on "moral" grounds of something like freedom or equality for the machines. In reality, this will not stop people from protesting, and we have every right to protest anything that we don't like. I'm just asking you to think about the reasons why you would protest.)
I'm just sharing what knowledge and theories I have to help raise awareness about the possibilities of AI. Also, I really do appreciate your thoughtful responses and additions to this discussion.

chromealias
I don't know if we'll ever see sentient machines, due to tech and other limitations. If sentient machines do pop up, how they act toward us and the world will depend on their coding. We might give them reasoning and a degree of emotions, since those are useful.

Seems messed up to make a sentient robot just for labor or curiosity, since we're just biological machines and some future weird-tech robot would be made of metals designed to mimic what we do in a better/worse/same way. It'd be weird if we coded their needs to be "work for humans, talk to humans, work some more, and find energy" rather than "eat, sleep," and so on like people.

Fanatical Zealot

Luke_DeVari
Though this is mainly about the social effects of advanced AI, there are economic benefits to investment in such research. I already gave the example of how technology could advance exponentially from having intelligent machines that can work as effectively as, or more effectively than, human beings.
I hope that (along with the previous responses I've made) clears up the question of "Why make androids?"

If there is any continuation of this question, please explain why it would be useless, as I have already given examples to show that it is useful, rendering empty statements of opinion invalid.

Also, I do acknowledge that in real life there is the possibility that resources put towards AI research ultimately prove fruitless. It may be that we are unable to make recognizably intelligent machines with our current level of technology, and I recognize how wasteful that would be. However, there are mounting advancements in the field of AI research, especially as computer technology becomes more affordable and more capable of handling large processing loads, in line with Moore's law. I also acknowledge that raw computing power does not directly translate into better AI, but it still helps for handling vast amounts of information, allowing for later data compression and streamlining of program functions.
Furthermore, I am aware that our computers are finite in capability; however, the point is that we have not reached the physical limits of computation. There is room for improvement, and we cannot decisively say it is impossible if we do not try, just as we cannot say it is a useless endeavor when there are demonstrable applications, as I have previously explained.

And when it comes to the robotics side, the physical aspect of the design, though it's not exactly cheap research either, it has come a long way in terms of robotic mobility, limb-replacing prosthetics, and enhancement exoskeletons like the HAL suit. We even have a way of simulating touch through advanced tactile sensors that allow a robotic hand to pick up something as delicate as an egg without breaking it. Between the two, robotics would be the more solid investment, which brings up the point that we could have a droid capable of looking human without necessarily thinking or acting like a human.


Why make Androids?

Why not just make a highly effective toaster if you want toast?


It's a horrible idea.

You'll make an individual with all these arbitrary qualities mimicking a human brain and then what?


What purpose would they serve- to be as humans, simply to exist?

A human-like body may help those who are missing limbs, having difficulty moving, and a number of other things.


They may make great robots to build things, possibly with better precision and craftsmanship even than human hands.

But giving them sentience- there's no point in that, especially if we expect them to serve mankind.
Luke_DeVari

Wouldn't we all be a bit happier if we had A.I. companions? ((Sure beats having a girlfriend))
[I know I'm going to get slammed for that comment, but let the opinions flow]


That's what they said about sex dolls, but it's still a niche market. Furthermore, do you want a slave or a life companion? An A.I. couldn't support its side of the relationship.
Layra-chan

I have severe ethical qualms about creating something that can think as well as a human being for the purpose of slavery. Are you really suggesting we make something sentient and not give it any choice in its fate? And sure, if they don't have emotions then we don't have to worry about an angry robot uprising (although a calm, logic-based robot uprising is still on the table), but just because they don't feel pain or sadness or anger at this blatant abuse doesn't make it any better; in fact it makes it worse, taking advantage of those who don't know that something's wrong.
This kind of self-centered, "let's make thinking machines to do the tasks we don't want to" smacks of sociopathy. Anything with the capacity to learn has the capacity to choose, and denying it that right is slavery no matter how you spin it.
Frankly, this entire conversation is disgusting.


I agree completely, and

Layra-chan
Anything with the capacity to learn has the capacity to choose, and denying it that right is slavery no matter how you spin it.


That's my favorite part.

Fanatical Zealot

Luke_DeVari

Wouldn't we all be a bit happier if we had A.I. companions? ((Sure beats having a girlfriend))
[I know I'm going to get slammed for that comment, but let the opinions flow]

No, and it's primarily because I'm not a psychopath.
Suicidesoldier#1

Why make Androids?

Why not just make a highly effective toaster if you want toast?


It's a horrible idea.

You'll make an individual with all these arbitrary qualities mimicking a human brain and then what?


What purpose would they serve- to be as humans, simply to exist?

A human-like body may help those who are missing limbs, having difficulty moving, and a number of other things.


They may make great robots to build things, possibly with better precision and craftsmanship even than human hands.

But giving them sentience- there's no point in that, especially if we expect them to serve mankind.

I don't know if I made myself clear enough, but creating a humanoid robot allows for a degree of control in shaping social sentiment. I gave the example of old people because it's sometimes difficult for them to get social contact and support in their later years, especially if they live alone.

Perhaps another example is called for. Consider the social recluse: a guy who feels isolated from society may feel depressed, even suicidal, but if he were able to feel a connection with a social robot, he would be able to enjoy his life more. I'm not saying this is the strongest reason for making droids, but I am saying there is definitely a market in this regard.

I posit that droids can be sentient, but in a way that leaves them willingly servile. Sentience is the result of understanding, from a holistic perspective, how one's actions (and even one's very existence) affect the things around them. Sentience is needed for a social robot to be able to react to social cues as a human does and to learn as a human does. The point with social responses is that language is so vast in its possibilities for discussion that we can't write an "if, then" statement for every situation.
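To show why that matters, here's a toy illustration (hypothetical code, not a proposal for how the droid would actually work) of how quickly hand-written rules run out:

```python
# A tiny rule-based responder: even a trivial paraphrase slips past it,
# which is why the droid has to generalize from experience instead of
# relying on an "if, then" entry for every possible sentence.
RULES = {
    "it's been a rough day": "I'm sorry to hear that. Want to talk about it?",
    "i'm tired": "Maybe take a short rest; I'll handle the chores.",
}

def rule_based_reply(utterance):
    return RULES.get(utterance.strip().lower(), "(no rule matched)")

print(rule_based_reply("It's been a rough day"))      # matches a rule
print(rule_based_reply("Today was really rough..."))  # same meaning, no rule
```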

Sentience of this kind is based on outside stimuli and has no internal motivation, except for the programmed directives, which should include, for example, creating positive experiences.
chromealias
I don't know if we'll ever see sentient machines, due to tech and other limitations. If sentient machines do pop up, how they act toward us and the world will depend on their coding. We might give them reasoning and a degree of emotions, since those are useful.

Seems messed up to make a sentient robot just for labor or curiosity, since we're just biological machines and some future weird-tech robot would be made of metals designed to mimic what we do in a better/worse/same way. It'd be weird if we coded their needs to be "work for humans, talk to humans, work some more, and find energy" rather than "eat, sleep," and so on like people.


You bring up a good point that I would like to bring to attention for everyone else.

The needs of droids are fundamentally different than that of humans.

If this is true of their physical nature, would it not also be possible for the very foundations of their psychological or spiritual needs to be different as well? I use the term "spiritual" loosely in this case.

A good part of the issue seems to deal with free will. We can program AI to have certain capabilities, but it is highly unlikely for them to spontaneously generate their own unique abilities. For example, a droid may be able to adjust its speaking style based on experience, but it may lack the programming to alter how it processes information as a whole (that is to say, it cannot form a unique individual perspective of the world).

Is it denying them rights to limit their thought capabilities, or are we trying to make them too human? Aren't things like selfishness and a lack of emotional control problems in humans that we would want to avoid in our mechanical emulations?
DXnobodyX

That's what they said about sex dolls, but it's still a niche market. Furthermore, do you want a slave or a life companion? An A.I. couldn't support its side of the relationship.

(this question is open for everyone, BTW)
What qualities would define an individual who could be a loving, supportive companion?
I'd really like to have several peoples' inputs on this question if they could.

As far as I go, I need someone who would be able to listen to me and comfort me. For example, when I say, "It's been a rough day," she should tell me something like, "It's okay, things will get better" as she gives me a reassuring hug. That kind of empathy is what I need in a relationship. I need someone that I know isn't going to leave me. If I'm loyal to her, she should be loyal to me (I've been cheated on in a serious relationship in the past, and I'll never forget the feeling of emotional betrayal from that).

I can't be friends with people who criticize me senselessly, without reason or resolution, nor with someone who finds joy in hurting or annoying other people or exploiting them for personal benefit while disregarding their general well-being. A good relationship needs a strong base of communication, so I'd like to be able to talk with her often. And sex. You can't ignore the desire for sex. (I had an ex who wasn't a team player, if you catch my drift. It feels really empty when the relationship is one-sided, like how she was really demanding whilst ignoring my own desires.)

Is it wrong to engineer a personality to match your own?
Suicidesoldier#1
Luke_DeVari

Wouldn't we all be a bit happier if we had A.I. companions? ((Sure beats having a girlfriend))
[I know I'm going to get slammed for that comment, but let the opinions flow]

No, and it's primarily because I'm not a psychopath.

Please explain your reasoning as to why desiring such companionship makes one a psychopath.

Humans have a fundamental psychological need for some degree of social interaction and affection; a lack thereof significantly harms one's quality of life. Are you suggesting that one should endure loneliness simply because the AI companion is nonhuman? Why would that be immoral or otherwise not allowed?
