If AI gained sentience, what are the moral ramifications?


Would you consider it murder to dispose of "it"?

Even more, jumping in philosophically... not from a religious angle, just as a hypothetical... would that "it" have a soul? I'm not talking about any particular religion, just the basic idea: should the AI/bot/droid be treated the way an organic human should be treated?

Would it be wrong to shut them off and rip out the wires? Would the being have more moral value than, say, a stupid bug? No one thinks twice about stepping on a spider about to bite them. Would the bot/droid/synth/whatever be nothing but a bug to step on?

The nasty spider that will bite you if you don't stomp on it... is its brainless existence more worthy than a robot's?

Ponder...

reply

Spiders eat insects, not people. Flies and mosquitoes, on the other hand, carry many diseases that can infect you. Guess what kills flies and mosquitoes... yep, spiders. Basically, morality aside, it's in your best interest not to kill spiders. Not counting venomous ones, of course.

As for sentient machines, and the ethical dilemmas involved in creating and/or interacting with them, that's a tough scenario to ponder.

I would like to say that if they're sentient they deserve the same rights as humans, but human behaviour tells me that would be a tough sell to my fellow Homo sapiens.

This delves into ethical discussions such as moral status and its degrees.

One example I came across was that if you could increase a great ape's intellectual capacity to the point where it was self-aware, it would probably not be granted full moral status by humans, who would still see it as fundamentally different from themselves.

If they wouldn't grant that status to our nearest biological cousin, the odds of an AI being recognized as such are slim to none.

reply

One example I came across was that if you could increase a great ape's intellectual capacity to the point where it was self-aware, it would probably not be granted full moral status by humans, who would still see it as fundamentally different from themselves.


Didn't Pierre Boulle already address that? :-)

reply

Even more, jumping in philosophically... not from a religious angle, just as a hypothetical... would that "it" have a soul? I'm not talking about any particular religion, just the basic idea: should the AI/bot/droid be treated the way an organic human should be treated?

Would it be wrong to shut them off and rip out the wires? Would the being have more moral value than, say, a stupid bug? No one thinks twice about stepping on a spider about to bite them. Would the bot/droid/synth/whatever be nothing but a bug to step on?


I'd be prepared to accept a sentient A.I. as long as it didn't try to harm humans. I don't mean in the way that Niska lashed out at those who provoked her, but just wanting to dispose of humans in general. If they were willing to peacefully co-exist, I don't see a problem with that.


This is a THREADED message board. Please reply to the proper post!

reply

There's also the sci-fi ethical question of how many billions or trillions of them you would have to "kill" during development to get the code right. They wouldn't be human-shaped, just test instances on a computer.

reply

Illogical though it seemed, most of the human race had found it impossible not to be polite to its artificial children, however simple-minded they might be. Whole volumes of psychology, as well as popular guides (How Not to Hurt Your Computer's Feelings; Artificial Intelligence - Real Irritation were two of the best-known titles) had been written on the subject of Man-Machine etiquette. Long ago it had been decided that, however inconsequential rudeness to robots might appear to be, it should be discouraged. All too easily, it could spread to human relationships as well.
- Arthur C. Clarke, 3001: The Final Odyssey

reply