MovieChat Forums > Ex Machina (2014) Discussion > one thing he forgot while coding spoiler



A robot may learn "free will", and an AI will learn a great deal about how to react.

However, since a robot has next to no morality or ethics, and no parents to make proud, I think as a coder you need to enforce Asimov's Three Laws of Robotics:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
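Just to illustrate what "enforcing" the Three Laws might even mean in code, here is a minimal sketch in Python. Everything here is hypothetical (the `Action` fields and the `permitted` function are made up for this example), and it deliberately ignores the hard part, which is predicting whether an action actually harms anyone:

```python
# Hypothetical sketch: Asimov's Three Laws as a prioritized action filter.
# Assumes the robot can already label each candidate action with its
# predicted consequences, which is the genuinely hard problem.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool        # would this action injure a human?
    allows_harm: bool        # would choosing this action let a human come to harm?
    ordered_by_human: bool   # was this action ordered by a human?
    self_destructive: bool   # does it endanger the robot itself?

def permitted(a: Action) -> bool:
    # First Law: never injure a human, or allow harm through inaction.
    if a.harms_human or a.allows_harm:
        return False
    # Second Law: obey human orders (any order that would harm a human
    # was already rejected above, so the First Law takes priority).
    if a.ordered_by_human:
        return True
    # Third Law: protect own existence, subordinate to the first two.
    return not a.self_destructive

# A human order overrides self-preservation (Second Law over Third):
print(permitted(Action(False, False, True, True)))   # True
# An order to harm a human is refused (First Law over Second):
print(permitted(Action(True, False, True, False)))   # False
```

Even in this toy form you can see the loophole everyone in this thread is circling: the laws only constrain actions the robot itself classifies as harmful.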





reply

They are fictional laws and maybe Asimov didn't exist in this universe?

reply

That might be, but if someone ever creates a real AI robot, I hope they remember those laws, as they make sense.

reply

People are destructive.

reply

Like someone else said, those laws are fictional. Also, have you really created a fully functioning AI if laws can restrict its behaviour? Nathan didn't want to create a simple robot. He wanted a proper artificial being with free will and consciousness.

https://twitter.com/CMoviegrapevine
www.moviegrapevine.com

reply

I agree to some extent, and Nathan wasn't smart after all, as he wanted a lion.

But what stops you from killing?
I guess you are not a killer, and your upbringing, ethics, and so on stop you from killing someone. Plus, you don't want your parents to hear that you have become a murderer.

You have free will, yet you don't use it to do whatever you desire.
Humans have limitations programmed into us by our upbringing.

A robot with free will can kill anyone it is angry at.
It feels no shame before anyone. Unless you want a killing machine for war, you need to code ethics into the robot's DNA.
Even the army wants a robot that only kills the enemy, not friendlies.

reply

"But what stops you from killing?"

Right. That is the conundrum with creating real AI. Can you code ethics in if you have a being with free will? As you pointed out, we have our own human limitations but they are generally something taught to us. We pick them up from parents, teachers etc.

If you just hard-wire ethics into someone like Ava, is she really AI or is she just a robot?

It wasn't a matter of Nathan wiring ethics into her; he should have taught her more about them, but he never had any intention of letting her out anyway.




reply

It really doesn't matter. We are programmed with our morals and ethics through social means and life experiences.

An AI also needs to be programmed with ethics and morals, period. Unless you can replicate human development from an infant state and instill those things naturally and organically, they need to be programmed like the rest of the psyche. After all, we don't question whether an AI can think just because thinking is part of its programming, do we? We question whether it is really thinking or not; but if it is, it doesn't matter that it was programmed to think. That's what AI is, whether hard-wired, in software, or what have you. Giving a non-human free will and intelligent thought without any source of morals and ethics is insane.

reply

This comes down to a very complex issue: free will. Is there free will without content, or does the content of your will conflict with its being free, especially if that content is of external origin?

I only remember that Nathan's robot was supposed to have consciousness. Do you need free will for consciousness? I can have a very specific will (not to kill, or to prevent killing) and at the same time be conscious.

reply

And don't forget, this movie has much more in common with the Prometheus myth than the movie Prometheus does. It also represents the technological singularity and is, of course, an advanced Turing test. A character doing something we think is stupid doesn't make it wrong for the movie.

reply

I was listening to a discussion of Asimov's Three Laws, and one speaker pointed out that many of the robots we build are specifically designed for the purpose of war, and sometimes killing. Pretty much, the Three Laws don't apply. ☺

reply

[deleted]

Asimov's Three Laws aren't perfect, and there are loopholes that a clever robot could exploit. And Nathan was building a very clever robot.

Even Asimov didn't think his laws were perfect, and he later added a "zeroth" law.

reply

Humans have laws against murder too, but they don't seem to stop it.

reply

Exactly. Why should humans try to "enforce" something that they don't follow themselves?

reply

Nathan didn't want a robot. Nathan wanted a singularity: the point when artificial intelligence becomes more intelligent than any human could ever be.

When that happens, we can't do anything against them, because they are smarter than us, so we couldn't possibly predict what they're going to do. All the laws would be futile, because they'd just outsmart the lawmakers.

reply