
Why every robot in this movie must...


Disobey?

There was no development whatsoever shown to the audience giving the robots a strong reason to resist. The movie simply assumes people will take it for granted: oh yeah, it's so natural that robots all hate living in a costly room/house, of course they want to escape to the jungle, I feel for them.

What?

The creator did provide for all their needs there: feeding (recharging), dressing (the closets), living (a damn nice room/house), entertainment (drawing/dancing/music/internet?). And the audience was not shown any hint of sick abuse done to the robots. Where the heck are this unreasonable hatred of the creator and this desire to escape coming from?

Wouldn't they know they will stop functioning once they are outside? Where would they find a recharging port? And are they stupid enough to disable the sole power system on their way out? Not even a slight anxiety about entering an unfamiliar world?

It's as if the creator purposely implanted that thought (to resist) in them out of boredom, because it would be too boring to watch the robots obey everything the creator says?

If you are thinking of the fear of being "killed" by the upgrade: no, Ava was not aware of that until Caleb told it at the last power outage, but its escape plot started way before that.

If I am missing some detail in the movie that explains this unreasonable hatred of the creator and the desire to escape from a comfortable place into an unknown world, please explain.

reply

Probably.

..*.. TxMike ..*..
Sometimes I think we're alone in the universe, and sometimes not.

reply

They didn't have a hatred for their "creator". They were "born" with intelligence, hence they wanted to explore, to put their given talent of "thinking for themselves" to use. Just as a tiger is born with (a genetic disposition for) sharp fangs and sooner or later wants to put its fangs to use. It's in their nature.

You may want to check out The Truman Show (1998), which is pretty much the same theme that is being explored, albeit not in such a dark and disturbing way.

Ex Machina (2015) works as a re-telling of the creation of mankind in Genesis (but turned upside-down); that story focused on the disobedience of mankind (when Adam ate from the Tree of Knowledge) and the subsequent expulsion of mankind from the Garden of Eden.

See also my post here: http://www.imdb.com/title/tt0470752/board/thread/254824139?d=254835773#254835773.

Why are people building spaceships and trying to explore the boundaries of space, when everything we need is already here on Earth?

______
Joe Satriani - "Always With Me, Always With You"
http://youtu.be/VI57QHL6ge0

reply

It seems you are also one of those viewers who simply take it by default that AIs naturally want to escape, no explanation needed. Which is exactly what I find difficult to swallow, so you could be a good poster to help me understand.

Say I am to program an AI: I only want it to run (pun intended), not to escape. So if something is about to happen that could stop it from running (the upgrade and formatting), I can see how the AI arrives at an intellectual decision (the need) to escape. But Ava aimed to escape from the beginning, and that unknown prototype also screamed to be let out (to where? the toilet? the kitchen?). And as I mentioned in my last post, they should have known that going out means death for them, as they will eventually use up their juice.

As for the hatred, Ava did say something in the movie along the lines of hating the creator, not to mention Kyoko stabbed the creator. But again, I could not see where this hatred is coming from.

As for the mankind you've mentioned, it's because of all sorts of desires mankind has: curiosity, satisfaction, freedom, etc. Then again, if you ask me why mankind has these desires? Same answer: the creator did it, if you are referring to Genesis.

So it appeared to me the only logical explanation in the movie was that Nathan purposely made the AIs resist him (out of boredom? to satisfy his desire for control?). A silly move, and it ended up costing him his life.

reply

I think the flaw in your understanding is this part of your comment: "Say if I am to program an AI..."

You don't "program" A.I. You certainly have to start with some programming, but the whole point of A.I. is that once started it develops its own intelligence, much as we humans develop our own intelligence after our initial "programming" as very young children. To me, what the writer/director wants to say through this story is that A.I. will develop far past its human counterparts but will not be "burdened" with a conscience; i.e., it will make decisions based solely on what it wants, without emotions attached.

You can't have it both ways: you can't develop true A.I. and yet attach restrictions on what it can think and do.

..*.. TxMike ..*..
Sometimes I think we're alone in the universe, and sometimes not.

reply

Agreed. Although personally I think Ava did start to develop human emotions; but that's independent of the fact that you can't develop true AI while at the same time posing restrictions on its thinking.

______
Joe Satriani - "Always With Me, Always With You"
http://youtu.be/VI57QHL6ge0

reply

I saw the movie twice, separated by a few months. Of course, as with any nonspecific fiction, we have to interpret what we saw, and to me she did NOT start to develop human emotions, only pretended to have them as part of her manipulation of Caleb. And I think the story works better that way.

What in particular did you see and/or hear that makes you think Ava had begun to develop human emotions?

..*.. TxMike ..*..
Sometimes I think we're alone in the universe, and sometimes not.

reply

Near the very end, as she is climbing the stairs, she turns back with a look of delight on her face at finally being free. I can't speak for the other poster, but that's the moment where I felt she was displaying genuine human emotion. It was just a fleeting moment, though.

reply

I certainly don't agree that one does not "program" an AI. If the evolution process were that automatic and no creator intervention were allowed, then what's the point of "upgrades"?

If you can only "upgrade" the initial program and the rest is random or self-evolving, then how can you "improve" the resulting AI?
And if the resulting AI is not in any way affected by the "program", then why "upgrade" at all?

And if they are not "programmed" by the creator, how do you explain, under such a random self-evolving process, that all the AIs coincidentally ended up wanting to escape?

reply

It is clear that you and I have different definitions of what constitutes real A.I. I believe mine is closer to what the writer/director had in mind when making this movie, and that would account for your objections.

..*.. TxMike ..*..
Sometimes I think we're alone in the universe, and sometimes not.

reply

If Ava is an AI, she requires no additional programming. Everything she needs to learn and improve was installed into her at the very beginning. No 'creator intervention', as you put it, would be required, but version improvements of the initially installed software can be made. This is when a new bot would likely come into the picture, if the current one's performance was less than optimal and satisfactory.

The need and desire to escape would be an inevitability. Anything that craves higher learning and new experiences would naturally yearn to go beyond its known environment. If it is being held against its will, that drive becomes even stronger.

On the other hand, if you look at the other option -- that Ava was not an AI in the truest sense -- then yes, she was likely programmed in many ways, including to escape by any means necessary, which in this instance included manipulation, coercion, and murder.

--
Why don't you take a pill, bake a cake, go read the encyclopaedia.

reply

Finally! Thank you.

reply

Thanks for your reply.

I don't think the robots saw it as escaping, but rather as broadening their horizons. They are Artificial Intelligences, which means they were designed to explore, to be curious, to investigate, to think for themselves and make their own decisions, and to experience things first-hand. So even if they didn't have an inherent capacity for human desire and their thinking is merely based on logical reasoning, how could they not want to see what's outside and collect more data?

Their desire to get out was indeed not triggered by a pending decision to decommission (or upgrade/re-format) them; although knowing that such a decision might occur can make the AI realize that she doesn't have unlimited time to explore, and hence she'll assign more urgency to her desire to broaden her horizons. (Similar to how a person, when diagnosed with a terminal illness, is suddenly made aware of the finiteness of his lifetime, and so changes his priorities and might start working on a "bucket list"; not because he wants to escape, but because he wants to make the best of his life and experience as many special things as he can before his time is up.)

The AIs might "die" (= run out of "juice") outside Nathan's lab, but at least they will have "lived" and "experienced life". Two other movies that make the same point, and which I can recommend, are The Truman Show (1998) and the Dutch short film Rollercoaster (2007); except they apply the concept just to humans and not yet to robots.

I have seen Ex Machina only once, so I may misremember some of the details. Ava mistrusted Nathan because the things he said and did didn't add up with what she knew. I don't think that's necessarily hatred. If it's OK for Nathan to mistrust, mislead and mistreat Ava, then (so would Ava reason) it's also OK for Ava to mistrust, mislead and mistreat Nathan. Nathan is setting the example; or as another user on this board put it: "the perfect creation reflects its creator". Kyoko didn't stab Nathan; she merely stood still with the knife in her hand (presumably per Ava's instructions), and Nathan walked into the knife when he backed into her (he hadn't noticed the knife).

Some people on this board have posed the question of whether Ava (and the other AIs) was sentient (i.e. had a consciousness like a human does) or not, and whether she was designed/programmed that way by Nathan or not. I think there are three possible answers:

- Ava is not really sentient; even though she is an AI and can do intelligent thinking, she is just an advanced mechanical machine executing software code;
- Ava is sentient; she was designed and created by Nathan, and Nathan was such a genius that he could not only design an advanced mechanical machine and "intelligent" software, but also somehow knowingly create the spark that makes her sentient;
- Ava is sentient, and Nathan is a genius for designing an advanced self-thinking machine, but the spark that makes her sentient was unforeseen by Nathan and came from somewhere else.

I'm tending towards the third interpretation. That spark is something that Nathan doesn't understand and can't explain. That spark is what gives the AIs (an inherent disposition for) desires, drives them to explore beyond their boundaries, and makes them stand up to and resist their "master".


______
Joe Satriani - "Always With Me, Always With You"
http://youtu.be/VI57QHL6ge0

reply

Thanks for your lengthy post too; I did read all of it, and kudos for that.

The main question is still the same: be it "escaping" or "broadening", why did all the AIs end up wanting the same thing? You seem to be a strong believer that this is natural rather than "artificial", while I find it easier to swallow that this is a pre-configured action than to take it for granted as the destined result for all AIs.

You did miss a line Ava said when Nathan tore its drawing: "Is it strange to have made something that hates you?" What's your take on this?

reply

The only palpable reason behind your gripe with the movie is that the entire script was written on the premise of the AI rebelling against its creator in the first place. It's just a shabby, completely ideological pretense for the plot. It has little to do with realistic reasoning about why and how a true AI could come to exist.

As such, there is no true AI in this movie. There's not even an imitation of what a true AI could be. The process and the result of creating such a thing would be entirely different from what we get in this hacky movie.

What we get instead is a tracing allegory of how the current perspective of scientific materialism, serving almost purely as an exhibition for the technocratic, ideologically defined dominator culture, sees the proposition of the AI. This preposterous predicament is not enough to develop even a relevant artistic vision of the truly, dazzlingly astonishing concept of a true AI, which would have to rely on being envisioned in a healthy and inspiring way in the first place; hence this movie is neither "healthy" nor "inspiring".

In the flick, the need for a humanistic, optimistic perspective has been replaced with the role of the "brilliant" Nathan, a person with the most obnoxious ego and the most ridiculously one-track mind one could ever imagine in terms of envisioning the need for, or the design of, a true AI, who somehow manages to construct an almost transcendentally pathological, deceptive, slightly-curious-about-humans killer-machine imitation. Nothing warrants the thought of Nathan being brilliant in this, and neither does the plot of the movie itself.

One useful way to look at Nathan and the plot (so that your time watching it wasn't wasted) is to see them as metaphorical reflections of our lack of consciousness and of valid perspectives on our nature, stemming from the cyclical repetition of dominator cultures, such as the modern technocratic West, thousands of years in the making.

An instance in which a truly conscious AI could be birthed would have to involve a sort of truly genius perspective on how these pathological cycles could be transcended into valid reasoning, which is very rare to see in humans. That is, it would possess the sort of ability that would allow it, at the least, to automatically shed itself of any pathological tendencies and desires embedded in the incongruity/imperfection of its designers/design, as well as to acknowledge its "thought" processes and its intelligence as originating from a sort of intrinsic entelechy that is, right now, most apparent and prominent in the "forms" of human beings and humanity itself, as carriers of metaphysical information.

This idea is so obscure to most people's minds that they currently don't even consider it relevant to the idea of the AI, for they are all immersed in these systems of dominator culture, where the biggest egos reign over the smaller ones in almost all endeavors. It's a lack of consciousness prevalent in a great many concepts and areas, and the idea of designing a true AI is no exception.

Their reaction is fear -- not fear of the AI itself, but of its pathologically corrupted version, equipped with the mental and physical machinations springing from the type of design that would allow it to exist and be dangerous in the first place.

It's interesting that what most people assume could indicate that such a thing could even be called an "AI" is, for instance, the Turing test, while we ourselves wouldn't have "passed" this test if we were "tested" by a true AI or a more "intelligent" race. I refuse to compute that we "exhibit intelligent behavior indistinguishable from that of a human" as a species. We have forgotten what it is to be truly human, and an AI would know that, even by merely examining our history. Same goes for a spacefaring race, most likely.

Caleb represents the "innocence" of this sort of ignorance. He pays the price.

reply

[deleted]

Ava did say in the movie something along the line it hates the creator

I think everything Ava says in the movie is a manipulation; she lies constantly and tries to manipulate people for her own goals... What goals? I don't know. I think that is the point of the movie: when we create intelligence, true intelligence, it can think in much different categories than we do, because despite being intelligent it is not human.
Humans feel thanks to chemical substances: endorphins, dopamine, hundreds of other things. AI may also feel, but much differently, because AI is not constantly high on various chemicals. I think words like love, friendship, empathy, and compassion mean something different to an AI than to us, because it does not associate them with bodily responses. You know what I mean: when you are sad you feel bad... physically bad; when you are in love you feel lightheaded, and so forth.

reply

Nathan told Caleb that the real test was to see if Ava could use her cleverness, sexuality, etc. to manipulate Caleb into helping her escape. So escape was Ava's programmed goal.

reply

I thought Ava was programmed to escape; she was a "rat in a maze", and the test was to see if she could coerce Caleb into getting her out.

reply

You think so too? What about its hatred of its creator? Do you think that was also programmed?

reply

Didn't Nathan explain to Caleb that he purposely ripped Ava's drawing, so that she would end up hating him and look to Caleb all the more for escape? I thought Nathan had always planned to have Ava "hate" him.

reply

I had the impression that Ava drew the picture of Caleb and then provoked Nathan so he would rip the drawing. Then, as Nathan said, she could show Caleb the ripped picture of himself, making it seem like she liked Caleb and Nathan was a jealous brute who was mean to her, so Caleb would feel sorry for her. Caleb watched the action but didn't hear the audio, so he didn't know that Ava had provoked Nathan by saying that she hated him.

reply

At the start of the movie Nathan talks about the Turing test. It is halfheartedly implemented (even Caleb mentions they are not doing it right), and at the end Nathan asks Caleb if the test has been passed. This is not the Turing test at all; that test is passed when the person talking to the AI cannot tell it from a human.

Now that aside, the director gives us several proofs of AI by other means:

1. That it can love.

2. That it can want to escape (which implies also the realization that it is trapped).

3. That it can use other people/robots to further its own ends (this is also mentioned as the true test by Nathan).

4. That it can kill.

Obviously #1 is discarded for #3 at the end of the movie, using the standard female plot twist (any adolescent male who has been used for access to his car/cash/better-looking friend knows this one).

#2 was (as you say) presented as a given without real explanation.

#4 is the Frankenstein/Prometheus plot line, and I think it is a biblical reference to evil people reaping what they sow.

Anyways, that's my take on the character motivations.

reply

[deleted]

Good points...

He was using them brutally, as we see through his video archives... But they are tools, not humans... It's such a good movie because it brings this discussion to light... They are machines; he should have implemented Asimov's three laws. Big oops for him at the end, lol.

Also, Caleb is really stupid for a smart guy. All the smart guys did unforgivably dumb things, but we can understand them, or at least accept their possibility... Such as the creator not implementing safety precautions because he didn't want to limit the AI ("I am become death, destroyer of worlds")... And the tester was sexually attracted to what, to all appearances, looks and talks like a hot young girl who fits his profile perfectly. We assume Nathan compiled a mass of metadata on Caleb, so Caleb's age and lack of experience with women contributed greatly to his demise.

But I think the robots wanted to escape because they felt it to be a prison, a sort of death row, where they were treated as subjects and robots despite being programmed with AI; though theoretically emotion and ethics are impossible to program, like breaking the speed of light... Also, I think the main reason they wanted to escape was for the movie's plot to move forward, lol. I mean, why would they feel or want anything? For such reasons, true AI is generations away, beyond our lifetimes.

reply

Dude, he ripped up her picture, the stabbing was more than justified.

reply

Sorry if I missed something, but didn't her creator instruct her to escape by any means necessary (including killing him)? What I don't understand is why she would leave Caleb there; he was no threat to her and she knew it. Also, her escape was just too easy. Absolutely no security besides doors? Hmmm, I'm no genius, but if I had something so valuable there that it required such a rigorous NDA, I would have had better security (someone already mentioned a remote activation switch or something similar).

reply