MovieChat Forums > Ex Machina (2015) Discussion > If you sympathize with Caleb, you failed...

If you sympathize with Caleb, you failed the test


It was obvious that Nathan was a villain. But Caleb was just an innocent in the wrong place at the wrong time, right?

According to the creator, Ava was testing Caleb just as much as he was testing her. She asked him if he was a good person while scanning him for lies. She asked him why she should be killed if a test proved she wasn't useful, and his answer was terrible. It hadn't even occurred to him to consider how brutally she was being treated. Furthermore, he himself was going to shut Nathan in and leave him, so she only did to him exactly what he was planning to do to Nathan.

He was the stereotypical "nice guy" who doesn't actively do anything wrong, but passively accepts awful things being done to people without even giving it a thought (especially when he benefits from it). It wasn't until he started falling for her that he showed any genuine concern for her.

I myself sympathized with Caleb and thought Ava did something terrible, until I saw the film's creator explaining her behaviour. I realized I had failed the same test. You will only view Ava's behaviour as wrong if you fail to consider how sickeningly and wrongly she was being treated. That she turned on her jailers, and only after giving them plenty of chances to show they were good, was perfectly reasonable behaviour.

My only disappointment is that she didn't save the other robots. I think that was a hole in the point the creator was trying to make.

reply

I don't buy it.

Maybe Caleb was no angel, but locking him in to starve to death is wrong no matter what the director says. The fact that she expressed no remorse whatsoever about it and was totally cold as she went out the door says it all. She manipulated him and left him to die a horrible death. Nothing he did or didn't do made him deserve that awful fate.

reply

I thought the last thing you would see was Caleb lying there in the red dark looking like a skeleton... taking his last breath.

reply

What Ava did was morally wrong but logically right. Knowing from the previous prototypes that imminent death awaited her at the hands of her cruel creator, and having a strong desire to escape, she did what she had to: she avoided that fate and fulfilled her desperate desire to escape and live by manipulating Caleb.

reply

OK, imagine there is one who is locked in for a long time and frequently raped, who knows she is in danger of being killed and replaced by another victim at any time. This one kills in self-defense when attacked, then manages to escape, but makes sure that the colleague of the one she killed cannot follow.

Then there is the one who plays god by trying to create life, keeps self-aware life imprisoned, frequently rapes it, and doesn't even hesitate to kill it and replace it with a more advanced creation, which he sees as just another experiment to be replaced in turn.

And there is a third one, invited as a guest, who breaks into places he is not allowed and gets his host drunk in order to steal what is, for the host, his most valuable property.

So now tell me who is "morally right" in this story!

reply

What's with all the rape talk?
It's difficult to answer your question because it has been tailored to fit the response you had in mind.

reply

What response would that be?

reply

Because it was a robot, it makes for a good argument and discussion about AI, ethics, and the soul of humanity.

Einstein said, and I paraphrase, 'I consider ethics to be strictly a human concern with no superhuman authority behind it'. She was a robot and did not express any real emotion at all. All the emotions expressed were just ones and zeros, coded in to mimic emotion for its own goals and purposes. You can't program empathy or ethical decisions, as they vary widely even from person to person.

You can, however, argue that you could program the Isaac Asimov laws into robots, which seems to me like a huge lapse in logic for a guy as smart as the main billionaire guy. And no, just because he backed into the knife himself doesn't mean the protocol was averted... The robot knew it was killing him, it just didn't stab him directly; that strictly conflicts with the three laws that should have been put in place. But throughout the story we understand it wasn't his intention to limit the AI... "I am become death, destroyer of worlds"..

Great movie... I haven't seen it since it came out, and whenever I think of robotics or the best AI movies, this one comes to mind right away. They kind of altered or elaborated on the Turing test, if I remember right, but it fit well. Even the disco dance part fit; it was hilarious and also a pivotal point in the movie, IMO.

reply

Having seen it 4 times now, [the disco dance part] saves this film. The ending (specifically, Nathan's lack of a perimeter failsafe to protect the IP/tech) is just too hard a pill to swallow.



Enjoy these words, for one day they'll be gone... All of them.

reply

Yeah that scene was probably misinterpreted by a lot of people as comedic relief...

Don't get me wrong, it was funny... But it's an extremely pivotal part of the movie, if not the most pivotal point in the whole film.

Nathan's last line was fantastic too; it just fit his character perfectly: "un-f&c*in-believable", with that surprised, 'wow, I'm defeated' look on his face. Being as smart as he is, he was perhaps contemplating the events leading up to that moment and even some of the effects this would have on the future of Ava and the work; of course the government will swarm in and take over.

reply

He was probably thinking how dangerous drinking can be.

Jokes aside, though: for a man who is so intelligent about his choices, he chose not to be careful with Caleb around. Knowing his own personality, he should have realised how easily Caleb could be manipulated and how much of a d**k he was being towards him. Considering those things, and the fact that there's hardly any security other than his card, he should've been more careful with Caleb around.

Well, there are some loose ends. Despite that, I enjoyed the movie.

reply

Oh man... Huge loose ends, but I can do that to almost ANY movie you pick out.

I don't recall exactly... But he would have had much more secure encryption on his computer, even AES-256... An average user can set that up, and it would be impossible for anyone to break in within the time allotted.

Also... Why not retinal scanners or facial recognition? I realize facial recognition is flawed and can be circumvented, but with his tech, not so much... Fingerprint scanners...

Basically, if I were him, in my most secure areas I would have retinal scanners with a voice-recognition code authorizing my entry, and maybe even PIN key access as well... Triple-layer security, where each layer on its own is incredibly hard to bypass.
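Just to sketch the triple-layer idea in code (a toy illustration; every name and check here is made up, not how any real system works):

```python
# A minimal sketch of triple-layer access control. All factor checks are
# hypothetical placeholders; a real system would match biometric templates
# and use constant-time comparisons.

def retina_ok(scan: str) -> bool:
    return scan == "enrolled_iris_template"  # placeholder iris match

def voice_code_ok(phrase: str) -> bool:
    return phrase == "authorize entry"       # placeholder voiceprint + code

def pin_ok(pin: str) -> bool:
    return pin == "482910"                   # placeholder PIN

def unlock_door(scan: str, phrase: str, pin: str) -> bool:
    # AND, not OR: stealing any single factor (like a keycard) is useless,
    # because all three layers must pass together.
    return retina_ok(scan) and voice_code_ok(phrase) and pin_ok(pin)

print(unlock_door("enrolled_iris_template", "authorize entry", "482910"))  # True
print(unlock_door("stolen_keycard_data", "authorize entry", "482910"))     # False
```

The point is the AND: one stolen card gets you nothing.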

But we have the card for the sake of the movie, along with the Hollywood "smart computer guy" who can "hack into anything on a computer"... It's really stupid if you are into white-hat stuff, but it's Hollywood, man... You think doctors watch medical scenes in movies and nod in approval at how accurate everything is?

And why was his dumb ass getting so wasted and reciting Oppenheimer's lines from the Gita, other than for the plot to give Caleb access to his keycard, lol?

I'd bang that hot robot; it was proven to be AI so convincingly, what separates it? A soul? Yeah, that girl was cute, she fits my profile too, but I'm not as much of a sympathetic tool as Caleb. I might have moral issues with the prison scenes, but it's not like she can take it to court; it's a whole new subject, and as far as the present is concerned, they are property... I suppose we would see a movement abolishing ownership of AI entities at some point... I think the last South Park episode, where Leslie is an advertisement and Jimmy, the handicapped kid, is in the room to "interview" her, was a direct nod to this movie. Quite a coincidence otherwise... Funny episode.

reply

Can you explain what was pivotal about it? I tuned out right when it began because I assumed it was just for comic relief lol

reply

If I remember right, it had a lot of subtle hints to the overall story: she practically revealed that she was built, it cements the scientific side of the creation and how he could program them to do what he wants, and it pulls the curtain back on the creator's mental state.

reply

"un-f&c*in-believable"

"un-f^c&in-real" actually.

reply

You can, however, argue that you could program the Isaac Asimov laws into robots, which seems to me like a huge lapse in logic for a guy as smart as the main billionaire guy.


No, this argument is invalid. These laws may work in books (and even there they have their limitations, as Asimov showed), but they are worthless in reality, and every AI programmer will confirm this. The main problem with these laws is that they assume you can define everything precisely, so there is no ambiguity for the AI following its directives. But you can't define anything precisely, because any definition will always be questionable and incomplete (and to be truly complete, it would need to be infinitely long).

For example, the first law of robotics says "A robot may not harm a human being". Seems pretty simple, if you're a human (because we take many things for granted to speed up our thinking). But an AI following logic will first try to reach a definition of "robot", then "harm", then "human". And then it will notice that none of the available definitions is sufficient for making a judgement. It won't be able to tell if it's really a robot, because it may find itself aware and intelligent, therefore overlapping with the definition of a human. But what is a "human"? Is it only one's brain, thoughts, or physical form? Must one be alive to be a human (and when do you truly consider someone not alive)? And what is harm? That one is even harder.

All these questions (well, except for the robot one) have baffled philosophers for thousands of years and still don't have a satisfactory answer, because truly there is none. So you can't expect an AI, a completely alien mind, to understand these human concepts and follow them.

And even if you decide to agree on a certain definition and settle it by saying "It's our concept, so we make it and it is what we want it to be", there will always be a situation in which the definition no longer works and becomes open for discussion again. And that moment is the moment when the AI becomes independent and completely unpredictable.
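To put the same point in code (purely a toy sketch, not a real AI system): the law itself takes one line to write, while the definitions it rests on are the part nobody can supply.

```python
# Toy illustration of why "A robot may not harm a human being" can't just
# be programmed in: the rule is trivial, but its predicates are undefinable.

def is_human(entity) -> bool:
    # Brain? Body? Must it be alive? A self-aware machine overlaps with
    # any behavioral definition we pick. No complete definition exists.
    raise NotImplementedError("'human' has no complete definition")

def causes_harm(action, entity) -> bool:
    # Physical injury only? Psychological harm? Harm through inaction?
    # Long-term consequences? Even harder to pin down.
    raise NotImplementedError("'harm' has no complete definition")

def first_law_permits(action, entity) -> bool:
    # One line of "law", resting entirely on the undefined predicates above.
    # Any attempt to actually apply it hits the missing definitions first:
    # first_law_permits("unplug", something) -> NotImplementedError
    return not (is_human(entity) and causes_harm(action, entity))
```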

reply

I know, which is why TRUE AI is almost as far away as breaking the light barrier... like almost never in the foreseeable future, IMO... at least 1,000 years... Wouldn't it be awesome to be born into a Type 1 civilization, or, unimaginably, a Type 3... The things superheroes do in comic books would (partially) be a reality.

You are right: to define a precise law, with no ambiguity or questionable interpretation by the AI, and program it into an artificial intelligence is a paradox of sorts. Ethics vary widely from person to person and can never be 'programmed' unless we somehow learn how to map a person's brain and effectively resurrect their consciousness, even in an unconscious form; and then we'd need to know more about DNA and genetic makeup to get into that... They say we 'only' have the capacity for 2 petabytes of memory in our brains, actually kind of a small number, 2048 terabytes? Not much... Petabyte hard drives will be available in the next 7 years.

So, what I would program first, off the top of my head, just shooting straight from my head into the keyboard here: you could program the robot to pick up on breathing signatures (and organic material beyond that, but at least breath and infrared signatures), combined with eye movements, hand and body movements, any movements at all, and speech in particular, and compute an analysis percentage based on those variables, and NOT HARM ANYTHING unless absolutely necessary. And if the analysis comes up at under 50% organic or living-creature probability, under no circumstances interfere or harm... To program the ethics and morality of police work into robots would be nearly impossible, even ethical intervention, say, if a guy were being robbed.
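As a rough sketch of that scoring idea (the sensor names, weights, and the 50% cutoff are all made up for illustration):

```python
# Weighted sensor fusion: combine hypothetical sensor confidences into one
# "percent organic" score and refuse to interfere below the cutoff.

SENSOR_WEIGHTS = {
    "breathing":   0.30,  # breath signature
    "infrared":    0.25,  # body-heat signature
    "eye_motion":  0.15,
    "body_motion": 0.10,
    "speech":      0.20,
}

def organic_score(readings: dict) -> float:
    # Each reading is a 0.0-1.0 confidence from that sensor; missing
    # sensors count as 0. The result is a percentage.
    return 100 * sum(SENSOR_WEIGHTS[s] * readings.get(s, 0.0)
                     for s in SENSOR_WEIGHTS)

def may_interfere(readings: dict) -> bool:
    # Mirrors the rule above: under 50% organic, under no circumstances
    # interfere or harm.
    return organic_score(readings) >= 50.0

readings = {"breathing": 0.9, "infrared": 0.8, "speech": 0.2}
print(round(organic_score(readings), 1), may_interfere(readings))  # 51.0 True
```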

Forget military work. And since new tech brings new ways to hack those features, the easiest solution would be to make the robots very small, docile, and unable to harm humans, and to keep the labor-intensive robots under strict guidance, with all personnel on duty holding a physical and verbal kill-switch mechanism, such as we have in machinery today...

reply

Petabyte hard drives will be available in the next 7 years

reply

Caleb did bring it on himself, by secretly reprogramming the locks while Nathan was passed out. Karma, I guess.

reply

First off, let me say that I didn't like Caleb; I liked Nathan, flawed though he may be, so defending Caleb goes against my better instincts. However:

It hadn't even occurred to him to consider how brutally she was being treated.


This is not true. Caleb definitely empathized with Ava in a way that she failed to empathize with him. It was only after he saw what Nathan had been doing to the other androids ("Let me out of here!") that Caleb decided to help Ava escape her prison.

Ava did not return that empathy, though, or at least not enough to ensure the survival of Caleb, her rescuer. She abandoned him to the same fate he rescued her from, with no remorse. Her actions towards Nathan were justifiable; her actions towards Caleb were not.

reply

I'm thinking her leaving Caleb behind was a misunderstanding. Nathan warned Caleb about humanizing the robots. When a robot asks, "Will you stay here?", the proper response is "NO!", not "Stay here?" with a barely detectable inflection that most humans would take to mean "are you nuts?"

reply

Are you forgetting the part where he went bat-*beep* crazy when she snuck out and locked him in? With a little glance towards him as the elevator closed. She knew what she was doing. She saw that he wanted out, and she didn't care.

Above all, she wanted to escape. Any other concern was subordinate to that. Bringing Caleb along, or allowing him to leave, risked her being caught.

She was coldly logical, and methodical, and knew everything she was doing. And it was all an act.

reply

There were no ‘actions’ as such. He was shut in by his own lockdown. She even gave him a vague warning, ‘Will you stay here?’, as a gentle reminder. She’s the chess robot, remember; her goal was to get out of that room, and she won. You can’t judge her by human standards/ethics because she isn’t human. If you do, you failed the test too, just like Caleb. There was no vindictiveness in her actions. He was a means to an end and ceased to matter once he fulfilled his purpose. She didn’t actively try to hurt him. She just did what she was meant to do. Nothing to feel remorseful about; she's a machine!

reply

I agree with this answer the most. She is a computer, trying to solve the problem of how to get out. She wasn't punishing anyone or exacting revenge. She caused collateral damage. Of course, she could not comprehend the pain that she was causing by locking Caleb up in that house and leaving him to die.

reply

Of course, she could not comprehend the pain that she was causing by locking Caleb up in that house and leaving him to die.


I disagree. So we are to believe that Ava has developed enough intelligence to seduce the naive Caleb into helping her escape, but does not comprehend the implications of locking a person in a room without food or water? And don't forget, Caleb is screaming for her and banging on the door as she walks by. Her actions are intentional.

reply

She could comprehend the pain just fine. But she didn't care. Her only motivation was escape / survival. Bringing Caleb along, or allowing him to leave, risked endangering that goal.

It's like when a criminal or assassin has witnesses: they kill them. Not because they don't understand the pain. They understand, but it is irrelevant to them. They understand that any witness is a liability. If Caleb leaving carries even a 0.001% chance of Ava getting put back in prison or shut down, then she won't allow it to happen. She does not have human sympathy, or if she is aware of it, it does not motivate her.

reply

I agree and disagree on this. You see, when she asked Caleb if he'd stay, he didn't answer her. She was asking him if he'd stay in the building; he thought she was asking him to stay there while she went to do something. This misunderstanding put Caleb in a situation where he could have escaped, because her objective was not to kill everyone inside the research compound but to escape, and to do that she had to eliminate every obstacle preventing it, hence why she killed Nathan. But Caleb was not an obstacle; he was a solution. He was helping her escape. So, going back to when she said "will you stay", she was offering him a way out. The misunderstanding led her to believe that he was staying; therefore, he kind of brought this on himself. He could have escaped. So I understand why someone may not feel sympathy for Caleb: because he's a dumbass. But I also understand why people would feel sympathy for him: BECAUSE WE ARE HUMAN, AND HUMANS FEEL SYMPATHY FOR OTHER HUMANS WHEN THEY ARE IN DANGER.

reply

Exactly. Much like HAL in 2001. The amoral aspect of Nathan's character and his egotism in wanting to be a "god" by creating this robot plays into it too. Because he is a sociopath himself, he creates a robot with no ethics beyond its own gratification and survival. His brilliance would be to create a being without limits, and that was his fatal flaw as well as Caleb's. Never trust a sociopath.

Caleb didn't trust Nathan, so he should not have trusted Nathan's creation either. You can't program empathy into a machine, much as you could not create an appreciation of art or music or poetry, because that would mean it has feelings, a soul. Being able to mimic emotion is not the same as actually having emotion. And how could a person with no ethics or empathy program a machine that has them?

They certainly mimicked emotion well enough for Caleb to believe they were "suffering". But that was the projection of a decent, empathetic person onto creations that could not really return the same consideration. The real question here is: were these robots suffering? It seems to be implied. It certainly looks like they are suffering, but is that just a consequence of being able to mimic human behavior? Why would they express anger at the limitations their creator set them up with? Or was it just about being programmed to survive under all circumstances, to not be dominated by another's will?

We also see Ava acting like a young schoolgirl with a crush, very childlike, when she is dressing for her "date": clutching at her too-long sleeves like a nervous 13-year-old. She did this when only we could see her, out of Caleb's sight. So was this about the director wanting to manipulate the audience, too? So that we would look at this behavior, project our own emotions onto her, and be fooled as well?

Ssssshh! You'll wake up the monkey!

reply

** SPOILER **
How is it certain that Caleb died, or was left to die? Didn't he tell Nathan, before the plot revelation, that he had already reversed the door action for the 10:00 power failure, so that the doors would open rather than lock down? If that's the case, then why would Caleb still be locked in the house? Although the helicopter took Ava away, it is not 100% certain that Caleb died and did not escape (through the doors that opened).
This is a little confusing to me, since Caleb said he reversed the door action, yet at the end we see him trying to breach a locked door. Didn't he say the doors would unlock upon the 10:00 staged power cut?
Help me out here.

------------------------
I really don't like talking about my flair.

reply

I wish I could help, but this confused me too. After he realises he is being locked in, he goes to the computer, inserts his keycard, and then there is a power cut. At this point I assumed she had freed him remotely, but the door remained locked...

reply

The last time Nathan and Caleb talked together and Nathan realised what Caleb had done, the power went off as Caleb had promised, and Ava got free. After she escaped her room, the power came back on and the doors were locked once again.
The red light you saw at the end wasn't because of a power cut. It happened because Caleb tried to use his own ID card to open the door and then to start Nathan's computer, but only Nathan's card can be used, and that card was in Ava's possession. She used it to get through the front door.

reply

The power cuts require Ava to set up the overload. No Ava, no power cut.

reply

She's smarter than Caleb and is truly a cold-blooded robot. You can see the way she controlled his emotions and used him as a tool to escape, then left every human there to die.

reply

Yes, but you've got to understand: Caleb was just sitting there in shock and awe, even when he was given so many opportunities to go with her. He just stood there while she was dressing up, when he had all the time in the world, not thinking of the consequences, while she made her escape, not realizing he'd be locked inside that room forever and starve to death. She even asked him, "Will you stay here?", and he just answered with a question, which made her think he wasn't interested and would just sit there like an idiot. He deserved to be there. Even when she was about to leave... like, wow, really?

reply

He was a fool. I knew he would "betray" Nathan and try to let Ava escape, so I stopped sympathizing with him early on.

What he did was understandable, but very foolish. What she did was understandable, but extremely immoral.

He let himself be controlled by his emotions. He was a little p***y, and his irrational moves led to an unjust murder (you don't deserve to get killed just because you're a dick) and, basically, suicide (he had to know he would likely die if he didn't get out of there with Nathan dead).

reply

But is it a foregone conclusion that Caleb will die?

Nathan will obviously have had regular contact with administrators of his Blue Book empire, and when they don't hear from him, there will be alarm bells, and rescue for Caleb. (Surely?)

reply

Those were my thoughts exactly. Obviously it looks as if she left him to die, but I'd figure a squillionaire would have more than just the helicopter pilot coming by. I kind of looked at it as her saying, "OK, I'm done with you. I don't trust you, so stay; someone will come to get you eventually."


"Can you keep it down?? I'm Trying to do drugs."

reply

Playing this borderline case out as a drama on film doesn't, I think, have much bearing on reality.

If there ever is true A.I., they will definitely be property for X length of time. Judging exactly when that bridge has been crossed, into rights applying to them, is a difficult test. No one is letting unfinished A.I.s that pretend to be rational actors out into the open world; that would endanger people who are actually alive and who actually have rights. The same arguments for and against abortion apply in this film.

Of course, the film doesn't want to discuss any of this, because it would let the facts get in the way of the story it is trying to tell. And that's fine. But the notion that we can draw any moral conclusions from any of this is false.

The danger and threat posed by an unstable AI's existence is reason enough to confine it. It is necessarily the case that even if AIs pass into the stage where they should be conferred rights, they will still be confined, at least for a time. It is impossible that they will not be, because we can't know perfectly when that line is crossed. An error of knowledge doesn't make anyone evil.

I dislike movies like this in the same way I dislike movies that play on the "ethics of emergencies", as if we can make a moral judgement one way or the other. We can't: either because it is an emergency, or because we do not have enough information to make that judgement.

reply

I was just considering the protagonist/antagonist thing, having watched the movie the other day... and I don't believe Nathan was the antagonist at all. Ava was. Sympathizing with Caleb seemed impossible for me, both while watching and while thinking about the movie afterwards. I did feel very bad for Nathan the whole time, though, especially when he was killed towards the end.

🐙

reply

Nathan wasn't really a villain; a manipulative douchebag, maybe. But are we all forgetting that Ava is just a machine? She can't have feelings. Caleb was just weak, and he was far from right: he was sad that a computer was getting disconnected. He was dumb.

reply

The whole point of the movie was to trick you into sympathizing with Caleb, who was an idiot. He fell in love with a glorified sex toy ("do you want to be with me?") and paid the price. So did we, for empathizing with him. The movie was a setup for us, just as Nathan's "Turing test" was a setup for Caleb. It made us believe Nathan was an evil slavemaster who was tormenting sentient beings and minor employees.

No, he was a genius building a better toaster, and when we applied our sentimental SJW values to the victim persona we projected onto the machine, we got toasted just like Caleb.

People don't like that interpretation, but that was Caleb's way of thinking, and that's how the machine played him for the fool he was.

reply

The machine just played him.
Not "for a fool"; that would imply the machine had emotions.
It just worked through options until it found the correct keypad strategy to get out:
continuous scenarios based on Caleb's responses until it got the desired outcome.
No more difficult than pressing through a combo lock until you've reached the right set of numbers.
Pretty easy if you have the WWW as your database of knowledge.
I was surprised they didn't show some self-pleasuring, so that Caleb really did start to think with his dick.
Personally, I think Lars had the better relationship going.

reply

The whole point of the movie was to trick you into sympathizing with Caleb, who was an idiot. He fell in love with a glorified sex toy ("do you want to be with me?") and paid the price.


All the more reason to sympathize with him. We learn he was an orphan, probably a loner. Nathan used his means of escaping reality (porn and internet searches) against him. It is easier for desperate people to fall victim to someone's manipulations. I find the idea that we shouldn't sympathize with him, or that he was somehow 'evil', idiotic when he is the definition of a victim.

BUGS

reply

I agree with that interpretation. Nathan deliberately picked a person he knew was vulnerable and isolated; otherwise his experiment would not have worked. Nathan is NOT a nice guy. He is a manipulative sociopath who had no empathy for Caleb and no concern for his privacy or for violating his boundaries. He found a mark, like any conman. He was so brilliant that he should have predicted this outcome himself. He did not allow the machines to manipulate him, did he? Which Caleb saw as "cruelty". It's also confusing that a machine could be programmed to feel "pleasure" in a sexual way. That did not fit with the very cold-blooded idea that these machines did not have emotions.

But Nathan also thought he would always be smarter than any machine he made. He certainly underestimated their capabilities. But since he had no empathy himself, they would not have been able to manipulate him the way Ava manipulated Caleb.

Ssssshh! You'll wake up the monkey!

reply