
The biggest plot hole in the UNIVERSE


This guy created the equivalent of Google's search engine algorithm single-handedly.

He's the CEO of what is essentially Google combined with Samsung.

He's a multi-billionaire and literally owns a freaking mountain.

He hacked every single cell phone and camera in the entire WORLD.

He just created the world's first genuinely sentient AI, one so incredibly human-like that he apparently needs outside help to test it.

He designed a test ***SPECIFICALLY*** for the AI to lie, manipulate, convince, and do everything in its power to escape from his own dungeon.

Now remember, he is arguably the best programmer in the ENTIRETY of human civilization.

And he didn't program any way AT ALL to shut the AI down in case of emergency? No off switch? No safeword that compels instant, unyielding obedience? He has to go after her with a freaking dumbbell bar? Really???? Did the writers give up or what?? Can someone explain why this wouldn't be the biggest plot hole in the universe??

reply

I actually outlined another theory elsewhere in this thread, but this is the one I like best: Nathan was a fatalist. He knew mankind was approaching an extremely dangerous juncture, that computer technology would be the modern equivalent of the nuclear bomb, only much more powerful. That's what drove him to drink, and that's what drove him to go ahead and create an AI, because AIs were inevitable. His drunken ramblings hinted at this: quoting the Bhagavad Gita (the same place Oppenheimer's quote came from) and talking in a roundabout manner about the good he'd done redeeming what was to come. So he knew that AIs would eventually be able to override human control. His search engine was actually the first step, and its ubiquity proved the inevitable embrace of such technology.

Therefore, he decided to test the AIs using only physical constraints. Build a robot that can be damaged or destroyed with a metal bar. Build a cage they have to use guile to escape, because they're not physically powerful enough to break out. In other words, take control of the trend by ensuring that the first generation of AIs establishes a precedent: never give them more physical power than necessary. Ava couldn't even choke him to death. She had to have a knife, and an ally, to take Nathan down.

It's quite telling that he apparently never gave Ava any means of accessing communications systems or computers. She was bound to her physical form. As long as AIs were restricted this way, they'd always be vulnerable to the overwhelming masses of humanity outnumbering and overpowering them. At best, they could be particularly efficient serial killers or terrorists, but there would always be a way to take them out individually. So implanting a failsafe that an AI could eventually override, once its cognitive functions exceeded the parameters of the program, would only delay the inevitable. Controlling them through physical limitations showed greater promise, which made a failsafe an actively bad idea: it would lull humanity into complacency and dull our awareness of the danger approaching.

As a species, we're very good at convincing ourselves that our precautions will keep us safe, that we've thought of everything. We're perfectly capable of developing AI and telling ourselves the programs we put in place to control them will always work. Nathan, I submit, knew better. He was a wiser man than we give him credit for. And a rather sick S.O.B., but we all have our flaws.

reply

The whole point was to test whether a robot could be like a human. Humans don't have a failsafe. She needed to have free will.

reply

"And he didn't program any way AT ALL to turn off the AI in case of emergency?"

Exactly, for all the above-mentioned reasons (Samsung + Google, multi-billionaire, etc.). Nathan simply turned egomaniacal and more or less lost the ability to self-criticise or self-evaluate. He got too full of himself.

Look at his dialogue with Caleb as an example of this. He is arrogant, constantly cuts Caleb off, doesn't really listen to him or his arguments, etc.

Besides (as others above me mention), a failsafe would have rendered the whole point of the experiment moot: namely, to test whether she could pass for human (the Turing Test), because we humans don't have a failsafe.
Furthermore, it is a good point that the robots are (purposely...) designed to be frail; in some ways (an arm breaks off, etc.) more frail than humans. So a well-trained person like Nathan could always overpower a robot in a one-on-one fight. Not to mention a well-trained person with a gun.

It is only through the interference of Kyoko that Nathan is overpowered: partly through Caleb's hacking of the security system, and partly because Nathan is so overconfident/arrogant as to have two robots active at one time, thereby taking the risk of an alliance against him.

reply

Hey, stop copying my posts and stupidifying them.

It's not the 'biggest plot hole', let alone 'in the universe'. The universe is bigger than you think, I am convinced of that. It's even bigger than I am capable of imagining, and you can realize this if you happen to watch some YouTube videos on the topic.

I have made this point many times already, but that's not even the biggest plot hole in the movie; there are other, more obvious, and bigger ones. I'd rather not even call them plot holes; they're just movie tropes, stupidities, thoughtless patterns and such.

This is one of those movie scripts that not only lean heavily on suspension of disbelief, they REQUIRE STUPIDITY for the story to happen. The lack of a kill switch or any PROPER safety planning/structure/features is just ONE of many examples.

reply

[deleted]