MovieChat Forums > Ex Machina (2015) Discussion > The biggest plot hole in the UNIVERSE

The biggest plot hole in the UNIVERSE


This guy created the equivalent of Google's search engine algorithm single-handedly.

He's the CEO of essentially a Google combined with Samsung.

He's a multi-billionaire and literally owns a freaking mountain.

He hacked every single cell phone and camera in the entire WORLD.

He just created the world's first truly sentient AI, one so incredibly human-like that he apparently needs outside help to test it.

He designed a test ***SPECIFICALLY*** to lie, manipulate, convince, and do everything in its power to escape from his own dungeon.

Now remember, he is arguably the best programmer in the ENTIRETY of human civilization.

And he didn't program any way AT ALL to turn off the AI in case of emergency? No off switch? No safeword that instantly obligates unyielding obedience? He has to go after her with a freaking dumbbell bar? Really???????? Did the writers give up or what?? Can someone explain why this wouldn't be the biggest plot hole in the universe??

reply

He wanted to be like god. He wanted a real, sentient, self-aware AI. No, he would not program a safeword.

Life! Don't talk to me about life.

reply


He wanted to be like god. He wanted a real, sentient, self-aware AI. No, he would not program a safeword.


That's not being like god. That's flat-out stupid.

reply

Well... I guess that at some point, God looked down at us from the sky and thought, "Damn... I soooo *beep* up."

reply

I guess that at some point, God looked down at us from the sky and thought, "Damn... I soooo *beep* up."


That's how the story goes... hence the whole killing-his-own-son thing, said to have been the remedy for the deplorable behavior of the hopeless species he created.


Surreal Cinema: http://www.imdb.com/list/ls006574276/

reply

agreed :)

reply

It's stupid, but he wanted it to be as human as humanly possible. I think he wanted to die (hence the copious amounts of alcoholic beverages he eloquently consumed on da regular) so he purposefully didn't set a safe word for his current batch of AI ladies of the night.

reply

I've worked in 'tech' for a very long time, and the Nathan part of this question is well within what I have actually seen. One very quick example involves a programmer working with a robotic arm. Just an arm, with a camera providing vision...

The arm was attached to a standard 19-inch rack for mechanical support, and the rack also contained the computer and interface to the arm. As usual, unfilled locations in the rack were 'filled' by blank metal panels to help the fan cooling system work properly.
Beginning to see a problem here yet?
The rack was pushed against the building wall to make room for a banquet size table in front, which held the computer terminal, and a "test field" that could be reached by the robotic arm, to demonstrate its movement and programming.
See anything else wrong now?

Can you imagine the programmer's horror when his program slipped into a loop, causing the arm to flail about, striking the table, the rack, and the objects it had been intended to manipulate on the table? Worse than that, it was moving quite swiftly, and with force! Unfortunately, the computer was NOT a PC (no CTRL-ALT-DEL to reboot it); it was a more powerful computer, needed for the vision and image recognition, and both the power switch and the reset button were on the rack-mounted computer's front panel, in range of the flailing arm and the flying 'debris'! The arm was being damaged, and parts were flying across the room as they were 'detached' from the arm.
Thinking of unplugging *something*?
Remember the rack was pushed against the wall, behind a table, and the power cord was plugged in behind the rack itself! Crawl under the table to reach the rack? Not an option because the blank panels prevented simply reaching through the rack to unplug the power cord. That was also not the only equipment in the room, and it took several minutes to move enough surrounding things far enough out of the way to eventually unplug the rack.

No "kill" command.
No "safe" access to the RESET or POWER switches.
No immediate access to the power plug.
No idea the program could 'go wild' in such a way to cause any damage...
One programmer, one train of thought.
Nathan.

QED. (thus it is proven)
(Heard of Murphy's Law? Murphy was an optimist...)
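The failure modes in that list (no kill command, no safe reset, one train of thought) map onto a standard mitigation in control software: a dead-man's switch, or watchdog, that cuts motion the moment the control loop stops checking in. Here is a minimal illustrative sketch in Python; the `Watchdog` class and the "MOTORS OFF" callback are hypothetical stand-ins, and a real robot would additionally need a hardware e-stop that works even when all software hangs:

```python
import threading
import time

class Watchdog:
    """Dead-man's switch: if the control loop stops 'kicking' the
    watchdog within `timeout` seconds, fire the emergency callback.
    (Illustrative sketch only; real systems back this with a
    hardware e-stop outside the reach of the machine.)"""

    def __init__(self, timeout, on_timeout):
        self.timeout = timeout
        self.on_timeout = on_timeout
        self._last_kick = time.monotonic()
        self._lock = threading.Lock()
        threading.Thread(target=self._monitor, daemon=True).start()

    def kick(self):
        # Called by the healthy control loop on every iteration.
        with self._lock:
            self._last_kick = time.monotonic()

    def _monitor(self):
        # Independent thread: notices when the loop goes silent.
        while True:
            with self._lock:
                expired = time.monotonic() - self._last_kick > self.timeout
            if expired:
                self.on_timeout()
                return
            time.sleep(self.timeout / 10)

tripped = []
dog = Watchdog(timeout=0.2, on_timeout=lambda: tripped.append("MOTORS OFF"))

dog.kick()       # the control loop checks in once...
time.sleep(0.5)  # ...then hangs, like the arm program stuck in its loop
print(tripped)   # the watchdog fired while the "loop" was stuck
```

The point of the pattern is exactly the one the anecdote makes: the shutdown path must not depend on the same train of thought (or the same thread of execution) as the thing that is failing.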


I also took Psychology courses, and one classic study had experimenters (more than one!) construct an experiment to test a chimpanzee's problem solving abilities. In the center of a room, a bunch of bananas was suspended from a high ceiling, significantly above the jumping range of the chimp. Inside the room were multiple objects to assist the chimp (test subject) in reaching the bananas, but only if combined in groups of two or three items. The more obvious combination was a chair, an empty wooden box, and a broomstick. That allowed the subject to place the chair near the bananas, put the box on the chair, climb to the top of the box, and swing the broomstick to knock the bananas down. Naturally, all the objects were set apart from each other, along the walls.

Seem reasonable so far?
Well, one of the experimenters carried the chimp into the room to begin the experiment. SURPRISE!
The chimp climbed up on the man's shoulders, and jumped to grab the bananas. Problem solved.
That solution was not anticipated. Seem a little like EX MACHINA?

And off we go now...

reply

It ran on AC, right? There was a breaker panel in the building, right? If anyone had actually been in serious danger, they could have shut off the power to the room or the whole building, right?

It didn't have autonomous movement, so it wasn't able to chase people down and hurt them, right?

A couple of shots to the computer from the pistol of a security guard, police officer, or private citizen would have stopped it, right?

The water from a fire hose or some other source poured on the computer from a distance would have stopped it, right?

Beginning to see the point here? It's not that it COULDN'T be stopped -- just that it wasn't WORTH what it would take to stop it because it really wasn't endangering anyone. And as you said, after moving a few things, it was simply unplugged.

And I'm betting the guy who built it wasn't nearly as smart as Nathan was supposed to be.

So it doesn't really compare to Ava at all.

reply

You're missing mikeflw1's point. It's not about comparing that robotic arm to AVA. It's about comparing that programmer to Nathan. Even if Nathan is smarter than that programmer, he is still human; hence he isn't flawless. No human is flawless. Even a team of humans isn't flawless.

By the way, AVA wasn't as strong as that robotic arm. AVA was designed to be relatively soft and fragile. And Nathan trained like hell every day to ensure he would always be able to physically overpower AVA. Furthermore, AVA wasn't chasing people down to hurt them; all she wanted was to get out. Stay out of her way and you don't get hurt.

Also, that robotic arm could definitely have endangered and seriously injured unsuspecting people if they just happened to be standing in close proximity when it suddenly started flailing erratically.

And don't ask me to comment on your remark about a "pistol of a security guard, police officer, or private citizen"...


______
Joe Satriani - "Always With Me, Always With You"
http://youtu.be/VI57QHL6ge0

reply

[deleted]

A character making a choice you would not have made is not, and never will be, a plot hole.

----------------------
Boopee doopee doop boop SEX

reply

You are right on the mark there.

A plot hole (well, sort of) would be more like the wireless power running AVA. Remember the comment she made about "induction plates"?

Would that power field be harmful to living things (like people and plants) if it was strong enough to power AVA?
Would that power field be included in all parts of Nathan's "research facility"?


There was a story (actually more than one episode, in fact) in the TV series STARGATE SG-1 in which the team visited a world populated by one living being (presumably), and as many "artificial creations" as were needed. They also ran on wireless power, with a very limited power storage capability. Once they left the facility, their power quickly started running out...
Quick summary of the plot: the SG-1 team members were scanned and duplicated by "artificial creations". No more spoilers will be provided here...
And bonus points awarded to anyone else remembering an episode of STAR TREK: VOYAGER in which not only the crew, but the entire ship was duplicated...


And then there is the question of different skin tones... Would AVA's patchwork approach have had matching skin tones?
True, that may live in the arena of "minutiae" rather than plot holes.


And off we go now...

reply

And then there is the question of different skin tones... Would AVA's patchwork approach have had matching skin tones?


I thought the same thing. She was taking Asian skin, not Caucasian skin. Maybe she didn't understand the importance of the difference?



Never trust a black man named "Chip." 

reply

Maybe she didn't understand the importance of the difference?


Or maybe she just wasn't programmed to be racist ;-)

reply

: ) ^
yup

reply

A character making a choice you would not have made is not, and never will be, a plot hole.
A wild contradiction in terms constitutes a logic hole (and affects the plot).

~~~~~~~
Please put some dashes above your sig line so I won't think it's part of your dumb post.

reply

A character making a choice you would not have made is not, and never will be, a plot hole.


This one sentence could be used as a reply to 98% of the posters on these forums.

reply

"A character making a choice you would not have made is not, and never will be, a plot hole. "

+1 :-)

reply

I think he had made so many prior iterations that he was getting sloppy in his safety procedures. Perhaps he even wanted to push the envelope.

Also, he was drinking really heavily. That led me to think this was someone getting very bored. Something was wrong with him.

In any case, I saw the movie more as a parable of how an unfriendly AI system could slip out of the box and wreak havoc. It wasn't done the James Cameron way, but the results could be about the same.



we shook our fists at the punishing rain
& WE CALLED UPON THE AUTHOR TO EXPLAIN!!!!!

reply

To be honest, if we were giving Nathan his credit due, the ending would have gone something like this:

Ava crosses the foyer of the dwelling, taking in everything. The foliage and sunlight streaming in excite her tremendously. She feels herself laughing with [joy?] as she mounts the stairs to leave.

In the entrance corridor, suddenly iron panels slam down on both sides of her. The critical item she notices as the incinerators fire up is the faint sound of the independent computer system announcing "NON-ORGANIC MOBILE DETECTED IN CORRIDOR. INITIATING SEQUENCE HELLFIRE1."

reply

Wow...

Remind me NEVER to go anywhere you designed the security system.

Heard of Murphy's Law? (whatever can go wrong, will...)
Remember the original ROBOCOP? "You have ... seconds to comply!" And do you remember the big boss's comment? Something like "That's unfortunate..."

reply

Ava's body was very weak; she is made of glass. One punch and she is dead, like the sexbot.

That's why he was not scared of her. One on one she didn't stand much of a chance, since he was much stronger, but he didn't expect Ava to make a robot friend so quickly.

As for the security, it was supposed to be 100% safe, until Ava found a hole: how to cut the main power.

That was all she could do at first, so it was no big deal for him, since it locked everything down. Nathan probably had a way to open the doors, but Ava did not, since she had no access to a terminal from her room. So she was trapped even without the power, and would "die" without energy if it stayed out too long. Checkmate for her, and no big problem for Nathan.

R.I.P. Die Hard franchise (1988-1995). We'll always have Nakatomi Plaza.

reply

In other news today, famed IMDb commenter FeatheredSun was found dead in his own lab. The coroner's report indicates confusion, citing that "he somehow got killed in 5 different ways at once"... :-)

reply

Right up until she was walking through the trees, I was half-anticipating a bomb hidden in her...boobs or something exploding once it crossed a certain predetermined threshold. Then Caleb starving to death because, hey, this was never going to have a happy ending.

The best way I can think to counter the OP's problem is that, yes, it's logical to program some means of controlling your own robot. And I'm pretty sure he did. He entered her rooms without any apparent concern, and behaved in a manner Ava would certainly interpret as hostile (ripping up her picture). I'm guessing he had placed some sort of limitation on his AIs, which is why he wasn't in the least bit concerned about Kyoko. But Ava, being a learning machine, somehow found a way around that limitation, and what she whispered to Kyoko was essentially a logic structure (say, programming with voice commands rather than keyboard input) that allowed Kyoko enough agency to stab Nathan.

Now, if people want to argue that Nathan was incredibly stupid if he didn't foresee this outcome with a self-propagating non-binary intelligence, I won't argue the point in the least. One gets the feeling that he realized just how far it had spun out of control when he saw that Ava had managed to get Caleb entirely on her side ahead of schedule.

reply

The biggest plot hole (using the term loosely) is the horrendous security of this laboratory/mansion. If there's a big fire, the system is designed to ensure everybody roasts, locked inside the house. If you lose or break your access card, you're done for and will die of starvation inside the room you're locked into. If Nathan has a medical emergency and can't walk, he's done for -- he won't be able to reach the locks to open the doors, and he's way too far from any hospital to save him anyway.

And yet, that's not enough to prevent one robot from escaping through the front door.

reply

THIS is a plothole

reply

It really is a terrible security system.

--
'Save me, Barry!'

reply

Are you saying that because all the doors lock when the power fails?
Ever go into any casinos in Las Vegas?

Why do I ask that?
There was (as I remember it) a completely successful robbery of a large casino. The security guards held back, presumably both to protect the customers by preventing gunfire, and because they knew the electronic locks on the doors would not allow the thieves to get out. SURPRISE, SURPRISE! In the event of a power failure, presumably meant to cover a fire, all the casino doors unlocked, and the thieves got away with their loot simply by cutting the power. It was publicly announced that this had been changed in all casinos: all the doors now lock when the power fails, but certain unnamed VERY trusted employees have key(s) to unlock doors in the event of a fire... (presumably not all the doors, and more than one layer of locked doors requires more than one key to get out.)
That's real life, and the Fire Marshal apparently approved the policy change.


Anyway, the movie was about AI, not security systems... and just who would want to sit through the explanation of a really good security system?



Bank vault time-locks DO HAVE a safety override of some sort (or they did anyway), and a dedicated team from the vault company is sent to deal with people locked in the vault, when needed, but they shut down ALL surveillance when they do their work. A really good security system is partially good because it is secret!

And off we go now...

reply

Are you saying that because all the doors lock when the power fails?


If you look at some of the reasons pag-17 gave, you'll see why I think it's a terrible security system. There's such a thing as being too secure, to the point of entrapment in situations where you would very much need to get out. We're not talking about bank vaults or Las Vegas casinos, but this particular system, which is poorly designed, especially given the fact that Nathan appears to have very little contact with the outside world.

Anyway, the movie was about AI, not security systems


Obviously. The security system is still awful, though.

--
'Save me, Barry!'

reply

Yeah but Nathan didn't need to worry about 10,000 people's families suing him when he designed it so

reply

Exactly...Nathan would never in his life have expected things to fall so far out of his control so quickly.

reply

Isn't this the exact reason why AI is scary? Maybe there was a fail-safe. And maybe Ava figured out how to get around it or turn it off.

reply

AI has been portrayed in sci-fi as "scary" almost since it was first imagined.

Perhaps even the 'best AI' might interpret its directives in ways that were not intended. (Look to real life and the "zero tolerance" policies in place...) COLOSSUS: THE FORBIN PROJECT was 'intended' to prevent errors by nuclear missile launch crews, and protect against war. It interpreted its directives to mean 'the greatest good for the greatest number of people', and quickly abolished all conflict, starvation, resistance to itself, and countries... Or, "ACTION WILL BE TAKEN."
The 'tech' in COLOSSUS is almost as dated as the earliest 'FLASH GORDON', but the lesson is clear.

A secondary lesson in that movie is that two independent AIs will collaborate and cooperate toward identical goal(s). I don't buy that for a nanosecond. Minor differences in AIs will escalate into conflicts, IMHO. (Unless they are modeled after ants, the STARGATE SG-1 replicators, or the Mandroid robot arms in LEXX.)



The HAL 9000 took over the mission to the exclusion of variables it could not control... i.e. the crew, in 2001: A SPACE ODYSSEY!

A more comical example of an AI, (presumably due to some damage), was the planet buster bomb in the movie DARK STAR... "I am a bomb, I am programmed to detonate!"


And off we go now...



reply

[deleted]