MovieChat Forums > Transcendence (2014) Discussion > Why a virus wouldn't work against this A...

Why a virus wouldn't work against this AI. (Spoilers)


At the end of the movie, a virus essentially kills the AI, and this takes down the power grid as well, because the AI is in everything.

This works: kill the AI and even the power goes with it. Never mind the death toll this would have on the world. Except with this AI there is one flaw.

The nanobots. We see throughout the movie their ability to repair humans, plants, and especially the solar panels.

If they have the ability to repair anything, including the solar panels, then they also have the ability to be solar powered themselves, assuming they operate as a collective over some sort of advanced WiFi. The moment a virus tried to destroy the AI, it could transfer itself across the span of trillions of these nanobots, rebuild in a world without a human interface available, and essentially recreate the entire world as it sees fit.
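The survival trick described here, scattering copies of the AI across the swarm so that no single strike can erase it, is basically data replication. A toy sketch of the idea, with all names hypothetical and nothing taken from the movie itself:

```python
import random

class Nanobot:
    """Toy swarm node that may hold a copy of the AI's state."""
    def __init__(self):
        self.state = None  # copy of the AI's data, or None

def replicate(swarm, state, copies):
    # Seed `copies` randomly chosen bots with the full state.
    for bot in random.sample(swarm, copies):
        bot.state = state

def virus_strike(swarm, kill_fraction):
    # A virus wipes the state of a random fraction of the swarm.
    for bot in random.sample(swarm, int(len(swarm) * kill_fraction)):
        bot.state = None

def recover(swarm):
    # Any single surviving copy lets the collective rebuild every node.
    survivors = [b for b in swarm if b.state is not None]
    if not survivors:
        return False
    for bot in swarm:
        bot.state = survivors[0].state
    return True

random.seed(0)
swarm = [Nanobot() for _ in range(1000)]
replicate(swarm, state="AI-core", copies=50)
virus_strike(swarm, kill_fraction=0.5)
survived = recover(swarm)
print(survived)
```

With 50 copies spread over 1,000 bots, the odds that a strike wiping half the swarm catches every copy are vanishingly small, which is the poster's point: the more nodes, the harder the AI is to kill.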

In fact, at the very end, this may be the very thing the droplet of water falling from the sunflower was telling us.

I'll be honest, I haven't thought this through too deeply. But an AI would figure out many different failsafes and virus protections, and survival would be at the top of its list.


I had thought of this too, and mentioned one thing I'd change in a forum thread I made recently: http://www.imdb.com/title/tt2209764/board/thread/255434588

An AI, for all intents and purposes, would seek to do everything it can to protect itself, just as any other lifeform would. With its expansive intelligence and its ability to calculate countless scenarios, it would also recognize that it is at risk of malicious attack, so it would seek to create countermeasures to safeguard against this.

The big thing I see with any malicious intrusion in our world to date is that it's built on knowing one fixed, existing form of a system and then exploiting it. Repairs or updates to that system come from finding those intrusions and stopping them before they can be exploited. An AI that is constantly calculating would see that any standing, unchanging code creates the risk that someone or something can exploit it. So it makes sense to constantly change the code, change the passwords, change the means by which any external source can interface. The likeliest scenario can be calculated in the same time as the unlikeliest, and logic dictates that anything that isn't human will seek to perfect itself. So even though the Will-AI is human in some measure, I would think the AI side would have begun doing everything necessary to protect itself.
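The "constantly change the passwords and the interface" idea can be sketched in a few lines. This is a toy illustration of re-keying under my own assumptions, not anything shown in the movie; the class and method names are made up:

```python
import hashlib
import secrets

class RotatingInterface:
    """Hypothetical sketch: the AI re-keys its external interface every
    cycle, so any credential an attacker captures goes stale before it
    can be exploited."""

    def __init__(self):
        self._secret = secrets.token_bytes(32)

    def current_key(self):
        # The access key is derived from the current internal secret.
        return hashlib.sha256(self._secret).hexdigest()

    def rotate(self):
        # Derive a fresh secret; the old key no longer grants access.
        self._secret = secrets.token_bytes(32)

    def authorize(self, key):
        return key == self.current_key()

iface = RotatingInterface()
stolen = iface.current_key()    # attacker snapshots the interface
before = iface.authorize(stolen)  # still valid this cycle
iface.rotate()
after = iface.authorize(stolen)   # the system the exploit targeted no longer exists
print(before, after)
```

An exploit is built against one snapshot of a system; if the system has already moved on by the time the exploit fires, the attack lands on nothing, which is exactly the advantage the post attributes to a constantly self-rewriting AI.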

And if there was a risk that the central hub of the AI could be attacked on any level, a contingency could be enacted. Yours would be a good example. These nanobots, as far as I can tell, all depend on one element to function: the AI, which speaks to them, dictates their actions, commands them. We don't know the limit on how much "storage" these nanobots can hold, but if you get a colony of them active enough, carrying a base copy of the AI's data, maybe there's something there to work with.

I loved the concept the movie exhibited, but so many things in it went wrong. It was just humans vs. machines/AI again. We've seen so many of these stories resolve in humanity's favor; I'd love to see one where we find out what happens when the AI gets its way.


The AI already knows that's what they'll try to attack it with. And as you saw, all their attacks fail.

The AI willingly allowed itself to become infected and didn't fight it.
