MovieChat Forums > Echelon Conspiracy (2009) Discussion > feed the computer its own actions (SPOI...

feed the computer its own actions (SPOILER ALERT)


Echelon is this super-high artificial intelligence computer that is "programmed" (sworn) to defend the Constitution. However, the G-men, drunk with their own power and thinking they are being high-mindedly patriotic, let Echelon out into the wild to "clean up" the country.

In the last scene of this movie the computer geek asks Echelon to examine its own actions, and the computer concludes it must destroy itself!!! This last bit is the ONLY thought-provoking philosophical bit in the ENTIRE movie, AND it is a relevant topic in modern computer science curricula! Would a computer told to simulate itself stop? Run in an endless loop? Can a computer examine itself from within its own programming? Is this even possible for a Universal Turing Machine? Etc., etc.
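In Python-ish terms, here's a minimal sketch of the worry (the function name is mine, not anything from the movie): a machine asked to simulate itself must, inside that simulation, simulate itself again, so naive self-simulation never bottoms out. Turing's halting theorem says there is no general procedure that can decide in advance whether this kind of thing stops.

```python
def simulate_self(depth=0):
    # Each level of the simulation must spawn another full copy of itself,
    # so without a stopping condition the recursion never terminates.
    return simulate_self(depth + 1)

try:
    simulate_self()
except RecursionError:
    # Python's stack guard steps in here; a real Universal Turing Machine
    # has no such guard and would simply run forever.
    print("naive self-simulation never terminated")
```

The `RecursionError` is Python's pragmatic stand-in for what a pure Turing machine can't do for itself: notice it's stuck.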

You CS majors, IMDB fans, any thoughts??????

ASIDE: In my opinion, sadly, here in the US, real life is undoubtedly SLIPPING further and further into a right-wing schema and mindset. I don't doubt that one day, given the vast amounts of money gifted to lawmakers by the data-mining industry, the computer industry, certain lobbies, the political proclivities of the M.I.C., and the geopolitics of corporate-sponsored government, our entire "justice" system will be computer-run (and it will NOT be justice, but what the privileged think is justice, whilst they profit, immune from it). Justice will be probabilistic, circumstantial, subject to Bayesian analysis, but it won't be human. Is that justice?

reply

Thoughtful maybe, but far from original. This was lifted directly from the Star Trek episode "The Changeling", among others. But since the whole premise was lifted from Eagle Eye, I'm not surprised.

As for the political thought, is that really necessary here on IMDB? Can't any part of the net be free of political bickering? Especially when you're so wrong :}

reply

There was no implicit mechanism for feeding its own actions as input into an analysis... and as you saw, when explicitly directed to, it went into an infinite loop (or at least the reasoning-and-deduction part of the software did).

Infinite loop aside, it eventually broke itself out of the loop when it reached a certain reliability of result.
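That "break out when the result is reliable enough" idea is just a convergence check. A minimal sketch (my own toy code, nothing from the film): keep re-analyzing until two successive results agree within a tolerance, then stop.

```python
import math

def analyze_until_stable(update, x0, tol=1e-9, max_iters=10_000):
    # Self-analysis loop with an exit condition: stop once successive
    # results agree within `tol` -- a stand-in for the movie's
    # "certain reliability of result".
    x = x0
    for _ in range(max_iters):
        nxt = update(x)
        if abs(nxt - x) < tol:
            return nxt
        x = nxt
    raise RuntimeError("never reached a reliable (stable) result")

# Example: repeatedly applying cos converges to its fixed point, ~0.739
print(analyze_until_stable(math.cos, 1.0))
```

Without the `tol` check, the loop really would run until `max_iters` (or forever, with no cap), which is exactly the infinite-loop scenario above.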

The power of the programmer is that we can design ANYTHING into a system. We can literally make a system work in any way... so anything is possible.

Can we say that the AI achieved consciousness of self? First we need to define consciousness, and that goes hand in hand with defining the self... two concepts that we have historically been unable to pin down.

It seems like we run into a similar self-referential problem... can a hammer hammer a nail into itself? Can we see our own eyes directly?

We have to look at something else to see ourselves, and even then it's not reliable, because we can't certify that what we get back is real.

This line of thought always leads down to a Matrix-esque question of "What is real?" We can't trust our senses; we can't even trust our brains, our reasoning, our deduction.

So ultimately, we have no real concept of self, and consciousness appears to be simply those moments when we focus on something fully, in the moment, without mental judgments, categorizations, or interpretations... at all other times we are unconscious in a very real way... on auto-pilot.

This AI was simply following programming that never mandated that it analyze its own actions... until it was told to do so.

reply

Yes, but if programs are to be CORRECT, then "programming ANYTHING" is not the way to go.

You know, there are simple RECURSION routines that call themselves almost endlessly until correct stopping conditions are reached.
Yes, you are correct; I don't think the hammer can hammer a nail into itself, so the movie is silly to suddenly ask Echelon to chew its own output.
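For anyone who hasn't seen one, here's the textbook-style toy example of what a correct stopping condition looks like (my own illustration, not anything from the movie): the recursion bottoms out instead of looping forever.

```python
def factorial(n):
    # Stopping condition: recursion bottoms out at 0 instead of
    # calling itself endlessly.
    if n <= 0:
        return 1
    return n * factorial(n - 1)

print(factorial(5))  # 120
```

Remove the `if n <= 0` check and the routine recurses without end, the same failure mode as asking a machine to simulate itself with no base case.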

reply

There are CORRECT ways to program ANYTHING.

Especially with a learning AI... for example, an AI that uses a genetic algorithm to solve a problem will iterate through many candidate solutions, testing a specific utility measure that denotes success at a given threshold... in that case the AI can do things in an "INCORRECT" way in order to discover the CORRECT way.
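A minimal sketch of that idea (toy problem, made-up utility measure, all names mine): the algorithm makes deliberately "INCORRECT" random moves, keeps what scores best, and stops once the utility measure hits the success threshold.

```python
import random

def fitness(bits):
    # Toy utility measure: how many bits are 1; "success" is all ones.
    return sum(bits)

def mutate(bits, rate=0.1):
    # Random bit flips -- deliberately "INCORRECT" moves that explore the space.
    return [b ^ (random.random() < rate) for b in bits]

def evolve(n_bits=8, pop_size=30, threshold=8, max_gens=2000, seed=1):
    random.seed(seed)
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(max_gens):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) >= threshold:
            return pop[0]  # utility hit the success threshold: stop
        # Keep the fitter half, refill with mutated copies of survivors.
        survivors = pop[:pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return None

print(evolve())
```

Note the stopping condition is the same thing we keep circling back to: without the threshold check, the loop just churns until `max_gens` runs out.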

reply

Yeah, I meant CORRECTNESS in terms of the final outcome, not the smaller steps (e.g. Monte Carlo will try everything and anything), but for it to be "correct" it must be useful in the world.

I guess you are right. At many levels, a program CAN EASILY examine its own actions to bring about desired outcomes: simple feedback devices work on the same principle to stabilize all kinds of physical objects (like Segway scooters); gradient descent search schemes examine their past actions and outcomes to improve; genetic algorithms, like you mentioned; neural nets; on and on. Is it CONSCIOUS? We'll never know. How ironic: no definition of consciousness.
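Gradient descent is maybe the cleanest tiny example of "examining your own past actions" (toy objective and step size of my choosing, nothing official): each step looks at where the previous step left us and corrects course.

```python
def gradient_descent(start=10.0, lr=0.1, steps=200):
    # Minimize f(x) = x^2: the gradient 2x is the feedback signal telling
    # us which way the previous step overshot -- the same self-correcting
    # idea a Segway uses to stay upright.
    x = start
    for _ in range(steps):
        x -= lr * (2 * x)  # examine outcome, adjust against the gradient
    return x

print(gradient_descent())  # converges toward the minimum at 0
```

Self-examining in the feedback sense, yes; conscious, who knows.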

reply

Exactly, that's the biggest issue... we can't label something as having consciousness, or even as being alive, before we define those concepts in a meaningful way...

And quite frankly, we may never be able to define them in a meaningful way at all with our puny meat brains.

reply

LOL: "meat brains". you are a true programmer!

reply