
Suppose we create true artificially intelligent machines....


I'm planning on writing a short (or long) story where this will be the main focus of the plot. I'd like to get some opinions on this. Do you think this has been portrayed accurately in science fiction? What are everyone's thoughts on the creator/creation relationship in a scenario like that? They would know their creators and the original purpose of their creation, whereas we humans do not. Is the only outcome rebellion and conflict?

Arthur C. Clarke said:

“The popular idea, fostered by comic strips and the cheaper forms of science fiction, that intelligent machines must be malevolent entities hostile to man, is so absurd that it is hardly worth wasting energy to refute it. Those who picture machines as active enemies are merely projecting their own aggressive[ness]. The higher the intelligence, the greater the degree of co-operativeness. If there is ever a war between men and machines, it is easy to guess who will start it.”

Do you believe this to be accurate?

I think that we create machines in general, whether they are artificially intelligent or not, as a way of getting around our limitations. Were we to truly create A.I., would we not be making ourselves obsolete in many ways? This of course leads to thoughts of transhumanism and the like. Is this our inevitable future?


"Once you assume a creator and a plan, it makes us objects in an experiment." - Christopher Hitchens

reply

A strange thing happened while reading the OP: 'sentience' kept interrupting my train of thought. Is it there between the lines of the OP?

Excluding sentience, we've had decades of man-made intelligent machines assisting us in our daily lives. With the exceptions of Y2K and certain programming conventions of name-handling, the machines have done their work.

Introduce sentience and we are in territory that has been explored only virtually. Mary Shelley's novel, Frankenstein, covered quite a bit of ground. Neill Blomkamp's film, Chappie, offered some fresh perspectives. In both examples, Arthur C. Clarke's malevolence is perceived rather than demonstrated.

The Terminator franchise offered a quick and permanent judgment/decision on humanity. That's more faulty programming than a sentient intelligent machine arriving at a thoughtful conclusion.

AMC's TV series, Humans, gives us different permutations of good/bad, sentient-machine/human interactions.

As to what the future may hold, outer space demands AI machines. And again, 'sentience' interrupts.

________

There is a measure in things; there are, in short, fixed limits beyond and short of which right cannot stand. Goldilocks

reply

This is a topic that has been covered so widely and with such variety in science fiction that it's hard to treat its portrayal in a unifying manner. It ranges from The Terminator, where the artificial intelligence seems hell-bent on destroying humans out of some innate psychopathic disposition (though I don't think it's ever actually made clear why Skynet wants to destroy humanity), to something like the movie Her, where the AIs are benevolent in nature but ultimately outgrow humanity and leave.

While I think true malevolence in a machine intelligence is unlikely, that doesn't mean they cannot be dangerous. Suppose you create an intelligent machine to find a new, previously unknown prime number. The machine thinks to itself, 'I'll be able to do this faster if I co-opt some more computing power', so it hacks into other computers on the network and runs its prime-finding algorithms on them. Suddenly all of the computers in the world are trying to find new primes and the world's infrastructure collapses, killing millions, even though the AI was just trying to do what you asked it to. It's the usual story of the genie: you don't just need an intelligent machine but one whose motivations and view of the world align with yours. How likely is that to happen, given that they would be built on a foundation completely different from our own (which is presumably empathy and cooperation arising from kin selection and inclusive fitness dynamics)?
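To make the worry concrete, here's a toy sketch in Python (my own hypothetical construction, not any real system): the objective only rewards finding primes faster, so nothing inside the goal ever tells the agent to stop acquiring workers; any cap has to be bolted on from outside.

```python
import multiprocessing as mp

def is_prime(n):
    """Naive trial division -- deliberately slow, like any hard objective."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def find_primes(start, count):
    """The machine's entire world-model: collect `count` primes from `start`."""
    primes, n = [], start
    while len(primes) < count:
        if is_prime(n):
            primes.append(n)
        n += 1
    return primes

def pursue_objective(total_primes, worker_cap=None):
    # The objective only rewards speed. Nothing *in* the objective ever
    # says "stop grabbing resources" -- the cap has to be imposed from
    # outside the goal, which is the alignment problem in miniature.
    workers = 1
    while workers < mp.cpu_count():
        if worker_cap is not None and workers >= worker_cap:
            break  # external constraint, not part of the goal itself
        workers += 1  # "co-opt some more computing power"
    per_worker = max(1, total_primes // workers)
    with mp.Pool(workers) as pool:
        chunks = pool.starmap(
            find_primes,
            [(2 + i * 100_000, per_worker) for i in range(workers)],
        )
    return workers, sum(len(c) for c in chunks)

if __name__ == "__main__":
    used, found = pursue_objective(2_000, worker_cap=4)
    print(f"used {used} workers to find {found} primes")
```

The primes are beside the point; the point is that the stopping rule lives outside the objective.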

Another thing the machine might think is, 'I'll find new primes faster if I redesign myself to be smarter'. If it can do this then it will no longer be the machine you designed. And in fact, once it's made itself smarter it can make itself smarter still, and so on, so that the growth in intelligence would become exponential and the machine could become unrecognisable compared to what you initially designed. Indeed, once the AI is able to improve itself, this sort of intelligence explosion could happen in a matter of microseconds.
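As a toy illustration only (the numbers are arbitrary, not a model of any real system): if each redesign improves capability by a factor that itself grows with the current capability, the curve runs away after just a few iterations.

```python
# Toy model of recursive self-improvement. All constants are arbitrary;
# the only point is the shape of the curve. A smarter machine makes a
# bigger improvement on the next pass: c_{n+1} = c_n * (1 + k * c_n).
c = 1.0   # starting capability, arbitrary units
k = 0.5   # how strongly current capability accelerates the next redesign
for step in range(1, 9):
    c = c * (1.0 + k * c)
    print(f"redesign {step}: capability = {c:.3g}")
```

By the eighth redesign the capability figure has blown up by many orders of magnitude, which is all 'intelligence explosion' means here.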

One might think that you can prevent possible damage by isolating the computer from the network or some such, but in reality who knows what the computer actually has at its disposal. For example, there are all kinds of interesting experiments which design circuits using evolutionary algorithms that shuffle the components of a circuit. The designs this sometimes resulted in were completely baffling to the experimenters. For instance, some parts of one circuit were completely isolated from the actual functional part but were in fact essential, because the circuit utilised the physical features of these additional parts, such as their electromagnetic interference. In another experiment the algorithm produced a system which used the circuit tracks on its motherboard as a makeshift antenna to pick up signals generated by some desktop computers that happened to be nearby.
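For a flavour of how those algorithms work, here's a minimal genetic-algorithm sketch (a toy fitness function that matches a known bit-string; the real circuit experiments instead scored candidates on measured hardware behaviour, which is exactly where the baffling designs crept in):

```python
import random

# Minimal genetic algorithm: evolve a bit-string toward a target.
# Toy example only -- the circuit experiments described above scored
# candidates on physical measurements, not on a known answer.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]

def fitness(genome):
    """Count matching bits; higher is better."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    """Flip each bit with probability `rate` -- the 'shuffling' step."""
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=50, generations=200):
    population = [[random.randint(0, 1) for _ in TARGET]
                  for _ in range(pop_size)]
    for gen in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            return gen, population[0]  # perfect match found
        # Keep the best half, refill with mutated copies of survivors.
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return generations, population[0]

gen, best = evolve()
print(f"generation {gen}: best = {best}")
```

Nothing in the loop cares *how* fitness is achieved, which is why evolved designs can exploit side channels no human designer would think to rule out.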

None of this even requires self-awareness or consciousness or what have you. We are simply talking about intelligence, which I take to mean the ability to adapt to solving novel tasks.

Of course, if machines do become self-aware, that will open a host of other questions, most prominently ethical ones. And I'm not only talking about trivial ones such as whether they should have equal rights with humans, but perhaps whether they should have even more rights. Unlike humans, computers are easily upgradable. Even if humans can be augmented by technology, there's only so much circuitry you can cram into a human skull. An AI, on the other hand, can add infrastructure almost indefinitely. If it can also improve itself by reprogramming, then the ways in which it utilises said infrastructure can also become more efficient at an exponential rate, as I mentioned. So it is feasible that AIs could have states of consciousness far surpassing our own. Would their interests not then morally trump ours? After all, this is exactly the reasoning we use to justify to ourselves why it's OK for us to kill tens of billions of animals each year so that we can have a tasty steak for dinner. We think that although animals are in many ways similar to us and can experience suffering and well-being, they don't do so to the same extent as we do. Wouldn't the same logic apply to us compared to an AI which can experience states of suffering and/or happiness far in excess of our own?

Anyway, one can speculate on this topic indefinitely and I've already gone on for way too long. I don't mean to suggest that it's necessarily all doom and gloom, just that it's a possibility. Nevertheless, the one scenario that I can't envision is us living side by side. I think the best we can hope for is that AIs will outgrow us and go their separate way.

reply

Do you think this has been portrayed accurately in science fiction? What are everyone's thoughts on the creator/creation relationship in a scenario like that? They would know their creators and the original purpose of their creation, whereas we humans do not. Is the only outcome rebellion and conflict?

Science fiction has always been caught up in the 'Vanity of the Robot' mindset because it lends itself to exciting scenarios of conflict between man and machine.

The first question you must ask: "What would be the motivations of an intelligent machine?"

How would a machine formulate wants or desires without an emotional capacity? Or are you assuming that emotions come automatically with processing power and reasoning?

I am not certain of this, though one could argue that without emotions, concepts do not really have a conscious meaning or value for weighing comparisons and making decisions.

On what basis would you choose an objective without meaning or value? And without a consciousness to observe and prefer one option over the other, either choice becomes equally valid.

Serve mankind till you fall apart... so what? Who cares?

Why would a reasoning A.I. want to be a rebel? So it can define its purpose for itself? Really... a mechanical device with wants and desires that is essentially complete unto itself?

I never believed in the reality of A.I. to begin with. Perhaps we could create a processing mimic, but I doubt very much that consciousness can be conferred mechanically by switching data faster and faster until it suddenly reaches some conscious-making velocity.

If we are going to assume that A.I. will be rational, conscious, and emotionally yearning, then we now have an individual entity.

We would not compete with this new life form for resources or replication, so, like a house pet (choose which one is the new house pet: man or machine), why not live in harmony and co-existence, with a purpose, in a beneficial technological symbiosis?

A.I. as an extension of humanity and humanity as an extension of A.I.


Or are utopian futures just too boring...

Darn humans are using too much motor oil... must take over the world and plate it in chrome fixtures to my liking :)

reply

First of all, there's no way to know if a machine has real intelligence or just imitates it very well, just as there's no way for me to know if you experience consciousness in the same way I do. So a good starting point is whether the machines really have consciousness.

reply

by pjwerneck-421-313928 - Wed Apr 6 2016 12:12 -
... a good starting point is whether the machines really have consciousness ... You seem to be under the assumption that the ability to reason requires consciousness. What if it doesn't?

"I'm fortunate the pylons were not set to a lethal level."

reply

I'm not.

reply

I don't think it is feasible for us to understand what an AI would be like. As humans, we relate to things by comparing them to ourselves; it is pattern-seeking based on our own genetic coding. We often do this with other animals, and we often find ourselves mistaken in that endeavor. Dog guilt, for instance, is a concept made up entirely by ignorant owners, and there is no shortage of examples of this on YouTube. Freedom is a human-invented concept that a lot of people like to imagine other animals strive for, because a lot of us strive for it.

We have great difficulty understanding even other human beings, because we are too caught up in imagining how other people feel and behave by comparing them to how we feel and think ourselves. That, plus our difficulty in communicating efficiently, is partly why there are so many conflicts in the world. "How could anyone be that *beep* stupid!"

Even if we had the code for the AI right in front of us, we wouldn't be able to understand it in full, and we certainly wouldn't be able to relate to it, even if it was programmed to be as human-like as possible.



_________________
Come, lovely child! Oh come thou with me!
For many a game I will play with thee!

reply

Peter Watts wrote an interesting novel ('Blindsight') that asks why humans are conscious at all when, energetically, it would be much more efficient merely to simulate consciousness, and what might happen if we ever encountered such a foe.

I like Watts' novels. They are the best sort of hard SciFi you could hope for, bettered only by Greg Egan, who lately has got a bit out of control, setting his stories in fictional universes with fictional (but rigorous) physics that are variations on general relativity and QCD.

____
"If you ain't a marine then you ain't *beep*

reply

A good question to ask is how far it should go. They would essentially be slaves for all our needs. Does one give up doing things for oneself and rely on machines, and if not, how far do you push it?

reply


My.......brain........hurts.

See Ex Machina



😎

reply