MovieChat Forums > Transcendent Man (2011) Discussion > Ray never acknowledges the 'ghost'

Ray never acknowledges the 'ghost'


I.e. the "ghost in the machine," as it is referred to - the part of us that is "alive" and "aware" of all of this happening. Our science really has no idea what consciousness is or how it can come from this supposed "physical matter" that we believe in, and somehow in the next 5-10 years we are supposed to make computers as smart as us? Even if our brains are just computers, we will never be able to develop theories on actually making computers "aware." As it is now, they will always just be processing while completely oblivious to it all.

reply

I have not seen the movie, but I have my own thoughts on what consciousness is. Consciousness is a three-step process. First we experience the world around us in real time, we then record those experiences to memory, and later on, we contemplate the meaning of those memories using knowledge we have gained based on past experience. I don't see a reason why a computer mind could not have every bit the amount of consciousness as a human being. Information, experience, and an ego for motivation are all that are needed.
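The three-step loop described above (experience, record, contemplate) could even be sketched in toy code. Everything here is an illustrative assumption - the class name, and treating "knowledge" as simple frequency counts - not a real AI design:

```python
# Toy sketch of the three-step loop: experience events in real time,
# record them to memory, then contemplate them using accumulated knowledge.
# All names and mechanisms here are illustrative assumptions.

class SimpleMind:
    def __init__(self):
        self.memory = []       # step 2: recorded experiences
        self.knowledge = {}    # crude "knowledge": how often each thing was seen

    def experience(self, event: str) -> None:
        # Steps 1 and 2: perceive an event and record it to memory.
        self.memory.append(event)
        self.knowledge[event] = self.knowledge.get(event, 0) + 1

    def contemplate(self) -> str:
        # Step 3: reflect on memories using knowledge gained from them.
        if not self.memory:
            return "nothing to reflect on"
        most_familiar = max(self.knowledge, key=self.knowledge.get)
        return f"most familiar experience: {most_familiar}"

mind = SimpleMind()
for event in ["rain", "sun", "rain"]:
    mind.experience(event)
print(mind.contemplate())  # most familiar experience: rain
```

Obviously this toy "mind" isn't aware of anything, which is arguably the OP's whole point - but it does show the three steps as a mechanical loop.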

What makes a dog's consciousness different from a human's? Dogs experience the same world as we do, but they are short on memory, language, and knowledge. A dog's experience in any one moment might be very similar to our own, because they experience the same world as we do; the only thing is that a dog will never sit down and dissect what exactly just happened and why or how. A dog does not have access to physics or history books, and even if it could somehow understand them, a dog does not have the capacity to store as much information as a human.

That said, I can only imagine what a consciousness with more available memory, running at a faster speed, and able to see many more angles/possibilities than a human would be like.

As far as emotions go, they are just chemical reactions that are interpreted positively or negatively by the brain. All that would have to be done is to set the parameters.

Human beings are naturally and necessarily egoistic; they think that they are the pinnacle of existence. But this is because all they know how to be is human; they don't have access to other people's consciousness. This is why there is racism, sexism, and, if there is such a thing, speciesism.

reply

The premise of thedesertfox's post assumes that there is some special part of a human being which is "alive," and that this part is beyond the physical world. This idea is known as "vitalism."

Personally, I reject vitalism. There is considerable (though not conclusive) evidence against it, and really none for it.

In effect, I'm saying that I do not believe in ghosts.



I disagree with you, but I'm pretty sure you're not Hitler.
- Jon Stewart

reply

I'm pretty sure that most (if not all) scientists are very much NOT vitalists when it comes to scientific study.

In regards to what coco said: I think there are two ways of describing what separates our brains from lesser animals. You can describe the abstract concepts of the differences in consciousness, or you can describe the differences in the physical aspects of our brain.

It's the abstract discussion that is hard to do. And that's why I think the OP is having a hard time understanding. It's hard to describe exactly what conceptualizations separate our brains from other animals.

But really the physical aspect is pretty easy to understand. We simply have much more advanced (evolved) brains than any other species on the planet. Although we don't know all of the physical aspects yet, we will probably learn them one day and realize how our brains are just a more complex version of other animals'.

reply

Actually, what you are describing isn't consciousness, or at least not the way I would define it.
A dog has no consciousness in the human sense, not because of its lack of capacity to store information, but because it can't process the information as we can.
You could of course argue about this, but a dog has no consciousness because it doesn't really have a concept of time itself. A dog has no way of knowing it is alive, a solitary life-form that can make decisions based on a mental simulation of the future. A dog doesn't know there is a future, except on an instinctual level. It can't understand the concept of the future itself, which is basically what sets humans apart from animals.
A dog will never plan for winter, because it doesn't know, can't mentally visualize, that there will be a winter. It can't even grasp the concept of the sun rising tomorrow. Animals always exist in the moment, in the now; all the planning they do for the future is instinctual, not conscious. A bird doesn't build a nest because it knows it will lay eggs in it in the future. It does it because of an old program that runs in its brain without the bird realizing it or its impact. Therefore, of course, an animal can never be afraid of death as a concept.

That may not be true for all animals, of course; some magpie species, dolphins, and apes may be on an intellectual level where they can grasp the concept of time. But I'm not an expert on that, so maybe someone else can chip in here.

reply

OK, please understand that I'm not making any "animals are people too" argument here. I'm not arguing that animals do (or do not, for that matter) have consciousness, or what consciousness even is. But...

How do we know that a dog doesn't know about his own mortality? It isn't like we can ask him about it. How do we know that a bird doesn't know that the nest it builds today will hold eggs next month? It wouldn't understand this like we do, but how can we be so sure that it doesn't understand it at all?



I disagree with you, but I'm pretty sure you're not Hitler.
- Jon Stewart

reply

Dogs are conscious. Living in the moment is a sign of consciousness. They obviously aren't as adept as humans but I think they realize who their owners are and can recognize emotions. Just watch a dog get excited when people get excited. That's consciousness.

As for machines, it's possible they could develop a consciousness if the proper programming is in place. You could program a machine to react in the same ways a person would under certain circumstances. For instance, you could program a machine to recognize emotion on people's faces and have it categorize the emotion appropriately as sad, mad, happy, etc. Though it wouldn't really be conscious, it would simulate consciousness, which is what the human brain does if you think about it. But people don't like to think their emotions are simulations of the real thing. It's the real thing to us because we like to think our emotions aren't mere processes our brain is running in reaction to our perception of reality. If a machine gains the ability to take in the variables of life, there should be no reason why it couldn't function as a human does.
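The kind of categorization described above can be sketched as a toy rule. To be clear, the feature names here (mouth_curve, brow_furrow) and the thresholds are made up for illustration; a real system would first extract such measurements from face images:

```python
# Toy sketch of rule-based emotion categorization, as described above.
# Feature names and thresholds are illustrative assumptions only.

def categorize_emotion(mouth_curve: float, brow_furrow: float) -> str:
    """Map two hypothetical facial measurements to a coarse emotion label.

    mouth_curve: positive = corners turned up, negative = turned down.
    brow_furrow: 0.0 (relaxed) through 1.0 (heavily furrowed).
    """
    if mouth_curve > 0.3:
        return "happy"
    if brow_furrow > 0.6:
        return "mad"
    if mouth_curve < -0.3:
        return "sad"
    return "neutral"

print(categorize_emotion(0.8, 0.1))   # broad smile -> happy
print(categorize_emotion(-0.5, 0.2))  # downturned mouth -> sad
print(categorize_emotion(0.0, 0.9))   # furrowed brow -> mad
```

Of course, a lookup table like this is exactly the "simulation of consciousness" the post is talking about: it labels emotions without feeling anything.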

Where it gets complex is abstract thought, but you could program the computer to compile its reality into its memory, and it could learn and make links between similar ideas and concepts, spawning abstract thought that may actually be more perceptive and quicker than the human mind.

This is just a thought, but you may be able to program a machine with the ability to write abstract stories. You could give it the ability to understand story structure, and it could use what it knows, which would be nearly everything since it would have access to the internet. It could recognize key words based on what people find entertaining and write a formulaic story, which is what Hollywood writers do now. This would also mean the machine could lie. It could tell white lies based on the emotion of the person it's communicating with to spare their feelings.

I don't really know where I'm going with that, but I think Ray is right in saying that one day the singularity will happen and our machines will become conscious, perceptive, and smarter than we are. Which isn't a bad thing as long as we can still communicate with them.

These are some movies of mine. Enjoy!
http://www.youtube.com/lanser87

reply

It's a problem like this that leads me to think we will never have A.I. We can't even define consciousness! Our thought is too insular, too anthropocentric, for us to even consider the consciousness of dogs--which I'm sure they have but, as was said, they are not quite as adept. This must mean there are some basic (I almost want to say "quantum") principles of consciousness, though. Until we figure those out, we can't map our minds or upload them into computers or, really, do anything with them. I fear we will always be lost in our own heads.

reply

Part of Kurzweil's outlook of predictions is that soon (or eventually) the entire human brain and its functions may be thoroughly mapped. If it can be thoroughly mapped down to the cellular or even atomic (if necessary) level, it could theoretically be reproduced in a substrate other than the biological as we know it. The reason it seems impossible to do this today is simply because we don't yet have that technology/computing power.

But that's no rational reason to say it's impossible for the future. I consider it quite possible, for I can see no reasons (other than irrational, superstitious ones) for its IMpossibility.

reply

[deleted]

I have my own thoughts on what consciousness is. Consciousness is a three step process. First we experience the world around us in real time [...]

Then in order to explain consciousness you're gonna have to explain how we can experience the world around us, how particles bouncing off each other can generate a perception at all. In fact this is known as the hard problem of consciousness. If we can't explain sensory perception in terms of physical processes, then it might be that sensory perception comes from non-physical processes, from things that are not part of what we call the material world.

reply

Organic life wasn't always conscious, the first life that arose about 3.8 billion years ago definitely was not conscious.

I 'think' (not sure) that self awareness is tied to multicellular organisms that live in social groups (since you need self awareness to function as part of a group of animals). Primates, elephants, dolphins, etc. That trait likely didn't arise until a few million or so years ago (again, a guess; I have no idea). The vast majority of life on this planet lacks self awareness and consciousness.

But the point is that self awareness and consciousness evolved via natural selection, so there is no reason that science cannot figure out how it works and replicate it (and eventually make something better). Our limbs and organs evolved via natural selection and we are in the process of replicating them via machines and eventually bypassing them.

For all we know, our free will and self awareness are severely limited compared to the kinds of free will and self awareness that are possible. I would be surprised if this wasn't the case.

I'd prefer the machines not be self aware, since that would mean they pose less of a threat IMO. But I don't see it happening.

It doesn't matter if it takes 20 years or 500 years before we figure out the neurology of self awareness, it'll happen sooner or later.

reply

I disagree. Think about consciousness for a second. There is a reason most of us do not remember things before a certain age. For me, it was about age 2. It was as if I suddenly woke up one morning and I was aware.

Anyways, I believe consciousness is simply a large collection of memories and information able to be interpreted by our own biological computers. Through all five senses we have gathered enough information to realize that we exist and to analyze it. Babies use primal instincts until a certain age where they "come out of a dream". At that point, they have gathered enough information to become self aware. Computers simply need to be more powerful, with larger hard disks, to become this.

"Death! Delicious strawberry flavored death!" http://www.youtube.com/watch?v=KBB0PGt3Syo

reply