Blog #28: SEARLEoin. What’s at steak for consciousness.

If a computer follows instructions, it’s not really thinking, according to John Searle, because it’s not conscious. But Searle’s Chinese Room thought experiment raises the question of at what point we can say a computer is or isn’t thinking. And can computers ever be conscious?

A couple of years ago at camp, I took a class at UVA about artificial intelligence. My peers and I learned about the many things computers are able to do and how seamlessly they can do them. For example, we watched videos of robots performing surgery, robots diagnosing people through conversation just like human doctors would, and even robots comforting the elderly in retirement homes, cracking jokes through a computer screen and speaker. The video that struck me most was one where scientists taught a robot much as they would a small child, then observed as it picked up new words and associated them with simple objects like an apple. After all, that’s how we learn too, so why is it any different when computers do it?

The robots and computers we studied at camp mimicked human behavior so closely that I had to remind myself they weren’t actually consciously thinking; they were just programmed to do those things. But how do we draw the line between thinking and consciousness in humans versus in computers? If computers can act like humans and respond to complex situations like humans, what’s the difference between the two in regards to their minds (artificial or not)?

Before I even begin to answer anything, I think it is important to define both “thinking” and “consciousness.” According to the dictionary, consciousness is the state of being aware of one’s surroundings, and the fact of awareness by the mind of itself and the world. By these standards, computers can’t be conscious: although they can sense their surroundings if programmed to do so, they don’t really know they exist. Humans can assert that they exist and know they exist because they can consciously feel themselves asserting it (the Cogito), but computers can’t do this. And even if they could, we’d never really know they were thinking about it anyway.

The definition of thinking is the process of using one’s mind to consider or reason about something. This is technically possible for computers to do. Even though they are programmed, they still have the capacity to take things in, reason, and then act, just like humans do. Therefore it is possible for computers to think but not to be conscious.
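Just to make that concrete, here’s a toy sketch in Python of the take-in/reason/act loop I’m describing. To be clear, this is made up purely for illustration (none of these function names come from any real robot), but it shows how a program can “reason” its way from input to action without any awareness that it’s doing so.

```python
# A hypothetical take-in / reason / act loop. The point: each step is just
# a programmed rule being applied; nothing here knows that it exists.

def sense(environment):
    # Take things in: read whatever the environment provides.
    return environment.get("input")

def reason(observation):
    # "Consider" the observation by applying a programmed rule.
    return "greet" if observation == "person detected" else "wait"

def act(decision):
    # Carry out whatever the rule decided.
    print(f"Performing action: {decision}")

# One pass through the loop.
observation = sense({"input": "person detected"})
act(reason(observation))
```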

However, the year before I took the class on AI, I took one on ethics and morality. Based on what I learned in that class, I think ethics and morality are the difference between human and computerized thinking and consciousness.

For example, let’s say there’s one person we somewhat like standing on one train track, and on the other track there are 100 people we don’t know and have no real relation to. For the purpose of the explanation, let’s just say the one person is our kinda-sorta friend: we don’t see them very often, but we still know them and care about them at least a little bit.

Now a train is coming. The train has to go down one of the two tracks, and we have to decide which route it takes: the track with our one friend, or the track with the 100 strangers. Whichever one it takes, it will kill everyone on that track.

A human would likely struggle to decide what to do here. Our moral beliefs would contradict each other, and we’d seriously doubt whatever we chose. We’d want to save our friend, but we also probably wouldn’t want to be responsible for killing 100 people. We’d feel bad for those 100 people’s friends and families, and we’d understand that, logically, killing 100 people has a bigger effect than killing one. Yet despite knowing that killing one person would affect fewer people, we’d still struggle to kill our one friend, because we feel an emotional attachment to them.

Both humans and computers can doubt themselves and their decisions. However, a computer wouldn’t feel bad about whatever it decided, even if it turned out to be a “mistake” in the end. Computers have no moral compass and no intrinsic ethical values. Even if a computer is programmed to adhere to some sort of ethical belief system, it won’t feel guilty about whoever it decides to kill. Computers can’t regret things like humans can. Computers just decide, act, recalculate/gather more info, and move on.
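To show what I mean (again, a made-up toy, not how any real system actually works), here’s what that decide-act-move-on loop might look like for the trolley case if a computer were programmed with a simple utilitarian rule: compare the counts, pick a track, done. Notice what’s missing: there’s nowhere in the program for guilt or regret to live.

```python
# A hypothetical, purely utilitarian trolley decision: divert the train
# toward the track with fewer people on it. It's just arithmetic.

def choose_track(track_a_people: int, track_b_people: int) -> str:
    """Return the track the train should take (the one with fewer people)."""
    return "A" if track_a_people <= track_b_people else "B"

# The scenario above: our one kinda-sorta friend vs. 100 strangers.
decision = choose_track(track_a_people=1, track_b_people=100)
print(f"Divert the train to track {decision}")  # track A: the friend dies

# And then the program simply moves on: there is no state here for remorse,
# only the next function call.
```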

[Image: train tracks ethical dilemma (the trolley problem)]

Secret code for Mr. Summers:

This is a spider it’s coming on the spider web. He’s crying. That’s because he doesn’t have any friends. This is a purple sun.

1 Comment

  1. Oh no! We can’t escape the trolley! STEAK, what’s at steak..? Why that’s my favorite animal. I don’t know about your spider, but when I am old, I shall wear purple.
    So, one cryptic declaration deserves another (please respond in a blog).
    Which question would you rather answer: What time is it? Or, What is time?
    🙂
