What is a person?

Before turning to Nagel's argument, you may want to learn a bit about functionalism; if so, see the introduction to functionalism elsewhere on this site.

Now that you have a basic understanding of the theory of functionalism, here is Nagel's reason for thinking that a functionalist account of the mind will never be able to capture the fundamental nature of what it means to be conscious.

If an android is to be a person, must it have subjective experiences? Further, even if we decide that persons must have such experiences, how are we to tell whether or not any given android has such experiences?

Consider the following example. Assume that the computing center of an android uses two different "assessment programs" to determine whether or not to perform a particular act. In most cases the two programs agree. In this particular case, however, let's assume that the two programs give conflicting results. Further, let us assume that there is a very complex procedure that the android must go through to resolve this conflict, a procedure taking several minutes to perform. During the time it takes to resolve the conflict, is it appropriate to say that the android feels "confused" or "uncertain" about whether to perform the act?
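The thought experiment can be sketched in a few lines of code. This is purely illustrative: the two "assessment programs," their criteria, and the tie-breaking rule are all invented for the example, since the text does not specify any real architecture.

```python
# Hypothetical sketch of the android's "computing center": two independent
# assessment programs score a candidate act, and a (here trivial) resolution
# procedure runs only when they disagree. All names and criteria are invented.

def assess_a(act):
    # First assessment program: approves acts whose benefit exceeds their cost.
    return act["benefit"] > act["cost"]

def assess_b(act):
    # Second assessment program: vetoes acts above a fixed risk threshold.
    return act["risk"] < 0.5

def decide(act):
    a, b = assess_a(act), assess_b(act)
    if a == b:
        return a, "agreed"
    # The two programs conflict. In the thought experiment this step takes
    # several minutes; here it is a one-line stand-in. Is the android
    # "uncertain" while this branch is executing?
    resolved = act["benefit"] - act["cost"] > act["risk"]
    return resolved, "resolved after conflict"

print(decide({"benefit": 3, "cost": 1, "risk": 0.1}))  # programs agree
print(decide({"benefit": 3, "cost": 1, "risk": 0.9}))  # programs conflict
```

Nothing in the code settles the philosophical question, of course; the point is only that "the two programs disagree and a resolution procedure is running" is a perfectly well-defined functional state, whether or not it is a felt one.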

If we deny that the android's present state is one of "feeling uncertain," on what grounds would we do so? Colin McGinn takes up a closely related question: could a machine be conscious? Very briefly, his answer is: yes, a machine could be conscious.

In principle it is possible that an artifact like an android might be conscious, and it could be so even if it were not alive, according to McGinn. But, he argues, we have no idea what property it is that makes US conscious beings, and thus we have no idea what property must be built into a machine to make it conscious.

He argues that a computer cannot be said to be conscious merely by virtue of the fact that it has computational properties, merely because it is able to manipulate linguistic symbols at the syntactic level.

Computations of that kind are certainly possible without consciousness. He suggests, further, that the sentences uttered by an android might actually MEAN something and yet the android might still lack subjective experiences; there might still be nothing that it is like to be that android. McGinn's conclusion, then? It is possible that a machine might be conscious, but at this point, given that we have no clue what it is about HUMANS that makes us conscious, we have no idea what we would have to build into an android to make IT conscious.

Hilary Putnam offers an interesting argument on this topic. If there existed a sophisticated enough android, Putnam argues that there would simply be no evidence one way or another to settle the question whether it had subjective experiences or not. In that event, however, he argues that we OUGHT to treat such an android as a "conscious" being, for moral reasons. His argument goes like this. One of the main reasons that you and I assume that other human beings have "subjective experiences" similar to our own is that they talk about their experiences in the same way that we talk about ours.

Imagine that we are both looking at a white table and then I put on a pair of rose-colored glasses. The table now looks red to me, even though I know it is really white. Thus, we might say that when I speak of the "red table" I am saying something about the subjective character of my experience and not about objective reality.

One analysis of the situation is to say that when I say that the table appears red, I am saying something like: "I am having the same kind of subjective experience that I typically have when I see something that is REALLY red." Putnam asks us to imagine a community of androids who speak English just as we do.

Of course, these androids must have sensory equipment that gives them information about the external world. They will be capable of recognizing familiar shapes, like the shape of a table, and they will have special light-sensors that measure the frequency of light reflected off of objects so that they will be able to recognize familiar colors.

If an android is placed in front of a white table, it will be able to say: "I see a white table." So what is the point of this example? Well, Putnam has shown that there is a built-in LOGIC to talk about the external world, given that the speaker's knowledge of the world comes through sensory apparatus like the eyes and ears of human beings or the visual and audio receptors of a robot.

A sophisticated enough android will inevitably draw a distinction between appearance and reality and, thus, it will distinguish between its so-called "sensations" (the way things appear to it) and the way things really are. Now of course, this may only show that androids of this kind would be capable of speaking AS IF they had subjective experiences, AS IF they were really conscious--even though they might not actually be so. Putnam admits this. He says we have no reason to think they are conscious, but we also have no reason to think they are not.
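The appearance/reality logic Putnam describes can be made concrete with a small sketch. The wavelength thresholds and the "rose-colored filter" shift below are hypothetical numbers chosen only to make the example run; they are not from Putnam, and modeling "reality" as the unfiltered sensor reading is a deliberate simplification.

```python
# Hypothetical sketch of an android that distinguishes how things appear
# (through a filter) from how they are (the unfiltered sensor reading).

def classify(wavelength_nm):
    # Crude color recognizer standing in for the android's light sensor.
    # The 620 nm cutoff is an invented threshold for the example.
    return "red" if wavelength_nm >= 620 else "white"

def report(surface_nm, filter_shift_nm=0):
    seen = classify(surface_nm + filter_shift_nm)  # how things appear
    real = classify(surface_nm)                    # how things are (simplified)
    if seen == real:
        return f"I see a {real} table."
    return f"The table appears {seen}, but it is really {real}."

print(report(500))                       # no filter: appearance matches reality
print(report(500, filter_shift_nm=150))  # "rose-colored" filter in place
```

The android's two kinds of report ("I see a white table" versus "the table appears red, but it is really white") fall out of the logic of its situation, just as Putnam argues; whether there is anything it is LIKE for the android to see red is exactly what the code leaves open.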

Their discourse would be perfectly consistent with their having subjective experiences, and Putnam thinks that it would be something close to discrimination to deny that an android was conscious simply because it was made of metal instead of living cells. In effect he is saying that androids should be given the benefit of the doubt.

Since Putnam thinks that there is no evidence one way or the other to settle the question, he says that we must simply decide whether we are going to grant androids the status of conscious beings. He says that we ought to be generous and do so. Not everyone would agree with Putnam on this score. Kurt Baier, for example, disagrees, arguing that there would be good reason for thinking that the android in question was not conscious.

Putnam considers two of Baier's objections and tries to speak to them. We do not have the space to consider their debate here; it is not a debate easily settled. We are going to close this discussion with a dramatic scene near the end of the Star Trek episode that we've been following. In an earlier scene with Guinan (played by Whoopi Goldberg), Picard was led to embrace a moral argument in defense of Data. You may find this argument interesting because it is remarkably similar in spirit to Putnam's.

So, is Commander Data a person? We do not presume to answer that question in these pages. But we do hope that you've thought through the question a little more deeply than you had before. From the question "Could a machine be a person?" we can move to the question "Could you be a machine?"

A brief discussion of that topic can be found elsewhere on this site.

David Leech Anderson: Author

During the trial depicted in the Star Trek episode, the attorneys consider the very same questions that concern us here: What is a person?

Is it possible that a machine could be a person? In the Star Trek episode, it is assumed that anything that is "sentient" should be granted the status of "personhood," and Commander Maddox suggests that being sentient requires that three conditions be met: intelligence, self-awareness, and consciousness. Captain Picard, who is representing Commander Data in the hearing, does not contest this definition of a person.

The following passage is from Nagel's famous paper "What Is It Like to Be a Bat?":

Conscious experience is a widespread phenomenon. It occurs at many levels of animal life, though we cannot be sure of its presence in the simpler organisms, and it is very difficult to say in general what provides evidence of it.

Some extremists have been prepared to deny it even of mammals other than man. No doubt it occurs in countless forms totally unimaginable to us, on other planets in other solar systems throughout the universe. But no matter how the form may vary, the fact that an organism has conscious experience at all means, basically, that there is something it is like to be that organism.

There may be further implications about the form of the experience; there may even (though I doubt it) be implications about the behavior of the organism. But fundamentally an organism has conscious mental states if and only if there is something that it is like to be that organism--something it is like for the organism. We may call this the subjective character of experience. It is not captured by any of the familiar, recently devised reductive analyses of the mental, for all of them are logically compatible with its absence.

It is not analyzable in terms of any explanatory system of functional states, or intentional states, since these could be ascribed to robots or automata that behaved like people though they experienced nothing.

Perhaps anything complex enough to behave like a person would have experiences. But that, if true, is a fact which cannot be discovered merely by analyzing the concept of experience. It is not analyzable in terms of the causal role of experiences in relation to typical human behavior--for similar reasons. I do not deny that conscious mental states and events cause behavior, nor that they may be given functional characterizations. I deny only that this kind of thing exhausts their analysis.

If physicalism is to be defended, the phenomenological features must themselves be given a physical account. But when we examine their subjective character it seems that such a result is impossible. The reason is that every subjective phenomenon is essentially connected with a single point of view, and it seems inevitable that an objective, physical theory will abandon that point of view. (Thomas Nagel, "What Is It Like to Be a Bat?", The Philosophical Review, 1974)

Putnam himself put the point about evidence this way: "Robots may indeed have or lack properties unknown to physics and undetectable by us; but not the slightest reason has been offered to show that they do, as the ROBOT analogy demonstrates."
