All of the following discussion is based on:
Churchland, Paul M. (1988). Matter and Consciousness (revised edition). Cambridge, MA: MIT Press.
and
Hergenhahn, B.R. (2005). An Introduction to the History of Psychology (5th ed.). Belmont, CA: Thomson Wadsworth.

Questions for distinguishing various positions on the mind-body problem

Are the mental / spiritual and the physical / material things in the world really two separate kinds of thing, or just one kind?
Two: Dualism
One: Monism

DUALISM:
Are the mental / spiritual kind of thing and the physical / material kind of thing actually different substances, or just different kinds of properties present in physical substances?
Different Substances: Substance Dualism
Different Properties: Property Dualism


MONISM:
Is the one kind of thing in the world a mental / spiritual kind of thing or a physical / material kind of thing?
Mental / Spiritual: Idealism (George Berkeley, 1709)
Physical / Material: Materialism



Terms used in the same way by Hergenhahn and Churchland

Terms used differently by Hergenhahn and Churchland

Terms appearing only in Hergenhahn's discussion of Dualism



Notes on various approaches to the mind-body problem
based on Churchland, Paul M. (1988). Matter and Consciousness (revised edition). Cambridge, MA: MIT Press.

  1. DUALISM

    1. substance dualism - mind and body are two separate substances

      1. Cartesian dualism (also known as interactionist substance dualism) - the body is obviously physical, but introspection reveals a conscious essence capable of language use and reasoning (what we think of as mind or spirit); this immaterial spiritual essence takes up no space and interacts with the physical body (through the pineal gland, by affecting the "animal spirits"). But how can something immaterial that takes up no space have any effect on something physical that does?

      2. popular dualism - the body is physical and the mind isn't, as above, but the mind does occupy space; thus it can cause actions of the physical body. Perhaps it can be thought of as an unknown form of energy that is independent of matter, but has the same effects as normal physical energy. (Remember though, Einstein showed that matter and energy are actually the same thing, so we would have to be talking about something he didn't think of.) We usually say that anything physical takes up space and anything that takes up space is physical. But we think of electrons as physical even though they have not been shown to take up space. So we can think of mind as non-physical, even though we allow it to take up space. (And we do allow it to take up space, in the sense of having a definite location: it's in the brain.)

    2. property dualism - there is only one kind of substance, and that's physical; but this physical matter can have two kinds of properties - physical (like flexibility or wetness), and non-physical (like sensations of redness or pain). Non-physical properties are essentially those of conscious intelligence, so they also include things like beliefs and desires. They are irreducible to physical causes like nervous system activity. But coincidentally, these properties are only found in one kind of matter: the kind brains are made of.

      1. epiphenomenalism - mental phenomena (i.e. those non-physical properties) are caused by the brain, but don't cause anything themselves. They emerge from physical systems when the system reaches a certain degree of organization. So for instance, sadness doesn't cause crying; crying is a purely physical process (that is, physiological in this case) with purely physical causes in the environment or in the body, and the mental state of sadness occurs at the same time as this physical state. Sadness is just our experience of the physical state. The logical extension of this idea is that nothing you think or do is really voluntary; everything is just a physical event that looks to you as though you decided or wanted to do it. And it looks that way because, as a side effect of the physical process, your brain produces these mental states.

      2. interactionist property dualism - the same kinds of properties emerge but then they do have an effect on physical states (brain states in particular). But here, and to a lesser extent in the case of epiphenomenalism, certain inconsistencies arise. These properties are said to emerge from physical matter, but to be inexplicable in physical terms. As if that weren't bad enough, the interactionist position adds that the emergent properties can then have an effect on the physical processes that gave rise to them, but still retain their essentially non-physical character. In other words, the emergent properties behave exactly as physical properties do (they are caused by matter and they cause changes in matter), yet their being physical is precisely what the position denies.

      3. elemental property dualism - again, mental properties are thought of as irreducible to physical properties, but in this case they do not simply emerge in a highly organized physical system. Instead, they are taken to be fundamental properties of matter which are (or could be) present in all matter, not just brains. As a historical parallel take the case of electromagnetic properties. For a long time physicists tried to explain magnetic phenomena in the same terms they use to describe a bouncing ball or a pulley: the laws of mechanics. But eventually they discovered that that would never work, because actually electromagnetic properties were just as basic to matter as mechanical properties. Similarly, it might be that mental properties are just as basic to matter as physical properties. The problem with this idea, though, is that it doesn't seem that mental properties are everywhere -- in pencils and houses and furniture -- but are really only found in highly organized systems like the brain.

    arguments about dualism: PRO...

    • RELIGIOUS BELIEFS usually include the notion of an immortal soul, separate from the body... BUT... such beliefs are driven by social forces with different aims than those of science. Even if religion's explanations turn out to be an accurate description of the universe, science must ignore them since they are not part of its legitimate domain.

    • INTROSPECTION tells us that there is something non-physical operating in human consciousness... BUT... introspection often fails to give us insight into the essential nature of things. For instance, it doesn't seem to us that the sounds we hear are pressure waves carried through varying densities of the air molecules surrounding us -- but that's just what sound is. Consciousness probably is something equally unintuitive.

    • The IRREDUCIBILITY of mental phenomena to something physical suggests that something other than the physical must be responsible for the mind's activities... BUT... cognitive science has made progress in reducing some mental phenomena (like language and reasoning) to physical descriptions, in the sense that physical systems like computers have been given elementary abilities of this nature. Anyway, just claiming that mental phenomena can't be described in physical terms is hardly an argument; it's merely rhetoric instead of proof, and that's not good enough for scientific knowledge.

    • PARAPSYCHOLOGICAL PHENOMENA like E.S.P. are obviously non-physical... BUT... actually, these could be explained in physical terms just as easily as in non-physical terms: both types of explanation are equally far-fetched and unlikely, but an imaginative person can always make up a story to fit his observations. The main point, though, is that no scientific observation that was logically and methodologically sound has ever given a shred of credibility to claims about E.S.P. or similar abilities.

    arguments about dualism: CON...

    • The principle of PARSIMONY (having a minimal number of explanatory concepts) favors an explanation of all phenomena in terms of just one substance; but of course, this is just a rule of thumb in scientific thinking.

    • The EXPLANATORY IMPOTENCE of a second substance as to the workings of mind suggests that no second substance is useful or logically called for. For example, we know something of how the physical brain works, what it does, and what hampers its activity; but there are no rules governing how mind-stuff behaves, and no clear way to ever arrive at such rules. How does positing an inexplicable substance contribute to an explanation of anything?

    • NEURAL DEPENDENCE of mental phenomena is evidence that mental events are, at some level, physical events. Brain damage can cause impairments in perception, language ability, reasoning, and even so-called "higher" functions like emotions and consciousness itself. If all these depend on neural activity in the brain, then what exactly would be the domain of any non-physical system?

    • EVOLUTION tells us that all of today's life forms developed from simpler forms, which in turn can be traced back through geological history to a handful of cell types that first proved successful at surviving and reproducing. In the series of mutations and complications of this living matter (which led from single cells to underwater mucky stuff to plants and animals and finally to us), there seems to have been no opportunity and no need for any non-material substance to spontaneously insert itself into the chain of events. We are purely matter because that's what life has always been, since long before there were humans.

  2. PHILOSOPHICAL BEHAVIOURISM - stems especially from the ideas of a group of philosophers in the early twentieth century calling themselves "logical positivists". In the face of revelations about the physical world (concerning the nature of time, light, energy, and subatomic physical laws), a huge body of scientific findings was suddenly called into question. To avoid any further confusions in the future accumulation of knowledge, the methods of science needed to be clarified: objectivity, rigorous logic, and precise observation were emphasized. Since traditional philosophical problems dealt mostly with unobservables, it was supposed that they were just the result of sloppy language use, and that they could be solved (in fact dissolved) by an analysis of the language used to describe them.

    Mental states, as unobservables, awaited some clearer way of talking about them as observables. Since behavior was their observable consequence, mental states were taken to be just a shorthand way of referring to the behaviors (actual or potential) connected to them. Potential behaviors can be thought of as dispositional properties: how the organism is disposed to behave in a particular situation. As dispositional properties, then, mental states could be given strict operational definitions.

    A mental state like "pain" is not observable; its consequences are, though. "Pain" in the head is a concept that means nothing more than that the person experiencing "pain" is likely to: say "I have a headache"; take aspirin; grimace and moan; lie down, etc. The observables determine the meaning of the term.
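
    To make the dispositional idea concrete, here is a minimal sketch (in Python; the names and situations are hypothetical illustrations of mine, not anything from Churchland or Hergenhahn) of what a strictly operational, behavior-only definition of "headache" might look like:

      # A behaviorist-style "operational definition": the term "headache"
      # is treated as nothing more than a bundle of behavioral dispositions.
      # All names and situations here are hypothetical illustrations.
      from dataclasses import dataclass

      @dataclass
      class Situation:
          aspirin_available: bool
          asked_how_you_feel: bool

      def headache_dispositions(situation: Situation) -> list:
          """On the behaviorist view, this list of likely behaviors
          *is* the entire meaning of the term 'headache'."""
          behaviors = ["grimace", "moan", "lie down"]
          if situation.asked_how_you_feel:
              behaviors.append("say 'I have a headache'")
          if situation.aspirin_available:
              behaviors.append("take aspirin")
          return behaviors

      # Note, as discussed below, that "take aspirin" is only a disposition
      # given further mental states (believing aspirin helps, wanting relief),
      # which an observables-only definition silently presupposes.
      print(headache_dispositions(Situation(aspirin_available=True, asked_how_you_feel=True)))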

    The thing about pain, though, is that it's a qualitative experience. If the above description were accurate, headaches wouldn't be so bad; but pain is painful. Also, if someone were to fake having a headache, they would engage in exactly the same behaviors; that means either that a fake headache and a real one are the same, or that since the terms can't be distinguished, they are both meaningless. Neither option is satisfactory.

    Aside from neglecting qualia, the definition fails logically. There are conditions necessary for any disposition to be present. So, taking aspirin depends on other mental states -- the belief that aspirin helps headaches, a desire to be rid of the headache, and so on. These mental states also require definition, and the list of qualifications goes on forever. Mentalist terms keep creeping into the observations.

  3. REDUCTIVE MATERIALISM (THE IDENTITY THEORY) - mental states are the same as brain states; they can be identified with them, and share all their properties. Curiosity, happiness, and sensations of color are activity in certain structures of the brain and can be reduced to brain activity in the same way that physics reduces sound to pressure waves in air.

    Reductionism is based on a concept of "intertheoretic reduction", which says that any statement about the basic terms of one theory can have a corresponding statement about the basic terms of a related theory. These basic terms are usually called the "natural kinds" of a theory. The true or false statements we can make involving those terms are called propositions.

    We can make a statement about the natural kinds in our everyday theory of sound: a noise was loud. In a mathematical description of the same thing we would say the signal had a large amplitude; and in physics we would talk about the density of the air molecules.

    For a reduction to be valid, the propositions on one level must be translatable to the propositions on the other level (so that when we say "loud" in conversation we also have something to talk about on the physical level); and the principles relating them on one level must be expressible on the other (yelling causes louder sounds because physically, the increased airflow leads to greater density of air molecules in the pressure waves).
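
    As one concrete illustration of the level-to-level translation a reduction requires (the example is mine, not Churchland's, though the decibel formula is the standard sound-pressure-level relation), the everyday proposition "the noise was loud" can be matched with a physical proposition about pressure amplitude; the cutoff chosen for calling something "loud" is of course arbitrary:

      import math

      P_REF = 20e-6  # reference pressure in pascals (roughly the threshold of hearing)

      def sound_pressure_level(p_rms: float) -> float:
          """Physical-level description: RMS pressure amplitude (Pa) -> decibels."""
          return 20 * math.log10(p_rms / P_REF)

      def folk_description(p_rms: float) -> str:
          """Everyday-level description of the same event (the 85 dB cutoff is arbitrary)."""
          return "the noise was loud" if sound_pressure_level(p_rms) >= 85 else "the noise was not loud"

      # Yelling produces a larger pressure amplitude, hence a higher level,
      # hence the everyday term "louder": the principle is expressible at both levels.
      print(round(sound_pressure_level(2.0)))    # 100 dB, painfully loud
      print(folk_description(2.0))               # "the noise was loud"
      print(folk_description(0.002))             # about 40 dB: "the noise was not loud"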

    arguments about the identity theory: PRO...

    • at the level of both the individual (since its conception) and the species (since its first appearance in EVOLUTION), all the processes involved in behavior are apparently physical; therefore it's reasonable to suppose that mental behavior is also a physical process. Additional support comes from the observation (in cases of brain damage, for instance) that all known mental phenomena depend on some form of neural activity.

    • Further, NEUROSCIENCE has seen some success at explaining behavior at the level of the nervous system in simple organisms. However, none of this evidence indicates a match-up of propositions and principles in mental and neural descriptions, as is required by a reductive explanation. It just suggests a close connection between the mental and the physical.

    arguments about the identity theory: CON...

    • INTROSPECTION tells us that "redness" isn't the same as a brain state... BUT... it also tells us that sound isn't the movement of air molecules, and it's wrong about that.

    • "CATEGORY ERRORS", or unintelligible statements, result -- that is, we can't talk about minds having the spatial properties of brains, or brains being meaningful, even though we would have to allow these if minds and brains were the same thing... BUT... maybe reality is different from what our common-sense language tells us. Before sound was given a physical description, it didn't make sense to say it had a "wavelength", and similarly, right now it doesn't sound right to us to say that brain states have a meaning.

      One theory of meaning, though, holds that a term gets its meaning from its relationship to all the other terms in its descriptive system. Meaning is given to a term based on the role it plays in that system. So a word in our language, or a term in our description of the mind, is used in the definition of certain concepts and not others; it's part of one idea, while other ideas are parts of it; propositions involving it require the use of some terms and disallow others. If such a "cognitive economy" were duplicated in some other system -- say in a system of neurons instead of a system of words -- it would be possible for the units of that system to have the exact same relationships to each other. And if they did, then it would be true, according to this view, that a neural event could have meaning just as legitimately as could a mental event.

    • OTHER PROPERTIES NOT SHARED include "being known as consciousness": mental states are everybody's idea of consciousness, but neural firing doesn't seem to be a conscious experience at all... BUT... this is saying that our knowledge about something is an intrinsic property of the thing itself, which is untrue; mental states and brain states could well be the same thing and our familiarity with one or the other description would have no bearing on the issue. Another unshared property might be "being knowable as consciousness": supposedly we are aware of our mental states, but we cannot, even in principle, be aware of our brain states... BUT... asserting that we cannot know our brain states could simply be a false statement, because after all, if it is the case that mental states and brain states are identical, then we've known our brain states all along. Knowing our brain states would be the same as knowing our mental states.

    • QUALIA - even assuming that one day we have perfect, complete knowledge of the brain and nervous system and can describe mental states in neural terms, it would still be impossible for a colorblind person to really know what a sensation of red is like; qualia are beyond any list of facts, sentences, or propositions... BUT... qualitative knowledge can be viewed as a different type of knowledge than propositional knowledge, and whether your knowledge is qualitative or propositional, it's still knowledge of the same thing. And perhaps one day there will also be a sufficient neuroscientific description of qualia.

  4. FUNCTIONALISM - mental states are defined by their relations to a) environmental causes, b) other mental states, and c) behavioral consequences. For instance, the mental state of "pain" is typically a) caused by some trauma or injury, b) a source of annoyance, distress, and a desire for relief, and c) is likely to result in wincing, verbal complaints, and care-taking behaviors. Likewise a belief such as the belief that "John is tall" is a) caused by seeing John's height, b) related to the beliefs that other people are shorter and that not many are taller, c) likely to result in utterances like "John is tall", warnings to John to duck when approaching a low doorway, and choosing him first for your pick-up basketball team.

    All that is required for identifying a mental state is that it should play a certain role in the various inferences and transitions that can be made from one mental state to another (as well as that it be caused by the same things and result in the same behaviors). As an analogy, the roles of king, queen, bishop, knight, rook, and pawn in a chess game could be played just as well by six different plastic chips, or pieces of fruit, or toy automobiles -- but if we make the substitute pieces move the same ways and follow the same rules as the real chess pieces, we could use them to play a game of chess. We say different mental states have different functional roles, organized into a system of inferences and exchanges, a "cognitive economy" as described above.

    Just as a "chess economy" can be duplicated using pieces other than traditional chess pieces, a "cognitive economy" could be duplicated in some kind of system other than human brains with their circuits of neural electrical signals. Reduction to physiology, in the technical sense described above, is very unlikely, since the "some kind of system other than human brains" might be, for instance, a Martian brain with a totally unfamiliar biology, or a computer whose mental states are instantiated by silicon electrical chips instead of neurons. In fact, one could imagine a complex inference system built out of tinker toys and empty soda cans connected by strings and pulleys -- since calculating devices have actually been constructed out of such crude materials. (There's no special reason that computers must have the kind of electronic hardware they have, aside from practical considerations of size and speed!)

    This is the move from "type identity" to "token identity" in specifying the relation between mind and body. The reductionist identity theory described above holds that there is a type identity: every type of mental state (a pain, a belief) corresponds to a particular type of physical state (activity in the somatosensory cortex, or in the temporal lobe of the brain). But functionalism allows a broader kind of relation: every mental state must correspond to some physical state of some physical system, but it could be a system of neurons, or of computer circuits, or of strings and soda cans -- i.e., not any particular physical system but just some token physical system.

    Rejecting the strong mind-body "type identity" for the softer "token identity" is what allows psychology to be an autonomous, separate discipline from, say, biology or neuroscience. Psychology need only concern itself with the functional relations among mental states and behaviors; the details of nerve firings in brains or electrical circuitry in computers are the "engineering details" that can be left to the people who specifically study that machinery. This is also the rationale for the field of Artificial Intelligence, which seeks to simulate human mental processes in a computer, without concern for the detailed workings of the entirely different biological machine -- the brain -- that normally carries out mental processes.
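
    The functionalist picture can be summarized in a short sketch (in Python; the role, the two realizations, and all the names are hypothetical illustrations of mine rather than anything from Churchland): a mental state is whatever plays a given role of environmental causes, related states, and behavioral effects, and the same role can be played by tokens built from very different hardware.

      from typing import Protocol

      class PainRole(Protocol):
          """The functional role: anything that plays it counts as 'pain'."""
          def caused_by(self, event: str) -> bool: ...   # environmental causes
          def related_states(self) -> list: ...          # other mental states
          def behavioral_effects(self) -> list: ...      # behavioral consequences

      class HumanBrainPain:
          """One token realization of the role: patterns of neural activity."""
          def caused_by(self, event): return event in {"tissue damage", "burn"}
          def related_states(self): return ["distress", "desire for relief"]
          def behavioral_effects(self): return ["wince", "complain", "seek care"]

      class SiliconPain:
          """A different token realization of the same role: circuit states."""
          def caused_by(self, event): return event in {"sensor overload", "component damage"}
          def related_states(self): return ["error flag", "shift priority to self-repair"]
          def behavioral_effects(self): return ["emit warning", "withdraw actuator"]

      def describe(state: PainRole) -> str:
          """Psychology, on this view, needs only the functional description;
          which hardware realizes the role is an 'engineering detail'."""
          return f"related states: {state.related_states()}; effects: {state.behavioral_effects()}"

      for token in (HumanBrainPain(), SiliconPain()):
          print(describe(token))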

    arguments about functionalism: CON...

    • QUALIA are the sensations we experience, the "what-it's-like" part of mental states -- the part of pain that actually hurts, or the part of the taste of salt that's just plain salty-tasting, apart from its relation to any other states or experiences; William James called them the "raw feels" of consciousness. But since they are not relational in the sense of functionally defined states, they aren't included in a functionalist view of mind.

      Thus the familiar question of the inverted spectrum goes unanswered: what if everything I call "blue" actually appears yellow to you, everything I call "red" actually looks green to you -- but we're both used to the way things look, we both use the word "red" to refer to firetrucks, roses, and sunsets, etc., and there is no possible way we could ever get inside one another's heads and realize with surprise, "hey, that's not red at all, don't you even understand what green looks like?!" Then we would have a situation where our mental states differed qualitatively (what I experience as red, you experience as green, though we both call it "seeing red"), but not in terms of that mental state's relations to environmental causes (the firetrucks), other mental states (our sense of which colors clash), or resulting behaviors (identifying the colors of the US flag as "red, white, and blue").

      A further qualia-related problem is that few people would accept the notion that a computer whose optical scanner registers "light reflected in the 720 nm wavelength range" is actually having the experience of "seeing red", or that its chemical probes into a liquid could register not just a certain chemical makeup but actually the "taste of orange juice". Surely computers don't have mental experiences even if they have physical states bearing a certain functional relation to each other...

      BUT... perhaps one could claim that qualia exist simply as aids in identifying mental states ("this is the sensation of redness, so act accordingly") and the qualia themselves are not intrinsic to the mental state but are actually just an accidental feature; perhaps the same qualia aren't even present each time the mental state is -- think of the variety of sensory contexts that affect our recognition of certain tastes, or temperatures, or even colors under different lighting conditions. Qualia may well be real but they may be specific to particular physical instantiations of mental states, and computers may have a different qualitative experience of "pain" that is very unlike our own but is equally useful to a computer "mind" in identifying the state it is in. (Perhaps it is even equally distressing to the computer in some sense!) Qualia as the focus of our introspective discriminations among mental states are then more of an auxiliary to the functionalist description of the mind.

    • DOMAIN SPECIFIC REDUCTION suggests that functionalism may be right but still may not provide an account different from the identity theory. This is because it's certainly possible that mental states experienced by humans -- sadness, pain, hunger, belief, goals -- may be so closely tied to the environmental context that such states never arise in systems other than those with precisely human concerns. In other words, a Martian may experience pain or emotion in a way that is so foreign to humans that it hardly makes sense to call it by the same name at all. Likewise a computer may hold a certain belief but act on it in such different ways, under such different circumstances, and with such different possibilities for action, that we could not even recognize it as a belief as such, given the very specific kinds of functional relations involved. If, as a result, we find that we are really only concerned specifically with pain-in-a-human, sadness-in-a-human, goals-in-a-human, then clearly the only system that will support those functional relations is a human brain, since only it is capable of exhibiting those functional relations in the first place. And if this is so, our narrowing down of functionalism to the domain of specifically human mental states has produced the equivalent of the identity theory: mental states can only be explained as the states of the human brain.