Comments on “A billion tiny spooks”

Representational theory of mind still popular?

Shane:

The representational theory of mind is the dominant approach.
Simplifying somewhat, it says that beliefs, desires, and intentions are “represented” as sentences in a special language (“mentalese”). Mentalese, in turn, is “implemented” as physical things (structures, states, or processes) in the brain.

I agree with you on the representational theory of mind being wrong, but not the statements on its current importance.

Cognitive science was a bit of a failure in my eyes - and the failure is partly due to the importance of the representational theory of mind in tying things together. These days, I would say the majority of people who study the mind (psychologists and neuroscientists) don’t believe in the representational theory of mind (in the Language of Thought version of mentalese that you mention), and it doesn’t feature in their work. There are lots of people who call themselves cognitive psychologists and cognitive neuroscientists, but I don’t think they would subscribe to cognitivism (as formulated here), and they aren’t much interested in what philosophers have to say about meaning - though I couldn’t really judge what most philosophers of mind believe these days. I also doubt that it is what most educated members of the public think. Those who read Pinker’s “How the Mind Works” uncritically might assume it to be true, but my guess is that the popularity of this particular idea peaked about 20 years ago (Pinker’s book was published in ’97). There have been a lot of popular science books since then on how the mind works which do not assume such a theory, and are more influenced by empirical findings than philosophical theories.

It might be fair to say that many believe in a weaker view that disregards cognitivism but takes the stance that the mind/brain processes representations, and those representations have “meaning” and those meanings are somehow “in the head”, so I would be interested to see what you think the negative consequences of that are.

The failure of cognitive science

Cognitive science was a bit of a failure in my eyes… [it] peaked about 20 years ago

Yes, indeed! This page (now only a summary) will explain the reasons for this, with a brief history.

It will argue, however, that most of the cognitivist assumptions (individualist representationalism, most importantly) were accepted uncritically by neuroscience.

aren’t much interested in what philosophers have to say about meaning

Yeah, and this is a big problem, because they accept representationalism without thinking about it at all. In other words:

many believe in a weaker view that disregards cognitivism but takes the stance that the mind/brain processes representations, and those representations have “meaning” and those meanings are somehow “in the head”

For nearly all of them, the question “how could that possibly work?” doesn’t even come up; it’s just taken for granted that it somehow does. Philosophers have asked the question and failed to answer it, which shows that one ought to worry that maybe it doesn’t work. (And, in fact, I’m reasonably sure that it can’t.)

It would be helpful for neuroscientists to notice that philosophy of mind has failed; this is directly relevant to their own research questions. Instead there’s a comfortable sense of “well, that’s obvious, and anyway it’s not part of our problem domain, we’re doing science, we don’t care about questions like that.”

Getting this wrong has profound implications for what sort of thing a person is, what a self is, what societies are, how individuals and societies relate, and so on.

Should science care about philosophical (non) problems?

Shane:

Philosophers have asked the question and failed to answer it, which shows that one ought to worry that maybe it doesn’t work… there are good in-principle reasons to think that no answers can be found

Should a scientist worry or care in the slightest? Especially about a question for which no answers can be found? I am not convinced they should. It seems similar to the hard problem of consciousness, where most neuroscientists don’t worry about the arguments philosophers have with each other (and a few also think no answers can be found), and it doesn’t seem to be an impediment to the science.

I mentioned meaning in my post, but only to try to approximate or guess the kind of argument you want to make. But the key thing, as you mention, is individualist representations - the idea that the brain works by doing computations on information, that this information can be described as a “representation”, and that a high-level “cognitive” description is a useful level at which to understand the operation of the system. This is accepted uncritically because it works. The idea of “representations” may be philosophically dubious, but if, say, you want to describe how vision or hearing works (i.e. sub-systems rather than a person-level explanation of beliefs and desires), a cognitive explanation is pretty helpful (though in cognitive neuroscience, compared with cognitive science, the wet-stuff level is needed for the complete story).

Why (some) neuroscientists need to understand representation

I agree that the “hard problem” is probably best ignored. (Although I have suggested an fMRI experiment that might help with it!)

“Representation” is used to mean two (or more) very different things. As an example of the first: “in a thermostat, the angle of the bimetallic strip represents the temperature in the room; the thermostat uses this to compute when to turn the furnace on.” Here it’s unproblematic to understand the system causally; there is a direct physical connection between the temperature and the angle. However, exactly because the causality is so direct, it’s not clear that thinking about it in terms of “computation” or “representation” is helpful. It’s probably mostly harmless, though.
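To see how thin the “computation” is in the thermostat case, here is a toy Python sketch. The linear strip response and the setpoint are invented for illustration; the point is only that the angle is a direct causal function of the temperature, and the “computing” is a bare threshold comparison.

```python
def strip_angle(temp_c):
    """Bimetallic strip angle (degrees) as a direct physical function of
    room temperature. The 0.5 coefficient is a made-up stand-in for the
    strip's physics; what matters is that the coupling is direct and causal."""
    return 0.5 * temp_c

def furnace_on(temp_c, setpoint_c=20.0):
    """The thermostat 'computes' with the angle only in the thin sense of
    comparing it against the angle corresponding to the setpoint."""
    return strip_angle(temp_c) < strip_angle(setpoint_c)

print(furnace_on(15.0))  # cold room: furnace turns on -> True
print(furnace_on(25.0))  # warm room: furnace stays off -> False
```

Nothing here needs to be described as a “representation” for the device to be fully understood; the causal story is already complete.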

Now consider “I know that Ouagadougou is the capital of Burkina Faso.” I’ve never been there; I don’t know anyone who has been there; I don’t know anything about the place; I wouldn’t recognize it if I were dropped there. My causal coupling with Ouagadougou is extremely indirect. Or “I know that Radagast was a wizard”; a true statement, even though Radagast doesn’t exist and never did. A causal account seems inherently impossible or useless.

The representational approach says that I know Ouagadougou is the capital of Burkina Faso because there’s a representation in my head that says “Ouagadougou is the capital of Burkina Faso” (in mentalese). Then I can do computations with that (e.g. I know countries have only one capital, so I know Timbuktu is not the capital).

This is what can’t work. There’s a slew of reasons it can’t work, any one of which would be sufficient. All attempts to say what it would even mean for a thing-in-the-head to represent “Ouagadougou is the capital of Burkina Faso” have failed in principle (never mind the scientific and engineering considerations, which are also insuperable).

It’s an unfortunate historical confusion to use the same word “representation” for the thermostat and Ouagadougou. What is going on in the two cases must be entirely different. However, people regularly implicitly assume that they are more-or-less the same, and slide from one to the other without noticing.

In perceptual neuroscience, it’s (mostly) unproblematic to say “neurons in V1 cortex represent image edges,” because there’s a straightforward causal pathway involved. Because this does work, and “representation” is used for both phenomena, it’s normal to assume that there are neurons somewhere else that represent my knowledge of Ouagadougou, and they could be understood using the same methods. But that can’t be true.
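As a cartoon of how direct that causal pathway is, here is a toy edge detector: a difference-of-neighbors filter over a 1-D brightness profile. This is an illustration of the first, causal sense of “representation,” not a model of real V1 physiology.

```python
def edge_response(row):
    """Difference-of-neighbors filter over a 1-D brightness profile.
    Large values mark edges (sharp brightness changes), much as a V1
    neuron's response is causally driven by contrast in its receptive field."""
    return [abs(b - a) for a, b in zip(row, row[1:])]

brightness = [0, 0, 0, 10, 10, 10]   # a step edge between index 2 and 3
print(edge_response(brightness))      # -> [0, 0, 10, 0, 0]
```

The output “represents” an edge in the same unproblematic way the thermostat’s angle “represents” temperature: it is a direct function of the input signal, with nothing semantic left over to explain.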

Single-cell-recording neuroscience mostly avoided this problem. But now in the fMRI era, neuroscientists make claims about representations of the second sort. When they do, they talk nonsense.

Undoubtedly, there are neurons involved in my knowledge of Ouagadougou; but so are other things that aren’t in my head at all. This knowledge is socially distributed. I know about Ouagadougou only by relying on other people, who could (for example) get me there.

Not caring doesn't seem to get in the way

Shane:

My argument was that (cognitive) neuroscientists don’t need to worry about philosophy because it (mostly) isn’t relevant to the science. Your argument here is that they do, because if they don’t listen to philosophers, then philosophers will think they talk nonsense and use words like “computation” and “representation” inappropriately. But they don’t care what philosophers think!

Your argument appears to be based on the assumption that non-philosophers should care about the philosophical problem of reference or intentionality (i.e. that “Ouagadougou” has to have some link to Ouagadougou for it to have “meaning”). (And part of your argument is based on the mentalese-style representational theory of mind, which, as I mentioned before, though of historical interest, has little to no importance to the current scientific understanding of how the brain works.) But what psychologists/neuroscientists care about is how we can use information (or whatever is learnt and stored in the networks of the brain) to do stuff - like store the information you supplied in your comment, and answer questions about what the capital of Burkina Faso is. And there doesn’t need to be some proximal or causal story connecting that to something in the world - Ouagadougou - in order to do that.

I am looking forward to seeing your discussion of the consequences of wrong-headed assumptions. However, those consequences (as I understand it) are mainly consequences beyond science - i.e. for how we see ourselves and society, etc. And here I can see philosophy being helpful.

Limits due to draftiness

Writing this book incrementally, over many years, causes many problems. One is that most of it is currently either entirely absent, or appears only in draft form, usually vague and sketchy.

If this page were complete, I hope it would explain why I would disagree with some of your last comment, or would clarify how you misunderstood what I was saying (so that we would agree after all). But I expect that writing it will be a couple of full-time weeks’ worth of work; and I think I probably can’t explain the line of argument any better in a comment.

One part of the writing will be checking in on the current state of cognitive neuroscience, which I’m out of date on. It’s possible that it has changed enough that what I was going to say in the page is obsolete and irrelevant. I think that’s unlikely—but until I’ve done the reading, I can’t be sure.

Thanks for your comments—they were incisive and helpful!

Okay, if I may enter this

Obnoxious Stranger:

Okay, if I may enter this discussion…

While I sort of agree that the current outlook on cognition tends to perilously gloss over “extracorporeal” aspects of cognition, I think it’s quite fair to say that the cognitive processes of a grown-up human are relatively autonomous.

Thought experiment:

If someone (who already knows the capital of Burkina Faso) were to be isolated in a room for, say, two weeks, and then presented with a “what is the capital of Burkina Faso?” question, one would quite definitely remember that it’s Ouagadougou and not, say, Berlin.

A “can not sync with human collective” error will not happen :)

As to “neuro-representationism”, I can’t help but clumsily try to defend the poor thing, or at least its weakest versions (which are quite a bit less broken, IMHO, than their mentalese progenitors).

The “weak” version of “neuro-representationism” does not imply the existence of either a “mentalese language” or some kind of “platonic realm of representations”. It merely suggests that, for you to be capable of (autonomously) making claims about the capitals of foreign countries, your brain needs to encode both specific information about them and the wider contextual framework needed for you to be able to make sense of the question and the concepts involved (if you can’t tell what a “country” is, how can you discuss what its capital is?). That does not appear to be an overly absurd claim.

Also, to the best of my limited understanding, this approach does not demand that the construct being “weak-sense represented” in this manner is a real object or something you interacted with personally (so it can be “Radagast”, “communism”, and even “a blivet”, though some might argue that the last two items are tautological ;) )

It’s representationism in the weakest possible sense, and frankly it does not look like there is much in terms of sane physicalist alternatives to it.

Fun sidenotes:
1) There must be some kind of encoding scheme in the brain for this to work, so you can of course claim that it tries to “sneak mentalese through the back door”.
However, this scheme is not a “language” except in the most general sense (in which the genetic code itself, as well as processor instructions and file systems, would also count as “languages”).

2) If 1) is true, then it just might be that there are concepts that cannot be “represented” in the brain - “literally unthinkable” things - which is a curious idea.

3) What if some part of philosophy crucial for a completely coherent epistemology actually belongs to this “literally unthinkable” category, and how would we even find that out?

The duck/rabbit flip

Hello Obnoxious,

I’m torn, because I would like to respond substantively to your (unobnoxious and perfectly sensible) comments. But as with Shane’s similar points above, it’s unlikely that anything I could say would be helpful, short of actually writing the page itself (and probably several other pages in addition). That would take several full-time weeks’ worth of work (at least).

From within the representationalist view, what you say is obviously right. From within the non-representational view, it’s obviously wrong.

There’s a profound perceptual shift that one needs to make in order to switch those views. It’s like the duck/rabbit and young/old woman pictures. If you have only seen the old woman, it’s completely clear that she’s old, and anyone who says it’s a young woman is insane or idiotic. And vice versa. Once you’ve seen both, with a bit of practice you can flip back and forth fluidly.

I don’t know how to induce that flip in other people. I was as incredulous as anyone when I first encountered the nonrepresentational view, and I resisted it vitriolically for several months. Eventually I saw enough bits of the picture in isolation that the whole thing suddenly flipped.

Animism is not Cartesian Dualism

Herbal Panda:

“The natural human view (of pre-modern people) is that the mind is not a physical thing. It is the ‘ghost in the machine’: a ‘spook.’”

I think this is incorrect. If there is a “natural human view”, it is animism - the view that “mind” is a pervasive property of the entire universe, and that all things - including things that modern people may not consider to be “alive” or “conscious” - partake of this property.

Panpsychism/animism vs dualism

Hmm, yes, you are probably right. I’ve seen both claims made by experts in the field. At a guess, the “natural view” is vague, and resembles panpsychism or dualism depending on context.

Embodied Cognition

Hey David,

I’m really enjoying reading through all your work and it has triggered some delightful changes in me. Thank you!

If it does not require too much exposition on your part, I was wondering if you could hint at how closely your views of cognition (“interactional dynamics”, etc.) hew to some of the recent writing on embodied cognition - specifically stuff like this blog, or the work of Anthony Chemero.

Embodied cognition

Hi Gary, glad this has been enjoyable!

I don’t know the current work on embodied cognition well. (Thanks for the pointers; I’ve added them to my to-read list.) From what I’ve read, it’s closely similar to the work Phil Agre and I did in the mid/late 80s, and draws on some of the same earlier philosophical and psychological work. (Gibsonian perceptual psychology and American Pragmatism, for instance.) Although we were among the first to champion this view specifically against the mainstream cognitive science of the time, we weren’t alone; we found several like-minded folks in various other fields. Lakoff was the only one who became well-known, but there were others. Phil organized an interdisciplinary workshop sometime around 1990 that was quite exciting.

I felt at the time that this approach had considerable promise, although probably not in AI in the short term. I thought seriously about switching fields to psychology or philosophy or something, but decided instead to do something entirely different.

The approach seemed to vanish completely for a couple decades (although maybe I was just out of touch). The Clark and Chalmers “Extended Mind” paper was the first sign of a revival (that I know of). It seems to be gathering momentum lately, which I’m happy about.

I would say that most of what I have seen from the renewed movement lacks a piece we thought critical, which was social situatedness. We got that from ethnomethodology and from Heidegger.

The embodied approach may go a long way toward understanding non-social animals. For humans, sociality is so fundamental that looking at embodiment without it risks serious distortion.

When I went to college I met

anders horn:

When I went to college I met some linguistics professors who self-identified as cognitive linguists. (They explained that “cognitive” meant that behaviorism was evil and Chomsky was ridiculous - linguistics doesn’t exactly move at light speed.) They told me that the way words are stored in the brain had to be a whole utterance at a time, because the evidence contraindicated all the simpler previous theories. They were also big fans of language as social and embedded (and of the idea that its purpose is coordination rather than communication), and gave me “Understanding Computers and Cognition” and “Metaphors We Live By” to read. The older of the two was thrilled about the idea of change in the meaning of words being driven by abductive reanalysis. They didn’t give me “Vision, Instruction, and Action” to read, but I expect they would strongly agree with the section on reference being grounded in shared focus. I’m reading more about AI because I want to make a simulation that reproduces the kinds of semantic change that we see in language; but because so much of the patterns in language are there directly because of patterns in the (lived) world, I need the agents to have something to communicate about before I can see if it acts something like language.

Something to communicate about

I want to make a simulation that reproduces the kinds of semantic change that we see in language; but because so much of the patterns in language are there directly because of patterns in the (lived) world, I need the agents to have something to communicate about before I can see if it acts something like language.

Yes, that seems right to me!

Language is almost entirely about people communicating in order to collaborate. It’s inseparable from practical activity (such as making breakfast, but also including social activity, e.g. relationship maintenance).

Linguistics has almost entirely ignored this, which unfortunately makes most of it nonsense. There are exceptions, of course.

FWIW, I think the “conversation analysis” approach is the most likely to be fruitful. One of the last things I did in AI before leaving the field was to try to model some bits of its insights. (I didn’t finish that, and never wrote about it.)

Physically remembering impossible actions

Kris:

When talking about Fighting Games with my community members, we often physically act out the techniques we see on screen in order to communicate them to each other. When we remember what a technique is but can’t quite remember its name, the bodily action springs to mind, and then the name might follow.

This is very amusing when you have strings of impossible movement, such as animations canceling immediately into other animations: seeing a person go from one action as fast as possible into the next, or imitate the action of flying in the air while standing firmly on the ground.

They physically remember humanly impossible actions, because that is the most expedient frame of reference for remembering what characters do in a fighting game! I would have thought that surely they would remember the physical button & joystick inputs for the techniques, but that is not the case. The button & joystick inputs are the physical shortcut to produce the bodily-framed memory of physically impossible action! Delightfully weird!

Any explanation of your non-representational view?

Jeremy Wright:

David,

You said:

From within the representationalist view, what you say is obviously right. From within the non-representational view, it’s obviously wrong.

There’s a profound perceptual shift that one needs to make in order to switch those views. It’s like the duck/rabbit and young/old woman pictures. If you have only seen the old woman, it’s completely clear that she’s old, and anyone who says it’s a young woman is insane or idiotic. And vice versa. Once you’ve seen both, with a bit of practice you can flip back and forth fluidly.

I don’t know how to induce that flip in other people. I was as incredulous as anyone when I first encountered the nonrepresentational view, and I resisted it vitriolically for several months. Eventually I saw enough bits of the picture in isolation that the whole thing suddenly flipped.

For clarification: are you saying you don’t think our knowledge is encoded within our own minds (or bodies at least)? As far as I can tell it must be, and if it is then any scheme for encoding that knowledge could rightfully be called a “representation”.

Also you say you don’t know how to induce the “flip” in others, can you at least point to what flipped you? Or give a brief account? Meaning, meaningfulness, and meaninglessness are all easily accounted for from a representational view, so I’d love to understand why you would abandon it.

The flip

I’m sorry that I haven’t written much about this yet. As I said above, there’s no way to explain this concisely. It involves a major paradigm shift, in which you realize that much of what you believe about minds is wrong. Most people need to come to understand how many of the individual bits are wrong before the whole picture flips over.

are you saying you don’t think our knowledge is encoded within our own minds (or bodies at least)?

Right; in many cases it can’t be. I gave an example in passing in this post; search for “socially distributed knowledge”.

Clearly things-in-the-head play some role, but they are often not sufficient. And, generally, understanding things-in-the-head in terms of representations is misleading.

“Mental representation” is a bit like phlogiston. It’s not that specific empirical claims about it are wrong; it’s that it doesn’t cut reality at the joints, so you have to throw the concept away and start over.

can you at least point to what flipped you? Or give a brief account?

Yes, “I seem to be a fiction” is about this. (In part, at least.) You could search for “Dreyfus” if you don’t want to read the whole thing. His book Being-in-the-World is what flipped me; and unfortunately it might still be the best available account. I hope to do better eventually.

Meaning, meaningfulness, and meaninglessness are all easily accounted for from a representational view

Actually not. Attempting to do this always runs into insoluble logical paradoxes. The snarl of problems around “original intentionality” is one manifestation, for instance.

Things-in-the-head

The snarl of problems around “original intentionality” is one manifestation, for instance.

Let me unpack that one more step…

Suppose that there is a physical thing-in-the-head that you claim represents the knowledge that “George Washington was the first President of the United States.” What physical property makes it represent that?

No one has found an answer to that, even in principle. By “in principle,” I mean the problem is not that we don’t know enough about neurophysiology to answer it.

To make the issue clearer, it may be better to think about a physical thing in a computer that supposedly represents this knowledge. What physical property could make a thing-in-a-computer represent that?

(Most people who believe in mental representations think that computers could have them too, and that there’s no in-principle difference between things-in-computers and things-in-brains. There are exceptions, like John Searle.)

Encodings

Evan H:

OK, I think your computer analogy may have “flipped” me. Either that or hopefully I’m on the cusp but not quite over the edge yet. I’ll try to summarize my current understanding.

I’ll start with the thermostat example. There, we have a direct causal link between two things (temperature and angle). It has to work because of physics, the same way a ball has to fall if dropped. A thermostat whose angle does not encode the temperature is broken, by definition. There is no alternative explanation.

On the other hand, neural “encodings” only make sense in the context of intelligent interpretation. You cannot look at some set of neurons which supposedly encode a fact, in isolation, and determine if they are correct/incorrect/broken. Using the computer analogy, the bit string “001011011001” could represent “George Washington was the first president of the USA” given a program that interprets it that way, but that is just a tautology. In principle any bit string could “represent” any fact, given the correct program to interpret it. Therefore there is no “universal” or “correct” encoding for any given fact. Encodings exist, in the sense that some physical state exists in the brain, but there is in principle no reason to believe that different brains use similar encodings, or that a given brain’s encoding is stable over time, or that encodings must be localized in a single brain - e.g. writing or speech are examples of brain-external encodings of information that, like computer programs, are inherently arbitrary but are also capable of being interpreted.
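A toy Python sketch of that tautology (both decoding schemes below are made up, which is exactly the point): the same bit string “represents” entirely different facts depending on the program interpreting it.

```python
bits = "001011011001"

def decode_as_president(b):
    # Interpretation scheme A (invented): the first four bits index a
    # table of facts about presidents.
    facts = {0b0010: "George Washington was the first president of the USA"}
    return facts.get(int(b[:4], 2), "unknown")

def decode_as_temperature(b):
    # Interpretation scheme B (invented): the whole string is a binary
    # integer, read as tenths of a degree.
    return f"{int(b, 2) / 10} degrees C"

print(decode_as_president(bits))    # -> George Washington was the first president of the USA
print(decode_as_temperature(bits))  # -> 72.9 degrees C
```

Nothing intrinsic to the bit string picks out one decoding over the other; the “fact” lives entirely in the interpreting program, which is why any bit string could in principle represent any fact.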

As part of that intelligent interpretation, many “facts about the world” (e.g. “George Washington was the first president of the USA”) must be interpreted in a social context, meaning an individual must incorporate brain-external encodings. Going back to the thermostat example - a brain that believes John Adams was the first president of the USA is not necessarily a priori “broken”, in the way that a thermostat which does not encode the temperature is broken - there are matters of historical contingency as well as the matter of how that brain came to hold that belief.

Halfway down the rabbit hole

Evan, thanks, this is excellent!

The position you explain here is roughly that of Brian Cantwell Smith in his On the Origin of Objects. I think if you read the Amazon blurb on it, you will recognize some of what you said there. For example, his notion of “the middle distance” between direct causal coupling and irrelevance; and that for something to count as a representation, it has to be capable of being wrong. This seems importantly right, and I think that book is a huge contribution, and I recommend it highly.

That said, I think the rabbit hole goes deeper. Maybe a way to see that is starting from:

the bit string “001011011001” could represent “George Washington was the first president of the USA” given a program that interprets it that way

The question is: what does it mean for a program to interpret “001011011001” as “George Washington was the first president of the USA”? What property of the computation makes it do that? Or, given a particular computation, how can we tell whether or not it is doing that? I don’t think Smith gives an adequate answer to that.

A note written on paper is a representation only because someone can read and interpret its meaning. Searle’s “original intentionality” argument is that this is also true of representations in computers: their meaning is derivative from human use. His assumption is that people are different, in that individual humans somehow have a power of “original” representation, not derived from anything else.

To go deeper, you have to regard all representations (in the “second sense” of “representation”) as at least potentially communicative. Or, to radicalize the claim, there is no original intentionality. All meaning is derived from interpretation potentially by someone else.

Science overreaching itself (into the pseudoscientific?)

My argument was that (cognitive) neuroscientists don’t need to worry about philosophy because it (mostly) isn’t relevant to the science. Your argument here is that they do, because if they don’t listen to philosophers, then philosophers will think they talk nonsense and they use words like computation and representation inappropriately. But they don’t care what philosophers think!

Scientists should care if they project their findings into other areas by assuming the same story holds there (as Chapman pointed out with the neurons in V1 cortex representing image edges, and that process then being projected onto how one’s knowledge of Ouagadougou works). This is potentially bad for science because it can lead to scientists ignoring issues that actually need to be challenged and researched more. Maybe I am wrong here, as I am not familiar with the current research being done, but it seems that epistemological humility is important for science. That is, scientists can take a pragmatic approach (“if it works, it works”) but also be conscientious of all the sticky philosophical issues that demarcate where the pragmatic approach or its findings overreach beyond what they can actually lay claim to. Who knows - by taking philosophy more seriously, scientists might change how they view issues like these, and perhaps someone could think of a hypothesis, or some way to move towards empirically testing these now-tenuous philosophical issues.

That can’t happen, however, if scientists shrug off philosophical considerations just for the purported sake of pragmatism. I have been reading a lot of Haidt (“The Righteous Mind”) and other related things (the Interface Theory of Perception), and it seems that even just how one phrases something can alter perception of, and action towards, that given thing. (E.g. when phrased as “x amount of people will die if you take this action” vs. “x amount of people will be saved if you take this action,” people were much more likely to take the riskier option when it was phrased in the negative/death-focused way. Also, “should the laws be changed to diminish the voting age from 18 to 16?” incurred a much more negative response, as it was perceived as an affront to authority/the status quo, vs. “should the laws be changed to give 16 and 17 year olds the right to vote?”, which was framed in a rights-based style and elicited a more positive response.) Thus, when discussing anything, it is important to consider the high likelihood of all of these “background” moral, social, or cultural processes - and therefore why at least greater philosophical competency could be important to making scientists more scrupulous and epistemologically humble/less overreaching.

Copyright ©2010–2018 David Chapman.