Comments on “A billion tiny spooks”

Representational theory of mind still popular?

Shane 2015-07-13
The representational theory of mind is the dominant approach. Simplifying somewhat, it says that beliefs, desires, and intentions are “represented” as sentences in a special language (“mentalese”). Mentalese, in turn, is “implemented” as physical things (structures, states, or processes) in the brain.

I agree with you that the representational theory of mind is wrong, but not with your statements about its current importance.

Cognitive science was a bit of a failure in my eyes - and the failure is partly due to the importance of the representational theory of mind in tying things together. These days, I would say the majority of people who study the mind (psychologists and neuroscientists) don’t believe in the representational theory of mind (in the Language of Thought version of mentalese that you mention), and it doesn’t feature in their work. There are lots of people who call themselves cognitive psychologists and cognitive neuroscientists, but I don’t think they would subscribe to cognitivism (as formulated here), and they aren’t much interested in what philosophers have to say about meaning, though I couldn’t really judge what most philosophers of mind believe these days. I also doubt that it is what most educated members of the public think - those who read Pinker’s “How the Mind Works” uncritically might assume it to be true. But my guess is that the popularity of this particular idea peaked about 20 years ago (Pinker’s book was published in ’97), and there have been a lot of other popular science books since then on how the mind works which do not assume such a theory, and are more influenced by empirical findings than by philosophical theories.

It might be fair to say that many believe in a weaker view that disregards cognitivism but takes the stance that the mind/brain processes representations, and those representations have “meaning” and those meanings are somehow “in the head”. I would be interested to see what you think the negative consequences of that are.

The failure of cognitive science

David Chapman 2015-07-13
Cognitive science was a bit of a failure in my eyes... [it] peaked about 20 years ago

Yes, indeed! This page (now only a summary) will explain the reasons for this, along with a brief history.

It will argue, however, that most of the cognitivist assumptions (most importantly, individualist representationalism) were accepted uncritically by neuroscience.

aren't much interested in what philosophers have to say about meaning

Yeah, and this is a big problem, because they accept representationalism without thinking about it at all. In other words:

many believe in a weaker view that disregards cognitivism but takes the stance that the mind/brain processes representations, and those representations have "meaning" and those meanings are somehow "in the head"

For nearly all of them, the question “how could that possibly work?” doesn’t even come up; it’s just taken for granted that it somehow does. Philosophers have asked the question and failed to answer it, which shows that one ought to worry that maybe it doesn’t work. (And, in fact, I’m reasonably sure that it can’t.)

It would be helpful for neuroscientists to notice that philosophy of mind has failed; this is directly relevant to their own research questions. Instead there’s a comfortable sense of “well, that’s obvious, and anyway it’s not part of our problem domain, we’re doing science, we don’t care about questions like that.”

Getting this wrong has profound implications for what sort of thing a person is, what a self is, what societies are, how individuals and societies relate, and so on.

Should science care about philosophical (non) problems?

Shane 2015-07-14
Philosophers have asked the question and failed to answer it, which shows that one ought to worry that maybe it doesn’t work… there are good in-principle reasons to think that no answers can be found

Should a scientist worry or care in the slightest? Especially if it is a question for which no answers can be found? I am not convinced they should. It seems similar to the hard problem of consciousness, where most neuroscientists don’t worry about the arguments philosophers have with each other (and a few also think no answers can be found), and it doesn’t seem to be an impediment to the science.

I mentioned meaning in my post, but only to try to approximate or guess the kind of argument you want to make. But the key thing, as you mention, is individualist representations - the idea that the brain works by doing computations on information, that the information can be described as a “representation”, and that a high-level “cognitive” description is a useful level at which to understand the operation of the system. This is accepted uncritically because it works. The idea of “representations” may be philosophically dubious, but, say, if you want to describe how vision or hearing works (i.e. sub-systems rather than a person-level explanation of beliefs and desires), a cognitive explanation is pretty helpful (though in cognitive neuroscience, compared with cognitive science, the wet-stuff level is needed for the complete story).

Why (some) neuroscientists need to understand representation

David Chapman 2015-07-14

I agree that the “hard problem” is probably best ignored. (Although I have suggested an fMRI experiment that might help with it!)

“Representation” is used to mean two (or more) very different things. As an example of the first: “in a thermostat, the angle of the bimetallic strip represents the temperature in the room; the thermostat uses this to compute when to turn the furnace on.” Here it’s unproblematic to understand the system causally; there is a direct physical connection between the temperature and the angle. However, exactly because the causality is so direct, it’s not clear that thinking about it in terms of “computation” or “representation” is helpful. It’s probably mostly harmless, though.
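
To make the first sense concrete, here is a toy sketch (the numbers and names are invented purely for illustration), in which the internal state is fixed by direct causal coupling rather than by interpretation:

    # Toy thermostat: the strip angle is set by physics, not by interpretation.
    SET_POINT_ANGLE = 12.0  # angle (degrees) at which the contacts close

    def strip_angle(room_temp_c):
        # The bimetallic strip bends in direct proportion to room temperature.
        return 0.6 * room_temp_c

    def furnace_on(room_temp_c):
        # The "computation": switch the furnace on when the strip is below the set point.
        return strip_angle(room_temp_c) < SET_POINT_ANGLE

    print(furnace_on(15.0))  # True: cold room, furnace on
    print(furnace_on(25.0))  # False: warm room, furnace off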

Now consider “I know that Ouagadougou is the capital of Burkina Faso.” I’ve never been there; I don’t know anyone who has been there; I don’t know anything about the place; I wouldn’t recognize it if I were dropped there. My causal coupling with Ouagadougou is extremely indirect. Or “I know that Radagast was a wizard”; a true statement, even though Radagast doesn’t exist and never did. A causal account seems inherently impossible or useless.

The representational approach says that I know Ouagadougou is the capital of Burkina Faso because there’s a representation in my head that says “Ouagadougou is the capital of Burkina Faso” (in mentalese). Then I can do computations with that (e.g. I know countries have only one capital, so I know Timbuktu is not the capital).
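
To make concrete the kind of thing this story posits, here is a toy sketch (the data structure and rule are invented for illustration; it is not a claim about how brains actually store anything): propositions stored as sentence-like structures, plus computation over them.

    # Toy "mentalese" fact base, plus the background rule
    # "a country has exactly one capital".
    facts = {
        ("capital-of", "Burkina Faso"): "Ouagadougou",
    }

    def capital_of(country):
        return facts.get(("capital-of", country))

    def is_capital(city, country):
        # If the stored capital is some other city, this one is thereby not the capital.
        return capital_of(country) == city

    print(is_capital("Ouagadougou", "Burkina Faso"))  # True
    print(is_capital("Timbuktu", "Burkina Faso"))     # False, inferred from the one-capital rule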

This approach is what can’t work. There’s a slew of reasons it can’t work, any one of which would be sufficient. All attempts to say what it would even mean for a thing-in-the-head to represent “Ouagadougou is the capital of Burkina Faso” have failed in principle (never mind the scientific and engineering considerations, which are also insuperable).

It’s an unfortunate historical confusion to use the same word “representation” for the thermostat and Ouagadougou. What is going on in the two cases must be entirely different. However, people regularly implicitly assume that they are more-or-less the same, and slide from one to the other without noticing.

In perceptual neuroscience, it’s (mostly) unproblematic to say “neurons in V1 cortex represent image edges,” because there’s a straightforward causal pathway involved. Because this does work, and “representation” is used for both phenomena, it’s normal to assume that there are neurons somewhere else that represent my knowledge of Ouagadougou, and they could be understood using the same methods. But that can’t be true.

Single-cell-recording neuroscience mostly avoided this problem. But now in the fMRI era, neuroscientists make claims about representations of the second sort. When they do, they talk nonsense.

Undoubtedly, there are neurons involved in my knowledge of Ouagadougou; but so are other things that aren’t in my head at all. This knowledge is socially distributed. I know about Ouagadougou only by relying on other people, who could (for example) get me there.

Not caring doesn't seem to get in the way

Shane 2015-07-18

My argument was that (cognitive) neuroscientists don’t need to worry about philosophy because it (mostly) isn’t relevant to the science. Your argument here is that they do, because if they don’t listen to philosophers, then philosophers will think that they talk nonsense and that they use words like computation and representation inappropriately. But they don’t care what philosophers think!

Your argument appears to be based on the assumption that non-philosophers should care about the philosophical problem of reference or intentionality (i.e. that “Ouagadougou” has to have some link to Ouagadougou for it to have “meaning”). (And part of your argument is based on the mentalese-style representationalist theory of mind, which, as I mentioned before, though of historical interest, has little to no importance to the current scientific understanding of how the brain works.) But what psychologists/neuroscientists care about is how we can use information (or whatever is learnt and stored in the networks of the brain) to do stuff, like store the information you supplied in your comment, and answer questions about what the capital of Burkina Faso is. And there doesn’t need to be some proximal or causal story connecting that to something in the world - “Ouagadougou” - in order to do that.

I am looking forward to seeing your discussion of the consequences of wrong-headed assumptions. However, those consequences (as I understand them) lie mainly beyond science - i.e. in how we see ourselves and society, etc. And here I can see philosophy being helpful.

Limits due to draftiness

David Chapman 2015-07-18

Writing this book incrementally, over many years, causes many problems. One is that most of it is currently either entirely absent, or appears only in draft form, usually vague and sketchy.

If this page were complete, I hope it would explain why I would disagree with some of your last comment, or would clarify how you misunderstood what I was saying (so that we would agree after all). But I expect that writing it will be a couple of full-time weeks’ worth of work; and I think I probably can’t explain the line of argument any better in a comment.

One part of the writing will be checking in on the current state of cognitive neuroscience, which I’m out of date on. It’s possible that it has changed enough that what I was going to say in the page is obsolete and irrelevant. I think that’s unlikely—but until I’ve done the reading, I can’t be sure.

Thanks for your comments—they were incisive and helpful!

Okay, if I may enter this

Obnoxious Stranger 2015-08-22

Okay, if I may enter this discussion…

While I sort of agree that the current outlook on cognition tends to perilously gloss over “extracorporeal” aspects of cognition, I think it’s quite fair to say that the cognitive processes of a grown-up human are relatively autonomous.

Thought experiment:

If someone (who already knows the capital of Burkina Faso) were to be isolated in a room for, say, two weeks, and then presented with a “what is the capital of Burkina Faso” question, they would quite definitely remember it’s Ouagadougou and not, say, Berlin.

A “can not sync with human collective” error will not happen :)

As to “neuro-representationism”, I can’t help but clumsily try to defend the poor thing, or at least its weakest versions (which are, IMHO, rather less broken than their mentalese progenitors).

The “weak” version of “neuro-representationism” does not imply the existence of either a “mentalese language” or some kind of “platonic realm of representations”; it merely suggests that, for you to be capable of (autonomously) making claims about the capitals of foreign countries, your brain needs to encode both specific information about them and the wider contextual framework needed for you to make sense of the question and the concepts involved (if you can’t tell what a “country” is, how can you discuss what its capital is?), which does not appear to be an overly absurd claim.

Also, to the best of my limited understanding, this approach does not demand that the construct being “weak-sense represented” in this manner is a real object or something you interacted with personally (so it can be “Radagast”, “communism” and even “a blivet” though some might argue that the last two items are tautological ;) )

It’s representationism in the weakest possible sense, and frankly it does not look like there is much in terms of sane physicalist alternatives to it.

Fun sidenotes:
1) There must be some kind of encoding scheme in the brain for this to work, so you can of course claim that it tries to “sneak mentalese through the back door”.
However, this scheme is not a “language” except in the most general sense (in which the genetic code, as well as processor instructions and file systems, would also count as “languages”).

2) If 1) is true, then it just might be that there are concepts that cannot be “represented” in the brain, “literally unthinkable” things, which is a curious idea.

3) What if some part of philosophy crucial for a completely coherent epistemology actually belongs to this “literally unthinkable” category, and how would we even find that out?

The duck/rabbit flip

David Chapman 2015-08-23

Hello Obnoxious,

I’m torn, because I would like to respond substantively to your (unobnoxious and perfectly sensible) comments. But as with Shane’s similar points above, it’s unlikely that anything I could say would be helpful, short of actually writing the page itself (and probably several other pages in addition). That would take several full-time weeks’ worth of work (at least).

From within the representationalist view, what you say is obviously right. From within the non-representational view, it’s obviously wrong.

There’s a profound perceptual shift that one needs to make in order to switch those views. It’s like the duck/rabbit and young/old woman pictures. If you have only seen the old woman, it’s completely clear that she’s old, and anyone who says it’s a young woman is insane or idiotic. And vice versa. Once you’ve seen both, with a bit of practice you can flip back and forth fluidly.

I don’t know how to induce that flip in other people. I was as incredulous as anyone when I first encountered the nonrepresentational view, and I resisted it vitriolically for several months. Eventually I saw enough bits of the picture in isolation that the whole thing suddenly flipped.

Animism is not Cartesian Dualism

Herbal Panda 2015-09-01

“The natural human view (of pre-modern people) is that the mind is not a physical thing. It is the ‘ghost in the machine’: a ‘spook.’”

I think this is incorrect. If there is a “natural human view”, it is animism - the view that “mind” is a pervasive property of the entire universe, and that all things - including things that modern people may not consider to be “alive” or “conscious” - partake of this property.

Panpsychism/animism vs dualism

David Chapman 2015-09-01

Hmm, yes, you are probably right. I’ve seen both claims made by experts in the field. At a guess, the “natural view” is vague, and resembles panpsychism or dualism depending on context.

Embodied Cognition

Gary Basin 2016-05-24

Hey David,

I’m really enjoying reading through all your work and it has triggered some delightful changes in me. Thank you!

If it does not require too much exposition on your part, I was wondering if you could hint at how closely your views of cognition (“interactional dynamics”, etc.) hew to some of the recent writing on embodied cognition? Specifically, stuff like this blog or the work of Anthony Chemero.

Embodied cognition

David Chapman 2016-05-25

Hi Gary, glad this has been enjoyable!

I don’t know the current work on embodied cognition well. (Thanks for the pointers; I’ve added them to my to-read list.) From what I’ve read, it’s closely similar to the work Phil Agre and I did in the mid/late 80s, and draws on some of the same earlier philosophical and psychological work. (Gibsonian perceptual psychology and American Pragmatism, for instance.) Although we were among the first to champion this view specifically against the mainstream cognitive science of the time, we weren’t alone; we found several like-minded folks in various other fields. Lakoff was the only one who became well-known, but there were others. Phil organized an interdisciplinary workshop sometime around 1990 that was quite exciting.

I felt at the time that this approach had considerable promise, although probably not in AI in the short term. I thought seriously about switching fields to psychology or philosophy or something, but decided instead to do something entirely different.

The approach seemed to vanish completely for a couple decades (although maybe I was just out of touch). The Clark and Chalmers “Extended Mind” paper was the first sign of a revival (that I know of). It seems to be gathering momentum lately, which I’m happy about.

I would say that most of what I have seen from the renewed movement lacks a piece we thought critical, which was social situatedness. We got that from ethnomethodology and from Heidegger.

The embodied approach may go a long way toward understanding non-social animals. For humans, sociality is so fundamental that looking at embodiment without it risks serious distortion.

When I went to college I met

anders horn 2016-05-25

When I went to college I met some linguistics professors who self-identified as cognitive linguists (they explained that “cognitive” meant that behaviorism was evil and Chomsky was ridiculous (linguistics doesn’t exactly move at light speed)). They told me that the way words were stored in the brain had to be a whole utterance at a time, because the evidence contraindicated all the simpler previous theories. Also, they were big fans of language as social and embedded (and of the idea that its purpose was coordination rather than communication), and gave me “Understanding Computers and Cognition” and “Metaphors We Live By” to read. The older one of them was thrilled about the idea of change in the meaning of words being driven by abductive reanalysis. They didn’t give me “Vision, Instruction, and Action” to read, but I expect they would strongly agree with the section on reference being grounded in shared focus. I’m reading more about AI because I want to make a simulation that reproduces the kinds of semantic change that we see in language, but because so much of the patterns in language are there directly because of patterns in the (lived) world, I need the agents to have something to communicate about before I can see if it acts something like language.

Something to communicate about

David Chapman 2016-05-26
I want to make a simulation that reproduces the kinds of semantic change that we see in language, but because so much of the patterns in language are there directly because of patterns in the (lived) world, I need the agents to have something to communicate about before I can see if it acts something like language.

Yes, that seems right to me!

Language is almost entirely about people communicating in order to collaborate. It’s inseparable from practical activity (such as making breakfast, but also including social activity, e.g. relationship maintenance).

Linguistics has almost entirely ignored this, which unfortunately makes most of it nonsense. There are exceptions, of course.

FWIW, I think the “conversation analysis” approach is the most likely to be fruitful. One of the last things I did in AI before leaving the field was to try to model some bits of its insights. (I didn’t finish that, and never wrote about it.)

Physically remembering impossible actions

Kris 2016-05-26

When talking about Fighting Games with my community members, we often physically act out the techniques we see on screen in order to communicate them to each other. When we remember what a technique is but can’t quite remember its name, the bodily action springs to mind, and then the name might follow.

This is very amusing when you have strings of impossible movement, such as animations canceling immediately into other animations: seeing a person go from one action as fast as possible into the next, or imitating the action of flying in the air while standing firmly on the ground.

They physically remember humanly impossible action, because that is the most expedient frame of reference for remembering what characters do in a fighting game! I would have thought that surely they would remember the physical button & joystick inputs for the techniques, but that is not the case. The button & joystick inputs are the physical shortcut to produce the bodily-framed memory of physically impossible action! Delightfully weird!

Any explanation of your non-representational view?

Jeremy Wright 2016-10-27

David,

You said:

From within the representationalist view, what you say is obviously right. From within the non-representational view, it’s obviously wrong.

There’s a profound perceptual shift that one needs to make in order to switch those views. It’s like the duck/rabbit and young/old woman pictures. If you have only seen the old woman, it’s completely clear that she’s old, and anyone who says it’s a young woman is insane or idiotic. And vice versa. Once you’ve seen both, with a bit of practice you can flip back and forth fluidly.

I don’t know how to induce that flip in other people. I was as incredulous as anyone when I first encountered the nonrepresentational view, and I resisted it vitriolically for several months. Eventually I saw enough bits of the picture in isolation that the whole thing suddenly flipped.

For clarification: are you saying you don’t think our knowledge is encoded within our own minds (or bodies at least)? As far as I can tell it must be, and if it is then any scheme for encoding that knowledge could rightfully be called a “representation”.

Also you say you don’t know how to induce the “flip” in others, can you at least point to what flipped you? Or give a brief account? Meaning, meaningfulness, and meaninglessness are all easily accounted for from a representational view, so I’d love to understand why you would abandon it.

The flip

David Chapman 2016-10-27

I’m sorry that I haven’t written much about this yet. As I said above, there’s no way to explain this concisely. It involves a major paradigm shift, in which you realize that much of what you believe about minds is wrong. Most people need to come to understand how many of the individual bits are wrong before the whole picture flips over.

are you saying you don’t think our knowledge is encoded within our own minds (or bodies at least)?

Right; in many cases it can’t be. I gave an example in passing in this post; search for “socially distributed knowledge”.

Clearly things-in-the-head play some role, but they are often not sufficient. And, generally, understanding things-in-the-head in terms of representations is misleading.

“Mental representation” is a bit like phlogiston. It’s not that specific empirical claims about it are wrong; it’s that it doesn’t cut reality at the joints, so you have to throw the concept away and start over.

can you at least point to what flipped you? Or give a brief account?

Yes, “I seem to be a fiction” is about this. (In part, at least.) You could search for “Dreyfus” if you don’t want to read the whole thing. His book Being-in-the-World is what flipped me; and unfortunately it might still be the best available account. I hope to do better eventually.

Meaning, meaningfulness, and meaninglessness are all easily accounted for from a representational view

Actually not. Attempting to do this always runs into insoluble logical paradoxes. The snarl of problems around “original intentionality” is one manifestation, for instance.

Things-in-the-head

David Chapman 2016-10-27

The snarl of problems around “original intentionality” is one manifestation, for instance.

Let me unpack that one more step…

Suppose that there is a physical thing-in-the-head that you claim represents the knowledge that “George Washington was the first President of the United States.” What physical property makes it represent that?

No one has found an answer to that, even in principle. By “in principle,” I mean the problem is not that we don’t know enough about neurophysiology to answer it.

To make the issue clearer, it may be better to think about a physical thing in a computer that supposedly represents this knowledge. What physical property could make a thing-in-a-computer represent that?

(Mostly people who believe in mental representations think that computers could have them too, and there’s no in-principle difference between things-in-computers and things-in-brains. There are exceptions, like John Searle.)

Encodings

Evan H 2017-01-01

OK, I think your computer analogy may have “flipped” me. Either that or hopefully I’m on the cusp but not quite over the edge yet. I’ll try to summarize my current understanding.

I’ll start with the thermostat example. There, we have a direct causal link between two things (temperature and angle). It has to work, because of physics. The same way a ball has to fall if dropped. A thermostat whose angle does not encode the temperature is broken, by definition. There is no alternative explanation.

On the other hand, neural “encodings” only make sense in the context of intelligent interpretation. You cannot look at some set of neurons which supposedly encode a fact, in isolation, and determine if they are correct/incorrect/broken. Using the computer analogy, the bit string “001011011001” could represent “George Washington was the first president of the USA” given a program that interprets it that way, but that is just a tautology. In principle any bit string could “represent” any fact, given the correct program to interpret it. Therefore there is no “universal” or “correct” encoding for any given fact. Encodings exist, in the sense that some physical state exists in the brain, but there is in principle no reason to believe that different brains use similar encodings, or that a given brain’s encoding is stable over time, or that encodings must be localized in a single brain - e.g. writing or speech are examples of brain-external encodings of information that, like computer programs, are inherently arbitrary but are also capable of being interpreted.
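
A toy sketch of that point (the bit string and lookup tables are arbitrary, which is exactly the point): the same bits “represent” different facts under different, equally arbitrary interpreting programs.

    bits = "001011011001"

    # Interpretation scheme A: an arbitrary table mapping these bits to one proposition.
    scheme_a = {"001011011001": "George Washington was the first president of the USA"}

    # Interpretation scheme B: a different, equally arbitrary table for the same bits.
    scheme_b = {"001011011001": "Ouagadougou is the capital of Burkina Faso"}

    print(scheme_a[bits])  # one "fact"...
    print(scheme_b[bits])  # ...and a completely different one, from the very same bits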

As part of that intelligent interpretation, many “facts about the world” (e.g. “George Washington was the first president of the USA”) must be interpreted in a social context, meaning an individual must incorporate brain-external encodings. Going back to the thermostat example - a brain that believes John Adams was the first president of the USA is not necessarily a priori “broken”, in the way that a thermostat which does not encode the temperature is broken - there are matters of historical contingency as well as the matter of how that brain came to hold that belief.

Halfway down the rabbit hole

David Chapman 2017-01-02

Evan, thanks, this is excellent!

The position you explain here is roughly that of Brian Cantwell Smith in his On the Origin of Objects. I think if you read the Amazon blurb on it, you will recognize some of what you said there. For example, his notion of “the middle distance” between direct causal coupling and irrelevance; and that for something to count as a representation, it has to be capable of being wrong. This seems importantly right, and I think that book is a huge contribution, and I recommend it highly.

That said, I think the rabbit hole goes deeper. Maybe a way to see that is starting from:

the bit string “001011011001” could represent “George Washington was the first president of the USA” given a program that interprets it that way

The question is: what does it mean for a program to interpret “001011011001” as “George Washington was the first president of the USA”? What property of the computation makes it do that? Or, given a particular computation, how can we tell whether or not it is doing that? I don’t think Smith gives an adequate answer to that.

A note written on paper is a representation only because someone can read and interpret its meaning. Searle’s “original intentionality” argument is that this is also true of representations in computers; their meaning is derivative from human use. His assumption is that people are different, in that individual humans somehow have a power of “original” representation, not derived.

To go deeper, you have to regard all representations (in the “second sense” of “representation”) as at least potentially communicative. Or, to radicalize the claim, there is no original intentionality. All meaning is derived from interpretation potentially by someone else.

Science overreaching itself (into the pseudoscientific?)

Trevor 2018-07-26

My argument was that (cognitive) neuroscientists don’t need to worry about philosophy because it (mostly) isn’t relevant to the science. Your argument here is that they do, because if they don’t listen to philosophers, then philosophers will think that they talk nonsense and that they use words like computation and representation inappropriately. But they don’t care what philosophers think!

Scientists should care if they project their findings into other areas by assuming the same story follows there (as Chapman points out with the neurons in V1 cortex representing image edges, and the projection of that process onto one’s knowledge of Ouagadougou, as if it were similar). This is potentially bad for science because it can lead to scientists ignoring issues that actually need to be challenged and researched more. Maybe I am wrong here, as I am not familiar with the current research being done, but it seems that epistemological humility is important for science. That is, scientists can take a pragmatic approach (‘if it works it works’) but also be mindful of all the sticky philosophical issues that demarcate when the pragmatic approach or its findings overreach beyond what they can actually lay claim to. Who knows, by taking philosophy more seriously it could change how scientists view issues like these, and perhaps allow someone to think of a hypothesis or some way to move towards empirically testing these now tenuous philosophical issues.

That can’t happen, however, if scientists shrug off philosophical considerations just for the purported sake of pragmatism. I have been reading a lot of Haidt (“The Righteous Mind”) and other related things (The Interface Theory of Perception), and it seems that even how one phrases something can alter perception of, and action towards, a given thing. (E.g. when phrased as ‘x number of people will die if you take this action’ vs. ‘x number of people will be saved if you take this action,’ people were much more likely to take the riskier option when it was phrased in the negative/death-focused way; also, ‘should the laws be changed to diminish the voting age from 18 to 16?’ incurred a much more negative response, as it was perceived as an affront to authority/the status quo, vs. ‘should the laws be changed to give 16 and 17 year olds the right to vote?’, which was framed in a rights-based style and elicited a more positive response.) Thus, when discussing anything it is important to consider the high likelihood of all of these ‘background’ moral or social or cultural processes - and therefore why at least greater philosophical competency could be important in making scientists more scrupulous and epistemologically humble/less overreaching.

I find this argument

Julius Schmidt 2019-08-16

I find this argument confusing. A mind (human or animal) needs a model of the world around it in order to effect action. If it is to be useful, it needs to satisfy your criterion of causality. If the mind is basically a fancy computer (no ghost in the machine), this model has to be coded in a certain way.

Now, given a coded representation of the real world, a well-engineered system will tolerate modification and still yield sensible results, through extrapolation. If I somehow ‘hack’ your mind to make you think there is a black cat in the room, you will act as if there really was a black cat there.

Of course, we don’t actually need anything that far-fetched, since you can just imagine that there is a black cat in the room, i.e. take your own mental representation, modify it and break the link with reality. Your mind doesn’t really care whether or not the representation represents reality, just like your calculator doesn’t care whether the numbers you enter ‘mean’ something.

Maybe we are really agreeing here and we are just disagreeing over what ‘representation’ means, but to me the idea of a ‘mental object’ or ‘abstract(ed) representation’ is a straightforward extension of a ‘thermostat-style’ representation.

Further thoughts

Julius Schmidt 2019-08-17

I had a look at the Dreyfus paper on Heideggerian AI and now I’m even more confused. It’s really unclear to me what the positions even are.

Maybe an analogy to control theory:
The classical AI position seems to be like naive open-loop control. You build a model of the system, invert it and then you can control it (…NOT).
Heidegger strikes me as similar to simple closed-loop control.
I.e. a thermostat! A PID controller does not involve a representation of the system it is trying to control (a minimal sketch follows below).
What I think happens in the brain (and what AI needs to strive for) is closer to things like more complex control approaches, which combine modelling with continuous feedback (cf. the Kalman filter).
You can nitpick the details but I think it is mathematically provable that such systems necessarily develop models of their surroundings.
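
A minimal PID sketch (purely illustrative; the gains and the toy plant are made up) to show what I mean: the controller’s only state is the error history, so it holds no model of the plant it controls, which is the sense in which it is thermostat-like. Approaches like the Kalman filter add exactly the modelling piece that is missing here.

    class PID:
        def __init__(self, kp, ki, kd):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, setpoint, measurement, dt):
            # The controller only ever sees the error signal, never the plant itself.
            error = setpoint - measurement
            self.integral += error * dt
            derivative = (error - self.prev_error) / dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # Toy plant: room temperature responds to heater output, with heat loss.
    pid = PID(kp=2.0, ki=0.5, kd=0.1)
    temp = 15.0
    for _ in range(200):
        heat = pid.update(setpoint=21.0, measurement=temp, dt=0.1)
        temp += 0.1 * (heat - 0.5 * (temp - 10.0))  # dynamics the controller never sees
    print(round(temp, 1))  # approaches the 21.0 setpoint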

Really, both the classical AI and Heidegger strike me as confused stances here, fixating on one aspect (representation / embodiment) and denying the other.
I think the correct model is that humans develop a representation of their environment which is continually updated as new information comes in.
(I think culture also integrates into this model. Sure, knowledge is ‘socially distributed’, but the representation acts as a ‘cache’; I don’t become completely stupid just because I’m isolated)

Representation

David Chapman 2019-08-17

Yes, I’m sorry, this isn’t an easy thing to explain. (If it were easy, I would have finished writing this page years ago.) In the late 80s I spent much of my time explaining it to cognitive scientists, and some got it, and some didn’t. It requires a major flip in your understanding.

If you actively want to try to get it, I’d suggest reading Phil Agre’s Computation and Human Experience.

David,

Kaj Sotala 2020-04-20

David,

Now consider “I know that Ouagadougou is the capital of Burkina Faso.” I’ve never been there; I don’t know anyone who has been there; I don’t know anything about the place; I wouldn’t recognize it if I were dropped there. My causal coupling with Ouagadougou is extremely indirect. Or “I know that Radagast was a wizard”; a true statement, even though Radagast doesn’t exist and never did. A causal account seems inherently impossible or useless.

I am confused by these examples; the causal chain is certainly slightly longer than in the case of vision, and the nature of social knowledge is slightly different than the nature of physical knowledge, but I don’t see why that should make a causal account impossible.

For the Radagast example: Tolkien wrote the Lord of the Rings, in which Radagast was stated to be a wizard, thus establishing Radagast as a wizard in LoTR. Some copy of LoTR then ended up in your hands and you read it, leaving you with a memory of having read LoTR and the fact that Radagast was a wizard within that novel. If you engage more with the LoTR fandom and franchise, you might see this fact referenced more often, accumulating more memories of it.

For Ouagadougou: some legislative process made the decision to make Ouagadougou the capital of Burkina Faso. Knowledge of this decision was then recorded in various procedures, documents, news articles etc., and eventually made it to some encyclopedia or other work which you read, leaving in your mind a representation of having read this fact.

In both cases you hear or read some statement which is causally entangled with the fact in question, allowing you to learn of this fact. The causal account may involve multiple steps, but it is still there.

Of course, to be precise, in these explanations the brain doesn’t represent the fact itself, but rather the memory of having heard about it… but that seems mostly like an episodic/semantic memory distinction, in that after one or more episodic memories of hearing the fact are recorded, it tends to get extracted into a general form into semantic memory which represents just the statement of the fact, without necessarily saving the original source of that claim.

Suppose that there is a physical thing-in-the-head that you claim represents the knowledge that “George Washington was the first President of the United States.” What physical property makes it represent that?

Not being American, I think that the first time I learned about George Washington might have been when I played Day of the Tentacle as a child; the player encounters George Washington in the game, and finds out that he was an early POTUS. So the physical thing-in-my-head that represented this knowledge would have been the visual memories of playing the game and reading the dialogue. (Though to be exact, while it was pretty strongly implicit that he was going to be the first President of the United States, I am not sure I ever explicitly put together that he was in fact the very first… until reading your comment here. So actually the physical thing in my head that represents the fact that George Washington was the first President of the United States might actually be the memory of this very conversation that we are having right now! Yeah, I’m not very read up on American history.)

An additional thing that makes these memories a representation of this fact is that, if someone asks me “who was the first President of the United States”, I end up answering “George Washington”; and if asked “how do you know”, I might also reply with an account of how I learned of this. (As I just did in the previous paragraph.) So another physical property that makes the physical thing-in-my-head represent George Washington being the first POTUS is that it is involved in the causal chain of someone asking me a question and me then speaking (or writing) the answer of “George Washington”.

I am not sure why the computer example would make this clearer, since I can just substitute a robot-me which plays Day of the Tentacle and participates in the comment section of your blog, and get essentially the same answer. :-)

But if you want a different example, I was actually just coincidentally playing with the “World Map Quiz” app for memorizing the capitals of different countries; in the case of that program, the physical thing-in-the-phone would be the bits which tell me “correct answer!” when it asks me “what country is Ouagadougou the capital of” and I choose Burkina Faso on the map. Again, this is a correct representation of Ouagadougou being the capital of Burkina Faso, by virtue of it being part of a physical causal chain which started from the fact of Ouagadougou being declared the capital of Burkina Faso and that knowledge then being transmitted all the way to this app… and also by virtue of it being able to transmit that fact further to my brain, from which I can tell other people about it and extend the chain, etc.

(Also, since I have been on a

Kaj Sotala 2020-04-20

(Also, since I have been on a predictive processing kick recently, an answer in terms of that framework might be: something in the brain represents the capital of Burkina Faso being Ouagadougou, if the brain consistently predicts that other people will treat Ouagadougou as being the capital of Burkina Faso, and causes behavior which is in line with that.)

The problem with stubs

David Chapman 2020-04-20

Most of Meaningness, ten years after I started writing it, is still just “stubs”—placeholder pages with just a bit of text saying what is supposed to be there eventually. This is bad for lots of reasons, and I apologize for the agonizingly slow rate of production.

In the case of this page, one consequence has been repeatedly pointing out in this comment section that I haven’t yet explained the topic, and can’t explain it short of actually writing the page, which isn’t likely to happen soon (because other parts of the book have higher priority).

If you want to explore the topic further, it has been a major issue in the philosophy of mind, and cognitive science more generally. It was hashed out particularly in the 1980s; all possible positions were explored then, and all were found to be unsatisfactory. Since then, the field has mostly left the issue as an unproductive morass, best avoided. When philosophers take positions on it, they argue along the lines of “this possibility is less bad than that one,” rather than coming up with anything new.

So you can read about that in mainstream sources, and I’d encourage you to do so. (The stub references the SEP article which is an obvious starting point.)

I don’t have anything new to say either. The goal of the page is mainly to point out that no version of representationalism is widely accepted as having resolved the fundamental problems. It will then suggest that this is because representationalism is wrong. This is also not new; it goes back to at least the 1970s, and is a common view in current “4E” cognitive science.

I’m closing comments on the page for now. Over the five years of discussion here, I think we’ve collectively done a surprisingly good job of pointing out some of the major issues, and this archive may be useful for some readers, in lieu of the actual page.