Comments on “Abstract Reasoning as Emergent from Concrete Activity”

Comments


Romeo Stevens

This was a fun post! It gave rise to the thought that auditory schemas are what we're engaging when we discuss genres of music, or when a strange accent suddenly snaps into comprehension. Visual schemas are one reason chain stores are popular (reduced cognitive load). Linguistic schemas are how we get specificity into our blanks, as Gendlin describes: how do you know, when you've found it, that this is the word you were looking for?

Thirty years on, are you still confident that cognitive cliches are the most abstract structures? I see them as part of the structure of meta-heuristics, but not as fully covering that space. Maybe it's just that I don't yet know how to represent some of it as cognitive cliches.

Cognitive cliches

Yeah, I don't know what to think about cognitive cliches. I vaguely intend to revisit them soonish, to try to make them useful for something or other!


After reading the article, the leading statement, "We believe that abstract reasoning is not primitive, but derived ... from concrete activity.", seems to refer primarily to interactions with inanimate objects; i.e., between one mind and an inanimate world.

A very different view of cognition, while acknowledging "man the engineer/tool-maker", makes a detailed argument for the development (within a lifetime) of abstract thought as imaginable only in a context of developing sociality. This is Michael Tomasello's The Cultural Origins of Human Cognition (1999). He also wrote the complementary Origins of Human Communication (2008). Tomasello has spent a couple of decades turning out (with many coauthors) an amazing series of studies, largely of how primates and human infants and children deal with various situations, and writing a few books combining massive empiricism, analysis, and philosophical depth.

I recently wrote a Ribbonfarm article "From Monkey Neurons to the Meta-Brain" drawing on his work (but equally on work of a couple of very different disciplines), which I think looks at some of the themes you write about.

I've been reading and admiring your work for some time, first attracted by "Building a Bridge to Meta-Rationality...".

Between one mind and an inanimate world

between one mind and an inanimate world

Yes, I think that's a pretty fair criticism of this paper.

Our understanding did draw heavily on ethnomethodology, which studies social interaction. The paper makes some gestures in that direction, in talking about internalization and reflection, for example. Other parts of our work tried to draw this out more explicitly. However, as AI researchers, we were drastically limited by the tech available at the time, which wasn't up to the job. The final bit of AI research I did (in 1992 I think) was on language use to support collaboration. I gave up on AI part way through that project, and never wrote it up.

Thanks for pointing at Tomasello's work! From a quick glance, it seems very interesting. I'll read more when I get a chance.

Brief intros to Tomasello

"The Ultra-Social Species" is an 8-page overview for as general an audience as he'd ever reach of some major conclusions with quite a bit of experimental backing.
"Understanding and sharing intentions: The origins of cultural cognition" is a dense 16 pages followed by 30 pages of 1000-word responses by a distinguished group including Jerome Bruner, Philippe Rochat (the main author cited in Sarah Perry's Essence of Peopling), and Frans de Waal, and 6 pages of response to the responses.

This and the last post

Peter

There's an interesting symmetry between some of the topics here and the comments on the previous post. The quip-level vague oversimplification is "the comments in the previous post were saying that things conventionally thought of in the world must be at least partly in people's heads, this post is saying that things conventionally thought of as in people's heads must be at least partly in the world."

(The relevant bit of the comments thread: "we agree that 'objective, mind-independent truths' about things like bits of rock is a really silly idea.")


I’m annoyed that cognitive “science”—a mistaken, unscientific ideology of meaningness—has continued to exert a harmful, distorting influence on our understanding of ourselves in the meantime.

As a cognitive science graduate wanting to defend my field, I feel the need to point out that research on embodied cognition is very much cogsci research as well. :P And it's not a particularly new development to cognitive science, either.

Speaking overbroadly, as usual

Yes, condemning cognitive science outright is indefensible, of course. Overly-broad statements are my usual rhetorical failure mode. Also, I haven't followed the field closely in 25 years.

That said, my impression is that a substantial majority of the field still takes for granted metaphysical assumptions that are plainly false and harmful.

Also, although embodiment is one significant aspect of "the concrete-situated view," it's not the main point. There's all those other E words... plus items not on the list. And, to address them, I think you need to pretty well start over. Conceptually, that is. Specific empirical findings may hold up... but the understanding of their significance needs extensive revision.

it's not a particularly new development to cognitive science

Yeah, that was sort of the point of my "I told you so in 1986" bit. Although it wasn't new in '86, either, just new to AI and to analytical philosophy of mind. Dreyfus had been trying to get both of those fields to pay attention since the 1960s, with zero success, until Lucy Suchman at Xerox PARC was able to translate the ideas for cognitive psychologists, and then for Phil and me, and then we were able to translate them for AI and analytic philosophers. [This is a simplification of the history, of course; there were lots of other people involved—John Haugeland and Mark Bickhard, for two examples.]

Given that this stuff has been around for decades, a reasonable objection is "if it's right, it ought to have out-competed the alternative by now." I don't fully understand why that hasn't happened. There are a lot of historical specificities, involving opportunities missed for accidental reasons. But that isn't good as an overall explanation.

I think that the problem is that the view is actually more difficult to understand, and way more difficult to explain. I'm working on that now, and it's hard.

routines

anders

When I read your and Agre's publications a year and a half ago, I was most thrilled by the accounts of routines forming. There weren't as many routines described as I would have wished, so I made my own.
On February 11, 2016, I had the opportunity to write the date of installation on several hundred "greenlite" lightbulbs. I wrote down how the routine changed as I did it.
Yesterday I did the same thing to about 20 bulbs, to replace ones that had gotten wet, and observed the routine I used so I could compare.

February 11 2016 routine:
(using left hand) pick up box, rotate, open top flap of box with thumbnail.
press them to the outside of the box so they stay open
holding the box with right hand, pull bulb out of box with thumb and index finger of left hand.
set down box
pick up and uncap pen with right hand
holding the lightbulb in my left hand, I write with my right, rotating the bulb as I write to keep my writing hand comfortable.
I then put the bulb back in the box with my left hand (the flaps naturally go into the right position) and leave the top open

Some changes
space is getting cluttered so I put the processed bulb boxes back in the case and pull out the unprocessed boxes because it is more difficult to pull out a box from a packed case.
My grip evolves from cradling the bulb -> gripping bulb between thumb and index finger at the base and apex-> gripping the sides of the bulb with my thumb and first two fingers and pressing it against my chest for support->a cradle grip where my index and middle finger support the screw base and the rest support the bulb->only the index finger supports the top (The screw end always points away from me so that the date will be written right side up)
I start pressing the top flap of the box down as I open it to keep it open
I try opening the flap with my index finger instead of my thumb
I empty cases onto the table before starting to fill them with processed boxes
replace marker with fresh marker

More changes
I no longer set down the pen between bulbs
when I place a processed bulb into a box, it sometimes catches on the top flap, so I start bending the top flaps down farther.
the table got crowded so I moved processed cases to the floor
I got a marker with a smaller tip so that I have better control

August 15 2017
I grab a box with my left hand with the thumb lined up with the edge of the flap.
I bring it over and grab the base with my right hand (I reposition my left hand at this step if my thumb isn't lined up)
As I brought the box to my right hand I automatically pointed the marker away from the box to keep from marking it. (I can either keep the marker between my thumb and index and point it to the side of my hand, or keep it between my index and middle and point it off to the top of my hand)
I undo the flap with my left thumb and I pass it under my hand to hold it open against the box with my less important fingers.
I stick my thumb and first two fingers into the box and pull out the bulb and discard the box.
since the bulb is resting against the base of my thumb, I use my right hand to push the screw end around so that my left hand cradles the bulb (because my left hand is too slow at spinning the bulb).
I then write on the bulb with my right hand, rotating it with my left to keep the writing easy.

Routines

anders — This was interesting; thank you! Sorry to be very slow to follow up.

Phil and I collected dozens, possibly hundreds, of observations like this in paper notebooks. We intended to write them up in a long publication tentatively titled Fieldwork or maybe The Computational Theory of Breakfast or The Phenomenology of Breakfast.

Unfortunately, that was one of the many sub-projects we abandoned when we concluded AI wasn't going to happen in the '90s, and gave up.

Predictive processing

I've been reading Scott's predictive processing posts and now have some kind of incoherent question I'd like to ask you. I may just be missing part of your argument. Anyway I hope this makes some sense...

As far as I can see, what he says about bottom-up processing fits quite well with what you're saying here. I.e. we're mostly detecting and responding to changes in our local environment, so the environment is cuing us in to what is relevant. We don't have to build and update complex internal representations of it.

But then for the top-down part, he says:

The top-down stream starts with everything you know about the world, all your best heuristics, all your priors, everything that’s ever happened to you before – everything from “solid objects can’t pass through one another” to “e=mc^2” to “that guy in the blue uniform is probably a policeman”.

As I said in the comments, that's a lot of things. I don't really buy it! But I do buy that we need some top-down ordering principles, some kind of assumptions we're imposing on our perceptions. Something to make sense of the experience of, e.g., resolving the Dalmatian from the spotty image.

Do you also tackle this top-down side of things anywhere that I've missed? Any argument for how we don't end up with this gigantic pile of hypotheses on the top-down side, in the same way that reacting to our immediate environment reduces the problem on the bottom-up side?

The "background problem"

how we don't end up with this gigantic pile of hypotheses on the top-down side

This may be exactly the right question to ask!

This is what Dreyfus and Searle called "the background problem." (Other philosophers have other names for it.) One way of putting it is that absolutely everything you know about anything might be relevant to any decision. People somehow seem to figure out those relevancies fluidly.

It was when we realized we had no idea how to address this that Phil and I gave up on AI. If you take a cognitivist approach—i.e. representing knowledge using something like language or logic—the combinatorics are utterly impossible. And we had no good alternative.

Snarky paragraph from my current "intro to metarationality" draft:

What color are your socks? Take a look. “Most likely blue, p=0.998253689701. Calculated from 9045 alternative hypotheses, including my numerical estimates of the probability that I have just experienced a micro-stroke that altered color processing in my visual cortex, that someone is shining a hidden blue spotlight on them, and that I am living in a computer simulation run by deceptive alien superintelligences.”

I haven't read Andy Clark's book, or anything about it other than Scott's posts. However, it sounds to me like he's fallen into one of the standard naive AI pits: superficially attractive approaches that are known not to work. Specifically:

OMG!!!! Kalman filters explain ALL THE THINGS!!!

[Pro tip: they don't. And people have been falling into this pit for decades. Kids, don't try this at home! Kalman filters: not even once.]

Andy Clark is a smart guy so maybe there's more to the book than that.

It would be useful for someone to write a book that says "Here are the first thirty ideas you will have when you set out to do AI. Everyone who has set out to do AI has had some subset of these. Each has very-thoroughly-explored failure modes. It's possible in theory that you can make one of them work despite the strong reasons we'll give to believe it can't work, even in principle, but you ought at least to know why thousands of smart people found it didn't."

Bayes, neural nets, Kalman filters, and theorem proving are all in the first dozen things everyone rediscovers, gets wildly excited about, shouts a lot about, and then (if they are lucky) calms down once someone older and wiser explains why they don't work.
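For readers who haven't met one: a Kalman filter is just a predict-then-correct loop that blends a prior estimate with a noisy measurement, weighted by their relative uncertainties. Here is a minimal sketch of the scalar case, with all numbers invented for illustration; this shows the bare mechanism, not anyone's actual cognitive model:

```python
# Minimal 1-D Kalman filter: tracking a scalar value from noisy readings.
# The "predict" step grows uncertainty; the "update" step shrinks it by
# blending the prediction with a measurement, weighted by the Kalman gain.

def kalman_step(mean, var, measurement, process_var=1.0, measurement_var=2.0):
    # Predict: the state may have drifted, so belief uncertainty increases.
    pred_mean = mean
    pred_var = var + process_var
    # Update: gain near 1 trusts the measurement; gain near 0 trusts the prior.
    gain = pred_var / (pred_var + measurement_var)
    new_mean = pred_mean + gain * (measurement - pred_mean)
    new_var = (1.0 - gain) * pred_var
    return new_mean, new_var

mean, var = 0.0, 10.0            # vague initial belief
for z in [1.2, 0.9, 1.1, 1.0]:   # noisy observations of a value near 1.0
    mean, var = kalman_step(mean, var, z)
# the estimate converges toward ~1.0 as uncertainty shrinks
```

The appeal is obvious: a tiny loop that turns noisy data into stable beliefs. The trouble is the leap from one scalar to "everything you know about the world."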

Sperber and Wilson Relevance

Anonymous

Are you familiar with Sperber and Wilson? Their work comes to conclusions similar to Tomasello's, but through a more philosophical route, acknowledging Paul Grice, one of the Berkeley circle (including Dreyfus) that you have mentioned, I think. The idea is that language and cognition followed humans' development of highly advanced capacities for mirroring, intuiting, and sharing others' intentions.

Sperber & Wilson

Thanks, yes, I know that work. (Not sure which of us you were asking!)

The Wittgenstein/Austin/Grice/Searle lineage points in the same general direction as Heidegger and ethnomethodology, but is considered insufficiently thorough-going by proponents of the latter.

Putting the pieces together

Thanks, this is helpful. I'm trying to put the pieces together, though maybe I should just be patient and wait for your metarationality review, and see what questions I have left after that.

One more try, though, if that's OK, to make sure I'm getting it...

I think when I read your stuff on this before, I wasn't clearly making the distinction between the problem of working out what is relevant in the sensory data you obtain from your environment and the problem of working out what is relevant out of your existing knowledge.

Scott's posts with the top-down/bottom-up distinction have made me identify it much more clearly (though I'm still not completely sure if you think this top-down/bottom-up framing is actually a useful one? I don't mean the specific Kalman filter instantiation, but just the general idea of dividing things up this way).

If I'm understanding correctly, the situated approach helps with the bottom-up side (you care about the object in front of you, not some random object). But there's no comparable trick for the top-down side:

It was when we realized we had no idea how to address this that Phil and I gave up on AI. If you take a cognitivist approach—i.e. representing knowledge using something like language or logic—the combinatorics are utterly impossible.

That makes sense and I think I was asking the right thing.

And we had no good alternative.

... And is this still the case, that you have no ideas for where an alternative would be?

Top-down/bottom-up

if you think this top-down/bottom-up framing is actually a useful one?

Yes. On a priori grounds, but also it's clear from neurophysiology. Unfortunately, the nature of the top-down stuff seems not to be accessible with available neuroscience methods, and remains largely unknown. (Afaik—I don't follow neuroscience closely because it doesn't seem to be going anywhere; but if someone figured this out it would be a huge breakthrough, and I'm pretty sure I'd hear about it.)

is this still the case, that you have no ideas for where an alternative would be?

Well, the situation does also give hints for the top-down stuff. That is, if you are solving a differential equation, you probably can safely forget everything you know about the Timurid Empire for the nonce. But how this would work mechanistically? I haven't a clue. It's easy to wave your hands about "spreading activation," but people have been trying to make that work for half a century, and it doesn't.
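For concreteness, here is the kind of thing "spreading activation" usually means: activation flows outward from a cued concept through a semantic network, decaying at each hop, and whatever stays above threshold counts as relevant. The graph and numbers below are invented for illustration; the unsolved part is getting the right things, and only the right things, to stay active:

```python
# Toy spreading-activation sketch over a hand-built semantic network.
# Activation leaks from a cued concept to its neighbors, halving per hop;
# nodes that never rise above threshold stay "irrelevant."

graph = {
    "differential equation": ["calculus", "physics"],
    "calculus": ["Newton", "mathematics"],
    "physics": ["Newton"],
    "Newton": [],
    "mathematics": [],
    "Timurid Empire": ["Central Asia"],
    "Central Asia": [],
}

def spread(source, decay=0.5, threshold=0.1):
    """Breadth-first propagation of activation from a cued concept."""
    activation = {source: 1.0}
    frontier = [source]
    while frontier:
        nxt = []
        for node in frontier:
            for neighbor in graph.get(node, []):
                a = activation[node] * decay
                # Only propagate if this raises the neighbor above its
                # current activation and above the relevance threshold.
                if a > threshold and a > activation.get(neighbor, 0.0):
                    activation[neighbor] = a
                    nxt.append(neighbor)
        frontier = nxt
    return activation

active = spread("differential equation")
```

In this toy case the Timurid Empire receives no activation while calculus does, which matches the intuition; making that happen reliably at the scale of everything a person knows is what nobody has managed.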

Words, Words, Words (cf youtube.com/watch?v=-R_lv6_5Mvg)

"The Wittgenstein/Austin/Grice/Searle lineage points in the same general direction as Heidegger and ethnomethodology, but is considered insufficiently thorough-going by proponents of the latter."

You're losing me here, partly because I haven't really tried to get Heidegger (or I tried somewhat and gave up; maybe with Dreyfus's help some day). Too many words about words about words. I'd like more philosophy to be closer to How To Do Things With Words, which I maybe first heard of through Fernando Flores. His name came to mind when I looked up Lucy Suchman and noted her connection with the Conference on Computer-Supported Cooperative Work; I thought surely Flores and Winograd presented at one of those conferences, which they did (more than one, I think).

What is refreshing about Tomasello is he does make somewhat densely analytical arguments, but is never far from facts, of which he has a lot to work with from the 200+ papers he co-authored in his long tenure leading the Max Planck Institute Department of Developmental and Comparative Psychology.

A singular datum: before uttering an intelligible word, infants point to things, often, it seems, because they want the experience of seeing them jointly with you (from Tomasello's lab work and my own experience). They also intuit, and often try to correct, others' false beliefs by about 14 months, although they won't pass verbal false-belief tests until they are about 4 years old.

I construct an argument that takes a major bit of Tomasello's theses, including that bit, and deals with action parsing by computers and humans, and with the theory of dreams as simulation of social reality, in https://www.ribbonfarm.com/2017/07/18/from-monkey-neurons-to-the-meta-br...

If you should ever take a look at it, I'd very much appreciate your comments.

Coincidences

Couple of weird coincidences here!

One is that, when your comment arrived, I was reading Winograd and Flores' 1986 Understanding Computers and Cognition, which I last looked at shortly after it was published. I'm re-reading it as background for my introduction to metarationality, which is tentatively titled In the cells of the eggplant, after this dialog from the W&F book:

A. Is there any water in the refrigerator?

B. Yes.

A. Where? I don't see it.

B. In the cells of the eggplant.

(Was "Yes" true?)

The second coincidence is that about a week ago I opened a house-moving box, which I hadn't looked in for many years, searching for something entirely unrelated. On top of the stuff I was looking for, there was a half-inch stack of academic papers, which were all of the ones I had saved when I left that world in the early 1990s. One of them was a paper by Lucy Suchman, titled "Speech Act: A Counter-Revolutionary Category." After flipping through the eight or so papers in the pile, I set that one aside as possibly worth re-reading.

Coming just now to answer your comment, I thought "I should take a look to see what she had to say there," and discovered that the paper is specifically a critique of Winograd & Flores. (This is really pretty weird!)

It appears to be an early (1991) version of what was later published as "Do categories have politics?: The language/action perspective reconsidered" (1994). Apparently a dozen people, including Winograd and Flores, and separately Phil Agre, wrote replies, and she wrote a reply to their replies (1995).

I haven't (re)read any of that, yet. It's ancient history—but I think it also was an important line of inquiry that got dropped by accident rather than due to running into serious intellectual obstacles.

I've had your Ribbonfarm piece open in a tab since they published it. I'm afraid I've gotten badly behind on reading (and everything else) in the past few months, as nearly all my time has been taken up with family responsibilities.

Flores new book, etc.

Recently I was very surprised to see that Flores had issued an anthology of his thinking, with the help and encouragement of his daughter. I got to know Flores slightly in the mid-80s, in connection with his selling Action Technologies' The Coordinator, mostly to roaming entrepreneurs who ran it on suitcase-sized Compaq computers they networked via a variant of the UNIX mail/usenet technology. Later it evolved into an enterprise-class system, which has probably run out of steam by now.
The main thing Flores did at the time was run workshops, more often than not aimed at corporate culture (a precursor of Venkat, Tiago Forte, ...). So he wasn't that much of a writer, but he accumulated sets of workshop notes that make up the bulk of the new book. I have the Kindle version free via "Kindle Unlimited", FWIW.

Oh yes, the book

The book is Conversations For Action and Collected Essays: Instilling a Culture of Commitment in Working Relationships (2013), by Fernando Flores and Maria Letelier.

Flores' "Conversation for Action" book

That does look very interesting! Well-reviewed on Amazon and GoodReads, too.

I'm collecting a list of books on meta-systematic approaches to management/leadership, and am including it there. (I plan to actually read, or at least skim, all of them, and summarize. At some point!)

A hierarchical model

Thank you for the recommendations! And sorry about the novel. I'd love to finish it. I get so little time to write that it will probably never happen: less important than most of the rest of my IOUs.

Heidegger

Bad Horse

Thank you for posting that!

I was trying to find the relevant sections in Heidegger on situated activity. Loren & Dietrich 1996 (https://www.aaai.org/Papers/Symposia/Fall/1996/FS-96-02/FS96-02-017.pdf) cited "Brooks 1990", the "Elephants don't play chess" paper, which said nothing. The correct reference was Brooks, Rodney (1991), "Intelligence without representation", Artificial Intelligence 47, pp. 139-159. But /that/ merely pointed me to "P.E. Agre and D. Chapman, Unpublished memo, MIT Artificial Intelligence Laboratory, Cambridge, MA (1986)", which led me to search for anything that might be that memo.

(BTW, folks in AI at MIT frequently cited unnamed or unobtainable internal MIT memos in the period 1980-2000, and usually more than half of their citations would be to other things published at MIT. The prominence of MIT in the field, combined with its insular nature and its successful pursuit of, and near monopoly on, media attention, did a great deal of damage, IMHO, by marginalizing all other work in the field. So it stings some to hear you complain about your work being ignored.)

But anyway. Thanks!

Situated activity

The term "situated action" came to us from Lucy Suchman's book Plans and Situated Actions (although it has some history before that). Heidegger doesn't use the term, although Suchman was profoundly influenced by Heidegger (partly through Dreyfus, who was on her thesis committee).

Heidegger's word for roughly the same thing is "circumspection." I don't recall the German word for that, and I'm not sure "circumspection" is always used as the English equivalent (different Heidegger translators may have chosen different ones).

I'd recommend Dreyfus' Being-In-The-World if you'd like to follow up—it's not an easy read, but it's much easier than Heidegger himself!


You are reading a metablog post, dated August 4, 2017.

This page’s topics are History of ideas and Rationalism.

Copyright ©2010–2017 David Chapman.