Comments on “The ethnomethodological flip”


Flipping by conceptual metaphor

Nick Hay 2020-01-30

I came at this flip from a different angle, the core of which I think was reading about the conceptual metaphors of mathematics in Lakoff and Núñez’s Where Mathematics Comes From. I read this starting from a rationalist/Bayesian mindset, with the AI goal of figuring out how one might implement an (ideally eventually superintelligent!) system that could do mathematics and logical reasoning, including handling tricky problems like shifting ontologies. The hope was that seeing how humans did it would give some insight. But there was something irritating/compelling about how humans did things in this different, messy, biological way…

Interestingly, conceptual metaphor, like ethnomethodology, is a study of reasoning in practice, but through a cognitive linguistics lens.

Ethnomethodology, the 5th E

David Chapman 2020-01-30

Yes, Lakoff’s work is compelling. I haven’t read his book with Núñez, but intend to.

Generally ethnomethodology is compatible with “4E” cognitive science (although the work process is quite different). One of the Es is “embodied,” which Lakoff understood earlier than almost anyone else in the field.

4E and ethnomethodology are both historically rooted in early 20th-century phenomenology, although by somewhat different paths.

Rationality as game-based

Kaj Sotala 2020-07-06

Rationality developed as a collection of tools for reasoning better in certain sorts of difficult situations in which people typically think badly. Naturally, rationalism focuses its explanations on those situation types. It takes them as prototypical, and marginalizes and silently passes over more typical sorts of situations and patterns of thinking and acting. This emphasis tends to make rationality seem universally effective.

Gambling games and board games are fun partly because humans are inherently bad at them, and yet we can get better with practice. They are fun also because they are fair, so we can accurately compare skill levels. Making games learnable and fair requires engineering out nebulosity: uncontrolled extraneous factors that are “not part of the game.” That also makes games particularly easy to analyze formally. Much of technical rationality was invented either specifically to play formal games, or by taking formal games as conceptual models for other activities.

Formal games are a tiny part of what most people spend most of their time doing. They are also misleading prototypes for most other things we do, which intimately involve nebulosity.

This seems like a very important insight. I would also add that there seems to be a thing where rationalist formalizations of real-world situations are initially very counter-intuitive and hard to learn exactly because they require stripping away the reasonableness that people usually use for thinking about problems… and after one has learned to think in the way that the rationalist framework requires, one may dismiss objections about its unsuitability on the basis of “yes, it was unintuitive to me too at first, but you’ll get it eventually”. (Some resemblance to Kegan 4 mistaking K5 for a K3, there.)

I’m also somewhat reminded of this bit from Jo Boaler’s “The Role of Contexts in the Mathematics Classroom”:

One difficulty in creating perceptions of reality occurs when students are required to engage partly as though a task were real whilst simultaneously ignoring factors that would be pertinent in the “real life version” of the task. As Adda [1989] suggests, we may offer student tasks involving the price of sweets but students must remember that “it would be dangerous to answer (them) by referring to the price of the sweets bought this morning” [1989, p 150]. Wiliam [1990] cites a well known investigation which asks students to imagine a city with streets forming a square grid where police can see anyone within 100m of them; each policeman being able to watch 400m of street. Students are required to work out the minimum number of police needed for different-sized grids. This task requires students to enter into a fantasy world in which all policemen see in discrete units of 100m and “for many students, the idea that someone can see 100 metres but not 110 metres is plainly absurd” [Wiliam, 1990; p30].

Students do however become trained and skilful at engaging in the make-believe of school mathematics questions at exactly the “right” level. They believe what they are told within the confines of the task and do not question its distance from reality. This probably contributes to students’ dichotomous view of situations as requiring either school mathematics or their own methods. Contexts such as the above, intended to give mathematics a real life dimension, merely perpetuate the mysterious image of school mathematics.

Evidence that students often fail to engage in the “real world” aspects of mathematics problems as intended is provided by the US Third National Assessment of Educational Progress. In a question which asked the number of buses needed to carry 1128 soldiers, each bus holding 36 soldiers, the most frequent response was 31 remainder 12 [Schoenfeld, 1987; p37]. Maier [1991] explains this sort of response by suggesting that such problems have little in common with those faced in life: “they are school problems, coated with a thin veneer of “real world” associations”.
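(To make the bus arithmetic concrete, here is a minimal Python sketch of the two answers; this snippet is my own illustration, not part of Boaler’s text:)

```python
import math

soldiers, capacity = 1128, 36

# The "school mathematics" answer: division with remainder.
buses, remainder = divmod(soldiers, capacity)
print(f"{buses} remainder {remainder}")  # 31 remainder 12 -- the most frequent response

# The reasonable answer: a partly full bus still has to be sent.
print(math.ceil(soldiers / capacity))    # 32
```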

Developing from reasonableness to rationality to metarationality

David Chapman 2020-07-06

rationalist formalizations of real-world situations are initially very counter-intuitive and hard to learn exactly because they require stripping away the reasonableness that people usually use for thinking about problems… and after one has learned to think in the way that the rationalist framework requires, one may dismiss objections about its unsuitability on the basis of “yes, it was unintuitive to me too at first, but you’ll get it eventually”. (Some resemblance to Kegan 4 mistaking K5 for a K3, there.)

Yes to both parts of this!

Part Three takes a partly cognitive-developmental approach to explaining rationality. We learn to do rationality by learning how to strip contextual interpretation, which requires learning to suppress reasonableness. That is difficult and actually painful at first. “Word problems” like the bus one are supposed to help you do this, but empirical evidence suggests they don’t work well.

There’s a “J-curve” to the developmental trajectory. First you learn to strip context in order to reliably carry out purely formal procedures (such as factoring a polynomial). Ideally you master that late in high school or early in undergraduate education. Then gradually you learn to add context back in, which is required in relating the formalism to reality. If you are lucky, you get some ability to do that by the end of the undergraduate period, and develop it further either in graduate school or in professional work.

It’s a “J” curve because as you develop from advanced rationality into meta-rationality, you gain a broader perspective than mere reasonableness is capable of, and can take much larger contexts into account in your reasoning.
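To make “purely formal procedure” concrete: factoring a polynomial requires no contextual interpretation at all, which is why a computer algebra system can do it mechanically. A minimal sketch in Python (using sympy, just as an illustration):

```python
from sympy import symbols, factor

x = symbols("x")

# Factoring is context-free symbol manipulation: nothing about what x
# "means" in the world is needed, only the formal rules of algebra.
print(factor(x**2 - 5*x + 6))  # (x - 2)*(x - 3)
print(factor(x**3 - x))        # x*(x - 1)*(x + 1)
```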

Definition

Rob Alexander 2020-12-01

One thing I don’t see in that text is a clear definition of exactly what you mean by “the ethnomethodological flip”. Would a good candidate be:

“Changing your prototype of ‘good thinking’ from rationality to everyday reasonableness”?

"Change in explanatory priority"

David Chapman 2020-12-02

Ah, no, both sorts of thinking are good in different circumstances; neither is better overall.

The “flip” is at the theoretical level: the “change in explanatory priority.” That is, changing from explaining reasonableness in terms of rationality to explaining rationality in terms of reasonableness.

Embodied Artificial Intelligence

Stephen 2020-12-29

I enjoyed reading Computation and Human Experience, which you have recommended many times, and to which you contributed much of the underlying research. I apologize if there is a better place to comment about the book, which is what this comment is entirely about; if there is, please point me in the right direction (or move the discussion there).

As I haven’t fully absorbed your latest writing, this comment may coincide with the main focus of this site only superficially, even where it touches on ethnomethodology, grounded meaning, reasonableness, and models of the world. That may especially be true if AI no longer interests you; it is my foremost hope that this is not the case, and that you will write more about AI and AI forecasting.

Also, if I happen to have any wrong or confused thoughts about the book’s arguments, which I cannot anticipate since this is a stream-of-consciousness reflection on what I learned, I apologize in advance; you may edit things out, or even retract the entire comment, whenever you wish.

Computation and Human Experience recapitulated my experiences as a software engineer working on large systems for several decades, and as an AI researcher and engineer. It also gave a perspective that remains too rarely found in technical texts: not in the sense of being interdisciplinary, which would be a prosaic and meaningless characterization, but in justifying certain approaches through its narratives and drawing useful generalizations about intelligent system design. It foreshadowed the rise of use-case design, embodied systems, the importance of distributed, subsymbolic representations, the decline of the waterfall model in favor of more continuous integration, and models of intelligent systems in which time, efficiency, and other engineering constraints play an important role. That is, practical application should feed back through the model into theory, in order to make the system actually useful, which often calls for major revisions to the abstractions themselves. I find that this pattern applies not only to the efficient execution of a system’s implementation, but also to designing an efficient interface between the system and the user interacting with it.

I retained a sense of how “symbolic grounding” is not usefully about consciousness, but can relate to an embodiment problem of intelligence. “Meaning” doesn’t accord with an idealistic mental representation; rather, it emerges from the realization of a model and its interplay with its environment.

The idea of “metaphors” as invariant patterns across otherwise incompatible epistemologies resonated with reflections I had arrived at independently. But I now see the importance of making sound presuppositions about the nature of human cognition, and of understanding how they should change our model of a generalized AI system, even though these universals (metaphors) in the model space suggest multiple viable paths.

I wanted to write the above to reflect on how best to draw (epistemological, practical) lessons from the book, in light of AI recently finding itself on a more robust path. A fully intelligent system, however, may or may not emerge without some degree of top-down, more systematic approaches, as the book on some level agrees. We might therefore continue to learn from failures tracing back to the implicit (yet course-altering) presuppositions of dualism, idealism, and well-defined mental states, among other misleading motifs of representationalism that persist in predominant philosophies.

If it is the case that bottom-up models are primary, while structured models are necessarily secondary, I would nonetheless conclude that it might be important to draw from both symbolic and sub-symbolic models of cognition in order to realize a contemporary (neural network) architecture and control system that, together, dispense with any misleading epistemologies. In this way, a robust system might be implemented efficiently because it finds some useful “middle ground”.

On the other hand, if bottom-up approaches suffice to scale up to a general AI system, it would be fascinating to see how abstraction emerges from its bottom-up calculus. Abstraction is clearly more than just an epiphenomenon of the mind; it contributes to a model’s intelligence, since language can be used to think about a great many things, and since we seamlessly notice stereotypes (templates) of problems that can be reused to solve other problems more efficiently.

If you believe that meaning is fundamentally subsymbolic, embodied, implicit, and nebulous, then you might also believe that the latter approach is necessary and sufficient. I only have a hunch that this is the case, not (yet) any principled explanation. But I think it’s important to answer this, for reasons of safety and of understanding (explaining) the nature of intelligence/AI. I would be interested to hear from you (as a contributor to the work) regarding:

  1. Whether general intelligence might emerge without implementing any explicit models of the environment.

  2. How distributed representations of implicit kinds might give rise to general intelligence, and, if they do, how we would know that they are reliable as planning systems, which are still conceptualized as predetermined, static models over actions.

  3. The main reason, however, for my bringing AI up with you is the conviction that achieving robust AI is critically important to set the stage for more meaningful, enjoyably useful ways of life. The changes brought about by AI may destabilize society in ways that seem to imply a tragic future; in my opinion, however, changes can generally be handled well enough, so long as safe systems are the ones that society selects for. If you have responses about any of this, including necessary constraints on the model space in general, AI safety, or the societal implications of AI, I request a reply that states your prior expectations. More importantly, I request an eventual series of posts exploring (much further) the dynamics of atomized societies in the context of transformative AI.

Gellner critique

joe 2021-02-02

Ever seen this strange sociological critique of ethnomethodology by Ernest Gellner (in 1975)? He’s sniffing at it as a kind of Californian self-obsessed conformist hippie fad. I found it rather confusing (though funny!), but the last couple of pages also sound pretty prescient—a kind of negatively characterized fluid mode, “DIY subjectivity”, reductio ad solipsism. Quite disorienting. (Started reading Gellner via Cosma Shalizi fwiw, he of the famously impeccable taste… http://bactra.org/notebooks/gellner.html )

Here’s the piece:
http://tucnak.fsv.cuni.cz/~hajek/ModerniSgTeorie/literatura/etnometodologie/gellner-ethnomethodology.pdf

Wrongly assuming Romanticism

David Chapman 2021-02-03

Yes; unsurprisingly this piece is well-known to anyone doing ethnomethodology. There’s a discussion in Michael Lynch’s Scientific Practice and Ordinary Action, for instance (pp. 26-28).

Gellner made the common mistake of assuming that any critique of rationalism must be the Romantic anti-rational critique: that rationalism neglects critical aspects of subjectivity, which Romanticism valorizes.

This was exactly wrong. If anything, ethnomethodology could be criticized for the opposite: rigorous refusal to deal with subjectivity at all (on the grounds that, as observers, we have no access to it). It is more similar to behaviorism than Romanticism in this respect.

I frequently run into the same misunderstanding of my own work (e.g., in Twitter discussions with self-described rationalists).

“Ignorant, irrelevant, and inscrutable” discusses this. Ethno comes in the “inscrutable” category (i.e. meta-rational).
