Comments on “A credibility revolution in the post-truth era”

Comments

The irony of Meaningness hastening civilisational collapse

shane

Some critical reflections:

“Science and engineering don’t work like they used to.”
This seems both a false narrative and tautologically true. They don’t work like they used to, and that is a good thing! We have a replication crisis not because some fantasy golden age of research was corrupted, but because we are trying to fix the problems of the past.

In terms of a slowdown in STEM knowledge, a seemingly reasonable (but less fun) alternative view is that the low-hanging fruit has been plucked and only the really hard stuff is left. In psychology, for example, a lot of the big effects seem to have been found, and what’s left is tinkering around the edges.

“The science replication crisis and the scarcity of exciting new technologies are symptoms.”

An argument could be made that the claim that we are now in a “replication crisis” in some areas of psychology is itself a product of (exciting) new technologies. New forms of media (Twitter, blogs, podcasts) allowed the creation of new forms of social capital, an inversion of power structures, and new public knowledge discourses (previously only the subject of late-night conference banter); open source platforms enabled data sharing (e.g. the OSF); the rise of open source programming languages (e.g. R) enabled new forms of analysis, eased the adoption of new analysis techniques (e.g. simulation studies), and allowed reproducible analyses; and new methodological technologies emerged (e.g. pre-registered reports).

These technologies didn’t cause the replication crisis, but without some of these developments we would likely not be discussing the fact that we are in a “replication crisis” at all, because they have helped provide solutions to problems that were long known about.

Is “nearly all” research crap these days? STEM fields have massively expanded post-war (in line with the massification of higher education, which helps fund them). Research can be “excellent”, “good”, “ok”, “bad”, or “really bad” (i.e. outright fraud). But let’s say the quality of research has stayed the same over the last 40 years: say 20% “good” and 80% “bad”. If you have 10,000 people working in a field instead of 1,000 or 100, there will be much more bad research, but also much more good research. Or maybe you want to argue it is now just 10% high quality, compared to the “good ole’ days”. That is still much more quality research than in the past. Or is it really much worse than that? How exactly do you define “nearly all”? Is it just 1%? Less? I am calling BS on this claim, because while there are factors that might weaken research (e.g. distorted incentive structures), there are many counteracting factors that, on the whole, mean research gets better over time (e.g. methodological improvements, theoretical developments, etc.).
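The scaling arithmetic above can be made concrete with a toy calculation (the head-counts and quality fractions are the illustrative assumptions from the paragraph, not data):

```python
def good_research(n_researchers, good_fraction):
    """Absolute amount of 'good' research output, in toy units."""
    return n_researchers * good_fraction

# Assumed numbers from the argument above: a small past field with a high
# "good" fraction vs. a 10x larger field with only half that fraction.
past = good_research(1_000, 0.20)    # 1,000 researchers, 20% good -> 200
now = good_research(10_000, 0.10)    # 10,000 researchers, 10% good -> 1,000

# Even with a lower quality fraction, the larger field produces five times
# more good research in absolute terms (alongside far more bad research).
print(past, now)  # 200.0 1000.0
```

The point of the sketch is only that a falling *fraction* of good research is compatible with a rising *amount* of it, so “nearly all research is crap” needs a much stronger claim than “the proportion of crap went up”.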

Let’s consider an estimated replication rate of 50% across experimental psychology (https://replicationindex.com/2019/04/05/estimating-the-replicability-of-…). OK, 50% seems pretty bad (though very disputable). But does it follow that “nearly all” science is shit? That because some studies in psychology don’t replicate, astronomy is in crisis? Why do you need to make that rhetorical move from some fields to (all) science? It is no accident that some fields of science are more nebulous than others (due to their methods and the strength of their theoretical base; see https://simplystatistics.org/2016/08/24/replication-crisis/). Is that due to a post-truth “collapse of meaning”, or because some fields are just really hard?

So I don’t buy the narrative, nor the awfully dismissive nostalgic attitude that everything is shit these days (“meaningless trivial rote crank turning me-too nonsense”). Maybe there is more of it, and maybe it is more visible, particularly if you are an outsider to a field who is subject to confirmation bias and looking to confirm preconceptions.

In terms of a solution, it seems like you are advocating something like the rich Victorian gentleman view of science. The modern-day example might be a tech entrepreneur. If you are wealthy enough, you can indulge your creativity with playful curiosity (like writing books about meaningness *-)). Nice work if you can get it! But does it scale?

The neo-liberal approach to how value is assigned is a solution to the problem of scale. If you are an institution, you want a return on investment from the research masses. Yes, this has a corrupting influence, but when in human history have humans not been corrupted by power and prestige? As with other utopias, I am not optimistic about the possibility of a Chapman utopia on a grand scale. Whether there are substantially better options than what has evolved remains to be seen (or is an “empirical” question).

You describe the “systems according to which research and development are funded, communicated, organized, evaluated, and rewarded” as being “rational”.

Maybe you wouldn’t disagree (and have written a book about it), but from what I understand of your writings, rational non-human systems provide a framework within which meta-systematic humans (and systems) operate.

Have you sat on a funding panel for research? Do you think the panelists sit around and think: hmm, application C, this seems like a trivial, non-reflective, valueless research proposal, addressing a non-problem, from someone who clearly doesn’t care about the subject matter and just wants to crank out publications; let’s give them some money? Decisions like this are made by complex, flawed humans who care deeply about the subject matter (and who often highly value curiosity-driven research) while at the same time keeping an eye on their careers and prestige factors, because that’s what humans do. In making decisions they negotiate complex and often competing imperatives: working within rational systems (like assigning applications a score), handling Kegan stage 3 considerations (not pissing off tribal elders, helping out their mates), and weighing a very complex set of ways of attributing value, operating as expert professionals in what I think you would call very meta-systematic ways.

Note that the systems which evaluate research are mostly designed exactly to allow the selection of good science: peer review is done anonymously (which limits its use for building career capital and limits social pressures), and outside of institutional imperatives (i.e. for free), by people who “actually” care about research. It isn’t perfect, but meta-rational considerations about how to improve it are constantly occurring, in competition with the necessity for systems to be relatively stable in order to operate, and with the power dynamics that might inhibit change. Questions of what research problems to address, the limits of methods, and other meta-systematic investigations are part and parcel of normal science (though the labour might be divided between the habitual crank-turners and those oriented to the meta side).

Integral to the scientific mindset and enterprise is a critical attitude that demands never-ending justification of your research’s value and the integrity of its methods.

While I love “Meaningness”, I am unconvinced by some/most of the hyperbolic grand narrative, because no matter how bad things seem (or you want them to be), I suspect the operation of (rational/meta-rational) systems within society, like science, is in many senses optimal given the constraints and incentive structures operating at a particular time and place. Pattern and nebulosity are always in a dance. In science/psychology you have the “p < .05 = truth” systematic perspective (echoed in this blog post with talk of research being “true” or “false”), in combat with the nebulosity-acknowledging “let’s estimate the effect size with a confidence interval” type of approach. You have those with excessive confidence in pattern (the “truth” of research findings) versus those who tilt excessively towards nebulosity (or rather, towards the pattern in nebulosity, e.g. your “[nearly] all research is crap, because some research is crap” view). That they are in tension is a good thing, and the constant negotiation between them (which sometimes leads to revolution) is the “meta-rational” operation of normal science, rather than some revolution that needs to be instigated by contrarian enlightened outsiders.

Not only am I unconvinced, I am also very suspicious of arguments oriented around the emotionally loaded word “crisis”. Often those who argue for a crisis do so to help drive a personal and emotional charge for a “mission”, and/or to persuade others to follow a prescribed solution to that purported crisis. In your framework, this would appear to stem from a confused stance: rejecting nebulosity and attaching excessively to pattern (“all science is rubbish/in a crisis”).

“Rationality is under attack from irrationalists, newly empowered by the internet. They have proclaimed a “post-truth era” and have undermined or destroyed vital systematic institutions with “the truth depends on who is asking, and why.” This could produce a civilization-ending catastrophe.”

You justify the crisis-talk by appealing to the need to stave off “civilisational collapse”. Yet this comes from someone empowered by the internet to bypass the institutional mechanisms that would help prevent wild and silly claims like “nearly all science is worth less than nothing” from entering our knowledge sphere and influencing people; someone who, with the “science is now garbage” shibboleth, is pushing a post-truth/meta-rational narrative to create an irrational/meta-rational counter-culture (through the promise of new-new-age “fluid mode” self-transformation), undermining our systematic institutions and hence hastening our supposed systematic collapse.

Some irony here perhaps?

What is a "Chapman utopia", exactly?

emily

In any case, the systematic institutions seem to already be in the process of collapse, so there’s that.

The cool thing about people who understand the lack of inherent existence of phenomena is that they tend not to become fixated fascists, even when they have really great ideas and insights. (I’ve never met David, but would group him in this category (if pressed) after reading big chunks of this book.)

Bringing forth the science utopia

sean

In relation to Emily’s point about system collapse, I think the Meaningness point is not that systems are collapsing, but that the justification for them is collapsing (which may or may not lead to collapse of the systems themselves).

I can’t do justice to the question in your comment title within a comment. What it looks like is spread across the Meaningness book, though some of it probably isn’t written/published yet. Actually, though, as one of the STEM lords who might be running things, it would probably be quite good (at least for me).

Meaningness, however, does explicitly argue against utopia-like views (e.g. in relation to politics), but when it comes to views on science that seemingly goes out the window.

Not believing in inherent essences isn’t necessarily an obstacle to fascism. Conceivably, fascism could emerge as a “pragmatic” solution. I am not an expert on any of this (but this is Meaningness, so…), but as I understand it, Franco’s regime was happy to appropriate ideologies for functional purposes. I think David skews more towards technocracy than fascism, though.

