Is this an eggplant which I see before me?

Correspondence fairy maintaining the truth of eggplantness

Perception plays two important roles in rationalism:

  1. It connects rationality with the concrete world, so that abstract inferences can apply to your particular situation.
  2. It supplies rationality with “starting” beliefs: facts about the environment that inference can take as given.

Rationalist theories typically make several assumptions, implicitly or explicitly:

  1. Perception and rationality are separate modules, with an interface across which perception passes information to rationality.
  2. Information flows in one direction only, from perception to rationality.
  3. Perception is objective and factual: it reports what is in the world, independent of your purposes.
  4. What perception delivers is true, so rationality has reliable starting points for inference.

What kind of information crosses the interface constrains theories of rationality. What is the division of labor? The work rationality must take responsibility for depends on the work perception can do. This is partly an empirical question (about how perception does work), but partly also a theoretical, conceptual one (about what could work in principle).

So far, every rationalist theory of this sort has run into insuperable in-principle difficulties. Some of these problems were discovered through conceptual analysis by the logical positivists, and were a major reason they abandoned their whole program. Computer vision research worked through others in great technical detail in the 1970s and ’80s. Both these are fascinating stories, but telling them would take another book, and most of the details are not relevant here.

Although the technical difficulties were diverse, the underlying issue was always the same: unavoidable nebulosity. So this chapter sketches ways nebulosity complicates the division of labor between perception and rationality. That motivates an alternative understanding of their relationship, which I’ll explain in Part Two. Parts Three and Four rely on that understanding.

This chapter presents four rationalist theories of perception. We’ll find reasons to think each is unworkable. They may seem a bit silly in retrospect, but they are not straw men. They are simplified versions of theories that were major research programs, in different fields, for decades. The aim here is not to prove that no rationalist theory can be adequate, but to explain some specific obstacles. These trouble spots suggest an alternative approach.

  1. It would be ideal for rationalism if perception delivered a set of statements about what the objects in your environment are, with their types and relationships. That’s what rationality wants to get started with. However, assigning objective types and relationships often requires reasoning that goes far beyond what could be expected of perception.

  2. Instead, perception might deliver statements involving only a fixed set of objective, sensory properties of the world, such as shapes and colors. Then it’s rationality’s job to make sense of those. If perception says there’s something red and round, rationality might conclude it’s an apple. However, there are many other sorts of round red things. Much finer discriminations of perceptual properties are required to draw a conclusion. In general, there doesn’t seem to be any fixed perceptual vocabulary that is sufficient to support reasoning.

  3. Sometimes reasoning has to go all the way “down to the pixels,” in which case it’s not clear what work is left for a separate perception module. Maybe rationality can do the whole job? This also does not seem feasible.

  4. There’s strong scientific evidence that biological perception is not objective, reliable, or unbiased. Maybe rationality should be based on measurements taken with objective instruments instead? Unfortunately, instruments can’t be objective either. Sometimes they can be more objective than perception, but they too cannot deliver the absolute truths required by rationalism.

The alternative explanation in Part Two drops all the typical rationalist assumptions. Rationality and perception are not modules, and the boundary between perception and higher cognition is nebulous. Information flows in all directions, but perception does not interface with rationality directly; reasonableness is an intermediary. Perception is egocentric and purpose-relative, not objective and factual, but that is usually what we need in practice.

Perception to formulae

Higher cognition, including rationality, is usually taken as running on something like language or logic. So, from rationality’s point of view, perception ought to deliver a set of statements about the world, such as a list of logical formulae describing everything in your field of view.

(Perception’s output ideally also should be guaranteed true, so rationality has some certainties to start from. In fact, perception is not entirely reliable, due to optical illusions for instance. The unreliability of perception caused logical positivism a great deal of trouble, and was one reason it failed. However, in principle this might be handled in a probabilistic framework, so I won’t discuss this issue further.)
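To make that concrete: here is a minimal sketch of how a probabilistic framework might treat perception as a noisy witness, with Bayes’ rule combining its report with a prior belief. All the numbers (the prior and the reliability figures) are made up for illustration.

```python
# Minimal sketch: treat a perceptual report as noisy evidence and
# update belief with Bayes' rule. All numbers are hypothetical.

def posterior(prior, p_report_if_true, p_report_if_false):
    """P(hypothesis | perceptual report), by Bayes' rule."""
    numerator = p_report_if_true * prior
    evidence = numerator + p_report_if_false * (1.0 - prior)
    return numerator / evidence

# Suppose a 10% prior that the object is an eggplant; perception says
# "eggplant" 95% of the time when it is one, but also 5% of the time
# when it isn't (illusions, bad lighting, lookalikes).
print(posterior(prior=0.10, p_report_if_true=0.95,
                p_report_if_false=0.05))  # ~0.68: more confident, not certain
```

The point of the sketch is only that an unreliable report can raise confidence without delivering the certainty rationalism would prefer.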

The question is, what sorts of predicates (“words”) can appear in the statements perception produces as outputs? Put another way: what ontology do perception and rationality use to communicate at their interface?

Here perception faces all the same problems we’ve earlier seen rationality facing. For instance, “eggplant” is nebulous; what counts as one depends on circumstances and purposes. How is perception to make that judgement? As another example, referring to “object396106407” implies that perception solves the individuation and “Cosmic Object Registry” problems. Likewise, whether one thing is “in” another is sometimes nebulous.

Jiló bush
Jilós, courtesy Remi Nono-Womdim

A further difficulty is that you can learn new terms linguistically. If I tell you a jiló is a plum-sized variety of eggplant that is scarlet when ripe, you would probably recognize one just from that description. How can the perceptual machinery output jiló(object683501482) then? The first time you see one, at least, it seems some deliberate and arguably rational reasoning would be involved.

He said it’s scarlet; this one is sort of reddish orange; I guess that is what ‘scarlet’ means? Anyway it’s obviously not a regular eggplant, but it’s probably closely related because it’s about the same shape and shiny and the sepals look the same… yeah, I’ll go with ‘jiló’.

Otherwise, the definition of “jiló” would need to get “pushed down” into the perception box, so it could do much the same work. But then it seems perception is being forced to do a job that properly belongs in the rationality box. Perhaps every sort of reasoning might be required for accurate judgement in some case or other, so there’s nothing left that’s solely rationality’s responsibility.

So, while it would be convenient for rationality if perception did all the hard work, this division of labor is probably infeasible.

Reasoning from a neutral observation vocabulary

Radicchio
Radicchio image thanks to my foresight while shopping

The first approach drew the boundary at too “high” a level. Perhaps we can move the interface down, so perception does less work and rationality more?

A plausible alternative might be for perception to output statements involving only a fixed set of sensory properties of the world, such as shapes and colors. Then it’s not perception’s job to make the inherently nebulous, ontological judgements about what sorts of things you perceive. It describes objective physical features, and sorting out nebulosity is rationality’s problem.

Logical positivists who pursued this model called the set of predicates that could appear in perception’s outputs a neutral observation vocabulary. It should be “neutral” in not privileging any particular ontology, such as types of things that may exist, or the properties or relationships among them. The opposite would be a theory-laden vocabulary, whose terms implicitly include substantive assumptions about the world—such as eggplants being a distinctive sort of thing. That would be a problem because perception is supposed to deliver “starting” beliefs, bare facts that don’t depend on any theoretical suppositions.

A neutral observation vocabulary would also make the job easier for perception. Unfortunately, it would make it harder for rationality. Too hard, it turns out; and still too hard for perception as well. Let’s take those two problems in turn.

Suppose you have an object in front of you, namely object396106407. Should you believe it is an eggplant? When you look at it, prod it, or chew it, you receive objective “sense data.” Given a sufficient collection of those, you are justified in concluding that it is indeed one.

How? You know something like “∀x purple(x) ∧ oval(x) ∧ moderatelyfirm(x) ∧ bitter(x) ⇒ eggplant(x).” This lists the sense data that allow you to conclude that something is an eggplant. So if you receive the sense data purple(object396106407), oval(object396106407), moderatelyfirm(object396106407), and bitter(object396106407), you know what it is.

Unfortunately, there are purple, oval, moderately firm, bitter things that are not eggplants—radicchios, for example—so that is inadequate. What collection of sense data is adequate to conclude that something is an eggplant? You’d have to add many more criteria to exclude such “false positives.”
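To see the failure concretely, here is a minimal sketch of the rule above as a program, with hypothetical sense data for the two objects. Any fixed conjunction of coarse predicates like this admits false positives:

```python
# The rule "purple ∧ oval ∧ moderately firm ∧ bitter ⇒ eggplant" as a
# program. The objects and their sense data are hypothetical.

def is_eggplant(sense_data):
    return {"purple", "oval", "moderately_firm", "bitter"} <= sense_data

object396106407 = {"purple", "oval", "moderately_firm", "bitter"}  # an eggplant
radicchio = {"purple", "oval", "moderately_firm", "bitter"}        # not an eggplant

print(is_eggplant(object396106407))  # True
print(is_eggplant(radicchio))        # True: a false positive
```

Tightening the rule to exclude radicchios only invites the opposite failure: the white, spherical, or cubical eggplants described below would no longer match.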

White eggplant
Image courtesy Emmett S.

Also, there are white eggplants. (That is where the name came from originally! They are egg-shaped, and indistinguishable at first glance from hen eggs.) Also, some varieties are spherical; and if you grow one in a box, it might come out cubical. Also, they get soft if you leave them too long in the refrigerator. These “false negative” exceptions also pile up, seemingly indefinitely.

Finding rational conditions to believe something is a member of a macroscopically meaningful category (“eggplants”) faces much the same difficulties as the problem of giving rational conditions for something to be a member. The problems mainly arise from the same source: nebulosity. Just as no precise definition of eggplantness is possible, no fixed set of sensory criteria can tell you whether something is an eggplant. That’s partly because what counts as an eggplant depends on purposes and circumstances.

Perception into a neutral observation vocabulary

You can’t mistake a radicchio for an eggplant, because they are not just “purple” and “oval”; they are not the same color or shape at all. Finer distinctions are apparent. You need more information than can be captured at that level. What observation vocabulary would do the job?

As we saw in “When will you go bald?,” color terms like “red,” “gray,” or “purple” don’t straightforwardly correspond to anything in reality, nor anything in sensory experience. The words are often useful, and adequate in many circumstances, but in others you need finer-grained descriptions. What language could express those? There seems to be no fully general way of describing color information better than point-by-point red, green, and blue brightness values. And ultimately, color is nebulous, inseparable from texture and context, and it seems no linguistic description could be adequate.

An eggplant is “oval,” although it’s a distinctive sort of ovalness, not the same as the ovalness of a radicchio. Worse, consider clouds. You can, at a glance, distinguish cumulus and cirrus clouds based on their shapes. But the shape of a particular cloud, if you look harder, is extraordinarily complex, with all sorts of frilly and streaky and wispy bits. What language of shapes would capture that? There is no general way of describing unique shapes other than the point-by-point outline of an object. And clouds don’t even have outlines! Ultimately, shape is nebulous; the shape of a cloud cannot be described fully accurately and precisely, because there’s nothing precise there.

The aim in devising a neutral observation vocabulary was to be neutral with respect to ontologies of the world by disallowing terms that refer to object types and theory-laden properties. However, these problems with color and shape suggest that it’s not possible to be neutral with respect to an ontology of perception itself. If perception does any processing to summarize the retinal image, the limits of that computation will show up at the interface between perception and rationality, and will shape what sorts of “starting beliefs” rationality can work with.

It seems, then, that in difficult cases, rationality needs access all the way down to the retinal image.

Pushing rationality down to the pixels

The great thing about rationality is that if you give it true inputs, it guarantees true outputs. It’s reliable that way. It’s also universal; you can reason about anything using formal logic. Or anyway, that’s the theory.

Is perception rational? If not, what good is it? After photons hit the retina, perception is some sort of computation, so we can model it formally; it should be just another rational process. It’s unconscious formal inference—or so some rationalists would like to think.

So we can try to push rationality all the way down to the pixels, so to speak. This approach abandons one of the typical rationalist assumptions, that there is a modularity boundary between perception and rationality. It’s a “one box” model.

The information delivered by the eye is something like “at such-and-such an instant, several photons within such-and-such a frequency range arrived at a cone cell at such-and-such a position on the retina.”1 This is a reassuringly objective and physical fact; and you can reason about a set of such measurements.

While treating perception just as an application of general-purpose rational inference might be possible in principle, attempts to do so in practice run into seemingly insuperable conceptual and computational obstacles.

Conceptually, just declaring that rationality will do the whole job doesn’t address the question of how. The one-box model requires rationality to solve both kinds of problems we saw in the “neutral observation vocabulary” model—the problems that seemed insuperable for rationality, plus the ones that seemed insuperable for perception. To be credible, the approach requires a plausible explanation of how one can reason from pixels to statements, and none has yet been found.

Computationally, the problem is the sheer quantity of work that would be required. The human retina has a resolution of tens of megapixels, delivered about sixty times per second.2 Applying general logical inference to a billion data points per second does not seem feasible, for either computers or the brain.
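The arithmetic behind “a billion data points per second,” using the rough figures from the text (see footnotes 1 and 2 for the many caveats):

```python
# Rough arithmetic only; both figures are approximations from the text.
megapixels_per_frame = 20e6   # "tens of megapixels": take ~20 million
frames_per_second = 60        # delivered "about sixty times per second"
print(megapixels_per_frame * frames_per_second)  # 1.2e9 points per second
```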

Indeed, the brain is not one general-purpose box. Perception is partly modular. The first few stages of perceptual information processing use specialized neural circuitry to compute fixed, efficient algorithms that are dissimilar to general rational inference.

Still, uniform one-box models remain popular. “Deep learning” systems that start from pixels are surprisingly good at image classification, and some researchers hope to extend them to general rationality. However, much of their success appears to derive from arranging them specifically to compute convolutions, which had long been known as a special purpose method in the early stages of visual processing. Further, image classification is not general perception, and deep learning systems are notoriously bad at recognizing spatial relationships.3 It also seems unlikely that deep learning systems can be made to reason with nested logical quantifiers, which is presumably part of rationality.
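For readers who haven’t met them, here is a minimal sketch of the convolution operation itself: slide a small fixed kernel over an image, taking a weighted sum of each neighborhood. The kernel here is a classic hand-designed edge detector of the sort long used in early visual processing, not a learned one. (Strictly, the code computes cross-correlation, which is what “convolution” means in the deep learning literature.)

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide `kernel` over `image`, summing element-wise products."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A classic hand-designed kernel (Sobel) that responds to vertical edges.
sobel_x = np.array([[-1.0, 0.0, 1.0],
                    [-2.0, 0.0, 2.0],
                    [-1.0, 0.0, 1.0]])

image = np.zeros((8, 8))
image[:, 4:] = 1.0                 # dark on the left, bright on the right
print(convolve2d(image, sobel_x))  # nonzero only near the edge
```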

Probabilistic (“Bayesian”) one-box approaches are also popular now. They seem to depend on the implicit belief that probabilistic inference encompasses the whole of rationality. As we’ll see in the chapter on probabilism, this is unambiguously mistaken. These approaches also do not (yet) include any worked-out practical theory of how to begin statistical inference from pixels.

Maybe someday somehow some one-box, rationality-does-everything model can be made to work. It doesn’t seem a promising approach to me.

Instruments instead

Since biological perception is subjective, unreliable, and ontologically biased, maybe it’s the wrong starting point for rationality. Some logical positivists came to that conclusion, in any case. Instead, they suggested, reliable knowledge must be based on artificial scientific instruments, which measure objective physical properties and have unambiguous numerical outputs. If a pH meter with a digital readout says the pH is 5.7, there is no room for doubt about what it is saying.

This rethinking seemed promising for a while. As a result, “empirical” has often come to mean “grounded in scientific experiments” rather than “grounded in human perception.” Likewise, “rational” is now sometimes taken to mean “a conclusion justified by a scientific experiment.”

Unfortunately, this armchair philosophical theory idealizes scientific instruments in a way no working scientist would. It would be lovely if you could point a spectrophotometer at a fruit and it would give you a reliable, objective measure of its color, but they don’t work like that. They can’t, for all the same reasons eyes can’t. “Color” is not an objective property.

Laboratory apparatus is also always, to varying degrees, balky and capricious. Any instrument may go out of calibration, while still being approximately or probabilistically right. Worse, instruments may usually give good results, but be wildly off in unusual, hard-to-define circumstances.

Scientists spend much of their time bullying equipment into behaving well enough for long enough to get an experiment done. Making meaningful scientific measurements usually involves frequent application of common sense, specialized technical knowledge, and practical laboratory know-how, much of which can’t be codified. pH meters, for example, are notoriously persnickety. You have to calibrate them before every use, and clean and store them carefully after each use. Even then, if they give a reading that seems wrong, you clean them again and try again, and reset the calibration again, and eventually give up and replace the “glass electrode,” which is the actual sensor.

The meaning of any experiment depends on an unbounded set of unwritten usualness conditions. Whether and how much to trust an instrument is always a matter of interpretation; a non-rational judgement call. That’s one reason we do control experiments, to deal with as many usualness violations as is feasible. In practice, this can work very well, but there are no guarantees.

There’s another problem: scientific instruments are not ontologically neutral; their outputs are “theory-laden” and not bare facts. You can only believe a measurement is meaningful—never mind reliable—if you already accept particular concepts, assumptions, and theories; and those are always to some degree nebulous and uncertain.4 For example, the value of a pH meter reading depends on an ontology in which pH is an actual physical property. From high school chemistry, this would seem to be unproblematic: it is the negative of the logarithm of the hydrogen ion concentration. On the other hand, some mainstream authorities in physical chemistry say that pH is conceptually incoherent, physically meaningless, and cannot be measured even in principle.5
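For concreteness, here is the high-school definition as a computation. The concentration value is made up, chosen to yield the 5.7 reading mentioned above:

```python
import math

# Textbook definition: pH = -log10([H+]), with the hydrogen ion
# concentration in moles per liter. The concentration is hypothetical.
h_ion_concentration = 2.0e-6          # mol/L
ph = -math.log10(h_ion_concentration)
print(round(ph, 2))                   # 5.7
```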

Instruments do let you make observations the unaided senses can’t, of course. DNA evidence could be helpful in judging whether something is an eggplant if you are otherwise unsure. But it is never entirely reliable: eggplant DNA might show up in things that aren’t eggplants for lots of reasons, and a DNA test could fail to find eggplant DNA in an eggplant for lots of reasons. And DNA can’t address the ontological issues at all. It can’t tell you whether a gboma counts as an eggplant; and no evidence can nail down what species something is in cases where that is inherently nebulous, such as “ring species.”

For all these reasons, there’s no set of scientific measurements that could determine, as definite fact, that object396106407 is an eggplant—any more than any set of perceptual observations could.

  1. This is a highly simplified and inaccurate version of what the eye does, for many reasons; it’s more nearly correct as an account of the CCD array in a digital camera. Individual retinal photodetectors (rod and cone cells) are noisy. Also, the retina itself performs significant signal processing, so by the time information reaches the brain it has already undergone extensive transformation.
  2. This analogy to digital cameras is again somewhat inaccurate; the retina is heterogeneous and complicated.
  3. A good explanation for non-specialists is Jordana Cepelewicz’s “Where We See Shapes, AI Sees Textures,” Quanta Magazine, July 1, 2019. Two significant academic papers (among many) are Jason Jo and Yoshua Bengio, “Measuring the tendency of CNNs to Learn Surface Statistical Regularities,” arXiv, 2017; and Wieland Brendel and Matthias Bethge, “Approximating CNNs with Bag-of-local-Features models works surprisingly well on ImageNet,” arXiv, 2019.
  4. The Stanford Encyclopedia of Philosophy article “Theory and Observation in Science” includes an overview of the theory-ladenness problem, and several of the others discussed in this chapter.
  5. Working chemists routinely ignore theoretical concerns about the meaningfulness of pH, and theorists disagree sharply about its significance. For one entry point into the controversy, see Robert de Levie, “Potentiometric pH Measurements of Acidity Are Approximations, Some More Useful than Others,” Journal of Chemical Education, 87:11 (2010), 1188–1194.