A first lesson in meta-rationality

Bongard problem #5

The contents of the six boxes on the left all have something in common. The six on the right also all have something in common, which is the opposite of what the left-hand boxes share. What is it?

This is called a Bongard problem. I will suggest that Bongard problems are a particularly simple example of meta-systematic cognition, or “meta-rationality.” This post might be the first lesson in a course that trains you in meta-rational thinking.

The problem above is pretty easy. Here is a somewhat more difficult example:

Bongard problem #4

(Ignore the distracting skeuomorphic ring-binder graphic in the middle; it’s not part of the problem. What makes the six boxes on the left different from the six on the right?)

Another problem of medium difficulty:

Bongard problem #38

Bongard problems work inside-out from most puzzles. In a typical puzzle format, you are given a system of rules, and a specific case, and you have to figure out how to apply the rules to that case. For example, the Sudoku rules: the goal is to fill a 9×9 grid with digits so that each column, each row, and each of the nine 3×3 subgrids contains all of the digits from 1 to 9. In a specific Sudoku puzzle, some of the squares in the grid are already filled in, and you have to fill in the rest while obeying the rules.
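
To make “working within a system of rules” concrete, here is a minimal Python sketch (my own illustration, not from any Sudoku library): given a completed grid, it checks every row, column, and 3×3 subgrid against the rules. The rules are fixed in advance; solving is just search within them.

    def valid_sudoku(grid):
        # Check a completed 9x9 grid (a list of 9 lists of 9 ints)
        # against the Sudoku rules: each row, column, and 3x3 box
        # must contain the digits 1..9 exactly once.
        digits = set(range(1, 10))
        rows = [list(row) for row in grid]
        cols = [[grid[r][c] for r in range(9)] for c in range(9)]
        boxes = [[grid[3 * br + dr][3 * bc + dc]
                  for dr in range(3) for dc in range(3)]
                 for br in range(3) for bc in range(3)]
        return all(set(unit) == digits for unit in rows + cols + boxes)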

In a Bongard problem, you have to figure out what the rule is. You are given twelve specific images, and the result of applying the rule to each. (The rule assigns an image to either the left or right group.) Once you have discovered the rule, applying it to new images would be trivial.
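
This inverted structure is easy to state in code. A sketch, with made-up figure descriptions: the twelve classified examples are the givens; the predicate is the unknown. Once you have the predicate, classifying any figure is a one-liner.

    # A rule is just a predicate on a figure description (invented here).
    def rule(figure):
        return figure["outline"] == "straight"  # e.g. lines vs. curves

    def classify(figure):
        # Trivial once the rule is known: left if it holds, else right.
        return "left" if rule(figure) else "right"

    classify({"outline": "straight"})  # -> "left"
    classify({"outline": "curved"})    # -> "right"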

The rule for the first problem, at the top of this page, is “figures with straight line segments on the left, ones with smooth curves on the right.” The second is “convex vs. concave.” And the third is “triangle bigger than circle, vs. circle bigger than triangle.” I found the third a bit more complicated, because I got distracted by the positioning of the two figures in space, and was especially confused by the containment relationships. But all that turns out to be irrelevant.

What makes Bongard problems interesting, and in some cases very difficult, is that there is no explicit limitation on what sorts of rules there may be. However, in a well-formed Bongard problem, there should be only one reasonable rule. Once you see it, there is an “aha!” moment, with an enjoyable jolt of understanding.

Here’s a harder one:

Bongard problem #104

This one is still harder:

Bongard problem #112

If you enjoy puzzle solving, you might like to work through some more in Harry Foundalis’ collection of nearly three hundred. (You don’t need to do that to understand this post, though.)

If you got stuck on the last two above, the solutions are in this footnote.1 (If you hover your mouse over a footnote number, you can read its text. Or you can click on it to jump to the feet.)

How you solve Bongard problems

To solve a Sudoku puzzle, you work within a system of rules. This is the essence of systematic thinking, or formal rationality.

To solve a Bongard problem, you discover a rule—a very simple system. This makes for a “minimal,” “toy,” or “petri dish” version of meta-systematicity. Meta-systematic cognition evaluates, chooses, combines, modifies, discovers, or creates systems—rather than working within one.

My earlier post “How To Think Real Good” mostly describes ways of thinking meta-systematically, in science and engineering domains—although I wasn’t conscious of that when I wrote it! Much of it is about problem formulation. As I wrote:

Finding a good formulation for a problem is often most of the work of solving it.

The Bongard problems take this principle to an extreme. Each problem is simply to figure out what the problem is!

Many of the heuristics are about how to take an unstructured, vague problem domain and get it to the point where formal methods become applicable.

The essence of Bongard problem solution is similar: you need to find the formal structure that makes sense of the data. The difference is that Bongard problems are much less vague than real-world ones typically appear on arrival. They are also simpler, of course. Between crispness and simplicity, you can solve a Bongard problem in seconds, minutes, or hours, where STEM ones can take days, weeks, or years.

Lovelace and Babbage: perhaps we should build a model!
From The Thrilling Adventures of Lovelace and Babbage

I read Doug Hofstadter’s mind-altering book Gödel, Escher, Bach: An Eternal Golden Braid, on the philosophy of artificial intelligence, in 1979, but hadn’t looked at it again since. When researching this post, I discovered that Bongard problems were first popularized by a brief discussion in GEB—which I had entirely forgotten.2 It turns out that Hofstadter’s analysis develops some of the same themes as “How To Think”; I return to them in this post.3

In “How To Think,” I wrote that:

Any situation can be described in infinitely many ways. Choosing a vocabulary, at the right level of description, for describing relevant factors, is key to understanding.4

As Hofstadter explains in detail, this is also how you solve Bongard problems. There are two aspects: levels of description, and relevant factors. As for the first, I wrote:

A successful problem formulation has to make explicit the distinctions that are used in the problem solution.

You start by recognizing simple figures, such as triangles and squares. Then you start building up descriptions in terms of properties and relationships. Some of the figures are big, some are small; some point up, some point down. Some are inside others; some touch others; some are right or left of others. At a next level, you inventory properties and relationships of properties and relationships. In some figures, circles only touch triangles that point up; or there are always three little things inside a big thing; or the centers of all the big figures form a line. The difficulty of a Bongard problem depends, in part, on how many levels of description are involved.
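
Here is a sketch of what such a description vocabulary might look like as data. (The representation is my own invention for illustration, not taken from any actual Bongard solver.) Level 0 lists the figures and their basic properties; level 1 states relationships between figures; level 2 expresses properties of those relationships.

    # Level 0: primitive figures with basic properties (invented data).
    figures = [
        {"id": "a", "kind": "triangle", "size": "big",   "points": "up"},
        {"id": "b", "kind": "circle",   "size": "small", "points": None},
    ]

    # Level 1: relationships between figures, as (relation, x, y) facts.
    relations = [("touches", "b", "a")]

    # Level 2: a property of the level-1 facts themselves, e.g. the
    # hypothesis "circles only touch triangles that point up."
    def circles_touch_only_up_triangles(figures, relations):
        by_id = {f["id"]: f for f in figures}
        return all(by_id[y]["kind"] != "triangle"
                   or by_id[y]["points"] == "up"
                   for (rel, x, y) in relations
                   if rel == "touches" and by_id[x]["kind"] == "circle")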

Regarding the second aspect I wrote:

A successful formulation has to abstract the problem to eliminate irrelevant detail, and make it small enough that it’s easy to solve.

Hofstadter describes this in terms of “filtering and focussing.” Even in Bongard problems, crisp and simple as they are, nearly everything you could say about a diagram is irrelevant. Typically—but not necessarily!—exact positions and angles and sizes don’t matter, for example. They need to get filtered out, while you focus on what matters.

For example, consider this problem:

Bongard problem #55

Initially, you probably see a variety of shapes, each with a tiny blob attached. Presumably, all the shapes on the left are similar in some way, and all the shapes on the right are similar in a different way. So you start looking at properties of the shapes—rounded versus angular, counting numbers of sides, looking at which direction the indentation points. But all this turns out to be irrelevant. The solution depends on an entirely different way of looking at the problem. (Answer in footnote,5 in case you are stuck!)

There’s an obvious difficulty here: if you don’t know the solution to a problem, how do you know whether your vocabulary makes the distinctions it needs? The answer is: you can’t be sure; but there are many heuristics that make finding a good formulation more likely.6

In general, though, in a multi-level search process, you need to be able to recognize incremental progress. In Bongard problems, you are searching for an adequate description vocabulary. With the more difficult problems, you find that particular descriptive terms give partial insight, and then you build on them.
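
One way to picture recognizing incremental progress, as a toy sketch only (people surely don’t literally do this): score each candidate rule against the twelve examples, and treat a near-separation as a hint worth refining rather than a failure.

    def score(predicate, left, right):
        # How many of the twelve boxes a candidate rule gets right:
        # it should hold for every left box and fail for every right box.
        return (sum(bool(predicate(f)) for f in left)
                + sum(not predicate(f) for f in right))

    def promising(predicates, left, right, threshold=10):
        # A score of 12 solves the problem. A score of 10 or 11 is
        # partial insight: a descriptive term worth building on.
        return sorted((p for p in predicates
                       if score(p, left, right) >= threshold),
                      key=lambda p: -score(p, left, right))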

But your assessment of whether you have made partial progress is always uncertain, until you have the final solution. Hofstadter:

The way a picture is represented is always tentative. Upon the drop of a hat, a high-level description can be restructured, using all the devices of the later stages.

What happens when you get stuck? I wrote:

Vocabulary selection requires careful, non-formal observation of the real world. If a problem seems too hard, the formulation is probably wrong. Go back to reality, and look at what is going on.

This is the experience of Bongard solving: repeatedly looking at the diagrams—sometimes to deliberately check a tentative formulation; but sometimes in a less directed way, dropping back to a lower level of description, hoping that a pattern will pop out at you.

Stage 4.1

Rationality, a form of systematic thought, is “stage 4” of human cognitive development. Meta-rationality, or meta-systematicity, is stage 5.

In “A bridge to meta-rationality vs. civilizational collapse,” I suggested that it may be critical to human survival for more STEM-educated people to learn meta-rationality. (Not to be alarmist or anything.) That will require a “bridge” or “path.” I’ve begun working toward a curriculum for teaching meta-rationality.

As a “sandbox,” or “finger-painting version” of meta-rationality, Bongard problems may be an ideal first lesson. As an “abuse of notation,” we could imagine “stages” 4.1, 4.2, and so on, up to 4.9 and 5.0. The first module of the curriculum, 4.1, would present the various types of meta-systematic cognition in a STEM-friendly format. Maybe we can develop sandboxes for evaluating, choosing, combining, modifying, and creating systems—just as Bongard problems are a sandbox for discovering them.

(Solving large numbers of Bongard problems is not necessary; doing that is probably not particularly effective training in meta-rationality. The point is to understand how solving them is meta-rational, which probably requires working through only a dozen or two.)

The goal here is to make meta-rationality easy and fun. That is important in demonstrating its existence, and its value, before getting to the horrifying stage 4.5 realization that systems can never be made to work in their own terms, and rationality’s promises of certainty, understanding, and control are all lies. Rationalists resist explanations of meta-rationality, because they sense—and rightly fear—the nihilism of 4.5. It can only be possible to lead rationalists beyond stage 4 if they have confidence that stage 5 is real and attractive: neither woo nor nihilistic despair.

Look Ma, no woo!

For stage 4 rationalists, it’s easy to mistake stage 5 meta-rationalism for stage 3 pre-rationalism. Stage 3 is long on “woo”: supernatural beliefs, pseudoscience, and wishful thinking.

The Bongard problems are a great introduction to meta-rationality for STEM people, because they definitely involve no woo. There appears—at first—to be nothing “mushy” or “metaphysical” involved.

Solving them obviously involves rationality. It feels very similar to solving ordinary puzzles, which are paradigms of systematic cognition. This illustrates the important point that stage 5 does not reject systematicity: it uses it. In fact, at this point, a rationalist might object that solving Bongard problems is clearly just rational, and that there is no meaningful distinction between “meta-rationality” and ordinary rationality. (They might invoke the Church-Turing Thesis.) I’ll come back to that later in the post. But if this is your reaction, I have succeeded in sneakily leading you 0.1 of the way along the path to stage 5.0, without triggering your stage 4.0 eternalist defenses.

Solving Bongard problems does seem to involve “intuition”—leading to flashes of “aha!”. Woomeisters love the word “intuition,” so we should be wary of it. Mostly, “intuition” just means “mental activity we don’t have a good explanation for,” or maybe “mental activity we don’t have conscious access to.” It is a useless concept, because we don’t have a good explanation for much, if any, mental activity, nor conscious access to much of it. By these definitions, nearly everything is “intuition,” so it’s not a meaningful category. The woomeisters’ implication is that, because we don’t have good explanations for “intuition,” it must work according to some crackpot theory (which may involve “quantum,” “cosmic,” or “transcendent”), which proves we are all God and can live forever.

In fact, the “intuition” or “insight” involved in solving Bongard problems probably involves no special mental mechanisms.7 This is probably true of meta-rational cognition in general. It’s probably also true of rationality versus pre-rationality.8 Rational thought is not a special type of mental activity. It is a diverse collection of patterns for using your ordinary pre-rational thinking processes to accomplish particular kinds of tasks, by conforming to systems. Likewise, meta-rationality is a diverse collection of patterns of ordinary thinking that accomplish particular kinds of tasks, by manipulating systems from the outside.

We be Science peeps

Stand back! Cat iz goin to do SCIENCE!

Bongard problems are sometimes used as a simple model for science. Solving them involves using observations to create hypotheses, checking the hypotheses against empirical data, and refining your hypothesis when data disconfirm it. As a model, it’s valuable in pointing out that science is not just problem solving, it’s also problem formulating.9 Scientific problem formulating is not taught much, probably exactly because it requires going beyond stage 4.

We saw, in the “tiny blobs” problem, that re-formulation is often critical. Hofstadter writes:

There are discoveries on all levels of complexity. The Kuhnian theory that certain rare events called “paradigm shifts” mark the distinction between “normal” science and “conceptual revolutions” does not seem to work, for we can see paradigm shifts happening all throughout the system, all the time. The fluidity of descriptions ensures that paradigm shifts will take place on all scales.10

You can’t do science well without having some meta-rational skills. In fact, you probably can’t do it at all. This makes the point that meta-rationality is not some special woo thing for Enlightened Masters only. You also probably can’t survive daily life in the modern world without some ability to think rationally. However, both rationality and meta-rationality are skills. You can learn to do them better; and that gives you increasing power in certain domains.

But Church-Turing!

I intuit that some rationalist readers have had increasingly high-pressure jets of steam coming out of their ears as they read this post, because I seem to be missing the fatal flaw in the whole story.

“These puzzles can obviously be solved systematically and rationally. From the Church-Turing Thesis, we know there’s nothing special going on! We know humans can’t do anything more than a computer can. If a human can use these supposedly ‘meta-systematic’ types of reasoning, they are just another system. Whatever algorithm people use to solve Bongard problems, it is an algorithm, so it is a rational system.”

There are two and a half answers to this.

First, the objection applies equally to rationality versus irrationality. If people are algorithms, and they are often irrational and unsystematic, then algorithms can be either rational or not; systematic or not. In that case, “rationality” and “systematicity” are properties of certain kinds of computation, but are not characteristic of computation in general. So this can’t be a valid objection to characterizing certain non-systematic sorts of computation as “meta-rational” or “meta-systematic.”

Second, the objection turns partly on the ambiguity of the terms “system” and “rationality.” These are necessarily vague, and I am not going to give precise definitions. However, by “system” I mean, roughly, a set of rules that can be printed in a book weighing less than ten kilograms, and which a person can consciously follow.11 If a person is an algorithm, it is probably an incomprehensibly vast one, which could not be written concisely. It is probably also an incomprehensibly weird one, which one could not consciously follow accurately. I say “probably” because we don’t know much about how minds work, so we can’t be certain.

What we can be certain of is that, because we don’t know how minds work, we can’t treat them as systems now. That is the case even if, when neuroscience progresses sufficiently, they might eventually be described that way. Even if God told us that “a human, reasoning meta-systematically, is just a system,” it would be useless in practice. Since we can’t now write out rules for meta-systematic reasoning in less than ten kilograms, we have to act, for now, as if meta-systematic reasoning were non-systematic.

The half answer is that the Church-Turing Thesis probably has almost nothing to do with thinking. Probably its only implication is that people can’t compute HALTS? or other uncomputable functions, which says nothing about irrationality, rationality, or meta-rationality. (Meta-rationality does not require computing the uncomputable!) This is a half answer because there’s more to say than fits in a blog post. A talk by Brian Cantwell Smith is relevant to this, and to other themes of Meaningness, and is excellent.

AI, Bongard, and human-complete problems

Bongard problems were originally designed as a test domain for artificial intelligence, in the mid-’60s, when AI still looked easy. There has been very limited progress.12 Despite many AI researchers finding the problems fascinating, few have tackled them, because on reflection the problems come to seem extremely difficult. The number of properties and relationships that must be recognized and represented is large, and their possible combinations produce an exponentially increasing number of feasible hypotheses at each successive level of description.

Such progress as we’ve had in AI since 1990 has come almost entirely from brute force. Computers are so fast that combinatorial explosions are less daunting now than during the golden days of the 1970s and ’80s, when most known AI algorithms were invented. Could Bongard problems now be brute-forced, by generating and testing billions of hypotheses, where earlier programs were only able to consider dozens? I don’t know. I’m tempted to try! But I’ll explain reasons to think that can’t work.
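
To see the scale involved, here is a back-of-the-envelope sketch, with arbitrary numbers of my own choosing: count the candidate rules you can build from a modest vocabulary of primitive features using NOT, AND, and OR. “Billions” is left behind by the second level.

    def count_formulas(primitives, depth):
        # Rough syntactic count of candidate rules buildable from
        # primitive features with NOT, AND, OR. Many are logically
        # equivalent, but a naive generate-and-test loop would
        # enumerate something like this.
        n = primitives
        for _ in range(depth):
            n = 2 * n + 2 * n * n  # keep f and NOT f; add f AND g, f OR g
        return n

    for depth in range(1, 5):
        print(depth, count_formulas(50, depth))
    # -> roughly 5.1e3, 5.2e7, 5.4e15, 5.9e31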

A “human-complete” problem is one that is so hard that if a computer could solve it, it could do anything a person can do. Any solution to the problem would also have to be a complete solution to emulating human intelligence.

Early on, artificial intelligence researchers concentrated on trying to solve human-complete problems, because they thought AI would be easy, so they might as well just do the whole thing at once, rather than wasting time implementing subsets of human-level intelligence. Once we realized AI is hard, we took the opposite approach: avoid human-complete problems, because we know those are currently infeasible.

The best candidate for a human-complete problem is carrying on an ordinary undirected conversation over a text chat connection.13 Chatting is likely human-complete because it’s so open-ended. Essentially any topic could come up in a casual conversation, and then you’d have to say something sensible about it. (If you are the sort of person reading this post, you might even want to talk about Bongard problems and how to solve them!)

It seems Bongard problems might also be human-complete—or even “intelligence-complete.”

Bongard problems have a quality of universality to them. They depend on a sense of simplicity which is not just limited to earthbound human beings. The skill of solving them lies very close to the core of “pure” intelligence.14

The reason, again, is that almost anything might be relevant. The kinds of concepts and reasoning needed are unbounded. For instance, this problem involves totally different considerations from the ones you’ve seen so far:

Bongard problem #199

(Stumped? See the footnote!15)

A program to solve Bongard problems would need, for instance, basic “intuitive” physics knowledge. In fact, we could probably encode any kind of intelligence or knowledge into a Bongard problem.16 This makes it plausible that Bongard problems are, indeed, human-complete.

Check this out: a meta-Bongard-problem, about Bongard problems!

Bongard problem #200

Here each of the twelve boxes contains a Bongard-type problem (each with only six examples, rather than the usual twelve). How are the six problems on the left different from the six on the right? (Solution in footnote.17)

The likely human-completeness of Bongard problems doesn’t mean a program couldn’t, in principle, do as well as people; only that we are currently very far from knowing how to write one.

“Deep learning” is the only interesting advance in AI since 1990. It’s mostly just brute force, but some of the applications have been impressive. I’m seriously interested in understanding its power and limits. (Some of the hype suggests that human-level AI is imminent. I don’t think so.) I’m pretty sure deep learning would get nowhere with Bongard problems. I would be very surprised and impressed and excited if I were wrong!

Meta-rationality, nebulosity, and metaphysics

Spaced-out cat in space

I hope I’ve persuaded you now that meta-rationality is emotionally safe. I hope you are intrigued: you can see that “meta-rationality” is plausibly a thing, and useful, and you can do it, and you want to learn more about how. Developing that curiosity is the goal of the “4.1” module of the meta-rationality curriculum. In “4.2,” we can look at ways of developing meta-systematic skills.

But now I want to scare you a little. (Probably this isn’t good pedagogy. Probably I should withhold the nasty surprise as long as possible…)

Deviously, I have led you over the edge of a cliff. You have started walking on a cloud, and you haven’t noticed because you haven’t looked down. At 4.5, you will see that there is no ground beneath you, and then you will fall into the abyss of nihilism. You will fall, that is, unless you have learned the skills of cloud-treading! They are one of the main parts of this curriculum—precisely because I want to give you the tools to walk confidently over that chasm.

“Nebulosity” is the central theme of Meaningness. Literally, the word means “cloud-like-ness.” I use it to refer to the inherently vague, ambiguous, and fluid character of all meanings. “Eternalism” is the denial of nebulosity: it is the insistence that meanings can be made precise, definite, clear, and unchanging. Eternalism claims that meaning has some ultimate foundation: an eternal ordering principle that supports our understanding. Rationalism is a species of eternalism.

Eternalism is false. There is no ultimate grounding for meaning. Our knowledge and understanding can only ever be vague, ambiguous, and fluid. This is not merely a matter of uncertainty; it is that the concepts we use, our vocabularies for description, are always nebulous to some degree. Reality itself is also always nebulous, to some degree.

We are always already walking on clouds, because there is no ground anywhere. There is only ever an illusion of ground—and once we are free of that illusion, vast new territories open up for exploration.

This recognition is the midpoint, and the key, to the transition from stage 4 to stage 5. The rest is learning to be comfortable with groundlessness, and gaining means for navigating the realms beyond systematicity. That is the domain of the complete stance, which avoids both eternalism and nihilism because it conjures with both nebulosity and pattern.18

Whoa! Dude! You’ve wandered off into space! What has this got to do with Bongard problems?

The “crispness” of the Bongard diagrams is deceptive. I wrote above that:

There appears—at first—to be nothing “mushy” or “metaphysical” involved.

Consider this mushy problem:

Bongard problem #97

The answer is obvious: triangles versus circles. Except that most of them aren’t. We have to see things that objectively aren’t triangles or circles as triangles and circles.

Hofstadter describes this as the “fluidity of descriptions,” “play,” “slippage,” and “bending.”

One has to be able to “bend” concepts, when it is appropriate. Nothing should be absolutely rigid. On the other hand, things shouldn’t be so wishy-washy that nothing has any meaning at all, either. The trick is knowing when and how to slip one concept into another.19

This navigates between the Scylla of eternalism (rigidity, or fixation of patterns) and the Charybdis of nihilism (meaninglessness).

And now… the dreaded metaphysics. I’m going to show you three problems in a row. The first two are easy; the third, not so much. Give it a serious try!

Bongard problem #85
Bongard problem #86
Bongard problem #87

I don’t want to spoil this for you by discussing the solution immediately, in case you read on before solving the problems. So let’s pause for a brief lyrical interlude.

     If he had seen this dainty creature,
     Golden as saffron in every feature,
     How could a High Creator bear
     To part with anything so fair?
     Suppose he shut his eyes? Oh, no:
     How could he then have made her so?
—Which proves the world was not created:
Buddhist philosophy is vindicated.

(That’s by the rationalist epistemologist and logician Dharmakirti.)

The answer to the third problem is “four line segments vs. five.”

“But,” you may object, “two of the examples on the right have only three line segments!” And that’s because I led you down a garden path again. The first two problems set you up to think about line segments in wrong ways for the third. What counts as a line segment in the third problem is “a straight bit leading up to a junction or termination.” The first example on the right side has three horizontal “segments,” split at the junctions, plus the two vertical ones; the H-shaped one has four vertical segments, plus one horizontal one.

Here is another problem of the same sort. I spent ten minutes trying, and failing, to solve it. You may find it easier, knowing that the solution is similar to the previous one.

Bongard problem #90

(Solution in footnote.20)

The point of these examples is that what counts as an “object” depends on the context. This was true in the “mushy” problem above, too. One of the “circles” can also be described as a group of triangles. Is it one object, or many? Neither answer is “objectively true”; which one is useful depends on what you are doing.

This is a metaphysical, or ontological, observation. Ontology asks: what kinds of objects are there?

And part of the answer is: objects, and their boundaries, are not objective features of the world. They are necessarily nebulous: cloud-like. Reality does not come divided up into bits; we have to do that. But we can’t do it arbitrarily, and just impose whatever boundaries we like, because reality is also patterned. The individuation of objects, and placement of boundaries, is a collaboration between self and other.

This is much more true of everyday physical reality than of Bongard problems (which are, after all, relatively crisp). I began to explain that in “Boundaries, objects, and connections.” That page asks: is a jam jar one object, or two, or three? It depends on your purposes at the time you look at it. Where, in a mixture, does jam stop and yogurt begin? There is no objective answer. Eventually that page will introduce a large discussion of how this works, and what it implies.

If you think I’m wrong, stupid, or crazy, how about Richard Feynman? Check out his discussion of the vagueness of objects here.

That inherent vagueness is central to the Meaningness explanation of meaning, and to “fluid” or “meta-systematic” cognition. You may be able to begin to see why now!

If you’d like further hints, you can read Alexandre Linhares’ “A glimpse at the metaphysics of Bongard problems.” Linhares collaborated with, and built on the work of, Hofstadter and Foundalis. He also built on the work of Brian Cantwell Smith, whose talk I recommended above. Smith’s On the Origin of Objects develops an account similar to the one I’ll explain in Meaningness. (Any decade now.)

This also connects closely—and not coincidentally!—with the work on active, task-directed machine vision in my Ph.D. thesis. It’s a model for how we use visual processing to individuate objects. More about that later, too, I hope!

  1. In the first hard problem, in the left group, one circle passes through the center of the other one; on the right, neither passes through the center of the other. In the left group of the second one, the points are spaced equidistantly on the horizontal axis; in the right group, they are equidistant vertically.
  2. It’s pp. 646-662.
  3. It would be interesting to know how much influence GEB had on “How To Think”—either the original, abandoned, early-’90s project; or my blog post from three years ago. I’m reasonably certain I wasn’t conscious of any influence, but the book had a huge impact on me near the beginning of my engagement with AI.
  4. I’ve mashed together some phrases from different parts of “How To Think” here, so you won’t find exactly this passage—or some of the other “quotes”—there.
  5. On the left, the blob is a bit counterclockwise from the indentation; on the right, it’s clockwise.
  6. Quoted from “How To Think.” Hofstadter and I both suggest that these heuristics are important and numerous, but it is a weakness in both our accounts that we don’t actually give good examples! He says this would be “very hard,” at least when attempting sufficient definiteness to program an AI system (p. 661). I’m somewhat more optimistic, at least about explaining them adequately for human use.
  7. See GEB p. 661 on this.
  8. I say “probably” because we don’t actually know how any sort of thinking works.
  9. Science is, perhaps even more importantly, also problem choosing. Even less has been written about problem choosing than about problem formulating. I have good intentions to rectify that at some point.
  10. p. 660; emphasis added.
  11. All rationality is systematic, but not all systems are rational. Sharia, traditional Chinese medicine, and waterfall development methodology are all definitely systems, with elaborate rules and expert disputation over geeky details. Plausibly, these systems are not rational, however. A “rational” system is one that is “good” or “effective” according to some criterion. There are various theories of what the right criterion should be. From a meta-rational perspective, there is no best such criterion; different ones have different advantages.
  12. By far the best work is the 2006 Ph.D. thesis of Harry Foundalis, supervised by Hofstadter. Foundalis also maintains the catalog of Bongard problems I linked earlier. Interestingly, he stopped working on AI for several years because he was concerned about its ethical risks.
  13. This is often described as the “Turing Test,” although Turing’s original version involved a person trying to pass as the opposite sex. This tells us quite a lot about Turing, but perhaps not so much about artificial intelligence.
  14. GEB, pp. 661-2, lightly paraphrased.
  15. The configurations on the left are stable, taken as depicting physical objects in gravity; the ones on the right would fall over.
  16. Foundalis’ thesis provides various interesting examples. Generally, it’s considered that a proper Bongard problem depends only on “general” knowledge and intelligence; it’s not “fair” to test specifics such as understanding of Medieval musical instruments—although you could indeed build that into a Bongard-style problem, with photographs of lutes, gitterns, citoles, and racketts.
  17. The problems on the left concern individual properties of objects; those on the right concern numbers of objects.
  18. I’m talking Dzogchen here, covertly.
  19. GEB, pp. 654-5.
  20. This one is about the number of “groups” of white blobs. A black blob separates groups. On the left there are three groups; on the right, four.