Comments on “Where did you get that idea in the first place?”


AI for math

I don’t know enough about the Olympiad to have much of an opinion. It sounds similar to the Putnam Competition, which I know a little about. The problems there usually depend on recognizing some trivial trick, such as an algebraic simplification based on a coincidence of arithmetic, that makes the solution simple.

It’s plausible that current-generation theorem provers, plus maybe some knowledge base of typical competition problem trick patterns, would do well at that.

Generally, the pattern over the past 70 years has been that computers are good at things people are bad at (like math) and can’t do things people find easy (like making breakfast).


Kenny's picture

I wonder if modern AI, i.e. machine learning, will eventually stumble on meta-rationality.

Having read you for a while, I was struck by something after reading this post about historical iron manufacturing:

Without some (good enough) rational system, all reasoning is ‘reasonableness’, even thinking that ore deposits would replenish themselves after enough time (which is actually true for ‘bog iron’):

As with many ancient technologies, there is a triumph of practice over understanding in all of this; the workers have mastered the how but not the why. Lacking an understanding of geology, for instance, meant that pre-modern miners, if the ore vein hit a fault line (which might displace the vein, making it impossible to follow directly) had to resort to sinking shafts and exploratory mining in an effort to ‘find’ it again. In many cases ancient miners seem to have simply abandoned the works when the vein had moved only a short distance because they couldn’t manage to find it again. Likewise, there was a common belief (e.g. Plin. 34.49) that ore deposits, if just left alone for a period of years (often thirty) would replenish themselves, a belief that continues to appear in works on mining as late as the 18th century (and lest anyone be confused, they clearly believe this about underground deposits; they don’t mean bog iron). And so like many pre-modern industries, this was often a matter of knowing how without knowing why.

Is a (good enough) rational system of reasonableness possible? I’d guess your answer is ‘no’ and I’m inclined to agree, but I’m not that confident about it. Given that AI is (reasonably) targeting human-like intelligence, maybe we can (someday) cobble together a ‘reasonably rational’ artificial but human-like and human-level intelligence. (That seems possibly reasonable to me!)

(I think it’s reasonable that, in practice, AI people are mostly trying to mimic human behavior, because I’m relatively convinced that it’s not rationally possible to define or recognize ‘intelligence’ in general.)

One thing I find interesting about machine learning is that it’s not entirely rational, or arguably even much rational at all. Practice seems to have far outstripped the capacity of rational theories to explain it fully. Like historical mining (and probably modern mining too!), machine learning is mostly reasonable. Machine learning products, e.g. trained models, seem obviously reasonable too, which is fascinating given how capable they are anyway. (Though, and I think you’d agree, we shouldn’t be surprised by this!)
