I can't resist asking this companion question to the one of Gowers. There, Tim Dokchitser suggested the idea of Grothendieck topologies as a fundamentally new insight. But Gowers' original motivation was to probe the boundary between a human's way of thinking and that of a computer. I argued, therefore, that Grothendieck topologies might be more natural to computers, in some sense, than to humans. It seems Grothendieck always encouraged people to think of an object in terms of the category that surrounds it, rather than its internal structure. That is, even the most lovable mathematical structure might be represented simply as a symbol $A$, and its special properties encoded in arrows $A\rightarrow B$ and $C\rightarrow A$, that is, a grand combinatorial network. I'm tempted to say that the idea of a Grothendieck topology is something of an obvious corollary of this framework. It's not something I've devoted much thought to, but it seems this is exactly the kind of reasoning more agreeable to a computer than to a woolly, touchy-feely thinker like me.

So the actual question is, what other mathematical insights do you know that might come more naturally to a computer than to a human? I won't try here to define computers and humans, for lack of competence. I don't think having a deep knowledge of computers is really a prerequisite for the question or for an answer. But it would be nice if your examples were connected to substantial mathematics.

I see that this question is subjective (but not argumentative in intent), so if you wish to close it on those grounds, that's fine.

Added, 11 December: Being a faulty human, I had an inexplicable attachment to the past tense. But, being weak-willed on top of it all, I am bowing to peer pressure and changing the title.

I realize the title is a bit of a joke. But perhaps there is something to it. I have wondered, could an ordinary human possibly produce so many thousands of pages of output?
– Donu Arapura, Dec 11 '10 at 14:37

7 Answers

A simplicial set is surely an idea which would be more natural to a computer. Breaking a shape up into simplices is still something a human would do, because simplices are contractible geometric objects whose gluings one can explicitly describe. But to pass from this to finite strings with face and degeneracy maps, and then to base your theory on that, is pure computer-thought... and, like any good computer idea, extremely pretty.
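To make the "finite strings with face and degeneracy maps" picture concrete, here is a minimal sketch (my own illustration; the function names are mine, not standard library calls). A simplex is just a tuple of letters, the face map $d_i$ deletes the $i$-th letter, the degeneracy $s_i$ repeats it, and the simplicial identities can then be checked purely mechanically, which is exactly the "computer-thought" flavour of the definition:

```python
def face(x, i):
    """d_i: delete the i-th letter of the simplex x (a tuple)."""
    return x[:i] + x[i + 1:]

def degeneracy(x, i):
    """s_i: repeat the i-th letter of the simplex x."""
    return x[:i + 1] + (x[i],) + x[i + 1:]

# A degenerate 3-simplex in the nerve of the poset 0 < 1 < 2 < 3.
x = (0, 1, 1, 3)
n = len(x) - 1  # simplicial dimension

# d_i d_j = d_{j-1} d_i for i < j
for j in range(n + 1):
    for i in range(j):
        assert face(face(x, j), i) == face(face(x, i), j - 1)

# s_i s_j = s_{j+1} s_i for i <= j
for j in range(n + 1):
    for i in range(j + 1):
        assert degeneracy(degeneracy(x, j), i) == degeneracy(degeneracy(x, i), j + 1)

print("simplicial identities verified for", x)
```

Nothing here needs geometry: the whole structure is string manipulation plus a finite list of identities, which a machine can verify by exhaustion.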

The insight behind simplicial sets is, I think, a bit deeper than that. The motivation you've given is for simplicial complexes. It's completely non-obvious from a homotopy perspective that simplicial sets should be able to model spaces so well up to homotopy.
– Harry Gindi, Dec 10 '10 at 3:55


As I understand it, a simplicial complex in fact is, in some sense, how objects are encoded in computer-aided design. Furthermore, it might be argued that it is simplicial complexes that are fundamental, and the move to simplicial sets might have been made by any old computer, once it was required to consider morphisms. The proof that this is a good model for spaces is, I agree, far more involved.
– Minhyong Kim, Dec 10 '10 at 4:59


@Harry - That's what I meant! Simplicial complexes are still a human idea... but simplicial sets are the computer idea, and are extremely pretty. @Minhyong - I would argue that simplicial sets go far beyond simplicial complexes (despite the existence of geometric realization), and are therefore "more fundamental". Braids form a simplicial set (face map = deleting a strand, degeneracy = cabling), but I have no idea how they might form a meaningful simplicial complex.
– Daniel Moskovich, Dec 10 '10 at 12:55

I don't understand the computer/human distinction here, or why simplicial sets are natural to computers. Anyway, dear Daniel, who are the computeroids most associated with the simplicial set idea through the ages?
– Gil Kalai, Dec 10 '10 at 21:02

I think the human/computer dichotomy you set up should be extended to a human/mathematician/computer trichotomy, just because a substantial portion of "mathematical maturity" is about learning to think like a computer, in your sense.

Anyway, I've just put that in place to try and shore up my example. It seems that humans read "let's say we have X and Y..." and automatically take the extra step of assuming X and Y are unequal. Computers wouldn't bother to take this extra step. Mathematicians, or at least I myself, split into the $X = Y$ and $X \neq Y$ cases but try to obviate that split when writing down a proof.

Conventional computers follow a program written by a human. Daniel Moskovich's answer about simplicial sets, for example, strikes me as the sort of thing a human (or a computer scientist) would think of precisely when trying to program a computer.

Formalisms like these are things that we humans think of when programming a computer. Hence we have a tendency to think of them as "more mechanical", "more like a computer", etc.; but I think it's a mistake to conclude that this is something a computer "would come up with" on its own. Really it's us humans who come up with them, just when we are thinking in terms of computing.

There are computers which can be thought of as actually "thinking" in a way similar to humans (as opposed to just following a program), e.g. IBM's Watson computer. They need some large data set to learn from, though (just like we do), and if this large data set is all of the mathematics created by humans, then I think the mathematics produced by the computer would look a lot like things "a human would think of"!

I suppose asymptotics for certain functions (e.g., Prime Number Theorem), or any sort of conjecture based on large empirical evidence, would count, but that's probably not what you mean.
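To make that empirical flavour concrete, here is a small sketch (my own; the helper name is mine) comparing the prime-counting function $\pi(n)$ with the Prime Number Theorem's estimate $n/\ln n$:

```python
import bisect
import math

def primes_up_to(n):
    """Sieve of Eratosthenes: return all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"  # 0 and 1 are not prime
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            # Cross off multiples of p starting at p*p.
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i in range(n + 1) if sieve[i]]

primes = primes_up_to(10**6)
for n in (10**3, 10**4, 10**5, 10**6):
    pi_n = bisect.bisect_right(primes, n)  # pi(n): number of primes <= n
    print(n, pi_n, round(pi_n / (n / math.log(n)), 3))
```

The counts come out to the classical values 168, 1229, 9592, 78498, and the ratio $\pi(n)/(n/\ln n)$ drifts slowly down toward 1; spotting that drift in a table of data is exactly the kind of pattern-detection a machine is suited to.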

Perhaps more interesting is the following. In high school/college, I was briefly interested in automated theorem proving and read about this (I don't remember the source and may get some details wrong; perhaps someone can help). Around the '60s or '70s, someone wrote an AI program that used numerical evidence to have a computer "deduce" many theorems/conjectures in number theory. They showed its output to Knuth, and he marked the ones he thought were mathematically interesting. At least one thing that stood out was several interesting "results" on highly composite numbers, which I think Ramanujan may have studied as well.
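I can't vouch for what that original program actually did, but a brute-force reconstruction of this sort of experiment is easy to sketch (all names here are mine): record every $n$ whose divisor count $d(n)$ beats that of every smaller number. These record-setters are exactly the highly composite numbers:

```python
def divisor_counts(limit):
    """d(n) for 0 <= n <= limit, by tallying each divisor's multiples."""
    counts = [0] * (limit + 1)
    for d in range(1, limit + 1):
        for multiple in range(d, limit + 1, d):
            counts[multiple] += 1
    return counts

def highly_composite(limit):
    """n is highly composite if d(n) exceeds d(m) for every m < n."""
    counts = divisor_counts(limit)
    records, best = [], 0
    for n in range(1, limit + 1):
        if counts[n] > best:
            best = counts[n]
            records.append(n)
    return records

print(highly_composite(10000))
# Starts 1, 2, 4, 6, 12, 24, 36, 48, 60, 120, ...
```

Staring at the output, a program (or a Ramanujan) can start flagging regularities, for instance that every record after 1 is even and that the records are built out of small primes, which is the kind of empirically generated "result" the anecdote describes.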

This paper on homotopy and set theory seems to take this question seriously: if you restrict yourself to posetal categories and try to do model categories in a brute-force, naive way, you arrive at definitions of some set-theoretic invariants... So maybe we can say that Shelah is a computer. ;)

I think that many conjectures from number theory (which I think count as insights) might be more obvious to a computer than to a human being, since a computer would have access to a huge amount of empirical data from which to discover patterns and to estimate how plausible a statement is. This is a very effective way to discover theorems in number theory. The method of discovery is along the lines described by Polya in his books on plausible reasoning, in relation to Euler's discoveries.

To obtain the same level of confidence, humans need insight and proof, which in number theory are often really hard to obtain.

There have been some mathematicians, like Ramanujan, Euler, and Gauss, who had similar abilities, but this is quite rare.

Also, mathematical results that are accessible to humans must be true for a reason, i.e. there must be a reasonably short deductive route to them from known theorems. Work of Chaitin and others suggests that some theorems are not true for any reason, i.e. they are not amenable to any deduction from a set of axioms of less complexity than themselves. On the borderline there must be profound mathematical results that are close to being empirically true in that sense. You would imagine that computers might have a better chance of understanding and perceiving these results, since their massive processing power lets them reason from a much wider empirical vantage point.