"Everyone knows what a curve is, until he has studied enough mathematics to become confused through the countless number of possible exceptions."

Felix Klein

What notions are used but not clearly defined in modern mathematics?

To clarify further what the purpose of the question is, here is another quote by M. Emerton:

"It is worth drawing out the idea that even in contemporary mathematics there are notions which (so far) escape rigorous definition, but which nevertheless have substantial mathematical content, and allow people to make computations and draw conclusions that are otherwise out of reach."

In mathematics, by mathematicians. Is everything clear? I suppose mathematics is still alive nowadays...
–
kakazFeb 25 '11 at 21:08


"Everything is well defined in modern mathematics" - We really don't know that for sure yet (i.e. the consistency of ZFC)... "Mathematics is more about correctness than about truth." - I would argue that it is more about relative truth. Being about correctness involves too much self-purpose...
–
efqFeb 26 '11 at 1:55

kakaz: what you're "playing" with isn't mathematics then, for mathematics has these notions well-defined. It's like asking what the smallest positive real number is, there simply isn't one. Similarly, it's not useful or horizon-broadening to talk about things which several generations of mathematicians have already thought carefully about and have discarded precisely because they are NOT interesting--being ill-defined and therefore impossible to deal with in a mathematical fashion. (contd.)
–
Adam HughesFeb 27 '11 at 20:51


Closing this thread seems more like punishing someone for being an amateur rather than enhancing the quality of the site. Now and throughout history, I believe, a large percentage of the most interesting mathematics revolves precisely around those notions that are used but not (yet) clearly defined. A big list of such subjects seems extremely valuable to me. Vote to reopen.
–
Louigi Addario-BerryMar 1 '11 at 16:01

29 Answers

One of the most important contemporary mathematical concepts without a rigorous definition is
quantum field theory (and related concepts, such as Feynman path integrals).

Note: As noted in the comments below, there is a branch of pure mathematics --- constructive field theory --- devoted to making rigorous sense of this problem via analytic methods. I should add that there is also a lot of research devoted to understanding various aspects of field theory via (higher) categorical points of view. But (as far as I understand), there remain important and interesting computations that physicists can make using quantum field theoretic methods which can't yet be put on a rigorous mathematical basis.

It is not fair to say there is no rigorous definition of a Feynman path integral. It is a probability measure on the space of tempered distributions. The difficulty is in constructing nontrivial examples. This is what constructive field theory is about.
–
Abdelmalek AbdesselamFeb 26 '11 at 18:59

Surprised nobody mentioned fractal yet. (Chaos has been mentioned but the connection is tenuous.)

No satisfactory definition of fractal exists. Mandelbrot tentatively defined a fractal as a set whose Hausdorff dimension is strictly larger than its topological dimension. But this leaves out many sets that most people agree are fractals, and it's hard to extend to other objects (like measures) that one also wants to consider as fractals.
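A standard illustration, added here for concreteness: the middle-thirds Cantor set $C$ is a fractal in Mandelbrot's sense, since

```latex
\dim_H C \;=\; \frac{\log 2}{\log 3} \;\approx\; 0.6309 \;>\; 0 \;=\; \dim_T C,
```

where the Hausdorff dimension comes from the Moran equation $2\,(1/3)^s = 1$ for two pieces scaled by $1/3$, and the topological dimension is $0$ because $C$ is totally disconnected.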

Taylor defined a fractal as a set with coinciding Hausdorff and packing dimensions. His goal was to rule out overly irregular objects (for which different concepts of fractal dimension may differ), but according to his definition any smooth object is a fractal, while clearly fractal sets such as Bedford-McMullen carpets are left out.

In applied fields, a fractal is often defined as a set having some kind of similarity: small parts are similar to the whole set, perhaps in a statistical or approximate sense. While many fractals arising in practice do enjoy this feature, this is still a very vague definition.

Some authors consider any set or measure in Euclidean space to be a fractal, when the goal is to study properties typically associated with fractal sets, such as Hausdorff dimension.

At the end of the day, there is agreement that giving a universal definition of fractal is impossible, yet it is a useful concept to have around, and people know a fractal when they see it.

The notion of a self-similar structure in general is an 'intuitive' one. It is given various precise definitions in specific contexts, but the general idea is usually left to the imagination.
–
Colin ReidMar 2 '11 at 20:05

Could a natural definition of fractal be "not of integer dimension" (whatever definition of dimension you take)?
–
Andreas RüdingerSep 30 '11 at 20:26


@Andreas, there are sets of integer dimension which have all the hallmarks of fractality. For example, consider the four corner Cantor set, constructed by replacing the unit square $[0,1]^2$ by the four corner squares of side length $1/4$ and continuing inductively. This object is the epitome of a fractal (strictly self-similar, purely unrectifiable), and has Hausdorff and box counting dimension $1$.
–
Pablo ShmerkinSep 30 '11 at 23:55
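To make the example concrete, here is a small sketch (not from the thread) that generates the level-$n$ approximation of the four corner Cantor set, with each square replaced by its four corner sub-squares of relative side $1/4$, and checks that the box-counting estimate is exactly $1$ at every level:

```python
from math import log

def four_corner_boxes(level):
    """Lower-left corners (in units of 4**-level) of the squares in the
    level-`level` approximation of the four corner Cantor set: each square
    is replaced by its four corner sub-squares of relative side 1/4."""
    squares = [(0, 0)]
    for _ in range(level):
        squares = [(4 * x + dx, 4 * y + dy)
                   for (x, y) in squares
                   for dx in (0, 3) for dy in (0, 3)]
    return squares

# At scale eps = 4**-n the set meets exactly 4**n grid boxes, so the
# box-counting estimate log N / log(1/eps) equals 1 at every level.
for n in (1, 2, 3, 4):
    N = len(four_corner_boxes(n))
    print(n, N, log(N) / log(4 ** n))  # N = 4**n, estimate = 1.0
```

The set is of dimension $1$ yet purely unrectifiable, which is exactly why it defeats any "non-integer dimension" definition of fractal.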

@David: You're missing the point. The "field with one element" is neither a field, nor does it (probably) have one element. It's a figure of speech to describe a more or less hypothetical object that should behave in some respects like a field with one element would if it existed.
–
Johannes HahnFeb 27 '11 at 23:29


@David: Wherever one sees "$\mathbb{F}_1$", one should think "universal base", not "field with one element". To elaborate, the idea of the "field with one element" is an algebro-geometric concept that works roughly as follows: The universal base in commutative algebra and algebraic geometry is $\operatorname{Spec}(\mathbb{Z})$. However, this is in many ways annoying, since we usually expect in geometry that the universal base be a point. At a technical level in alg. geom., there are a number of reasons why we would like our base to be a field, and so to prove things about (continued)
–
Harry GindiFeb 28 '11 at 7:36


the category of all algebro-geometric objects (which necessarily is the category of appropriate objects over the base $\operatorname{Spec}(\mathbb{Z})$), we have to attack things indirectly by proving things in all characteristics (when such proofs are even valid!). The idea of $\mathbb{F}_1$ is to find an appropriate category of "generalized commutative rings" that has a deeper base than $\operatorname{Spec}(\mathbb{Z})$ and generalizes algebraic geometry in the classical case. One algebraic approach I've seen is Jim Borger's approach via $\lambda$-rings, which is on the arXiv (contd)
–
Harry GindiFeb 28 '11 at 7:42


Another approach, using theory of $E_\infty$-ring spectra, is originally due, I think, to Jack Morava. This approach, called Spectral or "Brave New" algebraic geometry, is founded on the framework developed in the recent work of Toen-Vezzosi and Lurie. If you believe that the theory of $E_\infty$-ring spectra really generalizes the theory of commutative rings, then this seems natural, but conversely, the fact that the universal base $E_\infty$-ring spectrum indeed has all of the expected properties of $\mathbb{F}_1$ seems like convincing evidence that this is an (continued)
–
Harry GindiFeb 28 '11 at 7:55


extremely natural extension of the classical theory. The major drawback of this theory is that the universal base $E_\infty$-ring spectrum is an object that exists only up to homotopy (it is a "virtual" field), and further, it happens to be a notoriously difficult object to compute with (for instance, computing its homotopy groups is by definition equivalent to computing the stable homotopy groups of spheres, which have only been computed to a stunningly low finite degree (somewhere between 30 and 40 are known, depending on who you ask)).
–
Harry GindiFeb 28 '11 at 8:01

1) The notion of explicit construction. Seeking explicit constructions to replace non-constructive existence proofs is an old endeavor. Computational complexity offers, in some cases, formal definitions (constructions that can be done in P or in polylog space). But these definitions are slightly controversial. In any case, people looked for explicit constructions before any explicit definition of the term "explicit construction" was known.

2) The notion of effective bounds/proofs. There are many important problems about replacing a proof giving non-effective bounds with a proof giving effective bounds. Usually I can understand a specific such problem, but the general notion of effectiveness is not clear to me. (A famous example: effective proofs for the Thue-Siegel-Roth theorem.)

3) Elementary proofs. I remember that finding an elementary proof of the prime number theorem was a major goal. I was told what this means many times, and on a few of those occasions I even understood. But the notion of an "elementary" proof in analytic number theory has remained quite vague to me.

It's good that these are vague, because it guarantees that we can always look for yet more explicit construction, yet more effective bounds and yet more elementary proofs!
–
darij grinbergFeb 26 '11 at 17:15


I always thought that explicit constructions had more to do with decidability than with computational complexity. Many constructions have branchings (if $a \neq 0$, divide by $a$, otherwise do something else...) that are not at all helpful if you cannot decide which branch you should follow.
–
Thierry ZellFeb 28 '11 at 0:12

I'm not sure how well this fits the bill, but in algebraic geometry and number theory, the notion of mixed motives is still undefined, although people have a fairly good idea of what properties they want the category of mixed motives to have.

Indeed, they are so undefined that even the wiki link pops up an error ;)
–
efqFeb 26 '11 at 13:13

Thanks for alerting me to the broken link. I seem to never get the hang of these links.
–
Alex B.Feb 26 '11 at 14:25


One should mention that, thanks to Beilinson, Voevodsky, Déglise-Cisinski and others, we have a good candidate for the derived category of mixed motives over most bases. The definition of an abelian category of mixed motives still relies on Grothendieck's Standard Conjectures on algebraic cycles.
–
AFKFeb 26 '11 at 17:56

There are several examples in set theory; the three I mention are related so I will include them in a single answer rather than three.

1) Large cardinal notion.

I have seen in print many times that there is no precise definition of what a large cardinal is, but I must disagree, since "weakly inaccessible cardinal" covers it. Of course, if you retreat to set theories without choice then there may be some room for discussion, but this is a technical point.

People seem to mean something different when they say that large cardinal is not defined. It looks to me like they mean that the word should be used in reference to significant sign posts within the large cardinal hierarchy (such as "weakly compact", "strong", but not "the third Mahlo above the second measurable") and, since "significant" is not well defined, then...

However, it seems clear that nowadays we are more interested in large cardinal notions rather than the large cardinals per se. To illustrate the difference, "$0^\sharp$ exists" is obviously a large cardinal notion, but I do not find it reasonable to call it (or $0^\sharp$) a large cardinal.

And "large cardinal notion" is not yet a precisely defined concept. A very interesting approximation to such a notion is based on the hierarchy of inner model operators studied by Steel and others. But their meaningful study requires somewhat strong background assumptions, and so many of the large cardinal notions at the level of $L$ or "just beyond" do not seem to be properly covered under this umbrella.

2) The core model.

This was mentioned by Henry Towsner. I do not think it is accurate that we were proving results about it without a precise definition. What happens is that all the results about it have additional assumptions beyond ZFC, and we would like to be able to remove them. More precisely, we cannot show its existence without additional assumptions, and these additional assumptions are also needed to establish its basic properties.

The core model is intended to capture the "right analogue" of $L$ based on the background universe. If the universe does not have much large cardinal structure, this analogue is $L$ itself. If there are no measurable cardinals in inner models, the analogue is the Dodd-Jensen core model, and the name comes from their work. Etc. In each situation we know what broad features we expect the core model to have (this is the "not clearly defined part"). Once in each situation we formalize these broad features, we can proceed, and part of the problem is in showing its existence.

Currently, we can only prove it under appropriate "anti-large cardinal assumptions", saying that the universe is not too large in some sense. One of the issues is that we want the core model to be a fine structural model, but we do not have a good inner model theory without anti-large cardinal assumptions. Another more serious issue is that as we climb through the large cardinal hierarchy, the properties we can expect of the core model become weaker. For example, if $0^\sharp$ does not exist, we have a full covering lemma. But this is not possible once we have measurables, due to Prikry forcing. We still have a version of it (weak covering), and this is one of the essential properties we expect.

(There are additional technical issues related to correctness.)

But it is fair to expect that as we continue developing inner model theory, we will find that our current notions are too restrictive. As a technical punchline, currently the most promising approach to a general notion seems to be in terms of Sargsyan's hod-models. But it looks to me this will only take us as far as determinacy or Universal Baireness can go.

3) Definable sets of reals.

We tend to say that descriptive set theory studies definable sets of reals as opposed to arbitrary such sets. This is a useful but not precise heuristic. It can be formalized in wildly different ways, depending on context. A first approximation to what we mean is "Borel", but this is too restrictive. Sometimes we use definability in terms of the projective hierarchy. Other times we say that a definable set is one that belongs to a natural model of ${\sf AD}^{+}$. But it is fair to say that these are just approximations to what we would really like to say.

Andres, regarding your proposal that weakly inaccessible cardinals cover all large cardinal notions, how about the notion by which $\theta$ is fairly big if $V_\theta \models \mathrm{ZFC}$? The least such cardinal is not weakly inaccessible, since it has cofinality $\omega$, but I would still regard this as a large cardinal notion.
–
Joel David HamkinsFeb 26 '11 at 21:11

Yes, this is in accordance with what I meant: It makes sense to think of this as a "large cardinal notion" (just as with most rungs of the ladder that is the consistency strength hierarchy) but I wouldn't call it a "large cardinal".
–
Andres CaicedoFeb 26 '11 at 21:20

The notion of canonicity (with respect to maps and objects) has thus far evaded attempts by mathematicians to formalize it. If I remember correctly, Bourbaki tried to give it a definition based on some ideas of Chevalley, but, at least to my knowledge, it was deleted from later drafts of the Elements because it was not a particularly useful notion (or perhaps it just didn't work out; there was a thread on MO asked by Kevin Buzzard about this particular section of Bourbaki, and maybe you can find more details there). Jim Dolan more recently tried to give a definition of a canonical transformation between functors, but his notion is essentially that of a transformation that is natural when restricted to the core groupoid. However, this doesn't really capture all of the cases that we want, and I don't know of any serious attempt to make use of the notion.

What are the cases you want that are not included in "natural transformation"?
–
darij grinbergFeb 26 '11 at 10:09


@Darij: I'm not an expert on Dolan's definition, but I think that the fine people at the nLab have given a pretty coherent idea of what such things could be used for (although, even they are a bit skeptical that it's a useful notion): ncatlab.org/nlab/show/canonical+morphism
–
Harry GindiFeb 26 '11 at 10:16


"Made without choices" is the usual "definition". The problem is that in many cases (maybe in all cases?) you just don't notice that you have made arbitrary choices along the way. Of course you can always say it's the canonical object given these choices, but if these choices are not "natural" (another one for the list, btw) then the notion will not be useful, because there would be no intrinsic reason for other objects to have this set of properties.
–
Adam GalOct 7 '11 at 13:37

So-called stiff ODEs might qualify. In the literature one finds plenty of different attempts, some more precise than others, to define the notion of a stiff initial value problem for an ODE. They all try to capture the phenomenon that when numerically integrating certain IVPs, explicit schemes are forced into rapid step-size decrease, whereas some implicit schemes do very well without slowing down significantly. In fact, some authors use this as the definition of a stiff IVP.
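A minimal numerical sketch of the phenomenon (the model problem below is chosen for illustration, not taken from the answer): for $y' = -50\,(y - \cos t)$ the solution quickly relaxes onto the slow manifold $y \approx \cos t$, yet explicit Euler with step $h = 0.1$ violates the stability restriction $h < 2/50$ and blows up, while backward Euler with the same step tracks the smooth solution:

```python
import math

lam, h, T = 50.0, 0.1, 5.0   # h = 0.1 deliberately violates h < 2/lam = 0.04
steps = int(T / h)

# Explicit Euler: y_{k+1} = y_k + h * (-lam) * (y_k - cos t_k)
y_exp = 0.0
for k in range(steps):
    y_exp += h * (-lam) * (y_exp - math.cos(k * h))

# Implicit (backward) Euler: y_{k+1} = y_k + h * (-lam) * (y_{k+1} - cos t_{k+1}),
# solved in closed form since the problem is linear in y.
y_imp = 0.0
for k in range(steps):
    y_imp = (y_imp + h * lam * math.cos((k + 1) * h)) / (1.0 + h * lam)

print(abs(y_exp))                # enormous: each step amplifies by |1 - h*lam| = 4
print(abs(y_imp - math.cos(T)))  # small: backward Euler tracks y ~ cos t
```

The price of the implicit scheme is solving an equation at each step (here trivial, in general a nonlinear solve), which is exactly the trade-off stiffness definitions try to capture.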

Some people say an equation is "stiff" if explicit methods require a very small step size to work, but the solution is still smooth. I think it is also "defined" as having components that vary on very different time scales.
–
Darsh RanjanFeb 26 '11 at 18:04


I use the working definition "stiff problems are problems that explicit Runge-Kutta cannot efficiently solve"...
–
J. M.May 8 '11 at 18:53

For a number of years, different authors were using different definitions of "chaos", but I think that has settled down now.

"Quantum group" may be a good answer. If Wikipedia can be trusted on this issue, "In mathematics and theoretical physics, the term quantum group denotes various kinds of noncommutative algebra with additional structure. In general, a quantum group is some kind of Hopf algebra. There is no single, all-encompassing definition, but instead a family of broadly similar objects."

TBH defining what a "quantum group" is doesn't seem to me like the most pressing problem in quantum group theory. "Classical groups" were long undefined, yet not hindering their exploration. As long as we know how every single quantum group we need is defined...
–
darij grinbergFeb 26 '11 at 10:16


@Darij: Well, it would be nice to have an axiomatization that allowed us to prove things simultaneously about say, quantized enveloping algebras and quantized coordinate rings of algebraic groups. But I agree that it doesn't seem pressing...
–
SheikraisinrollbankMar 3 '11 at 2:38

In Leo Corry's book Modern Algebra and the Rise of Mathematical Structures, he chronicles how mathematicians have tried to give a formal definition of structure via lattice theory, Bourbaki's set-theoretic structures, and category theory. At least according to Corry, the concept is still elusive and not really captured by any of the attempts.

Technically, calculus generally uses limits instead of infinitesimals, and there are logical systems (e.g. nonstandard analysis) in which genuine infinitesimals are rigorously defined. However, people find infinitesimals easier for intuition even in the context of standard analysis. This type of infinitesimal reasoning then generally needs to be transformed into standard proofs.

For these to qualify, don't infinitesimals need to be "used but not clearly defined in modern mathematics"? I am not sure they satisfy both these conditions, unless by "used" you mean "as a pedagogical aid". (That's not to downplay Loeb measures etc etc but where they are used they are most assuredly "clearly defined".)
–
Yemon ChoiFeb 28 '11 at 6:35

Yemon, I agree with you completely. I said they are "almost" in this category. They are not usually used, and if so they are often defined rigorously. But there is still a category of intuitive, non-rigorous uses of the infinitesimal concept.
–
David HarrisFeb 28 '11 at 12:30

The set of equivalence classes of irreducible, smooth representations of a reductive $p$-adic group $G$ should be partitioned into finite subsets called $L$-packets. Each $L$-packet should correspond to a Langlands parameter, but since this correspondence remains conjectural, $L$-packets are not defined in general. In some important cases, one knows exactly what the $L$-packets are. For example, if $G$ is a general linear group, then the $L$-packets are singletons. For other groups, there are some properties that $L$-packets are believed to satisfy, but that's not a definition.

I believe though that the situation is in better shape now than it once was. The proof of the fundamental lemma should allow Arthur to complete his work on the stable trace formula for classical groups, and lead to a theory of local $A$-packets for classical groups. Since in the tempered case $A$-packets and $L$-packets coincide, and since the general description of $L$-packets reduces to the description of tempered $L$-packets for Levi's, my understanding is that this should also lead to a theory of local $L$-packets for classical groups.
–
EmertonFeb 27 '11 at 20:35

That a mathematical idea be "clearly defined" is itself an idea that perhaps could be more clearly defined ... one candidate for a more rigorous assertion is that a mathematical intuition be formally decidable. Moreover, widespread intuitions that are eventually proved to be decidable versus undecidable have an illustrious history in mathematics.

These reflections lead to the suggestion this community wiki's question would be better-posed mathematically (and might perhaps be more useful too) if it were amended to read:

"What intuitions are commonly embraced and/or have proved to be broadly useful, but nonetheless are formally undecidable, in modern mathematics?"

One specific example that comes to mind is Emanuele Viola's theorem, with its implication that the set of Turing machines {M} associated to P has no decidable runtime ordering. Viola's proof of undecidability was eye-opening to me, and it has filled the valuable role of leading me to wonder "What else is out there?"

To show the utility of these reflections, Section 1.5.2 of Sanjeev Arora and Boaz Barak's well-respected textbook Computational Complexity: A Modern Approach is titled "Criticisms of P and some efforts to address them". I have often wished that Arora and Barak had written more on this theme. With the help of Viola's theorem, this wish becomes specific and rigorous: a section titled "What properties of P are not decidable in modern mathematics?"

No doubt many more examples of "undecidable intuitions of modern mathematics" could be posted, and it would be great fun to read other people's examples. However, it seems inappropriate to amend the topic of a community wiki in such a fundamental respect, and so I am posting this amended question as a suggested general "answer" instead.

I was about to post this comment myself when I saw it here.
–
Michael HardyMar 3 '11 at 18:06

(I.e. I was about to post this answer.)
–
Michael HardyMar 3 '11 at 18:06


@John, could you post your rephrased question as a separate question? I think it would be very worthwhile to see what responses the MO community has to your rephrased question.
–
Colin TanMar 11 '11 at 5:47

@Colin, I am preparing to do precisely what you ask ... it turns out that these questions are to some extent addressed in Juris Hartmanis' article Feasible Computations and Provable Complexity Properties (1978) ... and it is taking a while to decide how to phrase this question in a well-posed yet stimulating way.
–
John SidlesMay 16 '11 at 14:26

Left/right derived functors. If $F$ is an additive functor from a category $A$ to another category $B$, then the left/right derived functors of $F$ go from $A$ to... where? Not to $B$ certainly, because this would require global choice on $A$ or break canonicity.

There seem to be solutions nowadays, with the notions of derived categories and anafunctors. Unfortunately, there seems to be no introductory text yet which would systematically develop homological algebra in a clean way, without cheating and speculating over one's head. I am more than glad to be proven wrong...

After all in the applications the axiom of global choice can be avoided. This is some theorem in set theory which others may explain better than me.
–
Martin BrandenburgFeb 26 '11 at 11:16

Martin, do you mean the result that adding global choice is a conservative extension of ZFC?
–
arsmathFeb 26 '11 at 11:27

I think that Grothendieck's $\delta$-functors and their relatives, triangulated functors, give a language that allow us to speak of a derived functor as something defined up to 2-natural equivalence.
–
Leo AlonsoFeb 26 '11 at 12:28


@Darij: You're in luck. The book Homotopy limit functors on model and homotopical categories by Dwyer-Hirschhorn-Kan-Smith gives the precisely correct abstract definition of a derived functor. One actually ends up with a theory very close to Lurie's $\infty$-categories but with less of the simplicial formalism. It's actually at the point that we can declare "case closed" on this particular question.
–
Harry GindiFeb 26 '11 at 14:53


Here's a clear and considerably shorter account by Kahn and Maltsiniotis that is at the same time more general than Dwyer-Hirschhorn-Kan-Smith. It clarifies the interrelations between various approaches and also has the advantage that it isn't written with a homotopic bias: math.jussieu.fr/~maltsin/ps/bkgmdef.pdf
–
Theo BuehlerFeb 26 '11 at 18:11

'Applied Mathematics' is a much-used term in modern mathematics, but I've yet to find a universally-agreed upon definition. Given its use as a major category ('pure' vs 'applied') and repository of sundry generalizations ('non-rigorous','relevant', 'not deep', 'critical to science', etc.), surely a precise definition is in order.

In the MSC, there is only one code with this phrase (00A69). Based on this, maybe 'Applied Mathematics' is a field of inquiry which is not important.

I too find the term "applied mathematics" confusing. I think part of the confusion I have with it, is that by using the term there is the direct implication that all other branches of mathematics are non-applied, non-relevant. And that's certainly not the case -- a lot of non-"Applied Mathematics" has quite a few real-world applications. So I much prefer to just call people simply mathematicians and describe precisely what they do, rather than support the pure/applied division.
–
Ryan BudneyJun 18 '11 at 20:04

Agreed. It is more illuminating to describe the specifics of a mathematician's pursuits than the broad labels of 'applied/pure'.
–
Nilima NigamJun 18 '11 at 21:24

The notion of a solution concept in game theory. Although the most famous example of such---Nash equilibrium---is rigourously defined, as are several others (correlated equilibrium, rationalizability, sequential equilibrium, etc.), there is no satisfactory general definition of the type of object of which these are tokens. Indeed, the purported definition that appears in this Wikipedia article is, in a sense, as far from informative as it could be without incurring a type mismatch.

In my view it is the most used and probably important notion which is not clearly defined. But of course I have only very limited view ...
–
kakazFeb 26 '11 at 11:34


My view of this is the opposite: the beauty of the Church-Turing thesis is that it does give a precisely defined and widely agreed upon meaning to "effectively calculable". From a mathematical perspective, the thesis is itself a definition, which takes a somewhat vague notion (anything that can be computed by any systematic method or algorithm) and equates to it a rigorous mathematical concept (recursive functions).
–
Henry CohnFeb 26 '11 at 15:01

I do think the busy beaver function isn't effectively calculable in the intuitive sense, as well as the technical sense. It's a well-defined function, but we have no algorithm for actually computing it. We do have an algorithm for proving lower bounds that will eventually converge to the true answer for any given case, but the convergence is incredibly slow (there is no computable upper bound on the time to convergence) and there is no way of knowing when convergence has happened. Being able to recognize when you have arrived at the answer seems like an essential property of algorithms.
–
Henry CohnFeb 26 '11 at 17:10


kakaz: There is no algorithm for computing the busy beaver function, you are misunderstanding a side remark in that wikipedia link. You are also misunderstanding the Church-Turing thesis. When we use notions such as "effectively computable" we mean the formal notions, not some vague intuitions. As Henry is pointing out, the Church thesis, which is not mathematics, can be seen as a definition.
–
Andres CaicedoFeb 26 '11 at 17:34

The logic of ordinary language (which comes from Aristotle) that we use when arguing about a proof, especially the rule of "reductio ad absurdum", which rests on an implicit act of faith in the consistency of what we are speaking about. Also the use of a relation (such as $\in$) before relations are defined in the context of set theory, and the use of naive manipulation of finite quantities before building set theory or some other well-formalized mathematical theory.

About less professional mathematics:

the concept and existence of points, and of spaces as sets of points;

the concept of the set of natural numbers $\mathbb{N}$ as an "actual infinity" (given all at once) rather than as a "potential infinity" (something one can always add to); in fact the passage to actual infinity requires the Peano axioms.

The different domains of mathematics (analysis, geometry, algebra, probability, combinatorics, etc.) are not completely clearly defined and overlap. This is of course not a problem when doing mathematics, but it sometimes is when trying to categorize some work.

The problem with dimension is not the lack of a clear definition, but the opposite: there are too many.
–
Emil JeřábekSep 16 '11 at 15:12

Couldn't one ask: What do (types of) structures have in common, for which one defines "dimension"?
–
Hans StrickerSep 16 '11 at 15:17

@Emil: Isn't this one and the same problem? To be a clear definition is to be unique (up to equivalence), so "lack of a clear definition" means "no or more than one non-equivalent definition".
–
Hans StrickerSep 16 '11 at 15:29

@Emil: I don't believe that the word "dimension" was chosen accidentally to name completely unrelated notions. So it remains unclear what is common to these assumedly not unrelated notions.
–
Hans StrickerSep 16 '11 at 16:51

In analysis, the concept of a limit at infinity vs. a limit at a real number $r$.

Typically, there is a whole list of definitions of various limits $\lim_{x \rightarrow a} f(x) = b$, depending on whether $a$ and $b$ are ordinary reals or $\pm \infty$. You may have 9 separate definitions of the limit, one for each case. This situation repeats itself any time a limit is used implicitly, for example if an integral converges to a real or to $\pm \infty$, a series converges, and so on.

Everyone knows that these definitions are really the same, but it seems more cumbersome to have a single unified definition than to have separate definitions that are, informally, the same concept. It is this covert "intuitive sense" in which all the definitions are the same that is not clearly defined.
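For what it's worth, the standard way to unify the cases (sketched here for illustration) is via neighborhoods in the extended real line $\overline{\mathbb{R}} = \mathbb{R} \cup \{\pm\infty\}$:

```latex
\lim_{x \to a} f(x) = b
\quad\Longleftrightarrow\quad
\forall V \in \mathcal{N}(b)\;\; \exists U \in \mathcal{N}(a):\;
f\bigl((U \setminus \{a\}) \cap \operatorname{dom} f\bigr) \subseteq V,
```

where $\mathcal{N}(r)$ is the usual neighborhood filter for $r \in \mathbb{R}$, while $\mathcal{N}(+\infty)$ consists of the sets $(M, +\infty]$ and $\mathcal{N}(-\infty)$ of the sets $[-\infty, M)$ for $M \in \mathbb{R}$; all nine cases are instances of this single statement.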

I don't think this is in the spirit of the question. If one really insists, it is possible to provide a compact definition of those "nine different limits" (consider extended reals, and so on...), and there is absolutely nothing unclear about it.
–
Mariano Suárez-Alvarez♦Sep 30 '11 at 2:18

Perhaps this answer is being voted down because $\aleph_0$ is a cardinal number, which is a well-defined notion. But the issue of which algebraic objects constitute "number systems" is kinda fuzzy, yes.
–
Mark GrantFeb 26 '11 at 7:40


Nobody really uses the notion "number" in modern mathematics without specifying what kind of number is meant.
–
darij grinbergFeb 26 '11 at 10:08


@darij: doesn't that support what Frank is saying?
–
Carl MummertFeb 27 '11 at 1:32


The question asks for things that are both poorly-defined and used; darij is pointing out that it is not used.
–
JBLFeb 28 '11 at 3:36

Maybe the situation in matroid theory is interesting here: there are several strict axiomatization schemes, but their equivalence is not easy to prove. Probably there should be some generalization which would tie these different approaches into one more or less obvious notion. There is even terminology for this phenomenon, due to G.-C. Rota: cryptomorphism (http://en.wikipedia.org/wiki/Cryptomorphism).
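As a toy illustration of cryptomorphism (added here, not from the answer), consider the uniform matroid $U_{2,4}$ on four elements: starting from the independent-set axiomatization, one can derive the rank function and the circuits, and check mechanically that each presentation determines the others:

```python
from itertools import combinations

E = frozenset(range(4))

def subsets(S):
    return [frozenset(c) for r in range(len(S) + 1)
            for c in combinations(sorted(S), r)]

# Axiomatization 1: independent sets of the uniform matroid U(2,4)
# (every subset with at most 2 elements is independent).
indep = {A for A in subsets(E) if len(A) <= 2}

# Derived rank function: r(A) = size of a largest independent subset of A.
def rank(A):
    return max(len(I) for I in subsets(A) if I in indep)

# Axiomatization 2 (rank axioms): verify submodularity of r.
for A in subsets(E):
    for B in subsets(E):
        assert rank(A | B) + rank(A & B) <= rank(A) + rank(B)

# Cryptomorphism: independence is recoverable from the rank function alone.
assert indep == {A for A in subsets(E) if rank(A) == len(A)}

# Axiomatization 3: circuits = minimal dependent sets (here, all 3-subsets).
dep = [A for A in subsets(E) if A not in indep]
circuits = {A for A in dep if not any(B < A for B in dep)}
assert circuits == {frozenset(c) for c in combinations(E, 3)}
```

The translations are routine for this tiny example; Rota's point is that no one of the axiom systems is evidently "the" definition of a matroid.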

the matroid itself - which has several strict and not easily equivalent definitions. It is like a finger pointing at the moon, and we still look at the finger... But OK - I am probably terribly wrong.
–
kakazFeb 26 '11 at 16:41


This is a bad example. The notion is precise, it does not matter whether some equivalence is trivial or not.
–
Andres CaicedoFeb 26 '11 at 17:35

How about the natural numbers $\mathbf N$ and the real numbers $\mathbf R$? If they were both clearly defined, then (for example) mathematicians would agree that the continuum hypothesis has a truth value, even if that value is not known. But there's not such agreement, and some will dispute that CH even has a meaning, much less a truth value. Even keeping it just to $\mathbf N$, all "definitions" that I know of are circular (e.g. the Peano axioms in second order logic just kick the unclarity up to the level of the predicates that the induction axiom quantifies over). The Hilbert $\omega$-rule is similarly self-referential. Yet we (mostly) agree that all arithmetic formulas (even, say, $\Pi^0_{100}$ formulas) do have truth values, that there's a (not effectively describable) first-order theory of "true arithmetic", etc. It just comes back to "the naturals, I mean the ordinary naturals, you know, 0, 1, 2, 3..." which comes across as a little bit faith-based ;-).