
Because errors in computation are less likely to be "caught" by someone who doesn't understand the underlying math.

I'm tempted to say that this might have more to do with prior experience than with understanding the math. The error would be caught because the result "doesn't make sense", in that it violates prior experience with performing that operation on similar numbers. This would be noticed regardless of whether you actually knew how to perform the calculation. Actually finding where the error occurred would require understanding, of course, but noticing that the answer "doesn't make sense" may rest on entirely different grounds.

To give an example, consider a physics problem in which you calculate the radius of a star and your result is 23 m. I don't have to understand a thing about how that number was calculated to know that it doesn't make sense.

A good case in point is fast-food restaurant workers no longer being able to mentally compute making change for a transaction.

Historically I doubt they did the calculation mentally. They used a counting algorithm where they count back the change. This doesn't require much understanding. They don't even have to see it as addition (though it is; that understanding isn't required to perform the algorithm). If the total was, say, $4.76 and you give me a $20, I count up: four pennies to $4.80, two dimes to $5.00, a five to $10.00, and a ten to $20.00, handing over $15.24 in change.

This procedure can be followed by anyone who can count by various multiples and knows the corresponding multiples of each money denomination.
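The counting-back procedure can be sketched in code. This is a minimal illustration, assuming US denominations; the denomination list and function name are my own, not anything from the discussion.

```python
# A minimal sketch of the change-counting idea, assuming US
# denominations (values in cents). A greedy pass hands back the same
# bills and coins a cashier would reach by counting up, for these
# canonical denominations.

DENOMS = [2000, 1000, 500, 100, 25, 10, 5, 1]  # $20 bill down to 1 cent

def make_change(total_cents, tendered_cents):
    """Return the bills/coins (in cents) owed to the customer."""
    owed = tendered_cents - total_cents
    change = []
    for d in DENOMS:
        while owed >= d:
            change.append(d)
            owed -= d
    return change

# $4.76 total, $20 tendered: the change adds up to $15.24.
coins = make_change(476, 2000)
print(sum(coins))  # 1524
```

Note that greedy counting happens to give the same result as counting up for these denominations; an arbitrary denomination set would need a different algorithm.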

If they are not coached on at least the proper way to punch in a transaction, then by doing it improperly they have no idea that they might be giving the wrong change, and they frequently do, lacking the critical-analysis skills to "flag" the idea (for instance) that $16.00 in change ought not to be due to a customer who gave them a twenty on a $16.00 meal.

I would think in many cases this is a matter of negligence, inattention, or perhaps over-trust in the computer (the answer doesn't make sense, but the computer told me this was the answer and it knows better than I do). My prior experience (as a cashier) would tell me that whenever the total comes to around $16 and the customer gives me a $20, the change should be in the $4 range. This doesn't mean I have to know a thing about adding and subtracting.

But my interpretation could be wrong as well. I suppose a study would actually have to be done. My conjecture is that it's going to have more to do with familiarity than understanding.

My conjecture is that it's going to have more to do with familiarity than understanding.

And I think you are probably right; however, my point was that there should be an understanding of the math at some level, even if only in the relative terms of logical inference based on prior experience. A better example than making change might be someone tasked with using "best fit" software for statistical data. The software might find that (for instance) the data yield a "best fit" for extrapolation based on, say, a polynomial function. If the user doesn't understand the nature of the data, he would likely go with the computer's "decision", whereas if he had been more mathematically "savvy" and acquainted with the nature of the data, he might have known that the data were empirical approximations of an exponential function, which, while technically not giving as good a fit for the existing data, would probably yield more accurate values farther away from the data cluster.

What does that have to do with being easier? I would think memorization is much easier than attempting to understand the material. There do seem to be different faculties involved. There was one study which seemed to suggest (at least for the particular task used in the study) that memorization was inversely correlated with understanding.

I can't recall the name of the study, but IIRC it consisted of two groups (call them SAME and DIFFERENT). Both groups were first given the same reading material on a particular topic, organized in a particular way. Then the subjects in group SAME were given the same material organized in the same way, whereas the subjects in group DIFFERENT were given the same material organized differently. The SAME group did better on questions testing memory, whereas the DIFFERENT group did better on questions testing understanding (the understanding questions had to do with applying the information).

The SAME group received information organized in the same way which required little thought and provided another opportunity for memorization. The DIFFERENT group received the same material but since it was presented in a different manner, they had to integrate it with the previously read material. This would seem to require them to understand the material better.

The moral of this story (if we should take a moral from one study, which we probably shouldn't) is that (1) there are different faculties involved in memory and understanding, and (2) understanding is aided, perhaps, by representing material in multiple ways.

It depends on your objective. If all you want to do is to be able to use a formula in the exact ways you were taught, then learning by rote is much quicker and easier. But if what you want to do is to be able to manipulate the formula to work out additional situations that you have not been shown how to work out, then understanding is the best method.

Back in the 90s, there were no prices per kg listed for packets. Most people would see that a bigger packet that had a discount seemed to cost less overall, and would buy it. I would actually work out how much it was with the discount. A lot of the time, buying the smaller packet was cheaper.
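The unit-price comparison is only a line or two of arithmetic. Here is a sketch with hypothetical prices (the numbers are made up for illustration):

```python
# Hypothetical prices: a "discounted" large packet can still cost
# more per kg than the small one.
small_price, small_kg = 2.00, 0.500   # $2.00 for 500 g
large_price, large_kg = 4.50, 1.000   # $5.00 for 1 kg, 10% off

small_per_kg = small_price / small_kg   # $4.00 per kg
large_per_kg = large_price / large_kg   # $4.50 per kg
print(small_per_kg < large_per_kg)      # True: the small packet is cheaper
```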

RE Msg: 80 by loverofwisdom:

But if I want to find the roots to a quadratic it would be much easier to have an available formula or even better, a computer/calculator that gives me the results. That's "easier" and more "useful". Understanding requires a lot of work and thought which the vast majority of students aren't going to even give a damn about. At least that's my take.

We were actually told by my maths teacher to not rely on a calculator or a computer except for the final results. Our maths teacher explained that each calculator or computer takes approximations, which result in a minuscule error. However, if you then use a calculator to get an interim result, and then plug that back into the calculator to work out the final result, then the minuscule error that you got from the interim calculation often expands quite rapidly to give a very large error in the final result.
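The teacher's point can be illustrated numerically. The example below is my own, not the teacher's: the small root of x^2 + 10000x + 1 = 0, where rounding the interim square root to three decimals (an error of about 0.0002) wipes out the answer entirely.

```python
import math

# Solving x^2 + 10000x + 1 = 0 for its small root. Rounding the
# interim square root to three decimal places looks harmless, but the
# subtraction of two nearly equal numbers magnifies that tiny error
# into a 100% error in the final result.
a, b, c = 1.0, 10000.0, 1.0
s = math.sqrt(b * b - 4 * a * c)              # 9999.9998...

root_full = (-b + s) / (2 * a)                # about -0.0001 (correct)
root_rounded = (-b + round(s, 3)) / (2 * a)   # 0.0 -- completely wrong

print(root_full, root_rounded)
```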

I'm not sure why this would make it any easier for students. It could very well increase their understanding assuming any of it takes. Most of mathematics is too abstract to just throw it at them.

Just out of curiosity, what about a field like statistics? In my experience with undergrad statistics courses, the bulk of them emphasize taking formulas and applying them in the correct situations. To even begin understanding where the formulas come from would require an extensive background in mathematics (for example, linear algebra), and I'm guessing some of that does get covered in a theoretical statistics graduate program. But I don't see how any of it is "easy".

This is actually a great example of understanding versus rote learning. In my very first probability lecture in university, our probability professor gave us an example of how people misunderstand statistics.

He cited that in one tabloid newspaper, it said that in a poll, the Conservative Party would get 52% of the vote, giving the impression they were likely to win the next election. However, in the Times, it said that in the same poll, the Conservative Party would get 52% +/-5% of the vote, at 90% confidence. He then explained that this means we have 90% confidence that the Conservative Party would get somewhere between 47% and 57% of the vote, with a 10% probability that they could get anywhere in the 0%-46% or 58%-100% ranges. Puts a whole different spin on things that way. You realise that this poll doesn't tell you all that much at all.
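As a back-of-envelope check (the sample size below is my guess, not a figure from the lecture): a +/-5-point margin at 90% confidence around 52% corresponds to a simple random sample of only a few hundred respondents, via margin = z * sqrt(p * (1 - p) / n).

```python
import math

# Back-of-envelope poll margin, assuming a simple random sample and
# the usual normal approximation. z ~ 1.645 for 90% confidence.
z, p = 1.645, 0.52
n = 270  # hypothetical sample size, chosen to reproduce the margin

margin = z * math.sqrt(p * (1 - p) / n)
print(round(margin, 3))  # roughly 0.05, i.e. +/-5 points
```

A sample of a few hundred is typical for newspaper polls, which is why their margins are so wide.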

I actually see this quite often here. People will quote a study that shows a result based on a calculation. Then if I look up the actual study and the actual calculation that was done, and work out the results for myself, I often find that the study tells an entirely different story. Often it shows something else entirely.

And I think you are probably right; however, my point was that there should be an understanding of the math at some level, even if only in the relative terms of logical inference based on prior experience. A better example than making change might be someone tasked with using "best fit" software for statistical data. The software might find that (for instance) the data yield a "best fit" for extrapolation based on, say, a polynomial function. If the user doesn't understand the nature of the data, he would likely go with the computer's "decision", whereas if he had been more mathematically "savvy" and acquainted with the nature of the data, he might have known that the data were empirical approximations of an exponential function, which, while technically not giving as good a fit for the existing data, would probably yield more accurate values farther away from the data cluster.

Would any of this actually require that I know how to perform the calculations myself? How would you determine that the software failed to find the right fit among the candidate function types? (I'll assume we're restricting to the four main ones, because I'm frankly not familiar with how any particular software deals with other types of functions.) My guess at the technique is that the software calculates the correlation coefficient for each candidate and picks the one closest to 1 (or -1).

So my question is: even if you knew how to perform the relevant calculations, how would you know that the computer failed to give you the correct best-fit line? Obviously, if the data seem to form some sort of curve and it's giving you a line (in terms of how the plot looks), you might have cause for concern. But in all honesty, unless I did the calculation several times (and got the same result at least a couple of those times) I wouldn't trust my own calculations, since there are many of them involved. Effectively your data form a 2-by-n (or n-by-2, depending on convention) matrix, where n is the number of data points. The sheer number of calculations is going to be fairly high, and the chance of making a mistake along the way (even punching the numbers into a calculator) is pretty high.
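For the linear case, the correlation-coefficient calculation guessed at above looks like this; a sketch of Pearson's r computed straight from its definition:

```python
import math

# Pearson's correlation coefficient, computed directly from its
# definition: r = S_xy / sqrt(S_xx * S_yy).
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# Perfectly linear data gives r = 1; curved data gives less.
xs = [1, 2, 3, 4, 5]
print(pearson_r(xs, [2 * x + 1 for x in xs]))   # 1.0
print(pearson_r(xs, [x ** 3 for x in xs]))      # about 0.94, below 1
```

Doing these sums by hand for any real data set really would involve a large number of error-prone steps, which is the point being made above.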

Or are you suggesting that the calculation was correct but the data weren't entered properly? If that's the case, I don't know how one would tell the difference unless you have expectations about what the result should be (such as plotting electron energies versus frequency in the photoelectric experiment, where the expectation is that the best fit is linear with a slope somewhere in the vicinity of Planck's constant; but this doesn't require that I know how to calculate the best-fit line or why that calculation gives a best fit).

It depends on your objective. If all you want to do is to be able to use a formula in the exact ways you were taught, then learning by rote is much quicker and easier. But if what you want to do is to be able to manipulate the formula to work out additional situations that you have not been shown how to work out, then understanding is the best method.

I definitely agree. And it seems that a little comment of mine has moved us down to this old question of what and how we should teach, and ironically I find myself defending a position I don't agree with (I frequently use the example that jeremy used, of teaching the quadratic formula by deriving it as opposed to simply memorizing it).

The vast majority of people won't ever become mathematicians, nor will they ever need to work out how to apply a formula in new situations (which seems to be restricted to mathematicians and perhaps physicists), so why do they need to understand it at all?

We were actually told by my maths teacher to not rely on a calculator or a computer except for the final results. Our maths teacher explained that each calculator or computer takes approximations, which result in a minuscule error. However, if you then use a calculator to get an interim result, and then plug that back into the calculator to work out the final result, then the minuscule error that you got from the interim calculation often expands quite rapidly to give a very large error in the final result.

I would actually think "sig figs" would be more appropriate here (unless you have another example in mind). If I'm attempting to find an approximation, it's typically in the context of, say, a problem in physics or some other discipline in which some measurements are taken and then used in further computations. The "sig figs" approach more or less carries the "uncertainties" in the measurements through each calculation. It might not even be appropriate to report that precise a number.
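To make the "carrying uncertainties" idea concrete, here is a minimal sketch of first-order error propagation for a product (the measurements are hypothetical):

```python
# For multiplication and division, relative uncertainties add
# (to first order). This is the idea behind "sig figs" rules.
def mul_with_uncertainty(x, dx, y, dy):
    """Product of two measurements x +/- dx and y +/- dy."""
    z = x * y
    dz = z * (dx / x + dy / y)   # first-order propagation
    return z, dz

# E.g. a 2.0 +/- 0.1 m length times a 3.0 +/- 0.1 m width:
area, darea = mul_with_uncertainty(2.0, 0.1, 3.0, 0.1)
print(area, darea)   # about 6.0 +/- 0.5 square metres
```

Reporting the area as 6.04823... would be meaningless here; the uncertainty swamps everything past the first decimal.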

Granted, if I'm solving a problem, I go as far as I can algebraically, then plug in numbers and get the wrong answer but I've never claimed to be any good at calculating :)

This is actually a great example of understanding versus rote learning.

I agree; though my point (assuming I got around to making it) was that the statisticians who go out and find a job doing work for some firm wouldn't need to know where the formulas came from, how to derive them, or anything of that sort. They simply need to know (or be able to look up) which formulas to use, when, where and how to use them (and know how to use software that does the calculation, because no one in their right mind would want to do any of those calculations, even with a calculator), and then "interpret the results", which does require some understanding, but not much.

He cited that in one tabloid newspaper, it said that in a poll, the Conservative Party would get 52% of the vote, giving the impression they were likely to win the next election. However, in the Times, it said that in the same poll, the Conservative Party would get 52% +/-5% of the vote, at 90% confidence. He then explained that this means we have 90% confidence that the Conservative Party would get somewhere between 47% and 57% of the vote, with a 10% probability that they could get anywhere in the 0%-46% or 58%-100% ranges. Puts a whole different spin on things that way. You realise that this poll doesn't tell you all that much at all.

I think there are several layers to what is going on here, and I think this example speaks well to the point.

To generalize, we'll assume that I have a generic undergrad statistician (I hope I don't offend too many statisticians here). They are familiar with a variety of formulas, they have a general idea of what the formulas mean, how to apply them, and what the results mean. So they'll know what a "90% confidence interval" is, they'll know (roughly speaking) what it means when a drug trial, compared to placebo in a double-blind study, has a p value of .02, and they are familiar with the formulas used to derive those things.

But the formulas come out of nowhere, or at least they seem to (they do to me, at least... I've seen so many tests used in experimental design and have no idea where these convoluted formulas come from; many of their origins, I believe, are in linear algebra, but I've never seen their derivations). In that sense I would say I don't understand those formulas, and here I'm using "understand" in a different sense than above. It's one thing to be given a formula and "understand" what it is, how to use it and what the results mean; it's another thing to have an "understanding" that allows one to derive the formula, and the meaning derived from that understanding is much deeper.

It depends on your objective. If all you want to do is to be able to use a formula in the exact ways you were taught, then learning by rote is much quicker and easier. But if what you want to do is to be able to manipulate the formula to work out additional situations that you have not been shown how to work out, then understanding is the best method.

I definitely agree. And it seems that a little comment of mine has moved us down to this old question of what and how we should teach, and ironically I find myself defending a position I don't agree with (I frequently use the example that jeremy used, of teaching the quadratic formula by deriving it as opposed to simply memorizing it).

The vast majority of people won't ever become mathematicians, nor will they ever need to work out how to apply a formula in new situations (which seems to be restricted to mathematicians and perhaps physicists), so why do they need to understand it at all?

That's quite true. There is one thing I've noticed, though.

We don't need to understand the quadratic formula in everyday life. But then, the vast majority of people don't ever use the quadratic at all. The same is true of calculus: the vast majority of people don't need to understand calculus, but they don't use it either. Many people do need to use percentages. But they also have to understand them, or they won't understand the difference between a discount and an increase.
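The discount/increase asymmetry in numbers, as a tiny sketch:

```python
# A 20% discount followed by a 20% increase does NOT return to the
# original price, because the 20% increase applies to a smaller base.
price = 100.0
discounted = price * 0.80    # 80.0
back_up = discounted * 1.20  # 96.0, not 100.0
print(back_up)
```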

To be honest, is there anything in maths that most people need to use but not understand? I cannot think of anything offhand. The only time they need to learn something without understanding it is when it's on an exam tested only by rote, not by understanding. But that's an arbitrary condition of their exams. They wouldn't have to learn it at all, or would have to understand it, if that was what their exams required. Moreover, these exams are there to test their ability. If the exams test rote memorisation of formulas that over 90% of people will never have reason to need, then they are being tested on their ability to memorise useless facts without understanding them. That's great training for swallowing adverts that claim to have great products but are actually useless when you think about what they are really saying, or for swallowing government claims that actually make no sense. But outside of that, where on Earth is the benefit?

There are the formulas that are used by physicists and engineers. But they need to understand them as well. That's why so many physicists are now called theoretical physicists. If you don't understand the basis of the formulas, you cannot extend their use beyond what you are taught. But that is what the jobs of physicists and engineers are: to push the boundaries of science, and to make things that have never been done before in exactly that way and in those conditions. They are always pushing the boundaries and are always using formulas in ways they have not been taught. So they HAVE to understand the basis of those formulas.

A classic example of this is Green's Lemma, something that most physicists are very familiar with. It is used all the time in physics. It is closely tied to Cauchy's Residue Theorem, which in itself is a direct result of Cauchy's Integral Theorem. Cauchy's Integral Theorem says that the integral of any function that is analytic (complex-differentiable) on and inside a simple closed loop is always zero. The Residue Theorem states that when there are singularities inside the loop (points at which differentiability breaks down), the integral instead equals 2*pi*i times the sum of the "residues" of the function at those singularities.

This has a very important consequence. Since Green's Lemma is tied to Cauchy's Integral Theorem, if we want to use it, 90% of the time all we need to do is break the contour into a set of simple closed loops, observe where differentiability breaks down, and simply add up the residues at the singularities.
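The residue bookkeeping can even be checked numerically. This sketch (my own illustration, not from the post) integrates f(z) = 1/(z - 0.3) around the unit circle; the single singularity at z = 0.3 has residue 1, so the answer should be 2*pi*i:

```python
import cmath
import math

# Numerically integrate f(z) = 1/(z - 0.3) around the unit circle.
# One singularity (residue 1) lies inside, so by the Residue Theorem
# the contour integral should come out to 2*pi*i.
N = 2000
total = 0j
for k in range(N):
    t = 2 * math.pi * (k + 0.5) / N    # midpoint rule in the angle
    z = cmath.exp(1j * t)              # point on the unit circle
    dz = 1j * z * (2 * math.pi / N)    # dz = i * e^{it} dt
    total += dz / (z - 0.3)

print(total)   # approximately 0 + 6.2832j, i.e. 2*pi*i
```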

An easy example is to prove that gravity acts on any spherical body as if its mass were compressed into the centre. We CAN use Green's Lemma. OR, we can simply observe that the field of a sphere is continuous and differentiable at every point except the centre, so the overall effect of gravity is only that which acts at the centre. Easy peasy. Makes physics much simpler.

It also makes it really easy to calculate the resistance of a wire, much easier than the complex integration normally required to obtain it using Green's Lemma.

Speaking of physics, a third of my maths O-Levels and A-Levels consisted of physics problems, which we called applied maths. They covered all sorts of problems in physics, for instance buoyancy, mechanics, ballistics, and friction. I found that understanding the basis of these formulas made it really easy to work out almost any problem in physics.

We were actually told by my maths teacher to not rely on a calculator or a computer except for the final results. Our maths teacher explained that each calculator or computer takes approximations, which result in a minuscule error. However, if you then use a calculator to get an interim result, and then plug that back into the calculator to work out the final result, then the minuscule error that you got from the interim calculation often expands quite rapidly to give a very large error in the final result.

I would actually think "sig figs" would be more appropriate here (unless you have another example in mind). If I'm attempting to find an approximation, it's typically in the context of, say, a problem in physics or some other discipline in which some measurements are taken and then used in further computations. The "sig figs" approach more or less carries the "uncertainties" in the measurements through each calculation. It might not even be appropriate to report that precise a number.

Yes. One problem is that we often know what the error margins are in each step, but we often don't work out what they will contribute to the final result. So we can get a result that is exactly consistent with the data, but nothing like what we expected to see, because the errors are so huge that what we are actually seeing could be something else entirely, with error bars large enough to reach all the way into the thing we are looking for.

Granted, if I'm solving a problem, I go as far as I can algebraically, then plug in numbers and get the wrong answer but I've never claimed to be any good at calculating :)

The beauty of the method is you don't have to be. The method can be programmed into a computer, or even into an Excel spreadsheet. Then you don't need to calculate it yourself. But the method comes from the maths. So really, calculations depend on good understanding of what the maths is telling us.

This is actually a great example of understanding versus rote learning.

I agree; though my point (assuming I got around to making it) was that the statisticians who go out and find a job doing work for some firm wouldn't need to know where the formulas came from, how to derive them, or anything of that sort. They simply need to know (or be able to look up) which formulas to use, when, where and how to use them (and know how to use software that does the calculation, because no one in their right mind would want to do any of those calculations, even with a calculator), and then "interpret the results", which does require some understanding, but not much.

That would be true if being a statistician did not require great knowledge of maths. But the opposite is true. If you want to become a statistician, you need to get a Maths degree.

He cited that in one tabloid newspaper, it said that in a poll, the Conservative Party would get 52% of the vote, giving the impression they were likely to win the next election. However, in the Times, it said that in the same poll, the Conservative Party would get 52% +/-5% of the vote, at 90% confidence. He then explained that this means we have 90% confidence that the Conservative Party would get somewhere between 47% and 57% of the vote, with a 10% probability that they could get anywhere in the 0%-46% or 58%-100% ranges. Puts a whole different spin on things that way. You realise that this poll doesn't tell you all that much at all.

I think there are several layers to what is going on here, and I think this example speaks well to the point.

To generalize, we'll assume that I have a generic undergrad statistician (I hope I don't offend too many statisticians here). They are familiar with a variety of formulas, they have a general idea of what the formulas mean, how to apply them, and what the results mean. So they'll know what a "90% confidence interval" is, they'll know (roughly speaking) what it means when a drug trial, compared to placebo in a double-blind study, has a p value of .02, and they are familiar with the formulas used to derive those things.

But the formulas come out of nowhere, or at least they seem to (they do to me, at least... I've seen so many tests used in experimental design and have no idea where these convoluted formulas come from; many of their origins, I believe, are in linear algebra, but I've never seen their derivations). In that sense I would say I don't understand those formulas, and here I'm using "understand" in a different sense than above. It's one thing to be given a formula and "understand" what it is, how to use it and what the results mean; it's another thing to have an "understanding" that allows one to derive the formula, and the meaning derived from that understanding is much deeper.

Probability comes from common sense, and I do mean ALL of probability comes from common sense. It's one of the few areas of maths that has basically been built up from what we understand, and not from simple observations or from prior proofs. For instance, we all know that the probability of a coin landing heads is 1/2. But how do you know that? Is there some theorem that proves this beyond doubt? Nope. Do we all sit there and throw coins up 1 million times and count the number of times they land heads? Nope. We say that if both sides are equally likely to land face up, and only one can, then the probability of each is 1/2. It's based on human intuition, and human understanding.

Statistics is for when we want to work out the probabilities but we don't have all the necessary information; we can gather some of it, but only a small part. So we gather that small part of the information and attempt to work out what the whole looks like. Statistics is thus how to make probability calculations in a practical way. In that respect, it's even easier to understand than probability.

@LoverOfWisdom: What I meant in my little statistical scenario was that the computer software by itself would only be able to calculate a curve of best fit to an existing scatterplot. If one didn't know in advance what type of mathematical curve best described the computed result, one might be inclined to go with it. HOWEVER, suppose one knew in advance that the scatterplot ought to conform to a different curve (in my example, an exponential curve). One would then reasonably deduce that curve values well outside the scattergram would be more accurate if it had been fitted to an exponential curve than say a polynomial one chosen by the computer. In such a case, one with an understanding of the type of curve the scattergram really represents, and wishing in particular to get more accurate "predicted values" some distance from the scattergram, would probably instruct the computer to fit a best exponential curve rather than letting the computer "decide" to use a polynomial approximation.

It struck me that it might be a suitable example where an understanding of the underlying mathematics (i.e. knowing the type of curve the scatterplot represents) would prove more advantageous than simply punching in the data and letting the computer decide what to do with it.

...which I said more precisely in the portion of my post you quoted :)

Yes, it does state it, if you are aware that it is what is being stated. But it's often skipped over in school and you just learn it as a formula without understanding.

Just as an aside, and I think this one's more interesting: how would you explain 0! = 1? The more concrete definition of factorial doesn't have any clear meaning for 0, so that would be an example of convention. (We could, of course, treat the factorial via the gamma function, but that's a bit more abstract.)
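One standard way to see the convention: the recurrence n! = n * (n-1)! only works down to n = 1 if 0! = 1; it's the "empty product" convention. A sketch:

```python
# Why 0! = 1: the recurrence n! = n * (n-1)! requires 1! = 1 * 0!,
# which forces 0! = 1. Equivalently, a product of no factors is 1
# (the "empty product" convention).
def factorial(n):
    if n == 0:
        return 1          # empty product
    return n * factorial(n - 1)

print(factorial(0), factorial(1), factorial(5))  # 1 1 120
```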

Are you talking about problem solving versus proving? I asked Jeremy how he might use his method to solve 2^(1/x) = sqrt(2). He gave me an answer, but I could go further and say: now find x using your method for 2^x = 7. Those who know the formulas may immediately say x = log_2(7). But using Jeremy's method, how might we arrive at this?

It's not my method as much as just what is actually taking place, I think.

But to answer you, 7 is the number in the first column (the products) and x is the matching number in the second column (the sums).
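As I read the method, the two columns are powers of 2 (the products) and their exponents (the sums); solving 2^x = 7 means locating 7 in the products column. A sketch of that reading (the table-building here is my own illustration):

```python
import math

# The two-column table as I read it: powers of 2 ("products") paired
# with their exponents ("sums").
table = [(2 ** k, k) for k in range(6)]   # (1,0), (2,1), (4,2), (8,3), ...

# 7 falls between the products 4 and 8, so x lies between the sums
# 2 and 3; refining the table with fractional exponents homes in on it.
x = math.log2(7)
print(round(x, 4))   # 2.8074, and indeed 2 < x < 3
```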

Are you talking about problem solving versus proving? I asked Jeremy how he might use his method to solve 2^(1/x) = sqrt(2). He gave me an answer, but I could go further and say: now find x using your method for 2^x = 7. Those who know the formulas may immediately say x = log_2(7). But using Jeremy's method, how might we arrive at this?

To the extent I understand what Jeremy's doing, it seems to me (and he can correct and/or clarify where need be) that he is attempting a few things:

(1) A reduction of a variety (all? or at least the ones that can be so reduced) of mathematical concepts to counting (which is related to the cardinality of sets though sets are not the abstract entities of set theory but "groups of objects" where the "objects" are physical.)

(2) That this reduction amounts to a "grounding" of the concepts. Those that can't be so reduced are not "grounded". This, I believe, is analogous to much of the history of mathematics, which required a "grounding" not as a reduction to "groups of objects" but to a geometrical representation, or, to give more recent examples, a "grounding" in logic (the logicism of Frege and Russell) or in set theory.

I'm not entirely clear what "grounding" does other than takes a preferred ontology or methodology and attempts to reduce everything else to it.

(3) That education in mathematics (to achieve understanding) requires this "grounding", which may be characterized as I have above. But another thought that has come to me regarding what Jeremy is attempting to do, and that might be useful here (Jeremy can chime in, of course), is the distinction between concrete and abstract.

Notice that I gave an abstract formula (which to Jeremy wasn't understanding) and he re-presented my same formula in such a way that it gave a concrete realization of that same abstract formula.

While there is, I think, good indication that concrete examples serve as pedagogical "roads" to understanding, I don't think the concrete examples are the understanding.

To use his example, he provided a pattern that seems to hold for the number two and that illustrates the relationship, however, there is no indication, in the way that it's presented, that it holds for all numbers. Someone might see that concrete example and perhaps come to memorize that concrete example but have no clue how to abstract from it to a more general case (or even understand why it can be generalized... not all concrete examples generalize in the way that one comes to understand them).

I think if I were asked to explain why the formula holds (and to illustrate my point on why I think the concrete/abstract thing is relevant here) I would begin by defining exponentiation which I'll define in terms of multiplication: x^y means that I multiply x by itself y times (where x and y are integers - it can be generalized to rational numbers but let's leave that issue aside.)

Now suppose that I write out the multiplication:

x^y = x_1 * x_2 * ... * x_y (where x_1 = x_2 = ... = x_y = x; the numbers after each x are intended as subscripts)

By the commutative and associative laws of multiplication I can group them any way I like. Suppose I group the first k in one group; the second group would then have y-k elements:

x^y = (x_1 * ... * x_k) * (x_(k+1) * ... * x_y) = x^k * x^(y-k)

Now I'm going to use a substitution, letting a = k and b = y-k. Observe that a + b = y, which will be the third substitution:

x^(a+b) = x^a * x^b

This proves the formula generally (provided that x, a and b are all natural numbers). This is what I mean by a deeper understanding. It satisfies Jeremy's "grounding" criteria since I define exponentiation in terms of multiplication, I could define multiplication in terms of addition and addition in terms of counting. However, it's "abstract", not "concrete" which I think is probably part of what Jeremy's concerned with (he can chime in of course). If I were to present this to a high school algebra class, with the exception of perhaps a couple of mathematically inclined students, most of them will look at me with blank stares. Jeremy's way would perhaps provide a better stepping stone to this. But I think this is closer to understanding the formula, not the concrete presentation Jeremy presented.
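The reduction chain just mentioned (exponentiation defined from multiplication, multiplication from addition, addition from counting) can be sketched in code. This is only my illustration of the grounding idea, not anyone's proposed pedagogy; the function names are mine:

```python
def add(x, y):
    # Addition as repeated counting (taking the successor y times)
    total = x
    for _ in range(y):
        total += 1
    return total

def mul(x, y):
    # Multiplication as repeated addition
    total = 0
    for _ in range(y):
        total = add(total, x)
    return total

def power(x, y):
    # Exponentiation as repeated multiplication
    total = 1
    for _ in range(y):
        total = mul(total, x)
    return total

# The law x^(a+b) = x^a * x^b is just a regrouping of the same multiplications:
assert power(2, add(3, 4)) == mul(power(2, 3), power(2, 4))  # 128 == 8 * 16
```

The assertion at the end is exactly the grouping argument above: seven copies of 2 multiplied together, split as three and four.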

Problem solving seems to require something different. For instance, I could ask: how many regular polygons fit in a circle whose radius forms the diagonal of every regular polygon inscribed within the circle?

How is that different from proving? The beauty of mathematics, to me (and probably to most mathematicians), is the process. The results are less interesting to me (and to some extent less meaningful independent of the proofs). To give an example, consider the Quadratic Reciprocity Theorem. There are over 200 proofs of this theorem. Gauss, who first proved it, found 8 of them in an attempt to understand the theorem.

Granted, problem solving is probably much more closely related to the process of finding the proof than to the proof itself. Proofs are what's left after the mathematician has hidden all of his toys and clothes underneath his bed.

We don't need to understand the quadratic equation in everyday life. But the vast majority of people don't ever use the quadratic at all. The same is true of calculus: the vast majority of people don't need to understand calculus, but they don't use it either. Many people need to use percentages. But they also have to understand them, or they won't understand the difference between a discount and an increase.
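The discount/increase point can be made concrete with a toy calculation (the numbers are mine, purely illustrative): a 20% discount followed by a 20% increase does not return to the original price, because each percentage is taken of a different base.

```python
price = 100.0
discounted = price * (1 - 0.20)     # 20% off: 80.0
restored = discounted * (1 + 0.20)  # 20% back on: 96.0, not 100.0
# The increase acts on the smaller base (80), so 4% of the original is lost.
```

Someone who only knows percentages by rote has no way to flag that "restored" should not equal the original price.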

To be honest, is there anything in maths that most people need to use, but not understand? I cannot think of anything offhand. The only time they need to learn something but not understand it is when it is on the exams to be tested only by rote, not by understanding. But that's an arbitrary condition of their exams. They wouldn't have to learn it at all, or would have to understand it, if that was what their exams required. Moreover, these exams are there to test their ability. If they test rote memorisation of formulas that over 90% will never have reason to need, then they are being tested on their ability to memorise useless facts without understanding them. That's great training for swallowing adverts that claim to have great products but are actually useless when you think about what they are really saying, or for swallowing government claims that actually make no sense. But outside of that, where on Earth is the benefit?

I think at this point it would be helpful to define terms. I'm hoping my previous post has helped clarify this further. I'm going to stay with the quadratic equation for this example. I'm going to intentionally equivocate on the word "understanding" because I believe we are all talking past each other with what we mean when each of us says "understanding".

Suppose I have some formula such as:

ax^2 + bx + c = 0 (or some algebraic equivalent.... interestingly another way to "understand" the formula is to recognize that ax^4 + bx^2 + c = 0 is another formula where the formula can be applied).

I have to understand some basic things in algebra in order to recognize that this is a place where I can apply the quadratic equation to find x in terms of a, b and c. Once I have made that recognition, I am able to apply the formula and get the solution. That's some level of understanding.
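To sketch that "recognition" point in code (my own toy illustration; nothing here beyond the standard formula): a biquadratic ax^4 + bx^2 + c = 0 becomes a quadratic in u = x^2, so the same formula applies twice over.

```python
import cmath

def quadratic_roots(a, b, c):
    # The quadratic formula; cmath keeps negative discriminants legal
    d = cmath.sqrt(b * b - 4 * a * c)
    return ((-b + d) / (2 * a), (-b - d) / (2 * a))

def biquadratic_roots(a, b, c):
    # Substitute u = x^2, solve the quadratic in u, then take square roots
    roots = []
    for u in quadratic_roots(a, b, c):
        r = cmath.sqrt(u)
        roots.extend([r, -r])
    return roots

# x^4 - 5x^2 + 4 = 0 factors as (x^2 - 1)(x^2 - 4): roots should be ±1, ±2
rs = sorted(r.real for r in biquadratic_roots(1, -5, 4))
```

Recognizing that the substitution is legitimate is itself a piece of the "understanding" being discussed.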

I have a different sort of understanding when I understand it as a graph in the x-y plane and I recognize that the quadratic provides me with roots (e.g. where the graph crosses the x axis). This would be a geometric interpretation of the issue.

It's not the original geometric interpretation, since initially they found roots but were only concerned with the positive part, a negative length being meaningless. The "Father of Algebra", Al-Khwarizmi, was concerned with areas of squares and rectangles, of course. That would be another understanding.

But another kind of understanding (and I would consider deeper) would be a derivation of the formula. This is what Jeremy proposed. Instead of having algebra students memorize some formula that they'll forget in a month, why not show them how to derive it?
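Completing the square is presumably the derivation in question, and it takes only a few lines:

```latex
ax^2 + bx + c = 0
\;\Rightarrow\; x^2 + \frac{b}{a}x = -\frac{c}{a}
\;\Rightarrow\; \left(x + \frac{b}{2a}\right)^2 = \frac{b^2 - 4ac}{4a^2}
\;\Rightarrow\; x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
```

A student who has seen this once knows where the discriminant comes from, rather than memorizing it as an incantation.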

My point was that one could use the formula and know how to use it and be able to use it in an assortment of different situations and not understand it. What do I mean by "understand" here?

There are the formulas that are used by physicists and engineers. But they need to understand them as well. That's why so many physicists are now called theoretical physicists. If you don't understand the basis of the formulas, you cannot extend their use beyond what you are taught. But that is what the jobs of physicists and engineers are: to push the boundaries of science, and to make things that have never been done before in exactly that way and in those conditions. They are always pushing the boundaries and are always using formulas in ways they have not been taught. So they HAVE to understand the basis of those formulas.

I don't doubt that this can be useful for those physicists who do so (and I acknowledged that in the previous post, so I'm not sure why you're harping on this point), but I suspect you're overstating the point here. Not all physicists are theoretical physicists, and not all of them push the boundaries of science by finding new applications of formulas through knowing how those formulas were derived. That doesn't mean it doesn't happen, of course; it simply means I think you're overstating your point.

To give an example, consider how de Broglie took the wave-particle duality of light and applied it to matter and then did some simple algebraic manipulations to find the wavelength of a particle. This took some insight and it certainly "pushed the boundaries of science", yet it didn't require any complex mathematical understanding.
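If it helps to see how short the algebra is: for light, E = hν and p = E/c, and since c = λν,

```latex
E = h\nu, \qquad p = \frac{E}{c} = \frac{h\nu}{c} = \frac{h}{\lambda}
\quad\Longrightarrow\quad \lambda = \frac{h}{p}
```

De Broglie's insight was to postulate that the last relation holds for matter as well, not any heavy mathematical machinery.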

In my limited experience with physicists, most of them aren't concerned with proof in the way that mathematicians are; they are merely concerned with whether or not the math is useful for their given problem.

That would be true if being a statistician did not require great knowledge of maths. But the opposite is true. If you want to become a statistician, you need to get a Maths degree.

I don't know what it is in the UK, but that is not true here at all. A solid background in mathematics would be required for someone becoming an academic researcher in statistics, I'm sure, but many organizations have a need for statisticians who don't need that mathematical background. The actuary profession is an example. As far as I can recall, the only math required on the examination was some multivariable calculus (which of course doesn't require any proof work at all).

Probability comes from common sense, and I do mean ALL of probability comes from common sense. It's one of the few areas of maths that have basically been made up from what we understand, and not from simple observations or from prior proofs. For instance, we all know that the probability of a coin landing heads is 1/2. But how do you know that? Is there some theorem that proves this beyond doubt? Nope. Do we all sit there and throw coins up 1 million times, and count the number of times they land heads? Nope. We say that if the probabilities of both sides landing are equal, then there are 2 ways it can land, so the probability of either one is 1/2. It's based on human intuition, and human understanding.

What do you mean by "common sense"? (This is more of a rhetorical question...)

I'm not following how you're deriving that the probability is 1/2 here. This is primarily because probability is axiomatic and I think you're assuming things that are implicit in the axioms without explicitly stating them.

For example, they can be stated as found here: http://mathworld.wolfram.com/ProbabilityAxioms.html

As you note, there are only two relevant events and they are equally probable (the coin is "fair") and mutually exclusive. The fact that the probability that the coin lands on heads is 1/2 follows directly from these assumptions as well as the axioms.

I have always been curious to see how "fair", say, a US quarter is. Since the two sides have different markings ("heads" and "tails"), they are not symmetric. There might be some small yet statistically significant difference from 1/2, which one could test provided that a large enough sample were taken. But that's just a musing of mine.
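That musing is easy to make precise. A minimal sketch of such a test (my own illustration, using the normal approximation; the flip counts are made up): with a large enough sample, even a small bias away from 1/2 becomes statistically significant.

```python
import math

def fairness_test(heads, flips):
    """Two-sided z-test of H0: the coin is fair (p = 0.5), normal approximation."""
    p0 = 0.5
    se = math.sqrt(p0 * (1 - p0) / flips)       # standard error under H0
    z = (heads / flips - p0) / se               # standardized deviation from 1/2
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability
    return z, p_value

# A 0.4-percentage-point bias, invisible in 100 flips, shows up in 100,000:
z, p = fairness_test(50_400, 100_000)           # z ~ 2.53, p ~ 0.011 < 0.05
```

The point is the sample-size dependence: the same observed proportion in 1,000 flips would be nowhere near significant.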

What I meant in my little statistical scenario was that the computer software by itself would only be able to calculate a curve of best fit to an existing scatterplot. If one didn't know in advance what type of mathematical curve best described the computed result, one might be inclined to go with it. HOWEVER, suppose one knew in advance that the scatterplot ought to conform to a different curve (in my example, an exponential curve). One would then reasonably deduce that curve values well outside the scattergram would be more accurate if it had been fitted to an exponential curve than say a polynomial one chosen by the computer. In such a case, one with an understanding of the type of curve the scattergram really represents, and wishing in particular to get more accurate "predicted values" some distance from the scattergram, would probably instruct the computer to fit a best exponential curve rather than letting the computer "decide" to use a polynomial approximation.

I guess I'm not clear on what your example would amount to. I mean, the only analog to the cashier example would be entering the numbers incorrectly, but I'm not sure how you would notice that, and if you did, noticing wouldn't require that you know how to compute regression lines.

To complicate the problem further, if you're not restricting yourself to the 4 types that can be derived from linear regression (linear, logarithmic, exponential and power), then you have potentially infinitely many curves that will fit the data and it's unlikely there's much meaning to which one fits the best (couldn't you always find a better curve?).

If you restrict yourself to the 4 from linear regression, the calculation the computer performs will be the same as the one I'm going to do and to tell which one is best, I'm going to look at the correlation coefficients for each (which is what the computer will do) and I'm pretty sure the computer will do a better job of calculating than I will (I'll mess up).
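The comparison among the 4 linearizable fits can be sketched as follows (my own toy illustration, not any particular software's algorithm): transform the data so each model becomes a straight line, then compare correlation coefficients on the transformed variables, which is essentially what the calculator does.

```python
import math

def pearson_r(xs, ys):
    # Sample correlation coefficient
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Each of the 4 models is a straight line after the right transform:
transforms = {
    "linear":      lambda x, y: (x, y),                      # y = a + b*x
    "logarithmic": lambda x, y: (math.log(x), y),            # y = a + b*ln(x)
    "exponential": lambda x, y: (x, math.log(y)),            # ln(y) = ln(a) + b*x
    "power":       lambda x, y: (math.log(x), math.log(y)),  # ln(y) = ln(a) + b*ln(x)
}

def best_model(xs, ys):
    scores = {}
    for name, f in transforms.items():
        tx, ty = zip(*(f(x, y) for x, y in zip(xs, ys)))
        scores[name] = abs(pearson_r(tx, ty))
    return max(scores, key=scores.get), scores

# Data generated from y = 2*e^(0.5x): the exponential transform linearizes it exactly
xs = [1, 2, 3, 4, 5]
ys = [2 * math.exp(0.5 * x) for x in xs]
name, scores = best_model(xs, ys)
```

Note this only arbitrates among these 4 families; it says nothing about the infinitely many other curves that could fit, which was the earlier point.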

It struck me that it might be a suitable example where an understanding of the underlying mathematics (i.e. knowing the type of curve the scatterplot represents) would prove more advantageous than simply punching in the data and letting the computer decide what to do with it.

The best I can come up with as an example is from my own experience, and it will support my original claim. I was using Excel to find the best-fit line (and I was only concerned with a linear fit since I expected the results to be linear: I was calculating Planck's constant by plotting E = hf). When I performed the calculation in Excel, the numbers were way off. Essentially, when Excel does the computation, it does some rounding, and when you're dealing with extremely small numbers, that messes things up. To fix the problem, I had to rescale the numbers by multiplying by some power of 10 and factor that into my final calculation.

What I would point out from this is as follows:

(1) I knew it was incorrect because the line was nowhere near the data. So I suppose it requires the understanding (keep in mind, we mean many things by this one term) that the line is supposed to be close to the data but that understanding doesn't require knowing anything about the underlying math.

(2) I hadn't taken my second semester of linear algebra at the time so I didn't know how linear regression worked; I didn't understand the underlying math. It was all a mystery to me (and frankly didn't care at the time... I figured someone worked out the math so I was just applying the formula... or rather the software which is applying the formulas.)
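The rescaling trick can be sketched like this (Python floats don't round the way that spreadsheet did, so this is only an illustration of the workaround, with made-up frequencies): fit the line in convenient units, then undo the scaling at the end.

```python
def slope(xs, ys):
    # Ordinary least-squares slope: cov(x, y) / var(x)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

h = 6.626e-34                   # Planck's constant, J*s
fs = [4e14, 5e14, 6e14, 7e14]   # hypothetical photon frequencies, Hz
es = [h * f for f in fs]        # E = h*f, around 2e-19 J: very small numbers

# Fit in friendlier units (frequencies in 1e14 Hz, energies in 1e-19 J),
# then divide both scale factors back out to recover h:
s = slope([f / 1e14 for f in fs], [e * 1e19 for e in es])
h_est = s / (1e19 * 1e14)
```

Knowing that a slope scales linearly under a change of units is exactly the sort of modest "underlying math" the fix required.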

Yes, it does state it, if you are aware that it is what is being stated. But it's often skipped over in school and you just learn it as a formula without understanding.

I guess I don't know what you mean by this (and refer you to my discussion in msg 99 which covers this same issue). I suspect the only difference between your illustration (which is a concrete formula using specific numbers) and my formula (which is an abstract formula using variables) is that yours is concrete and mine is abstract. You can apply concrete formulas (to some extent... but since they are concrete how do you tell if they apply anywhere else? The abstract and general formula indicates it applies elsewhere).

Once you have that formula (and I showed how you can derive it), you can then derive x^0=1 by any number of methods.
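One such method, for what it's worth, uses the addition law itself: put b = 0, then divide through by x^a (for x ≠ 0):

```latex
x^a = x^{a+0} = x^a \cdot x^0
\quad\Longrightarrow\quad
x^0 = \frac{x^a}{x^a} = 1 \qquad (x \neq 0)
```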

Your Excel example made me think of a better one than the curve-fitting algorithm I was trying to use to make my point. In the earlier days of computers/calculators, most software math routines were calculated in base two and presented in base ten. The first microcomputer accounting software might have been a bit of a nightmare to accountants and bookkeepers who didn't know this and might have gotten truncation errors from the conversion. Needless to say, the "wild west" of microcomputer accounting didn't last long, and programming languages were very quick to introduce binary-coded decimal math as a programming option, to keep the accountants from burning the programmers at the stake.

Picture Joe bookkeeper with his new computerized accounting system during this "wild west" period. What does he know of binary math, or BCD? Which type is his accounting software using? What's the difference? All he knows is that he used to be very proud of balancing his books to the penny, and since he started using that damn computer, he was always a penny out (and he was going nuts trying to figure out where it got "lost"). Here is a situation where knowing the relevant theory (like binary/decimal conversion and truncation error) in advance might have saved him a lot of grief. Therein lies my point: while we might be able to get by letting the machines do the thinking for us, we should know enough of the theory ourselves to understand the results and not just read them off as though they were the word of God.
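Joe's penny can be reproduced in any modern language that uses binary floating point. A minimal sketch (Python's decimal module plays the role that BCD once did):

```python
from decimal import Decimal

# Binary floating point cannot represent 0.10 exactly, so cents drift:
total = sum([0.10] * 3)
assert total != 0.30                  # actually 0.30000000000000004

# Decimal arithmetic (the modern descendant of BCD) keeps the books to the penny:
total_d = sum([Decimal("0.10")] * 3)
assert total_d == Decimal("0.30")
```

The drift is exactly the binary/decimal conversion error described above, just smaller than it was in early software.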

Math is weird? I always found math, physics, chemistry, philosophy and literature very important, and they all swim in an ocean of logic and knowledge. Sometimes it takes much thinking to understand math. For example, even imaginary numbers, where "x^2 + 1 = 0" is used in engineering and in many applications (x squared plus one equals zero, and x is an imaginary number).

I think at this point it would be helpful to define terms. I'm hoping my previous post has helped clarify this further. I'm going to stay with the quadratic equation for this example. I'm going to intentionally equivocate on the word "understanding" because I believe we are all talking past each other with what we mean when each of us says "understanding".

Suppose I have some formula such as:

ax^2 + bx + c = 0 (or some algebraic equivalent.... interestingly another way to "understand" the formula is to recognize that ax^4 + bx^2 + c = 0 is another formula where the formula can be applied).

I have to understand some basic things in algebra in order to recognize that this is a place where I can apply the quadratic equation to find x in terms of a, b and c. Once I have made that recognition, I am able to apply the formula and get the solution. That's some level of understanding.

I have a different sort of understanding when I understand it as a graph in the x-y plane and I recognize that the quadratic provides me with roots (e.g. where the graph crosses the x axis). This would be a geometric interpretation of the issue.

It's not the original geometric interpretation, since initially they found roots but were only concerned with the positive part, a negative length being meaningless. The "Father of Algebra", Al-Khwarizmi, was concerned with areas of squares and rectangles, of course. That would be another understanding.

But another kind of understanding (and I would consider deeper) would be a derivation of the formula. This is what Jeremy proposed. Instead of having algebra students memorize some formula that they'll forget in a month, why not show them how to derive it?

My point was that one could use the formula and know how to use it and be able to use it in an assortment of different situations and not understand it. What do I mean by "understand" here?

I agree we need to understand some terms. We are going into epistemology here. But linguistics of the word is also useful to actually grasp what we are attempting to describe.

What exactly do we mean by "understanding"?

The first thing we see, is that understand is a transitive verb. You can say "I understand quantum physics." You can also say "I understand" after someone has said something, meaning "I understand (what you said)." But we cannot just say "I understand" without an object, or an implied object. Understanding is not a concept in itself. Understanding is only in reference to an idea or concept. We can only truly say we "understand" something. When we leave out the something, it has to be implied.

The second thing we see, is that understanding is "under-stand". It isn't exactly clear why we use those 2 words together until we give it a little thought. We can say we "stand by something", in that we stand by the acceptance that a certain idea is true. When we say we "under-stand" something, we are actually saying we "stand under" something. We are not saying we stand by the idea or concept itself. We are saying that we stand by what is under that idea or concept.

What exactly do we mean by standing under something?

Let's take an example: say you explain something to someone else. All the other person hears is words. If we ask "do you understand?", we do not mean if they have heard the words. We mean that we are expressing a statement by use of those words, and we are trying to ascertain if the other person grasped the statement that we are attempting to express. But how can the other person actually grasp the statement? We never said the thing we are trying to get across. We only said words.

However, we believe that if we construct those words in a certain order, that naturally leads to the statement itself. For instance, say we say "pass us the salt". These are just words. We don't mean literally to pass the salt by us. We mean to ask the other person to hand the salt to us. But we believe that the joining of those words in that order naturally leads to a mental picture, that is what we think of by saying "pass us the salt".

That's what we mean by "understand". We mean that the other person has "stood" by the common meaning of each word, that each represents a concept, "pass", "us", "the" and "salt" and used them to construct a different concept, that is the combination of them all, in a certain way. So the person doesn't really "understand" the sentence. The other person "stands" by the meanings of each word, that "under"lie (lie "under") the overall concept being expressed.

Effectively, when we say "I understand", we mean to say that we stand by what is "under" the sentence. We know what the bases of the sentence are, that are "under" the sentence, and we "stand" by their combination.

We can see this in semitic languages as well. "Understanding" = "Binah" in Hebrew, which is the causative conjugation of "Boneh" = "Build". In Hebrew, we see understanding as being a concept that is "built" from other concepts in exactly the same way.

So, when we talk about "understanding", we are not talking about grasping a concept directly, for we are talking about things we cannot see. If we could see them directly, we would say we "know", such as "I know the Sun is yellow". We are talking about agreeing on certain concepts that underpin the concept we are attempting to grasp, and by agreeing on the underpinning concepts, we can then build the same concept in our heads, even though we are 2 different people and we cannot transfer the concept directly. When we say we "understand" that the Sun is a big ball of fire, we are agreeing that we don't see any way that the Sun could just shine yellow light on its own, and we do see that if it were like a fire, then it could emit yellow light, like a fire.

The same is true of driving. We can understand how brakes work, in 2 ways.

One way is to explain how the brakes actually function, and how that relates to the laws of motion and friction. With that, we can construct a working idea of how brakes operate in reality. We can figure out that doubling the speed will more than double the braking distance. We can figure out that surfaces like ice can lower the coefficient of friction, and that can extend the braking distance to as much as 10 times what it is normally.
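Both claims in that paragraph fall out of one work-energy equation. A sketch (the friction coefficients are rough textbook values, not measurements):

```python
def braking_distance(v, mu, g=9.81):
    # Work-energy theorem: (1/2) m v^2 = mu * m * g * d, so d = v^2 / (2 mu g)
    return v ** 2 / (2 * mu * g)

dry = braking_distance(20, 0.7)    # 20 m/s on dry asphalt (mu ~ 0.7)
fast = braking_distance(40, 0.7)   # doubling the speed quadruples the distance
ice = braking_distance(20, 0.07)   # a tenfold drop in mu: ten times the distance
```

The square in v^2 is the principle the untrained driver's brain has to reinvent by trial and error.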

The other way is to experience driving itself without understanding how brakes and friction work. But then we are nonplussed. Our reason tells us that doubling the speed will double the braking distance, but not more. Our reason tells us that braking is braking, and it should not really matter what surface we brake on, so braking on ice should be the same as braking normally. So invariably, most of us really don't "get" braking right away. It takes time to learn how to brake properly.

But why does it take time? What exactly are our brains doing when we are going through this learning process? Well, our brains need a set of rules to run on. So in order to develop good braking skills, our brains need to develop a set of principles that describe braking adequately enough to predict how the brakes will perform in different situations. But for those principles to be accurate, they have to match the principles that work in reality. So it's likely that our brains just figure out a rudimentary set of principles that very closely match what we know from science. But then we are making our brains "reinvent the wheel".

It's far more likely that our braking skill will improve much quicker, if we just learn the principles of science, and then have some practise to see them working in real life, so that we can see that they match reality very closely. Then our brains don't need to reinvent the wheel. They can just accept the principles of science, and use them directly as principles to predict braking.

The same is true of most things. If they are things we naturally grasp anyway, then we don't really need to even learn them. We already "understand" them, in that we already know the principles they are based on, and then build on them without having to learn them. We only need a bit of practise and that's it.

But if they are things we don't really grasp naturally, then we have 2 choices:

1) Learn by lots and lots of examples, so many that we reinvent the wheel.

2) Just learn the principles "under" them, and then still do lots of examples. But the first 3 or 4 are just there to show us that the principles work. The rest are there to get us used to remembering the underlying principles and using them.

To learn something not very well, #1 tends to be the most useful. That's fine if we only need it just in a few cases, and with a huge margin of error, such as when we only want to get a 50% mark in an exam, and never use it again.

To learn something very well, #2 tends to be the most useful. That is when we intend to use the concept often, and/or we desire great accuracy in our results. We can do #1 and achieve that. But it will take many, many examples and a lot of reinventing the wheel. The quickest way is simply to learn the principles and then to apply them.

ax^2 + bx + c = 0 (or some algebraic equivalent.... interestingly another way to "understand" the formula is to recognize that ax^4 + bx^2 + c = 0 is another formula where the formula can be applied).

I have to understand some basic things in algebra in order to recognize that this is a place where I can apply the quadratic equation to find x in terms of a, b and c. Once I have made that recognition, I am able to apply the formula and get the solution. That's some level of understanding.

Here, you are learning the underpinning principles of how to apply the equation, what a is, what b is, what c is, what a square root is, etc. In a car, it's like being told that pressing the brake pedal stops the car. You can stop the car. But you really don't understand about stopping distances. You still need a lot of practise.

I have a different sort of understanding when I understand it as a graph in the x-y plane and I recognize that the quadratic provides me with roots (e.g. where the graph crosses the x axis). This would be a geometric interpretation of the issue.

Graphical interpretations are really representations of the derivation, not the formula. The method of derivation is what causes the graph to look like it does, as it is just a scaling and movement of the graph of y=x^2. But without the derivation, you don't see that. You still just get an idea of what the formula does, but not clear enough to really use it well. It's similar to driving the car, and then marking out on the road how long it takes you to stop at different points. You get an idea of stopping distances. But you really don't have any true feel for braking either. You still need practise.

It's not the original geometric interpretation, since initially they found roots but were only concerned with the positive part, a negative length being meaningless. The "Father of Algebra", Al-Khwarizmi, was concerned with areas of squares and rectangles, of course. That would be another understanding.

Here, we are getting closer to the derivation, and that understanding can be extrapolated much easier to the formula and the graphical representation.

My point was that one could use the formula and know how to use it and be able to use it in an assortment of different situations and not understand it. What do I mean by "understand" here?

You mean what you used in the first instance, to know what "a" is, what "b" is, what "c" is, what a square root is. Again, it's like being shown the brake pedal, and being told that pressing it stops the car. You can technically stop the car. It's useful to know that, if you never drive, but might be with someone who has a heart attack, and you have no phone, and so the only way to get them to hospital is for a non-driver to drive them. But if you need to brake often and/or accurately, you will either have to practise braking a heck of a lot, till your brain reinvents the wheel, or you need to learn how brakes and friction work, and then practise to let your brain start using that knowledge. One way is far more efficient than another. It's faster to learn, and it gives us far more accuracy, and it's closer to reality than what your brain imagines is reality. Why make your brain invent a fictional reality, when you can just tell your brain how things actually work in reality?

I don't doubt that this can be useful for those physicists who do so (and I acknowledged that in the previous post, so I'm not sure why you're harping on this point), but I suspect you're overstating the point here. Not all physicists are theoretical physicists, and not all of them push the boundaries of science by finding new applications of formulas through knowing how those formulas were derived. That doesn't mean it doesn't happen, of course; it simply means I think you're overstating your point.

All physicists are theoretical physicists. They make quantitative conclusions about reality based on mathematical formulas and experiments. Saying you don't need to understand the derivation of a formula just to improve your experiments, is rather like attempting to make a better form of brake without knowing the laws of friction. You can improve a brake just by taking an existing brake and testing it. But you're going to make a far better brake if you understand its derivation, and how it came to be built the way it currently is now.

The term "theoretical physicist", however, seems to refer to people who investigate physics using only theory and never experiments, or who spend well over 90% of their time on theory and very little on actually testing experiments. But applied mathematicians do that anyway. So for most intents and purposes, they are applied mathematicians. In past times, when mathematics was lauded, they probably would have called themselves applied mathematicians, like the Bernoullis. But as there is a Nobel prize for physics but not for maths, and as there has been so much greater regard and funding for physics than for mathematics, they got more respect, more accolades, and more funding by describing themselves as theoretical physicists. So I believe it's more a matter of academic politics that they choose to describe themselves as theoretical physicists rather than applied mathematicians.

To give an example, consider how de Broglie took the wave-particle duality of light and applied it to matter and then did some simple algebraic manipulations to find the wavelength of a particle. This took some insight and it certainly "pushed the boundaries of science", yet it didn't require any complex mathematical understanding.

De Broglie studied mathematics as well as physics. He was at the turn of the century, when Euclid's Elements were still taught in geometry, and so derived proofs were a given. He studied at the Sorbonne, and the French highly prize mathematics, especially at the Sorbonne. So there is almost no doubt in my mind that De Broglie knew mathematics far better than I do. He might not have known knot theory. But I am sure that he knew the mathematics that was pertinent to his studies in physics, including the derivation of the formulas that he might have used.

In my limited experience with physicists, most of them aren't concerned with proof in the way that mathematicians are; they are merely concerned with whether or not the math is useful for their given problem.

Yes, and that's why we are still struggling to make significant headway in most areas other than those that theoretical physicists (applied mathematicians) inhabit, such as quantum physics. Think about everything we've figured out in the last 50 years. 90% of it is based on earlier physics which was developed using lots of mathematics. 90% of that is really engineering: taking existing ideas we already know and trying to make something useful out of them, like semiconductors or microwave ovens. If we think about it, we've got an explosion of new technologies coming out, which is why the computer has overtaken our lives. But we seem to have very little in the way of really new concrete physics.

Of course, I could be wrong. But in my view, we are spending thousands or millions of times what we used to in the sciences, and yet we aren't getting thousands or millions of times as significant results as we used to, even 150 years ago. That to me says that science is becoming much less efficient. But considering that mathematics is, IMHO, really all about taking things we can do in the real world and making them much more efficient, or taking things we cannot yet do and showing how we can do them, and that we don't seem to be all that interested in mathematics, that's not really surprising. It's rather like employing an efficiency expert, and listening to him when it comes to how many employees we need, but not really paying any attention to how our company has to run to make it more efficient.

Mathematics is all about improving efficiency of existing methods, and making the impossible possible. It just makes sense to me, to make that the priority, as then we get a lot more bang for our buck. We can do 1 thousand experiments that cost $1 million, and learn 1 thousand things. Or, we can do 1 thousand experiments, that cost $1 million, and learn 1 million things.

I don't know what it is in the UK, but that is not true here at all. A solid background in mathematics would be required for someone becoming an academic researcher in statistics, I'm sure, but many organizations have a need for statisticians, and they don't need that mathematical background.

In the UK, statistics jobs routinely require a degree in mathematics, or a degree in statistics, which covers the same material as the statistics courses in a mathematics degree, with all the derivations, only that statistics degrees focus much more on the statistics courses and cover them in much greater detail. The reason is simple: as the saying goes, "there are lies, damned lies, and statistics." If you understand the derivations of the formulas, then you know what they mean. If you don't, you can easily be misled by statistics, and the very last thing you want is a statistician being misled by statistics.

The actuary profession is an example. As far as I can recall, the only math required on the examination was some multivariable calculus (which of course doesn't require any proof work at all).

A friend on my course only took maths as a degree because he planned to be an actuary. He was a mature student at 23. Before he applied, he chose to be an actuary. He found out that most actuaries are maths graduates, and the Institute of Actuaries prefers maths grads. I later looked into it, and met someone who got in to study to be an actuary without a degree. He told me that it was hell on Earth to even be considered. All in all, the easiest and quickest way to become an actuary, and the type of student they prefer, is to get a maths degree.

Probability comes from common sense, and I do mean ALL of probability comes from common sense. It's one of the few areas of maths that have basically been made up from what we understand, and not from simple observations or from prior proofs. For instance, we all know that the probability of a coin landing heads is 1/2. But how do we know that? Is there some theorem that proves this beyond doubt? Nope. Do we all sit there and throw coins up 1 million times, and count the number of times they land heads? Nope. We say that if both sides are equally likely to land, then there are 2 ways the coin can come down, so the probability of any one of them is 1/2. It's based on human intuition, and human understanding.

What do you mean by "common sense"? (This is more of a rhetorical question...)

I'm not following how you're deriving that the probability is 1/2 here.

Chuck a coin in the air. We assume that it will land on one side or the other. We assume that both sides are pretty much the same. So it's just as likely to come down on heads as on tails. So if we toss the coin 10 times, we expect 5 of those times to be heads and 5 to be tails. We know that sometimes we get a "run" of unlikely possibilities, such as 10 heads in a row. But we expect that the more times we throw it, the more unlikely that will be. So over time, we expect in 1000 throws, 500 will end up heads and 500 tails. 5/10 = 1/2. 500/1000 = 1/2. We expect that with 2 equally likely possibilities, any one of those possibilities will happen half the time. So the chance of any one possibility happening on a single toss is half, 1/2.
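A minimal simulation makes this frequency intuition concrete (the function name, fixed seed, and toss counts below are just illustrative choices):

```python
import random

def heads_fraction(n_tosses, seed=0):
    """Simulate n_tosses fair coin tosses and return the fraction landing heads."""
    rng = random.Random(seed)  # fixed seed so the run is repeatable
    heads = sum(rng.random() < 0.5 for _ in range(n_tosses))
    return heads / n_tosses

# The observed fraction drifts toward 1/2 as the number of tosses grows,
# just as the 5/10 and 500/1000 expectations above suggest.
for n in (10, 1000, 100000):
    print(n, heads_fraction(n))
```

Long runs of heads still occur, but their influence on the overall fraction shrinks as the number of tosses grows.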

This is primarily because probability is axiomatic and I think you're assuming things that are implicit in the axioms without explicitly stating them.

For example, they can be stated as found here: http://mathworld.wolfram.com/ProbabilityAxioms.html

It might be called axiomatic here. But it's really intuitive.

For instance, take the probability 0, which means something never happens. We actually mean that the probability of a coin landing on "towels" is 0, in that if there is no symbol of a towel on a coin, then the number of times it will land on a "towel" is zero, no matter how many times we throw it. The number of times an event with probability 0 will happen is 0 times, no matter how many times the trial takes place.

It's all intuitive. It's only axiomatic, in that it's currently not proved by a theorem in a course on probability, and generally, we refer to something as an axiom if it is not proved to us. Something in one course is called an axiom, even if it is proved in another course, because it's still not proved in the course we are studying. That's what we generally refer to as "axioms" in mathematics.

As you note, there are only two relevant events and they are equally probable (the coin is "fair") and mutually exclusive. The fact that the probability that the coin lands on heads is 1/2 follows directly from these assumptions as well as the axioms.

I have always been curious to see how "fair", say, a US quarter is. Since both sides have different markings ("heads" and "tails"), they are not symmetric. There might be some small, yet statistically significant, difference from 1/2 which one could test provided that a large enough sample were taken. But that's just a musing of mine.

I heard a mathematician worked this out, using applied maths. There is a small probability that the coin will land on its side. Also, any coin is slightly more likely to land on one side, which I think was tails. The head on the heads side actually has a little more volume in the design of the head. So it has a little more weight on the heads side. Gravity pulls that side down a little more. Wind resistance cuts a lot of that out, and it's only a tiny difference. But it's enough that there is a slightly greater probability for the heads side to land face down. That means that on average, out of thousands of times, a coin will actually land tails up more often than heads up.

I might be wrong about which side is more likely. But I do remember hearing that one side is very slightly more likely than the other. Without knowing the derivation, or working it out myself, I really cannot be sure.

Your Excel example made me think of a better one than the curve-fitting algorithm I was trying to use to make my point. In the earlier days of computers/calculators, most software math routines were calculated in base two and presented in base ten. The first microcomputer accounting software might have been a bit of a nightmare to accountants & bookkeepers who didn't know this and got truncation errors from the conversion. Needless to say, the "wild west" of microcomputer accounting didn't last long, and programming languages were very quick to introduce binary-coded decimal math as a programming option, to keep the accountants from burning the programmers at the stake.

Picture Joe bookkeeper with his new computerized accounting system during this "wild west" period. What does he know of binary math, or BCD? Which type is his accounting software using? What's the difference? All he knows is that he used to be very proud of balancing his books to the penny, and since he started using that damn computer, he was always a penny out (and he was going nuts trying to figure out where it got "lost"). Here is a situation where knowing about relevant theory (like binary/decimal conversion and truncation error) in advance might have saved him a lot of grief. Therein lies my point; while we might be able to get by letting the machines do the thinking for us, we should know enough of the theory ourselves to understand the results and not just read them off as though they are the word of God.
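Joe's missing penny is easy to reproduce. The sketch below (the 10-cent ledger is an invented example) contrasts binary floating point, where 0.10 has no exact base-2 representation, with Python's decimal module, which plays the role BCD played in early accounting software:

```python
from decimal import Decimal

# A ledger of one thousand 10-cent entries, summed in binary floating point:
# each 0.10 is stored as a nearby base-2 fraction, and the tiny errors accumulate.
float_total = sum([0.10] * 1000)
print(float_total)  # close to, but not exactly, 100.0

# The same ledger in exact decimal arithmetic balances to the penny.
decimal_total = sum(Decimal("0.10") for _ in range(1000))
print(decimal_total)
```

This is exactly the kind of discrepancy that would leave a ledger "a penny out" with no visible arithmetic mistake anywhere.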

I think in this case understanding still isn't required; it's a sufficient condition for dealing with this, of course, but it's not a necessary one.

At least from your scenario, Joe recognizes that his books aren't balanced. This recognition requires understanding of his accounting practices but doesn't require understanding of how the accounting software works.

Secondly, the programmers, since they do know the limitations of the software, could easily provide information with regard to the limitations of the software. I can know what the limitations are without actually understanding how the software works.

I think as a practical matter, understanding is good for extending into new territory but beyond that I think it's terribly cumbersome unless it's coupled with some good 'ol memorization, applying formulas, etc.

I like to think I devote much of myself to understanding topics as opposed to memorizing, learning formulas, learning to use them, etc. I can solve problems that others can't, but I have to "reinvent the wheel" every time I do, and it's rather clumsy in some respects. It would certainly be much more convenient to have a ready-made formula for use which turns a 2 hour problem into a 2 minute problem.

I think as a practical matter, understanding is good for extending into new territory but beyond that I think it's terribly cumbersome...It would certainly be much more convenient to have a ready-made formula

Touché...You're right.

I think I was bemoaning the "loss" in the battle between man and machine. It has been my experience that since calculators came into fashion, people in general have forgotten how to do simple math in their head and can't even work out their share of the tip without a calculator. I was left feeling rather sad about the human condition. In simply accepting and trusting answers from an outside source, I feel in general we are losing the capacity for independent and critical thought. More and more, we seem to accept things on "faith." Most unfortunate of all...the loss manifests itself in much more than mathematics.

That's what we mean by "understand". We mean that the other person has "stood" by the common meaning of each word, that each represents a concept, "pass", "us", "the" and "salt" and used them to construct a different concept, that is the combination of them all, in a certain way. So the person doesn't really "understand" the sentence. The other person "stands" by the meanings of each word, that "under"lie (lie "under") the overall concept being expressed.

I'm not sure how far I want to venture off into linguistics but I have difficulty with what you say here. I don't think a sentence is reduced to words being ordered in a certain way where each word has a "common meaning". Meaning very well may be holistic, contextual and so on. Part of what I was attempting to point out is that we all are using the word "understand" but we all may mean something different by it, and that we would be talking past each other if we thought that we are all using some "common meaning" of the word "understand".

It's far more likely that our braking skill will improve much quicker, if we just learn the principles of science, and then have some practise to see them working in real life, so that we can see that they match reality very closely. Then our brains don't need to reinvent the wheel. They can just accept the principles of science, and use them directly as principles to predict braking.

Why is it "far more likely"? My intuition tells me the opposite. I suspect that since, as you claim, there are two ways of understanding (corresponding roughly to "know that" and "know how") then there may very well be different brain processes involved in these. The impact of "know that" on "know how" might be rather limited. Know how is developed through practice (as you note) but of technique, which does not require much in the way of "know that". Technique may be developed in various ways but learning technique may simply be a matter of training and mimicking.

We might think of this as the distinction between a "science" and an "art" and we might think of the old practice of craftsmanship with a master and student. The master takes on the student and shows him the practices of the tradition.

Here, you are learning the underpinning principles of how to apply the equation, what a is, what b is, what c is, what a square root is, etc. In a car, it's like being told that pressing the brake pedal stops the car. You can stop the car. But you really don't understand about stopping distances. You still need a lot of practise.

I'm not sure what your point is here. Your example talks about an "understanding" (knowing the relevant principles of physics and how car brakes are designed) or "science of braking" as contrasted with the "art of braking", which requires practice.

Stopping distance itself could be "understood" in both ways here and needn't be confined to one. I can understand the physics involved and make a calculation as to the stopping distances, or I can develop experience through driving and encountering different situations (rain, snow, dirt road, etc.) and learn what appropriate stopping distances are for different situations from past experience. This latter could be "felt" without any specific idea of calculation (e.g. without knowing that you need to stop in such and such a distance in such and such conditions). Characterized crudely, one can understand in one sense but not in the other, but there is certainly room for overlap here.

Graphical interpretations are really representations of the derivation, not the formula. The method of derivation is what causes the graph to look like it does, as it is just a scaling and movement of the graph of y=x^2. But without the derivation, you don't see that. You still just get an idea of what the formula does, but not clear enough to really use it well. It's similar to driving the car, and then marking out on the road how long it takes you to stop at different points. You get an idea of stopping distances. But you really don't have any true feel for braking either. You still need practise.

I don't understand what you mean here at all. How is the "graphical interpretation" a "representation of the derivation"? And why is it not a representation of the formula?

Let f(x) = ax^2 + bx + c

Graph the function and find where the graph intersects zero (the x-axis). That represents the formula.

This tells you what you are finding when you write the solutions to the quadratic. It doesn't tell you how to calculate the roots, of course, but it's part of understanding the formula, because it tells you information that will help you understand when to apply it.
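That relationship between the formula's roots and the graph's x-axis crossings can be sketched in a few lines (the function name and sample coefficients are illustrative):

```python
import math

def quadratic_roots(a, b, c):
    """Real roots of a*x^2 + b*x + c = 0, i.e. where the parabola crosses y = 0."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return []  # the parabola never touches the x-axis
    r = math.sqrt(disc)
    return sorted([(-b - r) / (2 * a), (-b + r) / (2 * a)])

# f(x) = x^2 - 5x + 6 crosses the x-axis at x = 2 and x = 3.
roots = quadratic_roots(1, -5, 6)
print(roots)  # [2.0, 3.0]
```

The empty-list case is the graphical story too: a negative discriminant means the parabola never reaches y = 0.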

All physicists are theoretical physicists.

No, they aren't. Theoretical physicists are concerned with developing theory. Not all physicists are concerned with developing theory. Some are concerned with applying theory. Some are concerned with constructing experiments to test theory.

They make quantitative conclusions about reality based on mathematical formulas and experiments. Saying you don't need to understand the derivation of a formula just to improve your experiments, is rather like attempting to make a better form of brake without knowing the laws of friction. You can improve a brake just by taking an existing brake and testing it. But you're going to make a far better brake if you understand its derivation, and how it came to be built the way it currently is now.

I don't understand your analogy. You're basically requiring that a physicist be a pure mathematician. The physicist doesn't need to be. They just need mathematics to apply to a given area. That doesn't mean that physicists can't or don't understand the mathematics; many do. But I don't see how it's essential.

De Broglie studied mathematics as well as physics. He was at the turn of the century, when Euclid's Elements were still taught in geometry, and so derived proofs were a given. He studied at the Sorbonne, and the French highly prize mathematics, especially at the Sorbonne. So there is almost no doubt in my mind that De Broglie knew mathematics far better than I do. He might not have known knot theory. But I am sure that he knew the mathematics that was pertinent to his studies in physics, including the derivation of the formulas that he might have used.

Your claim was that big breakthroughs in physics required understanding how to derive formulas. I pointed out that his derivation didn't require that understanding.

Yes, and that's why we are still struggling to make significant headway in most areas other than those that theoretical physicists (applied mathematicians) inhabit, such as quantum physics. Think about everything we've figured out in the last 50 years. 90% of it is based on earlier physics which was developed using lots of mathematics. 90% of that is really engineering, taking existing ideas we already know, and trying to make something useful out of it, like semiconductors, or microwave ovens. If we think about it, we've got an explosion of new technologies coming out, which is why the computer has overtaken our lives. But we seem to have very little in the way of really new concrete physics.

What do you mean by "new concrete physics"? My guess is that this is probably going to have more to do with the difference, in the Kuhnian sense, between "revolutionary" and "normal" science.

Chuck a coin in the air. We assume that it will land on one side or the other. We assume that both sides are pretty much the same. So it's just as likely to come down on heads as on tails. So if we toss the coin 10 times, we expect 5 of those times to be heads and 5 to be tails. We know that sometimes we get a "run" of unlikely possibilities, such as 10 heads in a row. But we expect that the more times we throw it, the more unlikely that will be. So over time, we expect in 1000 throws, 500 will end up heads and 500 tails. 5/10 = 1/2. 500/1000 = 1/2. We expect that with 2 equally likely possibilities, any one of those possibilities will happen half the time. So the chance of any one possibility happening on a single toss is half, 1/2.

So now you are defining the probability in terms of the expectation value? Fundamentally this is a binomial probability function, and the expectation value requires knowledge of the probability p. The value of p for the coin toss can be inferred from the axioms as well as the assumptions about what we mean by a coin toss (that there are exactly two outcomes, that both outcomes are equally likely, that each trial is independent, etc.).
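The binomial model referred to here can be written out directly; a minimal sketch (the function name is illustrative), showing that the expectation recovers the "5 heads in 10 tosses" figure:

```python
from math import comb

def binomial_pmf(k, n, p):
    """P(exactly k heads in n independent tosses, each landing heads with probability p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# The expectation sum(k * P(k)) over all outcomes equals n*p.
expected_heads = sum(k * binomial_pmf(k, 10, 0.5) for k in range(11))
print(expected_heads)  # 5.0
```

Note that p = 1/2 goes in as an assumption; the binomial machinery tells you the consequences of that assumption, not its justification.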

It might be called axiomatic here. But it's really intuitive.

I don't know what you mean by "intuitive". To me, "intuition" is influenced by many things, including human psychology, cultural and linguistic factors, prior experiences, etc. There is no indication that all humans have the same "intuition". In fact, there's a discipline within analytic philosophy (since there are a great number of analytic philosophers who, in practice, seem to think that intuition is a good indication of "truth") which is called "experimental philosophy" and one of their concerns is to see what people really "intuitively" believe.

For instance, take the probability 0, which means something never happens. We actually mean that the probability of a coin landing on "towels" is 0, in that if there is no symbol of a towel on a coin, then the number of times it will land on a "towel" is zero, no matter how many times we throw it. The number of times an event with probability 0 will happen is 0 times, no matter how many times the trial takes place.

Intuitively that may be what people think it means; that would be an instance of "not understanding" probability. Suppose I select, at random, some real number. The probability that I'll select any given number is 0, yet I'm going to select one number (and there's a 100% probability that it will be transcendental.)

The converse of what you say is true - if it's not possible it has a probability of 0.

In fact, many results in probability and statistics are counter-intuitive - the Monty Hall problem or the Birthday paradox (it's a paradox since it violates people's "intuitions" of what they'd expect).

As you note, there are only two relevant events and they are equally probable (the coin is "fair") and mutually exclusive. The fact that the probability that the coin lands on heads is 1/2 follows directly from these assumptions as well as the axioms.

So what does "intuition" have to do with this?

I heard a mathematician worked this out, using applied maths. There is a small probability that the coin will land on its side. Also, any coin is slightly more likely to land on one side, which I think was tails. The head on the heads side actually has a little more volume in the design of the head. So it has a little more weight on the heads side. Gravity pulls that side down a little more. Wind resistance cuts a lot of that out, and it's only a tiny difference. But it's enough that there is a slightly greater probability for the heads side to land face down. That means that on average, out of thousands of times, a coin will actually land tails up more often than heads up.

I might be wrong about which side is more likely. But I do remember hearing that one side is very slightly more likely than the other. Without knowing the derivation, or working it out myself, I really cannot be sure.

If you can find the study I'd be interested. Was it theoretically based or empirically based?

I'm also doubting that the coin could land on its side (provided there isn't a table or wall to prop it up.) I'd want to see the math on how a spinning coin plus the force of gravity acting on it would result in it landing on its side.

Math, weird? If math is weird, then logic and reason are weird too. It was said a long time ago: "every mathematician is a philosopher, but not every philosopher is a mathematician". Well... math, any math, takes brains and logic.

Why is it "far more likely"? My intuition tells me the opposite. I suspect that since, as you claim, there are two ways of understanding (corresponding roughly to "know that" and "know how") then there may very well be different brain processes involved in these. The impact of "know that" on "know how" might be rather limited. Know how is developed through practice (as you note) but of technique, which does not require much in the way of "know that". Technique may be developed in various ways but learning technique may simply be a matter of training and mimicking.

We might think of this as the distinction between a "science" and an "art" and we might think of the old practice of craftsmanship with a master and student. The master takes on the student and shows him the practices of the tradition.

1) I heard this POV when I was young. It was a problem for me, for nearly all the things that most people find easy to understand, I found hard. Nearly all the things that most people find really hard to understand, I found really easy, but logical. What everyone else called an "art", that you either "got" intuitively or didn't, I approached as a "science", that it was just logical, and anyone could understand it. What everyone else thought of as a science, that it was logical, and everyone could understand, I thought of as an art, that you either "got" it or you didn't, and I didn't, not at all. I really felt like I was totally unsuited for this world. Everyone else thought the same about me.

Nevertheless, over time, I found that when it came to the things that I had always seen as an art, something that one either "got" or didn't get, that every so often, someone would tell me something about it, and then suddenly a light would come on in my head, rather like the cartoons when Tom or Jerry gets an idea. From then on, I found that I was able to act in those areas, even intuitively. It was like knowledge improved my intuition immensely. I found this happened again and again and again, so often that if I could not grasp something intuitively, I would seek out any germ of knowledge, because anything might give me that flash of insight that would activate my intuition.

Then, when it came to others, I found exactly the same was true. There was much that others did not understand at all, that I took for granted. When I was younger, I found this incredibly difficult, as the things I was excited about, no-one seemed to understand at all. But I did find one thing: if I explained the stuff that I had learned easily, but that others found impossibly hard to grasp, in terms of things they DID already understand, they DID "get" those things, and suddenly, they were able to develop in those areas intuitively. After a few months, they even took it for granted that these things were easy, even though only a few months before, they had declared them impossible to grasp the basics of.

What I found amazing, though, was that when I was successful in putting across an idea in terms of other people's interests, things they already understood, people would tell me that in 5 minutes, I had explained to them what no-one had been able to teach them in over 10 years, even though their teachers had really put in incredible efforts.

However, when others could not relate their prior experiences to what I was saying, no matter how clear I was, no matter how perfectly the idea corresponded to real life, they were unable to accept it. They saw it as purely arbitrary, based on axioms plucked out of thin air, as if it were castles built on clouds. They just could not think of it as anything more than purely hypothetical questions that had no foundation or link to reality.

2) I ended up concluding that there are no "arts", and no "sciences". Each subject is as much an art as a science, and vice versa, since how someone saw any subject differed from person to person. Whether a person said a subject was an art or a science was dependent on their personal experiences, and nothing more.

I also ended up concluding that each person is not limited to seeing any subject as an art, or a science, that one can change the perspective one uses, as easily as one can change the hand one plays tennis with. One can take an artistic approach to any subject, demanding that one only acts from inspiration and intuition, but not from method or from reason. Or, one can approach any subject scientifically, being methodical and using reason.

But to switch perspective comfortably, one's approach must be based on what one already considers truth. If it is just a set of rules that do not build on what one already accepts as true, then the mind sees it as arbitrary, no matter how many times one is shown that it matches reality. However, if it is related to what one already knows, then one seems to accept that as truth.

So, when you say "my intuition tells me", I hear, "my experiences have given me a perspective on the world, and in that perspective, the following seems obvious". It says a lot more about your personal experiences of life, your background, than anything else. But it doesn't say much about reality.

3) What was really interesting to me, was that this wasn't just limited to understanding or not understanding a subject either. Even within a subject, I found the same. In my driving lessons, some things were more naturally intuitive to me, and some was not. But even in the things that were naturally intuitive to me, I would naturally reach a limit in my improvements from intuition and practice alone. I would hit a brick wall. When that happened, practice just would not take me any farther.

However, when my driving instructor would then explain to me a bit about how the car worked, in particular how it related my actions to what the car did as a result, then my intuition took a huge leap forwards. For example, I was really not all that good at bay parking. Then my driving instructor explained to me that when you turn the wheel left, the car doesn't simply move left, but turns left in a circle. I still wasn't all that good at bay parking. But when I disengaged my brain, and just parked intuitively without thinking, I'd park almost perfectly. He said that if I did that every time, I couldn't improve on it. This wasn't a one-off, either. It happened again and again in my driving lessons.

Then I began to remember that this had happened many times in my past, with things I was naturally good at, like maths and computers, and with things I was naturally not good at, like cooking. I'd improve intuitively just with practice, but then I'd hit a brick wall, and I couldn't seem to get farther. Then I'd come across an explanation of how things worked, something that made it much clearer exactly how my actions affected the results, and suddenly, with no effort on my own, my actions would take a flying leap when I wasn't thinking about it, and just went purely with my intuition.

I also ended up concluding that intuition is improved by our knowledge. But only when it gives our subconscious self greater insight into how to better relate what we do to what happens, so that it can better tailor what we do, to what we want to achieve as a result. But even then, it doesn't seem to happen, unless we believe it wholeheartedly, and that only seems to happen when we relate that understanding to what we already accept as definitely true.

So in a way, science helps art, and art helps science. It helps me to think of the process called intuition as the subconscious doing all the work for us. It uses rules of thumb, rules that tell it how our actions relate to the results, to work out for us what to do to best achieve our intended results. But it has to get those rules from somewhere. It gets them from our memories. When we learn something from practice, our subconscious takes all our prior experiences and analyses them, in order to make rules of practice.

But it can be painfully slow to learn from our experiences. We can add to that by simply introducing existing rules of thumb that we know work very well into our memories, by simply learning them. All our memories are stored equally, so our subconscious has access to all of them, including the rules of thumb that we have learned. However, it doesn't just buy into anything that we've heard. We can get things wrong. Only things we are extremely sure of seem to be things that the subconscious uses. Also, if they don't relate to what we do, then they are no help to the subconscious in directing our actions. So both must be true for our subconscious to make use of them.

However, if they are, then the subconscious also tests them out, by performing actions and seeing if the results accord with the predictions made by our rule of thumb. If they are confirmed, even if they are largely in error, then the subconscious simply goes back to using its experiences to improve them, just like it does with the rules of thumb it has made itself. But, as it can take a long time to find a pattern, handing a pattern to someone gives you a serious advantage, and speeds up the learning process exponentially.

All physicists are theoretical physicists.

No, they aren't. Theoretical physicists are concerned with developing theory. Not all physicists are concerned with developing theory. Some are concerned with applying theory. Some are concerned with constructing experiments to test theory.

Sorry. I wasn't aware of why there is a difference. My point was that whatever a physicist's goal, he is always relying on theories to achieve those goals.

They could claim that the theories have been experimentally proved. But it doesn't matter if you've tested a theory a billion times: there is no guarantee it will still be true in the next second. So at most, it's an intuitive leap to suggest that the theory will still hold in the next second, one that has not yet been tested in practice, because once you've confirmed that the theory is still true in the next second, that second is gone. So whenever you construct an experiment, you're always coming back to relying on theory, just not the theory you are currently testing.

They make quantitative conclusions about reality based on mathematical formulas and experiments. Saying you don't need to understand the derivation of a formula in order to improve your experiments is rather like attempting to make a better brake without knowing the laws of friction. You can improve a brake just by taking an existing brake and testing it. But you're going to make a far better brake if you understand its derivation, and how it came to be built the way it is now.

I don't understand your analogy. You're basically requiring that a physicist be a pure mathematician. The physicist doesn't need to be. They just need mathematics to apply to a given area. That doesn't mean that physicists can't or don't understand the mathematics; many do. But I don't see how it's essential.

It's a little different to how you see it. A pure mathematician is normally someone who develops new theories in pure mathematics. People who know the derivation of mathematical theorems used to be people who simply knew maths.

However, it might help to understand the problem if I give you an example in physics. Newton's Second Law of Motion is famously quoted as F = ma. However, what he actually said was that force is proportional to the rate of change of momentum. That has consequences for calculating the fuel needed to escape the Earth's gravitational pull. Most people would simply say F = ma, so the required energy is whatever generates the force to bring a spacecraft to escape velocity. But it isn't that simple: you have to add the mass of the fuel itself, so most people underestimate the amount of fuel needed to bring the craft to escape velocity. And even then it isn't that simple. As the spacecraft gets faster, more of the fuel is used up, so the mass decreases over time, and the amount of fuel needed to make the ship go even faster decreases over time as well. Most people don't take this into account either, so they would then over-estimate the amount of fuel needed.

So to know how much fuel a spacecraft needs to reach escape velocity, NASA has to employ people who understand the derivation of the laws of physics.
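The variable-mass form of Newton's second law leads to the Tsiolkovsky rocket equation, delta_v = v_e * ln(m0 / mf), which captures exactly the effect described above: the fuel has to lift itself, but the ship also gets lighter as it burns. A minimal sketch, with made-up illustrative numbers (not NASA figures):

```python
import math

def fuel_mass_needed(dry_mass_kg, delta_v_ms, exhaust_velocity_ms):
    """Fuel mass required to give `dry_mass_kg` a velocity change of
    `delta_v_ms`, from the Tsiolkovsky rocket equation:
        delta_v = v_e * ln(m0 / m_f)
    which follows from F = dp/dt with mass decreasing as fuel burns.
    """
    mass_ratio = math.exp(delta_v_ms / exhaust_velocity_ms)  # m0 / m_f
    return dry_mass_kg * (mass_ratio - 1.0)

# Hypothetical numbers: Earth escape velocity ~11.2 km/s, and a chemical
# engine with an exhaust velocity of ~4.4 km/s.
fuel = fuel_mass_needed(dry_mass_kg=1000.0, delta_v_ms=11200.0,
                        exhaust_velocity_ms=4400.0)
print(round(fuel))  # far more fuel than the payload mass itself
```

Note how the fuel requirement grows exponentially, not linearly, with the desired velocity change, which is why a constant-mass F = ma estimate goes so badly wrong.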

The same is true in mathematics. Many times, our intuition tells us one thing about the rules we learn in mathematics, based on our personal experiences. But just as in physics, it is often wrong. If we know the derivation of a mathematical formula, then we know when it will and will not apply. We don't even need to work it out in every situation: if we know the derivation, our intuition will normally work it all out for us, and will tell us when we need to adjust our calculations.

But if we don't have those derivations stored in our memories, or we've read them, but not understood them, then our intuition won't use them. Then we're no better than someone who knows the formulas of physics, but doesn't understand the physics behind them.

Your claim was that big breakthroughs in physics required understanding how to derive formulas. I pointed out that understanding his results doesn't require you to know how to derive the formulas.

To understand what De Broglie said, you don't need to understand the maths behind it. You don't even need to understand physics at all to understand what De Broglie said. But it is at least a correlation that the man who made the discovery was someone who knew the maths behind it. It might be coincidence, or it might be causation.

We can use scientific methods to determine which is true. A method that is quite useful for this, is the Baconian Method:

The Baconian method consists of procedures for isolating the form nature, or cause, of a phenomenon, including the method of agreement, method of difference, and method of concomitant variation.

Bacon suggests that you draw up a list of all things in which the phenomenon you are trying to explain occurs, as well as a list of things in which it does not occur. Then you rank your lists according to the degree in which the phenomenon occurs in each one. Then you should be able to deduce what factors match the occurrence of the phenomenon in one list and don't occur in the other list, and also what factors change in accordance with the way the data had been ranked. From this Bacon concludes you should be able to deduce by elimination and inductive reasoning what is the cause underlying the phenomenon.

Thus, if an army is successful when commanded by Essex, and not successful when not commanded by Essex: and when it is more or less successful according to the degree of involvement of Essex as its commander, then it is scientifically reasonable to say that being commanded by Essex is causally related to the army's success.

http://en.wikipedia.org/wiki/Baconian_method
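As a toy illustration, with entirely made-up numbers, the method of concomitant variation in the Essex example can be sketched as a check that the outcome rises and falls together with the candidate cause:

```python
def concomitant_variation(presence, outcome):
    """Sketch of Bacon's method of concomitant variation: rank the cases
    by the degree to which the candidate cause is present, and check
    whether the outcome varies in step with it."""
    paired = sorted(zip(presence, outcome))          # rank by the cause
    outcomes_ranked = [o for _, o in paired]
    return outcomes_ranked == sorted(outcomes_ranked)  # outcome in step?

# Hypothetical data: degree of Essex's involvement vs. the army's success.
involvement = [0, 1, 2, 3]
success     = [2, 4, 6, 9]
print(concomitant_variation(involvement, success))  # True: varies together
```

This is only the "concomitant variation" leg of the method, of course; Bacon's full procedure also demands the tables of presence and absence before induction is allowed.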

When we look at the great discoveries in physics, such as the ones made by Newton, or Einstein, or Bernoulli, I've noticed that nearly all of them had a significant mathematical background. Even Ben Franklin did a lot of investigations into mathematics. We do see a few discoveries here and there by people who didn't know maths, like Edison. But it does seem to me that the greatest discoveries of physics were mostly made by people who knew maths very, very well, and the most prolific of physicists, the physicists who came up with lots and lots of developments, seem to mostly be people who had a very strong background in mathematics. That's how it seems to me.

So the Baconian Method suggests to me that it's not just coincidence that De Broglie came up with his observations and had a strong background in maths, but it's actually a causation. Strong maths background => much greater understanding of physics => more and greater discoveries in physics.

What do you mean by "new concrete physics"? My guess is that this probably has more to do with the difference, in the Kuhnian sense, between "revolutionary" and "normal" science.

Scientific discoveries that make for paradigm shifts, like gravity and relativity, do seem to have drawn the short straw lately. It seems to me that most of the breakthroughs happening in physics nowadays are natural consequences of previous discoveries in physics, particularly the early breakthroughs in quantum mechanics and relativity. When I hear of breakthroughs in physics, it's either theories that promise a lot but remain unverified even after decades, such as string theory, or inventions that again promise a lot but require hugely expensive experiments and make very little headway even after decades of research, such as nuclear fusion reactors. Probably the biggest scientific invention in my lifetime is the computer, and after that the mobile phone; both are examples of technology and engineering. About the biggest thing in physics that I've heard of lately is the LHC. But so far it hasn't produced results. Of course, it hasn't been around long. But Fermilab runs a slightly smaller version in the USA, and I hear precious little coming out of there.

I've wondered why there is so much output in engineering and technology that has transformed our lives, but so little that I see from physics, when in previous centuries we learned so much from it. I've also noticed that mathematics education has been in decline since the 1950s, whereas before then it was a given that university students had to know mathematics at a depth uncommon even among strong mathematicians today. I doubt it is coincidence. It seems to me, again using the Baconian Method, that developments in physics are in direct proportion to the depth of physicists' knowledge of mathematics.

So now you are defining the probability in terms of the expectation value?

Other way around. The expectation of a quantity is calculated from the probabilities of its possible values. However, that's statistics, not probability. Statistics is applied mathematics, the mathematical equivalent of non-theoretical physics.
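That direction of calculation can be sketched in a few lines: the expectation is just the probability-weighted sum of the possible values.

```python
def expectation(values_with_probs):
    """Expectation computed from the probabilities of the possible values:
    E[X] = sum of x * P(X = x) over all possible values x."""
    total_prob = sum(p for _, p in values_with_probs)
    assert abs(total_prob - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(x * p for x, p in values_with_probs)

# A fair six-sided die: each face has probability 1/6.
die = [(face, 1 / 6) for face in range(1, 7)]
print(expectation(die))  # approximately 3.5
```

The probabilities come first and the expectation is derived from them, never the other way around.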

I don't know what you mean by "intuitive". To me, "intuition" is influenced by many things, including human psychology, cultural and linguistic factors, prior experiences, etc. There is no indication that all humans have the same "intuition". In fact, there's a discipline within analytic philosophy (since there are a great number of analytic philosophers who, in practice, seem to think that intuition is a good indication of "truth") which is called "experimental philosophy" and one of their concerns is to see what people really "intuitively" believe.

Exactly. There are even mathematicians who have written books on how probability is WRONG. Probability is based on our experiences of real life. Without that, it would be just another possible mathematical model, one with nothing to do with the chances of a coin landing on heads.

Intuitively that may be what people think it means; that would be an instance of "not understanding" probability. Suppose I select a real number at random. The probability that I'll select any given number is 0, yet I'm going to select some number (and with probability 1 it will be transcendental).

Probability is pure mathematics. It doesn't have to apply to real life. It provides tools for use in applied mathematics like statistics, to be used only when it fits real life.

In fact, many results in probability and statistics are counter-intuitive - the Monty Hall problem, or the Birthday paradox (it's called a paradox because it violates people's "intuitions" about what they'd expect).
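A quick simulation (a sketch, not taken from any textbook) shows just how counter-intuitive the Monty Hall result is: switching doors wins about two thirds of the time, while staying wins only a third.

```python
import random

def monty_hall(trials=100_000, switch=True):
    """Simulate the Monty Hall problem and return the win rate."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)   # door hiding the car
        pick = random.randrange(3)  # contestant's first choice
        # The host opens a door that is neither the pick nor the car
        # (if several qualify, this host always opens the lowest-numbered,
        # which doesn't change the probabilities).
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

random.seed(0)
print(round(monty_hall(switch=True), 2))   # close to 2/3
print(round(monty_hall(switch=False), 2))  # close to 1/3
```

Most people's intuition says the two remaining doors must be 50/50, which is exactly the kind of intuition the axioms overrule.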

As you wrote, intuition is dependent on people's individual experiences. Some of those experiences match reality. Some don't, like women who think ALL men only want sex.

As you note, there are only two relevant events and they are equally probable (the coin is "fair") and mutually exclusive. The fact that the probability that the coin lands on heads is 1/2 follows directly from these assumptions as well as the axioms.

So what does "intuition" have to do with this?

Intuitively, each side has equal probability. So intuition dictates that in 10 coin tosses, each side will come up equally often, 5 each. The same is true for any number of tosses: each side gets half.
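The exact figure undercuts that intuition: in 10 fair tosses, a perfect 5-5 split happens less than a quarter of the time. The calculation is a one-liner.

```python
from math import comb

def prob_exact_heads(tosses, heads):
    """Probability of exactly `heads` heads in `tosses` fair coin tosses:
    C(n, k) / 2**n."""
    return comb(tosses, heads) / 2 ** tosses

# Intuition says 10 tosses "should" give 5 heads, but the exact split
# occurs with probability 252/1024, about 24.6% of the time.
print(round(prob_exact_heads(10, 5), 3))  # 0.246
```

Equal probability per toss guarantees a half split only in expectation, not in any particular run.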

If you can find the study I'd be interested.

It was quoted on a programme. But I've forgotten it. I tried to look it up online earlier. But I didn't find it.

Was it theoretically based or empirically based?

It was deduced theoretically using the laws of physics. But from what I understand, it has been verified empirically.

I'm also doubting that the coin could land on its side (provided there isn't a table or wall to prop it up.) I'd want to see the math on how a spinning coin plus the force of gravity acting on it would result in it landing on its side.

It happens. I often used to use coin tosses to decide things between friends, and I've seen it happen quite a lot, a lot more than I would ever have expected intuitively.

At least from your scenario, Joe recognizes that his books aren't balanced. This recognition requires understanding of his accounting practices but doesn't require understanding of how the accounting software works.

Today, accountants younger than 50 have been trained on computers. They don't understand the ledger system from which the accounting practices were derived; they only know how those practices work on computers. So invariably, if the wrong figures are put in and the computer prints out the wrong answers, the accountant doesn't realise that the books don't balance.

My mother was an accountant who trained on the ledger system, and who worked on computers. But because she didn't like exams, she was never qualified as an accountant. So she would do the books, and a chartered accountant would then check the work, correct it, sign it, and send it to the Inland Revenue. I cannot count the number of times she complained that the chartered accountant had made errors that she had to sort out with the Inland Revenue. Fortunately, she was trained in the ledger system. So she could find the errors and fix them. If not for that, the companies that she worked for would have faced heavy fines and massive audits.

Secondly, the programmers, since they do know the limitations of the software, could easily provide information with regard to the limitations of the software. I can know what the limitations are without actually understanding how the software works.

They do. But accountants and other people are loath to even care what significance that has for their work. I used to develop accounting packages as a programmer. If we didn't program the package to tell the accountant when the numbers would produce an error, the accountants wouldn't twig that there was a problem.

Both of these are paradoxes, in that they contradict our intuition. Our intuition tells us that accountants should know when the computer is wrong; in reality, they almost always go with the computer's results, even when reality shows them to be wrong. Our intuition tells us that programmers should state the limitations of software in a way that accountants can easily understand, and that accountants should take note of them when using accounting software; in reality, programmers normally bury the limitations in the back pages, as part of the specifications, in ways that accountants do not easily grasp. And even when accountants are fully aware of a limitation, for instance that any calculator will round off its calculations and so generate a small error, they almost never take it into account.
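The rounding point is easy to demonstrate: binary floating point cannot represent most decimal fractions exactly, which is exactly why serious accounting software uses decimal arithmetic. A minimal sketch:

```python
from decimal import Decimal

# Binary floating point cannot represent 0.10 exactly, so ten naive
# additions of ten cents quietly drift away from one dollar.
total = sum([0.10] * 10)
print(total == 1.0)  # False: the float sum is not exactly 1.0

# Decimal arithmetic represents the amounts exactly, as a ledger would.
exact = sum([Decimal("0.10")] * 10)
print(exact == Decimal("1.00"))  # True
```

An accountant who never sees this demonstration has little reason to suspect the machine, which is the whole problem.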

Unless people are trained to think about the consequences of their actions, they don't think about them, even in accounting.