Math itself cannot be wrong.
1 + 1 = 2.
x is always equal to x.
In my opinion it's the ASSUMPTIONS that lead you to faulty reasoning, not the math.

If you observe a phenomenon and build a math model to describe it, but the model gives faulty predictions, then the ASSUMPTIONS you have made must be questioned. Giving the model a once-over helps.

Maybe you are talking about revisions.
Sure, math models can be revised, but usually it's due to new discoveries or, you guessed it, faulty assumptions.

- a layman's point of view.

You may want to look up Einsteinian math. 1 + 1 + 1... +n = 1

Math is nice because it doesn't always have to be a hundred deviations in a linear algebraic equation or expression. A proposition can be composed of a simple Aristotelian syllogism void of actual numbers. The machine you are using involves millions of such logical expressions, tried and retried each second. On the lowest level they reduce to binary machine code. Humans are most comfortable with base 10. Some mathematicians, and those of us in the sciences, often need to refine things like pi, for example, to a more exact quantity.

If we look at basic chemistry, at the simplest technical level, we add together, say, 2 + 1 to resolve from 3 + 9. We end up with two or three isomers, yielding only 2 + 1 of the desired isomer.

A more exact quantity? Pi is pi is pi. Decimal or binary representations will be inexact, but pi is pi is pi.
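To make that concrete, here is a small sketch (plain Python, standard library only) showing that any finite decimal or binary representation is an approximation of pi: the remainder shrinks with more digits but never vanishes.

```python
import math
from decimal import Decimal, getcontext

# math.pi is a 64-bit binary float: already an approximation, not pi itself.
print(repr(math.pi))

# The first 50 decimal digits of pi (a widely published constant).
PI_50 = Decimal("3.14159265358979323846264338327950288419716939937510")

# Rounding to any finite precision leaves a nonzero remainder.
for digits in (5, 15, 30):
    getcontext().prec = digits
    approx = +PI_50                # unary + rounds to the current precision
    getcontext().prec = 50
    print(digits, approx, PI_50 - approx)
```

The representations improve, but "pi is pi is pi": none of them is the number itself.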

I get the distinct feeling your actual knowledge of mathematics is a long way short of where you think it is.

Actually, the example usually given is 1 raindrop + 1 raindrop = 1 raindrop. I don't have time to dig up something most mathematically proficient people have already seen.

That's not 'Einsteinian math'. I asked you to define what that even is, and you haven't. Secondly, your 'equation' didn't refer to anything like that. Thirdly, I've got a maths degree and I'm a professional mathematician, hence why I asked you to justify your statement: it didn't align with anything I'm familiar with.

Theory and conceptual design are like the horse, and the math is like the cart. The theory is supposed to lead the math. The math, in turn, like a cart, can be used to store and haul all types of goodies. There is a limit based on how strong the horse, or theory, is.

But what would happen if the cart led the horse, or math was allowed to lead theory? I suppose this depends on the situation. It can work on a straight road going downhill. The theory, or horse, can follow or appear to stem from the math. But this arrangement would break down going uphill, or if the road had turns, unless the horse pushes the cart so that the cart appears to be leading. But pushing the cart involves less control and results in more randomness in the motion.

One common aspect of science that has the cart before the horse is the theory of randomness and probabilities. Before investigating a phenomenon, one starts with the assumptions of statistical math. The specific theory then has to follow from the math, which comes first.

There are cases, such as the statistical mechanics of gases, which are like rolling downhill in a straight line. This works fine. But there are also cases where we just assume random math and then formulate theory after the math, with less rational control, since a horse cannot push the cart accurately.

For example, the evolution of life from nothing assumes random events and therefore has the math cart first. Any accepted theory (horse) has to conform to this math cart leading. If you assume randomness and then calculate the odds, the theory can never be proven, yet it is still accepted in a world where carts lead horses. If you try to put the horse first with a logical theory, this looks weird in the world of carts leading horses.

There are two areas of science, pure and applied. Applied science, as in a factory, is more final-results oriented. In applied science, if you have a process and cannot explain how it works, this may not be important as long as the process works and quality is good. You can even treat it like a black box. There is room for a math cart to lead the horse if QC is better. A better pure-science theory but a lower-quality product is less acceptable.

Since business often funds pure science, there is a results mentality. Business has no problem letting the cart lead the horse. This mentality can filter into pure science, causing the approach to become linear. Each turn in the phenomena becomes a range of separate linear paths; specialty cart-before-horse theory, with each sort of moving downhill toward results = more money.

You failed.

You're right. I failed to put in the right keywords for search, so I will have to dig through my machine. I save a lot of public-domain articles and have found the one example before; I'll find it again, but it's on the far back burner. The keyword of your post is outlined. I just had a conversation about this, only involving quantum packets, with a colleague in real life. Both of us pretty much agreed it's introduced in Calc II, but I thought it might have been mentioned in Calc I too. When I have it, or a publication it can be found in, I'll visit here again.

BTW... it may show up in wave packet theory somewhere. The example always used that I'm familiar with is the raindrop splattering. Same as two raindrops colliding. How many raindrops result? What are their volumes on average? What is the central volume?

I do not consider math models fragile.
They are either right or they are wrong; "close" is not right.
A right math model is about as solid as you can get without the actual reality.
- My opinion.

Actual experience shows that, given an axiom system, there are a great number of additional axioms one can add that render the system self-contradictory. In this sense, software is fragile, since software is inherently prone to bugs: unintended consequences of instructions given to a machine. Large parts of modern software engineering practice are devoted to making software systems robust to the point that they fail piecewise and not catastrophically. Mathematical models, however, treat all axioms and givens in the same space of logical thought, and if you can prove a contradiction, the entire system is unreliable with respect to itself, not just to the physical system it is modeling.

The prior (also undeveloped) section of the outline was "Is a fragile model of more or less use to science than a flexible and ambiguous model?" So if you are flexible in your terms, or otherwise ambiguous about what is or isn't predicted by your model or what is or isn't evidence against it, then your model is robust. "Rainy days are depressing" is ambiguous and flexible; "Wholly overcast days, where no disc of the sun can be seen and natural illumination never rises above 15% of tropical cloudless-noon standards, are associated with 80% more of high-solarization populations reporting 3 or more symptoms of clinical depression" is less ambiguous and already mathematical in nature. Some data could potentially be gathered that clearly refutes the second statement while leaving the first relatively unmolested.

I can't quite envisage what's fragile about a number, or a function that takes numbers to numbers. I suppose the halting problem could represent a certain fragility, which leads to the incompleteness theorems. Perhaps it's because theories are axiomatic, so the fragility is about choosing the axioms. That is, axiomatic logic is based on a frail choice of "initial" or "a priori" data.

I would think that a fragile model is more useful to society because we learn from our mistakes, but this process of weeding out is taking much longer now, as experiments are more difficult, time-consuming, and more expensive.

Are we at a period in history where we are limited to aesthetic models? If so, then we are entering another golden age of crankdom. There was a brief window when advances in science became increasingly dependent on expensive experimentation, pushing out the dabblers and enabling experts to become more exclusive; a boundary was drawn between the experts and the foolishness of the inexpert masses. That time assisted in the purging of pseudoscience, but the internet is allowing a new era of undisciplined knowledge back into society. It is a trade-off with innovation. We have to expect a certain amount of crankery whenever there is a rapid increase in communication and consolidation of information.

I believe the Internet has been a great enabler of the crank population to link up and organize. Instead of working off a mimeograph machine in San Diego like creationists did in the 1980s, Ken Ham has a multi-million-dollar sham museum with no actual research being conducted. Anyone can build an ebook or register an ISBN, and these are being touted as demonstrations of worthiness rather than of obvious narcissism.

Math itself cannot be wrong.
1 + 1 = 2.
x is always equal to x.
In my opinion it's the ASSUMPTIONS that lead you to faulty reasoning, not the math.

The foundations of mathematics concern which axioms (assumptions) day-to-day math is based on. Historically, it was realized with great horror that naive set theory -- the basis on which 1+1 = 2 and x = x were built -- was unreliable. Other set theories, including ZFC, were more successful. But the list of definitions and axioms accepted varies from mathematician to mathematician and from paper to paper. Proving that they aren't ever logically inconsistent can be onerous.
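That realization can be stated in two lines. In naive set theory, unrestricted comprehension lets one form the set of all sets that are not members of themselves, and Russell's paradox follows:

```latex
R = \{\, x \mid x \notin x \,\}
\qquad\Longrightarrow\qquad
R \in R \iff R \notin R
```

ZFC avoids the contradiction by restricting comprehension: the separation axiom schema only carves subsets out of sets already known to exist.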

Math models of physics build on top of commonly accepted sets of axioms by introducing more axioms. And it's hard to do this without having unphysical infinities show up or other pathological behavior. Physicists can duck around some of the constraints of logic by limiting their models to a domain of applicability. But ultimately we don't have a single set of axioms which holds together self-consistently and usefully describes all known phenomena in the universe.

If you observe a phenomenon and build a math model to describe it, but the model gives faulty predictions, then the ASSUMPTIONS you have made must be questioned. Giving the model a once-over helps.

Well, that's a separate way math models are fragile. If the model says x=3 and testing shows x = 3, that's good. If later testing says x=3.1±0.2, that's still good. If later testing says x=3.1±0.02, that's much less good. Finally, if testing shows x=5 in some cases, then the model is no good -- or perhaps just good in a more limited domain of applicability than originally proposed.
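A minimal sketch of that comparison in Python (the function name `sigmas_off` is just an illustrative choice, not anyone's established API): agreement is judged by how many quoted uncertainties separate prediction from measurement.

```python
def sigmas_off(predicted: float, measured: float, uncertainty: float) -> float:
    """Discrepancy between model and data, in units of the quoted uncertainty."""
    return abs(predicted - measured) / uncertainty

# The model predicts x = 3, as in the example above.
print(sigmas_off(3.0, 3.1, 0.2))   # about 0.5 sigma: comfortable agreement
print(sigmas_off(3.0, 3.1, 0.02))  # about 5 sigma: serious tension
```

The same central value 3.1 is "still good" or "much less good" purely as a function of how tight the error bars are.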

In physics, the Standard Model and General Relativity are the two most successful models we have. We don't know their limits, but we have good reason to believe they do have limits, because all attempts to combine their low-energy behaviors into a single model that is reliable at high energies have failed due to the first sense of mathematical models being fragile. But even if we find out what those limits are for the individual models, in the region of current physical experience these models are very good effective models.

The keyword of your post is outlined. I just had a conversation about this, only involving quantum packets, with a colleague in real life. Both of us pretty much agreed it's introduced in Calc II, but I thought it might have been mentioned in Calc I too. When I have it, or a publication it can be found in, I'll visit here again.

You haven't yet even defined what 'Einsteinian mathematics' is. I'm well versed in the material covered in introductory calculus courses, certainly more than you. Nothing in them says 1 + 1 + ... + n = 1. I'm actually working on an area of quantum mechanics right now, including things to do with coherent wave packets. None of it mentions what you've said.

The example always used that I'm familiar with is the raindrop splattering. Same as two raindrops colliding. How many raindrops result? What are their volumes on average? What is the central volume?

It's an analogy used to describe two particle collisions.

You really don't have that in your repertoire?

Firstly, that isn't Einsteinian mathematics, whatever that is. The only work Einstein did pertaining to fluids is the alteration to viscosity due to a static suspension. Secondly, the fact that raindrops (or any other fluid) combine and split doesn't mean 1+1+...+n=1. The raindrops still split according to things like volume conservation (assuming constant density, as is pretty much the case for liquids). In quantum field theory you can have 2 particles collide and arbitrarily many produced, but that doesn't mean 2 = n; that isn't what the mathematics says at all.
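The volume-conservation point can be sketched in a few lines of Python (assuming idealized spherical drops of equal, constant density): when two drops coalesce, the volumes add, so the radii combine as cube roots rather than summing.

```python
from math import pi

def merged_radius(r1: float, r2: float) -> float:
    """Radius of the single drop formed when two spherical drops coalesce.
    Volume (4/3)*pi*r**3 is conserved, so radii combine as cube roots."""
    return (r1 ** 3 + r2 ** 3) ** (1 / 3)

r = merged_radius(1.0, 1.0)
v_before = 2 * (4 / 3) * pi * 1.0 ** 3   # total volume of the two drops
v_after = (4 / 3) * pi * r ** 3          # volume of the merged drop
print(r)                 # about 1.26: "1 drop + 1 drop = 1 drop", but a bigger one
print(v_before, v_after) # the two volumes agree
```

So the merger is bookkept by conserved quantities; nothing in it licenses an arithmetic identity like 1+1 = 1.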

You're again showing two things. Firstly that you're grossly ignorant of mathematics and physics and secondly that you're a terrible troll.

That's not simple, since we have different models for relative velocity.
In the Newtonian case, which you may not know that you are implying, since you say object 1 is stationary and the Newtonian model allows that in an objective sense, the magnitude of the velocity is \(\sqrt{ \left| a^2 + b^2 + 2 ab \cos \theta \right| }\) where theta is the angle between the direction of a and the direction of b. Essentially, this is the content of your triangle formed by placing two vectors head to tail and preserving the direction of each.
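That formula can be cross-checked against direct component addition; a small Python sketch, placing the first vector along the x-axis for convenience:

```python
from math import cos, sin, sqrt, radians

def resultant_magnitude(a: float, b: float, theta: float) -> float:
    """|v1 + v2| for vectors of lengths a and b separated by angle theta (radians)."""
    return sqrt(a * a + b * b + 2 * a * b * cos(theta))

# Cross-check by summing components, with the first vector along the x-axis.
a, b, theta = 3.0, 4.0, radians(60)
x, y = a + b * cos(theta), b * sin(theta)
print(resultant_magnitude(a, b, theta))  # same value as the component sum
print(sqrt(x * x + y * y))
```

(The absolute value in the quoted formula is harmless but redundant: the expression under the root is a squared magnitude, so it is never negative.)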

Yes, I'm sure that you said object 1 was stationary. Yes, if this was objectively true then it implies a preferred rest frame, and I'm sure the Newtonian model of space and time had just such a preferred frame. Given that we are talking about the Newtonian model, then I am sure that my expression for the magnitude of the sum of vectors in a Euclidean space of 2 or more dimensions is correct.

Because of your curious contextomy that removes all discussion of why I used that one particular model, and your curious dismissive attitude towards the answer for the question as you posed it, I suspect you of trolling and the adoption of belligerent ignorance, therefore I will state this explicitly: The above result is model-dependent. A different model would give different results.

Yes, I'm sure that you said object 1 was stationary. Yes, if this was objectively true then it implies a preferred rest frame, and I'm sure the Newtonian model of space and time had just such a preferred frame. Given that we are talking about the Newtonian model, then I am sure that my expression for the magnitude of the sum of vectors in a Euclidean space of 2 or more dimensions is correct.

You give a result greater than the speed of light. Got something to add? Is this possible?

Because of your curious contextomy that removes all discussion of why I used that one particular model, and your curious dismissive attitude towards the answer for the question as you posed it, I suspect you of trolling and the adoption of belligerent ignorance, therefore I will state this explicitly: The above result is model-dependent.

I know the forum rules prohibit the accusation of trolling.

A different model would give different results.

No, I only gave a simple example.
I gave as simple a calculation as possible, hoping that you could do this calculation.

At the beginning of the 20th century, such a new model was proposed: that the geometry of space-time wasn't Euclidean (the only geometry known to Newton) but Lorentzian. (Of course, Lorentzian space-time does not admit that any object can be determined to be actually "stationary", but since you do not actually use "stationary" in any physical sense, it seems superfluous if you weren't trying to be deliberately obtuse, and in this case we ignore it.)

Two models, two answers to a physical situation. If experiment favors one model over the other, the fragility of mathematics requires us to discard the disfavored model as unphysical.
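For collinear motion the two models can be compared in a few lines of Python (speeds in units of c; the composition law (u + v)/(1 + uv/c^2) is the standard special-relativity result): Newtonian addition can exceed the speed of light, while the Lorentzian result never does for subluminal inputs.

```python
def newtonian(u: float, v: float) -> float:
    """Galilean addition of collinear speeds, in units of c."""
    return u + v

def lorentzian(u: float, v: float) -> float:
    """Special-relativistic composition of collinear speeds, in units of c."""
    return (u + v) / (1 + u * v)

u, v = 0.8, 0.8
print(newtonian(u, v))   # 1.6: faster than light, so the model is disfavored
print(lorentzian(u, v))  # about 0.976: always below 1 for u, v < 1
```

Same physical situation, two model-dependent answers; experiment decides which model survives.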

You simply are not able to do a vector calculation taking into account SR. Are you?
Please give the general formula from which you deduced those cases. (If you know physics, then you know that substituting values is the last stage.)