
Empiricism vs. Logic

Nevertheless, there does seem to be some hostility from people who pride themselves on empiricism towards the value of logical argument (from axioms to theorems).

I want to discuss this, and I believe this is the correct forum; the philosophy forum would not draw in as many empiricists.

People seem to believe that the heart of science is in its empiricism. I think the empirical approach is important. But I see a lot of people dismiss the logical-philosophical aspects of science due to the "lack of empiricism."

If you are one of these people, can you explain your position?

Consider the success of geometry (in all its forms, not just Euclidean). What makes it so successful?

Consider the failure of empirical approaches to things like the stock market. What makes them so unsuccessful? Can the approaches taken by the Santa Fe Institute help in this regard?

Last edited by ygolo; 05-28-2010 at 06:49 PM.

Accept the past. Live for the present. Look forward to the future. Robot Fusion
"As our island of knowledge grows, so does the shore of our ignorance." John Wheeler
"[A] scientist looking at nonscientific problems is just as dumb as the next guy." Richard Feynman
"[P]etabytes of [] data is not the same thing as understanding emergent mechanisms and structures." Jim Crutchfield

Axioms and theorems are perfectly okay to use if you understand what they are and what they mean. Some idea of the real world is required for this. If you had science without experiments, the logic would be in a fantasy world. If you had science without logic, we'd probably be in the stone age. I'm not arguing for one over the other; the balance depends on the context.

However, it just seems to me, that for many scientists, the approach of using axioms and theorems is simply dismissed as not being useful because what they are studying is "too complicated."

I am, in particular, looking at the approaches used in life and social sciences. Why are statistical approaches so heavily favored over deductive approaches?

We have the computing power now to do some amazingly complex simulations based on first principles. Examples include studying the formation of the solar system from the myriad of particles in the vicinity and the simulation of circuits with 100s of millions of transistors. There are tricks used here to make things tractable, but there may be tricks available in other fields as well. Computational chemistry has also been a great success, though I know less about this.
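As a toy version of the first kind of simulation (a minimal sketch in normalized units with G = 1 and made-up masses, not the algorithms production astronomy codes actually use), here is a Newtonian N-body integrator following a two-body circular orbit from first principles:

```python
import numpy as np

G = 1.0  # gravitational constant in toy normalized units (an assumption)

def accelerations(pos, mass, eps=1e-3):
    """Pairwise Newtonian gravity; eps softens close encounters."""
    diff = pos[None, :, :] - pos[:, None, :]            # (n, n, 3): pos[j] - pos[i]
    dist3 = (np.sum(diff**2, axis=-1) + eps**2) ** 1.5
    np.fill_diagonal(dist3, np.inf)                     # no self-interaction
    return G * np.sum(mass[None, :, None] * diff / dist3[:, :, None], axis=1)

def leapfrog(pos, vel, mass, dt, steps):
    """Symplectic kick-drift-kick integration."""
    acc = accelerations(pos, mass)
    for _ in range(steps):
        vel += 0.5 * dt * acc
        pos += dt * vel
        acc = accelerations(pos, mass)
        vel += 0.5 * dt * acc
    return pos, vel

# Two-body test: a light particle circling a heavy one at radius 1.
mass = np.array([1.0, 1e-6])
pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
vel = np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 0.0]])  # v = sqrt(GM/r) = 1
pos, vel = leapfrog(pos, vel, mass, dt=0.001, steps=6283)  # ~one orbital period
```

The same structure, with cleverer force evaluation (tree codes, fast multipole), is what makes million-particle runs tractable.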

Yet, most of what I see is computing power thrown at data-mining like activities. I would like some insight into knowing why that is.

Certainly, our "first principles" could be way off, but if the set of assumptions is kept small, there is less of a chance that we appear correct simply because we had enough knobs to tweak.
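The knobs-to-tweak worry is the classic overfitting argument, and it can be sketched numerically (a toy illustration with made-up data; a degree-9 polynomial plays the role of the many-knob model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "truth": a simple underlying law, measured with a little noise.
true_law = lambda x: 2.0 * x + 1.0
x_train = np.linspace(0.0, 1.0, 10)
y_train = true_law(x_train) + rng.normal(0.0, 0.1, x_train.size)
x_test = np.linspace(0.05, 0.95, 10)   # points between the measurements

# Few assumptions (2 knobs: slope, intercept) vs. many knobs (10 coefficients).
few = np.polyfit(x_train, y_train, deg=1)
many = np.polyfit(x_train, y_train, deg=9)

err_few = float(np.max(np.abs(np.polyval(few, x_test) - true_law(x_test))))
err_many = float(np.max(np.abs(np.polyval(many, x_test) - true_law(x_test))))
# The 10-knob model threads the noisy measurements almost exactly, yet
# typically strays much further from the underlying law between them.
```

The many-knob fit "explains" the data without capturing the law, which is exactly the sense in which extra knobs let us be accidentally correct.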


Originally Posted by ygolo

However, it just seems to me, that for many scientists, the approach of using axioms and theorems is simply dismissed as not being useful because what they are studying is "too complicated."

In my research we do an even mix of theorizing, computation, and experimentation. I spend as much time working out the physics of a computational model as I do testing that model in the lab via real-world experiments. The computations and theories we use to formulate our ideas are all based on first principles, but unfortunately we do not have the technology to take a completely ab initio approach; for some of the things I do, we'd be looking at an entire lifetime of processing time to get just a few of my calculations. So we rely heavily on empirical knowledge for some aspects of our research, since we have no other way to verify the validity of our assumptions.

So it really isn't useful for a scientist to do anything but verify that there is a cause-and-effect relationship in everything. There honestly isn't any room for philosophy as a stand-alone in our work. Something can be very logical and still lead you to the wrong conclusion if you don't carefully analyze the cause and effect at each step. If the logic is based on incorrect assumptions, then you will get the wrong answer every time (quite logically, I might add). So in the complex sciences we have to test the validity of our assumptions through empirical methods (when the variables are too complex for ab initio computational methods). Sometimes, believe it or not, even though our assumptions are based on first principles and we follow airtight logic, we don't get what we expect when we run the experiments. Usually, when we look at the individual system we are testing, we find one variable that causes an assumption based on a first principle to be wrong. Not because the theory is wrong, but because when you have over a thousand variables it is sometimes difficult to apply first principles to the system as a whole in any kind of useful way.

An example: typically you would expect functional group A to react very quickly with functional group B, only in the system we are using, despite what was predicted from first principles, no reaction occurs in real life. So a crystal structure is taken of the starting material, NMRs are taken of the reaction intermediates, and from those we find that the molecule adopts an unexpected geometry in the transition state which makes any reaction between group A and group B impossible. We'd be out millions of dollars if we'd based an entire research project on the assumption that those two groups would react, when they clearly do not (even though theoretically they should, when taken out of the context of that specific system). This is why cause and effect are so important to most areas of science.

I think it's important to understand the strengths of logic vs. the strengths of data and then use them both appropriately. The advantages of logic are that it is very resource-cheap, its conclusions are certain (given its premises), and it's easier to remain impartial when using it. The advantages of data are that it is tied to the real world, and it allows us to understand the details of a phenomenon instead of just the general principles.

So I find that it's better to rely as much on logic early on as you possibly can. Think about the problem so that you can get to the heart of the question that you really want to answer. You can use equations, graphs, Venn diagrams, etc... to really understand the problem without having to spend much in the way of resources. Then construct a hypothesis, so that you can clearly spell out the possible conclusions before you see the data. I.e. "If the data shows A then my hypothesis is correct, but if the data shows B, then I must conclude the opposite...." This sort of thing prevents confirmation bias from happening.
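The "spell out the possible conclusions before you see the data" step can itself be made mechanical. A minimal sketch (with a made-up coin-flip example and made-up numbers) of fixing the decision rule before looking at any data:

```python
from math import comb

# Pre-registered decision rule, fixed BEFORE any data is seen:
# flip a coin n times and reject "the coin is fair" only if the number
# of heads reaches a threshold chosen so that a fair coin would be
# falsely rejected less than 5% of the time.
n = 100

def tail_prob(k, n):
    """P(at least k heads in n flips of a fair coin)."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2**n

threshold = next(k for k in range(n + 1) if tail_prob(k, n) <= 0.05)

# Only now do we look at the data (a made-up observation for illustration).
heads_observed = 61
reject_fair = heads_observed >= threshold
```

Because the threshold was derived by logic alone, before any observation, there is no room to move the goalposts after the fact, which is the point about confirmation bias.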

Then you can collect data, perform experiments, etc.... This sort of thing tests your hypothesis and fills out the details that you might not have thought of to begin with. But really if you look at the approach I'm advocating, it relies on using logic as much as possible. That might be my personal bias, but I think that approach gives a way to reach dependable conclusions while making the most of resources.


I don't think logic, as humans use it, can take into account nearly enough variables to be truly accurate, nor can it account for all unknown and possible variables. It can never quite get precise enough predictions even if it is accurate and simply cannot take into account the full scope of what is occurring.

As such, it seems sensible to rely on empiricism as an anchor. So logic can both inspire and directly form frameworks from which the next experiment is performed, which it does with great efficiency.

As people have already said, this balance seems to work. One without the other would quickly become much less efficient in making new discoveries.

Originally Posted by ygolo

However, it just seems to me, that for many scientists, the approach of using axioms and theorems is simply dismissed as not being useful because what they are studying is "too complicated."

I think that varies a lot depending on which field one is referring to.

Originally Posted by ygolo

Certainly, our "first principles" could be way off, but if the assumptions are kept small number, there is less of a chance that we are correct by simply having enough knobs to tweak.

I think that's precisely why Empiricism is so heavily relied upon in most fields. Since even 'first principles' can be wrong.

Interestingly enough, I think psychology does this the most. Possibly because the odds of any principles, first or no, being wrong are very high in relation to something as complex and full of bias as human behaviour. Thus you see psychologists relying very heavily on empirical research, and only with great hesitation extrapolating past the raw data.

Usually a mathematical discovery is made but only finds a practical use fifty or a hundred years later.

An interesting exception to this is non-scalar networks: the mathematics was invented in 1997, but a use was found centuries earlier by the Dominican Order.

However, the Dominicans did use non-scalar networks successfully to hunt heretics, even without the mathematics.

An interesting relationship between logic and the empirical.

I had the impression that it was the empirical work that tends to show up first.

Originally Posted by spin-1/2-nuclei

In my research we do an even mix of theorizing, computation, and experimentation. I spend as much time working out the physics of a computational model as I do testing that model in the lab via real world experiments.

Cool. That's the type of work I would really be interested in doing. As I said in my first post, I am not proposing a dichotomy; I know both approaches are needed.

Originally Posted by spin-1/2-nuclei

These computations and theories we use to formulate our ideas are all based on first principles, but unfortunately we do not have the technology to take a completely ab initio approach, for some of the things I do we'd be looking at an entire life time of processing time for me to get just a few of my calculations.

I don't know the particulars of your case. However, I have found that many science departments don't take advantage of the engineering resources available at their school.

My B.Sc.'s are in Computer Engineering and Discrete Mathematics, and that makes me generally skeptical when people say that calculations would take a lifetime. I have regularly seen algorithm improvements that give 1000x the performance. In addition, parallel algorithms can often achieve super-linear scaling with the number of processors if you can properly make use of the caches available in the machine.
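As a toy illustration of the kind of algorithmic win I mean (not any particular scientific code): the same question answered by a brute-force O(n²) method and by an O(n) hash-set method, counting basic operations:

```python
# Same question answered two ways: does any pair in `xs` sum to `target`?
def has_pair_quadratic(xs, target):
    """Brute force: check every pair, O(n^2) comparisons."""
    ops = 0
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            ops += 1
            if xs[i] + xs[j] == target:
                return True, ops
    return False, ops

def has_pair_hashed(xs, target):
    """Hash-set complement lookup, O(n) expected operations."""
    seen, ops = set(), 0
    for x in xs:
        ops += 1
        if target - x in seen:
            return True, ops
        seen.add(x)
    return False, ops

xs = list(range(3000))                 # worst case: no pair sums to -1
_, ops_slow = has_pair_quadratic(xs, -1)
_, ops_fast = has_pair_hashed(xs, -1)
speedup = ops_slow / ops_fast          # roughly 1500x fewer basic operations
```

The gap widens with problem size, which is why a better algorithm routinely beats waiting for faster hardware.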

Another resource that I think is not explored enough is the use of custom hardware to do calculations. Again, if the science departments were to engage with the computer engineers co-located with them, there may be enough impetus for the engineers to develop custom HW (using FPGAs most likely, but maybe even custom silicon).

I also see some potential for mixing analog computation along with the digital computation to get simulation results. These are of course engineering projects, and the science departments would likely need to move forward using the technology currently available. However, launching engineering projects may allow a return to previously intractable calculations much sooner than just waiting for the next generation of microprocessors.

Originally Posted by spin-1/2-nuclei

So we rely heavily on empirical knowledge for some aspects of our research since we have no other way to verify the validity of our assumptions.

I figured as much. Ultimately, that is the only way to check initial assumptions.

Originally Posted by spin-1/2-nuclei

So it really isn't useful for a scientist to do anything but verify that there is a cause and effect relationship with everything. There honestly isn't any room for philosophy as a stand alone in our work.

Could you elaborate on what you mean by this?

Originally Posted by spin-1/2-nuclei

Something can be very logical and still lead you to the wrong conclusion without carefully analyzing the cause and effect at each step. If the logic is based off of incorrect assumptions then you will get the wrong answer every time (quite logically I might add).

Certainly, I think empirical testing is indispensable. What I believe is that it may be worthwhile to revisit and correct the assumptions once they are proven wrong.

Originally Posted by spin-1/2-nuclei

So in the complex sciences we have to test the validity of our assumptions through empirical methods (when the variables are too complex for ab initio computational methods) and sometimes believe it or not, even though our assumptions are based on first principles and we follow air tight logic, we don't get what we expect when we run the experiments and usually when we look at the individual system we are testing we find one variable that causes something about an assumption based on a first principle to be wrong. Not because the theory is wrong, but because when you have over a thousand variables it is sometimes difficult to apply first principles to the system as a whole in any kind of useful way.

This confuses me. It seems like if you get incorrect results, either your assumptions are wrong or the calculations/deductions are (even if it is due to some variable being incorrectly handled or rounding error). In fact, that is a logical consequence...essentially a proof by contradiction.

Originally Posted by spin-1/2-nuclei

An example might be that typically you would expect functional group A to react very quickly with functional group B, only in the system we are using for some reason despite what was predicted from first principles no reaction occurs in real life, so a crystal structure is taken of the starting material and NMRs are taken of the reaction intermediates and from that we find that the molecule adopts an unexpected geometry in the transition state which makes any reaction between group A and group B impossible.

It seems to me, here, you either have a mystery pointing to missing/unknown first principles (why does the molecule adopt an unexpected geometry in the transition state?), or an improper input of ALL the relevant known first principles (the principles that lead to the unexpected geometry were missing from the original model).

Originally Posted by spin-1/2-nuclei

We'd be out millions of dollars if we'd based an entire research project on the assumption that those two groups would react, when they clearly do not (even though theoretically they should when taken out of the context of that specific system) - and this is why cause and effect are so important to most areas of science.

Again, I believe empiricism is indispensable and, of course, you have to do what is practical, in terms of grants, etc. But it seems like, in the case mentioned, unexplored avenues of research are staring you in the face.

We had a Nobel laureate speak at our school, and he said what was essential is the tracking down of what is unexplained. If your model says one thing should happen and experiment shows another thing happening, that is something unexplained.

I suppose the unexpected geometry is a partial explanation. But, if computation from first principles misses that the geometry in question should occur, that, by definition, means the computation was wrong. By logic, it also means that there was an incorrect "axiom" or an incorrect "deduction."

Originally Posted by erm

I don't think logic, as humans use it, can take into account nearly enough variables to be truly accurate, nor can it account for all unknown and possible variables. It can never quite get precise enough predictions even if it is accurate and simply cannot take into account the full scope of what is occurring.

Forever is a long time. Can we really say we'll never take into account the full scope of things?

Even accepting that to be true, correcting our theories when they disagree with experiment has led to a great many new discoveries and predictions.


Originally Posted by ygolo

Forever is a long time. Can we really say we'll never take into account the full scope of things?

Of course it might eventually happen, but I thought far off future possibilities were irrelevant to the why of what is happening now.

Incompleteness theorems, Bell's theorem and similar may lend weight to the idea we will never take the full scope into account. Really though, the mere fact that there appears to be no way to know whether all factors have been taken into account, suggests that one will always have to favour Empiricism over Rationalism to some degree.

Originally Posted by ygolo

Even accepting that to be true, correcting our theories when they disagree with experiment has lead to a great many new discoveries and predictions.

That is the foundation of my point. It's the empirical evidence which acts as the foundation of the new theories. If the theory and the evidence disagree, the evidence is always considered the correct one, provided all the basic parameters of reliability have been met. In the end, scientific hypotheses, theses, theorems and such are all trying to predict the empirical, since the empirical is the reality, not just a representation (even if philosophical Idealism were true).

So whilst a balance of both works best, given human nature, empiricism remains the foundation.

Originally Posted by ygolo

I don't know the particulars of your case. However, I have found that many science departments don't take advantage of the engineering resources available at their school.

Our lab collaborates with mathematicians and engineers all of the time; they often develop our custom software, instrumentation, etc. Regarding your comments on the error examples I gave: the issue really is not with the assumptions, because in a vacuum the assumptions would be correct, but when they are applied to the system, mitigating factors produce unexpected results. Sometimes you will come across the only system ever encountered in the entire world where functional groups A and B don't react.

Imagine you have a ball of yarn. You can make assumptions about the physical and chemical properties of the yarn based on first principles, but if I ask you to stand at the top of a building, drop that ball of yarn, and predict exactly what the structure of the ball will be when it hits the ground, this becomes much more difficult, especially when the weather, building height, ground surface, and so on keep changing. So while the ball of yarn remains the same, and all of the first-principle assumptions about a ball of yarn in a vacuum remain the same, the second you place that ball of yarn into the system the variables change, and it sometimes isn't even possible to identify all of the variables without empirical investigation.

This is why philosophy doesn't have much of a stand-alone place in science. Theorizing is extremely important, but without the elusive theory of everything we still need empirical evidence to back up cause-and-effect associations. You must have cause and effect in science because it is essential to be sure that one thing causes another when building upon those concepts.

Ab initio calculations give us the best predictions, but they are time-consuming and therefore not always practical, so semiempirical approaches are often necessary. I've provided a link you might find useful below.

"How long do you expect it to take? If the world were perfect, you would tell your PC (voice input of course) to give you the exact solution to the Schrödinger equation and go on with your life. However, often ab initio calculations would be so time consuming that it would take a decade to do a single calculation, if you even had a machine with enough memory and disk space." - Introduction to Computational Chemistry