The facts it seeks to describe do. The idea that it's worthwhile to try to describe those facts precisely is not in any way inherently masculine (even if one accepts Belial's statement that it's "held up as masculine by our society").

So you're just breaking with the biological essentialism again - which, as I've already said, seems to me at once the weakest part of Irigaray's argument, largely incidental to it, and actually not nearly as radical as it looks at first glance, because of the particularly narrow scope of her argument.

In other words, this is not a point on which you can build a larger refutation of the argument.

Yes it is, because no alternative reason has been presented for why the practice of trying to find precise descriptions of what the universe does is sexist or "sexed". Saying that that practice is treated as 'something men do' by this society in the here and now is a completely different argument.

You're reducing this to two options, both of which are unsatisfying.

1) Science is gendered masculine because it shares characteristics with penises.
2) Science is gendered masculine because of the social fact that it is treated as "a man's job" in society.

#1 is Irigaray's explicit position. It is, I think, flawed, but not necessarily wrong.
#2 is nobody's position - it's deeply flawed, because, as you correctly observe, it is possible that even though the social phenomenon of science is dispiritingly male, that does not actually affect the work done.

But you're neglecting a third option:

3) Science is gendered masculine because the ideological values of science are ideological values that are associated with our vision of masculinity.

That is, it is not that science is practiced by men - it is that our society has a set of values that can be described as "masculine values," and that science has a set of values, and that these values correspond. And that, furthermore, there is a set of feminine values which run counter to the values of science.

This option, I think, both can serve as a basis for Irigaray's argument and is not problematic in the way you describe.

Random832 wrote:In my defense, #2 is a position that Belial has advanced (by substituting #2 for #1 and saying "suddenly the whole argument works") even if he doesn't in fact hold the view himself.

Actually, what Belial said was "While precision and logical rigidity aren't inherent male attributes, they are attributes that are cultivated and held up as masculine by our society." Which seems to me to be #3, not #2.

PhilSandifer wrote:3) Science is gendered masculine because the ideological values of science are ideological values that are associated with our vision of masculinity.

I do, for the record, also have a problem with #3, I was just having trouble putting it into words.

What bothers me is the idea that you can say that it is gendered masculine because its values are something that [society] associates with its vision of masculinity.

Really all you can support on that basis is that society perceives science as masculine. Which goes back to what I'm saying about this being part of the problem; where I ultimately see it going is girls not being allowed to be taught real math and science because it "is gendered masculine".


Keep in mind that this is Science as a social construction or phenomenon being discussed here, not the idealized task of Science the seeking of truth. Phil and Belial, does this also refer to the practice of science, such as sexism within university science departments?


Your turn from the ideological structure of science to science as a social practice is a bit misguided. The biggest issue is that you're assuming that gender bias is going to play out entirely (or even at all) in terms of a gendered difference in practice. On the Blag a while ago, Randall posted a quick overview of high-grossing movies of the last few years and the number of them that had female leads. It was clear, material evidence of gender bias in contemporary American cinema. And yet that bias does not translate to females abandoning the movies.

The question of what social phenomena follow as consequences from a gender bias in the ideology of science is open in Irigaray - at least in the passage cited. And, more to the point, Irigaray is not one who advocates gender essentialism in the way you're describing.

I'm pretty sure that given the example it used, the original passage was talking about the idealized task.

I'm very sure it wasn't. Irigaray isn't much of a Platonist - I don't think she'd much care about the idealized task one way or another. She's going to be interested in science as a social phenomenon.

cnoocy wrote:Keep in mind that this is Science as a social construction or phenomenon being discussed here, not the idealized task of Science the seeking of truth. Phil and Belial, does this also refer to the practice of science, such as sexism within university science departments?

I do not think that Irigaray goes in that specific direction, but it's not an unreasonable application of the idea.

az_sandhawk wrote:Political debates. The targeted, expert use of irony (honed through literary criticism) against those who have authoritarian tendencies can move entire societies in directions that benefit their citizens (see Soviet Union, Warsaw Pact, etc, etc). It also helps train spies. And serves as a handy antidote (for the average citizen) for all sorts of bullshit ideologies (Marxists and Religious Conservatives, for example, hate it).

Argh. I'm really, really sorry to have to do this, because I know it's irrelevant to your point (with which I agree, by the way). But you're making a careless, sweeping generalization that crushes entire cities when you say that religious conservatives "hate" irony and literary criticism. It's not true. It's not even true for the majority. It may be true for the very vocal minority, but I just have to say that the vocal minority makes us cringe and hang our heads and wish they would shut up. It irritates the heck out of me when people say things like this in passing so you don't even get a chance to say "Wait wait wait wait wait. What??" Please, don't judge me by what other religious conservatives have said. It's illogical, it's silly, and it's most incorrect. I am a religious conservative. Yes, some religious conservatives are stupid and bigoted. Not all of us are, and I am not.

Are you a literalist? Do you believe that God created the world in seven days? Because it's written in the Bible? If you are a literalist, you are necessarily a structuralist (you say only what you mean and mean only what you say) which, logically, places you at odds with post-modernist, post-structuralist, deconstructivist philosophy. And you're going to inevitably - at some point - end up walking into a minefield the deconstructivists have put in place. ("How can God create a stone He cannot lift?") Every text, including the Bible, subverts itself.

This has nothing to do with what is conventionally thought of as bigotry, by the way. It's about pointing out that every system, including deconstructivism, has non-negotiables. Autocratic / totalitarian regimes require structuralism (or literalism, if we are going to have to be reductionist about it). It's almost as if literalism creates autocratic thinkers ("Only the objectively correct may speak," a la 1984 or Feminist or Marxist or Conservative Basic Human Decency). Ideas that don't fit must be suppressed in those systems.

Lunch Meat wrote:Goodness, I'm thinking we need another Godwin's Law that says "In any Internet debate, an insult along the lines of 'That idea is as stupid as religious fundamentalism/creationism/conservativism' will be used on one or both sides."

If you're not a literalist, there should be no problem. If you have no room for metaphor (no wiggle room for people who are different or who disagree) then pay no attention since you probably are my target.

And it's not that I think those folk are stupid. They have created a brutally logical self contained world view. They are welcome to it, as long as they do not impose it upon me. I do not function well in an environment in which literalists / structuralists impose their values on me.

Okay. Pardon me if I spoke out of turn--I was, unfortunately, reacting just as much to what I see in other parts of the Internet, and it is my nature to mention it if something bothers me or seems to make a baseless assumption. But I see a disagreement between the assertion you made in the first quote and the argument you're now putting forward.

These statements in particular: "If you are a literalist, you are necessarily a structuralist (you say only what you mean and mean only what you say) which, logically, places you at odds with post-modernist, post-structuralist, deconstructivist philosophy." and "It's about pointing out that every system, including deconstructivism, has non-negotiables. Autocratic / totalitarian regimes require structuralism (or literalism, if we are going to have to be reductionist about it)." seem to me to be saying, "Biblical literalists hate deconstructionism because it doesn't agree with their worldview." This, I can agree with.

"And serves as a handy antidote (for the average citizen) for all sorts of bullshit ideologies (Marxists and Religious Conservatives, for example, hate it)." This...not so much (particularly the phrase "bullshit ideologies"). It sounds like "Religious conservatives hate deconstructionism because it proves them wrong and makes them look stupid." I think these two ideas are quite different. If the former is what you're saying, more power to you. If the latter, then I disagree, and I still think you were overly general to use the phrase "religious conservative".

az_sandhawk wrote:And yes, I know that you've probably had an English prof or two who has insisted that his or her pet ideology is the only way to go...For fun, mention deconstructivism and see what happens. Attempt to demonstrate how the pet ideology subverts itself. The response you get will tell you everything you need to know.

Actually, it was my gov/econ teacher. He was basically the best at saying these careless little things about Christianity totally in passing, and then running right along with his lecture (man, I wish I'd had one of those "Citation needed" signs). To be fair, he did let conservatives try to defend themselves every once in a while...but only the ones who were really bad at debating.

Lunch Meat wrote:Okay. Pardon me if I spoke out of turn--I was, unfortunately, reacting just as much to what I see in other parts of the Internet, and it is my nature to mention it if something bothers me or seems to make a baseless assumption. But I see a disagreement between the assertion you made in the first quote and the argument you're now putting forward.

Perhaps I'm not making the context clear. An ideology, in this context (political or social), is a framework for attempting to justify the seizure of control over the lives of others.

Lunch Meat wrote:These statements in particular

az_sandhawk wrote:If you are a literalist, you are necessarily a structuralist (you say only what you mean and mean only what you say) which, logically, places you at odds with post-modernist, post-structuralist, deconstructivist philosophy."

and

az_sandhawk wrote:It's about pointing out that every system, including deconstructivism, has non-negotiables. Autocratic / totalitarian regimes require structuralism (or literalism, if we are going to have to be reductionist about it)."

seem to me to be saying, "Biblical literalists hate deconstructionism because it doesn't agree with their worldview." This, I can agree with.

az_sandhawk wrote:And serves as a handy antidote (for the average citizen) for all sorts of bullshit ideologies (Marxists and Religious Conservatives, for example, hate it).

This...not so much (particularly the phrase "bullshit ideologies"). It sounds like "Religious conservatives hate deconstructionism because it proves them wrong and makes them look stupid." I think these two ideas are quite different. If the former is what you're saying, more power to you. If the latter, then I disagree, and I still think you were overly general to use the phrase "religious conservative".

I'm saying both.

Why are people who question their religious leaders generally kicked out of their religious communities? And what other term should I use then, for the theocrats in places like Saudi Arabia, Iran and a score of other countries, including the United States, who are attempting to seize or have already seized power and have systematically murdered anyone who speaks against them once they did seize power? If God is so powerful, and He's on their side, why do they have to murder and imprison people who disagree with them?

Perhaps it would help if you substituted "political" for "bullshit."

Deconstructivism is a threat to any ideology that serves as a framework of justification for assuming control over the lives of others because it highlights the contradictions inherent in any ideological framework.

In common parlance and in general effect religious conservatives, with their very public moral crusades, attempt to use their theological ideology to assert control over forces in society that they find distressing. It's autocratic and often insists that there is only one way to live. ("We had to destroy the village to save it.") Moralism is another form of tyranny.

Biblical literalists hate deconstructivism because one can, in fact, deconstruct the Bible - their supreme arbiter in any sort of conflict. To begin with, insisting that it is a text like any other (that you can deconstruct it just like a trade paperback) is insult enough, I suppose. The broad-side-of-a-barn example is, "How can God create a stone He cannot lift?" A more subtle example would be: if only God is fit to judge, then what right does a worldly authority (i.e. the humans who make up the Church) have to tell me how to live my life?

And I don't know...Do you think that insisting that God created the world in six 24-hour periods whilst explaining that the weather has a purely scientific explanation looks stupid? Do you think that insisting that wearing a chador or a veil "liberates" a woman while not permitting women to drive looks stupid? Do you think that claiming there is a media bias against you while you are being interviewed by a major media outlet about your latest book looks stupid? Or at least contradictory? Anyone looks stupid when they claim that a fiction is a fact and that a fact is a fiction. And yes, I happen to think that anyone looks stupid when they contradict themselves. Well, except when they are in power - then they are dangerous.

It's the same thing with the Marxists. If the revolution is historically inevitable, then why do we need a bunch of people who are going to be privileged afterwards to tell us what it is and when to do it?

I find it fitting that you would notice one type of autocracy's (your left-of-center economics instructor's) susceptibility to deconstructivism but are too blind to notice the unavoidable, inherently autocratic and coercive nature of religious conservatism and its susceptibility to the same fate. Congratulations, you are an ideologue.

My English teacher in 11th grade had my class explore Literary Criticism with 5 subtopics:
Deconstruction
New Historicism (or some word involving history)
Feminism
Psychoanalytical Criticism
And then another one.

I had Psychoanalytical Criticism. It was actually kind of fun, and we had to apply it to The Scarlet Letter.

PhilSandifer wrote:No, it's not falsifiable. Why is this a problem? Falsifiability as an idea comes from Karl Popper, who never comes anywhere close to saying that all knowledge should be falsifiable. In fact, he's adamant that a second type of knowledge beyond scientific knowledge - what he calls metaphysical knowledge - is essential and valuable. So "not falsifiable" isn't a criticism.

Guess what. You can take some of the ideas from a person without taking all of the ideas of a person. AFAIK only religious fundamentalists claim otherwise.

SecondTalon wrote:A pile of shit can call itself a delicious pie, but that doesn't make it true.

Finally - an actual passage of alleged nonsense to talk about. OK. Let's go through this. First, we should note the core of the statement - basically, all this is saying is that the basic values of science - valuing fixity, certainty, precision, and rigidity - are values that contain an implicit gender bias. In the quote, this bias is attributed to essential rather than cultural causes - it's not that girls are told in middle school that they can't do math, but that rigidity and precision are actively and essentially masculine on a biological level. (This, by the way, is the part of the argument I find most problematic.)

So, in other words: boys are better than girls at science and physics because it involves maths?

It's close to that argument - more accurately, math, science, and physics, because they value precise, absolute truth, promote masculine values. It's entirely possible, under her argument, for girls to be as good at math as boys, better at math than boys, or worse at math than boys. All that matters is that the ideological values of math and science are what society considers masculine.

Only problem is, that isn't exactly right either. You see, most science uses a branch of mathematics called statistics, which is all about fuzziness and what fuzziness actually means.

Belial wrote:In fact, all you have to do is build one little logical bridge and the whole thing is valid even if you don't buy all the essentialism (which I don't): That, while precision and logical rigidity aren't inherent male attributes, they are attributes that are cultivated and held up as masculine by our society.

Suddenly, the whole argument works again.

Unfortunately, there's a giant fucking wall blocking the very entry of the argument. The wall is inscribed with one word. Statistics. Break down that wall and I might have an idea of what Irigaray is actually trying to say. I have a sneaking suspicion she's not making an argument but just linking science up to sexual dimorphism in order to start debates. Does she even have any conclusions?

You see, even statistically, the chaos that puts a roadblock in the way of understanding fluid dynamics is hard to define.


Second, I am somewhat surprised at the number of people that question the usefulness/benefit of the humanities (Ezbez, EtzHadaat, et al.). I tend not to read the forums, so this may have come up a few times before, such as in the "Purity" discussion. The basis of the argument against the pursuit of the humanities, or seeing the humanities as nothing more than codified opinions, seems to be utilitarian. That is, if the pursuit of a particular knowledge has no obvious benefit to society, it is more problematic to pursue such a knowledge if a more beneficial knowledge is available.

I've kinda extrapolated the argument against the utility of the humanities, and what seem to be the implied conclusions (P means premise, C means conclusion...in case any of you skipped the English or philosophy class where they taught about constructing arguments):

P1: Production of things is of benefit to society.
P2: Production of things, or of technologies that are based on theories that permit the production of things, is useful and valuable.
P3: The natural sciences are the means by which things are most effectively produced, or by which produced things are made better.
C1: The natural sciences produce things, or make the production of things better. Therefore the natural sciences are useful and valuable and are of benefit to society.
P4: The humanities do not produce things.
P5: The humanities do not produce things or technologies that are based on theories that permit the production of things.
C2: The humanities are not of benefit to society, nor are they valuable or useful, because they do not produce things nor do they make the production of things better.

Now, correct me if I'm wrong in the premises as stated. It seemed most folks were reluctant to define what they meant by useful. Material things, and somehow making them better, seemed to be the only measure that I could find as to why science was more useful than the humanities. Knowledge about the universe seemed conspicuously absent as a valuable contribution of science.

The issues I take with a utilitarian argument against the humanities are as follows. The first is that one must have a means of concretely and discretely measuring usefulness. The second is that a utilitarian assessment of worth or value can only be applied over a clearly marked period of time and with omniscience. The position is ultimately untenable, largely on account of the inability of anyone, at present, to develop a concrete and discrete measure of usefulness for knowledge that is not contentious, or to acquire omniscience.

Like Phil, I think it supports a "nature of benefit is different" conclusion, but I also think it takes a different form. So, I'll present that form.

P1: The social value of any endeavor is directly proportional to the number of people it benefits.
C1: Activities that benefit the self are of less social value than activities that benefit others.
P2: A person's actions can only provide benefit to others in terms of material benefits.
P3: Performing actions of benefit to others requires the ability to perform that action.
P4: The ability to perform any action is acquired through training in the necessary skills and methods.
C2: The most socially valuable fields of knowledge, study, and education are those that pertain to the skills and methods involved in performing actions that can provide material benefit to others.
P5: The skills and methods required for performing actions that provide material benefit to others are based upon known facts about the material world.
P6: It is possible to improve the provision of material benefit to others through refinement of known facts about the material world.
C3: Fields of knowledge, study, and education that pertain to the refinement of known facts about the material world are of indirect benefit to society through their improvement of actions that provide benefit to others.
C4: Fields of knowledge, study, and education that pertain to the refinement of known facts about the material world are the second-most-socially-valuable category of such.
P7: Fields of knowledge, study, and education that pertain neither to the skills and methods for performing actions that provide material benefit to others nor to the refinement of known facts about the material world are variously termed "the arts and sciences", "the liberal arts", "the humanities", and "the social sciences".
P8: No other categories of knowledge, study, and education than those described above exist.
P9: Only the arts and sciences/liberal arts/humanities/social sciences provide direct benefit to the self.
P10: Only the arts and sciences/liberal arts/humanities/social sciences provide non-material benefit to anybody.
C5: The arts and sciences/humanities/social sciences, while of greatest value to the individual and of value to society, are the least socially valuable branch of knowledge, study, and education.

That, I think, is the full utilitarian argument: that a ploughman's part is more socially valuable than a philosopher's, and thus trade schools are more socially valuable than university sociology departments. In sneering terms, "The humanities are valuable, but only to the individuals participating in them, since despite their academic content they're just glorified hobbies, and the natural sciences are only valuable because they let farmers, factory workers, and engineers do better work." In (semi-discredited) psych terms, "The social value of a field of academia is directly proportional to how low on the Hierarchy of Needs it most directly applies: self-actualization-related academia is least valuable, while physiological-needs-related academia is most valuable." In theatre department terms, "Acting, Directing, and Design classes are all more valuable than Theatre History classes, since they teach you how to do theatre instead of teaching you about theatre."

Now, are all the premises and conclusions valid? Well, C1 kinda seems like a restatement of P1, but I've already spent too long writing this. Beyond that, I have the most doubt about P2, and P9 and P10 also seem dubious to me. P1/C1 are both certainly very contentious, though I agree with them. After all, things that enrich my life mean a great, great deal to me but ultimately mean squat to everyone but me, and |people who are not me| > |people who are me|.

PhilSandifer wrote:No, it's not falsifiable. Why is this a problem? Falsifiability as an idea comes from Karl Popper, who never comes anywhere close to saying that all knowledge should be falsifiable. In fact, he's adamant that a second type of knowledge beyond scientific knowledge - what he calls metaphysical knowledge - is essential and valuable. So "not falsifiable" isn't a criticism.

Guess what. You can take some of the ideas from a person without taking all of the ideas of a person. AFAIK only religious fundamentalists claim otherwise.

You can. But in this case, the ideas don't extract easily - Popper argues pretty persuasively that you need metaphysical knowledge to even begin conducting scientific research. For instance, the idea that we can actually trust our senses (the whole Cartesian question of whether we are deceived by a malevolent demon) or the ideas of mathematics and logic are metaphysical, not falsifiable.

So in this case, while you can embrace falsifiability and reject metaphysical knowledge, you end up with a viewpoint even worse than that of religious fundamentalists - whacko cultists who ignore 90% of the religious text and create strange interpretations of the remainder.

william wrote:

PhilSandifer wrote:So, in other words: boys are better than girls at science and physics because it involves maths?

It's close to that argument - more accurately, math, science, and physics, because they value precise, absolute truth, promote masculine values. It's entirely possible, under her argument, for girls to be as good at math as boys, better at math than boys, or worse at math than boys. All that matters is that the ideological values of math and science are what society considers masculine.

Only problem is, that isn't exactly right either. You see, most science uses a branch of mathematics called statistics, which is all about fuzziness and what fuzziness actually means.

In other words, it attempts to rid the world of fuzziness by fixing it to precise numeric definitions. I also question the "most science" claim. That seems to me to be collapsing a lot of sorts of science into one in a way that is unhelpful. Physics doesn't primarily use statistics - in fact, in physics, as Eugene Wigner points out in his essay "The Unreasonable Effectiveness of Mathematics in the Natural Sciences," mathematics becomes predictive rather than just descriptive. In biology, on the other hand, statistics are much more in play. In the social sciences, it's almost pure statistics inasmuch as math is a factor. I'm not sure, for the purposes of this discussion, that it's useful to conflate them all, though. Certainly it seems to me that the Irigaray quote is limiting itself to physics, which is where the fluidity problem would come from.
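The claim that statistics "fixes fuzziness to precise numeric definitions" can be sketched in a few lines. This is purely illustrative: the sample numbers are invented, and the normal-approximation confidence interval is just one conventional way of pinning variability down.

```python
import statistics
import math

# Invented noisy measurements of some quantity: individually "fuzzy".
samples = [9.8, 10.1, 9.9, 10.3, 9.7, 10.0, 10.2, 9.9]

mean = statistics.mean(samples)
sd = statistics.stdev(samples)  # sample standard deviation

# A 95% confidence interval (normal approximation): the variability
# itself gets pinned down with exact numeric bounds.
half_width = 1.96 * sd / math.sqrt(len(samples))
interval = (mean - half_width, mean + half_width)
print(f"mean={mean:.3f}, 95% CI=({interval[0]:.3f}, {interval[1]:.3f})")
```

The scatter in the data never goes away; what the statistical machinery does is assign it exact parameters: a mean, a standard deviation, an interval with stated coverage.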

Unfortunately, there's a giant fucking wall blocking the very entry of the argument. The wall is inscribed with one word. Statistics. Break down that wall and I might have an idea of what Irigaray is actually trying to say. I have a sneaking suspicion she's not making an argument but just linking science up to sexual dimorphism in order to start debates. Does she even have any conclusions?

You see, even statistically, the chaos that puts a roadblock in the way of understanding fluid dynamics is hard to define.

I'm not sure how statistics poses a problem here. Though I would say (and I'm speaking here less as a humanities person than as someone who is the child of two mathematicians and the husband of a scientist) that a statistical explanation of fluid dynamics is fundamentally less satisfying. Statistics, at least from a philosophy of mathematics standpoint, poses a problem for mathematics because it is aggressively and clearly non-Platonic. It becomes an asterisk to almost every claim about mathematics. And my sense is that many people would find a statistical model of fluid dynamics to be fundamentally different and less satisfying than a theoretical model that mirrors the way in which we understand the physics of solids.

Newton's laws are not median values of observable results. The theory of relativity is not a statistical model. The sorts of claims advanced by those scientific principles are epistemologically different from the sorts of claims advanced by Mendelian genetics.

That fluid dynamics are chaotic statistically as well seems to me immaterial, because I think statistics are already beside the point of the critique. Science will use statistics when purer mathematics are not up to the task, but they are generally speaking not the preferred approach for scientific theories. Certainly they are not the sort of mathematics that, for instance, are in play when Randall puts mathematics on the extreme end of the spectrum of disciplines.
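The epistemological contrast being drawn here, between a deterministic law and a statistical one, can be illustrated crudely. This is a sketch with made-up inputs, not anyone's actual data: one function applies a Newtonian formula exactly, the other simulates a Mendelian 3:1 ratio that any finite sample only approximates.

```python
import random

# Deterministic claim (Newtonian kinematics): given initial conditions,
# the law yields one exact answer, not an aggregate of observations.
def fall_distance(t, g=9.8):
    """Distance fallen from rest after t seconds: d = g * t^2 / 2."""
    return 0.5 * g * t ** 2

print(fall_distance(2.0))  # the same exact value, every time

# Statistical claim (Mendelian genetics): the law fixes a probability,
# and any finite sample merely hovers around the predicted 3:1 ratio.
random.seed(0)  # fixed seed so the simulation is repeatable
trials = 10_000
dominant = sum(random.random() < 0.75 for _ in range(trials))
print(dominant / trials)  # roughly 0.75, never guaranteed exactly
```

The first claim is falsified by a single clean counterexample; the second can only be tested against the aggregate behavior of many observations, which is the difference in the kinds of claims being made.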

You can. But in this case, the ideas don't extract easily - Popper argues pretty persuasively that you need metaphysical knowledge to even begin conducting scientific research. For instance, the idea that we can actually trust our senses (the whole Cartesian question of whether we are deceived by a malevolent demon) or the ideas of mathematics and logic are metaphysical, not falsifiable.

Metaphysical is a really bad term for that. I prefer nonphysical, since those are not based on our choice of reality.

Once you reach a certain level of physics (although at that point you can claim it's really chemistry), you use statistics. Specifically, the level on which you would do anything fluid. But you're right, this is irrelevant.

What must happen before this debate can continue is an operational definition of fuzziness. At the very least, a reasonable set of test cases. Without that, you're not arguing, you're throwing words around.

SecondTalon wrote:A pile of shit can call itself a delicious pie, but that doesn't make it true.

You can. But in this case, the ideas don't extract easily - Popper argues pretty persuasively that you need metaphysical knowledge to even begin conducting scientific research. For instance, the idea that we can actually trust our senses (the whole Cartesian question of whether we are deceived by a malevolent demon) or the ideas of mathematics and logic are metaphysical, not falsifiable.

Metaphysical is a really bad term for that. I prefer nonphysical, since those are not based on our choice of reality.

It's metaphysical because metaphysics is a classical and well-established school of philosophy that defies falsifiability. And because it's Popper's term, and you've got to dance with the one who brung you.

Once you reach a certain level of physics (although at that point you can claim it's really chemistry), you use statistics. Specifically, the level at which you would do anything fluid. But you're right, this is irrelevant.

What must happen before this debate can continue is an operational definition of fuzziness. At the very least, a reasonable set of test cases. Without that, you're not arguing, you're throwing words around.

I don't think there's a lot of ambiguity here. But let's go back to Irigaray - where we have firmness and fluidity.

Firmness: Cases where precise definitions are possible - binary dichotomies can be applied. A or B.
Fluidity: Cases where something defies or resists a precise definition. Where a binary distinction fails - it is neither A nor B, or both A and B.

Science pursues A. Even when it goes with statistics and attempts to break up the truth value of 1 into fractional parts and create a flexible predictive model, it's just trying to create a model that deals with the fluidity by creating a precise formula to model the uncertainty with precision.
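To make that move concrete, here is a minimal sketch (Python, with invented numbers throughout) of what "modeling the uncertainty with precision" looks like in practice: noisy, "fluid" raw data gets compressed into two exact parameters, which then feed a precise formula for the uncertainty itself.

```python
import math
import random

random.seed(0)
# Noisy measurements: the raw, "fluid" data.
data = [10.0 + random.gauss(0.0, 2.0) for _ in range(10_000)]

# The statistical move: compress the fluidity into two precise parameters.
mean = sum(data) / len(data)
std = math.sqrt(sum((x - mean) ** 2 for x in data) / len(data))

def predicted_density(x):
    # An exact formula (the normal density) that models the uncertainty
    # with complete precision.
    return math.exp(-((x - mean) ** 2) / (2.0 * std * std)) / (std * math.sqrt(2.0 * math.pi))
```

The fluidity never goes away, but every claim made about it is an exact one.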

Okay. I think I mostly understand now. Can you give an example of something fluid?

(Sorry if I was a bit hostile. I think we were using different definitions for words. Most vicious debates that don't involve blatant falsehood (by which I mean debates with people who like Bush) are like that.)

william wrote:Okay. I think I mostly understand now. Can you give an example of something fluid?

Deconstruction.

More seriously, emotions. My feelings about my wife when she has done something that upsets me - whereby I simultaneously love her more than anyone in the world and want to scream at her. The ending of Fight Club, where the film cuts in a frame of a penis, indicating that Tyler Durden remains an active force in the film despite theoretically being dead. Or, really, any ambiguous moment in literature when something does not clearly get indicated. The interpretive process of understanding a metaphor.

In science, you can even get away with calling quantum states fluid. The usual model of the electron is statistical, but it tends to get used ontologically in a manner like geometry - the electron is said to actually exist in a probability cloud, simultaneously partially in multiple locations. Which is different from, say, modeling error rates.
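For what it's worth, that ontological reading can be illustrated numerically. The sketch below (Python; the units and the rejection envelope are my own choices) samples electron radii from the hydrogen 1s distribution: there is no single position to report, only a cloud whose statistics are themselves exact.

```python
import math
import random

def sample_1s_radius():
    # Rejection-sample r from the hydrogen 1s radial density p(r) ~ r^2 e^(-2r),
    # with r in Bohr radii. The electron gets a distribution, not a location.
    while True:
        r = random.uniform(0.0, 10.0)
        if random.random() * 0.2 < r * r * math.exp(-2.0 * r):
            return r

random.seed(42)
radii = [sample_1s_radius() for _ in range(20_000)]
mean_radius = sum(radii) / len(radii)
# Quantum mechanics predicts a mean radius of exactly 1.5 Bohr radii.
```

Note that the distribution itself is specified with complete precision, which bears on the earlier point about modeling uncertainty precisely.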

PhilSandifer wrote:I'm not sure how statistics poses a problem here. Though I would say (and I'm speaking here less as a humanities person than as someone who is the child of two mathematicians and the husband of a scientist) that a statistical explanation of fluid dynamics is fundamentally less satisfying. Statistics, at least from a philosophy of mathematics standpoint, poses a problem for mathematics because it is aggressively and clearly non-Platonic. It becomes an asterisk to almost every claim about mathematics. And my sense is that many people would find a statistical model of fluid dynamics to be fundamentally different and less satisfying than a theoretical model that mirrors the way in which we understand the physics of solids.

I'm not sure why you believe this; the rules governing the behaviour of gases are derived statistically and nobody seems to have much of a problem with them. Thermodynamics falls under this category also.

PhilSandifer wrote:Newton's laws are not median values of observable results.

Actually, it is a standard demonstration in quantum mechanics that Newton's laws for the behaviour of a particle can be derived from the statistically expected values of the position and momentum vectors. This explains why "large" systems (such as anything you could see in a decent microscope) appear to behave according to classical mechanics; with 10^23 or so particles in the system you're observing you'd have to be fantastically lucky in order to see any visible deviation.
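The scale argument here is easy to sketch numerically. Assuming (purely for illustration) unit-variance noise per particle, the relative fluctuation of the aggregate falls off as 1/sqrt(N):

```python
import random

def relative_fluctuation(n_particles, samples=200):
    # Each particle contributes mean 1 with unit-variance noise; return the
    # average relative deviation of the aggregate from its expected value.
    devs = []
    for _ in range(samples):
        total = sum(1.0 + random.gauss(0.0, 1.0) for _ in range(n_particles))
        devs.append(abs(total / n_particles - 1.0))
    return sum(devs) / samples

random.seed(0)
small_system = relative_fluctuation(100)
large_system = relative_fluctuation(10_000)
# Fluctuations shrink like 1/sqrt(N): 100x more particles, ~10x smaller
# deviation. At 10^23 particles the deviation is far below observation.
```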

PhilSandifer wrote:Science will use statistics when purer mathematics are not up to the task, but they are generally speaking not the preferred approach for scientific theories.

I don't think that's the case; it's a question of what is appropriate for the problem. Supposing that we had a perfect analytical solution to the general n-body problem, we still wouldn't use it to model the birth of the Solar System or the weather or chemical reactions or a whole host of other things, for several reasons:
(a) there are too many particles to track, and we don't even know exactly how many.
(b) we don't have infinitely precise information on all of them (even supposing such a thing not to be ruled out by QM).
(c) we want broadly applicable answers, not highly specific ones.
(d) it would probably take a uselessly long time to calculate any results.[*]
In other words, we start with a statistical description and we want to get another one at the end. If I'm modelling, say, the rate at which something dissolves in water, I want to just specify "100 mL of water at 32°C", which is a statistical description of the velocity profile of the aggregate of water molecules. I don't want to use an analytical solution that requires me to specify the position and velocity of each molecule. How could I then have confidence that another configuration would give me the same answer?

In other words, if we can model something accurately as a small number of objects, analytical solutions are nice (and we can usually derive the associated statistics). If we can only model it accurately as a large number of objects, we want a statistical solution.

[*] Though there are times when we can do lots of calculations very quickly. The appropriate quote here is from my Statistical Mechanics lecturer: "Now we have 10^24 integrals to do. We'll do 5x10^23 of them on this line, and the other 5x10^23 on the next line."
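The "100 mL of water at 32°C" point can be sketched directly: two completely different microscopic configurations that share a statistical description give the same macroscopic answer. (Python, with arbitrary units and a deliberately simplified "temperature" summary.)

```python
import random

def macroscopic_temperature(velocities):
    # "Temperature" here is a statistical summary: mean squared speed per
    # molecule, in arbitrary units, indifferent to which microstate produced it.
    return sum(v * v for v in velocities) / len(velocities)

random.seed(1)
config_a = [random.gauss(0.0, 1.0) for _ in range(100_000)]
config_b = [random.gauss(0.0, 1.0) for _ in range(100_000)]
# Two entirely different molecular configurations, one macroscopic description.
```

That indifference to the microstate is exactly why the statistical description is the one we can have confidence in.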

PhilSandifer wrote:Newton's laws are not median values of observable results.

Actually, it is a standard demonstration in quantum mechanics that Newton's laws for the behaviour of a particle can be derived from the statistically expected values of the position and momentum vectors. This explains why "large" systems (such as anything you could see in a decent microscope) appear to behave according to classical mechanics; with 10^23 or so particles in the system you're observing you'd have to be fantastically lucky in order to see any visible deviation.

To be clear, I was speaking of Newton's laws as he proposed them, not a current and quantum mechanical understanding of his laws. And I stand by this - Newton was not proposing an average model.

PhilSandifer wrote:Science will use statistics when purer mathematics are not up to the task, but they are generally speaking not the preferred approach for scientific theories.

I don't think that's the case; it's a question of what is appropriate for the problem. Supposing that we had a perfect analytical solution to the general n-body problem, we still wouldn't use it to model the birth of the Solar System or the weather or chemical reactions or a whole host of other things, for several reasons:
(a) there are too many particles to track, and we don't even know exactly how many.
(b) we don't have infinitely precise information on all of them (even supposing such a thing not to be ruled out by QM).
(c) we want broadly applicable answers, not highly specific ones.
(d) it would probably take a uselessly long time to calculate any results.[*]
In other words, we start with a statistical description and we want to get another one at the end. If I'm modelling, say, the rate at which something dissolves in water, I want to just specify "100 mL of water at 32°C", which is a statistical description of the velocity profile of the aggregate of water molecules. I don't want to use an analytical solution that requires me to specify the position and velocity of each molecule. How could I then have confidence that another configuration would give me the same answer?

[*] Though there are times when we can do lots of calculations very quickly. The appropriate quote here is from my Statistical Mechanics lecturer: "Now we have 10^24 integrals to do. We'll do 5x10^23 of them on this line, and the other 5x10^23 on the next line."

I think here we're hitting on an existing rift between scientific theories and scientific practice. Yes - for a number of practical reasons we embrace statistical models to get through things quickly. And there's a great deal of statistical grunt work that gets done to work through things. But this seems to me distinct from the scientific practice of generating theories and accounts of things. We're flipping back and forth, essentially, between science's explanatory and predictive functions, and doing so inartfully.

PhilSandifer wrote:I think here we're hitting on an existing rift between scientific theories and scientific practice. Yes - for a number of practical reasons we embrace statistical models to get through things quickly. And there's a great deal of statistical grunt work that gets done to work through things. But this seems to me distinct from the scientific practice of generating theories and accounts of things. We're flipping back and forth, essentially, between science's explanatory and predictive functions, and doing so inartfully.

Science only incidentally has an "explanatory function"; the primary objective is the predictive function. Scientific theories are judged solely on their ability to predict things that other theories do not already predict.

PhilSandifer wrote:I think here we're hitting on an existing rift between scientific theories and scientific practice. Yes - for a number of practical reasons we embrace statistical models to get through things quickly. And there's a great deal of statistical grunt work that gets done to work through things. But this seems to me distinct from the scientific practice of generating theories and accounts of things. We're flipping back and forth, essentially, between science's explanatory and predictive functions, and doing so inartfully.

Science only incidentally has an "explanatory function"; the primary objective is the predictive function. Scientific theories are judged solely on their ability to predict things that other theories do not already predict.

I think that statement plays out better when talking about science as an ideal practice than when talking about science as actually performed.

PhilSandifer wrote:To be clear, I was speaking of Newton's laws as he proposed them, not a current and quantum mechanical understanding of his laws. And I stand by this - Newton was not proposing an average model.

True, but your original point was that you didn't think statistically based theories would be as acceptable as exact ones.

PhilSandifer wrote:I think here we're hitting on an existing rift between scientific theories and scientific practice. Yes - for a number of practical reasons we embrace statistical models to get through things quickly. And there's a great deal of statistical grunt work that gets done to work through things. But this seems to me distinct from the scientific practice of generating theories and accounts of things. We're flipping back and forth, essentially, between science's explanatory and predictive functions, and doing so inartfully.

I don't think this is as clear-cut as you seem to think. A statistical approach is inevitable whenever you are modelling a large system. Now, in one sense you are correct because the "fundamental" rules apply to single particles [I'm speaking loosely here, of course], but in practice whenever a useful generalisation can be made about average behaviour in a large system we do it. Then we use that as a simplified explanatory model whenever we are dealing with large-scale behaviour.

scarletmanuka wrote:I don't think this is as clear-cut as you seem to think. A statistical approach is inevitable whenever you are modelling a large system. Now, in one sense you are correct because the "fundamental" rules apply to single particles [I'm speaking loosely here, of course], but in practice whenever a useful generalisation can be made about average behaviour in a large system we do it. Then we use that as a simplified explanatory model whenever we are dealing with large-scale behaviour.

But that's openly fudging. Which, admittedly, science does a lot, but I have trouble treating it as an expression of science as an ideological structure.

scarletmanuka wrote:...whenever a useful generalisation can be made about average behaviour in a large system we do it. Then we use that as a simplified explanatory model whenever we are dealing with large-scale behaviour.

But that's openly fudging. Which, admittedly, science does a lot, but I have trouble treating it as an expression of science as an ideological structure.

I don't see that as fudging any more than, in mathematics, using an already-proved result to establish the next instead of insisting that everything be derived from first principles each time.

scarletmanuka wrote:...whenever a useful generalisation can be made about average behaviour in a large system we do it. Then we use that as a simplified explanatory model whenever we are dealing with large-scale behaviour.

But that's openly fudging. Which, admittedly, science does a lot, but I have trouble treating it as an expression of science as an ideological structure.

I don't see that as fudging any more than, in mathematics, using an already-proved result to establish the next instead of insisting that everything be derived from first principles each time.

Wait, what? When Euclid uses a previous conclusion for a new proof he gets away with it because he's dealing with a priori knowledge that is non-empirical. The Pythagorean theorem is not true because it has held in all observed cases, but because it is absolutely and necessarily true.

Science does not claim this, at least in the Popperian model we've mostly been taking for granted here. Scientific explanations are presented as best guesses thus far, not as absolute truths. (Which is a different issue from precision.) Significantly, it's theoretically possible that tomorrow any scientific explanation could find itself refuted. So while science does not go back to first principles obsessively, it does at least leave any given first principle up for reconsideration. That's not the case in mathematics - the Pythagorean theorem is never going to be unproved. I mean, this is just the problem of induction again.

But more to the point, the issue here is that the statistical model of Newtonian mechanics is not, strictly speaking, true. It is good enough for any predictive purposes, and has great appeal for pedagogical purposes, but it is not part of scientific truth as such. It's an extremely useful fudging, but if you were to write up a rigorous explanation of everything, you would cover the fact that "In any conceivably likely case, this will happen, and thus these rules will govern the interactions."

I base this claim on the evidence that the closest thing we have to writing up a rigorous explanation of everything - science education - acknowledges the theoretical existence of quantum tunneling and non-Newtonian interactions of solids. Thus any remotely savvy scientist working with Newtonian physics knows that what they are working with is not rigorously true, but is merely a sufficiently good predictive model.

For the most part, previously proved results in mathematics do not suffer this sort of non-truth, and any that did would not be accepted in rigorous proof.

scarletmanuka wrote:...whenever a useful generalisation can be made about average behaviour in a large system we do it. Then we use that as a simplified explanatory model whenever we are dealing with large-scale behaviour.

But that's openly fudging. Which, admittedly, science does a lot, but I have trouble treating it as an expression of science as an ideological structure.

I don't see that as fudging any more than, in mathematics, using an already-proved result to establish the next instead of insisting that everything be derived from first principles each time.

[...]So while science does not go back to first principles obsessively, it does at least leave any given first principle up for reconsideration. That's not the case in mathematics - the Pythagorean theorem is never going to be unproved.

But similar things do happen in mathematics, e.g. with Euclid's fifth postulate. (While Euclid himself listed it as a postulate, it was widely believed to be provably true, until it was discovered that it wasn't and that non-Euclidean geometry could be made to work.[*]) A better example is the rebuilding of set theory. The difference is that in mathematics when this happens it doesn't exactly invalidate all the previous results, they just all get another set of conditions tacked on at the front. But then, this is similar to what happens in science anyway: the previous results get a footnote that says "valid approximation except under these conditions". There's some qualitative difference in that in science we acknowledge some level of inaccuracy even within the prescribed regime, but some level of imprecision is fundamental to science anyway, so I don't think this is a major difference at all.

[*] "This made a lot of people very upset and has been widely regarded as a bad move." - Sorry, I immediately thought of that when I was writing that sentence and then I couldn't not post it.

The point I'm trying to establish, though, is that we can't expect and don't want to model large-scale systems in the same way that we do small-scale systems. In large-scale systems we're not interested in exact behaviour, we want an abstraction of the overall behaviour of the system. Often we have to just get it from observing the system, but in some cases we can derive the large-scale behaviour from a knowledge of the underlying rules. When we can do this, we can get rigorously derived rules governing the statistical behaviour of the large-scale system. I don't think that these rules are any less acceptable than the underlying rules.

Of course the rules we have for the majority of large-scale systems are induced from behaviour, and I agree that these seem less acceptable in the explanatory sense (though, as you point out, in reality we have the same problem for the fundamental rules anyway). But where we can derive statistical rules for large-scale systems from the fundamental rules, these do not seem to me to be less acceptable as explanations of behaviour.

PhilSandifer wrote:But more to the point, the issue here is that the statistical model of Newtonian mechanics is not, strictly speaking, true. It is good enough for any predictive purposes

Well, for most predictive purposes. The fundamental challenge for any new refinement of an existing theory is to (a) show that the older theory is virtually correct in most cases, and to (b) make better predictions at the edges where the older theory has been failing. Once this has been achieved, you can still use the old theory as long as you make sure you're not at those edges; and if you are near the edges, you'd better at least be able to estimate how much you might be off by, so you know whether you have to bite the bullet and use the more complex theory to do your calculations.

PhilSandifer wrote:It's an extremely useful fudging, but if you were to write up a rigorous explanation of everything, you would cover the fact that "In any conceivably likely case, this will happen, and thus these rules will govern the interactions."

I base this claim on the evidence that the closest thing we have to writing up a rigorous explanation of everything - science education - acknowledges the theoretical existence of quantum tunneling and non-Newtonian interactions of solids. Thus any remotely savvy scientist working with Newtonian physics knows that what they are working with is not rigorously true, but is merely a sufficiently good predictive model.

Yes, but in general you don't want to be rigorous about everything. Normally you want to be only as rigorous as you need to be at that moment. Your example of science education is, in fact, a case in point. In science education we don't start off with M theory; we start off with very high-level abstractions and successively introduce refinements. So the student's concept of matter, for instance, goes through four or five revisions by the time they finish college. (Assuming they study science at college.) And problems are expected to be thought about and solved using the highest possible level of abstraction. Of course, in an educational context this is almost always the one currently being studied but the principle applies in real life also.

scarletmanuka wrote:But similar things do happen in mathematics, e.g. with Euclid's fifth postulate. (While Euclid himself listed it as a postulate, it was widely believed to be provably true, until it was discovered that it wasn't and that non-Euclidean geometry could be made to work.[*]) A better example is the rebuilding of set theory. The difference is that in mathematics when this happens it doesn't exactly invalidate all the previous results, they just all get another set of conditions tacked on at the front. But then, this is similar to what happens in science anyway: the previous results get a footnote that says "valid approximation except under these conditions". There's some qualitative difference in that in science we acknowledge some level of inaccuracy even within the prescribed regime, but some level of imprecision is fundamental to science anyway, so I don't think this is a major difference at all.

I have to disagree here - something believed to be true and assumed to be provable does not have the same status in mathematics as something proven. I don't think this comparison holds up at all.

The point I'm trying to establish, though, is that we can't expect and don't want to model large-scale systems in the same way that we do small-scale systems. In large-scale systems we're not interested in exact behaviour, we want an abstraction of the overall behaviour of the system. Often we have to just get it from observing the system, but in some cases we can derive the large-scale behaviour from a knowledge of the underlying rules. When we can do this, we can get rigorously derived rules governing the statistical behaviour of the large-scale system. I don't think that these rules are any less acceptable than the underlying rules.

Neither do I, but those rules are predictive, not explanatory.

Yes, but in general you don't want to be rigorous about everything. Normally you want to be only as rigorous as you need to be at that moment. Your example of science education is, in fact, a case in point. In science education we don't start off with M theory; we start off with very high-level abstractions and successively introduce refinements. So the student's concept of matter, for instance, goes through four or five revisions by the time they finish college. (Assuming they study science at college.) And problems are expected to be thought about and solved using the highest possible level of abstraction. Of course, in an educational context this is almost always the one currently being studied but the principle applies in real life also.

Sure. But again, my point here is about science as an ideology. And ideologically, science pursues these rigorous truths.

Yes, in practice, science education involves a process of revision. Or, as my wife puts it, Intro Chem is mostly about lying to students. Which is to say, it remains the case that there's a clear delineation between simplified "good enough" descriptions and true ones. A good scientist is expected to know that there's an implicit footnote that says "Look, this is an approximation, but if you do out all the rigorous math you'll see that it's clearly going to happen functionally all of the time, so we don't really need to get into the weird stuff that would happen in the cases so extreme we can't produce them."

Which is, to me, sufficient for the original claim all of this is getting sidetracked off of - that science, as an ideology, values precision and fixes the world according to absolute mathematics. That, after doing that, it creates an elaborate series of convenient hacks to make prediction and further work faster and easier does not seem to me to invalidate the claim.

scarletmanuka wrote:But similar things do happen in mathematics, e.g. with Euclid's fifth postulate. (While Euclid himself listed it as a postulate, it was widely believed to be provably true, until it was discovered that it wasn't and that non-Euclidean geometry could be made to work.[*]) A better example is the rebuilding of set theory. The difference is that in mathematics when this happens it doesn't exactly invalidate all the previous results, they just all get another set of conditions tacked on at the front. But then, this is similar to what happens in science anyway: the previous results get a footnote that says "valid approximation except under these conditions". There's some qualitative difference in that in science we acknowledge some level of inaccuracy even within the prescribed regime, but some level of imprecision is fundamental to science anyway, so I don't think this is a major difference at all.

I have to disagree here - something believed to be true and assumed to be provable does not have the same status in mathematics as something proven. I don't think this comparison holds up at all.

As I said, the rebuilding of set theory is a better example (but less well known and less easily understood) than Euclid's fifth postulate. Many results had been proven using the version of set theory current at the time (now, but not then, known as "naive set theory"), but it became clear that there were significant flaws in naive set theory when it was discovered that you could create paradoxes with it. Following this an effort was made to develop a form of set theory (axiomatic set theory) which would not be subject to paradoxes. Once this had been achieved, the (useful) earlier proved results were still proven, subject to the additional requirements of axiomatic set theory. (Obviously some of the earlier proofs, i.e. the paradox-generating ones, were now invalid in axiomatic set theory because they violated the new axioms. I don't know if any genuinely useful proofs actually used now-invalid constructions. I suspect that if they did, they could probably be recast in axiomatic set theory.)

This process seems to me to be very similar to what happens in science when we refine an old model.

PhilSandifer wrote:

The point I'm trying to establish, though, is that we can't expect and don't want to model large-scale systems in the same way that we do small-scale systems. In large-scale systems we're not interested in exact behaviour, we want an abstraction of the overall behaviour of the system. Often we have to just get it from observing the system, but in some cases we can derive the large-scale behaviour from a knowledge of the underlying rules. When we can do this, we can get rigorously derived rules governing the statistical behaviour of the large-scale system. I don't think that these rules are any less acceptable than the underlying rules.

Neither do I, but those rules are predictive, not explanatory.

I disagree. In my experience, these rules are used as explanatory rules for large-scale systems. And again, nobody seems to have a problem with it.

PhilSandifer wrote:Sure. But again, my point here is about science as an ideology. And ideologically, science pursues these rigorous truths[...]Which is, to me, sufficient for the original claim all of this is getting sidetracked off of - that science, as an ideology, values precision and fixes the world according to absolute mathematics. That, after doing that, it creates an elaborate series of convenient hacks to make prediction and further work faster and easier does not seem to me to invalidate the claim.

But ideologically, it is understood that even the "fundamental" rules are only considered approximations to the true rules, and that we can in fact never assume that we have the true rules, so everything we do is approximate to some degree. Yes, science values precision, and attempts to interpret the world according to increasingly precise rules, but it certainly does not treat them as absolute. This is even more so today, when it is widely known that quantum mechanics and general relativity are fundamentally incompatible, so we know we have something wrong somewhere.

scarletmanuka wrote:As I said, the rebuilding of set theory is a better example (but less well known and less easily understood) than Euclid's fifth postulate. Many results had been proven using the version of set theory current at the time (now, but not then, known as "naive set theory"), but it became clear that there were significant flaws in naive set theory when it was discovered that you could create paradoxes with it. Following this an effort was made to develop a form of set theory (axiomatic set theory) which would not be subject to paradoxes. Once this had been achieved, the (useful) earlier proved results were still proven, subject to the additional requirements of axiomatic set theory. (Obviously some of the earlier proofs, i.e. the paradox-generating ones, were now invalid in axiomatic set theory because they violated the new axioms. I don't know if any genuinely useful proofs actually used now-invalid constructions. I suspect that if they did, they could probably be recast in axiomatic set theory.)

This process seems to me to be very similar to what happens in science when we refine an old model.

Sure, but that still seems different from the Newtonian model as you described it - the Newtonian model went from rigorously true to for all practical purposes true. Set theory went from rigorously true to no, wait, now it really is rigorously true.

PhilSandifer wrote:

The point I'm trying to establish, though, is that we can't expect and don't want to model large-scale systems in the same way that we do small-scale systems. In large-scale systems we're not interested in exact behaviour, we want an abstraction of the overall behaviour of the system. Often we have to just get it from observing the system, but in some cases we can derive the large-scale behaviour from a knowledge of the underlying rules. When we can do this, we can get rigorously derived rules governing the statistical behaviour of the large-scale system. I don't think that these rules are any less acceptable than the underlying rules.

Neither do I, but those rules are predictive, not explanatory.

I disagree. In my experience, these rules are used as explanatory rules for large-scale systems. And again, nobody seems to have a problem with it.

I think they're used with implied footnotes, though. A knowledgeable scientist is supposed to know and take for granted the "Unless, of course, a freak bit of chance happens and quantum tunneling takes place" appended to any statement of the interactions between solid bodies. It's known, and rarely relevant, so it's not stated. But that doesn't mean it's not there.
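The size of that implicit footnote can actually be estimated. A rough WKB-style sketch (order-of-magnitude only; the barrier parameters are invented for illustration) of why the tunneling caveat is known but rarely relevant for solid bodies:

```python
import math

HBAR = 1.054_571_817e-34  # reduced Planck constant, J*s

def log10_tunneling_probability(mass_kg, barrier_joules, width_m):
    # Crude WKB estimate: T ~ exp(-2*kappa*L), with kappa = sqrt(2*m*V)/hbar.
    # Returned as log10(T), since T itself underflows for macroscopic cases.
    kappa = math.sqrt(2.0 * mass_kg * barrier_joules) / HBAR
    return (-2.0 * kappa * width_m) / math.log(10.0)

# An electron facing a 1 eV, 1 nm barrier: tunneling is routine.
electron = log10_tunneling_probability(9.11e-31, 1.602e-19, 1e-9)
# A 1 kg ball facing a 1 J, 1 cm barrier: the footnote never fires in practice.
ball = log10_tunneling_probability(1.0, 1.0, 0.01)
```

The exponent for the macroscopic case comes out around -10^32, which is why the footnote can safely go unstated between knowledgeable scientists.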

But ideologically, it is understood that even the "fundamental" rules are only considered approximations to the true rules, and that we can in fact never assume that we have the true rules, so everything we do is approximate to some degree. Yes, science values precision, and attempts to interpret the world according to increasingly precise rules, but it certainly does not treat them as absolute. This is even more so today, when it is widely known that quantum mechanics and general relativity are fundamentally incompatible, so we know we have something wrong somewhere.

The role of absoluteness in science is tricky. In good, Popperian terms, no, science is not absolute. But the role of mathematics in science is weird, because mathematics is absolute. So a mathematical model of science is... well, not merely tricky; it's a real and unsolved problem in the philosophy of science, particularly where its absoluteness is concerned.

scarletmanuka wrote:[...]Once this had been achieved, the (useful) earlier proved results were still proven, subject to the additional requirements of axiomatic set theory[...] This process seems to me to be very similar to what happens in science when we refine an old model.

Sure, but that still seems different from the Newtonian model as you described it - the Newtonian model went from rigorously true to for all practical purposes true. Set theory went from rigorously true to no, wait, now it really is rigorously true.

I think I'm looking at the mathematical results a bit differently from the way you are. (Also, it's not set theory itself I'm holding up as an example but results that had been proved with it, a subtle difference.) Any theorem in mathematics can be expressed as "If this bunch of conditions holds, this result is true." The rebuilding of set theory meant that all those results got an extra bunch of conditions tacked onto the first part, which to me seems little different from saying "assuming all velocities are much less than c". There is a difference, which I acknowledged from the first, in that the scientific version acknowledges that there is still some (negligible) error under this regime. We view this aspect differently; I find it minor because in science we acknowledge that even the more refined theory is only approximate anyway, but you consider it to be a significant difference. Both positions can be reasonably maintained (though of course I think mine is more reasonable - but then, that's why it's mine, and no doubt you think yours is more reasonable).

PhilSandifer wrote:I think they're used with implied footnotes, though. A knowledgeable scientist is supposed to know and take for granted the "Unless, of course, a freak bit of chance happens and quantum tunneling takes place" appended to any statement of the interactions between solid bodies. It's known, and rarely relevant, so it's not stated. But that doesn't mean it's not there.

Yes, I agree. But there are also the implied footnotes in everything, even at the fundamental level, that say "according to our current theories, which we fully expect will eventually be refined". Again, the difference between the two types seems smaller to me than it does to you.

PhilSandifer wrote:The role of absoluteness in science is tricky. In good, Popperian terms, no, science is not absolute. But the role of mathematics in science is weird, because mathematics is absolute. So a mathematical model of science is... well, not merely tricky; it's a real and unsolved problem in the philosophy of science, particularly where its absoluteness is concerned.

But if it wasn't tricky, we wouldn't have interesting conversations like this one!

I don't really agree with your comparison of the rebuilding of set theory with the overthrow of a scientific theory. From my layman's reading starting at the Axiom of Choice (and proceeding like so), I get the idea that "naive set theory" was shown to be internally inconsistent--hence, the paradoxes. Axiomatic set theory was created to reprove the results of set theory with a logically consistent--and provably so--set of axioms. Whatever results mathematicians have derived from this new set theory are true forever, just as nothing will ever show a theorem in Euclidean geometry to be false. The "paradoxes" seen in current set theory like Banach-Tarski are not paradoxes per se, but merely unsettling results.

Science has nothing analogous to axioms[1], only data and observations that we cannot even assume to be "true". By "true," I mean in a way that requires no verification. Mathematical axioms are not proven true, but assumed so, barring proof of inconsistency, which I believe is different from falsehood. Scientific data derives from imperfect measurements of the universe. In essence, scientists do not have access to the axioms of the universe, but try to derive them by careful observation, thus leading to the problem of induction. Mathematicians get to deduce their results from axioms, and thus are working in the complete opposite direction.

Anyhow, I've found this entire thread fascinating. Despite receiving my degree in physics, I've often found myself defending both humanities and the sciences from charges of uselessness and nonsensicalness. I really appreciate the insider perspectives on the Sokal affair and some of the more ... odd of the lit-crit statements on science.

I don't really agree with your comparison of the rebuilding of set theory with the overthrow of a scientific theory. From my layman's reading starting at the Axiom of Choice (and proceeding like so), I get the idea that "naive set theory" was shown to be internally inconsistent--hence, the paradoxes. Axiomatic set theory was created to reprove the results of set theory with a logically consistent--and provably so--set of axioms. Whatever results mathematicians have derived from this new set theory are true forever, just as nothing will ever show a theorem in Euclidean geometry to be false. The "paradoxes" seen in current set theory like Banach-Tarski are not paradoxes per se, but merely unsettling results.

There are still unresolved paradoxes in logic; see, for example, Curry's paradox:

Wikipedia wrote:The resolution of Curry's paradox is a contentious issue because resolutions (apart from trivial ones such as disallowing X directly) are difficult and not intuitive. Logicians are undecided whether such sentences are somehow impermissible (and if so, how to banish them), or meaningless, or whether they are correct and reveal problems with the concept of truth itself (and if so, whether we should reject the concept of truth, or change it), or whether they can be rendered benign by a suitable account of their meanings.
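(The derivation the quote alludes to is remarkably short, which is part of what makes the paradox so stubborn. A sketch, with $Y$ any sentence you like:

```latex
% Curry's paradox. Let X abbreviate the self-referential sentence (X -> Y),
% where Y is arbitrary (e.g. "1 = 0").
\begin{enumerate}
  \item Assume $X$.                    % hypothesis, for conditional proof
  \item Then $X \to Y$.                % that is just what X says
  \item Hence $Y$.                     % modus ponens on (1) and (2)
  \item So $X \to Y$.                  % conditional proof, discharging (1)
  \item So $X$.                        % (4) is exactly the sentence X
  \item Therefore $Y$.                 % modus ponens on (4) and (5)
\end{enumerate}
% Since Y was arbitrary, everything is provable -- and note that, unlike
% the Liar, no negation was used anywhere in the derivation.
```

Notice that each individual step uses only rules of inference that look unimpeachable, which is why, as the quote says, the non-trivial resolutions are so unintuitive.)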

So if there are potential problems with the rules of logical inference, it's a little early to be comfortable about mathematics just yet.

Also, Banach-Tarski is really weird. Especially since the Axiom of Choice seems reasonable, but Banach-Tarski seems unreasonable. What's a mathematician to believe? Or, to put it the best way I've seen it:

antonfire's signature wrote:The Axiom of Choice is obviously true; the Well Ordering Principle is obviously false; and who can tell about Zorn's Lemma?

(for non-mathematicians: The Axiom of Choice, the Well Ordering Principle and Zorn's Lemma are all logically equivalent. So in any given system, they're either all true or all false. The Axiom of Choice seems intuitively obvious, and leads to lots of useful theorems, so we like to believe it; but the Well Ordering Principle leads to the Banach-Tarski paradox, which shows that it is possible to cut a solid sphere into a finite number of pieces and reassemble them to make two solid spheres, each exactly the same as the original. This is something we'd prefer not to believe. So it's a bit of a conundrum!)

(for mathematicians: yes, the previous paragraph is very loosely worded. It's for non-mathematicians, remember? Fill in the missing bits yourself.)
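(For anyone who wants the three statements in slightly more careful form, here they are; all three are equivalent over ZF:

```latex
% The three ZF-equivalent statements behind the joke above.

\textbf{Axiom of Choice:} for every family $\{A_i\}_{i \in I}$ of nonempty
sets, there exists a choice function $f$ with $f(i) \in A_i$ for all
$i \in I$.

\textbf{Well Ordering Principle:} every set admits a well-order, i.e. a
total order in which every nonempty subset has a least element.

\textbf{Zorn's Lemma:} if every chain (totally ordered subset) of a
nonempty partially ordered set $P$ has an upper bound in $P$, then $P$
contains at least one maximal element.
```

The joke works because the choice function sounds obviously constructible, while a well-order of, say, the reals is impossible to visualize, even though each statement proves the others.)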

And finally, on xkcd tendencies, we should add: looking up a Wikipedia article you're using in a post, and editing it for grammar while you're there. And then updating your post to reflect your edit. [And then editing your post to talk about editing your post to reflect your Wikipedia edit...]

I don't really agree with your comparison of the rebuilding of set theory with the overthrow of a scientific theory. From my layman's reading starting at the Axiom of Choice (and proceeding like so), I get the idea that "naive set theory" was shown to be internally inconsistent--hence, the paradoxes. Axiomatic set theory was created to reprove the results of set theory with a logically consistent--and provably so--set of axioms. Whatever results mathematicians have derived from this new set theory are true forever, just as nothing will ever show a theorem in Euclidean geometry to be false. The "paradoxes" seen in current set theory like Banach-Tarski are not paradoxes per se, but merely unsettling results.

There are still unresolved paradoxes in logic; see, for example, Curry's paradox:

Sincere question, since it's been quite a while since I've gone through GEB rigorously, but isn't the existence of unresolved and unresolvable paradoxes one of the two options Gödel leaves for logic? That is, isn't this no more surprising or revelatory than Gödel's result already is?