Begging the question

In my last post I described the syllogism “Photogenic people look good in photographs; Michelle Pfeiffer is photogenic; therefore, Michelle Pfeiffer looks good in photographs” as “begging the question”. A few people commented on that, so I thought I’d address this point of English usage.

In modern usage, “begging the question” has come to mean nothing more than “the situation suggests that an obvious question to raise at this time is blah blah blah.” For example, “The global financial meltdown begs the question: was there insufficient federal oversight of the American mortgage industry?” Though this usage is certainly common in civic discourse and the media, it is entirely a modern departure from the historic usage of the phrase. I try to eschew this modern usage when I say “begs the question”.

“Begs the question” is also sometimes used to mean “this argument raises additional questions which require additional investigation before we can accept the argument”. Though this is considerably closer to the traditional definition of the phrase, this is also not exactly what I mean.

When I say “begs the question”, I mean it in the traditional sense of “this argument is fallacious because it takes as a premise an assumption which is at least as strong as the thing being proven, and is therefore an unwarranted assumption.”

Let me give you another example of question begging, in the traditional sense, which might be more clear.

Suppose I asked “why are diamonds very hard but butter is very soft?” and you answered “diamond and butter are both made out of atoms; the atoms of diamonds are hard and the atoms of butter are soft.” You would have begged the question; your answer to my question “why are some things hard and some things soft” is “because some things are made out of stuff that is hard and some things are made out of stuff that is soft” — that is, you’ve avoided answering the question by providing an “explanation” that itself cannot be understood without answering the original question — namely, why is some stuff hard and some stuff soft? This pseudo-explanation has no predictive power; it doesn’t tell us anything new, it just circles back on itself. The explanatory assumption — that some atoms are hard and some atoms are soft — is stronger than the thing we are trying to investigate — the hardness and softness of two substances.

A non-question-begging answer would be “diamond and butter are both made of atoms; the atoms of a diamond are all identical and arranged in a stable, rigid lattice where every point in the lattice is reinforced by a strong bond to four other points. The atoms of butter are a disorganized collection of many different atoms grouped into different kinds of relatively complex molecules; though the molecules themselves are quite strong, each molecule of butter holds weakly to each other molecule. It takes only a small force to disrupt the loose arrangement of butter molecules but a very large force to disrupt the strong arrangement of diamond atoms. We perceive this difference in required force as ‘hardness’ on the human scale, but in fact it is a property that arises from the sub-microscopic-scale properties of each substance.”

Now, this explanation does *raise* more questions. It raises questions like “why are some lattices strong and some weak?” and “why are some objects composed of many different kinds of atoms organized into molecules, and some composed of just one atom?” Question-begging is not the act of raising more questions. Every good explanation raises more questions. What makes this explanation a good one is that it is testable and has predictive power; we can investigate the hardness or softness of other substances, and make predictions about what sorts of atomic structures they will have — or, vice versa, we can look at an atomic structure and try to figure out from it how hard the substance will be. We can invent other techniques for determining atomic structure, like x-ray diffraction crystallography or spectroscopic analysis, and use those to cross-check our “atomic theory of hardness”.

But the “because she’s photogenic” pseudo-explanation is clearly question-begging. Why does she look so good? Because she’s photogenic. Why is she photogenic? Because she looks so good. We have learned nothing about photogenicity (or the lovely Ms. Pfeiffer).

Similarly, if you ask “why is this code thread-safe?” and the answer is “because it can be correctly called on multiple threads”, we’ve begged the question. Why is it thread-safe? Because it’s correct. Why is it correct? Because it’s thread-safe. Again, we have learned nothing about the nature of thread safety.
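To make the contrast concrete, here is a minimal sketch of what a *non*-question-begging answer to “why is this code thread-safe?” looks like: an appeal to a mechanism (mutual exclusion around shared state), not to the conclusion itself. The example is in Java rather than the post’s .NET setting, and the class names are my own invention.

```java
// A non-question-begging answer to "why is this counter thread-safe?"
// names a mechanism: every read-modify-write of `count` happens inside
// the same intrinsic lock, so increments cannot interleave, and the
// final value must be the sum of all increments.
class SafeCounter {
    private int count = 0;

    public synchronized void increment() { count++; }

    public synchronized int get() { return count; }
}

public class Main {
    public static void main(String[] args) throws InterruptedException {
        SafeCounter counter = new SafeCounter();
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 10_000; j++) counter.increment();
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        System.out.println(counter.get()); // prints 40000
    }
}
```

Note that this explanation, like the diamond one, has predictive power: it tells you that removing `synchronized` from `increment` should sometimes produce a total below 40000, a prediction you can test.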

I find when someone says "beg the question" they nearly always mean "raise the question". Like "peruse", it’s a word/phrase that has come, in the vernacular, to mean the opposite of its denotation. And it strikes me that there’s a parallel with software: just as people might use your software in ways you never intended, so do people use words "incorrectly". In both cases there’s no way to reset; we are stuck living with legacy.

The traditional meaning of “begging the question” is derived from an antiquated usage of “beg” meaning “to take for granted.” The phrase is an old translation of the Latin petitio principii, roughly “assuming the starting point,” which in turn renders Aristotle’s name for the fallacy.

When this phrase had currency, it was in the dialect of people who had an enviable station in life; they were wealthy and powerful. The examples of its usage that survive today have been preserved because they were uttered by people who had something interesting to say and were good at saying it, but it is also true that people paid attention to what they said because of their wealth and power.

So why do people now say “beg the question” when they mean “raise the question”? As Ben says, it is quite an unnatural construction in the latter usage. In my opinion, the attraction is that it imitates the language of those who once did use it naturally, and the modern speakers hope that the virtues of these older speakers – intelligence, fluency, and social status – will somehow be transferred to them. It is just an unfortunate irony that the very misuse of the phrase should undermine the hopes that gave utterance to it.

Although it is true that language changes continually, this observation by itself is not an adequate guide to effective usage. If I wanted to communicate with the widest possible audience, I would avoid the phrase altogether. But in a blog for educated professionals, I see nothing wrong with promoting familiarity with the interesting ideas of the past. That familiarity is only possible if one can understand the language in which those ideas were stated. Kudos to Eric, then, for keeping the flame alight!

After programming the IBM 1130 (core storage, 1973) and then moving ahead to the first microcomputers, I once "begged the question". That is when Joe Cebula told me "Who cares? Just know that what should be working is working and will work. If it does not work, make it work."

That kind of thing bugs me, too. My coworkers call me a grammar nazi. One of my pet peeves right now is the phrase "morning constitutional", which, for 3 bazillion years, has meant a walk (obviously in the a.m. hours) strictly for one’s health. However, some moron radio show host decided it would be funny to associate the phrase with a bowel movement. And so (showing just how crazy our pop culture is) that usage seems to have gotten enough traction that people look at me very oddly now whenever I use the phrase. Quite annoying!

@Gabe: Assuming that your question is "Should we switch from using the expression ‘I couldn’t care less’ to the expression ‘I could care less’", and not some criticism of my grammatical stance, my answer is no (leaving aside the fact that I have not heard anyone use either expression since primary school).

I did not suggest that Eric use "begging" to mean "raising", just that using it in the "historic" sense seems to lead to confusion. The term "circular argument" expresses the same concept (Aristotle be damned) in unambiguous terms (at least until arguments about circularity become common). I think, however, that Phil brings up a good point; it is an elegant phrase, for a more civilised age; both its misuse and its use here by Eric are attempts to borrow some of those perceived benefits, and I’ll chip in a Kudo for Eric too, for calling this out (is this the start of a series of five dollar idioms?).

Except that a circular argument depends on a premise derived from itself (the argument, that is), which is a relatively uncommon subset of an argument which requires an assumption that is at least as strong as the original question.

Sure, most people don’t get much from "begs the question", but saying it’s a circular argument is often just a way to open a distracting side argument about whether something’s atoms are harder because the thing is harder or not. I think just saying "That doesn’t explain anything" is much more understandable than "begs the question" in most cases, even if it is very confrontational :/.

I think the phrase “Thread Safe” carries much more connotation when it is used than you are giving it credit for.

What we are actually saying is that "thread safety was considered in the design", not "this code is ACTUALLY thread safe". Saying "thread safe", to be clear, means "thread safety is part of the design".

Look at it this way: nobody designs their code by default to be thread safe. Why bother, since most callers will not use the code in that fashion? The point is, in our community NO ONE assumes that code is designed thread safe. Is the Dictionary class designed to be thread safe in the .NET framework? We assume NO, unless it specifically states “safe for multi-threading”, or simply, “thread safe”.

Now, does that mean it “is” thread safe? Who cares! That was not even in question. Nor is it in question whether or not it “works”. The question as to whether the author considered thread safety in its design has been answered.

Your point about class differences is fair, but for a person of under 70 years to invoke or play with the phrase (be they British, American or otherwise) now, in the manner in which Eric has done, is to "raise the question" of whether this is an old fashioned (and more polite) way of calling BS on someone. As in "the statement as given beggars the value of what was _apparently_ said".

i.e. The statement taken at face value degrades the understanding of the terms of expression to the point that no serious listener can take it seriously without feeling their honor and intellectual honesty betrayed by their complicity in listening to the statement.

In other words: the listener has that queasy feeling that their attention to the statement given has been abused, with all the power-play connotations that your prior comment might imply, and then some.

It is not about "raise the question", because the statement as given typically is cast in a way – or delivered with a context which is "understood" to be a context in which – the listener should _not_ and is expected to _not_ raise questions. Normally, those who speak or write that someone has said something which "beggars" a question are "on the outside" and/or are willing to pay the political price for saying so (or did so because they are in a clear political contest).

Bugger all. We’re always left with a choice of whether to take it and figure out how to excuse our complicity, or to leave – with the oft-attendant loss of income, etc.

Glad to see some folks are not losing track of older useful expressions.

I would have to start my own blog to get enough space and give all the examples I have of answers that *raise* questions but, because the recipient is too stupid… oops, sorry… ahm… lacks the knowledge of the subject, shall we say, to ask the questions being *raised*, he/she decides that the answer *begs* a question by leaving him/her none the wiser and totally unsatisfied.

That’s why I’ve always believed that the help desk people are either saints or the primary source of domestic violence in my country (when they finally get home and don’t have to be nice anymore) 🙂

When we answer a question, we assume that the recipient has a certain amount of basic knowledge of the subject area. If he/she does, our answer will *raise* questions; if he/she does not, it will *beg* a question no matter what we say, and the usual ways to cover that up (word-weaving and evasiveness) will only make us look guilty in his/her eyes.

More to the point, I think the question "why is the code thread-safe" *cannot* be answered without coming up with a complete, specific, and concise definition of "thread-danger", so to speak, prior to asking it. Whoever asks this question should first describe the kind(s) of danger he/she expects and wants to ward against. Otherwise, the answer can only be a more or less sophisticated variation of "because with this code, no one I know has ever run into any kind of problem they couldn’t handle". So anyone who does run into something can at least take some pride of being the first. 🙂

The definition itself isn’t begging the question either – it merely defines the two phrases as synonymous. There are lots of words that are synonymous to one another, and going from one to the other does not constitute begging the question.

What this is is _equivocation_, where having previously defined the two as synonyms, one then proceeds to speak about them as if being "photogenic" were a property that exists independently of photography.

—-

And back to the topic of multithreading…

"thread safe" does have a meaning. For example, for a collection class, "thread safe" means that if two threads add to the collection "at the same time" [i.e. sufficiently close that the execution of the method in the two threads overlap with one another], the end result will be that both objects are present in the collection (rather than, for example and not limited to: the program crashing, only one or the other object being present in the collection, the collection being left in some internally inconsistent state). If some code calls two methods [say, IsEmpty and Peek] of the class expecting the object to be in the same state from one call to the next, and fails or operates incorrectly due to the state having changed in between, it is not the class itself that is not thread-safe, it is the code calling it that is not thread-safe.
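The IsEmpty/Peek point above can be sketched in code: even when each method of a collection is individually thread-safe, the caller’s "check then act" sequence is not, and the fix is to make the composite operation a single atomic step. A minimal Java sketch, with names (`GuardedQueue`, `peekIfPresent`) of my own invention:

```java
import java.util.ArrayDeque;

// Each method of this queue is individually thread-safe, but a caller
// that writes `if (!q.isEmpty()) q.peek()` is still broken: another
// thread can empty the queue between the two calls. The composite
// "check then act" must itself be one atomic step, as in peekIfPresent.
class GuardedQueue<T> {
    private final ArrayDeque<T> items = new ArrayDeque<>();

    public synchronized void add(T item) { items.add(item); }

    public synchronized boolean isEmpty() { return items.isEmpty(); }

    public synchronized T peek() { return items.peek(); }

    // The caller-side fix: check and act under one lock acquisition.
    public synchronized T peekIfPresent() {
        return items.isEmpty() ? null : items.peek();
    }
}

public class Main {
    public static void main(String[] args) {
        GuardedQueue<String> q = new GuardedQueue<>();
        System.out.println(q.peekIfPresent()); // prints null
        q.add("hello");
        System.out.println(q.peekIfPresent()); // prints hello
    }
}
```

This is exactly the division of responsibility described above: the class guarantees atomicity of its own methods, and the caller is responsible for the atomicity of any sequence of calls it composes.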

The fact that it is possible for some people to be *wrong* about the definition does not mean that it does not *have* a definition.

The definition I would give is "given multiple threads calling this method (or any of this set of methods) simultaneously (see my previous definition of "at the same time"), the return values [including out parameters] of each method and the final state of the object will always be such that there exists some order of non-simultaneous calls to the same methods (with calls from the same thread in the same partial order) that would have given the same results." With an implicit "if it matters to your code what, specifically, that order is, that’s *your* problem"

Back to photogenicity, I realized that while there’s nothing wrong with the syllogism itself, it’s not actually equivalent to "because she’s photogenic". "because" has a meaning of actual cause and effect which is not present in syllogisms. "If A then B; A; therefore B" is fine, and does not require A to be the cause of B – merely to be a piece of information that, if you are given it, you can reason from there to B being true. For example, "If something is a sphere then it is round. [X] is a sphere. Therefore X is round" – you were given the information that it was a sphere, and you can conclude that it is round. It’s not round _because_ it’s a sphere, it’s round because of gravity, or surface tension, or some manufacturing process (depending on which X it is).

You are absolutely right: "thread-safe" does have a meaning, and not being able to formulate a definition does not mean there is no definition. But your own example of a collection class shows that the definition of thread-safety *varies* based on what we’re dealing with. You have described the case of a collection; it may be different, for example, in the case of a workflow (off the top of my head, I can think of an additional danger of using up the .NET thread pool; also, I forgot the URL, but there is a wonderful article somewhere in the depths of MSDN, describing the way to create workflow-based ASP.NET applications: this one mentions the danger of too many threads being spawned by each HTTP request).

Threads may threaten to leave an object in an inconsistent state; they may cause unexpected and/or unpredictable change of a state; they may disrupt the timing for some kind of time-critical operation; they consume the CPU time and RAM (at least, by creating handles)… but I’m sure you don’t need me, or anyone else, to recount those things: they are, after all, common knowledge.

So, once again, there has to be a definition of "thread-danger" to help us choose and formulate the appropriate definition of "thread-safety". And I say "appropriate" because, as you say, there certainly, *always* is a definition for it, and we may intuitively "feel" it in its entirety, but need to narrow it down to the case at hand.

Then again: upon reflection, there *is* a set of requirements any code that may be executed in a multi-threaded environment has to conform to. For example, leaving no object in an inconsistent state: I cannot, off the top of my head, think of an example where you just wouldn’t care about the consistency of an object’s state, no matter how temporary and insignificant this object is. And this, of course, comes on top of the obvious "cause no crash" and "leave no rubbish, like ‘loose’ handles and the corresponding system objects, behind".

So it looks like there is, if not a complete definition, then at least a minimal set of requirements, for thread-safety. But the rest still depends on the specific situation.

I have my own notion of thread safety – it is not precise and I don’t know how well it matches up with other definitions.

To me thread safety boils down to a question of what happens to shared resources (shared meaning visible across multiple threads), whether the resource is an object, a chunk of memory, a handle to a file on disk, etc. It is also further restricted (in the simple case) to modifications made to these resources, which means that read-only access to memory is always thread safe, since the state of the object does not change. It’s not this simple when the shared resource is a chunk of code, such as an interrupt handler, where simply executing the code (for example, reading a value from an I/O port) causes side-effects.

An additional aspect of thread safety is ensuring that when changes to an object get published, or made visible to other threads, the changes are all visible simultaneously. For example, when changes are made to 3 different fields of an object, the code needs to ensure that ALL the changes are visible at the same time, so that all 3 fields are seen, not some partial mixture of fields, and the object is always in a consistent state.
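One common way to get this all-at-once publication, sketched here in Java (the class names are illustrative): gather the fields into an immutable snapshot object and swap a single volatile reference. Readers then see either the old triple of fields or the new one, never a mixture.

```java
// Immutable snapshot: all three fields are set once, in the constructor.
final class State {
    final int x, y, z;
    State(int x, int y, int z) { this.x = x; this.y = y; this.z = z; }
}

class Holder {
    // volatile: a write to `state` happens-before any subsequent read of
    // it, so a reader that sees the new reference also sees its fields.
    private volatile State state = new State(0, 0, 0);

    void publish(int x, int y, int z) { state = new State(x, y, z); }

    int sum() {
        State s = state;   // one read yields one consistent snapshot
        return s.x + s.y + s.z;
    }
}

public class Main {
    public static void main(String[] args) {
        Holder h = new Holder();
        h.publish(1, 2, 3);
        System.out.println(h.sum()); // prints 6
    }
}
```

The design choice here is to trade an allocation per update for the guarantee that no reader can ever observe a partial mixture of old and new fields.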

Some effects that need to be taken into account are compiler-related, such as ensuring that variables that may change asynchronously to one thread do not get hoisted out of loops such that the change to the variable by another thread would not get noticed. There are keywords like "volatile" to help with some of this.
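The hoisting problem mentioned above can be shown with the classic stop-flag idiom, sketched in Java (where `volatile` gives visibility guarantees; the C# keyword of the same name is similar in spirit but not identical). The `Worker`/`requestStop` names are my own:

```java
// Without `volatile`, the compiler may hoist the read of `stop` out of
// the loop (it sees no write to it on this thread), effectively turning
// the loop into `while (true)`. Declaring the flag volatile forces every
// iteration to re-read it, so the worker observes the other thread's write.
class Worker implements Runnable {
    private volatile boolean stop = false;

    public void requestStop() { stop = true; }

    @Override public void run() {
        long spins = 0;
        while (!stop) {
            spins++;   // busy work; the flag is re-read on each pass
        }
    }
}

public class Main {
    public static void main(String[] args) throws InterruptedException {
        Worker w = new Worker();
        Thread t = new Thread(w);
        t.start();
        w.requestStop();
        t.join();      // terminates because the write to `stop` is visible
        System.out.println("stopped"); // prints stopped
    }
}
```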

There are many other thread-related issues too, such as deadlocks, livelocks, priority inversion, etc., that are indirectly related to thread safety.

So to me, calling something thread-safe has definite meaning, and I have a mental checklist I run through when examining code to determine if it is "thread safe". It’s not as simple as putting mutexes around all access to the object.

Much as Godwin’s Law predicts the eventual invocation of Hitler in any sufficiently long internet discussion, I’d like to propose an axiom in the same sociological vein. There is a correlation between A, the number of commenting readers of a blog that are of an analytical (not to say hairsplitting) bent, and B, the likelihood that any blog post that mentions "beg the question" will get more responses to this controversial point of usage than to the original intended topic. (A for analytical, B for beg.) I’d say that on average, you need about 3 such readers for a 50% likelihood and perhaps 6 for 95%.

Relatedly, the number of grammarian posts is probably big omega of n^2, where n is the number of such readers: each such reader tends to respond to every other at least once. This far outstrips the usual number, which is, what, n log n, maybe? That is, the rate of response per reader would be log n. Or perhaps it would be "blog n".