Posts Tagged ‘logic’

More than 170 years before Jean-François Champollion had the first real success in translating Egyptian hieroglyphs, the 17th century Jesuit scholar Athanasius Kircher was convinced he had cracked it. He was very wrong. Daniel Stolzenberg looks at Kircher’s Egyptian Oedipus, a book that has been called “one of the most learned monstrosities of all times” in Public Domain Review.

###

As we take care not to jump to conclusions, we might send thoughtful birthday greetings to Bertrand Arthur William Russell, 3rd Earl Russell; he was born on this date in 1872.

A philosopher, logician, mathematician, historian, and social critic, Russell is probably best remembered for Principia Mathematica (co-authored with Alfred North Whitehead), which attempted to ground mathematics in logic (though his essay “On Denoting” has also been celebrated as a “paradigm of philosophy”). He won the 1950 Nobel Prize in Literature “in recognition of his varied and significant writings in which he champions humanitarian ideals and freedom of thought.”

Like this:

Logical Fallacy Bingo

Definitions of each flavor of fallacy, and clean copies of the board at Lifesnow.com.

###

As we overcome our wistfulness on remembering that this is Oscar Wilde’s birthday, we might recall that it was on this date in 1793, nine months after her husband, the former King Louis XVI of France, was beheaded, that Marie Antoinette followed him to the guillotine. (Readers who are parents– or collectors– can find commemorative dolls here.)

This is a surprisingly ancient question. It was Aristotle who first introduced a clear distinction to help make sense of it. He distinguished between two varieties of infinity. One of them he called a potential infinity: this is the type of infinity that characterises an unending Universe or an unending list, for example the natural numbers 1,2,3,4,5,…, which go on forever. These are lists or expanses that have no end or boundary: you can never reach the end of all numbers by listing them, or the end of an unending universe by travelling in a spaceship. Aristotle was quite happy about these potential infinities: he recognised that they existed, and they didn’t create any great scandal in his way of thinking about the Universe.
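A tiny Python sketch (purely illustrative, not from the article quoted above) of a potential infinity in Aristotle’s sense: the natural numbers as a process you can always continue but never complete.

```python
from itertools import islice

# A "potential infinity": an unending process. Any finite prefix
# can be produced, but the listing itself never terminates.
def naturals():
    n = 1
    while True:     # no final element is ever reached
        yield n
        n += 1

# Take a finite prefix; the generator could always yield more.
print(list(islice(naturals(), 5)))  # -> [1, 2, 3, 4, 5]
```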

Aristotle distinguished potential infinities from what he called actual infinities. These would be something you could measure, something local, for example the density of a solid, or the brightness of a light, or the temperature of an object, becoming infinite at a particular place or time. You would be able to encounter this infinity locally in the Universe. Aristotle banned actual infinities: he said they couldn’t exist. This was bound up with his other belief, that there couldn’t be a perfect vacuum in nature. If there could, he believed you would be able to push and accelerate an object to infinite speed because it would encounter no resistance.

For more than two thousand years Aristotle’s philosophy underpinned Western and Christian dogma and belief about the nature of the Universe. People continued to believe that actual infinities could not exist; in fact, the only actual infinity that was supposed to exist was the divine.

But in the world of mathematics things changed towards the end of the 19th century… In mathematics, if you say something “exists”, what you mean is that it doesn’t introduce a logical contradiction given a particular set of rules. But it doesn’t mean that you can have one sitting on your desk or that there’s one running around somewhere.

And that was only the beginning; then came advances in physics and cosmology… Find out what happened, and whether infinities do in fact exist (plus, discover– finally!– the attraction of string theory) in “Do Infinities Exist?”

As we remind ourselves that there’s always room in Hilbert’s Hotel, we might spare a well-ordered thought for German mathematician and logician (Friedrich Ludwig) Gottlob Frege; he died on this date in 1925. Frege extended Boole’s work by inventing logical symbols, effectively founding modern symbolic logic. He worked on general questions of philosophical logic and semantics (indeed, his theory of meaning, based on distinguishing between what a linguistic term refers to and what it expresses, remains influential). But relevantly here, Frege was the first to put forward the view that mathematics is reducible to logic– thus creating the context in which mathematical infinities can “exist” (in that they do not contradict that logic)…

As we shore up our syllogisms, we might recall that it was on this date in 1881, two days before his death, that British Prime Minister Benjamin Disraeli declined a visit from Queen Victoria, muttering “no, she will only ask me to take a message to Albert.”

James Kendall and his wife Rosie own and publish Brighton SOURCE Magazine. Recently, Rosie’s mother, a 90-year-old widow who lived through the Blitz, gave up her long-time home to move to assisted living. In the process of helping her make the transition, James documented the contents of his mother-in-law’s pantry, and gathered the photos in a collection he calls “Best Before…”

As we rethink the concept of slow food, we might wish a systematically-happy birthday to George Boole; the philosopher and mathematician was born on this date in 1815. Boole helped establish modern symbolic logic– he created symbols to stand for logical operations– and an algebra of logic (that is now called “Boolean algebra”). Boole made important contributions to the study of differential equations and other aspects of math; his algebra has found important applications in topology, measure theory, probability, and statistics. But it’s for the foundational contribution that his symbolic logic has made to computer science– from circuit design to programming– that he’s probably best remembered.
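As a small illustration of the algebra Boole created, here is a Python check (the framing is ours, not Boole’s) of De Morgan’s laws– identities of Boolean algebra still used daily in circuit simplification and programming:

```python
from itertools import product

# De Morgan's laws: NOT (a AND b) == (NOT a) OR (NOT b), and
# NOT (a OR b) == (NOT a) AND (NOT b). Verify over every
# truth assignment -- the whole "truth table" has only 4 rows.
for a, b in product((False, True), repeat=2):
    assert (not (a and b)) == ((not a) or (not b))
    assert (not (a or b)) == ((not a) and (not b))

print("De Morgan's laws hold for all inputs")
```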

The quantum world defies the rules of ordinary logic. Particles routinely occupy two or more places at the same time and don’t even have well-defined properties until they are measured. It’s all strange, yet true – quantum theory is the most accurate scientific theory ever tested and its mathematics is perfectly suited to the weirdness of the atomic world. (…)

Human thinking, as many of us know, often fails to respect the principles of classical logic. We make systematic errors when reasoning with probabilities, for example. Physicist Diederik Aerts of the Free University of Brussels, Belgium, has shown that these errors actually make sense within a wider logic based on quantum mathematics. The same logic also seems to fit naturally with how people link concepts together, often on the basis of loose associations and blurred boundaries. That means search algorithms based on quantum logic could uncover meanings in masses of text more efficiently than classical algorithms.

It may sound preposterous to imagine that the mathematics of quantum theory has something to say about the nature of human thinking. This is not to say there is anything quantum going on in the brain, only that “quantum” mathematics really isn’t owned by physics at all, and turns out to be better than classical mathematics in capturing the fuzzy and flexible ways that humans use ideas. “People often follow a different way of thinking than the one dictated by classical logic,” says Aerts. “The mathematics of quantum theory turns out to describe this quite well.” (…)
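To make the idea concrete, here is a minimal numpy sketch– not drawn from Aerts’s own models, with an arbitrary illustrative state and projectors– of the key mathematical difference: quantum “probabilities” computed with non-commuting projectors depend on the order in which questions are asked, something classical probability rules out.

```python
import numpy as np

# An illustrative "belief state" and two yes/no "questions"
# modeled as projectors (these particular matrices are our
# arbitrary choices, not from any published model).
psi = np.array([np.cos(np.pi / 6), np.sin(np.pi / 6)])   # unit state vector
P_A = np.array([[1.0, 0.0], [0.0, 0.0]])                 # projector: "yes to A"
P_B = 0.5 * np.array([[1.0, 1.0], [1.0, 1.0]])           # projector: "yes to B"

def prob_then(P1, P2, state):
    """Probability of answering yes to question 1, then yes to question 2."""
    v = P2 @ (P1 @ state)
    return float(v @ v)

p_ab = prob_then(P_A, P_B, psi)   # ask A first, then B
p_ba = prob_then(P_B, P_A, psi)   # ask B first, then A
print(round(p_ab, 4), round(p_ba, 4))  # the two orders disagree
```

Because P_A and P_B do not commute, the two orderings give different probabilities– the signature “question order effect” that classical (commutative) probability cannot produce.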

Why should quantum logic fit human behaviour? Peter Bruza at Queensland University of Technology in Brisbane, Australia, suggests the reason is to do with our finite brain being overwhelmed by the complexity of the environment yet having to take action long before it can calculate its way to the certainty demanded by classical logic. Quantum logic may be better suited to making decisions that work well enough, even if they’re not logically faultless. “The constraints we face are often the natural enemy of getting completely accurate and justified answers,” says Bruza.

This idea fits with the views of some psychologists, who argue that strict classical logic plays only a small part in the human mind. Cognitive psychologist Peter Gärdenfors of Lund University in Sweden, for example, argues that much of our thinking operates on a largely unconscious level, where thought follows a less restrictive logic and forms loose associations between concepts.

Aerts agrees. “It seems that we’re really on to something deep we don’t yet fully understand.” This is not to say that the human brain or consciousness has anything to do with quantum physics, only that the mathematical language of quantum theory happens to match the description of human decision-making.

As we feel even more justified in agreeing with Emerson that “a foolish consistency is the hobgoblin of little minds,” we might recall that it was on this date in 1692 that Giles Corey, a prosperous farmer and full member of the church in early colonial America, died under judicial torture during the Salem witch trials. Corey refused to enter a plea; he was crushed to death by stone weights in an attempt to force him to do so.

Under the law at the time, a person who refused to plead could not be tried. To prevent persons from cheating justice, the legal remedy was “peine forte et dure”– a process in which the prisoner is stripped naked and placed prone under a heavy board. Rocks or boulders are then laid on the wood. Corey was the only person in New England to suffer this punishment, though Margaret Clitherow was similarly crushed in England in 1586 for refusing to plead to the charge of secretly practicing Catholicism.

Suppose there is a town with just one male barber; and that every man in the town keeps himself clean-shaven: some by shaving themselves, some by attending the barber. It seems reasonable to imagine that the barber obeys the following rule: He shaves all and only those men in town who do not shave themselves. Under this scenario, we can ask the following question: Does the barber shave himself?
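The contradiction in Russell’s barber paradox can be checked mechanically. A minimal Python sketch (the function name is ours): applying the rule to the barber himself demands that “the barber shaves himself” equal its own negation, which no truth value satisfies.

```python
# The barber's rule, applied to the barber himself: he shaves
# himself if and only if he does NOT shave himself.
def rule_holds(barber_shaves_himself: bool) -> bool:
    return barber_shaves_himself == (not barber_shaves_himself)

# Neither truth value satisfies the rule, so no such barber can exist.
print([v for v in (True, False) if rule_holds(v)])  # -> []
```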

As we return to first principles, we might wish a carefully-reasoned Joyeux Anniversaire to Félix-Édouard-Justin-Émile Borel, a mathematician and pioneer of measure theory and its application to probability theory; he was born in Saint-Affrique on this date in 1871. Borel is perhaps best remembered by (if not for) his thought experiment demonstrating that a monkey hitting keys at random on a typewriter keyboard will– with absolute certainty– eventually type every book in the Bibliothèque Nationale (or, as oft repeated, every play in the works of Shakespeare, or…)– that is, the infinite monkey theorem.
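A back-of-the-envelope sketch of why Borel’s monkey succeeds “with absolute certainty”: treating each run of keystrokes as an independent trial (the 6-letter target “HAMLET” and the block model are our simplifying assumptions), the chance of at least one hit tends to 1 as the number of trials grows.

```python
# Toy infinite monkey theorem: a 26-key typewriter, and each
# run of 6 keystrokes treated as one independent trial at a
# hypothetical 6-letter target word.
p = (1 / 26) ** 6                  # chance one 6-key block spells the target

def p_hit(k: int) -> float:
    """Chance of at least one hit somewhere in k independent blocks."""
    return 1 - (1 - p) ** k

# Nearly impossible for small k, nearly certain for large k.
for k in (10**6, 10**9, 10**12):
    print(f"{k:>14,} blocks: {p_hit(k):.4f}")
```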