AI: How Reliable is Intuition?

Intuition is often lumped into the various piles of nonsense that critical thinkers build when examining categories of spurious claims. However, if we examine it through the filter of philosophy, discarding intuition as so much bunk may simply be due to how it is defined.

The dictionary tells us that intuition is the ability to understand something immediately, without the need for conscious reasoning. This type of intuition, like mother’s intuition (or woman’s intuition), would seem to be an ability that includes “just knowing” something, or “just feeling that something is so”. And the good critical thinker would not rest on that as an explanation for knowing anything.

Philosophers, however, point out that this definition of intuition is incorrect, or at the very least incomplete. The contention is that we simply think unconsciously much faster than we can consciously examine the results using symbolic language. And so it only appears as though we “just know” something, or that we “just feel that something is so”.

A yet-unpublished paper (though available with academic library access if you don’t want to purchase it) by J.R. Kuntz and J.R.C. Kuntz, to appear in the Review of Philosophy and Psychology, discusses seven accounts of intuition that were provided to them by researchers. Here are the Kuntzes’ seven conceptions of intuition:

1) Judgment that is not made on the basis of some kind of observable and explicit reasoning process.

2) An intellectual happening whereby it seems that something is the case without arising from reasoning, or sensorial perceiving, or remembering.

3) A propositional attitude that is held with some degree of conviction, and solely on the basis of one’s understanding of the proposition in question, not on the basis of some belief.

4) An intellectual act whereby one is thinking occurrently [sic] of the abstract proposition that p and, merely on the basis of understanding it, believes that p.

5) An intellectual state made up of (1) the consideration whether p and (2) positive phenomenological qualities that count as evidence for p; together constituting prima facie reason to believe that p.

6) The formation of a belief by unclouded mental attention to its contents, in a way that is so easy and yielding a belief that is so definite as to leave no room for doubt regarding its veracity.

7) An intellectual happening that serves as evidence for the situation at hand’s instantiation of some concept.

But if we grant these definitions, and allow for intuition, can we use intuition to explain how we know something? How confident can we be of the conclusions that result from it? Can we be confident at all? What else must we take into account if we are to trust intuition? Anything? Any other thoughts?

The Afternoon Inquisition (or AI) is a question posed to you, the Skepchick community. Look for it to appear Tuesdays, Thursdays, Saturdays, and Sundays at 3pm ET.

14 Comments

Lots to unpack here. I’ll take just this bit of it: “How confident can we be of the conclusions that result from it?” Not at all. I think it would be a huge mistake to conclude anything from intuition. Intuition can be the beginning of a process, but not the end. I don’t think we can help having intuitive feelings and using them to frame further learning. We need to use our skepticism, however, to avoid the classic police “rush to judgment” or the scientific experiment that can only confirm a hunch, not disprove it. I use the phrase “my guess is” many times a day. If there is one thing I’ve learned, it’s that my guesses are frequently wrong.

In physics classes (especially in informal problem-solving and review sessions), we used to talk a lot about “physical intuition”. This was the ability to look at a situation or problem and quickly decide the best way (or at least a pretty good way) to analyze it. This could involve selection of a coordinate system, or points of reference for analyzing forces, or an appropriate transformation to phase space, or many other techniques. Many times doing this could make a seemingly intractable problem trivial or at least solvable. Physical intuition didn’t provide answers, just paths to a solution.

Practice helped a lot with developing physical intuition, but some people were much better at it than others. (Many of the people who weren’t so good at it could still make up for it through dogged determination, though.) It was a useful but not essential skill for doing physics, at least at the undergraduate level.

I never encountered any reference to this in any physics text book, but it was definitely a part of the culture.

I’ve never heard of this in reference to computer programming (my current field), but I think the same sort of intuition applies.

I’m not sure where this fits into the 7 categories of intuition, maybe 1 or 2 or possibly 7. In any case, the intuited proposition is not in any sense proven truth and can often be wrong.

As a freshman at MIT, one of the most common statements I heard was “that idea is counter-intuitive”. I never used that term. When my intuition didn’t work on something, I changed my intuition. I had scary good intuition. I remember that in one philosophy of science course we were talking about intuition in science; the prof asked about relativistic effects on a massive spinning object, and I said of course there would be time dilation. He looked at me kind of funny, one of the other students asked whether that was correct, and he said yes.

I couldn’t do the math, but I had the intuition to get the shape of the curve and the sign correct (usually), so people who could do the math sometimes came to me to see if they had gotten through pages of derivation and still had the sign correct.

I see intuition as a non-algorithmic method for doing computations. It is like the ability to estimate the number or relative size of a group. You don’t need to be able to count to estimate which group is larger, but if you know the counting algorithm, then you can count the members and come up with an exact relative size.

Some types of data don’t fit into an algorithm easily. There are algorithms for doing math with decimal numbers; there are no comparable algorithms for doing math with Roman numerals. Trying to extract square roots with Roman numerals would be difficult.
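A quick sketch of the Roman-numeral point above: in practice, arithmetic on Roman numerals means detouring through a positional system, converting to integers, computing, and reading the result back. This is an illustrative Python sketch (the function names and the detour itself are my own framing, not from the comment):

```python
import math

# Standard Roman numeral symbol values (subtractive notation).
ROMAN = {'I': 1, 'V': 5, 'X': 10, 'L': 50, 'C': 100, 'D': 500, 'M': 1000}

def roman_to_int(s):
    """Convert a Roman numeral string to an integer."""
    total = 0
    for i, ch in enumerate(s):
        value = ROMAN[ch]
        # A symbol smaller than its successor is subtractive (e.g. IX = 9).
        if i + 1 < len(s) and value < ROMAN[s[i + 1]]:
            total -= value
        else:
            total += value
    return total

# "Extracting a square root with Roman numerals" ends up meaning:
# convert to a positional representation, then use the positional algorithm.
n = roman_to_int("CXLIV")   # 144
root = math.isqrt(n)        # 12
```

The point is that the square-root algorithm operates on the positional representation; there is no convenient digit-wise procedure defined over the Roman symbols themselves.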

If there is no algorithm to find an answer, because there is too much data (comparing 500,000 and 600,000 beans), or because there is insufficient time to implement the algorithm, or because there are lots of gaps in the data you have, then you have to default to a non-algorithmic process. Usually the output of a non-algorithmic process is a feeling. You feel which group is larger.
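The contrast between the two routes can be sketched as code. Below, the counting route is exact, while the “feeling” route is modeled as a noisy magnitude comparison; the Weber-fraction noise model and the 0.15 noise level are my own illustrative assumptions, not part of the comment:

```python
import random

def exact_compare(group_a, group_b):
    """The algorithmic route: count both groups exactly and compare."""
    return len(group_a) > len(group_b)

def felt_compare(n_a, n_b, weber=0.15, rng=random):
    """A toy model of the non-algorithmic route: each size is perceived
    with noise proportional to its magnitude. Returns True if group A
    *feels* larger."""
    return rng.gauss(n_a, weber * n_a) > rng.gauss(n_b, weber * n_b)

# Comparing 500,000 vs 600,000 beans: the feeling is usually right,
# but unlike counting it carries no guarantee.
rng = random.Random(0)
trials = 1000
correct = sum(not felt_compare(500_000, 600_000, rng=rng) for _ in range(trials))
print(correct / trials)  # high, but below 1.0
```

The design choice matches the comment’s framing: the exact comparison always gives the right answer but needs complete data and time to count, while the felt comparison returns an answer instantly from incomplete information, at the price of sometimes being wrong.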

Human interactions are examples of things which cannot be done algorithmically. Humans are too complicated; there is never sufficient data to do a precise emulation, and emulating another human takes human-sized cognitive resources running “native”. It is simply not possible to emulate someone else with sufficient precision except for quite crude things.

I think that people on the autism spectrum have brains that are wired more for doing algorithmic-type computations and people who are NT are wired more for doing non-algorithmic-type computations. I think this relates to the things that people with Asperger’s are good with, physics, math, things that are simple because they are constant and don’t change. People who are NT have brains wired to do non-algorithmic stuff which makes them good at communicating with other NTs.

Unfortunately, people who are NT and who can only do non-algorithmic computations can only “think” in terms of feelings. They can’t do algorithmic calculations (easily). They can’t take an intuition and run it through an algorithmic process to see if it corresponds with reality by checking it against facts and logic. Their feelings really do get in the way. This is why facts and logic are completely useless for arguing with people who are incapable of thinking except with feelings.

YECs feel their beliefs are correct and they do not have the ability to check them against facts using the algorithm of logic. That is also why all of their arguments are arguments from authority and never arguments from facts and logic.

The reason an algorithm is always superior is because an algorithm can always be checked because you know each step of it. If you have a thought process that includes steps you are unable to articulate, it is not an algorithmic process.

My understanding of the current research on the phenomenon of intuition is that it is a psychological system evolved to allow quick judgments based on incomplete information and pattern recognition. It is sub-conscious to allow it to work more quickly (without thinking) as well as to allow it to be easily overridden as new information comes in. Thus intuition should never be relied on over complete data.

I’ve read the beginning of a book “The Decisive Moment” which is about reasoned decisions vs. ‘gut feel’ decisions (which is at least similar to intuition), and when each way of reasoning works best. (I’ve only had the opportunity to read the book in snatches when visiting a friend.)

According to the book, there are circumstances where gut decisions are much superior to reasoned decisions, but mostly (only? haven’t read enough to know) when you’ve trained your ‘gut’. So if you’re an experienced racing car driver, and you see cars crashing in front of you, your gut will be a better guide on how to avoid them than your head will. Similarly if you’re a comedian judging the mood of your audience. But when deciding how to invest for your retirement you should use your head.

The author is an airline pilot, and attributes much of the precipitous decline in airliner crashes to this. In the 80’s, the simulators became good enough that pilots have been able to get a ‘gut feel’ for how to react to emergencies, instead of relying so much on analysis.

I have yet to see a non-academic/informal use of the word /intuition/ that cannot be replaced with /experience/.

Saying that someone is “using their intuition” means (in “everyday” terms) that they are “applying the result of a rough guesstimation on a biased recollection of former instances of this event”. It’s a combination of “this worked before” and “this seems like it should work”. This rarely finds the optimal solution, but it often finds _a_ solution. Incomplete information and pattern recognition (as blu said).

“Just knowing something” doesn’t happen. You pick up a lot of knowledge subconsciously, to the point where it can – in extreme cases – seem freakish/alien/god-given.

When it comes to “instinct”… Well, who is to say the brain shouldn’t “know” how to act in the same way the legs shouldn’t “know” how to walk? Self-organization and whatnot.

—

Honestly, I haven’t exposed myself to enough cases where the nonsplanation (can non-explanation be shortened to this? It can now, because I’m wearing glasses, damnit.) of “because intuition” is used. I would like it if we had some specific examples here, so we could find out a) what the one who used “intuition” thinks it means, and b) what the actual explanation for the phenomenon is. Anyone?

I think a lot of it boils down to how there are two basic kinds of processing in the human brain, roughly handled by the opposing hemispheres: they can be called “sequential processing” and “gestalt processing”.

“Sequential processing” is something we know quite a lot about, and works sort of like a computer. To really roughly describe it, it follows an “If A then B” type of processing. Logic, mathematics, language, analysis, and that kind of thing are entirely (or mostly) sequential in nature.

But “gestalt processing”, which is far more difficult to articulate (or reproduce), is an equally important part of how our brain works. It makes a “big picture” assessment. It handles things like recognition, understanding facial and body language demonstrations of emotion, noticing the salient and “important” elements in a scene or narrative, most spatial reasoning, anticipation, emotional reasoning, the usual kinds of day-to-day risk assessments (which are sometimes better handled by analysis), snap judgments and intuition.

Damage or disorders of right brain processing are often very, very strange and alien things to see and try to understand. We can comparatively easily imagine and understand left brain disorders, like aphasia, but someone who’s lost the capacity for, say, recognition, is something far stranger to us.

When we recognize an object, it isn’t by process of sequential analysis. We don’t say… “this is fluid moving matter… liquid!… contained in translucent container made of some kind of pliable but brittle material with a smooth texture… plastic! … the container shaped with a broad belly and thin neck… a bottle!… the liquid is at a certain point in the colour spectrum at such-and-such frequency… green!…. and its smell is acrid, alcoholic and minty…” Instead of going through all that, we just immediately identify the salient details, make the association, and bam! “It’s a bottle of mouthwash.”

Intuition, and gestalt processing, are an absolutely vital way for us to understand and interact with the world. As mentioned above, there are definitely times where an analytic approach is preferable, but there are also times where intuition is preferable. You can’t win at pool just by knowing a lot about trigonometry and physics. Both modes of thought are integral to human experience and survival.

We live in a world much more complex, and that requires far more nuanced decisions, than the one our brains originally adapted for. So sometimes we get mixed up, and use non-ideal modes of thought or understanding for a given situation or choice or whatever. But I don’t think it makes sense to privilege one kind of human thought over another, or to discount the value of certain kinds of cognitive processes, or ignore why we developed those abilities in the first place. When a tiger jumps out of the bushes, there’s no time to analytically weigh the relative pros and cons of fighting, running or playing dead. You need to make a lightning-fast “gut decision”, based on intuition.

And analysis, as all us skeptics should know, can often be very deceptive and not nearly as rational as it initially appears.

I remember reading that people who have a hard time making decisions are people who have poor communication between their different brain hemispheres. The idea being that the left brain has to weigh all the myriad options logically, and often there is no obvious superiority between choices. But if it were more capable of listening to the right brain, it could use the snap “gut” decisions of that hemisphere.
I have a really hard time choosing a meal from a restaurant menu, for instance, and that’s where, acc. to the above theory, my right brain could be helping me out, but I can’t seem to hear what it’s saying.

I think we all rely on intuition every day; of course it is important and not something to just brush off. It isn’t really something to explain complex ideas with, however. I may enter a shady bar and choose to leave immediately because something just feels off, but I can’t call the police and have them investigate the place based on my feelings, nor can I say that I avoided being assaulted by leaving, because I just do not have enough evidence.

I think the cortex builds a giant structure of patterns of connections that represent pathways that have been reinforced between different channels of input and output. The thalamus receives signals from the cortex indicating some measure of expectation (if these nodes are expected, then adjacent nodes which have just fired should have increased their potential slightly), based on how incoming sensory information (organized by the thalamus) activates whichever pathways in the network. The thalamus signals the correct response to the nucleus accumbens based on the difference between incoming information and expected information.

Error detection (humor) means that incoming information matches a node or nodes that are currently being inhibited (i.e. inappropriate for the context) by pathways constructed by previous information. The error is corrected (positively reinforced) and a signal is sent to our peers (laughter). The new positive association strengthens per use, thus jokes become less funny over time.

Nodes higher in the structure indicate more abstract ideas, as the information from neurons corresponding to, say, individual cells on your retina, signal certain other neurons, the proper combination of which signal certain other neurons, etc., and data is thus organized (almost like a binary system!) into patterns of varying complexity that have been seen before (thus the pathway exists and positive signals are sent to the thalamus).

When higher-level (more abstract) nodes are signaling other higher-level nodes, and we lack a way to describe this because so many possible pathways lead down to more specific (and thus describable) information, that is intuition.