People are overconfident. This is a clear, reliably replicated finding in psychological research; at this point it can be taken as a given. The brain is a complex machine, however, and any one factor such as confidence interacts in multiple and complex ways with many other mental factors.

Questions that have not been fully addressed include the possible causes and effects of overconfidence. Dunning and Kruger famously isolated one factor – overconfidence (the difference between self-assessment and actual performance) increases as performance decreases. This effect (called the Dunning-Kruger effect) is offered as one explanation for what causes overconfidence – it takes competence to assess one’s own competence, so the least skilled are also the least able to recognize how poorly they are doing.

A new series of studies looks at another factor that may influence overconfidence, ideas about the nature of intelligence itself. Joyce Ehrlinger and her colleagues performed three studies looking at the effect of two theories of intelligence on overconfidence. The two theories in question are the notion that intelligence is largely fixed (called the entity theory) vs the idea that intelligence is highly malleable (called the incremental theory).

Ehrlinger hypothesized that entity theorists would be more overconfident than incremental theorists. In the first of three studies, this one involving 53 university students, she found exactly that. Subjects were asked questions to assess their attitudes toward intelligence, with statements such as, “You have a certain amount of intelligence, and you can’t really do much to change it” and “You can always substantially change how intelligent you are.” They had to rate such statements on a scale from “strongly agree” to “strongly disagree.”

Subjects completed a test of moderate difficulty and were then asked to guess how they did compared to others, from 0%-99%. The researchers found that those subjects who agreed more that intelligence is fixed had an average self-assessment of 76%, while those who felt that intelligence is malleable had an average self-assessment of 55% (by definition the true average was 50%).

So, there was a general overconfidence effect (the technical term for which is overplacement), but it was disproportionately present in the subgroup of subjects who believe that intelligence is fixed.
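The overplacement measure reduces to simple arithmetic: the mean self-assessed percentile minus the true mean percentile, which is 50 by definition. A minimal sketch (the per-subject scores below are hypothetical, chosen only to mirror the reported group means of 76% and 55%; they are not the study’s data):

```python
# Overplacement: mean self-assessed percentile minus the true mean
# percentile, which is 50 by definition when subjects rank themselves
# against each other from 0-99.

def overplacement(self_assessments, true_mean=50.0):
    """Mean self-assessed percentile minus the true mean percentile."""
    return sum(self_assessments) / len(self_assessments) - true_mean

# Hypothetical groups mirroring the reported means (76% vs 55%):
entity_group = [76, 80, 72, 76]       # mean 76
incremental_group = [55, 60, 50, 55]  # mean 55

print(overplacement(entity_group))       # 26.0
print(overplacement(incremental_group))  # 5.0
```

Both groups score above zero, i.e. both overplace themselves on average; the fixed-intelligence group simply overplaces by far more.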

The next two studies addressed the question of whether having a fixed view of intelligence causes overconfidence and, if so, how. In order to make a causal inference you need to manipulate the variable, not just measure its association. So in the second study the researchers did just that – they gave subjects one of two articles to read (saying it was a test of reading comprehension). One article discussed scientific evidence that intelligence is fixed, while the other discussed scientific evidence that intelligence is malleable.

The researchers were interested in two variables: the degree of overconfidence and the amount of time subjects spent on easy questions vs difficult questions. The subjects were taking the test on a computer, allowing them the freedom to spend as much time on each question as they wished, and the program tracked that time.

The results confirmed what the researchers suspected: the subjects exposed to the entity article showed higher overconfidence than those exposed to the incremental article, 68% vs 59%. This difference is not as big as when subjects self-sorted into these two groups, but that makes sense. In the first experiment the researchers were looking at the subjects’ long-held views of intelligence. What is remarkable is that in the second experiment these views were so easily manipulated (at least temporarily) by reading a single article.

Further, the researchers found that those in the entity group spent more time on the easy questions and less time on the difficult questions than the incremental group. The researchers hypothesized from this that focusing on the easy tasks reinforced overconfidence, while spending time on difficult questions was humbling. They explored that question in the third study.

In this study they forced subjects to spend more time focusing on either the easy or the difficult questions, by giving them either time-consuming or rapid secondary tasks. In the attention-to-ease group, those who also had an entity theory of intelligence were more overconfident than their incremental peers, 67% vs 55%. In the attention-to-difficulty group, there was no statistical difference, 58% vs 61%.

What does all this mean? First, I have to point out that this is one moderate-sized study. The results are reasonably robust, but they will need to be replicated before we can have confidence in them. However, taking them at face value:

The results are yet another confirmation of an overall overconfidence effect.

The results suggest that overconfidence is stronger among those who believe that intelligence is fixed, and much smaller among those who believe that intelligence is malleable.

The correlation between beliefs about intelligence and overconfidence seems to be at least partly mediated by attention to easy vs difficult tasks.

This sounds like a type of confirmation bias. We reinforce beliefs about our own competence by focusing our attention on easy tasks that make us feel competent, and glossing over difficult tasks that would have a humbling effect on our self-assessment.

It is very interesting, however, that our beliefs about the nature of intelligence seem to be the determining factor here. Having a fatalistic view – that our intelligence is fixed and there is not much we can do about it – motivates us to reinforce our own sense of overconfidence. The exact relationship there needs to be further explored.

One hypothesis that occurs to me is that people who think there is nothing they can do about their level of intelligence are more highly motivated to feel that their intelligence is already above average, and they seek out confirmation of that belief.

Those who believe intelligence is malleable are more comfortable with also thinking that they are average or a little above average (they are still overconfident, but not by much) because they also feel there is something they can do about it. It can motivate them to focus on challenging tasks in order to become more intelligent, which in turn moderates their overconfidence.

This jibes with another trend in psychological research, the notion that attitudes can affect performance. Those who believe they are lucky actually do have better outcomes, because they are more willing to take advantage of opportunities. In this case those who think they can improve their intelligence are more willing to work hard to do so.

In fact the authors of the current study point to prior research which shows that having a malleable view of intelligence is associated with better school performance. They argue that we should therefore teach children incremental views of intelligence as a motivational strategy.

This seems reasonable, but I do think that this approach generally can be taken too far. The “you can do anything” motivational speaking can become counterproductive if it is unrealistic. I suspect there is a sweet spot, a balance between believing that hard work is effective and will pay off, while having realistic and achievable goals.

Another way to view the results of this study is that everyone has competing motivations with different net results. We all want to feel competent, and want to feel that we can improve ourselves, but we also fear failure and find the prospect of working hard daunting. The question is – which is the stronger motivation?

In my experience (and this is now just my opinion) people tend to fall into various stable conditions. There are those who think that working hard is pointless, so why try. They may feel that a goal or a task is simply beyond their abilities, and so they are relieved from the burden of feeling that they should make an effort.

Others feel that they have the ability to benefit from hard work, and so the work is justified by the results. They are motivated to believe in the possibility of change in order to justify their hard work.

Further, the same person can have both views simultaneously about different things.

Conclusion

It’s interesting to think about all of the competing thoughts and emotions leading to the end result of behavior in people. With this series of studies we may have one more piece of this complex puzzle – people are overconfident, especially if they think intelligence is fixed, partly because they focus on easy rather than challenging tasks.

This kind of knowledge allows us to get outside ourselves, to manipulate the rules of the game rather than just be subject to them.

People certainly can improve their skills and knowledge. Talent counts for something, but hard work also counts. The research so far suggests that most people can master most tasks, it just may be a lot harder for some people and a lot easier for others.

We each have to decide where to spend our time and effort, and as with many things it is probably best to make rational and evidence-based decisions rather than letting emotions make decisions for us.

282 Responses to “What Causes Overconfidence?”

I think this is an excellent analysis of research that sounds very convincing. It also completely agrees with all of my own observations.

In my opinion intelligence results mostly from motivation, and from the patience that results when we are motivated to learn something.

Of course there are inborn aptitudes, and of course we don’t have time to accomplish everything. But it is sad that so many people believe intelligence and talent are fixed, and therefore they shouldn’t even make an effort. Even worse, they can try to convince others they don’t have the intelligence or talent to accomplish something.

Having worked with mentally challenged people, one thing I discovered: they can attain the same profound insights as any other person, regardless of intellect. The major difference is the time it takes. Maybe a highly intelligent person can conclude X in a few minutes, while a person of lower intellect reaches the same insight in anywhere between days and months. The insight itself isn’t any less important.

As such, I deduced that intellect is a consequence of faster (more efficient) working neural networks, not some intrinsic value reserved for the few.

“Faster” does not necessarily mean “more efficient”. Optical illusions exist because the visual pathways (which must be very fast) take short-cuts that induce errors.

Once your cognition pathways have been led down a certain path, it can be difficult to get them to “see” a different path (which might even be more correct).

This is the problem of human hyperactive agency detection. Detecting the “agent”, even if it is not there is a lot safer when predators are around.

“Confidence” is a feeling, and as such is susceptible to all the errors that feelings are subject to. One cannot analyze feelings the way that one can analyze an argument built of facts and logic. One can demonstrate that an argument is based on repeatable facts and valid logic. There is no verifiable train of facts and logic for feelings.

A really interesting question is why people are overconfident in general. As a purely rational individual actor, one should do best by accurately assessing one’s abilities, chances of success, etc. when making decisions. Maybe overconfidence is adaptive at a societal level because groups of humans can benefit from having overconfident actors who reach for unlikely outcomes, thereby propelling the group to greater achievement. They can also fall back on the group in the case of failure, so the negative effects of overconfidence are mitigated. Anyway, it’s an interesting question and surely some sociobiologists have looked into this?

“having worked with mentally challenged people one thing i discovered: They can attain the same profound insights as any other person regardless of intellect. The major difference is the time it takes. Maybe a highly intelligent person can conclude X in a few minutes, the lower intellect reaches same insight in anytime between days to months”

And I suppose if they die before they conclude X then it was simply because they didn’t live long enough.

It got me to rethink some of my beliefs, which is the best compliment I can give a book. She also wrote a much more consumer-oriented book titled Mindset. But I think most readers of this blog would probably prefer the book I linked to over the consumer one.

You may have “faith” that your feelings are based on logic, but if you can’t show me the steps, you have no logical way to assume that.

Pain may sometimes be adaptive, but things like phantom limb pain are artifacts of the limitations of our nervous system when damaged by trauma. Why does something damaged need to remain logical?

Complex systems that make life possible don’t need to be based on logic. Sometimes doing the illogical thing might be advantageous. If a predator is chasing you, being unpredictable is advantageous. If an organism’s motions were guided purely by logic, then its motions would be predictable.

“Feelings and emotions are aspects of the complex systems that make life possible. We often are unable to comprehend them logically, but that doesn’t mean they are not logical.”

…

“Then I don’t know what you mean by “logic.” To me, it means something that happens for a reason.”

You are conflating reason (as in cause and effect) with logic. Obviously emotions do have causes — perhaps chemical or maybe some subconscious association with a good or bad outcome. But if you can’t “comprehend them logically”, it is difficult to make the claim that emotions are in any way logical.

And pretty much everyone but you realizes that it is almost impossible to make correct decisions unless you can avoid being influenced by emotion. Look at the current political circus, especially on the Republican side but also on the Democratic. Most impartial observers agree that Trump, Cruz and Sanders in particular are only doing as well as they are because of emotional appeal because virtually everything anyone of them has said is completely at odds with reality or political possibilities.

The formal mathematical system of logic is just a simplification of natural logic. I was using the word “logic” in a more general sense. How could I possibly not know about formal logic if I’m a computer programmer?

Formal logic may seem complicated but it is extremely simple compared to natural logical systems, such as human language, or DNA.

I just need to chime in and say, this may be the first time I’ve ever seen hardnose have what may be an honest to goodness conversation… So props, let’s keep it up please…or maybe I’m misunderstanding, please advise

There is no such thing as “natural logic”. Logic is a purely abstract activity. That reality seems to sometimes behave according to logic is something we need to measure and find out about reality.

Another way of looking at it, reason, or logic, is “substrate independent”. That is abstract reasoning is “the same” whether it is being done in a brain, on a computer, or on a piece of paper, and it doesn’t change over time, so it can be examined multiple ways.

Feelings are substrate dependent. They are mediated through physiological processes and depend on the “details” of those processes in the organism experiencing the feelings.

The feelings of someone else are much more difficult to understand than the logical reasoning of someone else. To understand someone’s feelings, you need to be able to reconstruct the idiosyncratic cognitive structures that generated and that are experiencing the feelings. You need to know someone pretty well to be able to do that, and that degree of knowledge between strangers is rare. Usually what people do instead is “project”, that is to impute a cognitive and emotive structure and then use that to emulate the individual.

This is the usual mechanism behind prejudice and bigotry. The bigot doesn’t actually understand the person who is the object of their bigotry, they have a false, cartoon, stereotypical version they have constructed and which they imagine is accurate. Their cartoon construct often tells a lot about the bigot, and is often self-serving. That is, it produces the result that the person using it wants to produce; justifications for the feelings of bigotry and hatred that the bigot has for the object of their bigotry. Virtually always the hatred and bigotry comes before any real understanding of “the other” and then precludes real understanding because perception has “latched” onto the false projection.

This is also how people prime themselves to be able to hurt others. They somehow imagine that their victim “deserves” the treatment they are getting.

Fiddlesticks. That’s what courts do, place emotion in the logical frame of the law, and reason. Logic applies across the board, whether people can express it or not by open language. Feelings are shaped by logic or you would be foraging like an ape from 2001 throwing clods of earth at wood ducks, feeling “real good” about that but not “real effective” in my terms. BTW my terms are not your terms, as you crucify open language by sophistry. http://1drv.ms/1tnKM6f

“Feelings are substrate dependent. They are mediated through physiological processes and depend on the “details” of those processes in the organism experiencing the feelings”

I think you may be talking on a different level than HN.
I think he is making an analogy between a futuristic computer and a human brain. At base, a computer runs on algorithms using logic gates. If the analogy between brains and computers holds, a sufficiently advanced computer could have feelings and these feelings would ultimately arise from the algorithms that constitute its program. Unless you are a dualist, this must also hold for human brains.

I am not a dualist. The brain does not run on “algorithms” in the sense that Turing equivalents use algorithms. Algorithms that a Turing Equivalent runs are “substrate independent”, that is they can be run on any Turing Equivalent.

A Turing equivalent could (in principle) “emulate” a brain, but there is no known way to do that and the degree of difficulty remains unknown. I suspect the degree of difficulty will remain unknown for at least a century.

The way humans think logically is by emulating a Turing equivalent. Emulating a Turing equivalent is how humans “think” using algorithms. That is difficult for most humans to do. Usually those algorithms run slow and clunky.

Usually the way people “think” is with feelings. Feelings run “native” in the brain (as in they “run” at the lowest level and there is no “higher” access). Feelings are sensitive to all the biochemical things that are going on in the brain. That is a “feature” because then the feelings have input from everything that is going on, such as energy status, food status, blood pressure, temperature, anxiety levels, immune system activity, fatigue status, odors, oxygen levels, blood sugar, and so on. As the substrate (the brain) changes, so do the feelings.

That feelings are sensitive to everything that is going on is important in deciding what to do. It also makes them useless for doing rational and logical thought.

Feeling that you are correct is a “feeling”. If you don’t have a chain of facts and logic, your feeling that you are correct is just a feeling and could easily be wrong. Each independent chain of independent facts and independent logic that leads to the same conclusion increases the likelihood that your conclusion is correct. If you don’t have multiple chains of facts and logic leading to your conclusions, you shouldn’t treat them as if they were correct.

The analogy between brain and computer is common but very misleading. If the brain is comparable to a computer at all, it would have to be to an analog computer. Analog computers do not use logic gates, or at least not the type of if/then or yes/no gates that are associated with clearly defined, reproducible results.

Analog computers are designed to combine or ‘integrate’ some number of inputs. Generally, these inputs consist of a continuous range of readings without a definite yes/no threshold. Also, the output/answer is rarely as simple as a yes/no answer. Instead, the output is usually a value somewhere within a continuous range of possible outcomes. Often, very slight changes in one or more inputs can dramatically affect the output. Needless to say, this is difficult or impossible to model with traditional logic tables.

IMHO, this is why it has been so difficult to model the brain in software. I think we are underestimating the difficulty by several orders of magnitude. Sure, there seems to be some progress in basic understanding, but we are a LONG way from the level of specificity and reproducibility that is required in order to claim that we understand how the brain works.

The same is true for emotions. For anyone to claim that their emotions are based on logic of any kind is to engage in the post-hoc rationalization already mentioned by d2u.

The brain is comprised of neurons, which sort of are like “gates”. Except that the “gates” have variable thresholds that change as the physiology of the brain changes in response to everything that the brain responds to.

What I am focused on is nitric oxide. Each nitric oxide sensor senses the sum of NO from all sources, including the background. The background isn’t constant; it is a global control parameter that physiology moves up and down depending on what physiology is trying to do.

A high stress state is a low NO state (so as to disinhibit cytochrome c oxidase and maximize aerobic ATP production by mitochondria). NO regulates a lot of things; a major pathway is through soluble guanylyl cyclase (sGC), which has a turn-on threshold of ~10 nM or so. sGC is soluble and generates cGMP in response to NO, so the generation of cGMP is a volume response; if a cell volume element has cGMP in it, then when NO changes, there will be a change in cGMP levels. This is at sub-cellular length scales.

The normal background NO level is sub-nanomolar. That changes with sub-second time constants, mostly due to neurogenic production of NO by nNOS. Neurogenic NO is how acute vasodilation happens. In the brain, the fMRI BOLD signal is due to neurogenic NO causing sGC to make cGMP and dilate smooth muscle in various blood vessels in the brain.

Changes in the background NO level affect the range, onset time, duration, and magnitude of each and every NO signal, and on sub-cellular length scales and on sub-second time scales.

The “normal” brain auto-tunes itself so that all the cells are “in sync”, so that they can “communicate”, and that signals from one cell have “meaning” to the cells it communicates with. The autotuning has to be pretty severe, so that the billions of cells in the brain all work together.

When that auto-tuning gets out of whack, the brain can’t function. I am pretty sure that is what happens during general anesthesia with inhaled anesthetics (for which the mechanism remains unknown). The lipid soluble anesthetics get into the lipid membranes and perturb the various proteins that act as receptors and channels, so that they don’t work “exactly right”, that throws off the synchronization of the brain, and the brain stops working.

That auto-tuning is too complicated to be done via open-loop control. It has to be feedback controlled. I suspect that is why people need sleep, so as to “reset” the auto-tuning.

Why an analog computer? Neurons either fire or they don’t fire. That looks digital to me. Granted, whether or not a neuron fires depends on the sum of the inputs from a thousand other neurons reaching a certain threshold. But each of those thousand other neurons also either fires or doesn’t fire depending on whether its inputs from a thousand other neurons reach a certain threshold. Granted also that those thresholds can be altered by some of those inputs. But that just makes brains much more complicated than the simple logic gates in a present-day computer, which usually have only a couple of inputs and a single output. But who’s to say what computers of the future may be capable of?

And why should the emotional centres in the brain be any different? They are composed of neurons just like every other part of the brain. Why wouldn’t they function similarly? Granted, they may interfere with the “logical” parts of the brain coming to “logical” conclusions, but that is not to say that they don’t function just like the neurons in the “logical” parts of the brain.

(Perhaps petrossa is right that high functioning autistics are the future of the human race. And perhaps these humans will build only high functioning autistic robots devoid of those nasty interfering emotions.)
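The picture of a neuron sketched in that comment – fire or don’t fire, based on weighted inputs reaching an adjustable threshold – can be written as a toy threshold unit. All numbers here are illustrative assumptions, not a biophysical model:

```python
# A minimal threshold unit: the "neuron" fires (returns True) when the
# weighted sum of its inputs reaches a threshold. Modulating the
# threshold changes the output for the very same inputs, which is the
# comment's point about variable thresholds.

def fires(inputs, weights, threshold):
    """Return True if the weighted input sum reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return total >= threshold

inputs = [1, 0, 1]          # upstream neurons: fired / didn't fire
weights = [0.6, 0.9, 0.5]   # synaptic strengths (hypothetical values)

print(fires(inputs, weights, threshold=1.0))  # True  (1.1 >= 1.0)
print(fires(inputs, weights, threshold=1.5))  # False (same inputs, raised threshold)
```

The output is binary, as the comment notes, but because the threshold itself can be shifted continuously by the cell’s physiology, the unit as a whole behaves less like a fixed logic gate and more like an analog device with a digital output stage.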

If that is an accurate description (and I am nowhere near being able to comment for or against it), it seems to explain why there is such unpredictability in the way people think and react when different recreational drugs are used, or when people are upset, tired, cranky, ‘in love’, etc.
‘We’ are just a mess of flawed thinking, and realizing this is the first step in becoming a more balanced and understanding individual, in my opinion.

Remember, we are talking analogy, and analogies, by definition, are not intended to be an exact representation of an object. Rather, an analogy is intended to make it easier to understand or explain a particular concept or idea.

As d2u said, “The brain is comprised of neurons, which sort of are like “gates”. Except that the “gates” have variable thresholds that change as the physiology of the brain changes in response to everything that the brain responds to.” That variability makes it difficult or impossible to describe brain function using binary logic.

The trillions of interconnections combined with environmental changes (e.g. nitric oxide) make it completely impractical to model digitally — even if you could identify the ‘state’ of every ‘input’, there isn’t enough computing power in the universe to calculate the results. Which is why I think an analog computer is a much better analogy for the brain. For all practical purposes, the brain is an analog machine. It will probably be decades or even centuries before we have algorithms that can simulate the multitude of complex interactions that comprise abstract thinking or emotions.

As I have said before, I believe the universe is a digital computer (according to the theory of digital physics). I believe that everything is logic and logic is everything.

I do NOT believe that the defective reasoning of humans is the result of emotions or feelings (except when the brain is disrupted by drugs, insanity, etc.).

I think faulty reasoning is usually the result of incomplete data, and our data is almost never complete. We almost always have to “satisfice” when making decisions, to some degree, as Simon and Newell called it.

Yes, emotions often distort our thinking, but generally in rational ways. Seeing yourself as better than you are, for example, can lead to unfair judgements of others. But it makes you feel ok about yourself, which prevents depression and makes your own survival more likely.

If we knew all the variables — and we never do — then we could see how perfectly logical everything is. We would have to know the states of all future variables as well.

I’m late to the party here, and I suspect this has already been said, but why would a universe that is made of stuff operating according to logic necessitate human thought operating according to the same logic?

You think we have logic gates built into our cognition? If this were the case, logical fallacies could only occur when the subject is missing pieces of information — and there are plenty of thought experiments that put the lie to this.

“logical fallacies could only occur when the subject is missing pieces of information — and there are plenty of thought experiments that put the lie to this.”

Logical fallacies are defined by people who think they know all about what is or is not valid logical thinking. Of course no one knows all about that, so the logic of logical fallacies is often defective.

When the average person consistently makes some kind of logical “error,” it usually turns out that making the error has some kind of advantage for the person. Maybe it prevents them from wasting time, or from feeling bad, or whatever.

Just one example of a *proper* (hn definition) logical fallacy with no benefit to the person committing it and no missing information would undermine the notion that human thought operates according to logic (again, hn definition).

In fact, by your reasoning, all thought, period, would be logical.

Do you agree with this? Please do correct me if I’m mischaracterising your position.

“I do NOT believe that the defective reasoning of humans is the result of emotions or feelings (except when the brain is disrupted by drugs, insanity, etc.). I think faulty reasoning is usually the result of incomplete data, and our data is almost never complete. We almost always have to ‘satisfice’ when making decisions, to some degree, as Simon and Newell called it.”

This does not compute!! I agree that it is literally impossible to have ALL conceivable data available, but WHICH data we choose to “satisfice” is OFTEN heavily influenced by emotions. There are thousands of ‘controversies’ where both sides have access to exactly the same data but interpret it completely differently, e.g. AGW, Alternative Medicine, politics in general, etc.

Many of your own ‘contrary’ opinions are based on ‘gut feelings’ (i.e. emotion) as you even admit occasionally. There sure as hell isn’t any evidence/data to support some of the things you claim.

The idea that “if the brain is made of logical gates, then all human thought would be logical” is an example of invalid logic; the fallacy of composition. It is also an example of something that has a factually incorrect premise. The brain is not made of logical gates.

An analogy I like to use to contrast “feelings” and “algorithmic thinking” (as in using algorithms to find “an answer”) is that of comparing quantity. Humans and many animals can compare quantity. Chimps will choose a pile of more food items over a pile with fewer food items. Humans can do that too using a heuristic that responds to the question “which is more”.

“Which is more” is a quick, easy and cheap (cheap in terms of cognitive overhead and metabolic load) heuristic that works pretty well, usually. Humans can also use the counting algorithm (if the human has learned how to run the counting algorithm). With the counting algorithm, the answer is much more accurate (arbitrarily correct if done carefully). But there is significant “cost” to doing the counting algorithm.

For fewer than 10 objects, the counting algorithm is pretty easy; for 10,000, it would take a long time; for 10^23 items, it is not feasible.

The intuition or feeling that one pile is more is not of arbitrary accuracy the way the counting algorithm is. The counting algorithm can be implemented on paper, with mechanical devices, or electronically. All correct implementations of the counting algorithm give the same answer. The “which is more” heuristic does not always give the same answer, when used on difficult cases; 10,000 vs 10,005.
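The contrast drawn above – an exact counting algorithm vs a cheap, noisy “which is more” heuristic – can be sketched directly. The 3% perceptual-noise model below is purely an assumption for illustration, not anything from the comment:

```python
# Exact counting always gives the same answer; a noisy "which is more"
# heuristic is cheap but unreliable on close cases such as 10,000 vs
# 10,005. The ~3% multiplicative noise is an illustrative assumption.
import random

def which_is_more_exact(a, b):
    """The counting algorithm: compare exact sizes."""
    return "a" if len(a) > len(b) else "b"

def which_is_more_heuristic(a, b, noise=0.03, rng=random):
    """A quick estimate: each size perceived with +/- 3% noise."""
    est_a = len(a) * (1 + rng.uniform(-noise, noise))
    est_b = len(b) * (1 + rng.uniform(-noise, noise))
    return "a" if est_a > est_b else "b"

pile_a = [0] * 10_000
pile_b = [0] * 10_005

print(which_is_more_exact(pile_a, pile_b))  # always "b"

# The heuristic gets this close case wrong a sizable fraction of the time:
wrong = sum(which_is_more_heuristic(pile_a, pile_b) == "a" for _ in range(1000))
print(f"heuristic wrong on {wrong}/1000 trials")
```

On a 0.05% size difference buried in 3% noise, the heuristic is close to a coin flip, while the counting algorithm is deterministic – which is exactly the accuracy-vs-cost trade-off the comment describes.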

“When the average person consistently makes some kind of logical ‘error,’ it usually turns out that making the error has some kind of advantage for the person. Maybe it prevents them from wasting time, or from feeling bad, or whatever.”

Now you’re just babbling — this is a TEXTBOOK EXAMPLE of allowing ‘logic’ to be influenced by emotion.

“There are thousands of ‘controversies’ where both sides have access to exactly the same data but interpret it completely differently, e.g. AGW, Alternative Medicine, politics in general, etc.”

Wherever you find a controversy, you will find that the data is not adequate to definitively confirm either side. Of course, people who are devoted to one side will perceive anyone on the other side as an idiot.

We can see this in politics all the time — people who are devoted to one political ideology see opposing ideologies as crazy, stupid and evil. Their opponents can’t possibly be rational and well-intentioned if what they believe in is a bunch of lies.

But people who are not devoted to any particular ideology can usually see good and bad in all of them.

“Wanting to save time is logical, therefore using heuristics instead of “logic” is often logical.”

Wait, you are talking about different kinds of logic here. One that breaks down the possible criteria for an action into base components – AND/OR… blah blah blah, and another that says, “doing whatever is good for me is logical.”

Not the same thing. I call that the fallacy of equivocation, but I’m sure you benefited from it so it’s still logical, right?

It seems that way to you because you are an extremist. That is, your own perspective seems entirely correct to you, while an opposing perspective seems stupid and crazy.

What you fail to recognize is that no one has all the data, so both sides of any controversy usually contain a mixture of truth and error.

BS. Many controversies have been resolved with empirical evidence which can be verified and reproduced regardless of emotion or opinions. Especially in regard to medical practice, some RESULTS are demonstrably better and thus more correct and logical. If you can’t recognize and admit that, then you are either a liar or a fool.

hn, you are using terms with multiple and idiosyncratic meanings inconsistently.

Logic is an abstract symbolic process. If you change what the symbols mean (the terms in the argument), you change the argument. Using terms to mean one thing in the premises and another thing in the conclusions is not any kind of “logical” argument.

“The kind of heuristics you mentioned are necessary for saving time. Wanting to save time is logical, therefore using heuristics instead of “logic” is often logical.”

Yes, it can be logical to make the decision to not use logic to make a different decision. That does not make the decision made without using logic in some way “logical”.

You want to have an emotional definition of “logic”, as “anything I feel is logical is logical”. That is simply not correct. You are free to adopt that definition, however it is useless in actually thinking logically.

If you think about what logic really is, you can separate it from what you learned from some textbook.

Computer programming is entirely logical. But the only real difference between a computer world and the natural world is degree of complexity. The computer world is restricted and limited, and all variables are known.

“And btw anthropologists who studied primitive people have noted that their ability to think rationally and to be practical was the same as ours.”

“Congrats on admitting that. I wish I had been keeping a bank of things you said so I could demonstrate how contradictory they are.”

I don’t see any contradiction in that statement, or between that statement and anything else I said. I said I think people are generally rational, and errors result from incomplete access to information, especially the future, which we can only guess about.

Yes, after they have been resolved they are no longer controversies. Anything that is still a controversy is probably not resolved yet.

Which is exactly the point and the reason you are so wrong. There are many examples of ‘controversies’ which are caused solely by emotion. Galileo is an excellent example. NO additional information was needed to see the truth of his observations, but many people continued to disagree solely because of emotional attachment to older incorrect and ILLOGICAL beliefs.

Emotions are frequently illogical. And lay off the “not sufficient data” BS. If you are emotionally unwilling to look for or fairly judge the evidence, it still proves that emotions can often cause illogical decisions.

We base a lot of our opinions on what authorities and experts tell us. It has to be that way, since we can’t do all the observations and experiments ourselves. People were even more willing to trust authorities in the past, and that is why certain things remained controversial. The same thing continues today when religious fanatics believe a creation myth and ignore the observations of scientists.

The problem is not emotions. The problem is having to decide between conflicting authorities.

“but you are overlooking the fact that the authorities themselves were misled by emotion.”

Usually I ignore comments by people who are infantile enough to think that insulting your opponents is how to win arguments.

But your statement is so obviously wrong I felt I should answer. The authorities believed what they had been taught by earlier authorities. Obviously. There is no need to stick emotion into everything.

Not that emotions aren’t important, of course they are, but they are not the cause of our logical errors.

You WANT to believe that because you think if you try hard enough you can overcome your emotional biases and be real smart, and then you can feel superior to the ignorant emotional masses.

So … let me get this straight. You are accusing me of making a logical error because my thinking is clouded by emotion. Thanks for proving my point.

BTW, my liar/fool comment was NOT intended as an insult — merely an observation. And if you won’t or can’t see the glaring logical contradiction between your own two statements, then you are undeniably either a liar (troll) or else too foolish to recognize a very simple logical error.

Some people (especially you) would like to think that their opinions are solely based on logic, but incorrect logic doesn’t count since the whole point of understanding and using logic is to arrive at CORRECT decisions.

Interesting stuff. I don’t think anyone here would deny that emotions play a role in decision making. Just the opposite, actually. It’s important to be aware of your emotions so you can try to avoid being misled by them.

BTW, where did you learn to ‘talk’ in a southern drawl? You sounded just like some of my southern friends.

I think you have even more variation in England. I’m always surprised at all the different dialects in London alone. I was even more shocked at how little I could understand when the wife and I toured the entire UK.

I just fact checked myself on ‘mid-Atlantic’, and I was wrong — what I meant was the kind of accent that everyone (except Joey) on ‘Friends’ has, or Kelsey Grammer, or NBC news anchors, or Steve Novella; generic American, not mid-Atlantic. I think I might be revealing a subconscious bias here.

AFAIK, most ‘experts’ consider the Midwest to be the ‘purest’ American accent. Kind of makes sense. There are far more ethnic enclaves on the East Coast, where the early settlements were often formed in isolation from one another and the original settlers remain culturally dominant to this day.

OTOH, the Midwest was the true melting pot where many different cultures mixed together. With no one dominant dialect, I guess the language all blended together.

In spite of what I said about southern accents, on reflection I do have a wealth of examples to pull from. American movies/series do tend to dominate in English speaking countries, and nowadays we seem to be getting a lot of regional accents in our imports — Fargo, The Wire, Sopranos, Bloodline, Ray Donovan. Maybe British drama just ain’t as great; I don’t think we really have any big ticket equivalents. You will obviously be getting crap like Downton Abbey, but in terms of real accents that’s like me watching Humphrey Bogart films (although I have only seen the adverts for Downton, would rather gouge out my own eyes than watch that).

There is a wide range of accents within the UK, no doubt. E.g. the North-West region (I picked this because it’s local to me): within a few miles you can hear Manc transition to Scouse, and then 30 miles later you’re in Wales, where a lot of people still speak a Celtic language, but you know what it sounds like when they speak English. I think we’re in a melting pot situation too, though; accents seem to be becoming less distinct and regionalised — or at least dialect is.

“Conveniently ignoring the many emotional decisions that are unhealthy, dangerous or just plain wrong — no survival value there.”

I have never ever said or implied that errors are impossible or that errors never happen! You are a perfect example of error-prone reasoning. That doesn’t mean you are irrational, just kind of stupid (no insult intended, just an observation).

I suspect you are right about melting pots. I do think the widespread availability of the same entertainment throughout the English speaking world plays a role in that. Also, of course, the Internet and Youtube, etc.

Even though more accents and ethnicities are available, I think that exposure to a greater variety means that no one accent will dominate and they will all tend to blend together.

“If everything is made out of logic (information) then things wouldn’t be haphazard and meaningless and pointless.”

How can everything be made out of rules, observations, relationships, descriptions of possible interactions, etc… without some “stuff” for them to act upon?

Information itself is meaningless without something to refer to, isn’t it? Or are you referring to information that describes itself? It’s pretty much in the category of “logic” in that respect — it needs “stuff” to be meaningful. So what is this “stuff”?

I really have been pondering your deposits. I think you are, like Bill Bailey, part troll, and part sincere but confused. It’s as though your parents were rockers (materialists) so you rebelled by getting into country (woo/religion/dualism/informationism) when you were a teenager, but you just can’t quite purge yourself of your upbringing and still see everything in rock terms, just with substituted vocabulary and a chip on your shoulder.

If you take out the contradictions from what you say, hn, you are a materialist with an identity problem.

The vocabulary implies “stuff” to act upon — and this may be a limitation of vocabulary, or our current understanding. This is all fine, in a way — maybe the rules that govern the interactions of matter are a property of matter itself. Perhaps there is no “space”, with its own set of rules, in which this “stuff” exists.

“I think that exposure to a greater variety means that no one accent will dominate and they will all tend to blend together.”

I think dialect is more transmissible. People use out of region expressions to appear novel/cool; the availability of said out of region expressions is on the up (to say the least). Accents are more closely associated with immediate friends/family, although I can see that changing quite rapidly with the new breed: my friend’s 2 yr old son showed me a YouTube shortcut on my phone that I didn’t know existed (not once, randomly, but seven or eight times, deliberately).

The problem with philosophy is that once you’ve got yourself trapped in a particular philosophical point of view, there is a tendency to see everything in terms of that philosophy. It can actually be a block to real understanding. Science starts with observations and facts about the universe and is driven forward by accumulating more observations and facts. Any philosophical point of view that drives this process along is useful. The rest is simply a waste of time and effort or “for entertainment purposes only”.

There is a real tendency to deny science that contradicts your philosophical point of view. In the case of “the conscious universe”, “the holographic universe”, “universal consciousness”, “the tao of physics”, “the non-local universe”, “wholeness and the implicate order” etc etc etc., the main problem is that quantum physics, at the very least, lends no support for, and, at worst, actually contradicts pretty well all these points of view. A common theme is “consciousness creating reality”. There is no scientific support for that view and, in fact, the scientific evidence is strongly against it.

There is almost a secular religious zeal behind these speculations, similar to the theologians’ secularisation of god as the “ground of all being”. Bohm has been and gone – a mix of interesting ideas and wild speculation in his time. Physics has simply passed him by.

BJ7, that is not a fault of philosophy, it is a fault of human cognition. Once you start thinking about things a certain way (even if that way is later shown to be wrong), that becomes your brain’s normal “default”.

“A common theme is “consciousness creating reality”. There is no scientific support for that view and, in fact, the scientific evidence is strongly against it.”

There is no scientific evidence against it, none at all. It’s easy to say there is scientific evidence against an idea you don’t like, much harder to show what that supposed evidence is, or to explain how and why to think it is against the idea.

Not only is there no evidence that “consciousness creates reality”, it requires a complete misunderstanding of quantum physics to reach that conclusion.

To quote a couple of quantum physicists:

“If Bohr and Heisenberg had spoken of measurements made by inanimate instruments rather than “observers,” perhaps this strained relationship between quantum and mind would not have been drawn. For, nothing in quantum mechanics requires human involvement”

“To describe measurements, we need to add an observer. It doesn’t need to be a “conscious” observer or anything else that might get Deepak Chopra excited; we just mean a macroscopic measuring apparatus. It could be a living person, but it could just as well be the video camera or even the air in the room”

That’s all this nonsense is about – misunderstanding the meaning of the word “observer” in quantum physics!

Rene Descartes: “But I really hit rock bottom when I started to doubt there was any external world at all, and even mathematical truths. I wasn’t even certain that two plus two equalled four.”
Sextus Empiricus: “Wow, that’s intense man. How did you get out?”
Rene Descartes: “I found God”

———

Sextus Empiricus: “Voltaire, enough. This is a safe place”
Voltaire: “What a load of $#!+”

“That’s all this nonsense is about – misunderstanding the meaning of the word “observer” in quantum physics!”

You think you understand it but you don’t, no one does. You hate to think the universe could be conscious, so you believe anyone who says it isn’t. No one has a definitive answer, no one is even close to an answer. You believe what you like to believe, and reality has nothing to do with it.

If the universe is conscious, all kinds of things make sense and fall into place. For example, we would expect life to evolve in a conscious universe. If the universe is dead and mindless, then the evolution of life would depend on zillions of impossible accidents.

In the beginning there was god and through god everything else was created.
In the beginning there was consciousness and through consciousness everything else was created.

There is not even the beginnings of an explanation there.
Science derives the complex from the simple. Those paradigm shifters want to start off with the complex and somehow kid themselves that they have solved the problem!

“Of course the introduction of the observer must not be misunderstood to imply that some kind of subjective features are to be brought into the description of nature. The observer has, rather, only the function of registering decisions, i.e., processes in space and time, and it does not matter whether the observer is an apparatus or a human being”

“The problem with philosophy is that once you’ve got yourself trapped in a particular philosophical point of view, there is a tendency to see everything in terms of that philosophy.”

The problem with your statement is that you are using the term philosophy, when worldview or perspective is more appropriate. If someone is “trapped” by philosophy, that is of their own creation and has nothing to do with philosophy itself. Of course, people may hide behind philosophy (or even ‘science’) to shelter their ideological commitments, but we usually call those out as flawed arguments, or occasionally pseudoscience or woo.

“You hate to think the universe could be conscious, so you believe anyone who says it isn’t. No one has a definitive answer, no one is even close to an answer.”

Once again, HN hides behind false equivalence. In other words, no one really knows anything, so whatever he wants to believe is just as good as anything else. It is a good way of sheltering your beliefs, but it is not a good way of actually understanding anything.

In that post I was referring to the link in the post directly above mine.

If you follow the links in the links, you get the following:
philosophy -> metaphysics -> ontology -> Bohm’s implicate/explicate order
The last is Bohm’s philosophical/metaphysical/ontological interpretation of quantum physics.
I think the poster who provided that link is embedded/buried in Bohm’s philosophical/metaphysical/ontological interpretation of quantum physics.

There is an aspect of the Dunning-Kruger effect that I often hear overstated a bit: that incompetence is the characteristic that makes people unable to truly assess their abilities. It seems to me that illusory superiority explains the phenomenon without invoking a strong asymmetry in competence about assessing one’s competence.

If most people see themselves as above average, the least competent will be overestimating their abilities the most, because they are further from the mean. There is a ceiling effect for the most competent, so they will not overestimate their competence by nearly as much.
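The argument above can be sketched with a deliberately crude model: suppose everyone, whatever their actual rank, reports roughly the same above-average self-assessment. The 70th-percentile anchor and the function names here are illustrative assumptions, not figures from any study.

```python
ANCHOR = 70  # assumed shared "above average" self-image, in percentile points

def self_assessment(true_percentile):
    # Illusory superiority: everyone reports about the same
    # above-average percentile, regardless of actual performance.
    return ANCHOR

def overconfidence(true_percentile):
    # Overconfidence = self-assessment minus actual standing.
    return self_assessment(true_percentile) - true_percentile

# The least competent show the largest gap...
assert overconfidence(10) == 60
# ...while top performers, already near the ceiling of the
# percentile scale, cannot overestimate by much (here they
# even underestimate):
assert overconfidence(95) == -25
```

No asymmetry in self-knowledge is built into this toy model; a Dunning-Kruger-shaped pattern falls out of the shared anchor plus the bounded 0–99 scale.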

“I don’t know guys – science is just too complicated and we simply don’t know enough yet to make any definite statements about – we can never REALLY be sure of anything unless we know absolutely everything about a subject”

There is a distinction between what is known as “the observer effect” and “the measurement problem”.

Non physicists like Chopra use “the observer effect” to posit a role for consciousness in quantum physics. This is simply a misunderstanding of the meaning of the word “observer” in experiments in quantum physics. It simply refers to the measuring device or experimental apparatus (for example the sensors placed at the slits of the double slit experiments). It simply does not require a conscious observer.

Physicists like Bohm, on the other hand, are using “the measurement problem” to wildly speculate about a conscious universe. And it IS wild speculation. Moreover, it doesn’t explain anything. As I pointed out in a previous post, scientists are always looking for simpler explanations for the complex world we see around us. Consciousness is the most complex feature of our universe and demands an explanation. Evolution is the best explanation we have, and it is totally plausible. Consciousness comes at the end of a long line of evolutionary change, starting with simple molecules and building up through single cells, collections of cells, and integrated bodies with brains that gradually develop consciousness, from the rudimentary form that evolved in ants to the highly developed form seen in some humans. It makes absolutely no sense to place consciousness at the beginning of this process. It is a complete non-explanation. It starts off with the very thing we need to explain!

I don’t know what motivates some physicists to posit a conscious universe, but it is not science. It is wild philosophical speculation. It is worthless wild philosophical speculation driven by who knows what internal urges or needs.

It is not wild speculation. Both are theories — that the universe is conscious or it is not conscious (materialism). Neither one has experimental evidence yet.

“Evolution is the best explanation we have and it is totally plausible.”

That statement is impossible to understand. There is nothing about the conscious universe theory that contradicts evolution.

Evolution is plausible in the conscious universe theory, it is utterly implausible in the materialist theory. Even Dawkins, the famous materialist/atheist, admits his theory is extremely implausible. But it HAS to be true, because materialism HAS to be true. Why?? Because he likes materialism and he hates religion.

No one is saying they understand everything, and no one is saying they understand nothing.

Ummm … Straw Man alert. It is more accurate to say that we characterize your position as:
“We don’t know everything, therefore my opinion is just as valid as yours.”

The point is that there are serious challenges to your materialist worldview.

No, just no. There are alternate hypotheses WITH NO EVIDENCE. Big difference.

You constantly claim that we don’t WANT to believe in the supernatural, etc. But that is another straw man. What most of us actually want is to believe in things for which there is good evidence and reserve judgment on everything else. This is the only world view to date with a proven track record of successfully being able to understand, manipulate and predict events in the world we live in.

Your world view, i.e. “You can’t prove I’m wrong, therefore I’m right” is used by thousands of mutually incompatible belief systems — none of which has ever been able to do anything actually USEFUL with the “knowledge”.

It doesn’t matter if it is ESP, quantum consciousness or anything else supernatural. Mere speculation is useless, except perhaps as a testable hypothesis. But if the tests are consistently negative or else not testable at all, then it makes no sense to accept the hypothesis as true, even provisionally. It’s fine to keep testing new things as our total body of knowledge increases and as our techniques improve, but to assume that ANYTHING might be true simply because it has not been disproved yet is a recipe for self delusion.

“but to assume that ANYTHING might be true simply because it has not been disproved yet is a recipe for self delusion.”

No one thinks that, and you know it. Materialism is a philosophy which has not been proven or supported by any scientific evidence. The conscious universe theories of some physicists and neuroscientists are at least on an equal basis with materialism.

It depends on which theory you think makes more sense. Modern physics is pretty hard on materialism, in general, but there is extreme resistance to some of the newer ideas. Sean Carroll, for example, clings desperately to a philosophy of physics that is completely out of date.

Materialism is a philosophy which has not been proven or supported by any scientific evidence. The conscious universe theories of some physicists and neuroscientists are at least on an equal basis with materialism.

Fair enough. After all, there is no “Theory of Materialism” — it’s only a hypothesis. Of course, it has been a very (many would say spectacularly) successful hypothesis when it comes to explaining how the world “seems” to work.

Tell you what … I will concede your point when you start responding to my comments solely by using the fruits of the “conscious universe hypothesis”. Your choice of method will be fine as long as it is above and beyond the known results of the materialism hypothesis. ESP, telekinesis (i.e. typing on my keyboard, etc.) — pretty much anything you want that can’t be duplicated by materialism alone without some additional “conscious universe” refinement or complete replacement of the hypothesis.

It’s only fair after all. You’ve already explained that “The conscious universe theories of some physicists and neuroscientists are at least on an equal basis with materialism.”

Your responses to my posts lead me to the conclusion that you don’t understand the opinions or wild speculations that you support. You know the conclusions their proponents have reached, but you don’t understand how they got there. It is sufficient that these opinions or wild speculations support your world view or philosophy. This is why you don’t engage in the conversation, isn’t it? You don’t know how to. You don’t know how to support those views with arguments because you don’t understand the arguments.

My guess is that you would have no idea how to answer the following questions (your own words without quotes or links) to show that you actually understand these concepts.

– What is “the observer effect”?
– What is “the uncertainty principle”?
(What is the essential difference between the two above?)
– What is “the measurement problem”?
– What is “non-locality”?
– What is “the conscious universe”
(What feature of quantum physics led to the idea of “the conscious universe”?)
– What is “the wave function”?
– What is “collapse of the wavefunction”?
(Is “the wave function” a mathematical symbol or a physical field?)

And, while you’re at it, would you please…

Link to a single experiment in quantum physics that demonstrates that consciousness plays a role in the outcome of that experiment.

Sorry, that is utter BS. Material effects have been (repeatedly and reliably) proven to have material causes. NO exception has ever been verified. At minimum, materialism is a damn good working hypothesis.

Universal consciousness has never been proven to have any validity or even any utility other than perhaps as an interesting premise for a science fiction novel.

“HN, do you distinguish between physicalism and materialism? Do you think that BJ7, Steve Cross and the other regular participants in these threads view physicalism and materialism as synonymous?”

People use words like materialism, physicalism and naturalism basically as an opposition to supernaturalism. But supernaturalism is usually not defined, so no one ever really knows what anyone is talking about.

A theory like Bohm’s implicate orders, for example, can encompass everything we normally call “natural” or “supernatural.”

There are other encompassing ideas besides Bohm’s, and newer ideas involving quantum physics are always coming along (so, of course, materialists cringe every time they hear the word “quantum”).

Materialists don’t like to be called materialists any more, since that philosophy is obviously obsolete (we know there are no ultimate little bits of matter). But “physicalism” and “naturalism” don’t really make sense either.

Materialism, naturalism, whatever, are not responsible for scientific advances. It’s true that science gradually freed itself from the dogmatic authoritarianism of the medieval church. But that does not mean science is inherently materialist, or naturalist, or anti-vitalist. Biology was predominantly vitalist until the 20th century.

Now science will, hopefully, eventually, free itself from the dogmatic authoritarianism of modern materialism.

You’re being played. Steve (and me/others) have explained in clear detail from many angles the defining need for methodological naturalism in science.

Spell it out and ask The Troll about it. He will ignore you (arguing in bad faith as always) and prattle on about something else where he feels he has some advantage in advancing his agenda of keeping the Merry-Go-Round of attention on himself and his dumb thoughts.

So as everyone can clearly see, the resident troll has no idea about even the basics of the subject about which he presumes to pontificate. It’s not that he won’t engage in the conversation, he can’t engage in the conversation. He simply doesn’t understand the arguments. He’s got no idea what is meant by “the conscious universe” or any of the other questions put to him. I’m sure I’m not the only one who reads his posts with the sounds of “blah blah blah” echoing in his brain.

You’ve repeated the same opinions countless times but never provided a shred of evidence or at least a reasonable explanation of why your opinions are valid.

This:

Materialism, naturalism, whatever, are not responsible for scientific advances. It’s true that science gradually freed itself from the dogmatic authoritarianism of the medieval church. But that does not mean science is inherently materialist, or naturalist, or anti-vitalist. Biology was predominantly vitalist until the 20th century.
Now science will, hopefully, eventually, free itself from the dogmatic authoritarianism of modern materialism.

is complete nonsense. It is true that science is not inherently materialistic — it is nothing more or less than the desire to accurately understand cause and effect. But, and this is a huge BUT, in the entire history of the world no cause/effect relationship has yet been found to be anything other than materialistic. Even if that changes in the future, UNTIL IT DOES, it makes no sense to discard “materialism” (or naturalism or physicalism or whatever) as a working hypothesis. At a minimum, this approach should be the most efficient use of time and resources.

Your contention that most scientists are biased towards materialism and against everything else is complete bullshit. Of course good scientists are skeptical, but they are (and should be) equally skeptical about anything without evidence. Many, many scientists would love to believe in an afterlife, or ESP or any number of other cool but unproven concepts. But good scientists follow the evidence wherever it goes without preconceptions.

Scientists want to know “how it really works” and don’t care if it is supernatural or anything else as long as there is REPRODUCIBLE EVIDENCE. That is the only way we know how to truly understand cause and effect. And they don’t stop looking, ever! We have gone from Newton to Einstein to quantum mechanics because of the desire to understand things fully.

And it is complete bullshit to say that “materialists” cringe when quantum mechanics are mentioned. It is true that we don’t understand as much as we would like, but the desire to learn more DOES NOT mean that anyone cares which way the evidence takes us. Good scientists don’t have an agenda and Nobel Prizes are given for NEW, TRUE discoveries — regardless of what they happen to be.

As is often the case, you are projecting your own mental quirks upon others. You DO have an agenda or desired outcome and therefore you want to believe that the other side does also so that you can be dismissive of opposing viewpoints. But the thing is, many (or even perhaps most) of us would love to believe in the things you want to believe. I’ve been reading SF and Fantasy for over 50 years and I’ll bet my list of “cool powers” is at least as long as yours. But I, and apparently most of the other commenters here would rather believe in what is actually true rather than what we wish to be true.

Thanks, but I’m fully aware that HN is a lost cause. I’ve lurked here for years but only recently had enough spare time to start commenting. I realize that I’ll probably be dead before HN matures, but I would like to think that perhaps a few newcomers that are unfamiliar with his history will benefit from my bloviation.

If something has an effect on the material world, and if “science” can detect and understand it, then yes, that is part of materialism. No one denies that except perhaps you.

You are the one making claims that there is something extra, or magical, or supernatural that we “materialists” can’t possibly understand.

Either we can understand something and control (or at least predict) what happens, or else it literally doesn’t matter. Even if your conscious universe is true, it doesn’t make a damn bit of difference if we can’t understand what that actually means in day to day life. What is the impact or significance? Are there changes in either “world” that affect or influence the other? It all comes back to understanding cause and effect. If something in the world is affected by the conscious universe, then it would be nice to know how to make “requests” (or at least understand the “rules”).

If there is some way to perform actions (or get information) back and forth, then yes, I would probably say it is a material effect because it affects our material world. If we can’t control or interpret the information flow, then there is no way to tell the difference between a “conscious” and “unconscious” (or natural, etc.) universe. So we keep looking with an open mind — but we DON’T assume anything without evidence.

The things that obviously have an effect on our world are things that can be perceived with our physical senses or with technology.

Things that do not have an effect on our physical senses or technological instruments either do not exist, or we are just not yet aware of them.

We can assume that things we are not aware of do not exist, or we can accept that there may be things that exist even though we are not aware of them.

It is extremely common for people to be aware of things that do not affect their physical senses. You have to explain these away as delusions, illusions or hallucinations, because, in your opinion, they have not been observed by science.

You are certain that nothing exists unless you can perceive it with your physical senses, or with technology. You are certain that people who perceive or believe in things you can’t see, hear or touch or detect with instruments are unscientific, ignorant and delusional.

You are not open to the possibility of science discovering anything new. You don’t believe that the universe might be conscious because you dislike the idea. It would mean that religion might not be total nonsense.

“We can assume that things we are not aware of do not exist,” — reasonable

” or we can accept that there may be things that exist even though we are not aware of them.” — also reasonable, and we can do BOTH at the same time.

“It is extremely common for people to be aware of things that do not affect their physical senses. You have to explain these away as delusions, illusions or hallucinations, because, in your opinion, they have not been observed by science.” — also reasonable because people have ‘been aware’ of many, many wildly different, often contradictory things. If and when there is corroborating evidence for one, it becomes reasonable to start believing that thing, but not until then.

“You are not open to the possibility of science discovering anything new. You don’t believe that the universe might be conscious because you dislike the idea.” — Wrong beyond the shadow of a doubt. Many, many people, including scientists have been actively searching for ‘new things’, including evidence for ‘something extra’ in the universe. It’s not my fault no one has found anything yet. You want to pretend that no one is trying, instead of admitting that maybe there is no evidence.

Maybe there is evidence that we haven’t found yet, but it seems impossible (for example) that two mutually contradictory things could both be true. Since not all conceivable speculation could possibly be true, why not reserve judgment until we have some way to tell the difference?

“What is the hypothesis grounded in science for the “non-material” effects that you claim?”

Precisely the question he needs to answer before any of us should continue to waste our time with his game.

Back when HN was a lesser-known phenomenon here, I asked nearly the exact same question. Since we are here years later asking the same question, he has clearly neither addressed it nor taken it on board. It is easy to make vague accusations that the scientific community is too close-minded and biased to consider non-materialist views, but if that were true, then it should be easy to come up with good predictive theories to explain the immaterial phenomena he keeps mentioning. Actually, he shouldn’t have to come up with them at all, as they should be very obvious: with 7.1 billion people in the world there have to be some not so ‘blinded’ by materialism who have created a theory that bears fruit. I’m still waiting.

“We can assume that things we are not aware of do not exist, or we can accept that there may be things that exist even though we are not aware of them.”

If we are not aware of something, how can we assume it exists? Scientists have always known that there are things that exist that we have not yet discovered, that is what particle physics is about, as well as the discovery of new species. But in these cases, we have reasons to believe that these may exist, because specific theories predict them. I’m not sure what exactly you are advocating for, and why it would be a good thing… to assume that things exist with no good reason to believe so?

“You are certain that nothing exists unless you can perceive it with your physical senses, or with technology.”

No one is “certain” about this, but how would you assess the existence of something that you are admitting can’t be perceived? This is worse than Russell’s teapot.

And the lesson from Russell’s teapot is that the burden of proof is on you. Show me the immaterial, and you win a prize. Many prizes, actually.

“I’m not sure what exactly you are advocating for, and why it would be a good thing… to assume that things exist with no good reason to believe so?”

I had already said previously that some scientists believe that some discoveries in physics suggest the universe is conscious.

There is no scientific reason to believe otherwise. We do not yet have proof one way or the other. Of course, parapsychology shows that consciousness is NOT merely a product of physical brains. But commenters at this blog are not aware of the parapsychology research, so mentioning it results in meaningless arguments.

Ignoring parapsychology for now, we can agree that the question is open, with no definitive evidence for either side.

There is most certainly no scientific evidence for materialism. People whose brains are not functioning appear to not be conscious, but we don’t know if they are. A brain is required to communicate and interact with this world, so if they are conscious they could not tell us.

But the observation that people whose brains are not functioning do not seem to be conscious is probably the most cited justification for materialism.

“I had already said previously that some scientists believe that some discoveries in physics suggest the universe is conscious…We do not yet have proof one way or the other.”

How can discoveries (vague wording, of course) suggest a conscious universe if you immediately acknowledge that we have no proof one way or the other? Admitting this shows that adding a theory of consciousness to the universe adds NO explanatory power. “Not even wrong,” indeed.

“How can discoveries (vague wording, of course) suggest a conscious universe if you immediately acknowledge that we have no proof one way or the other? Admitting this shows that adding a theory of consciousness to the universe adds NO explanatory power. “Not even wrong,” indeed.”

Assuming the universe is NOT conscious has no explanatory power, is not supported by scientific evidence, and contradicts the experiences of millions or billions of people. It requires the assumption that people are much less intelligent, much more prone to hallucinations and delusions, than observations would suggest.

The two competing theories are: Mind creates Matter vs Matter creates Mind. Neither one has been proven, they are both beyond our current scientific knowledge.

The theory that Matter creates Mind seems impossibly implausible. You have to suspend every ounce of common sense to believe it. But, of course, you have been taught to completely ignore your common sense, intuition and observations, and to only believe what the scientific mainstream has decreed at any moment.

‘With no definition of what “consciousness” is, assertion or beliefs about “consciousness” are not “scientific”.’

Philosophers have their definitions and have been writing endlessly about it for ages. Materialist philosophy defines consciousness as something that somehow is created by physical brains. No one has any idea how or why that would happen, within the materialist framework.

Of course HN, falls back on his tactic of creating false equivalence and then concluding that his absurd pet theory is equally likely.

I must assume that the earlier reference to Russell’s teapot is lost on him, because his reasoning leads to thinking that the existence of a teapot orbiting between Earth and Mars is a 50/50 proposition. He has no concept of an epistemic burden of proof. He values intuition at least as much as rigorous study. How can anyone reach a person so insulated from reality?

If there were scientific evidence showing that matter creates mind then I would believe it, but there isn’t. The ultimate questions about how life and consciousness were created have no answers, and there is no “rigorous” scientific study that can tell us.

When we have a question and science can’t help us answer it, we can consider direct personal experiences, intuition and common sense. All of these are fallible, but we have nothing else when trying to answer these questions.

Ultimately we have to admit we do not know. But the commenters here who are arguing with me claim that they DO know. They accuse me of saying I know the answers, when I have not said that. I have said what I believe is more plausible.

What irks you most is the idea that if the universe is conscious, then believing in “god” would not be utterly ridiculous. You need to think spiritual beliefs are ridiculous, since that is the whole foundation of your sense of being smarter than average, and more in touch with the “truth.”

But, since you have insisted many times that the evidence, despite consistently pointing in the same direction, is lacking, what types of evidence would you find convincing?

In other words, what types of evidence would disprove your idea of a conscious universe (whatever that is)? Give some examples of evidence that would only be compatible with a conscious universe. This is your opportunity to make a coherent point.

“What irks you most is the idea that if the universe is conscious, then believing in “god” would not be utterly ridiculous. You need to think spiritual beliefs are ridiculous”

Because the rational part of your mind (hidden though it may be) KNOWS that otherwise your ‘arguments’ don’t have a logical leg to stand on. Only by dismissing all of the available evidence (albeit incomplete) can you pretend to place intuition on an equal footing.

But you are wrong. We are willing to accept WHATEVER the evidence actually shows. Many of us (myself included) would love to have the comfort of an afterlife or the thrilling possibilities presented by ESP, etc. Collectively, our minds are orders of magnitude more OPEN than yours.

The difference is that we want to believe in the things that are most likely to be true rather than just wishful thinking. There are an almost infinite number of possible worlds but (as far as we know) only one reality. The tools of science and logic seem to be the best way to understand it. Intuition has a far less impressive track record.

Intuition sucks as a way to find the truth. Science has a much better track record. Of course, you have demonstrated repeatedly that you have literally no idea what science actually is. Or, for that matter, logic — you are easily the most illogical person I’ve ever encountered.

“I have said, at this blog, countless times, that natural selection is obviously true.”

Wasn’t your position that evolution was obviously a fact, but the driving mechanism of natural selection has no scientific evidence? I seem to recall you stating something along these lines multiple times… Something about Lynn Margulis-esque intentionality in the direction of mutations. Or maybe I’m conflating you with cwfong.

hn, the sad truth is that your ramblings are not worth close scrutiny.

The problem with much of what you assert is lack of internal consistency. You can’t claim to accept natural selection while denying “Darwin’s theory (and its newer variations)” — well, you can try, but it means that you have no idea what you’re talking about.

No, you can. He’s saying that NS does happen but it’s not the primary mechanism of evolution; that’s (information/god/untestable intuition/whatever he feels like but can’t demonstrate). He’s internally consistent in a way, but that internal consistency is fanciful mush….

The problem is that he does not understand what “random” means in this context; and he misunderstands epigenetics; and he refuses to respond when asked for a mechanism whereby his “directed mutation” could possibly work.

He could have explained this in his response above but he would rather play games.

Sorry, but I disagree. By definition (my definition anyway 😉 ) “natural selection” could never be “natural” if it could be “overridden” at any point in time when it didn’t make “the right choice”. It is impossible to have any kind of guided evolution and still claim that random chance has a role.

We don’t disagree really — I was using your quote as a jumping-off point to snipe at hn. However, I don’t think it’s logically impossible (in a purely hypothetical sense) to have natural selection occurring within very narrow margins around the edges of process driven primarily by something else; sort of like genetic drift in relation to natural selection.

It’s also worth mentioning that the entire premise is incorrect (or at least, not at all explained). The position HN takes hinges on the idea that there’s something ‘more valid’ about spiritual or religious beliefs if they come from someone who believes in a conscious universe (whatever that is – HN sure doesn’t seem to know in a way that’s useful). However, that premise is never defended. It’s entirely possible to have spiritual or religious beliefs through a materialistic framework. Hint: This is probably why a lot of people who don’t actually label themselves as materialists are completely confused as to why some stranger is using it as an insult to them… because it doesn’t actually describe much or provide predictive value about their other beliefs. For example, there are “established” religions (no invoking the FSM necessary) that have no problem combining materialistic worldviews with their faith (not that I find them compelling on a personal level).

Understood, but I hate to give HN any ammunition to twist into claims that he was right — when he is so obviously out of his depth.

Besides, if I can be pedantic for a minute, I still think it is not even hypothetically possible for purely random events to have ANY effect on any conceivable form of guided evolution. Simply ALLOWING a random event to take effect unchallenged is still a form of “guiding” if only by omission.

Factor in the “Butterfly Effect”, etc. and the designer (or conscious universe, whatever) would have to control everything to be certain of a desired outcome.

“I don’t think it’s logically impossible (in a purely hypothetical sense) to have natural selection occurring within very narrow margins around the edges of process driven primarily by something else; sort of like genetic drift in relation to natural selection.”

Natural selection is OBVIOUSLY part of the process. That doesn’t mean it explains evolution.

“It’s entirely possible to have spiritual or religious beliefs through a materialistic framework.”

First of all, I am not here to sell religion.

But anyone who believes in a materialistic framework and claims to be religious or spiritual has probably not thought much about his beliefs. Or maybe he belongs to one of those churches that is actually a social or political organization.

“Intuition sucks as a way to find the truth. Science has a much better track record”

If you read what I wrote, instead of just skimming and missing every other sentence, you would know that I NEVER said intuition was better than science. Never ever ever.

I said that when we consider things that science does not understand we should not automatically discount common sense, intuition, or billions of personal experiences. We should not mindlessly assume that most human beings are hallucinating idiots.

“If you read what I wrote, instead of just skimming and missing every other sentence, you would know that I NEVER said intuition was better than science. Never ever ever.”

If you read what I wrote, you would know that I NEVER said that you did. I said “Intuition sucks as a way to find the truth” and it does. People make mistakes based on intuition ALL THE TIME, e.g. your billions of personal experiences are ALL DIFFERENT.

“It is already known that DNA can respond to changes in the outer environment. The rate of mutation can be increased, and who knows, maybe certain types of mutations become more likely.”

And this is evidence for directed evolution because:

Again – the researchers who found these responses are united against your amateur interpretation of their work – why should I believe you are more correct than they are in their interpretation? There’s nothing incomprehensible (with hindsight; it was admittedly a very amazing finding… in the 80s) about DNA repair mechanisms being initiated and abnormally accelerated in response to environmental stress in a way that promotes mutagenesis. You keep presuming this is a “checkmate” to evolutionary theory, but you never actually explain *why*.

“But anyone who believes in a materialistic framework and claims to be religious or spiritual has probably not thought much about his beliefs. Or maybe he belongs to one of those churches that is actually a social or political organization.”

Ah, of course. Those big bad scientists are so mean for telling people conclusions derived from experiments about the materialistic world are “facts”. Totally unlike the noble HN, who is just here to let you know that your religious or spiritual beliefs count for less unless they are specific to his ideology and worldview. Such a desirable alternative.

“The intention here is not to learn or try to understand what someone is saying, if you perceive him to be on the opposing team.
The intention is to find ways to discount what he says, even if you think it makes sense!
You are working so hard at NOT understanding my point.”

1) All epigenetic processes are the result of random mutation and natural selection.
Therefore epigenetics is well and truly part of the Modern Synthesis. Period. “Directed mutation” has never been demonstrated, and no mechanism for “directed mutation” has ever been elucidated. Moreover, no mechanisms for “directed mutation” are even possible.

2) Epigenetic mechanisms do not change the genetic code and hence cannot be a factor in evolutionary change. They may result in “superficial” changes such as methylation of DNA nucleotides, and these “superficial” changes may be inherited through several generations, but there is no change to the actual genetic code. Therefore they cannot be a factor in evolutionary change.

Epigenetic programming is mediated by enzymes in response to the environment. The mechanisms remain unknown, but epigenetic programming is not “random”.

The processes of epigenetic programming involve enzymes that interact with the environment and the genome. The composition and structure of those enzymes is coded for in the genome, both in the sequences of the genes coding for those enzymes and in the processes that mediate the transcription, translation, and post-translational processing of those enzymes. Therefore, the activity of the enzyme(s) in mediating epigenetic programming is subject to environmental influence and hence evolutionary selection.

The process of epigenetic programming evolved just like every other process that organisms use to have a viable phenotype in an environment.

Epigenetic changes are very likely more important than gene changes in evolution because there are orders of magnitude more degrees of freedom in epigenetic changes than in genomic changes. Many genes that code for proteins have remained unchanged for hundreds of millions of years, and across many species. Similarity between genes is so well conserved that it can be used to generate lines of descent.

None of this suggests anything “directed”, or “guided”, or any “agent” behind any of the processes that are observed to happen.

Organisms exhibit epigenetic programming because ancestors that exhibited epigenetic programming had more descendants than the non-ancestors that did not.

“Since computers are all about programming for a specific goal, please explain how to design something WITHOUT a goal in mind.”

The goal is not always perfectly specified, and a lot of trial and error is usually involved. There is always some kind of goal in mind, of course, but there are always things you don’t know at the beginning, that you find out as the program evolves.

Maybe non-programmers have the idea that writing a program simply involves writing out a list of instructions, but it is nothing like that.

To me, design is based on a fundamental understanding of reality and what you want your designed thing to do. If you don’t know what you want it to do, or if you don’t know the path by which the thing you want it to do will happen, you are not practicing design.

“To me, design is based on a fundamental understanding of reality and what you want your designed thing to do. If you don’t know what you want it to do, or if you don’t know the path by which the thing you want it to do will happen, you are not practicing design.”

If you never tried to write software, then you wouldn’t know that what you are describing is impossible.

And this is why biologists seldom appreciate the level of complexity of natural systems, and the limits of reductionism.

“The intention is to find ways to discount what he says, even if you think it makes sense!”

No, hn. Sometimes a cigar is just a cigar… And sniping is just sniping.

I think I’ve been pretty fair in acknowledging when you have occasionally made sense, and this is not one of those occasions. You’ve had literally years to connect the dots between your various assertions and heavy denialism, and what we have is basically: “Anyone who doesn’t agree with my vague and inconsistent monist philosophy is a materialist (undefined) ideologue; this includes entire mature and uncontroversial scientific fields such as biology. One day new physics (undefined) will validate me and explain the unobserved effects upon which I base the need for new physics. And we don’t know everything so what we do know is exactly as valid as anything I care to make up.”

“The goal is not always perfectly specified, and a lot of trial and error is usually involved. There is always some kind of goal in mind, of course, but there are always things you don’t know at the beginning, that you find out as the program evolves.”

Umm … you seem to be ignoring the fact that we are talking about the influence of design on evolution. Either it is guided as you propose, or the end result is essentially random, i.e. natural selection.

You really can’t have it both ways. Either the guiding intelligence (universal consciousness, whatever) has an end goal in mind or it doesn’t. If it does have a goal, then any “random mutations” will be completely irrelevant as they will necessarily be compensated for in order to meet the end goal. As you correctly point out, version 1.0 is ALWAYS adjusted to meet the end goal. So natural selection via random chance is a meaningless concept in the type of guided universe you propose.

Besides, even though you obviously haven’t thought it all the way through, you really don’t want to admit the possibility of a “trial and error” approach. Either ALL variability is accounted for, or else we are left with what is essentially a “cosmic scientist” i.e. someone who controls some variables with the goal of observing the results.

For all I know, the universe may indeed be a gigantic petri dish and something did set the cosmological constant and other factors specifically to allow life and is observing the results. But I don’t think that is the type of world view that you would be comfortable with. I’m pretty sure you want to believe something along the lines of “each individual life has meaning”. And since you are unwilling to accept the “material world is all there is” hypothesis, then that “meaning” must necessarily be in reference to some external metric. But it is impossible to reconcile that belief with random chance.

Nope, if you allow that possibility, then for all practical purposes the “cosmic scientist” might be interested in HOW MANY people/bacteria live or die under various circumstances, but each individual is going to be irrelevant.

I don’t really care what you believe or want to believe, but you should at least try to be logically consistent. And trying to say that natural selection plays a role in guided evolution is logically inconsistent.

“If you never tried to write software, then you wouldn’t know that what you are describing is impossible.
And this is why biologists seldom appreciate the level of complexity of natural systems, and the limits of reductionism.”

Holy Crap, this is complete BS.

You ALWAYS have a goal, and you are always trying to design at every step of the process. Often the end-users don’t describe the goal well, and goals do change as desires and practicality converge, but you can’t design without a goal.

And before you start telling us how you know so much more because you are a “computer programmer”, be aware that I have over 40 years of IT experience, including a decade as a direct report to the CIO of a Fortune 10 company. I’m pretty sure I have a good idea what it takes to develop a program.

The problem with a “partially guided” process is that if the process is chaotic (as virtually all processes with multiple coupled non-linear parameters are), then the butterfly effect is going to magnify any deviations from a purely “guided path” exponentially over time.
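As a purely illustrative aside (my own sketch, not anything claimed in the thread), the logistic map is a standard toy chaotic system that shows exactly this magnification of tiny deviations:

```python
# Illustrative sketch: sensitive dependence on initial conditions in the
# logistic map x -> r*x*(1-x), a standard toy chaotic system (r = 4).
def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the map from x0 and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-9)  # perturb by one part in a billion

# The initially negligible difference grows roughly exponentially until
# the two trajectories are completely decorrelated.
divergence = [abs(x - y) for x, y in zip(a, b)]
print(divergence[1], max(divergence))
```

A one-part-in-a-billion perturbation ends up producing order-one differences within a few dozen iterations, which is the “butterfly effect” in miniature.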

Nothing in your response to my post negates what I said.
Perhaps you simply misunderstood it, and perhaps I wasn’t clear.

My point was that epigenetic processes are coded for in the genome, and the genome is the result of random mutation and natural selection. Therefore, at base, epigenetic processes are the result of random mutation and natural selection. And epigenetic processes alter the genome ultimately only via the normal evolutionary processes of random mutation and natural selection.
If you ARE saying something other than this, you are simply wrong, but I doubt that is the case.

I will also add that the way you explained epigenetics is what leads people like HN to misinterpret epigenetics as a mechanism for “directed mutation”. That specifically is what I was trying to avoid. And my post WAS directed at HN.

I may respond in more detail tomorrow if time allows, and I welcome any further feedback.

Well I studied both computer science and cognitive psychology (the subjects are closely related, believe it or not). So I know a lot about psychology and I know it contains a lot of stupidly designed experiments, such as the ones showing your beloved Dunning-Kruger effect.

The DNA sequence does not change when a cell is epigenetically programmed.

The only difference between your liver cells, your brain cells, your muscle cells, and essentially every other somatic cell in your body is their epigenetic programming.

The evolved process of epigenetic programming is not “random”. Maybe the early stages of the evolution of epigenetic programming were “random”, but lineages in which those random variations produced unsuccessful phenotypes did not survive, and so there are no extant organisms with “random”-type epigenetic programming.

The process of epigenetic programming did evolve. Epigenetic programming of descendants is not “the inheritance of acquired characteristics”, as in Lysenkoism.

Calling evolution “random” is unfortunate terminology. A better term to use would be undirected. There may be differences in mutation rates depending on what bases code for what. If there is a bias in nucleotide substitutions because of their chemical composition, is that still “random”?

You still haven’t said anything with which I disagree and which disagrees with what I said. The context in which we are commenting about epigenetics is simply different.
I hope this response gives you a clue as to what I mean:

The DNA in our liver and muscle cells is identical. The only difference is that different patterns of genes in the DNA of our liver and muscle cells are switched on and off – which is what makes one cell a liver cell and the other cell a muscle cell. This difference is coded for in the genome of the organism. The genome is the result of random mutation and natural selection. Therefore, epigenetics, at its base, is the result of random mutation and natural selection.

If you don’t agree with this, you’ve simply misunderstood what I’ve said.
In that case, maybe someone else can help.
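For what it’s worth, the same-genome/different-switches idea can be sketched as a toy data structure (purely illustrative; the gene names are placeholders, not real biology):

```python
# Toy illustration (my own, not from the comment above): every "cell"
# shares the identical genome; only the on/off switch pattern differs.
GENOME = ("gene_a", "gene_b", "gene_c", "gene_d")  # placeholder names

def make_cell(active_genes):
    """A 'cell' is the shared genome plus a per-gene on/off pattern."""
    return {"genome": GENOME,
            "switches": {g: (g in active_genes) for g in GENOME}}

liver = make_cell({"gene_a", "gene_d"})
muscle = make_cell({"gene_b", "gene_d"})

# Same DNA, different expression pattern:
print(liver["genome"] == muscle["genome"])      # True
print(liver["switches"] == muscle["switches"])  # False
```

The point of the toy is only that cell identity lives in the switch pattern, while the instructions for setting those switches are themselves part of the shared genome.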

hn, could you recommend any good resources for learning the basics of computer programming? I’m not at all bothered about platform or language, but would ideally not want to pay for something like .net.

BJ7, clearly MM doesn’t know what he is talking about. That he agrees with hn is an example of “crank magnetism”.

I think we are pretty much in agreement as to what constitutes epigenetics.

There is a lot of very sloppy writing about it that talks as if epigenetic programming is an example of teleology, changes happening “for a reason”. It isn’t. The changes happen because the system (genomic DNA plus environment) evolved the epigenetic programming system to modify daughter cells (during differentiation) and germ cells (during epigenetic programming of the next generations) to better “optimize” conversion of substrates into descendants.

The epigenetic programming system is a lot more complicated than DNA replication in the cell cycle. It is more important too. (I know they are both essential, both are 100% important, but there are orders of magnitude more degrees of freedom in the epigenetic programming system, and that is the system that copes with a variable environment).

It depends on what you would like to do with your new programming skills. Learning is always more enjoyable and easier to retain if you can immediately put it to good use.

In almost any of the sciences or engineering fields, python is widely used and available on almost any conceivable platform. There are many newer, more glamorous languages but python is a great place to start to learn program flow, conditionals, etc. It is easy to create “quick & dirty” type programs for one-off problems. Some people do use python for huge, production applications but you quickly get into “religious wars” about which is best for “professional” programs.
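For example, a typical “quick & dirty” one-off (a made-up example, nothing more): tallying word frequencies takes just a few lines in python:

```python
# Made-up example of the "quick & dirty" one-off scripts python is good
# for: tally word frequencies in a chunk of text.
from collections import Counter

text = "the quick brown fox jumps over the lazy dog the end"
counts = Counter(text.split())

# The single most common word, highest count first:
print(counts.most_common(1))  # [('the', 3)]
```

The same three lines scale unchanged from a toy string to a file read with `open(...).read()`, which is exactly the sort of immediate payoff that makes an interpreted language a good first one.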

If you don’t actually code but still depend on computers professionally, I always recommend learning at least the basics of whatever “shell” your OS uses. For any of the *nixes (Unix, Linux, even macOS, which is built on the Unix-derived Darwin), that means “sh”. MS-DOS/Windows has its own shell, although it is not nearly as powerful. Conditionals and program flow are similar to python (or almost any other language), but the syntax is generally unique to each language/shell, although there is often overlap. Just learning wildcard expansion in the shell will often save huge amounts of time compared to a graphical user interface.

For learning in general, I recommend not starting with compiled languages such as C. The immediate feedback of interpreted languages (or just-in-time compilers) like python or the OS shells is much more fun, and you can see and correct mistakes quickly.

I would not start with anything platform specific such as C# or .Net unless I was positive I would never switch platforms in the future. Better to first get the basics down which will work on any platform.

Having said that, if you are in the Apple ecosystem, I’ve been having a lot of fun with Swift lately. It’s free and provides real time feedback on program changes if you have a Mac and use Xcode. Even IBM has supported it with a free Web programming environment.

If you prefer books, I recommend pretty much anything from O’Reilly Publishing. Their editors do a great job of making sure all their offerings are entertaining and informative.

Of course, there are many, many tutorials on YouTube, although quality varies widely. At least they are free, so you can try them all until you find something you like.

Cheers, Steve Cross. I think what I need to do is set myself a goal — e.g. I wrote myself a backup script a few years ago to backup my iTunes library to another drive; it took me ages to work out how to do it but what motivated me to keep trying was the fact that I actually had a need. So if I can find something else I maybe want to automate, I can have a crack at writing a programme to do it.

I’m a complete novice, but I picture epigenetic programming as something akin to the mechanism that “tells” a cell how to grow based on its immediate neighbors, i.e. a heart cell divides into more heart cells, kidney into kidney, etc. Obviously, there are boundary conditions as something gets “big enough” or become valve instead of muscle or needs to be connected to something else.

Goals are good. Although I do recommend trying to start with smaller, more easily accomplished tasks. Regular positive reinforcement is the best motivator I know.

And honestly, although I’m obviously biased, I do think that everyone should learn at least the rudiments of computer programming. Hardly anything we do nowadays is NOT somehow associated with computers, and using them more efficiently is a good thing.

More importantly, I really do believe that learning to create a list of instructions that can unambiguously be interpreted by a “logical” computer can help us learn to think more logically in general.

Clearly, that is not always the case. Some people are more adept at “compartmentalizing”. Or, at least I hope that hardnose is compartmentalizing — if not, then I would absolutely HATE trying to debug one of his programs. 😉

All control of cell function is “local”; that is, individual cells control their own DNA replication, ATP synthesis, protein synthesis and everything else. There are signals that pass through cells to “tell” them what to do, but all communication is necessarily two-way. Signals must be generated to code for information, and then those signals must be decoded to produce the appropriate response inside a cell.

A problem that humans have is hyperactive agency detection, the human default is to impute “agency”, to impute that effects have causes; often due to an agent “causing” the effect. Weather is explained by “weather gods”, wind is explained by “wind agents”, trees are explained by “tree spirits”, cognition is explained by a homunculus called the mind. The hypothesis of a homunculus doesn’t add any explanatory power, it simply postulates an unexplained and unexplainable agent as the “cause”.

Very little of cognition is “algorithmic” in the way that Turing Equivalent computers are “algorithmic”. Most of cognition is via neural networks which don’t output “data strings” (digital data), so much as they output “feelings” (analog changes in the environment(s) in the vicinity of other neural networks) that change the relative weighting functions of various cell inputs.

“Learning” is the self-modification of neuroanatomy such that the new neuroanatomy is able to instantiate the idea being learned. You need a neural network that can “recognize” an idea before you can even think about it.
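The point about neural networks working by changing input weightings rather than outputting data strings can be made concrete with a toy example. This is the classic perceptron rule, a deliberately crude stand-in — not a model of real neurons or of anything specific claimed above:

```python
# A toy illustration of the "weighting" idea: a single artificial neuron
# whose behavior is changed not by rewriting its code but by nudging its
# input weights -- a crude stand-in for learning as self-modification.

def step(x):
    # Threshold activation: fire (1) if the weighted sum is positive.
    return 1 if x > 0 else 0

def train_perceptron(samples, lr=0.1, epochs=20):
    w = [0.0, 0.0]  # input weights
    b = 0.0         # bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = step(w[0] * x1 + w[1] * x2 + b)
            err = target - out
            # Learning = adjusting weights, not editing instructions.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn logical OR purely by weight adjustment.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
print([step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in data])  # [0, 1, 1, 1]
```

Nothing in the network’s “program” changed; only the weights did — which is the analog-weighting contrast with algorithmic, instruction-by-instruction computation being drawn above.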

Most of what you find in all the books and tutorials is details, and those are things you can easily look up. I, and everyone I work with, still often have to look up details of syntax, etc.

I first started to learn programming with an assembler, and I actually recommend that because then you will get some idea of how the computer actually works. Everything Steve Cross mentioned was a high-level language that abstracts away the basics. As a result, they don’t give you a sense of what is really going on.

But as he said, it does depend on what you plan on using it for. If you want to make a website then I recommend PHP and JavaScript. My favorite high level language is actually perl, because I have used it the most. Also I think it’s a good language, either for scripting or for object-oriented programming (there are o-o frameworks for it like Moose). If you have a Mac then perl is already installed, since it comes with unix. I like to use perl for just about anything.

“Can you recommend an assembler for perl? I don’t know if I’m using the correct parlance here.”

An assembler would be for a specific operating system, usually DOS (windows) or unix/Linux.

Don’t start trying to use an assembler right away. You should start learning the basics and have some fun with perl or other scripting languages. If you have a Mac, perl is already there and ready to use. If you have windows, then better to ask someone else who knows what MS stuff would be most fun and easy to start with.

Telling a beginning programmer that they should learn assembly is like telling a beginning woodworker that they should learn how to forge their own nails before they touch any tools.

Knowing assembler isn’t useful – it doesn’t help you understand how a computer works, or make you a better programmer. It forces you to deal with tiny irrelevant minutiae, instead of looking at big problems, breaking them down into their smaller component problems, and then solving those.

Unless you plan on writing low level hardware drivers (which mostly aren’t written in assembly anymore), or realtime code, it’s just going to frustrate and distract you.

Perl runs through an interpreter – it’s a language that was really designed to do text parsing and scripting. It’s got a lot of functionality now, but it’s very hodgepodge in my opinion. The syntax is often obtuse, and it does a whole lot of things that very few other languages do (like having ‘assumed’ variables). I hate perl (everyone has their favorites, and least favorites).

If I was training a new programmer, and had to pick a general purpose language at this point, I would probably push them at Java – it’s got a huge repository of code examples to look at, it’s modern enough that you’ll run into modern principles, and it can do anything. It’s also hugely marketable if you decide it’s something you want to pursue.

I’d also recommend VB.NET – it has full access to the entire .NET ecosystem, has much more English-like syntax than C# or F#, and Visual Studio Express is a fantastic development environment (for being free). (It can run on OSX, BSD, linux, etc. – check out Mono.) I’ve got everything from simple games to large scale production interface engines running at large hospitals (servicing medical devices, modalities, etc.).

I agree with hn that perl is powerful and widely available, but I wouldn’t recommend it as a starting language. It is extremely powerful but it was originally designed as a text processing language by Larry Wall, a linguist. It does have phenomenal pattern matching and is superb for manipulating text, but some of the syntax and control flow grammar are fairly unusual, and I think it is harder to move to other languages.

You don’t really need an assembler (actually it would be called a compiler) for perl or python or any other scripting language. You probably know that all computers really only understand machine code, i.e. 1’s and 0’s. While it is literally possible to manually enter machine code for short programs, it is tedious and error prone.

Humans have developed two basic ways to simplify program entry. One is called “compiling”, which is a two step process. First, you feed the source code into a compiler, which translates human-readable instructions into the actual machine code that the CPU understands. Second, the “binary code” is executed. While this generally produces the fastest and most efficient code, it is more tedious for the test/change/repeat cycles which are common while learning.

The alternative is what we call an interpreter. It’s actually more complicated — there are many different approaches that try to combine the best of both worlds, but in general, you run an interpreter program which gives you a command line into which you can type instructions that are interpreted/executed immediately in one step.

For example, you can just start python, and then enter commands at the “>>>” prompt. There are often different versions or releases of the scripted/interpreted languages, but almost anything will do for learning, and the release installed on your OS is generally fine.
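Python itself is a nice demonstration of how blurred the compile/interpret line is: under the hood, source is compiled to bytecode and the bytecode is then interpreted, but the user only ever sees one step. A small sketch using the built-in `compile()` and the `dis` module (the example code and names are invented):

```python
# Python blurs the compile/interpret distinction: source is compiled to
# bytecode, and that bytecode is then interpreted -- but the user only
# ever sees one step. compile() exposes the hidden first step.
import dis

source = "result = 6 * 7"
code_obj = compile(source, "<example>", "exec")  # step 1: translate
namespace = {}
exec(code_obj, namespace)                        # step 2: execute
print(namespace["result"])  # 42

# dis shows the intermediate machine-like instructions the bytecode
# interpreter actually runs (output varies by Python version):
dis.dis(code_obj)
```

At the `>>>` prompt both steps happen invisibly on every line you type, which is exactly the one-step feel described above.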

BTW, I agree that assembly language is worth learning eventually, but only if you really want to understand what goes on under the hood, but his description is misleading if not outright wrong. An assembler is really just a compiler for a specific family of CPU’s, e.g. Intel 8086, Motorola 68000, etc. Machine language is very, very low level. You use commands like ADD, INC, CMP to operate on individual bytes of data. It takes a LOT of instructions to do anything at all.

It seems that I was typing while RC was posting, but I agree with pretty much everything he said. Java is indeed a pretty good choice for a general purpose language, but in my opinion it is a little harder to learn.
But that could easily be the old dog / new trick problem.

There are two basic approaches (actually philosophies) to programming — procedural vs. object oriented. As with most things in life, it is more complicated than that, especially since most languages evolve to contain parts of each approach. As you might expect, procedural focuses on process, i.e. a “simple” list of instructions (albeit with loops, conditionals, etc.). Object Oriented is more modern and supposedly safer (i.e. less prone to ‘bugs’). OO programs consist of objects or self-contained (hopefully well defined) modules that pass messages back and forth.

I learned a dozen procedural languages before OO was a thing, so I will probably always find things like python to be more “natural” at least for me, but YMMV. For short programs, either approach is fine, but many or even most computer science big wigs feel that OO is better for really large programs.

I was just belatedly proofreading and realized that my comment about interpreters could be confusing. I didn’t mean to suggest that using interpreters was more difficult — it is usually much easier. It’s just that helpful people are always trying to make the coder’s life easier by trying to combine the best features of each approach (e.g. just-in-time compilers that act like interpreters) so it is hard to definitively categorize every environment.

“Knowing assembler isn’t useful – it doesn’t help you understand how a computer works”

That statement is wrong. If you don’t ever learn an assembler you will not become familiar with what a computer actually does.

I learned assembler for DOS when I had just started learning, which everyone told me was a dumb idea. I probably should have started with something higher level, but at that time I didn’t know any better.

But I was always glad I went to all that trouble because it helped me understand computers on a level you never get with other languages. It forces you to find out what the computer understands, what its tiny little brain actually knows how to do.

Assemblers are seldom used any more because they are for specific processors, which makes them inconvenient. And compilers are better at optimizing now so it isn’t worth it, for most practical things.

But everyone who told me it’s a waste of time to learn an assembler was wrong, imo. In addition to getting familiar with a specific processor, you get familiar with an operating system. Since typical processors and operating systems have a lot in common, you learn about processors and operating systems in general.

“Object Oriented is more modern and supposedly safer (i.e. less prone to ‘bugs’).”

Object oriented is BETTER. I didn’t realize that until I had been forced to do it for a long time. It is less efficient, but that doesn’t usually matter. When you write software you are trying to model something, and objects are a natural way to organize a system.

People who got used to procedural languages might not see the benefits of o-o but there are major benefits, and it goes way beyond being safer. I don’t even know if it’s any safer. It is closer to how the mind works.

If mumadadd has a Mac I think it is perfectly fine for him to start with perl. Mumadadd — did you get overwhelmed by all this confusing advice? If you come back, please tell us if you have a Mac or a PC.

Perl comes with all unix/Linux systems and does not need any compiling (it is not really interpreted, by the way; perl programs get compiled every time you run them).

So if you start with perl it will be quick and easy to write your first Hello World.

And it will be extremely easy to learn PHP, which resembles perl, if you ever want to make websites.

“Perl runs through an interpreter – it’s a language that was really designed to do text parsing and scripting. It’s got a lot of functionality now, but it’s very hodgepodge in my opinion. The syntax is often obtuse, and it does a whole lot of things that very few other languages do (like having ‘assumed’ variables). I hate perl (everyone has their favorites, and least favorites).”

No it doesn’t run through an interpreter, although you don’t compile it.

Perl is a great language, loved by many, and only people who don’t use it hate it.

“As you might expect, procedural focuses on process, i.e. a “simple” list of instructions (albeit with loops, conditionals, etc.). Object Oriented is more modern and supposedly safer (i.e. less prone to ‘bugs’). OO programs consist of objects or self-contained (hopefully well defined) modules that pass messages back and forth.”

That is not how you would contrast procedural vs object-oriented. Both are structured — the main program calls sub-programs (usually called subroutines, functions or methods). Everyone now organizes their programs in a structured way, whether they are procedural or o-o.

The alternative to a structured program is a giant pile of spaghetti.

What differentiates o-o programming is, for one thing, inheritance. You do not have inheritance in procedural programming, which makes it very limiting and harder to organize.

There are many other things that you can do with o-o that make it far superior to procedural.

But the actual coding for both styles is similar. You have blocks organized into subroutines that take parameters.
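The inheritance point can be sketched quickly (a made-up shapes example in Python, chosen only because it is the thread’s common reference language):

```python
# A minimal sketch of what inheritance buys you: shared behavior defined
# once in a base class, automatically reused by every subclass.

class Shape:
    def describe(self):
        # Common code, written once; works for any subclass that
        # provides an area() method.
        return f"{type(self).__name__} with area {self.area():.1f}"

class Rectangle(Shape):
    def __init__(self, w, h):
        self.w, self.h = w, h

    def area(self):
        return self.w * self.h

class Circle(Shape):
    def __init__(self, r):
        self.r = r

    def area(self):
        return 3.14159 * self.r ** 2

shapes = [Rectangle(3, 4), Circle(1)]
print([s.describe() for s in shapes])
# ['Rectangle with area 12.0', 'Circle with area 3.1']
```

In a purely procedural program the `describe` logic would have to be duplicated, or dispatch on a type tag by hand, for every kind of shape.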

“I think we are pretty much in agreement as to what constitutes epigenetics”

Yes, but of course you express yourself better than I do.
But I just wanted to clarify that what I was trying to say was no different from what you were saying. I haven’t seen anything you have written here about epigenetics with which I would disagree.

I usually bow out of discussions on topics about which I have some knowledge when someone with better knowledge and communication skills comes along. I’m just a reasonably intelligent, reasonably informed layman on all the topics on which I express (what I believe to be) the opinion of the experts on these topics. I don’t have sufficient depth of knowledge to have an opinion of my own, but I think I’m reasonably good at spotting when someone has a misunderstanding of these topics and what has led to their misunderstanding.

Of course, hardnose doesn’t even have a misunderstanding of the topics on which he presumes to pontificate, he just has no knowledge about them at all. He just latches onto the conclusions of the scientific fringe that support his own world view or philosophy. It is interesting that he has been unable to respond to the discussion about epigenetics on which his “directed mutation” depends.

Some years ago I was going through my son’s old text books and found one called “C how to program”. I read it cover to cover and solved all the problems at the end of each chapter. I then went back and found an updated version called “C++ how to program”. I also read that cover to cover and solved all the problems. A particularly interesting one was solving the Knight’s Tour and displaying it graphically on the computer screen (i.e. you could watch the knight complete its moves over the chess board). You placed the knight on any random square of a chess board and then programmed it to land on every square only once, ending up where you originally placed it. Then I made up some of my own puzzles including a 3D version of a 4 cell Tic Tac Toe. You could play against the computer selecting one of the options “easy”, “hard” or “impossible”. It was not possible to beat the computer if you selected the “impossible” option.

This was a fun twelve months of programming of which I have very fond memories.
Of course it probably sounds like child’s play to you computer experts!
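For anyone curious, the Knight’s Tour puzzle described above can be sketched compactly in Python. This version finds an open tour (the closed-tour variant in the story adds the constraint that the final square be a knight’s move from the start) using Warnsdorff’s heuristic — always jump to the square with the fewest onward moves — with backtracking as a safety net:

```python
# Knight's Tour: visit every square of an n x n board exactly once.
# Warnsdorff's heuristic plus backtracking; finds an open tour.

MOVES = [(1, 2), (2, 1), (2, -1), (1, -2),
         (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def knights_tour(n=8, start=(0, 0)):
    visited = {start}
    path = [start]

    def onward(sq):
        # Legal, not-yet-visited knight moves from sq.
        r, c = sq
        return [(r + dr, c + dc) for dr, dc in MOVES
                if 0 <= r + dr < n and 0 <= c + dc < n
                and (r + dr, c + dc) not in visited]

    def solve(sq):
        if len(path) == n * n:
            return True
        # Warnsdorff's rule: try the most constrained squares first.
        for nxt in sorted(onward(sq), key=lambda s: len(onward(s))):
            visited.add(nxt)
            path.append(nxt)
            if solve(nxt):
                return True
            visited.discard(nxt)
            path.pop()
        return False

    return path if solve(start) else None

tour = knights_tour()
print(len(tour), len(set(tour)))  # 64 64
```

Animating the moves on screen, as in the textbook exercise, is then just a matter of replaying `tour` square by square.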

I love perl — it was one of my “go to” languages a few decades ago and I still use it when it is the right tool for the job. But it is NOT a good choice for a first computer language. It is quirky to put it mildly. It is way too easy to pick up bad habits. Although some things have been grafted on later, it is missing a lot of important features that modern computer scientists consider essential for writing good, maintainable code. It is more difficult to understand and modify older code. For these reasons and others, perl has become steadily less popular for the last 20 years. It can still be a great tool, but just not as your first language.

Most of the popular and widespread languages have strong roots in ‘C’, pretty much the granddaddy of all high level languages, but even C was pretty low level in comparison to today. Many of today’s languages copy the basic control flow and use the same (or very similar) keywords for conditionals and loops, etc. As such, much of the basic learning, i.e. “thinking like a computer” is more easily transferable to a much wider variety of languages. Don’t get me wrong, syntax is often just slightly different, and (as I believe hardnose said) all programmers write code with a stack of reference material close at hand because it must be “exactly right” for the computer to understand it and give you the results you expect.

The guy that designed perl (Larry Wall) was not a computer scientist. He was a linguist, fascinated by sentence structure and how words are used. As you might imagine, his approach to problem solving was vastly different from the traditional computer science methods, and perl reflects that philosophy. Not necessarily bad, but different. For some things it is fantastic. Much better than things like sed or awk, for instance. Personally, I think every professional programmer should be familiar with perl, in spite of its declining popularity.
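The pattern matching perl popularized is now the regex style most languages borrow; Python’s `re` module is a direct descendant. A small text-munging example of the sort of job the thread credits perl with (the log lines are invented):

```python
# Perl-style pattern matching in Python: extract fields from log text.
import re

log = """\
2015-03-02 12:01:07 ERROR disk full
2015-03-02 12:01:09 INFO retry scheduled
2015-03-02 12:02:44 ERROR network timeout
"""

# Pull out (time, message) for every ERROR line.
pattern = re.compile(r"^\S+ (\S+) ERROR (.+)$", re.MULTILINE)
errors = pattern.findall(log)
print(errors)  # [('12:01:07', 'disk full'), ('12:02:44', 'network timeout')]
```

Jobs like this are exactly where sed and awk start to creak and where perl (and its regex descendants) shine.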

Your first few languages will have a big impact on your future approach to programming in general. They will affect how you analyze the goal and how you break problems down into logical steps. There are right ways and wrong ways to do things, or to be fair, more and less efficient ways. There are almost always many different ways to accomplish the same task, so learning the “best” ways first will have lasting impact on your programming skills.

Hardnose was right when he described procedural vs. object oriented — there is a lot to consider that I glossed over. I was trying to give a very simplified example without using words that would be unlikely to have any significance to a beginning coder. But the important point to remember is that the “experts” really have learned the best ways to create good, efficient, easy to understand programs. That is the reason that new languages keep being created and older ones get phased out. Look at any of the charts of computer language popularity to see what I mean.

One final note — don’t mess with Assembly Language for quite a while, if at all. It is indeed fascinating if you are curious about how computers ACTUALLY work, but it will provide no practical benefit unless you are writing production code meant to handle high volumes of data, or else (as RC noted) writing very low level drivers. Where speed or space are of the essence, knowing exactly how many CPU cycles you’re using or exactly how many bytes each data structure takes can be important. But with today’s ever faster systems and ever cheaper memory, it hardly ever matters.

Nope, I would definitely NOT consider your accomplishments to be child’s play. I think everyone should attempt to do something similar because it really does help people think more logically. Or, at least it can if you don’t “compartmentalize” and allow emotions to overrule logic.

For all practical purposes, you’ve already learned the most important portion of what every programmer learns. Beyond the basic logic and control flow of programming, pretty much everything else is simply a matter of learning the most efficient algorithms for a particular purpose and learning the specific Application Programming Interfaces (API) to accomplish the desired task.

Obviously, I’m over simplifying, but the logic really is the most important part.

“It is indeed fascinating if you are curious about how computers ACTUALLY work”

Yes it is the only language that will help you understand how computers think. Everyone says don’t learn it because it isn’t practical, but they are wrong. You won’t ever need it for any job, probably, but it will help you in learning everything about computers.

Don’t get me wrong, I really do love perl — the pattern matching alone is really an art form. But I just don’t think it is the best place to start. While Larry Wall treats the “There’s more than one way to do it” philosophy as almost a badge of pride, it does tend to create code that is confusing for others to understand and difficult to maintain.

Assembly Language is interesting if you’re a geek and/or have very specialized needs, but in general, I don’t think it will help you much with higher level languages. The way you have to do things in tiny, incremental steps and the sheer number of steps required is so different from a higher level language, that I just don’t think there is much overlap in skills.

On the other hand, I once wrote a dis-assembler for my Radio Shack TRS-80 just so I could reverse engineer the ROM, so there is that.

“it does tend to create code that is confusing for others to understand and difficult to maintain.”

It is possible to write a pile of spaghetti in any language. With perl you have to make sure to use strict, and it helps to use an o-o framework (but perl 6 is out and it is a real o-o programming language).

I think mumadadd would love perl and it would be simple for him to get started. But maybe there is something for windows that he would also like (but I hope no one recommends VB script).

Almost forgot. Regarding Larry Wall’s degree in Natural and Artificial languages, that is by no means the same as computer science.

Perl was designed to be easy to parse, i.e. extract meaning from the various keywords and symbols so it can be turned into machine code. I’ve already mentioned the superb pattern matching, and there is no doubt that perl can “understand” source code that would totally confuse other computer language interpreters or compilers. But, simply understanding what the programmer is requesting does not, by itself, ensure that the resulting code is efficient. As you should know, something as simple as variable typing, as well as many other “best practice” concepts, was lacking in the original release.

Many of these features have been added along the years (some more successfully than others) but a house is only as strong as its foundation, and sadly, perl is showing its age. Judging by the declining usage statistics, it seems a lot of people agree.

I absolutely agree, and it will continue to be true as long as programmers continue to forget that computers do what you tell them to do rather than what you want them to do. To that end, it is critical to do everything you can to make sure you are “telling” them what you think you are.

That is really one of the main goals of modern languages — to make it more difficult to write confusing code. Although, as we all know, making something idiot proof only results in better idiots.

“No it doesn’t run through an interpreter, although you don’t compile it.
Perl is a great language, loved by many, and only people who don’t use it hate it.”

Yes, Perl absolutely does run through an interpreter – perl.exe on Windows systems, or the executable ‘perl’ in unix based systems. The fact that the interpreter comes standard in most ‘nix implementations doesn’t change the fact that it’s interpreted.

I use Perl, a lot. It’s fantastic for one or two things – mostly parsing large amounts of text, or for scripting. For everything else, you’re working with things the language wasn’t designed to do; the solutions are usually hacks, and they lead to some of the most obtuse code possible.

People do it, but using Perl to write anything other than those things is really using the wrong tool. It’s a terrible language for beginners – because it’s so nonstandard, and so many of the important modern principles are implemented in really hacky ways. Like Steven said – it’s really showing its age.

As to assembler – one of the hardest things to teach programmers/devs/etc – is that you don’t need to reinvent the wheel – you’re not going to whip up a better encryption algorithm, storage engine, etc, in a couple of weeks. Getting down to the nitty-gritty and trying to build your own is nothing but a waste of time.

You can understand how encryption works without writing a public key system, and you can understand how memory mapping works without writing a memory controller, and you can understand how a computer works (registers, stacks, etc) without spending months learning assembly.

Use the environment that languages/systems give you – if there’s an available API that does something well, you use it. Modern code is (usually) written so you don’t need to know all the nuts and bolts to understand what it’s doing – just like you don’t have to know all the cutoff points in the fuel mapping for your car to drive – that’s been abstracted away, because the current implementations are very efficient, and because people and programmers have better things to do.

Sometimes I wish I hadn’t learned assembly, or fortran, or cobol, or MUMPS, or LISP (especially LISP), or even C – starting my career in these older, more ‘nitty-gritty’ languages made it really difficult for me to transition over to the much more modern full featured languages with full featured dev environments – I get way too tied up in ‘how is this happening’ – when in a lot of cases that’s largely irrelevant in the production of business applications.

Again – telling someone they need to learn assembler to know how computers work is like telling someone they need to understand forging to know how cars work. Very few programmers/developers need to understand the exact machine code details for a particular processor.

Seriously, the kids these days don’t realize how easy they have it. Modern compilers can optimize code better than even the best assembly wizards of yesteryear.

Most important, you can’t really become a great coder (or anything else) unless you can overcome emotion and simply learn to use the best tool for the job at hand. I can’t count how many programs I’ve seen that could have been improved by a better choice of language. Too many people simply use their favorite language and wind up losing the benefits of languages specifically designed to solve certain types of problems.

“And perl is NOT interpreted. The whole program is compiled every time it runs.”

Ummm … the Wikipedia entries for perl and Larry Wall BOTH call perl an interpreted language.

Besides, I don’t think you have a good understanding of what the two terms (interpreter, compiler) actually mean.

In a very real sense, ALL computer languages from Assembly on up must be interpreted from human readable code into the binary object code that computers understand.

The term “compiler” arose in the bad old days when ALL computer languages were pretty low level, i.e. assembly code. Even ‘c’ required a LOT of code to get anything significant done. It was customary to break every program (except extremely small ones) into a series of modules which were separately compiled into machine code and then linked into one “object program”. The object program is then executed by the computer. This “binary” could then be run as often as needed without recompiling unless changes were needed.

This allowed for changes to one or two modules to be made more easily and quickly without having to reinterpret every part of the program every time. This was a pretty big deal at the time because computers and compiler programs were both a hell of a lot slower than today.

The definitions have blurred in the last fifty years as “improvements” are added to each process, but for all practical purposes, interpreters look like one step to the enduser and compilers look like two or more steps that are required to run a program.

There actually are true compilers available for perl (and almost all interpreted languages) but they are uncommon. Their purpose is usually to “hide” the source to try to prevent people stealing commercial applications. In common usage, almost all perl (and python, ruby, etc.) code is run by a single step interpreter program.

“Besides, I don’t think you have a good understanding of what the two terms (interpreter, compiler) actually mean”

Don’t worry, I know what interpreter and compiler mean. Originally, interpreted languages such as LISP interpreted each line, rather than compiling the whole program, while compiled languages such as C output a machine language program. Nowadays many languages are something in between. Java does not strictly fit the definition of a compiled language, and perl does not strictly fit the definition of an interpreted language.

“Maybe you could say an assembler interprets and a compiler interprets, but that is not how “interpreter” is defined.”

I never said that it was. I was simply trying to explain the reasoning to people like mumadadd if he is still listening.

What I actually said was: “for all practical purposes, interpreters look like one step to the enduser and compilers look like two or more steps that are required to run a program”, which is completely true.

But the real point is that perl is considered to be an interpreted language by pretty much everyone except you.

Apologies for my lack of participation in this discussion, given that I was the one who kicked this topic off — I was out of town all weekend with friends, so no opportune moments for comments. The pointers are appreciated though, and I’ve been surprised at the level of knowledge available on a topic that rarely crops up. 🙂

“If you knew more about perl you would know there isn’t a simple answer.”

Why can’t you just admit when you are WRONG, WRONG, WRONG !!!

I said several times that improvements are regularly made and that the definitions are blurred, but that “for all practical purposes, interpreters look like one step to the enduser” and that is really the most important fact for a novice coder to consider when deciding which language to learn.

If you knew more about computer science, you would know that MANY different approaches have been used when translating human readable code to machine language. As with many things in life, humans often pick names that are essentially arbitrary but happen to be convenient.

“for all practical purposes, interpreters look like one step to the enduser”

Different people define it differently. You can find plenty of debates over this. In the past, interpreters were defined more specifically, as executing each line rather than parsing and compiling the whole program.
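That older, stricter sense of “interpreter” can be sketched as a toy that parses and executes one line at a time, never seeing the rest of the program up front. Everything below is illustrative (a made-up mini-language of assignments), not how perl or any real interpreter works:

```python
# A toy interpreter in the classic line-at-a-time sense: each line is parsed
# and executed before the next line is even examined. A compiler, by
# contrast, would translate the whole program before running any of it.
def run_line_by_line(program, env):
    for line in program.splitlines():
        name, _, expr = line.partition("=")
        # Parse and evaluate just this one line, using the bindings so far.
        env[name.strip()] = eval(expr.strip(), {"__builtins__": {}}, env)
    return env

env = run_line_by_line("x = 2\ny = x * 3", {})
print(env["y"])  # prints 6
```

A modern “interpreted” language typically does not work this way at all; it compiles the whole program to an internal form first, which is exactly why the old line-by-line definition no longer fits.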

The point is, if you intend to communicate useful information, you should stick to words the way MOST people use them.

Yes, interpreters are implemented in many different ways but once again, for all practical purposes, a novice coder is only going to care about (or even understand) the fact that it requires one step to execute a program instead of several. Most professional coders don’t even care about the details as long as performance is adequate.

You are perhaps the most insecure person I’ve ever encountered. Your pedantic insistence that you are correct in some alternate universe serves only to convince the rest of us that your insecurity is justified.

Two very different things — real professionals do learn, and they know when it is important to care and when it isn’t, as the FULL quote clearly indicates, i.e. “Most professional coders don’t even care about the details as long as performance is adequate.”

Like I said, STOP DIGGING. When the “last word” is consistently wrong, it doesn’t help your cause.

You know, hardnose, your life would be a lot more pleasant if you didn’t have to be RIGHT all the time (as in: “I’m right, therefore everyone else MUST be completely wrong”).

Your insecurity apparently makes it impossible for you to acknowledge or even recognize when people MOSTLY agree with you. You either haven’t read or aren’t paying attention to what I’ve said.

I explicitly said that learning machine code was worthwhile if you need to know what is REALLY going on with the hardware, or simply if you happened to be curious. I also mentioned that I personally got a lot out of it. But seriously, do you really have to get insulted just because a few of us believe that a novice coder with no plans to turn pro (AFAIK) doesn’t need to learn machine code and may not ever really benefit much from learning it?

Similarly, I in particular, but also RC, mentioned that perl is very powerful, useful and worth learning, but is not a very good place to start. Compared to most languages, it is quirky to say the least, and likely to make learning other languages a little more difficult should mumadadd be so inclined. And if he never wanted to learn another language, then why not pick a starter language that is currently more mainstream? At least that way, his knowledge would be more likely to remain useful for a longer time.

Perl usage has been declining for a long time. Perl 6 might (but probably won’t) change that pattern. Because it tries to retain a fair amount of continuity and even has a backward compatibility mode, it winds up being more complicated rather than less — at least for a new user. If you already “think in perl”, it is not necessarily a big deal but you really should learn how to change your perspective to that of different people on occasion.

Here is an article that explains how the so-called neo-Lamarckian “inheritance of acquired characteristics” can be explained by the Baldwin Effect, and how the Baldwin Effect, via a combination of Niche Construction and Genetic Assimilation, is just ordinary “random mutation and natural selection” (and therefore part of Modern Evolutionary Theory).