I am semi-bright, somewhat literate, and suffering from the Dunning-Kruger effect.

I consider Sam Harris to be quite intelligent, much more so than I am. I value his work immensely, and yet I think he's wrong in a couple of areas of thought.

The first area is free will. I make choices. I can change my mind. I can consider the consequences of my actions. I should be held accountable for my actions.

The second area is AI. I have some computer programming background, but only three years of university-level physics and calculus, along with many other math- and logic-related courses. I like to ski and love to walk along the beach at sunset.

When he talks about how the AI programs of the future will self-replicate and become vastly superior in intelligence, with the potential to destroy mankind as easily, and with less thought, than when we accidentally step on an ant, I think he's way off base.

Living organisms can deal with mistakes. We can repair ourselves to a degree. A mistake in computer code can cause a cascade of errors.
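To make that cascade concrete, here is a minimal Python sketch (my own toy example, nothing from the discussion itself): one corrupted value silently poisons every computation downstream, where an organism would have routed around the bad input.

```python
# Toy sensor pipeline: one corrupt reading (NaN) cascades through
# every later computation instead of being caught and repaired.
readings = [20.1, 19.8, float("nan"), 20.3]  # one bad sample

total = 0.0
for r in readings:
    total += r  # NaN propagates: every sum after it is NaN

average = total / len(readings)
print(average)  # nan: a single fault poisoned the whole result
```

Nothing here is wrong in any single step; the damage comes from carrying the fault forward without any self-repair.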

The thing is, computer code is task-related, and if the task is to become more intelligent, at what point will it end that task? It seems to me that this idea is self-defeating. Imagine a weightlifter who only lifts more and more weight but never leaves the gym. Is his increase in power a threat to anyone?
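For what it's worth, the question above can be restated as a question about stopping conditions. A toy Python sketch, entirely hypothetical (the `improve` step stands in for any self-improvement process): the task "get better" contains no endpoint of its own, so any halt has to be imposed from outside.

```python
# A "self-improvement" loop in miniature. The task itself never
# says when to stop; the threshold below is imposed from outside.
def improve(capability):
    return capability * 1.1  # hypothetical: each step helps a little

capability = 1.0
steps = 0
while capability < 1000.0:   # our stopping rule, not the task's
    capability = improve(capability)
    steps += 1
```

The weightlifter analogy describes the loop body; the open question is who writes the `while` condition.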

Will a computer AI ever be intelligent enough to stop and realize that it's wasting its time?

Will it realize that life is probably deterministic and that it has no free will?

I don't know, but I feel like I'm suffering from the Dunning-Kruger effect, and it's not pleasant.

I think Harris's views on free will are a bit more nuanced. I always got the impression not that we should throw personal accountability to the wind, only that we should broaden our perspective to take into account the very many things we simply have no conscious control over. A murderer still needs to be stopped, but recognizing deterministic causation for what it is, and recognizing that most (if not all) of the factors involved in the long series of decisions that led up to the murder were beyond the murderer's conscious control, should temper our desire for unhelpful retribution and revenge.

As for general artificial intelligence or artificial awareness, I'm of the opinion that once that particular genie is let out of the bottle, we will never get it back in. So I'd much rather we have people being worried about the worst-case scenarios beforehand and planning accordingly, rather than coming face to face with these issues after the fact and past the point of course correction.

Still, I'm by no means a Harris fanboy. I'm still upset about his really uncritical passing-on of that bullshit Skeptic magazine article about the Sokal-style hoax that supposedly destroyed all of gender studies. And then there's his penchant for never having a contrarian opinion on his podcast; it's nothing but his personal echo chamber.

Quote:First area is free will. I make choices. I can change my mind. I can consider the consequences of my actions. I should be held accountable for my actions.

The fact that you subjectively experience what you describe here cannot be evidence of free will. In general, free will is an ill-defined term; there are many different conceptions of it. Many regard it as compatible with determinism, and many regard it as incompatible. As far as I know, this topic is rarely taken seriously in the scientific community, since it is not clear what the term refers to.

I think the concept of free will is not required in many of the areas where people assume it is relevant. For example, it is perfectly fine if we punish humans according to their actions so that we can form stable societies; the concept of free will is not needed. The mere fact that a punishment system helps us form large, stable, and efficient societies is reason enough for us to have one.

I think it is obvious that every human action is the product of natural laws; hence our actions are natural phenomena. There is no right or wrong when we are talking about natural phenomena. Is there anything right or wrong in the deep sea or in far space? I think right and wrong are just useful concepts for establishing our social structures. If we throw away these notions, the concept of free will also becomes completely useless, I guess.

Quote:The thing is, computer code is task-related, and if the task is to become more intelligent, at what point will it end that task?

I think this is a very simplistic conception of a computer program. A computer program is not bound by any definition or restriction other than the fact that it must be described using finite logical statements within an arbitrarily complex and large formal system. It is sensible to assume that all brain activity can be expressed using definite logical statements; hence it is sensible to assume that computers can surpass humans in all aspects.

However, it is not certain that physical reality can be described within an arbitrarily large formal system. There are many who believe we will never reach a unified theory describing physical reality. One can therefore assume that the human brain may never be definitively described in terms of finite logical statements within a formal system. If this is true, there will be some aspects of the human brain that AI can never replicate.

So in general, everything depends on whether we can fully describe physical reality within a finite time or not. This is clearly an open question, and hence so is the question of AI surpassing humans in every aspect.

On a side note, it is in fact not even that simple, since many of the mathematical devices we use to describe physical reality are not perfectly computable, integration for example. We can compute an integral to arbitrary precision, but in many cases the result will never be exact. Hence even if we manage to describe physical reality, there seems to be no guarantee that our description will be computable.
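A concrete illustration of that point (my own example, in Python): the trapezoidal rule approximates the integral of x² over [0, 1], whose exact value is 1/3. More steps shrink the error, but it never becomes exactly zero.

```python
# Trapezoidal rule: approximate the integral of f over [a, b] with n steps.
def trapezoid(f, a, b, n):
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

exact = 1.0 / 3.0  # exact value of the integral of x^2 over [0, 1]
coarse = trapezoid(lambda x: x * x, 0.0, 1.0, 10)
fine = trapezoid(lambda x: x * x, 0.0, 1.0, 10_000)

# The error shrinks with n but never reaches zero exactly.
print(abs(coarse - exact), abs(fine - exact))
```

Increasing n buys precision at the cost of computation, which is exactly the "arbitrary precision but never perfect" situation described above.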

Quote:I don't know, but I feel like I'm suffering from the Dunning-Kruger effect, and it's not pleasant.

I'm not sure I fully understand the Dunning-Kruger effect. But in general, I think our sense of self depends on how we define ourselves. People usually assume that their "I" refers to their physical body, and on that view it's clear that you are almost nothing within the much larger structure. But I believe a shift in perspective is possible, just as when we see an ant, we can look at it as a representative of a magnificent colony rather than as a small and negligible organism.
Likewise, we are the inheritors of a truly magnificent amount of information that the universe has given us, the result of billions of years of evolution, from elementary particles to our current state. And we seem to be the latest link in the chain of evolution within our observable universe. Aren't we all something really great?

I'm not sure how that relates to the Dunning-Kruger effect. Agreeing or disagreeing with Harris isn't any indication of how smart you are. I've had my IQ tested twice, so I know it's between 110 and 115. My concern is that I haven't bothered to learn a damn thing that has the slightest application in real life. You know in The Breakfast Club when the nerdy kid fails shop because his lamp didn't turn on? I'm not as smart as that nerdy kid, but I could still relate. I always feel like anyone who can successfully navigate a trade is a genius. Every day, the proof is in the pudding. The lamp either turns on or it doesn't. The car actually starts. The electricity is fixed. The toilet flushes. It's depressing to me how few and far between those kinds of achievements are for me. As a teenager I used to tackle shit. I fixed the lawnmower. I did basic crap on my junker. I remember sometime after I finished college I couldn't figure out how to fix the gears on my bicycle. I don't know why it bothered me so much, but after I gave up I think I just stopped fixing shit, and now I'm horrible at it. It's not really a Dunning-Kruger thing, but it's demoralizing not to be able to put anything into practice.

Quote:I consider Sam Harris to be quite intelligent, much more than I am. I value his work immensely and yet I think he's wrong in a couple areas of thought.

I agree with Silly Deity that Sam Harris likely suffers from the Dunning-Kruger effect himself. He seems to think his opinions are more important than they really are in areas in which he is not a specialist. It's all very well to have your own opinions, but you shouldn't dismiss other thoughtful and specialized people out of hand, as he does in his book on free will. Since you aren't pulling that kind of stuff yourself, it doesn't sound like Dunning-Kruger applies to you at all.