zulupineapple

there doesn’t seem to actually exist a word in English for the thing you know perfectly well people mean when they say “chemicals”.

Maybe that’s because “chemicals” isn’t a natural category? I don’t really know what is meant by that word. It could be something about the manufacturing process. But possibly it just means “complicated words listed on the packaging” and nothing more.

I am not saying: you, yes you, have to talk to people who use words you can’t stand or in ways you can’t stand

Yes. And if I don’t want to talk to people who use those words, and someone says those words to me, then I’m going to reply with something like your “No” replies. Thus, saying “Technically, everything is chemicals” is, in fact, very reasonable.

I think the species average belief for both “earth is flat” and “I will win a lottery” is much less than 0.35. That is why I am confused about your example.

Feel free to take more contentious propositions, like “there is no god” or “I should switch in Monty Hall”. But, also, you seem to be talking about current beliefs, and Hanson is talking about genetic predispositions, which can be modeled as beliefs at birth. If my initial prior, before I saw any evidence, was P(earth is flat)=0.6, that doesn’t mean I still believe that the earth is flat. It only means that my posterior is slightly higher than that of someone who saw the same evidence but started with a lower prior.
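To make that concrete, here is a toy Bayes-rule calculation (the specific priors and likelihood ratio are made up for illustration): two people see the same evidence, and the one born with the higher prior ends up with the higher posterior, even though both posteriors are tiny.

```python
def posterior(prior, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Strong evidence against "earth is flat": P(evidence|flat) / P(evidence|round) = 1/1000.
lr = 1 / 1000
print(posterior(0.6, lr))  # born with prior 0.6 -> posterior ~0.0015
print(posterior(0.4, lr))  # born with prior 0.4 -> posterior ~0.0007
```

So a high prior at birth says nothing about what I believe now, only about how my posterior compares to someone else’s.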

Anyway, my entire point is that if you take many garbage predictions and average them out, you’re not getting anything better than what you started with. Averaging only makes sense with additional assumptions. Those assumptions may sometimes be true in practice, but I don’t see them stated in Hanson’s paper.

But toddlers should agree that they should be weighted less highly, since they know that they do not know much about the world.

No, idiots don’t always know that they’re idiots. An idiot who doesn’t know it is called a “crackpot”. There are plenty of those. Toddlers are also surely often overconfident, though I don’t think there is a word for that.

If the topic is “Is xzxq kskw?” then it seems reasonable to say that you have no beliefs at all.

When modeling humans as Bayesians, “having no beliefs” doesn’t type check. A prior is a function from propositions to probabilities, and “I don’t know” is not a probability. You could perhaps say that “Is xzxq kskw?” is not a valid proposition. But I’m not sure why we should bother. I don’t see how this is relevant to Hanson’s paper.

I am having trouble cashing out your example in concrete terms; what kind of propositions could behave like that? More importantly, why would they behave like that?

The propositions aren’t doing anything. The dice rolls represent genetic variation (the algorithm could be less convoluted, but it felt appropriate). The propositions can be anything from “earth is flat” to “I will win a lottery”. Your beliefs about these propositions depend on your initial priors, and the premise is that these can depend on your genes.

For example, you might think that evolutionary pressure causes beliefs to become more accurate when they are about topics relevant to survival/reproduction, and that the uniformity of logic means that the kind of mind that is good at having accurate beliefs on such topics is also somewhat good at having accurate beliefs on other topics.

Sure, there are reasons why we might expect the “species average” predictions not to be too bad. But there are better groups. E.g. we would surely improve the quality of our predictions if, while taking the average, we ignored the toddlers, the senile, and the insane. We would improve even more if we only averaged the well educated. And if I myself am an educated and sane adult, then I can reasonably expect to be outperforming the “species average”, even under your consideration.

But if you really think that there is NO reason at all that you might have accurate beliefs on a given topic, it seems to me that you do not have beliefs about that topic at all.

If I know nothing about a topic, then I have my priors. That’s what priors are. To “not have beliefs” is not a valid option in this context. If I ask you for a prediction, you should be able to say something (e.g. “0.5”).

If there is no pressure on the species, then I don’t particularly trust either the species average or my own prior. They are both very much questionable. So why should I switch from one questionable prior to another? It is a wasted motion.

Consider an example. Let there be N interesting propositions we want to have accurate beliefs about. Suppose that every person, at birth, rolls a six-sided die N times and then for every proposition prop_i they set the prior P(prop_i) = dice_roll_i/10. And now you seem to be saying that for me to set P(prop_i) = 0.35 (which is the species average) is in some way better? More accurate, presumably? Because that’s the only case where switching would make sense.
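Here is a rough sketch of the simulation I have in mind (the ground truth, population size, and scoring rule are arbitrary choices of mine, not anything from Hanson’s paper):

```python
import random

random.seed(0)
N_PROPS, N_PEOPLE = 100, 10_000

# Arbitrary ground truth for each proposition, independent of anyone's genes.
truth = [random.random() < 0.5 for _ in range(N_PROPS)]

# Each person, at birth, rolls a d6 per proposition and sets P(prop_i) = roll_i / 10.
people = [[random.randint(1, 6) / 10 for _ in range(N_PROPS)]
          for _ in range(N_PEOPLE)]

# The species average for every proposition comes out near (1+2+...+6)/6/10 = 0.35.
species_avg = [sum(person[i] for person in people) / N_PEOPLE for i in range(N_PROPS)]

def brier(preds):
    """Mean squared error of probabilistic predictions; lower is more accurate."""
    return sum((p - t) ** 2 for p, t in zip(preds, truth)) / N_PROPS

print(brier(people[0]))        # one person's dice-roll priors: roughly 0.30
print(brier(species_avg))      # the ~0.35 species average: roughly 0.27
print(brier([0.5] * N_PROPS))  # flatly saying "I know nothing", 0.5: 0.25
```

The average does wash out the individual noise (so it scores slightly better than any one person), but since the rolls carry no information about which propositions are true, both lose to the maximally ignorant 0.5. Getting anything genuinely better requires extra assumptions tying the rolls to the truth.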

Then the question is entirely about whether we expect the species average to be a good predictor. If there is an evolutionary pressure for the species to have correct beliefs about a topic, then we probably should update to the species average (this may depend on some assumptions about how evolution works). But if a topic isn’t easily tested, then there probably isn’t a strong pressure for it.

Another example: let’s replace “species average” with “prediction market price”. Then we should agree that updating our prior makes sense, because we expect prediction markets to be efficient in many cases. But if we’re talking about “species average”, it seems very dubious that it’s a reliable predictor. At least, the claim that we should update to the species average depends on many assumptions.

Of course, in the usual Bayesian framework, we don’t update to species average. We only observe species average as evidence and then update towards it, by some amount. It sounds like Hanson wants to leave no trace of the original prior, though, which is a bit weird.
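In toy numbers (mine, not Hanson’s), the difference looks like this. I’m using logarithmic opinion pooling as one standard way to “update towards” a signal; a full Bayesian treatment would need a likelihood model for how the species average relates to the truth.

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + math.exp(-x))

my_prior, species_avg = 0.6, 0.35

# "Update to": the original prior leaves no trace.
update_to = species_avg  # 0.35

# "Update towards": pool in log-odds space, weighting the species average by
# how much I trust it (w = 0 ignores it entirely, w = 1 reproduces "update to").
w = 0.7
update_towards = inv_logit((1 - w) * logit(my_prior) + w * logit(species_avg))
print(update_towards)  # ~0.42, somewhere between my prior and the average
```

How large w should be is exactly the question of how reliable a predictor the species average is.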

The proposition that we should be able to reason about the priors of other agents is surely not contentious. The proposition that if I learn something new about my creation, I should update on that information, is also surely not contentious, although there might be some disagreement about what that update looks like.

In the case of genetics, if I learned that I’m genetically predisposed to being optimistic, then I would update my beliefs the same way I would update them if I had performed a calibration and found my estimates consistently too high. That is unless I’ve performed calibrations in the past and know myself to be well calibrated. In that case the genetic predisposition isn’t giving me any new information—I’ve already corrected for it. This, again, surely isn’t contentious.

Although I have no idea what this has to do with “species average”. Yes, I have no reason to believe that my priors are better than everybody else’s, but I also have no reason to believe that the “species average” is better than my current prior (there is also the problem that “species” is an arbitrarily chosen category).

But aside from that, I struggle to understand what, in simple terms, is being disagreed about here.

Ok, that’s reasonable. At least I understand why you would find such an explanation better.

One issue is that I worry about using the conditional probability notation. I suspect that sometimes people are unwilling to parse it. Also the “very low” and “much higher” are awkward to say. I’d much prefer something in colloquial terms.

Another issue: I worry that this is not less confusing. This is evidenced by you confusing yourself about it, twice (no, P(C|B), i.e. P(rain|sprinkler), is not high, and it doesn’t even have to be that low). I think, ultimately, listing which probabilities are “high” and which are “low” is not helpful; there should be a more general way to express the idea.

So when I said “rain is the correct inference to make”, you somehow read that as “P(rain) = 1”? Because I see no other explanation for why you felt the need to write entire paragraphs about what probabilities and priors are. I even explicitly mentioned priors in my comment, just to prevent a reply just like yours, but apparently that wasn’t enough.

Characterizing this as “A implies C, but (A ∧ B) does not imply C” is tendentious in the extreme (not to mention so gross a simplification that it can hardly be evaluated as coherent view).

Ok. How do you think I should have explained the situation? Preferably, in less than four paragraphs?

I personally find my explanation completely clear, especially since I expected most people to be familiar with the sidewalk/rain/sprinkler example, or something similar. But then I’m aware that my judgements about clarity don’t always match other people’s, so I’ll try to take your advice seriously.

Yes, I explicitly said so earlier. And propositional logic makes no sense in this context. So I don’t understand where the confusion is coming from. But if you have advice on how I could have prevented that, I’d appreciate it. Is there a better word for “implies” maybe?

Maybe you’re talking about the usual logic? I explained, in the very comment you first responded to, that by “X implies Y” I mean that “observing X leads us to believe that Y”. This is a common usage, I assume, and I can’t think of a better word.

And, if you see a wet sidewalk and know nothing about any sprinklers, then “rain” is the correct inference to make (depending on your priors). Surely we actually agree on that?

When B is not known, or known to be false, A implies C; and when B is known to be true, A&B do not imply C. Surely we have no actual disagreement here, and I only somehow managed to be unclear that, before I introduced B, it wasn’t known?

Of course not, that would be too easy. He does, at some point, explicitly acknowledge that environmental effects exist and are “very important”. And then he proceeds to do nothing with those effects. If you admit that a factor is very important, but then arrive at the same conclusions as someone who believes that the factor has 0 effect, that is still a problem.

I imagine the thinking goes like this. He observes that there are relevant factors X, Y, Z. He notices that Z has a lot of uncertainty, or is hard to work with, etc. Then he decides that he will look at X and Y in isolation. And then he assumes that whatever conclusions X and Y have led him to actually have some bearing on the real world. Depending on Z, this may or may not be the case. If it happens that Z really is 0, then you arrive at the correct conclusion, but even then, the path you took to get there is still wrong.

Of course, all of this is very hard to talk about when nobody gives their actual probability estimates (and giving probability estimates is also very hard). In particular this is because the value of P(“inferior genes”) doesn’t seem central to Harris. There is a good chance that I’m misreading him and that his estimate of P(“inferior genes”) is not as high as I assume it to be, and that he really is honestly updating on the environmental differences.

Still, the claim that Harris is making an error seems more likely to me. At least, do you now better understand what error I’m accusing him of? (By the way, I’m also accusing him of the error pointed out by RamblinDash, but didn’t want to duplicate it.)

If you say that, I’ll have to raise my confidence in Caplan a little bit. But I’d raise it a lot more if you hinted at what those considerations actually look like. How high is his confidence in his conclusions, and where is that confidence coming from?

No, you’re missing the point. I have very little interest in whether P1 or P0 or some P2 is true (and I don’t really see Jacob strongly favoring one of those either).

Instead, my bold claim is that Harris is not actually evaluating the environmental effects. He says that he uses the “default hypothesis”, but that only makes sense if you have zero information about the environmental differences, which is not the case. Ignoring data you already have, in order to make some conclusions look more reasonable, is an error.

Now, on the object level, I admit that I favor P0, because the idea of genetic differences doesn’t really make much sense to me. Why should they be there? Because only smart people managed to migrate out of Africa? Maybe by pure chance? Also, it seems to me that there are some hard-to-measure environmental factors, so it’s not surprising if the measured part of environmental contributions is small. But I actually know very little about this issue.

I’m talking about the kind of “X implies Y” where observing X leads us to believe that Y is also likely true. For example, take A=“wet sidewalk” and C=“rain”. Then A implies C. But if B=“sprinkler”, then A&B no longer imply C. You may read this, also by Eliezer, which is somewhat relevant.
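To put made-up numbers on it (the exact priors, and the independence of rain and sprinkler, are my simplifying assumptions):

```python
# Made-up priors: P(rain) = 0.3, P(sprinkler) = 0.2, independent of each other.
# The sidewalk is wet whenever it rained or the sprinkler ran, and rarely otherwise.
P_rain, P_sprinkler, P_wet_other = 0.3, 0.2, 0.05

def p_world(rain, sprinkler):
    return (P_rain if rain else 1 - P_rain) * (P_sprinkler if sprinkler else 1 - P_sprinkler)

def p_wet(rain, sprinkler):
    return 1.0 if (rain or sprinkler) else P_wet_other

worlds = [(r, s) for r in (True, False) for s in (True, False)]

# P(rain | wet): wet sidewalk, nothing known about the sprinkler.
p_wet_total = sum(p_world(r, s) * p_wet(r, s) for r, s in worlds)
p_rain_given_wet = sum(p_world(r, s) * p_wet(r, s) for r, s in worlds if r) / p_wet_total
print(p_rain_given_wet)  # ~0.64: "rain" is a reasonable inference

# P(rain | wet, sprinkler): the sprinkler already explains the wet sidewalk,
# so "rain" drops back to its prior of 0.3.
p_wet_and_s = sum(p_world(r, True) * p_wet(r, True) for r in (True, False))
p_rain_given_wet_s = p_world(True, True) * p_wet(True, True) / p_wet_and_s
print(p_rain_given_wet_s)  # 0.3
```

Nothing here is 1 or 0; observing B just explains away most of what A was telling us about C.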

those assumptions (which involve, e.g., X and Y both being preferred to some third alternative which might be described as “neutral”, or X and Y both being describable as “positive value” on some non-preference-ordering-based view of value, or something along such lines)

X being “good” and Y being “bad” has nothing to do with it (although those are the most obvious examples). E.g. if X=$200 and Y=$100, then anyone would also pay to upgrade, when clearly both X and Y are “good” things. Or if X=“flu” and Y=“cancer”, anyone would upgrade, when both are “bad”.

The only case where people really wouldn’t upgrade is when X and Y are in some sense very close, e.g. if we have Y < X < Y + “1 penny” + “5 minutes of my time”.

But I agree, it is indeed reasonable that if someone has intransitive preferences, those preferences are actually very close in this sense and money pumping doesn’t work.

Because sane people can be reasoned with. If a sane person is wildly mistaken, and you correct them, in a way that’s not insulting and in a way that’s useful to them (as opposed to pedantry), they can be quite grateful for that, at least sometimes.

High decoupling conversations allow people to focus on checking the local validity of their arguments.

We unfortunately live in a world where sometimes A implies C, but A & B does not imply C, for some values of A, B, C. So, if you’re talking about A and C, and I bring up B, but you ignore it because that’s “sloppy thinking”, then that’s your problem. There is nothing valid about it.

I suspect you don’t quite understand what high decoupling is.

High decoupling is what Harris is doing in that debate. What he is doing is wrong. Therefore high decoupling is wrong (or at least unreliable).

I get the feeling that maybe you don’t quite understand what low decoupling is? You didn’t say anything explicitly negative about it, but I get the impression that you don’t really consider it a reasonable perspective. E.g. what is the word “empathy” doing in your post? It might be pointing to some straw man.