[Question] Can a Bayesian agent be infinitely confused?

Edit: the title was misleading; I didn't ask about a rational agent, but about what comes out of certain inputs to Bayes' theorem.

Eliezer and others talked about how a Bayesian with a 100% prior cannot change their confidence level, whatever evidence they encounter. That's because it's like having infinite certainty. I am not sure if they meant it literally (is it really mathematically equal to infinity?), but I assumed they did.

I asked myself: well, what if they get evidence that was somehow assigned 100%? Wouldn't that be enough to get them to change their mind? In other words:

If P(H) = 100%

and P(E|H) = 0%,

then what does P(H|E) equal?

I thought: well, if both are infinities, what happens when you subtract infinities? The internet answered that it's indeterminate*, meaning (from what I understand) that it can be anything, and you have absolutely no way to know what exactly.

So I concluded that if I had understood everything correctly, then such a situation would leave the Bayesian infinitely confused: in a state where he has no idea where he is, from 0% to 100%, and no amount of evidence in any direction can ground him anywhere.

If you do out the algebra, you get that P(H|E) involves dividing zero by zero:

P(H|E) = P(E|H) P(H) / P(E)

P(E) = P(E|H) P(H) + P(E|!H) P(!H) = 0

P(H|E) = (1 ∗ 0) / 0
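A quick numerical sketch of this (the value of P(E|!H) is a hypothetical placeholder; it gets multiplied by P(!H) = 0 anyway):

```python
# Plugging P(H) = 1 and P(E|H) = 0 directly into Bayes' theorem.
p_h = 1.0
p_e_given_h = 0.0
p_e_given_not_h = 0.5  # arbitrary assumed value; multiplied by P(!H) = 0 below

p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)  # = 0.0
numerator = p_e_given_h * p_h                          # = 0.0

print(numerator, p_e)  # 0.0 0.0
# numerator / p_e is 0/0: Python raises ZeroDivisionError here,
# which is the arithmetic version of "indeterminate".
```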

There are two ways to look at this at a higher level. The first is that the algebra doesn't really apply in the first place, because this is a domain error: 0 and 1 aren't probabilities, in the same way that the string "hello" and the color blue aren't.

The second way to look at it is that when we say P(H) = 1.0 and P(E|H) = 0, what we really meant was that P(H) = 1.0 − ϵ₁ and P(E|H) = 0 + ϵ₂; that is, they aren't precisely one and zero, but they differ from one and zero by an unspecified, very small amount. (Infinitesimals are like infinities; ϵ is arbitrarily-close-to-zero in the same sense that an infinity is arbitrarily-large.) Under this interpretation, we don't have a contradiction, but we do have an underspecified problem, since we need the ratio ϵ₁/ϵ₂ and haven't specified it.
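A numerical sketch of this interpretation (P(E|!H) = 0.5 is an assumed value, one of the unspecified pieces): holding both epsilons "small" while varying their ratio moves the posterior anywhere between roughly 0 and 1.

```python
def posterior(eps1, eps2, p_e_given_not_h=0.5):
    """Posterior P(H|E) with P(H) = 1 - eps1 and P(E|H) = eps2.

    p_e_given_not_h is an assumed placeholder value."""
    p_h = 1 - eps1
    p_e = eps2 * p_h + p_e_given_not_h * eps1
    return eps2 * p_h / p_e

# Equally "small" epsilons, very different posteriors:
print(posterior(1e-9, 1e-9))    # ratio 1     -> about 0.667
print(posterior(1e-12, 1e-6))   # eps2 >> eps1 -> close to 1
print(posterior(1e-6, 1e-12))   # eps1 >> eps2 -> close to 0
```

The answer depends entirely on the unspecified ratio ϵ₁/ϵ₂, which is exactly the underspecification described above.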

Thanks for the answer! I was somewhat amused to see that it ends up being zero divided by zero.

Does the ratio ϵ₁/ϵ₂ being undefined mean that it's arbitrarily close to a half (since 1 over 2 is a half, but that wouldn't be exactly it)? Or does it mean we get the same problem I specified in the question, where it could be anything from (almost) 0 to (almost) 1 and we have no idea what exactly?

And also, when you use epsilons, does it mean you get out of the "dogma" of 100%? Or can you still not update down from it?

And what I did in my post may just be another example of why you don't put an actual 1.0 in your prior, because then even if you get evidence of the same strength in the other direction, that would demand that you divide zero by zero. Right?

Using epsilons can in principle allow you to update. However, the situation seems slightly worse than jimrandomh describes. It looks like you also need P(E|¬H), the probability of the evidence if H is false, in order to get a precise answer. And the missing info that jim mentioned is already enough in principle to let the final answer be any probability whatsoever.

If we use log odds (the framework in which we could literally start with "infinite certainty"), then the answer could be anywhere on the real number line. We have infinite (or at least unbounded) confusion until we make our assumptions more precise.
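In log-odds form, a Bayesian update is just addition: posterior log odds = prior log odds + log likelihood ratio. A minimal sketch of why certainty plus disconfirming evidence is indeterminate here too:

```python
import math

def log_odds(p):
    """Log odds of probability p; p = 1 maps to +inf, p = 0 to -inf."""
    if p == 1.0:
        return math.inf
    if p == 0.0:
        return -math.inf
    return math.log(p / (1 - p))

prior = log_odds(1.0)        # +inf: literal "infinite certainty" in H
# Likelihood ratio P(E|H)/P(E|!H) = 0 / (something positive) = 0,
# so its log is -inf.
llr = -math.inf
posterior = prior + llr      # inf + (-inf) is indeterminate: nan
print(posterior)             # nan
```

The nan is the floating-point cousin of the ∞ − ∞ indeterminacy from the question: the answer could be anywhere on the real line.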

This math is exactly why we say a rational agent can never assign a perfect 1 or 0 to any probability estimate. Doing so in a universe which then presents you with counterevidence means you're not rational.

Which I suppose could be termed "infinitely confused", but that feels like a mixing of levels. You're not confused about a given probability; you're confused about how probability works.

In practice, when a well-calibrated person says 100% or 0%, they're rounding off from some unspecified-precision estimate like 99.9% or 0.000000000001.
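And a 99.9% prior, unlike a 100% one, updates downward just fine. A small sketch with hypothetical likelihoods:

```python
# With a prior of 0.999 rather than 1.0, counterevidence works normally.
p_h = 0.999
p_e_given_h = 0.01      # assumed: E is rare if H is true
p_e_given_not_h = 0.99  # assumed: E is very likely if H is false

p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
posterior = p_e_given_h * p_h / p_e
print(posterior)  # roughly 0.91: confidence drops from 99.9%
```

No division by zero anywhere, and further evidence of the same kind keeps pushing the posterior down.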

This math is exactly why we say a rational agent can never assign a perfect 1 or 0 to any probability estimate.

Yes, of course. I just thought I had found an amusing situation thinking about it.

You're not confused about a given probability; you're confused about how probability works.

Nice way to put it :)

I think I might have framed the question wrong. It was clear to me that it wouldn't be rational (so maybe I shouldn't have used the term "Bayesian agent"), but it did seem that if you put the numbers in this way, you get a mathematical "definition" of "infinite confusion".

The point goes both ways: following Bayes' rule means not being able to update away from 100%, but the reverse likely holds as well. Unless there exists, for every hypothesis, not only evidence against it but also evidence that completely disproves it, there is no evidence such that, if agent B observes it, they will ascribe anything 100% or 0% probability (if they didn't start out that way).

So a Bayesian agent can't become infinitely confused unless they obtain infinite knowledge or have bad priors. (One may simulate a Bayesian with bad priors.)

Pattern, I miscommunicated my question: I didn't mean to ask about a Bayesian agent in the sense of a rational agent, just about what the mathematical result is from plugging certain numbers into the equation.

I was well aware, both now and before the post, that a rational agent won't have a 100% prior and won't find evidence equal to 100%; that wasn't where the question stemmed from.

There is a lot of philosophical work on this issue, some of which recommends taking conditional probability as the fundamental unit (in which case Bayes' theorem only applies for non-extremal values). For instance, see this paper.