Inductive bias is a systematic direction in belief revisions. The same observations could be evidence for or against a belief, depending on your prior. Inductive biases are more or less correct depending on how well they correspond with reality, so "bias" might not be the best description.

Discuss the post here (rather than in the comments of the original post).

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it, posting the next day's sequence reruns post, summarizing forthcoming articles on the wiki, or creating exercises. Go here for more details, or to discuss the Sequence Reruns.

If you start out with a maximum-entropy prior, then you never learn anything, ever, no matter how much evidence you observe.

?

The prior in this situation that I would call "maximum entropy" is the uniform prior (which certainly does change when you update). The monkey binomial distribution isn't maximum entropy at all! What's going on?

This may be off-topic, but why don't you make your comment on the original post? IMO it would be more efficient if the discussion of a given post were centered in one place, so new readers could see all the relevant critiques at once.
And since there are already many comments on the original post, why not add yours there? It would also add to the cohesion of LessWrong in general, I guess, but maybe I'm missing something ;)

I saw this too. I think he means 'maximum entropy over the outcomes (ball draws)' rather than 'maximum entropy over the parameters in your model'. The intuition is that if you posit no structure to your observations, making an observation doesn't tell you anything about your future observations. Though that interpretation doesn't quite fit, since he specified that the balls were drawn with a known probability.

One of the things EY made me realize is that any modeling is part of the prior. Specifically, a model is a prior about how observations are related. For example, part of your model might be 'balls are drawn independently, each red with a constant but unknown probability'. If you had a maximum entropy prior over the draws, you would say something more like 'ball draws are completely unrelated and determined by completely separate processes'.
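The contrast can be sketched numerically (a toy example of my own, not from the post): under a maximum-entropy prior over the draws themselves, each draw is an independent fair coin and past observations carry no information about the next one; under a model with a shared unknown frequency and a uniform prior over it, the predictive probability follows Laplace's rule of succession and does update.

```python
# Toy contrast (illustrative only): predictive probability that the next
# ball is red, after seeing n red balls in a row, under two priors.

def max_entropy_predictive(n_reds_seen: int) -> float:
    # Max entropy over outcomes: every draw is an independent 50/50 coin,
    # so the history of draws never changes the prediction.
    return 0.5

def laplace_predictive(n_reds_seen: int) -> float:
    # Shared unknown frequency p with a uniform Beta(1,1) prior:
    # posterior predictive is (reds + 1) / (draws + 2), the rule of succession.
    return (n_reds_seen + 1) / (n_reds_seen + 2)

for n in (0, 1, 10, 100):
    print(n, max_entropy_predictive(n), laplace_predictive(n))
```

The first prior is the "never learn anything, ever" case from the quoted post; the second is what most people intuitively mean by an uninformative prior, and it learns just fine.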

This still confuses me. 'Ball draws are completely unrelated and determined by completely separate processes' still contains information about how the balls were generated. It seems like if you observed a string of 10 red balls, then your hypothesis would lose probability mass to the hypothesis 'ball draws are red with p > 0.99.'

It seems like the problem only happens if you include an unjustified assumption in your 'prior', then refuse to consider the possibility that you were wrong.

My prior information is that every time I have found something Eliezer said confusing, it has eventually turned out that I was mistaken. I expect this to remain true, but there's a slight possibility that I am wrong.