The dashboard clock on your car only shows hours and minutes (not seconds). You know the clock is only accurate to plus or minus a few minutes; likewise, your digital watch is set to within a few minutes of true time. Assume the car's clock and your watch are independent, equally likely to have any error within that range, but both running at the same rate so the difference between them is a constant.

At a random time you observe the car clock and your watch. At the moment that you do the comparison, you see that they display the same time in hours and minutes (HH:MM displays are equal; seconds are not shown).

Question: after that single observation, what should you believe about the difference, in seconds, between the clock and the watch?

The answer? Use Bayes' Theorem to update your beliefs as new information arrives. As per Introduction to Bayesian Statistics: take the odds Before of each possibility, multiply by the likelihood of seeing what happened under each hypothesis, and you get the odds After. For instance, if you have three coins in your pocket (one normal, one double-headed, and one double-tailed) and you take a coin out at random and start tossing it, then every time it comes up "heads" the odds that it's the double-headed coin get multiplied by two relative to the normal coin, and the odds that it's the double-tailed coin get multiplied by zero.
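The three-coin update can be sketched in a few lines of Python. This is a minimal illustration, not anything from the book; the function names and the choice of three "heads" observations are my own.

```python
# Bayesian update for the three-coin example: one normal coin, one
# double-headed, one double-tailed, drawn at random and tossed.
# Each observed "heads" multiplies each hypothesis's weight by its
# likelihood of producing heads, then we renormalize.

def update(priors, likelihoods):
    """Multiply prior by likelihood and renormalize (Bayes' theorem)."""
    posterior = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(posterior)
    return [w / total for w in posterior]

# Hypotheses: normal, double-headed, double-tailed
beliefs = [1/3, 1/3, 1/3]
p_heads = [0.5, 1.0, 0.0]   # P(heads) under each hypothesis

for toss in range(3):        # observe "heads" three times in a row
    beliefs = update(beliefs, p_heads)
    print([round(b, 3) for b in beliefs])
```

After each "heads" the double-headed hypothesis doubles its odds against the normal coin, and the double-tailed hypothesis is eliminated on the first toss.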

Now consider the time difference Δ between dashboard clock and watch. Initially all values of Δ are equally likely. The observation that the HH:MM displays match is 100% certain to occur if Δ = 0 (the unseen seconds agree exactly); it's 50% likely if |Δ| = 30 seconds; and it has zero chance if |Δ| ≥ 60 seconds. In general, at a random instant the displays match with probability max(0, 1 − |Δ|/60). So the After belief in the time difference goes from being equiprobable to a triangle shape, peaked at Δ = 0 and falling linearly as the absolute value of Δ increases. (Hmmm, like the Greek letter Delta = Δ, eh?!)

Wait a random length of time and make another observation of HH:MM. If watch and clock agree, the unknown value of Δ becomes more sharply peaked around 0. What happens if they disagree? What if a long series of random observations shows agreement 50% of the time? Or 1/60th of the time?
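Repeated observations are just repeated multiplications by the appropriate likelihood. A sketch, under the same assumed ±5-minute prior, showing both cases; note that a single observed mismatch is fatal to the Δ = 0 hypothesis, since perfectly synchronized displays can never disagree.

```python
# Sequential updating: compare HH:MM displays at random times and
# multiply in the likelihood of each match or mismatch.

def observe(posterior, matched):
    """One Bayes update from seeing the displays (dis)agree."""
    def lik(d):
        p = max(0.0, 1.0 - abs(d) / 60.0)   # P(match | Δ = d)
        return p if matched else 1.0 - p
    unnorm = {d: w * lik(d) for d, w in posterior.items()}
    total = sum(unnorm.values())
    return {d: w / total for d, w in unnorm.items()}

deltas = range(-300, 301)
belief = {d: 1.0 / len(deltas) for d in deltas}

belief = observe(belief, matched=True)    # triangular peak at Δ = 0
belief = observe(belief, matched=True)    # peak sharpens
belief = observe(belief, matched=False)   # mismatch: Δ = 0 ruled out
print(belief[0])
```

A long run of observations agreeing 50% of the time would concentrate belief near |Δ| = 30 seconds; agreement 1/60th of the time would point to |Δ| near 59 seconds.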

And to warm up for the continuous case, suppose you have three watches in your pocket, one perfectly synchronized with the dashboard clock, one set 30 seconds ahead, and one that's 2 minutes off. Take one watch out at random and start comparing its HH:MM display with the clock at randomly separated times. How do the odds change that it is each of the three coins, uh, I mean watches?
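The three-watch warm-up reduces to three discrete hypotheses with match probabilities 1, 1/2, and 0 at a random instant, so the odds evolve exactly like the coins. A minimal sketch, with the number of observed matches chosen arbitrarily:

```python
# Three-watch version: Δ = 0 s always matches, Δ = 30 s matches half
# the time at a random instant, Δ = 120 s never matches. Each observed
# HH:MM match multiplies the odds by these probabilities.

match_prob = {0: 1.0, 30: 0.5, 120: 0.0}
odds = {d: 1.0 for d in match_prob}         # equal prior odds

for _ in range(4):                           # four observed matches
    odds = {d: o * match_prob[d] for d, o in odds.items()}

total = sum(odds.values())
posterior = {d: o / total for d, o in odds.items()}
print(posterior)   # roughly {0: 0.94, 30: 0.06, 120: 0.0}
```

The 2-minutes-off watch is eliminated by the first match, and every further match doubles the odds in favor of the synchronized watch over the 30-second one, just as each "heads" favored the double-headed coin.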