By the same token, why should a player be punished solely for variability of performance?

It is freely acknowledged that initial ratings are the one area of the current FIDE rating system where significant improvement is required. No simple solutions seem to help, and there may be no alternative to something more complicated. This is a major decision.

The k factor will be 20, except that in any month where a junior player has outperformed expectation, the k factor for that month will be 40.

My impression based on a few seconds of mental arithmetic is that a junior whose playing strength is constant and who plays ten games each month might be expected to gain 100-150 points per annum from this rule. If that is our desired result, why not just give them the extra points and be done with it?
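The arithmetic is easy to check. A minimal sketch, assuming a junior of constant strength who plays ten games a month against equal-rated opponents (draws ignored for simplicity), computing the expected rating drift from the asymmetric k factor exactly:

```python
from math import comb

def expected_monthly_drift(games=10, k_up=40, k_down=20):
    # Against equal-rated opposition the expected score is games/2, and the
    # actual score is Binomial(games, 0.5).  The asymmetric rule applies k_up
    # only to months that beat expectation, so the drift does not cancel out.
    drift = 0.0
    for wins in range(games + 1):
        p = comb(games, wins) * 0.5 ** games
        diff = wins - games / 2          # actual score minus expected score
        k = k_up if diff > 0 else k_down
        drift += p * k * diff
    return drift

monthly = expected_monthly_drift()
print(round(monthly, 1), round(12 * monthly, 1))  # 12.3 147.7
```

Roughly 148 points a year for a player whose strength never changes, which is right at the top of the 100-150 range quoted above.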

After spending many months looking at the data, I think the hypothesis that juniors tend to improve is conclusively proved against the alternate that playing strength is constant. Based on this conclusion it then becomes a question of how to recognise this in the system where a lot of junior results are against other juniors. Earning the grade improvement seems to be a good idea as opposed to "just give them extra points".

"After spending many months looking at the data, I think the hypothesis that juniors tend to improve is conclusively proved against the alternate that playing strength is constant."

Are you sure? Admittedly it was many years ago, but I took a small sample of juniors (about the first 30 pages of the grading list) and discovered that on average juniors went up slightly. Obviously some went up (some dramatically so) and some went down, but what tended to happen is that the ones who started to get lower grades gave up, thereby removing themselves from the calculation.

I am sure you thought of that!

On the other hand, you have high-graded juniors leaving the system at age 18, and low-graded juniors joining the system very young. So don't they cancel each other out?

The key point, as perhaps established by the BCF graders 50 years ago when 5 and later 10 points started to be added to junior grades as directly calculated, is that when juniors play adult players there needs to be an inflationary measure, so that the players in the adult pool don't find their grades reducing for no good reason: at least some of the juniors are going to have grades below their current playing strength. Most national Elo systems have felt the need to adjust for this in some arbitrary manner. Even the international system attempts to do so with the K=40 rule.

If you are suggesting this was intended as an inflationary measure, then you are mistaken. The k factor represents a tradeoff between two risks:

1. The player's strength is changing but the system does not reflect that quickly enough, and

2. The player's strength is not changing but the system reflects random fluctuations in results.

It seems plausible, and is supported by evidence, that juniors are more likely than adults to experience genuine changes in playing strength (typically up, but also down). The different k factor is therefore statistically justified.
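The tradeoff can be made concrete with a toy Monte Carlo sketch (invented numbers throughout, not the actual FIDE machinery; draws are ignored, and every opponent is assumed to be rated and playing at the player's own current rating):

```python
import random

def final_error(k, drift_per_game, games=300, seed=0):
    # Returns rating minus true strength after `games` games.  Each opponent
    # is rated equal to the player's current rating, so the rating-based
    # expected score is always 0.5; actual results are driven by the player's
    # *true* strength, which the rating can only chase.
    rng = random.Random(seed)
    rating, strength = 1500.0, 1500.0
    for _ in range(games):
        strength += drift_per_game  # genuine improvement (0 for a stable player)
        p_win = 1 / (1 + 10 ** ((rating - strength) / 400))
        score = 1.0 if rng.random() < p_win else 0.0
        rating += k * (score - 0.5)  # standard Elo update vs an equal-rated opponent
    return rating - strength

def mean_abs_error(k, drift, trials=300):
    return sum(abs(final_error(k, drift, seed=s)) for s in range(trials)) / trials

# Risk 1: a genuinely improving player -- the higher k lags the truth less.
lag20, lag40 = mean_abs_error(20, 3.0), mean_abs_error(40, 3.0)
# Risk 2: a player of constant strength -- the higher k gives a noisier rating.
noise20, noise40 = mean_abs_error(20, 0.0), mean_abs_error(40, 0.0)
print(lag40 < lag20, noise40 > noise20)
```

The higher k tracks real improvement with roughly half the lag, at the cost of a noisier rating for a player whose strength is constant, which is exactly the tradeoff described above.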

"After spending many months looking at the data, I think the hypothesis that juniors tend to improve is conclusively proved against the alternate that playing strength is constant."

Are you sure? ....

Being sure is not a concept that fits with statistical theory!

It is clear from both senior and junior data that weaker players are more likely to drop out. On its own this would cause the average of the remainers to be higher than that of the bigger initial list.
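That selection effect is easy to demonstrate in isolation. A toy sketch with invented numbers: every player's grade is held constant, and the only thing that happens is that the weakest give up:

```python
import random

def survivorship_demo(n=10_000, dropout_below=100, seed=1):
    # Hypothetical pool: constant grades, normally distributed (mean 120, sd 30).
    rng = random.Random(seed)
    grades = [rng.gauss(120, 30) for _ in range(n)]
    average_before = sum(grades) / len(grades)
    # The lower-graded players give up and leave the list; nobody improves.
    stayers = [g for g in grades if g >= dropout_below]
    average_after = sum(stayers) / len(stayers)
    return average_before, average_after

before, after = survivorship_demo()
# The surviving players' average is higher even though no one got stronger.
print(round(before, 1), round(after, 1))
```

The average of the remainers rises by around a dozen points here despite zero actual improvement, which is why the dropout effect has to be separated from any genuine junior improvement before drawing conclusions.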

However, I regularly see big improvements in the 3Rs from pupils in our local school. This is essentially a closed set, albeit one with a strong focus on improvement. It lends weight to the view that juniors might tend to improve at chess.

What I can see is that between July 2018 and January 2019 those juniors appearing in both lists improved by 4 grading points (about 30 Elo) in six months, pretty consistent with the last 10 years. We have no evidence on those leaving the list, but I would think it perverse if their change in ability was, or would have been, backwards enough to counterbalance the observed improvements.
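For readers who think in Elo, the conversion behind the figure in brackets is (I believe) the approximate formula the ECF has used in recent years, Elo ≈ 7.5 × grade + 700, under which a 4-point grade change is 30 Elo:

```python
def ecf_to_elo(grade):
    # Approximate ECF grade to Elo conversion: Elo = 7.5 * grade + 700.
    return 7.5 * grade + 700

# A hypothetical junior moving from grade 120 to 124 over six months:
delta_elo = ecf_to_elo(124) - ecf_to_elo(120)
print(delta_elo)  # 30.0
```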

Someone seems to have got it into their head that there's an ECF proposal to delay the publication of July/August grades in the traditional format. If true, this has a knock-on effect on those leagues that have grading restrictions, as clubs wouldn't know what teams they could enter.

You might hope the ECF wouldn't be so stupid, but then ...

A clarification that grades in the existing format will be published to the usual schedules in July/August 2019 might be welcome.