The poster's classification (Page 2) of speed errors in print-writing includes many error types which anyone would agree harm legibility.

However, one of the types presented does not seem to me to be a legibility error.

That type is "laking" (shown in poster page 2, top left graphic). A "lake" (term apparently created by the poster authors) is shown as use of parabolic/near-parabolic curves for upstrokes of m/n/r/h/b/p, so that the area outside the curve of the upstroke is — in examples shown — 1/3 to 2/3 of the area in the curve of the upstroke: as compared with conventional USA print models where a circular/elliptical curve forms the upstroke, and the area outside the curve is therefore <1/3 (often, <1/4 or even <1/5) of the area in the curve. (Note: other authors call this "branching": Getty and Dubay, 1994; Reynolds, 1969. Those authors also apply their term to the upstroke curves of a/d/g/q.) For consistency, "branching," not "laking," is used in this message. For a chart of branching and other handwriting terms, see Getty and Dubay, page vii. I have placed online a graphic of the chart from that page, at http://www.tinyURL.com/GettyDubayTermsLARGEview

Candler-McCleskey /a/ correctly note this feature among several that increase with writing speed, and /b/ see it as a legibility error. However, other research (Lehman, 1973; Duvall, 1985) suggests that branching typifies higher-speed higher-legibility writing, to the point that high-legibility high-speed writers are far more likely to branch than to follow conventional USA print models that teach non-branched formation.

Lehman documents higher speed/higher legibility/fewer and smaller deviations from model among students taught an experimental handwriting model whose features — illustrated in Lehman's graphics — include branching m/n/r/h/b/p and a/d/g/q, vs. a control group taught conventional models with non-branched print. Duvall shows that, among the entire 11th grade of a school district that prioritized teaching a conventional USA model with conventional (and non-branched) print, not one student who printed had this or several other defining features of a print style. Further, the identified cluster of deviations (including branching) was most consistent for students whose writing scored highest for speed and legibility.

Examining letter-shape legibility, Duvall (1986) finds that kindergarten letter recognition/matching is most accurate with a letter style whose features happen to include branching; similarly, Sassoon (1993) finds that reading comprehension of primary schoolers and others (including below-average readers) is most accurate/rapid in a font whose features happen to include branching (Times Italic) vs. fonts more common for reading matter (Times Roman and other common fonts whose letters do not branch).

These findings suggest that print models/instruction should use branching, as a feature linked with greater legibility and speed. Further research is needed (with MovAlyzeR?) to test such modifications singly and in combination: who does this research, or would like to do it? A first step: evaluate speed/legibility levels attainable with existing print models in their standard forms and in modified forms whose experimental variable is the presence/absence of branching in m/n/r/h/b/p and/or a/d/g/q.

In other words: what are the effects of changing a variable (encouragement vs. forbiddance of branching) in a taught style?

My hypothesis: branching will boost speed/fluency without loss of legibility, and likely with gains to legibility. Specifically, I predict that a branched model will be significantly more fluently produced, significantly more resistant to speed-degradation/other deviation from model, and not significantly less legible.

Since 2 letter-groups can branch (m/n/r/h/b/p and a/d/g/q), the ideal experiment would use 4 experimental groups and a control group, to test variables separately and in co-occurrence. Controls would get no instruction, just pre-test/post-test.

Experimental Group 1 would use the print program from the Candler-McCleskey research.
Experimental Group 2 would use the Candler-McCleskey program but modified to teach/encourage branched m/n/r/h/b/p.
Experimental Group 3 would use the Candler-McCleskey program modified to teach/encourage branched a/d/g/q.
Experimental Group 4 would use the Candler-McCleskey program modified to teach/encourage branched m/n/r/h/b/p _and_ a/d/g/q.
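The five-group design above can be sketched as a small enumeration. This is a minimal illustrative sketch: the function name, labels, and data structure are my own assumptions, not part of any published protocol.

```python
# Sketch of the proposed design: four experimental conditions (every
# combination of branched vs. non-branched instruction for the two
# letter groups) plus a no-instruction control.

LETTER_GROUPS = ("m/n/r/h/b/p", "a/d/g/q")

def design_conditions():
    """Enumerate the four experimental conditions in the order given above:
    Group 1 = unmodified program, Group 2 = branched m/n/r/h/b/p,
    Group 3 = branched a/d/g/q, Group 4 = both letter groups branched."""
    flags = [(False, False), (True, False), (False, True), (True, True)]
    conditions = []
    for branch_first, branch_second in flags:
        branched = [g for g, b in zip(LETTER_GROUPS, (branch_first, branch_second)) if b]
        conditions.append({
            "branched_groups": branched,
            "label": "branched: " + (", ".join(branched) if branched else "none"),
        })
    return conditions

for i, cond in enumerate(design_conditions(), start=1):
    print(f"Experimental Group {i} -- {cond['label']}")
print("Control Group -- no instruction; pre-test/post-test only")
```

The same enumeration generalizes to any program whose features are treated as separate variables, as proposed below: each on/off feature doubles the number of experimental conditions.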

Note: this design can serve for similar tests on other writing programs. Testing the effects of changing variables composing a program may give information unobtainable through whole-program tests. Specifically:

/a/ Programs which are overall helpful may have features lacking discernible effect.
Testing a program feature-by-feature can isolate which features lack impact and should be removed to focus effort on actually helpful features.

/b/ Overall beneficial programs may have features that actually harm students' writing progress/maintenance: features whose harm is masked by other, actually beneficial, program elements.

If unsuspected harmful/unproductive features of programs are located, documented, analyzed, and removed/replaced, programs' beneficial effects should rise. A somewhat beneficial program, minus elements that kept it only "somewhat," would become _highly_ beneficial. (E.g.: Candler-McCleskey report improved legibility from their program. If there is a way to improve the program's approach to certain letters, that should further improve program results and the ease of obtaining/maintaining them.)

How can interested IGS members work together on such research?

This should be a future IGS paper (or papers), perhaps using MovAlyzeR analysis. With research quantifying/comparing various programs' features, educators need no longer treat a program as one huge indissoluble variable to be compared as a whole with other huge, unanalyzed conglomerations of variables (= other handwriting programs).

Instead of only considering the program as the variable, let us regard each program as a set of many variables.

As far as I know, this finer-grained analysis is not being done: even studies cited above focus on style-vs.-style comparisons. Feature-vs.-feature comparisons are also needed. How can IGS do such research or get it done?

Kate Gladstone brings an interesting perspective to the work Jan McCleskey and I conducted concerning the One Hour to Legibility Program. I agree that handwriting programs are often assessed without focused diligence to the features within them. Testing the hypothesis that 'lakes' as natural adaptations to speed writing may actually improve legibility could provide interesting results.

Our study results concur with Graham et al., 1998, and suggest that individuals, including children, adopt their own style and that this style is resistant to change. Further analysis of our data, not included in the IGS abstract, found that the actual letters scored as illegible were different at posttest, but were once again correlated with beginning performance by follow-up. In other words, the children changed their letter formation while in the program but subsequently returned to previous letter formations.

What we found meaningful for intervention in our study results was that the spatial organization taught in the program was retained. Our results suggest that global readability can be improved with attention to relative letter sizing and alignment, and that letter formation, at least for this group of writers who were selected based on having knowledge of letter formation (mean number of legible letters at pretest: 22/26), may not have a large impact.

I agree with Ms. Gladstone that discovering exactly what parts of intervention produce the most effect is an important task. Understanding the impact of speed errors in different populations of writers would be one such question that could be explored, perhaps in the adult population.

Catherine Candler


For optimum legibility, lake area should be between 25 and 60 percent of the total area of the glyph's counter. Generally, the branch will need to depart near the center of the stem, assuming that the letter width is in the "regular" or normal range (not condensed or expanded), which applies to nearly all copybooks.
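That lake-to-counter ratio can be checked with a small function. A minimal sketch, assuming the 25-60 percent range stated above; the function name and the sample measurements are my own illustrative assumptions.

```python
def lake_in_legible_range(lake_area: float, counter_area: float) -> bool:
    """Return True when the lake occupies 25-60% of the counter's area,
    the optimum-legibility range suggested above."""
    if counter_area <= 0:
        raise ValueError("counter area must be positive")
    ratio = lake_area / counter_area
    return 0.25 <= ratio <= 0.60

# Illustrative measurements, in arbitrary square units:
print(lake_in_legible_range(30.0, 100.0))  # 30% of the counter: in range
print(lake_in_legible_range(70.0, 100.0))  # 70%: lake too large
```

Automated handwriting-analysis tools that can segment a glyph's counter could apply such a check directly to scanned samples.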