Beat Stamm has published an updated and greatly expanded version of his important 1997 article ‘The Raster Tragedy at Low-Resolutions’, which explained the reasons why hinting was needed, as a corrective to raster rounding problems in coarse grids. The original article, excerpted from a presentation that Beat gave at the 1997 OpenType Jamboree and still available on the Microsoft Typography website, dealt with the then predominant b/w rendering of text type sizes. The new version of the article ‘covers anti-aliasing including sub-pixel rendering, opportunities made possible by anti-aliasing, challenges in the rasterizer and elsewhere, and a discussion of "hinting" in the context of these opportunities and challenges.’

I'm about halfway through a close reading of the document, looking with particular care now at the section on subpixel spacing (§3.3.2), which with yesterday's relatively quiet release of IE9 has suddenly ceased to be a pending issue. The first thing I noticed when setting up my IE9 homepage to match my Firefox 3.6 page is that the thin verticals of M and N (approx. 14pt Constantia, 96dpi) are becoming pale smudges. As Beat notes:

The main drawback of fractional advance widths and fractional pixel positioning is the loss of stroke rendering contrast at sufficiently small type sizes and device resolutions (cf 3.1.3). As we have just seen, it thwarts any intentions to carefully position strokes to “optimize” stroke rendering contrast. Informally put, at text sizes on 96 or even 120 DPI screens, text is simply rendered with more “fringe” than “core.”

_____

So far, the fundamental observation is this, from §1.2 (Beat's emphasis):

Even more math, equations, parenthesis—really!—why does this matter? It matters because it shows how rounding or sampling makes font rendering non-linear. Once it is non-linear, it is no longer scalable. The two concepts linearity and scalability are equivalent here.

The startling conclusion is that rounding or sampling makes the allegedly scalable font format non-scalable!
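The non-linearity is easy to see concretely. Here is a tiny sketch in Python (mine, not Beat's): a scalable operation would commute with scaling, and rounding does not.

```python
# Sketch (not from Beat's article): rounding breaks linearity.
# A linear (scalable) operation f satisfies f(s * x) == s * f(x).
def render(x):
    """Stand-in for rasterization: snap a coordinate to the pixel grid."""
    return round(x)

x = 1.3   # e.g. a stem width in pixels at some base size
s = 2     # double the size

print(render(s * x))   # round(2.6) -> 3
print(s * render(x))   # 2 * round(1.3) -> 2, not 3: non-linear, hence non-scalable
```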

On why the ancient and obsolete efficiency measures of the Windows TT rasteriser will never be ‘fixed’ to enable greater precision (§4.0.3):

Think of it this way. With the existing rasterizer, a particular character may render in a particular way at a particular ppem size. The author of the constraints (or “hints”) may see no fault with the rendered pixel pattern. But he or she may be oblivious to the fact that it took the rasterizer a seemingly serendipitous chain of tiny rounding errors to produce this particular result.

Now, if any single one of these rounding errors is “improved,” the outcome may change easily, as we have just seen. And since the font looked “good” before, chances are that it will look “worse” afterwards. We simply have to learn how to deal with the available precision—it’s hardly ever going to be exact, anyway.

All of Chapter 6 should be required reading, and then required re-reading.

Unfortunately, one of the things it makes clear is that you can't, after all, hint TrueType in FontLab, at least not in accordance with Beat's definition of hinting:

“Hinting” designates the act of constraining the scaling of outline fonts to yield sets of samples that best represent the type designer’s intent at any point size, on any raster output device, having any device resolution, using any rendering method, in any software context, and for any end-user.

[Is there some markup mechanism by which I can differentiate roman and italics within blockquote tags?]

I think extreme or ideal definitions can be very useful, in that they oblige one to consider how something might or should work or be used, rather than proceeding on the basis of how it has worked or how it has been used. Beat makes a distinction between hinting and 'pixel-popping', and even when one acknowledges a practical continuum between these things in ‘getting the job done’, the distinction remains a useful one. What I take away from Beat's Chapter 6 is that we need to think about TrueType more as a programming language, which will make it possible to move closer on the continuum to Beat's definition of hinting, and away from using this potentially very powerful tool as a ‘technophiliac disguise for “pixel-popping”’.

Hence my comment about FontLab's TrueType hinting capabilities, which pretty much do function as a pixel-popping interface, and which have hence become less and less useful as we move away from bi-level rendering. We're now at a place where enthusiastic amateurs are finding that FL (auto)hinting can apparently improve the appearance of some glyphs in some fonts at some sizes in some environments -- heck, maybe even most glyphs in most fonts at most sizes in most environments --, but they're doing so ‘oblivious to the fact that it took the rasterizer a seemingly serendipitous chain of tiny rounding errors to produce this particular result’. In other words, they're hinting from particular outlines to particular results, rather than generalising hinting as a programming strategy that will provide the most flexible set of nested instructions and functions, such that varying raster output devices, rendering methods and user preferences are able to take most advantage of that hinting.

This, of course, presumes another point that Beat makes: hinting is (should be) all about acting, not reacting. If this is the case -- or becomes the case -- then hinting goes from being the cladding on a building that makes it look nice to being something like a foundation, something that new rendering methods and options for end users can reliably build on top of, and without resorting to the kind of ‘band-aids’ Beat was forced to enact in the Windows rasteriser to get existing fonts not to explode in ClearType.

“asking an artist to become enough of a mathematician to understand how to write a font with 60 parameters is too much.”

That's how Donald Knuth explained why Metafont failed to catch on with type designers. And while I realize that an apples-to-apples comparison is not appropriate to Metafont and TT hinting, looking at Beat Stamm's site makes me feel the same way about TT hinting.

>“asking an artist to become enough of a mathematician to understand how to write a font with 60 parameters is too much.”

first of all, hinting for contemporary windows environs requires less than 10 parameters, and second, the vast majority of the kind of artists you are citing juggle hundreds of parameters in the design of a font, no problem. Third, if you really believe hints should be more useful than what they are used for, you're a tad late to support it.

> think extreme or ideal definitions can be very useful, in that they oblige one to consider how something might or should work or be used

You mean like hearing someone saying the iPad is the perfect example of how a web device should treat typography? Does that oblige you to think, or just react with standard windows rhetoric?

>FontLab's TrueType hinting capabilities, which pretty much do function as a pixel-popping interface, and which hence have become increasingly less useful as we move away from bi-level rendering

when was Type 1 hinting ever really useful for bi level rendering in the resolution spectrum of text?

>then hinting goes from being the cladding on a building that makes it look nice to being something like a foundation

Finally, a great ideal! I initiated the first baby steps at integrating hinting in the foundation of font design last fall. We will see! Memes, can't live with them, can't live without them. ;)

David: when was Type 1 hinting ever really useful for bi level rendering in the resolution spectrum of text?

I didn't say anything about Type 1 hinting: I was talking about FontLab's TrueType hinting tools. [Those tools do try to treat aspects of PostScript and TrueType hinting as analogous processes and to leverage PS font and glyph level hints as a basis for TT hinting, but they expose and enable bi-level pixel-popping behaviour. My point was that they don't do much more than that, and do not expose TrueType as a programming language.]
___

David, we're not going to do Beat's article justice if we drag the discussion into our disagreements on other topics, e.g. the scalability of the iPad approach to typography.

James, even if you were to read just the introductory paragraphs of each chapter and then skip to the conclusions, I think you would gain a lot from Beat's work. He does nicely summarise the conclusions in plain English, and you can just take the mathematics and detailed analyses in between as evidence that he's not just making stuff up.

I've been learning from the Typophile Forums for a while now but I've never commented on any post.

Reading the revisited Raster Tragedy website (I'm on chapter 5) has been enlightening. In particular the section about “Natural” Advance Widths. I'm currently working on a text font for screen in FontLab, and although FontLab's hinting tools have been useful for fast prototyping, I've been hitting a wall with character spacing at certain sizes.

I know Mr. Stamm writes: "There is no easy “fix” to overcome the shortcomings of “natural” advance widths. It can be done, but it requires some serious efforts in terms of coding in TrueType!" but I was thinking David Berlow's description of quantizing values during the outline design process and font spacing (this post) could be a solution to this rounding problem, or at least help in minimizing it.

Of course I've only started learning about font hinting and TrueType Instructions so I don't understand everything or probably missed something.

The website certainly made me want to learn more about the TrueType language and the manual instruction process, and also made me realize that for the final version of the font I should use VTT and not rely on the easy way (FontLab hinting tools).

Some things have changed since then. At the time, talk at MS was about subpixel positioning and y-direction AA in ClearType becoming widely used in Windows Vista. That didn't happen, although those features were present in the massively underexposed Windows Presentation Foundation. Instead, those features became widely used two days ago with the release of IE9.

The other thing that changed is that I'm no longer using a 145ppi screen. I'm back to 96ppi but a much larger monitor and sitting further away. :)

I think the intervening years have proven David right about quantising outline design and spacing, although with the ever more pressing caveat that this is a size-specific solution, which makes me nervous given the widespread use of text zoom on touchscreen devices. Customers want scalable fonts, and I don't think we really have a good mechanism to deliver them while also achieving the kind of quality of text on screen that we want, that users deserve, and that customers might just be willing to pay for -- if only it also scaled. Recently, I've been working on UI fonts, and the nice thing about the project is that there's a limited range of ppem sizes in which the customer is interested (in part because they're presuming a particular resolution), so I could quantise the design and spacing to work optimally at the most common of these sizes. But UI fonts are exceptional in that they tend to be used at static sizes. I still think it makes sense to decide on a target size for a design for screen, and to quantise accordingly, but as the text is scaled that design and spacing may appear freakish. I'll also point out that a project in which a customer is only interested in two ppem sizes is only half as nice as a project in which they are only interested in one ppem size.

JM>...but I was thinking David Berlow's description of quantizing values during the outline design process and font spacing

The demonstration of this was to show what would happen in Quartz or Windows if TT hints were all interpreted and to debunk the continued showing of Verdana at 11 ppm as evidence of CT quality, not as a general guide to tragedy avoidance.;)

JH> ...years have proven David right about quantising outline design and spacing, although with the ever more pressing caveat that this is a size specific solution.

Absolute quantizing of a type design to a size, quantizes absolutely, for sure. But, quantizing a type design non-absolutely, is, in my experience, not size specific. It is in a spectrum of quantizibles in which we work, after all, not just for the low resolution tragedy that makes us face it one big unit (pixel) at a time.

I'd be interested to see how you make decisions with regard to quantising.

As I said, recently I've been working on fonts that have a prioritised ppem size, so I quantise for that size and then work outwards, watching the relationships of verticals and horizontals gradually degrade until, at some larger size, they coincidentally coalesce again.

I've also tried the trickier thing, which is to try to position a design within a field of sizes. The decisions to be made in this case are complicated, and I've not determined even if there are rules of thumb that can be generalised across different types. Of course, it doesn't help that none of these projects involve the same writing systems.
_____

Re-reading that old thread -- when did I find the time to produce all those screenshots! --, I thought of another thing that has changed since 2006. In that thread I was referring to 'contrast' in the sense of figure-ground contrast -- what Beat refers to, as in my first quote from his text above, as ‘rendering contrast’ --, but since then, to avoid confusion with contrast of stroke thicknesses, I've adopted the term 'stroke density' to refer to that desirable quality of text, epitomised in good quality printing, wherein the solidity (blackness) of a stroke is not affected by its thickness. My preference for (ASPAA) ClearType over other antialiasing mechanisms has always been based on the fact that it does a better job of preserving stroke density (as illustrated in numerous places in that old thread). As Beat observes, some of this advantage is lost in HSPAA ClearType, when subpixel positioning degrades stroke density in verticals (y-direction AA can degrade stroke density in horizontals, too, but should be controllable with the gasp table).

Which brings me to what I'm most looking for in hinting these days (beyond obvious things like y-direction alignment control): strategies to maintain a minimum stroke density, to keep the not-actually-black strokes as close to black as possible at normal reading distances.
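One strategy, sketched here in Python purely for illustration (in a real font this would be minimum-distance control in the TrueType instructions, not Python), is to put a floor under the rendered stem width so a thin vertical never drops below a full pixel of coverage:

```python
def constrained_stem_px(stem_units, upm, ppem, min_px=1.0):
    """Scale a stem to pixels, enforcing a minimum distance so thin
    verticals keep at least one full pixel of 'core' instead of fading
    to fringe. Function name and threshold are illustrative."""
    px = stem_units * ppem / upm
    return max(px, min_px)

# A 130-unit stem in a 2048-upm font at 14ppem is only ~0.89px wide:
print(constrained_stem_px(130, 2048, 14))   # -> 1.0 (floored)
print(constrained_stem_px(130, 2048, 40))   # ~2.54 (left alone)
```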

JH> Which brings me to what I'm most looking for in hinting these days...

If one resolution-independently quantizes the only stems one can hint and then carefully controls the minimum distance to the sub-pixel, it's manageable. With that, I believe you've reached the only hintish frontier that ain't server-based, drover. :)

JH> Re-reading that old thread -- when did I find the time to produce all those screenshots!

Lol. When I read that thread I wonder, "How did we all find time for such a glacial pace?", think of it, back then a conversation to establish guides for CT font developers could start behind the scenes in sept 2004, see me surface the conversation here on typophile in mar 2006, and later... get a "definitive" answer from MS in oct 2009...

dberlow: I understand. It just made some sense when I made the connection between that post and the information in the revisited Raster Tragedy site concerning Advance Widths, at least in general. I do think quantizing to a discrete group of values could certainly help me (maybe not with the same values that you describe in the old post). Of course I think of it as a guideline.

John Hudson: Scalability is certainly a problem that makes me uneasy when looking at the preliminary results with a fast hinting pass in FontLab. The only thing that gives me some comfort is that the project I'm working on (my graduate thesis) contemplates optical sizes, so I can concentrate mainly on a limited range of sizes for each optical size group.

I do think that in any font with optical sizes, each group of optical sizes (e.g., Captions, Text, Display, etc.) should work independently in the whole size range. That may be easy for print but for the screen I think it's only an ideal that may not be 100% possible (of course it depends on the characteristics of each optical size).

______

I thank you both for your answers. It's always a pleasure to read your discussions.

Beat Stamm: First of all, thank you very much for making the revised Raster Tragedy website. I can honestly say it blew my mind, especially §6.3.1 & §6.3.3. Since I've just started learning how to instruct TT fonts, it made me realize there's so much more to it than CVTs and Links. I certainly want to learn how to code in the TT language (although I don't think I will learn fast enough for it to be useful for my thesis [nearby deadline]).

I agree with you about how introducing more fonts won't overcome the scalability and spacing issues, but there are other reasons why I'm making optical sizes (although maybe I should call them something else). While they are designed to work at different sizes and hierarchies, they are also stylistic variants with æsthetically different contours for an Editorial Design context. And, in my experience so far in making fonts for the screen, it certainly helps to have a specific range of sizes to concentrate on when designing and "hinting" each font (I hope I can update them later to fix any scalability problems).

I would like to ask if you have any recommendations for someone who wants to learn how to code TT instructions. I've downloaded the TT spec files and searched for other written material concerning TT, and I'm in the process of reading them. I've been exploring and manipulating the tables in some fonts in FontForge (Constantia, Georgia, etc. [VTT 4.4 didn't come with the example fonts]) to understand how each part works. I don't know if this is the best way to learn but it's the only one I can think of in this case. I would appreciate any recommendations you have.

Again, thank you very much for your time and for sharing your knowledge.

_____________

There is indeed a lot more to it than CVTs and Links: Getting an intuitive understanding of the sampling process (“filling in the pixels”), realizing that the theory says this shouldn’t work (“wrong side of the Nyquist Limit”), seeing how type designers can craft rather legible bitmaps regardless of the theory (sharing an office with Hans Ed. Meier got me plenty of visual instruction), and finally bridging the gap between theory and practice by using a combination of typographic rules, math/geometry/logic, and software engineering.

When I developed the font-scaler for my PhD project, attempting to bridge said gap, I thought of font-scaling as Dynamic Regularization of Intelligent Outline Fonts, that is, the concept of constraining a single set of outlines to become progressively more regular as point size and device resolution decrease. Since then I have adapted this concept for use with anti-aliasing methods and for optimizing stroke rendering contrast (§3.1.2) and bi-level pixel patterns (§6.3.3). Importantly, I had to implement this concept in an obscure programming/assembly language called TrueType—not always an easy feat.
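As a toy illustration of the concept (the function, names, and thresholds below are made up for this sketch, not taken from an actual rasterizer): a designed feature value is blended toward a regularized target as the size decreases.

```python
def regularize(designed, regular, ppem, lo=9, hi=40):
    """Dynamic regularization, sketched: below lo ppem the feature is
    fully regularized, above hi it keeps its designed value, with a
    linear blend in between. lo/hi are illustrative thresholds."""
    t = min(max((ppem - lo) / (hi - lo), 0.0), 1.0)
    return regular + t * (designed - regular)

# e.g. suppress a 15-unit overshoot at small sizes:
print(regularize(15, 0, 9))    # -> 0.0  (overshoot removed)
print(regularize(15, 0, 40))   # -> 15.0 (full design)
```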

Unfortunately I am not aware of a textbook specifically on TrueType coding. By the time I first got exposed to coding in TrueType, I didn’t seem to need one. I needed the TT specs for sure, initially learned by example, and later sometimes by stepping through the rasterizer’s source code with the help of a debugger—an atrocious way to learn. Let me find out why the sample font “Myfont42.source.ttf” no longer comes with VTT (I don’t have it, either).

I didn't have this when I went to do some hinting in VTT last summer, so I wrote and commented one on latin caps for some developers. What I also found in the most up-to-date version of VTT MS could supply was that the excellent built-in tutorial, written I assume in the late 90's for aliased hinting, is considerably more complex than what's needed to add high-quality CT hinting to fonts for Windows use today.

That tutorial could be compressed down to alignments, overlaps, relative shifts or moves and light CVT management, as I proved to myself in a series of font hintings. I suspect any template font from the "aliased age" might cause more confusion than help unless it too is compressed to what actually works in Windows for most users.

Then, if people want to go on to hinting for Windows Grey Scale/Freetype, that's another dimension, but also quite different from aliased age hinting.

Have you read the current gasp table spec, Rich? You'll see in the example settings given that the same set of values is differently interpreted depending on the smoothing model used.

Best practice for the gasp table is always font specific. This is the whole reason for there being a gasp table: one set of gridfitting/smoothing settings does not work appropriately well for all fonts, with stroke weight being a key factor in determining the best options.
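For anyone who hasn't looked inside the table: the lookup itself is simple. Ranges are sorted by maximum ppem, and the first range covering the requested size supplies the behaviour flags. A sketch with hypothetical values (not a recommendation for any particular font):

```python
# OpenType gasp behaviour flags (version 1 adds the SYMMETRIC pair for ClearType)
GASP_GRIDFIT, GASP_DOGRAY = 0x01, 0x02
GASP_SYMMETRIC_GRIDFIT, GASP_SYMMETRIC_SMOOTHING = 0x04, 0x08

# (rangeMaxPPEM, flags), sorted ascending; 0xFFFF covers all remaining sizes.
gasp_ranges = [
    (8,      GASP_DOGRAY | GASP_SYMMETRIC_SMOOTHING),   # smooth only, no gridfit
    (0xFFFF, GASP_GRIDFIT | GASP_DOGRAY
           | GASP_SYMMETRIC_GRIDFIT | GASP_SYMMETRIC_SMOOTHING),
]

def gasp_behavior(ppem):
    """Return the flags of the first range covering this ppem size."""
    for max_ppem, flags in gasp_ranges:
        if ppem <= max_ppem:
            return flags
    return 0

print(hex(gasp_behavior(8)))    # 0xa -- smoothing without gridfitting
print(hex(gasp_behavior(14)))   # 0xf -- everything on
```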

_____

I was thinking a couple of days ago that the gasp table spec could be usefully extended to enable the font developer to specify whether and at what sizes subpixel positioning should be permitted in DWrite or other environments that use that technology. I'm working on some fonts that, at specific ppem sizes, are designed for advance widths to round to full pixel boundaries, with the intent that the outline edges maintain the same relationship to the pixel boundaries, ensuring consistent and appropriate stroke density. The trouble is that when these fonts appear in the same line of text as other fonts, the sub-pixel rounding of those other fonts throws off the positions of my outlines on the line, defeating the whole purpose. It would be great if I could use the gasp table to enforce rounding to full-pixel boundaries at these key sizes.
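The arithmetic of the clash is easy to sketch (advance width and sizes below are hypothetical): full-pixel rounding keeps each glyph origin on the grid, while fractional advances drift relative to it, so mixing the two on one line shifts carefully placed outlines off their intended boundaries.

```python
upm, ppem = 2048, 14
advance = 1187                     # hypothetical advance width in font units
frac = advance * ppem / upm        # fractional advance in pixels (~8.11px)

int_pos = frac_pos = 0.0
for i in range(1, 6):
    int_pos += round(frac)         # full-pixel positioning (GDI-style)
    frac_pos += frac               # sub-pixel positioning (DWrite-style)
    print(i, int_pos, round(frac_pos, 2))
```

By the fifth glyph the sub-pixel positions have drifted more than half a pixel from the full-pixel ones, which is exactly the kind of misalignment I'm describing.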

JM Solé wrote " was thinking David Berlow's description of quantizing values during the outline design process and font spacing (this post) "

'That' post (CT Design Guide) talks a lot about 'Quantising'. Quantising being a technical practice of 'averaging out', used a lot in digital music production, to 'fit' musical notes onto the (beat) grid. It does make a lot of sense when dealing with fonts for screen to adopt such strategies of quantising (in David Berlow's post) to better fit stems, bowl edges, terminals etc. to the digital pixel grid. This minimises the need for 'lots' of hinting instructions. With DirectWrite, where the need for comprehensive hinting instructions has been lessened, it still makes sense to fit the grid as much as possible, for crisper rendering and minimising those weird blips at small sizes. It even makes for crisper rendering on Quartz.
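In code, the grid-fitting amounts to snapping design coordinates to whole pixels at a target size (a toy illustration with made-up values, not anyone's production tool):

```python
def snap_to_pixel(coord_units, upm, ppem):
    """Quantize a design-space coordinate so it lands exactly on a pixel
    boundary at the target ppem. Values here are illustrative."""
    px = coord_units * ppem / upm          # font units -> pixels
    return round(round(px) * upm / ppem)   # snap, then back to font units

# A 160-unit stem edge in a 2048-upm font, targeting 16ppem:
print(snap_to_pixel(160, 2048, 16))   # -> 128, i.e. exactly 1px at 16ppem
```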

re: gasp tables & truetype fonts -- with DirectWrite coming online now, version 1 tables are essential for optimum rendering on Windows in Internet Explorer 9 and Firefox 4. A version 1 gasp table is backward compatible with pre-ClearType and GDI ClearType rendering (i.e. a version 1 table will still do ye olde antialias & gridfit if it needs to), but a version 0 gasp table is not forward compatible; i.e. it won't bring out the best of DirectWrite font rendering.

ps Fontlab still can't generate version 1 tables! Fontforge generates version 1 tables, or you can use TTX to hack one. erm... or use VTT :o

I prefer not to speculate what someone may or may not learn from examples addressing bi-level rendering only. Learning methods are about as individual as people are individual: What works for me may not work for somebody else, and vice-versa. If nothing else, Vincent’s sample font illustrates how the control program (CVT + prep), the font program, and the glyph programs work together (and I doubt there is nothing else to be learned).

Learning methods aside, David’s description of “hinting” reminds me of what was referred to me as “light-weight hinting” or “substantially y-direction only hinting.” The fallacy with this approach is that it causes seemingly unpredictable spacing issues, as discussed in §3.3.0 (end of paragraph, ‘O’ rendered in ClearType with “y-direction only hinting”) and §4.2.1 (end of paragraph, ‘lellellel’ rendered in ClearType). Likewise, absent or inadequate x-direction “hinting” can cause substantially equal stems to render with seemingly unequal sample counts, as illustrated in §5.1 (‘m’ rendered in ClearType with a darker middle stem).

Moreover, perpetuating the “you don’t need x-direction ‘hints’ in ClearType!” mantra can only help to never get the “hinting” to match the LCD sub-pixel structure, with all the consequences discussed and illustrated in §5.6. Whether it is text that is written vertically, or whether you rotate your screen/tablet from “landscape” to “portrait,” what was the x-direction now becomes the y-direction, and vice-versa. Sadly, Windows doesn’t handle ±90° rotation correctly because there are no fonts that are “hinted” to match the LCD sub-pixel structure—or the fonts are not hinted to match the LCD sub-pixel structure because Windows doesn’t handle ±90° rotation correctly.

Last but not least, I don’t see how “hinting” for gray-scaling is another dimension, as David put it. The stems are the same, the crossbars are the same, and so is the design contrast between horizontals and verticals or uppercase and lowercase. The only difference between gray-scaling (§2.1) and y-anti-aliased ClearType (§2.3) is a different set of oversampling rates, which can be addressed in the pre-program, and potentially a different method to “sharpen” strokes.

Quantizing outlines doesn’t make any sense at all except to demonstrate, as David put it, “to show what would happen in Quartz or Windows if TT hints were all interpreted and to debunk the continued showing of Verdana at 11 ppm as evidence of CT quality, not as a general guide to tragedy avoidance.”

I don’t know what the confusion is, but there is a difference between quantizing the beats in music (1-dimensional rounding in time), and quantizing the outlines of fonts (2-dimensional sampling, potentially followed by down-sampling with a suitable anti-aliasing filter which, in case of a box filter, could be understood as averaging).

To be sure, I have experimented with MIDI recording software, and I did get my piano phrases quantized—but certainly not to my beat. Conceptually, a thusly quantized MIDI file is still scalable in time, that is, you can play it back faster or slower. Just change the number of beats per minute. By contrast, a quantized set of outlines will demonstrate what David wanted to show at exactly one ppem size. At any other size, the screen will “re-quantize” it (ignoring potentially demonstrable effects at integer multiples of the one ppem size).

Beat> The fallacy with this [y-only hinting] approach is that it causes seemingly unpredictable spacing issues,

weird. The fallacy of y only hinting is that it causes spacing issues? No Beat, the policy of not interpreting x hints in low resolutions and at small sizes causes spacing issues. y-only-hinting is the only rational typographic reaction to CT, not an evangelized solution to the tragedy.

Beat>Quantizing outlines doesn’t make any sense at all except to demonstrate,

Terms: Typeface Design is (beyond the drawing of compatible but dissimilar shapes) assigning dissimilar values to similar letter features to compensate for the optical effects caused by feature location, and by the combination of features within a glyph.

This "Dequantizing" is a big part of type design; the O is taller than the H because of feature location, the cross strokes of the B are lighter than the H's because of feature combination.

Typographic Quantizing, generally is grouping dissimilar values of letter features to an identical value for some reason, either in spite of the optical effects, or in anticipation of the effects of output.

Absolute Quantizing means the letters are systematically fit to a grid, to compensate for the effects of low resolution at a particular pixel per em size.

Relative Quantizing means to group some features relative to each other, and to an output range of pixels, but not to a grid location or pixel value, maybe because the optical effects of the dequantized features are not going to work for the output.

Now from a Y point of view: there are big alignments one almost never quantizes, like the difference between upper and lowercase ;) Each of these alignments has a daughter overshoot/undershoot. And each of these has daughters too, for shapes that need more or less shoot than the O, e.g.

Well thank Ptah for Y instructions in Windows and heavy rendering on the Mac, but still, having the grand-daughter alignments of the H in a text face for the web means more hinting for no user benefit. Relatively quantized, the problem goes away.

I have proceeded and demonstrated that quantizing in X direction letter design makes lots of happy screen fonts too. Again, this is not my choice — I wanted a world where all fonts could work because all hints could work. Apple said absolutely not and fixed it with rendering. MS said "We'll see", and we have seen a lot.

And John just noticed his vertical thin stems are disappearing in the ClearType collection, 6 years after I told him he could not design a font for screen text and print display without x instructions.

Quoting David: “the policy of not interpreting x hints in low resolutions and at small sizes causes spacing issues.”

The last part of §3.3.0 illustrates a UC ‘O’, rendered in ClearType, that contains no instructions in x-direction whatsoever. Have a look at the left and right side-bearing spaces this yields, and let me know how the rasterizer is supposed to interpret nothing to do the spacing.

jh>You'll see in the example settings given that the same set of values is differently interpreted depending on the smoothing model used.

Interpreted by what, where? Assuming that Cleartype is turned on, no matter what the settings in the GASP table, I'm not seeing any differences in the rendering of a hinted font in any browser using GDI or DWrite. So under what conditions does it make a difference? The spec can say this, that, or the other thing, but if I'm not seeing a difference, I have to wonder if this stuff applies to conditions that are obsolete.
Do you have an example handy?

Digital music software does quantize to the beat - the timeline of music being divided into 'beats to the bar', e.g. 16 beats to the bar. Quantizing in music takes the position of each note in the timeline and rounds its position to the nearest beat.

&

A quantized outline would surely work for not just a single ppem size, but also for multiples of that value. no? an outline quantized to the pixel grid at '6pts' would also fit the grid at 12, 24, 48... etc. No?
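In code, that musical quantizing is just one-dimensional rounding along the timeline (note positions made up for the sketch):

```python
def quantize_note(position_beats, grid_beats):
    """Round a note's position to the nearest grid line (e.g. 0.25 = 16ths)."""
    return round(position_beats / grid_beats) * grid_beats

notes = [0.07, 0.96, 1.52, 3.1]          # a slightly off-the-grid performance
print([quantize_note(n, 0.25) for n in notes])   # -> [0.0, 1.0, 1.5, 3.0]
```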

> To be sure, I have experimented with MIDI recording software, and I did get my piano phrases quantized—but certainly not to my beat. Conceptually, a thusly quantized MIDI file is still scalable in time, that is, you can play it back faster or slower. Just change the number of beats per minute. By contrast, a quantized set of outlines will demonstrate what David wanted to show at exactly one ppem size. At any other size, the screen will “re-quantize” it (ignoring potentially demonstrable effects at integer multiples of the one ppem size).

As far as I remember, I had to tell the software to quantize e.g. to the nearest quaver (eighth note), and this took the swing right out of the phrase. Yet I learned in jazz theory classes that out of two consecutive quavers, the first one is longer than the second one, but that the two still add up to a crotchet (quarter note). Coming from classical music, I would have notated it in triplets, using a tie across the first two out of three, but that didn’t seem to be an option. When I told the software to quantize to the nearest hemidemisemiquaver (sixty-fourth note), the midi timing got a lot closer to swing, but the notation became unreadable.

And yes, like I wrote, the quantized outline would have demonstrable effects at integer multiples of the ppem size, hence a set of outlines quantized to 12pt at 96dpi = 16ppem would work at 32, 48, 64, 80, 96, … ppem but likely not at 15ppem or 17ppem or similar (besides the missed opportunities to do a better job at 96ppem with rendering stroke design contrast, under- and overshoots, serifs, etc).
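The arithmetic behind "works at multiples, not in between" is easy to verify. A small Python sketch, assuming a 2048-unit em (my own choice for illustration, not a value from the thread):

```python
# A coordinate snapped to the pixel grid at one ppem size lands on
# integer pixels again at integer multiples of that size, but gets
# "re-quantized" by the screen at every other size.

UPEM = 2048                      # assumed units per em

def to_pixels(coord, ppem):
    return coord * ppem / UPEM

snapped = 5 * (UPEM // 16)       # exactly 5 px on the 16-ppem grid
for ppem in (16, 32, 18):
    px = to_pixels(snapped, ppem)
    print(ppem, px, px == int(px))   # True at 16 and 32, False at 18
```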

Beat Stamm & dberlow: I understand what you were saying about quantizing to fit the grid. Still, as dberlow said, quantizing has some value in type design, even if it's not really helpful in font rasterization for the screen.

Beat Stamm: I'm having some trouble with the Raster Tragedy website. FF4 renders it differently than Chrome does: many features are missing (like the sidebar navigation, titles, and example images [!]). Here's the start of Chapter 2 in Firefox, and here's Chrome (I scrolled down a bit to show that images do render in Chrome). I don't know if this happens to anyone else.

Beat Stamm: This is just a parenthesis. Some music software (like Ableton Live or Reason) allows you to add swing. In Reason you can simply add swing with a switch and not mess with the quantized notation. In Live you can add any sort of syncopation pattern (they call it Beat by the way ;) and can even control the intensity of each note (useful for a jazz ride cymbal pattern, for example) but if you want to have it "stay", it hardcodes it to the notation, modifying your original notes and (many times) destroying the quantization.

I think David and I were talking about quantization at two different levels of abstraction. David’s recent post reminds me how Hans Ed. Meier explained aspects of type design to me, although I don’t recall Hans using the term “quantization” in the process. In one of David’s earlier posts he referred to wanting “to show what would happen in Quartz or Windows if TT hints were all interpreted,” which I interpreted as being about font rendering, as opposed to font design.

As to FF4, I had it installed for a very short time but didn’t like how it seemed to “hard-wire” my screen’s DPI to 96, hence I reverted to 3.6.16. I don’t recall not seeing the navigation bar, titles, and illustrations, but from these symptoms can only guess that JavaScript is disabled or not executing. The front page should give you a warning (CAUTION: […]) if it is disabled.

Last but not least, good to know that midi recording software has improved since I last tried some eight years ago. Should give me something to try next winter.

David: And John, just noticed his vertical thin stems are disappearing in the ClearType collection, 6 years after I told him he could not design a font for screen text and print display without x instructions.

a) I didn't just notice this: I knew it would happen when subpixel positioning was introduced, and noted that it was the first thing to be noticed when looking at Constantia in IE9;

b) The thin verticals rendered with nice stroke density in GDI ClearType, even down at 9ppem...

c) ...because Constantia was hinted to render that way in GDI, including x instructions (just not deltas); we've never made any fonts without some x hinting, because, like Beat, we continue to find it useful.

So we're back to my earlier question for Beat: what potential is there for hints to improve stroke density in subpixel positioning environments?

As illustrated in §3.3.2, fractional advance widths and fractional pixel positioning “thwart[s] any intentions to carefully position strokes to ‘optimize’ stroke rendering contrast.” In other words, even if I prefer the “navy core” over the “maroon core,” because I may think it yields higher contrast, I don’t get to make that choice and use sub-pixel positioning at the same time.
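To picture the contrast loss in the simplest possible terms, here is a toy Python box-filter sketch (my own simplification; real ClearType filtering is considerably more involved):

```python
# Per-pixel coverage of a 1-pixel-wide vertical stem. At an integer
# x-position the stem fills one pixel solidly (a "core"); at a
# half-pixel position the same ink is split over two pixels, each
# only half covered ("fringe") -- the contrast that fractional
# positioning trades away for even spacing.

def coverage(left, width, n_pixels=4):
    right = left + width
    return [max(0.0, min(right, x + 1) - max(left, x))
            for x in range(n_pixels)]

print(coverage(1.0, 1.0))  # [0.0, 1.0, 0.0, 0.0] -- full-contrast core
print(coverage(1.5, 1.0))  # [0.0, 0.5, 0.5, 0.0] -- pale fringes
```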

The other aspect is the minimum distance. As mentioned in §4.1.3, “To minimize distortions caused by aggressive bi-level constraints, it was eventually decided that ClearType rendering should reduce the minimum distance criterion to one half of the value specified by the author of the constraints.” This wasn’t my decision, but IIRC decided by majority vote.

There is a simple way and a practical way to overrule this “divide-by-two” minimum distance. Both effectively amount to doubling the minimum distance. The simple way, if you just want to try out whether this works for you, is something like this:

XLink(parent, child, cvt, >=2)

In the long run this would be very impractical and “hard-wired” for ClearType. Hence, once you’re certain that this is worth your while, I’d try to conditionally set the minimum distance, something like this:

JM Solé >...as dberlow said, quantizing has some value in type design, even if it's not really helpful in font rasterization for the screen.

Lol. That is definitely not what I said. Absolute and relative quantizing are both used, and they are used exclusively in the design of fonts for screen rasterization.

Beat>Have a look at the left and right side-bearing spaces this yields, and let me know how the rasterizer is supposed to interpret nothing to do the spacing?

I don't think the rasterizer is supposed to interpret anything at that size; it should obey the instructions. And the instructions in your example should be building a tidy O, shorter and narrower, or taller and wider, depending on the adjoining sizes. But that wasn't the point.

Beat>... eventually decided that ClearType rendering should reduce the minimum distance criterion to one half of the value specified by the author...

dberlow: This usually happens when I exaggerate something and/or write something in a rush without stopping to think about it. Thanks for making me see this. It made me take a second look at your description. I had a specific case in mind, so absolute quantizing did not seem all that useful, but I forgot about some UI fonts and other size-specific fonts.

Also, even when one is not designing fonts for screen rasterization, quantizing is very useful for regularizing your design and sometimes saving time, or at least that's what I've seen in the little experience I have.

But really, I think I will stop writing about quantization. I don't want to hijack the topic with something that has already been discussed.

Quoting David: “I don't think the rasterizer is supposed to interpret anything at that size, it should obey the instructions. And the instructions in your example should be building a tidy O, shorter and narrower, or taller and wider, depending on the adjoining sizes. But that wasn’t the point.”

The part of the rasterizer that executes TrueType instructions is called the “interpreter,” hence in this context “to interpret” means “to execute” (or, to use your words, “to obey”) the instructions. I gather this terminology may cause confusion, but it is the terminology in conventional and actual use. I’ll try to remember to use “to execute” henceforth.

But, returning to the point I am making in §3.3.0: The fallacy of the “you don’t need x-direction ‘hints’ in ClearType!” approach is that it causes seemingly unpredictable spacing issues. The following example, taken from §3.3.0, rendered in ClearType, contains no instructions in x-direction whatsoever, hence there are no instructions for the rasterizer to execute, to obey, to disobey, or to “misinterpret.” Yet the spacing is off—off enough that even I can see it:

However, with instructions in x-direction I can tackle the proportions, take the advance width as a “budget,” and allocate left and right side-bearing spaces in accordance with the design.

I’m sure there are other strategies to allocate side-bearings and black-body-width, given the advance width (which I prioritized) and the proportions of the ‘O,’ and I’m open to suggestions, but the above is what I get following the aforementioned strategy. Notice that all strokes are rounded to the nearest 1/6 of a pixel, CVTs are used for all strokes and heights, but there are no deltas involved in the process.
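A toy Python sketch of that "budget" idea, with made-up numbers and a simple proportional split (not Beat's actual code; the real work is done by instructions and CVTs in the interpreter):

```python
# Keep the fractional advance width, round the black-body width to the
# nearest 1/6 of a pixel (one ClearType sample), then split the
# remaining white-space budget between the side bearings in the
# designed proportion.

from fractions import Fraction

def allocate(advance, body, lsb, rsb, sample=Fraction(1, 6)):
    body_q = round(Fraction(body) / sample) * sample   # snap to 1/6 px
    white = Fraction(advance) - body_q                 # budget left to spend
    left = white * Fraction(lsb) / (Fraction(lsb) + Fraction(rsb))
    return float(left), float(body_q), float(white - left)

# Hypothetical 'O' at some small ppem: 7.4 px advance, 5.9 px body,
# side bearings designed in a 1:1 ratio.
print(allocate("7.4", "5.9", 1, 1))
```

The advance width stays exactly what the layout engine asked for; only the interior allocation is re-negotiated, which is the priority order described above.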

Beat! Whoever @microsoft “voted” for the current solution elected no X hints. It’s a one-party system and they are all Y-men. Thus the distance between the CT quality floor (no hints) and the CT quality ceiling (a little Y hinting) is a tiny space with no room for anyone to distinguish themselves hintingly. What are you gonna do about it now!? :)

This has been in my head for a while now, so I'd be interested to have cold water poured on it :)
The idea of 'quantising' values in a font design in order to aid rasterisation reminds me of the faces Frutiger did for IBM's Selectric computer/typewriter. There was a hardware constraint, so the designer worked with the constraint to better the output. In the case of the Selectric, the constraint was fixed character widths, yet there was a wish for a more 'typographic' fitting of the characters. So Frutiger came up with a set of 6 character widths (including their fixed sidebearings) and resolved the problem by 'quantising' his fonts (he never called it 'quantising'!): he took, e.g., Univers, and quantised the face down to only 6 character widths, with each character fitting into one of those 6.
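In miniature, that width scheme is nearest-neighbour quantization of advance widths. A toy Python sketch with invented unit values (the real Selectric widths were different; these 6 numbers are purely illustrative):

```python
# Force every character width onto the nearest of a small fixed set
# of escapements, Selectric-style -- here 6 made-up unit widths.

WIDTH_CLASSES = [3, 4, 5, 6, 7, 9]   # invented values for illustration

def nearest_class(width):
    return min(WIDTH_CLASSES, key=lambda w: abs(w - width))

design_widths = {"i": 3.2, "o": 5.1, "n": 5.6, "m": 8.4}
print({c: nearest_class(w) for c, w in design_widths.items()})
```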

My question is: could a similar approach aid screen rendering of a font in any way? Could a similar approach help in hinting a font?

Quoting David: “[...] no room for anyone to distinguish themselves hintingly. What are you gonna do about it now!? :)”

Show the font makers of this world that x-instructions (“x-direction hints”) represent an opportunity to improve spacing and scalability, if that’s what they care for, and show the typophiles at large what font quality they could get, if only they knew what to ask for?

Just because “some Y-men said so...” doesn’t make anything a fact, nor a ceiling of what can be achieved.

Quoting Vernon: “My question is: could a similar approach aid screen rendering of a font in any way? Could a similar approach help in hinting a font?”

Not entirely sure I’m not trying to answer the wrong question. Conceptually, in the illustration below, I have taken the outlines of a lc ‘m’ and quantized them for the purpose of making them easier to hint.

Looking at the pixels this gets me at 12 pt and 96 DPI (16 ppem), I find that this makes it very easy to hint.

Essentially no hinting required. However, if I’d rather have 11 pt at 120 DPI (18 ppem), this gets me back to square one.
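For reference, the ppem values in this exercise all follow from ppem = pt × dpi / 72; a throwaway Python check:

```python
# The ppem arithmetic behind the sizes above.

def ppem(points, dpi):
    return points * dpi / 72

print(ppem(12, 96))    # 16.0       -- the size the outline was quantized for
print(ppem(24, 96))    # 32.0       -- integer multiple, still easy
print(ppem(11, 120))   # 18.333...  -- off the grid, back to square one
```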

Back to hinting.

As mentioned before, if I were to use an integer multiple of 12 pt, say 24 pt at 96 DPI (32 ppem), the quantized outline would be again very easy to hint.

I could repeat the above exercise with a lc ‘n,’ but even allowing for a different character width, this would get me substantially the same results.

The point is that the character shapes on the golf ball type element of the Selectric typewriter are analog (even if the advance widths have been quantized to just a few), the ribbon that’s struck by the type element is analog, and the paper that holds the imprint of the ribbon is analog. No raster tragedies, no hinting required.

Conversely, the screen is digital. Even the outlines that have been quantized as above get re-quantized (rasterized, or sampled), causing raster tragedies and requiring hinting.

I could repeat the above exercise with any kind of anti-aliasing, with substantially the same results at the abstraction level of samples instead of pixels. But as illustrated in §3.0, the raster tragedies don’t go away; they merely become harder to see.