I haven’t heard of any spectacular changes (the underline feature in Illustrator was long overdue). Some of the Photoshop features look neat.

Is there any buzz on InDesign CS2 or anything else? Are they offering anything free with the upgrade (e.g. last time they had Brioso Pro and several other choices—maybe this time they’ll team up with Macromedia and offer a discount on Fontographer MX)?

Not only do I not like mandatory activation, I detest it! Ultimately, it makes things more difficult for me (a paying user) when rebuilding a hard disk or upgrading a PC with a new one. I’ve had experience with the Windows Photoshop activation, and it’s awful. Once I had to call Adobe’s activation line, and the automated process kept rejecting the very long number I keyed in. I eventually had to wait for a CSR to get the activation key. That’s time I will never get back, time I could have spent designing or choosing my own way to waste it. This does not deter piracy.

Heh. The actual name is Garamond Premier. The only relationship to the old Garamond Pro is that both are inspired by the types of Claude Garamond and designed by Robert Slimbach. Garamond Premier has a several-times-larger character set, and four completely independently designed optical sizes. Language coverage includes Greek, Cyrillic, extended Latin (even Vietnamese), and there are small caps for all the supported languages.

I was going to say that you’d have to wait for the next generation of technology after OpenType for that kind of capability. But then it occurred to me that OpenType’s table structure and capacity to handle new, arbitrary tables in the future without harming existing functionality means that we could do this in OpenType 2.0.

We just have to figure out how to put physical instantiation into a table. Also, the cost of arbitrary fabrication systems capable of building physical objects based on computer-based instructions needs to come down. But they only cost $20K today (according to a recent piece in The Economist), so if we apply price reduction at rates equivalent to Moore’s Law (improbable, but just for fun)… mmm, in about 13.5 years arbitrary fabs will cost about $40, so every computer will have one. Then you could just have a new OpenType table that contains instructions for building stuff, and have it build the thing that goes and gets your drink from the fridge.
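Just to sanity-check that back-of-the-envelope arithmetic (assuming the usual Moore’s Law cadence of one halving every 18 months; the figures are the post’s, not real forecasts):

```python
import math

# Figures from the post: $20K today, target price $40.
start_cost = 20_000.0
target_cost = 40.0

# Number of halvings needed, at 1.5 years per halving.
halvings = math.log2(start_cost / target_cost)
years = halvings * 1.5

print(round(halvings, 1))  # 9.0 halvings
print(round(years, 1))     # 13.4 years, i.e. the "about 13.5" above
```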

First, I believe that you’re underestimating the exponential rate of growth of Moore’s Law in today’s ever-shrinking world of computer literates. Many institutions like MIT are spreading the technology to create such fonts into the nether regions of this tiny planet. And with nanotechnology—the line between analog and digital is being shredded faster than the hands of a drunk man feeding a wood chipper.

And that’s not even factoring in the close connection between font designers and programmers and their refrigerators.

Heck, it took Adobe 22 years to make CS1 and about 1.5 years to pop out CS2. That’s a 1,466 percent increase.

I fully expect that Creative Suite 3 will be out in about 24.6 days.

So maybe I should wait. (But I’m very bad at waiting.)

Thanks for keeping us up-to-date, DeWitt

PS They must work you like slaves in that place. Interrobang twice for “help me, they’ve shackled me to my desk and make me type my reports in Comic Sans.”

Hmmm. I get 37.3 days between CS2 and CS3. But only 2.5 days after that for CS4! You gotta love geometric progression!
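For anyone checking the arithmetic, here is a quick sketch of that geometric progression (assuming the speedup ratio stays constant at 22 years / 1.5 years):

```python
# Release intervals shrinking by a constant ratio.
cs1_years = 22.0
cs2_years = 1.5
ratio = cs1_years / cs2_years   # ~14.67x speedup per release

cs3_days = cs2_years * 365 / ratio   # interval before CS3
cs4_days = cs3_days / ratio          # interval before CS4

print(round(cs3_days, 1))  # 37.3 days
print(round(cs4_days, 1))  # 2.5 days
```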

Thankfully, I can type my reports in pretty much any typeface I choose — though I generally use our corporate typefaces, Minion and Myriad. (Lately I’ve also been using my upcoming Hypatia a lot, just to put it through its paces in real usage.)

“Let’s see what optical kerning does to one of Adobe’s own most recent state-of-the-art fonts, Adobe Garamond OT Pro. Consider the simple word ‘jumps’ at 12 points. This word sets perfectly and doesn’t require any kerning. It has been carefully optimized by Adobe’s designer Robert Slimbach to have absolutely perfect spacing at this point size.

Now let’s specify Adobe’s ‘optical’ kerning. InDesign subtracts 23 units of space between j and u, adds 14 units between u and m, subtracts 16 units between m and p, and -1 units between p and s. The result is atrocious. The u is now too close to j, and there is a noticeable river of space between u and m. A well-made font has been rendered useless.

It’s the same story no matter what font or setting you choose. Adobe’s ‘optical kerning’ isn’t a feature — it’s a liability. Worse still, it’s featured in Illustrator and Photoshop.

Adobe has gambled that its target customers will not be able to tell the diﬀerence between a professional font rigorously spaced by its original designer, and a complete mess-up spaced by its guesstimator. I don’t agree. My faith in designers is greater than that.”
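For scale: pair-kern values like those quoted are conventionally expressed in thousandths of an em, so here is a hedged sketch of what they amount to in points (assuming a 1000-unit em; the pair values are simply the ones quoted above):

```python
def kern_in_points(kern_units, point_size, upm=1000):
    """Convert a pair-kern value in font units to points at a given size."""
    return kern_units * point_size / upm

# The adjustments quoted for 'jumps' at 12 pt: j-u, u-m, m-p, p-s.
for pair, units in [("ju", -23), ("um", +14), ("mp", -16), ("ps", -1)]:
    print(pair, round(kern_in_points(units, 12), 3), "pt")
# e.g. the j-u pair shifts by -0.276 pt at 12 point
```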

> Let’s see what optical kerning does to one of
> Adobe’s own most recent state-of-the-art fonts

No, let’s not, because everybody knows nothing can beat a really good human spacer. InDesign’s optical spacing wasn’t meant to be used only with Adobe’s fonts, or Carter’s fonts; it was meant to be useful to a lot of people, and most fonts most people use have imperfect spacing. Take Mrs Eaves, for example, a font that many people love to the point that they ignore its horrid spacing: InDesign renders Mrs Eaves actually usable.

Anybody free of a visceral, burning desire to strike down Adobe at every turn in the berserker rage of a jilted lover (think Glenn Close in Fatal Attraction) can see that InDesign’s optical spacing is just another tool that can be used or misused. It is better than not having it, and that’s more than can be said of many things, certainly most fonts!

> My faith in designers is greater than that.

Faith, shmaith. Most fonts are spaced badly (you of all people should realize and admit this), and InDesign’s optical spacing makes them less bad, that’s all. A good typographer simply knows when to use it and when not to. Think of it as anti-lock braking on a car: there are more people who need to avoid rear-ending the car in front of them than there are people who need to perform a controlled slide.

Actually, what I really want to know is whether it can look at a pile of fonts with different values for the “size” feature and treat them as a single font, selecting the correct optical size automatically based on the size as used. I’ve heard rumors of such a feature, and of course it would make perfect sense with Garamond Extreme, but I’d like to get it confirmed.

Now just to convince my boss that buying CS2 is a justifiable business expense…

>I generally work with a font’s built in kerning and then flip on the Optical Spacing for display work and fine tune from there. Often saves a lot of hassle.

Well, Stephen, I suspect the reason you see a slight visual improvement is that the predominant action ‘optical kerning’ takes is tightening.

May I suggest that you simply track negatively instead?

The problem with ‘optical kerning’ is that it tightens disproportionately and incorrectly — and when it thinks it has to add space, the results are usually disastrous.

The advantage of using simple negative tracking is that you preserve the correct spatial relationships of the font. (Unless it is just a shareware cockup — in which case, what are you doing using it to start with?)

The profound disadvantage of ‘optical kerning’ is that it distorts the original spatial relationships of the font in an irrational and unpredictable manner.

It requires only the simplest analysis of any of its suggestions to see where it almost invariably goes wrong.

What is really great about the feature, though, is that it makes it so easy to determine exactly where and how it has gone wrong.

My position is that it is simply indisputable that the traditional technique for dealing with display sizes — used since phototype — is the best: negative tracking. ‘Optical kerning’ is a gimmick gone wrong.

I know that deeply respected font folk from URW have worked on this intensively, but I believe their work has been in vain, primarily because the program still does not look at actual stems. Analyzing white space has been tried by a hundred programs a hundred times, and just doesn’t work. Never has, never will. EvB and JvR tried doing this in a much more sophisticated manner, and ultimately conceded failure. The only thing that is surprising here is the number of worthies that were taken in on this latest occasion.

There remains the suspicion that ‘optical kerning’, though of dubious merit for professional fonts, can be of value for badly spaced shareware fonts. I can’t agree. If the program cannot understand a well-spaced font, it cannot understand a badly-spaced font. It seems clear that whatever slight benefit might exist is just due to generalized tightening that could be achieved better with simple tracking.

I have yet to see a single documented test which establishes that ‘optical kerning’ actually understands what it is doing.

But this isn’t an issue which needs to be argued theoretically. Arguments for either side can easily be demonstrated by simply mentioning the settings.

I have shown that ‘optical kerning’ will hopelessly mangle the spacing of the word ‘jumps’ in Adobe Garamond Pro. Can you provide us with some examples in which you can consistently show that ‘optical kerning’ actually performs well? (Especially when tested against simple negative tracking?) My suspicion is that in at least 95% of cases, ‘optical kerning’ will make visibly deleterious decisions. And my position is that no program can even begin to know how to space or kern until it is aware of actual stem positions. This, clearly, Kernus/URW/optical kerning cannot accomplish, and indeed it is too much to expect in a WYSIWYG context.

Incorrect. The spacing depends on point size, with smaller sizes generally getting loosened, which is of course what’s required. I’ve actually graphed the algorithm’s behavior, and it’s “bilinear”: a straight line between 4 and 12 point, another (shallower) straight line between 12 and 72, and no [further] change below 4 and above 72.

That said, it’s probable that overall InDesign’s algorithm tightens more than it loosens, but I would think that’s due to most fonts being spaced too loosely (it’s easier that way, after all).
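That “bilinear” shape is easy to sketch. Only the breakpoints (4, 12, and 72 point) come from the observation above; the adjustment amounts and units in this snippet are made-up placeholders:

```python
def spacing_adjustment(pt_size, adj_4=30.0, adj_12=0.0, adj_72=-20.0):
    """Piecewise-linear ("bilinear") spacing adjustment vs point size.

    Flat below 4 pt and above 72 pt, one slope from 4-12 pt and a
    shallower one from 12-72 pt. Positive = loosen; the adjustment
    values are illustrative, only the breakpoints are observed.
    """
    if pt_size <= 4:
        return adj_4
    if pt_size <= 12:
        # steeper segment: 4 -> 12 pt
        t = (pt_size - 4) / (12 - 4)
        return adj_4 + t * (adj_12 - adj_4)
    if pt_size <= 72:
        # shallower segment: 12 -> 72 pt
        t = (pt_size - 12) / (72 - 12)
        return adj_12 + t * (adj_72 - adj_12)
    return adj_72
```

With these placeholder values the 4–12 pt segment changes 3.75 units per point while the 12–72 pt segment changes only 0.33 units per point, matching the “shallower above 12” description.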

> May I suggest that you simply track negatively instead?

No, because that ignores “boundary conditions”, like the right side of the “r”. Fooling with the tracking is a brute-force approach that’s generally much worse than what InDesign would do.

One problem with InDesign’s algorithm, though: if you apply tracking to text that has optical spacing enabled, the tracking is still applied “blindly”, leading to the boundary issues above; I think it would have been much better to take the opportunity to treat the tracking adjustment as a “recommendation” for the optical algorithm to change its parameters. But maybe there are performance issues (which is why I think the algorithm is bilinear and not outright smooth).

> Unless it is just a shareware cockup

There’s a difference between a font with good forms but lousy spacing, and a font that’s good in both. There are very many fonts in the former category (probably because most people in type design are aesthetically competent but analytically lacking) and these are the ones that benefit most from InDesign.

> Analyzing white space has been tried by a
> hundred programs a hundred times, and just
> doesn’t work

It hasn’t been done to the point of matching a good human spacer, I agree, but it’s really the only way you could get close to it! Balancing the white space, after all, is the whole point of this; stems are circumstantial, a shadow of the true solution.

The reason it hasn’t been done yet isn’t that it’s not possible, it’s that there’s not enough money in the field to finance a serious effort to completion. Many people who have given up wouldn’t have if we paid them enough. But then we have people like Raph, so all is not lost.

> Can you provide us with some examples in which
> you can consistently show that ‘optical kerning’
> actually performs well?

You can easily do it yourself, like with Mrs Eaves (which is no shareware font). And that way your denial has a chance (albeit a tiny one) of being cast off. If anybody else did it, you would simply fabricate deeper diversions. I know you, Bill.

—

> My conclusion was that ….

Tellingly, you were the only one to come to that conclusion, if I remember correctly.

And is it a coincidence that you’re also an Adobe-hater? Every opinion anybody presents has to be put in the broader context of what that individual wants from his life, has to be “filtered” to extract anything useful out of it, sometimes filtered quite heavily.

About Bill, for example, this hit me just yesterday: he’s the “Da Vinci Code” of the type scene. Full of interesting rumors and insights, but the whole is bound together with a self-centered, sensationalistic opportunism that can make him dangerously misleading. I guess you could say that Bill himself is a tool that’s difficult to use well!

>And is it a coincidence that you’re also an Adobe-hater? Every opinion anybody presents has to be put in the broader context of what that individual wants from his life, has to be “filtered” to extract anything useful out of it, sometimes filtered quite heavily.

Hrant, must you always compensate for imagined spin? How about accepting arguments at face value? All I want is the truth.

I am not an “Adobe hater”, for goodness’ sake; I am a critic. Heaven knows they get enough routine praise from others that I don’t have to say, “They’re wonderful, but…” every time I dare offer my opinion. If you check the original thread, you will see that then as now I did my practical research and testing, showed visual proof (which is more than most pundits can be bothered to do), for those with the eyes to see, and came to the conclusion that Optical Kerning is a dodgy proposition for serifed type, but can be effective for sans faces.

I dunno, Nick, that’s not a very persuasive example, because you found it necessary to add +10 tracking to offset the kerning. Can’t you find, somewhere, a combination of letters that optical kerning actually does right? It looks like what you have accomplished here is to use +10 tracking to negate the ‘optical kern’ between o and r and decrease the metric/optical kern between t and o. Then, both you and Kernus/Indy have chosen to open the tight space between r and t — in fact you have chosen to open it considerably further than Kernus/Indy would have chosen. That is clearly not what the designer(s) of the font would have chosen, but, regardless of its value as a personal choice in setting the word, what possible reason can Indy/Kernus have to offset the intended aesthetics of the font with so much additional white space as an automatic choice?

Again, I would like to see some evidence of perfected behaviour here. Setting display type with an auto kerner and then having to add +10 tracking — which is, let’s face it, a drastic corrective — doesn’t make the case. I’m not saying it can’t be done, but I await evidence that Kernus/Indy is making rational choices with any consistency. Far be it from me to encourage you to waste more time on this, but can’t you come up with something a little better?

Also, Adobe warmly recommends using optical kerning for text, and makes no mention that I can find that the system should be preferred for display.

>because you found it necessary to add +10 tracking to offset the kerning.

I thought it was more instructive to compare words of the same length, to better show where InD adds and subtracts. But if you’d rather see it the other way, here’s with no tracking:

Look, I’m no fan of “Optical Spacing”, I’m just pointing out where I consider it may have some slight merit, such as putting some space between the “r” and “t” in Hel45. I haven’t set a job in Helvetica since 1989, and never use Optical, but in the interests of science…

I’d rather have the tight r_t combination and the nicely balanced T_o, o_r in the metrics sample, than the poorly balanced T_o_r in the Optical sample. At least the letter space in the Metrics sample is well balanced.

Sorry, make that “optical refutation”. After all, it only takes one good refutation to disprove a theory. That’s proper (or Popper) science, ennit?

***

Sorry for getting off topic.

The most typographically relevant new feature seems to be “Anchored Objects” in InD CS2. This will prove useful for keeping marginalia close to their source in a main text column. Very biblical. And it will be useful, especially in the layout process (and a design stimulus), for a complex paragraph head — say, dates and times of a performance — that projects from a text column.

If the theory is that InDesign’s algorithm is Infallible (a retarded theory that nobody has ever put forth) then of course you’ve “proven” it. Yippee.

If the theory is that InDesign’s algorithm is Useless, then not only have you proven nothing, but you’re not even trying. Neue Helvetica indeed. No, why don’t you try Miller? I keep bringing up Mrs Eaves, a pricey, heavily used font that Bill detests but from a designer you admire, yet both of you merrily collude in ignoring its relevance here.

Miller was sarcastic — my point was that it’s spaced too well for a poor industry to have produced an algorithm good enough to improve on. Obviously, if we’re trying to see if InDesign’s algorithm is useful (not omniscient), we should use a mid-range benchmark; and I don’t think a Helvetica from a good foundry qualifies either. Also: if you’re serious, you really need to try a little more than 3 pair adjacencies…

That’s actually less convincing than I expected… Some things (like the extremely important “the”) are much improved, but in some places the decisions are suspect*. That “ru” in the 12-optical, for example, makes me think that there’s something to Nick’s observation about a “problem” with serif fonts. Overall, an improvement I’d say, but not as much as I expected.

* It’s especially interesting to note the cases where the kerning decision is of opposite sign!

Something else though: I think there’s a problem with your rendering. Look at the “iam” (first word) in the 12-optical and the 48-optical: the positive pair is smaller in the 48, but the gap looks much bigger…

Maybe we need a PDF? (And if you did a convert-to-outlines you’d still be legal.)

What I tried to do is count how many of the decisions seemed good and how many seemed bad. For the 72 point I got 10 good versus 4 bad; but for the 12 point (which I actually observed on-screen, since my printer isn’t good enough) it was quite close: 6 good versus 5 bad. It’s possible that the PDF suffers from the same type of problem as the previous GIF did, although I personally doubt it. Somebody with a 2400dpi printer might want to try it out and chime in…

—

One thing that hasn’t been mentioned yet: from what I understand (I think John pointed this out once), the Optical algorithm’s decisions depend on the existing spacing; they don’t completely override it. I’m not sure if that’s true, or why it’s true, or how it might work, but it’s possible that if a font’s existing spacing is really whacked, the algorithm doesn’t perform as well? Or maybe if a font’s spacing is unusually loose (like in Mrs Eaves) the algorithm gets confused? Dunno.