Jürg Lehni: I am interested in the relation between type technology and the designers who rely on it. Before the digital revolution in the field, the companies that provided the printing infrastructure and the people who made typefaces for it had a much stronger relationship than they do now. This facility was available only to a select few, and the trade was highly specialised. The people involved worked closely together on all details to ensure the highest quality. Today this strong link is gone. Has this led to a loss of quality, or to a lack of discussion between the industry that provides the infrastructure and the designers who use it?

Peter Biľak: The close relationship between technical and creative people was inevitable due to the technology involved. The best example is probably the cutting of punches based on drawings of typefaces. If you look at the collaborations of famous type designers and punch cutters (e.g. Jan van Krimpen and Rädisch), you will find that punch cutters would take a lot of liberty in interpreting the drawings in order to make the type work in the technology involved. The type might have looked quite different from what was originally intended — the medium had a significant impact on the end result. This continues today, as the available technology always defines the boundaries within which type design exists. But the friction between the original intention and the final execution in type design is smaller than ever before, because the author can execute most of the work alone.

Let us look at the next step in the history of typesetting. With the introduction of photocomposition systems, cutbacks were apparently made on quality expectations in return for lower printing costs, higher speed, and greater availability. The claim was that optical scaling removed the need for different cuts at different sizes. Previously, however, the cuts at different sizes were not simply scaled versions of the same shapes: different optical corrections were applied depending on the size, such as thicker stems at smaller sizes. Were these corrections simply ignored in photocomposition systems for the sake of flexibility? How did designers think about this development?

Optical scaling was already a consideration for metal typesetting after the invention of the pantograph. But in spite of using a single design master for the collection of fonts at different sizes, optical corrections did not immediately disappear. Photocomposition definitely (but temporarily) killed them, or to put it more precisely, users preferred the simplicity of having a single matrix to having a few hundred kilos of metal type in various sizes.
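The corrections discussed above can be sketched numerically. The following is a minimal illustration of the idea that stems scale sub-linearly, so smaller sizes keep relatively thicker stems; the power-law model and its exponent are arbitrary demonstration values, not historical ones:

```python
def optical_stem_width(base_width_pt: float, base_size_pt: float,
                       target_size_pt: float, exponent: float = 0.7) -> float:
    """Illustrative optical-scaling heuristic: stem width follows a
    sub-linear power law of the size ratio, so a stem shrinks more
    slowly than the body size does. The exponent is an assumption
    chosen for demonstration only."""
    scale = target_size_pt / base_size_pt
    return base_width_pt * scale ** exponent

# A stem that is 1.0 pt wide in a 12 pt master:
optically_scaled = optical_stem_width(1.0, 12, 6)  # at 6 pt
linearly_scaled = 1.0 * (6 / 12)
# optically_scaled > linearly_scaled: the small size keeps
# a relatively thicker stem, as a punch cutter would have done.
```

Photocomposition and early digital systems effectively fixed the exponent at 1, i.e. plain linear scaling, which is exactly the simplification the question refers to.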

Optical scaling in Garamond types.

There is quite a difference between the impact of human interpretation of a punch cutter and the influence of the technological framework on the final design. By removing the in-between step of the punch cutter, we are certainly closer to the direct formulation of an idea. This is a promise that computer technology keeps making in many fields, not only in the field of Desktop Publishing. But is this idea really desirable or do we actually need the technological impact and the limitations to inspire our designs?

I think artists have always desired to be independent of technicians. So I would see it as a positive development, since it gives us the chance to express precisely what we wish. There are still enough challenges, which have to do with the physical limitations of humans and materials, so there is no need to worry about a lack of them.

The introduction of digital systems has continued photocomposition’s tendency to ignore the need for optical corrections, up until today’s OpenType standard. But there was another effort to define a typesetting system that took far more typographic concerns into account: Donald E. Knuth’s Metafont and TeX, which in retrospect seem much more flexible than the outline-based approach used in OpenType. Do you know Knuth’s work? What is your opinion of it?

I discovered Knuth a couple of years back, and was very impressed.

I am curious to hear your opinion on the quality of the final result of Knuth’s Metafont endeavours. It seems that once he finished his work on the Computer Modern family of typefaces, the purpose of Metafont was for him largely fulfilled. What would you say about this family of typefaces from a typographical point of view? Is the final result of such a dynamically defined family of fonts comparable to well-drawn contemporary typefaces, or does it show problems of automation? Does it show its age?

I guess my answer will not be a surprise to you: Computer Modern suffers from a lot of problems if looked at purely in terms of the outlines of any single weight. But I guess this was never the point, as it has strengths that are unparalleled even today. Try setting a text that continuously progresses from sans to serif, then to condensed and mono-spaced, in any standard commercial application from 2009!

At one point in the past, Adobe seemed to have partly acknowledged the need for parametric fonts when they introduced the Multiple Master (MM) format, which allowed linear interpolation along multiple axes between different master definitions of each glyph’s outlines. MM has since been removed from all applications that once supported it. What is your opinion of MM? Is it a shame the feature disappeared, or was it unnecessary?

MM technology was closely related to GX fonts, another piece of technology that never made it far. Both were very interesting solutions, which ceased to exist before they were fully explored. The last time I worked with MM was more than 10 years ago. But I regularly use MM principles in my work and execute them in LettError’s Superpolator instead.
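The interpolation principle behind MM and Superpolator-style workflows can be sketched as follows. The point lists here are hypothetical; real tools interpolate full Bézier outlines whose masters must have compatible point structures:

```python
from typing import List, Tuple

Point = Tuple[float, float]

def interpolate_glyph(master_a: List[Point], master_b: List[Point],
                      t: float) -> List[Point]:
    """Linear interpolation between two compatible glyph outlines
    (same number of points, in the same order). t=0 yields master_a,
    t=1 yields master_b, and in-between values yield intermediate
    weights, widths, etc., depending on what the axis encodes."""
    if len(master_a) != len(master_b):
        raise ValueError("masters must have matching point structures")
    return [(ax + t * (bx - ax), ay + t * (by - ay))
            for (ax, ay), (bx, by) in zip(master_a, master_b)]

# Two hypothetical stem rectangles: a light and a bold master.
light = [(0, 0), (20, 0), (20, 100), (0, 100)]
bold  = [(0, 0), (60, 0), (60, 100), (0, 100)]
medium = interpolate_glyph(light, bold, 0.5)
# medium[1] is (40.0, 0.0): the stem is halfway between the masters.
```

Multiple axes simply mean repeating this interpolation per axis, each axis contributing its own pair (or set) of masters.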

This leads us to another very interesting topic: With all of LettError’s efforts in the combination of type design and programming, starting with RoboFog which probably led to the integration of Python in FontLab (UFO being another example), there seems to be a tendency to make scripting and additional software like Superpolator more and more a part of the design process. Does this help to overcome the shortcomings of OpenType as a rather static file format?

The reason why some designers create their own tools is simply the lack of better ones. FontLab is now the only company in the field, licensing all the major font editors (FontLab Studio, TypeTool, Fontographer), all of which come with a number of bugs and limitations. Making scripts or stand-alone applications is an effort to move beyond the limitations of boxed software and to provide frameworks for making fonts that can react more flexibly to the different requirements of every new project.

If you could be part of the formulation of a new type technology, where would you see the role of software? Would you try to change things fundamentally, or are you happy with how things currently work?

I personally do not feel the limitations of the technology. Or perhaps my brain is so conditioned by it that I cannot think of them as limitations, which is probably closer to the truth. Of course all shrink-wrapped software comes with clear limitations, but as a designer you are supposed to think outside of them.

But indeed, it would be much simpler if one did not have to look for work-arounds, but could use colour, layers, and other attributes directly in software-based type design. Fonts are an interesting product because they are not the final product. When making fonts, standard tools are used to produce non-standard tools for graphic designers. The question is to what point fonts should be designed, because to a certain extent they are going to be redesigned by someone else—someone else will choose the sizes, leading, colours, and context in which they will appear.

This raises another very interesting question. Do you see fonts simply as tools, or as artwork? Are they something in-between, and if so, what would the best definition be?

This is quite hard to define, as sometimes they need to be a tool, and sometimes an artwork. I am hesitant to come up with a definition, as that would be less flexible than the possibilities of typefaces. And then there are also the distinctions of terminology (e.g. font vs. typeface), which I am not so interested in. The principles of the alphabet are different from the principles of rendering technology. It is hard to contain it all in one definition. But to answer your previous question more directly, I try not to get too engaged with designing and defining the technology itself. I see too many gifted designers getting so deep into the abstract questions of formulating technology that they have no time left to actually use the technology in a creative way. I suppose there is a lot of creativity in making the tools, but I also want to reserve time for the simple making of artefacts: books, posters, typefaces, exhibitions, etc. I do use some non-standardised tools, but it depends on the project, and on some projects I work with just a pen and pencil.
