How it works

Font pairing in design, font pairing with machine learning

Font pairing is a classic problem in the design world. Different fonts can be used to draw attention, lead the eye, or even form the foundations of a brand identity.

What exactly is a good font pairing? This is a difficult question to answer, but we can start with an easier question - what are bad font pairings?

Pairing fonts that are very similar but just slightly different creates visual conflict.
This is actually a core tenet of design - contrast is important not only in font selection but in color and position as well.

Fonts that share no relationship at all are not great either. Intent is another foundational aspect of design - things that look random and haphazard tend to evoke a feeling of discord (unless that's the effect you're going for).

A common way to combine fonts is to use fonts from the same family, or by the same designer. Another approach is to match various typographic measures, like x-height and ascenders/descenders.

Good font combinations tend to be fonts that share certain similarities, but contrast in some specific way.

If we simplify this and view it from a graphical perspective, we might create a map to guide our search. Let's say that the Y axis represents the font weight, and the X axis the obliqueness.

Fonts on opposite sides of the graph are possibly good pairings because they have a lot of contrast. The farther apart they are, the more they contrast.

Font pairs that are both far from each other and aligned vertically or horizontally are better candidates, because they share one dimension in common (similar weight or similar obliqueness).
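This "far apart, but aligned with one axis" idea can be written down directly. As a sketch (the font names and coordinates below are made up for illustration, not taken from real font data):

```python
import math

# Hypothetical 2D coordinates: (weight, obliqueness), each scaled 0-1.
fonts = {
    "ThinUpright": (0.1, 0.0),
    "BoldUpright": (0.9, 0.05),
    "ThinOblique": (0.15, 0.9),
    "BoldOblique": (0.8, 0.85),
}

def pair_score(a, b):
    """Reward pairs that are far apart overall but similar on one axis."""
    dw = abs(a[0] - b[0])          # weight difference
    do = abs(a[1] - b[1])          # obliqueness difference
    distance = math.hypot(dw, do)  # overall contrast
    alignment = min(dw, do)        # near 0 when the pair shares one dimension
    return distance - alignment    # high contrast, one shared dimension

# Rank every unordered pair by this score.
best = max(
    ((a, b) for a in fonts for b in fonts if a < b),
    key=lambda p: pair_score(fonts[p[0]], fonts[p[1]]),
)
print(best)  # a thin/bold or upright/oblique pair, not an opposite-corner one
```

The `alignment` penalty is what separates this from plain distance: the opposite-corner pair (ThinUpright vs. BoldOblique) has the greatest distance, but it contrasts on both axes at once and so scores lower than pairs that keep one dimension in common.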

Since fonts vary by a lot more than just obliqueness and weight, we have to add more dimensions, e.g. a Z-axis for serifs vs. sans-serifs.

Now we have a 3D map, and as before our best candidates are on opposite sides of the graph but run parallel to the axes. We might opt to keep going and add more dimensions for things like font width, letter spacing, ligatures and so on.

As the number of features increases, the dimensionality of our map increases - 4D, 5D, 6D and so on. We won't be able to visualize the map past 3D, but the math is the same in all cases.

Following this formula, we can systematically find fonts that share similarities but contrast in a key way - e.g. similar in obliqueness and serifs, but different in weight.

There can also be contrasts that are not ideal - fonts that are similar in weight but contrast in obliqueness and serifs. Not every axis of contrast will be visually pleasing, but the map can serve as a guide to find unique and sometimes surprising relationships between fonts.
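In higher dimensions the 2D picture generalizes naturally: a good candidate pair is far apart overall but contrasts along only one feature axis. A small sketch, again with made-up feature values:

```python
import math

# Hypothetical feature vectors: (weight, obliqueness, serif-ness, width).
# The values are invented for illustration; a real system would measure them.
font_a = [0.9, 0.1, 0.8, 0.5]
font_b = [0.2, 0.1, 0.8, 0.5]   # differs from font_a only in weight
font_c = [0.3, 0.9, 0.1, 0.9]   # differs from font_a on every axis

def contrast_axes(a, b, threshold=0.3):
    """Indices of the features where the pair meaningfully contrasts."""
    return [i for i, (x, y) in enumerate(zip(a, b)) if abs(x - y) > threshold]

def is_candidate(a, b):
    """Far apart overall, but contrasting along exactly one axis."""
    return math.dist(a, b) > 0.5 and len(contrast_axes(a, b)) == 1

print(is_candidate(font_a, font_b))  # True: contrast only in weight
print(is_candidate(font_a, font_c))  # False: contrast along several axes
```

The same two checks work unchanged whether the vectors have 3 entries or 30, which is the point of the "the math is the same in all cases" observation above.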

In machine learning terms, the coordinates on the map form a vector of features. How do we create these vectors? In simple cases we could go through all the fonts we have and rank them from boldest to thinnest. This might be a lot of work, but it's doable for one person. With 10, 20 or even more features to grade, things become unwieldy and require some computer assistance.
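For a single hand-graded feature, turning a ranking into a coordinate is trivial - the font names below are hypothetical:

```python
# Hypothetical hand-graded ranking, boldest first.
ranking = ["Heavy Sans", "Medium Serif", "Book Serif", "Hairline Sans"]

# Convert rank positions into a 0-1 "weight" feature (boldest = 1.0).
n = len(ranking)
weight = {name: 1 - i / (n - 1) for i, name in enumerate(ranking)}
print(weight["Heavy Sans"], weight["Hairline Sans"])  # 1.0 0.0
```

Doing this by hand for one feature is fine; doing it for dozens of features across hundreds of fonts is where the manual approach breaks down.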

To extract features automatically, a common approach is to use a deep neural net. With this approach we don't actually need to specify which features we want; instead, the model discovers the features for itself, and we use the resulting data as our map.
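The project's actual model lives on GitHub; as a toy illustration of the idea only (not the real architecture), here is a minimal linear autoencoder in NumPy. Random pixel data stands in for rendered glyph images, and the hidden layer's activations become each font's learned coordinates on the map:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for rasterized glyphs: 64 "fonts", 16x16 = 256 pixels each.
X = rng.random((64, 256))

# One hidden layer of size 8: its activations are the discovered features.
d_in, d_hidden = 256, 8
W1 = rng.normal(0, 0.05, (d_in, d_hidden))
W2 = rng.normal(0, 0.05, (d_hidden, d_in))

def forward(X):
    Z = X @ W1            # encode: 256 pixels -> 8 features
    return Z, Z @ W2      # decode: 8 features -> 256 pixels

# Train to reconstruct the input; the bottleneck forces a compact summary.
lr = 0.001
losses = []
for _ in range(200):
    Z, X_hat = forward(X)
    err = X_hat - X
    losses.append(float((err ** 2).mean()))
    W2 -= lr * (Z.T @ err) / len(X)           # gradient step on decoder
    W1 -= lr * (X.T @ (err @ W2.T)) / len(X)  # gradient step on encoder

embeddings, _ = forward(X)   # the learned "map": one 8-D vector per font
print(embeddings.shape)      # (64, 8)
```

Nothing here names weight or obliqueness explicitly - the 8 coordinates are whatever the model finds useful for reconstruction, which is exactly the trade the deep learning approach makes.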

There are more details on the deep learning aspect of this project on GitHub.