The Making of Conversations

The making of Conversations was on many levels a process of
dialogue between people, processes, and systems. Xavier Klein and
Christoph Haag were as much involved in editorial decisions as they were
in creating an experimental platform that would allow us to produce a
publication in a way true to the content of the conversations it would
contain. In August 2014 we discussed the ideas behind
their designs and the status of the systems they were developing for the
book that you are reading right now.

I wanted to ask you, Xavier: how did you end up in
Germany?

It's a long story, so I'll make it short. I benefited from the
Leonardo programme, a scholarship to do an internship abroad. So I searched
for graphic design studios that use Open Source and Free Software. I asked
OSP first, but they said No. I didn't know LAFKON at that time,
and a friend told me: Hey, there is this graphic design studio in
Germany. So I asked, and they said Yes. So I was happy.
(laughs)

How did you start working on this book?

I thought it would be nice to have a project during Xavier's
stay in Augsburg with a specific outcome. Something going beyond pure
experimentation. So I asked Constant if there were any projects that needed
to be worked on. And I'm really happy with the Conversations
publication, because it is a good mixture. There is the technical
experiment: how you would approach something like this using Free
Software. And there is the editing side: to read all these opinions and
reflections. It's really interesting from the content side, at least for
me -- I don't dare to speak for Xavier. So that's basically how it
started.

You developed a constellation of tools that together
produce the book. Can you explain what the elements are, how this book
is made?

We decided in the beginning to use Etherpad for the editing.
A lot of documentation during Constant events was done with Etherpad and I
found its very direct access to editing quite inspiring. Earlier this year
we prepared a workshop for the Libre Graphics Meeting, where we'd have a
transformation from Etherpad pages to a printable .pdf. The idea
was to somehow separate the content editing and the rendering. Basically I
wanted to follow some kind of 'pull logic': at a certain point in the
process, there is an interface where you can pull out something without
the need to interfere too much with the inner workings of this part. There
is the stable part, the editing on the Etherpad, and there is something
that can be more experimental and unstable, which transforms the content
into again a stable, printable version. I tried to create a custom markdown
dialect, meant to be as simple as possible. It should be reduced to the
elements that are actually needed. For example, if we have an
interview, what is required from the content side? We have text and
changing speakers. That's more or less the most important information.

So on the first level, we have this simple format and from there the
transformation process starts. The idea was to have a level where
basically anybody who knows how to use a text editor can edit the text.
But at the same time it should have more layers of complexity. It actually
can get quite complex during the transformation process. But it should
always have this level where it's quite simple. So just text and, for
example, this one markup element for 'ok, now the speaker
changes'.
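
The conversation does not spell out the dialect's exact syntax, so as a purely hypothetical illustration, a pad source with such a speaker-change element might look like this:

    SPEAKER: FS
    How did you start working on this book?

    SPEAKER: CH
    I thought it would be nice to have a project during Xavier's stay ...

The transformation step could then be as small as one line, here rewriting each marker into an invented LaTeX macro:

    # turn each hypothetical speaker marker into a LaTeX macro call
    sed 's/^SPEAKER: \(.*\)$/\\speaker{\1}/' conversation.txt > conversation.tex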

In the beginning we experimented with different tools, basically small
scripts to perform all kinds of layout tasks. Xavier for example prepared a
hotglue2svg converter. After that, we thought: why don't we try
to connect different approaches? Not only the very strict markdown
to TeX to .pdf transformations, but to think
about under which circumstances you would actually prefer a canvas-based
approach. What can you do on a canvas that you can't do, or that is much
harder, with a markup language?

It seems you are developing an ad hoc markup language? Is
that related to what you wrote in the workshop description for
Operating Systems [1]: 'Using
operating systems as a metaphor, we try to imagine systems that are both
structured and open'?

Yes. The idea was to have these connected/disconnected parts.
So you have the part where the content is edited in collaboration, and you
have the transformer script running separately on the individuals'
computers. For me this solved, in a way, the problem of stability. You can
use quite elaborate, reliable software like Etherpad and derive
something from it without going into its inner workings. You just pull the
content from it, without affecting the software too much. And you have the
part where it can get quite experimental and unreliable, without
affecting all collaborators, because the process runs on your own computer
and not on the server.
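
Pulling the content from the pad without touching the software can indeed be as small as one HTTP request, since Etherpad exposes a plain-text export of every pad. A minimal sketch; the host and pad name are placeholders:

    #!/bin/sh
    # fetch the current state of the pad as plain text via Etherpad's export URL
    PAD="conversations"   # hypothetical pad name
    curl -s "https://pad.example.org/p/$PAD/export/txt" > source.txt
    # from here on, the (local, possibly experimental) transformer takes over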

The markup concept comes from the documentation of a video streaming
workshop in Linz. There we wanted to have the possibility to write the
documentation collaboratively during the workshop, and we also needed to
solve problems like How about the inclusion of images? That is
where the first markup element came from, which basically just was a
specific line of text, which indicates 'here should be this/that image'.
If this specific line appears in the text during the transformation
process, it triggers an action that will look for a specific file in the
repository. If the image exists, it will write the matching macro command
for LaTeX. If the image is not in the repository, it will do nothing. The
idea was that the creation of the .pdf should happen anyway,
even though somebody's repository might not be at the latest state and a
missing image would prevent LaTeX from rendering the document. It should
also ignore errors, for example if someone mistypes the name of an image or
the command. It should not stop the process, but produce a different
output, e.g. without the image.
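
A sketch of that tolerant behaviour, assuming the marker line carries an image name; the repository layout is invented for the example:

    #!/bin/sh
    # called with the image name found on the marker line
    IMG="$1"
    if [ -f "images/$IMG" ]; then
        # the image is present in the local repository: emit the LaTeX macro
        printf '\\includegraphics[width=\\textwidth]{images/%s}\n' "$IMG"
    fi
    # if the file is missing or the name was mistyped, nothing is emitted:
    # rendering continues and the .pdf is simply produced without the image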

Why do you think the process should not stop when there's an
error? Why is that so important?

For me it was important to ensure some kind of feedback,
even if there might be 'errors' in the output. Not just 'not work'. It can
be really frustrating when the first thing you have to do is to find and
solve a problem -- which can be quite hard with this sort of
unprofessional scripts -- before anything happens at all. So
at a certain point at least something should appear, even if it's not
necessarily the way it was originally intended. Like a tolerance for
errors, which would still produce something that may be different from what
you expected. But it should produce 'something'.

You imagine a kind of iterative development that we know
from working with code, that allows you to keep different versions, that
keeps flowing in a way.

For example, this specific markup format. It's basically
markdown, and I wanted some more elements, like footnotes and the option to
include citations and comments. I find it quite handy, when you write
software, that you have the possibility to include comments that are not
part of the actual output, but part of the working process. I also enjoy
this while writing text (e.g. with LaTeX), because I can keep comments or
previous versions or drafts. So I really have my working version and
transform this to some kind of output.

But back to the etherpash workshop. Commands are basically comments
that will trigger some action, for example the inclusion of a graphic or
changing the font or anything. These commands are referenced in a separate
file, so everybody can have different versions of the commands on their
own machine. It would not affect the other people. For example, if you
wanted to have a much more elaborate GRAFIK command, you could
write it and use it within your transformer of the document, or you could
introduce new commands that are written on the main pad, but would be
ignored by other people, because they have a different reference file.
Does this make sense?
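
One way to picture such a reference file is as a collection of Bash functions, one per command, that every collaborator can override locally. The function body here is an invented, minimal version:

    # commands.sh -- local reference file, sourced by the transformer script
    GRAFIK () {
        # basic version: include the named graphic at text width;
        # a collaborator's copy might do something far more elaborate
        printf '\\includegraphics[width=\\textwidth]{%s}\n' "$1"
    }

The transformer would then scan the pad text for command comments and only call a function if it is defined locally, e.g. type GRAFIK >/dev/null 2>&1 && GRAFIK "$ARG", so that unknown commands are silently ignored.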

Yes. In a way, there are a lot of grey zones. There are
elements that are global and elements that are local; elements can easily
go parallel, and none of the commands always has the same output
for everyone.

They can, but they do not need to. You can stick to the very
basic version that comes directly from the repository. You could use this
version to create a .pdf in the 'original' way, but you can
easily change it on different levels. You can change the Bash commands
that are triggered by the transformer script, you can work on the LaTeX
macros or change the script itself. I found it quite important to have
different levels of complexity. You may go deeper, but you do not
necessarily have to. The Etherpad content is the very top level. You don't
have to install any software on your computer, you can just open up a
browser and edit the text. So this should make the access to collaboration
easier, because for a lot of experimental software you spend a lot of time
just getting it to run. Most often you have a very steep learning curve,
and I found it interesting to separate this learning curve in a way. So
you have different layers, and if you really want to reconfigure on a deep
level, you can, but you do not necessarily have to.

I guess you are talking about collaboration across different
levels of complexity, where different elements can transform the final
outcome. But if you take the analogy of CSS, or let's say a Content
Management System that generates HTML, you could say that this also
creates divisions of labour. So rather than making collaboration possible,
it confines people to different files. How do you think your systems
invite people to take part in different levels? Are these layers porous at
all? Can they easily slip between different roles, let's say an editor, a
typographer and a programmer?

Up to a certain extent it's like a division of labour. But
if you call it a separation of tasks, it definitely makes sense for me. It
can be quite hard if you have to take over responsibility for everything
at the same time. So it makes sense for me, also for collaboration, to
offer this separation. Because it can be good to have the possibility not
to have to deal with the whole system and everything at the same time. You
should be able to do so, but you should not necessarily have to. I think
this is important, because a lot of frustration regarding Free Software
systems comes from the necessity to go to the deep level at an early
stage. I mean, it's an interesting problem. The promise of convenience is
quite hard, because most times it does not really work. And it's also fine
that it doesn't really work. At the same time it's frightening for people
to get into it, and so I think it's good to do this step by step and also
to have an easy top-level opportunity to go into, for example,
programming. This is also a thing I became really interested in. The
principle of the commandline to 'extend usage into programming'.
You do not have to have a development environment and then you compile
software and then you have software, but you have this flexible interface
for your daily tasks. If you really need to go to a deeper level, you can, at
least with Free Software. But you don't have to ... compile your kernel
every time.

Not every time! What I find interesting about your work is
that you prefer not to conceal any layers. References, commands, markup
hint at the existence of other layers, and the potential to go somewhere
else. I wanted to ask you about your fascination or interest in something
as 'old school' as Bash scripting. Why is it so interesting?

Maybe at first it's a bit of a fascination for the
obscure. Normally, as a graphic designer, you wouldn't think of using
the commandline for your work. When I started to use GNU/Linux, I'd try to
stay away from the terminal. Which is basically, as I realised pretty
soon, not possible. [2] At some point,
Bash scripting became really fascinating, because of the possibility to
use automation to correct or add functionalities. With the commandline
it's easy to automate repetitive tasks, e.g. you can write a small script
that creates a separate .svg file for each layer in a
.svg file [3], converts these
separate .svg files to .pdf files [4] and combines the .pdf files into a
multipage .pdf [5]. Just by
collecting commands you'd normally type on your commandline interface. So
in this case, automation helps to work around the missing multipage support
in Inkscape. Not by changing the application itself, but by plugging
something 'on top' of it. I like to think of the Bash as glue between
different applications. So if we have a look now at the setup for the
Conversations publication, we may see that Bash makes it really easy to
develop your own configurations and setups. I actually thought about
preferring the word 'setup' to 'writing software' ...
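
Footnotes [3] to [5] name the tools; glued together, the workaround could look roughly like the following. Layer ids and file names are hypothetical, and the sed filter is deliberately crude (it assumes each layer's <g> tag sits on a single line):

    #!/bin/sh
    # one page per Inkscape layer, using sed, inkscape and pdftk as glue
    for ID in layer1 layer2 layer3; do
        # hide every layer, then switch the current one back on
        sed -e 's/inkscape:groupmode="layer"/& style="display:none"/' \
            -e "/id=\"$ID\"/ s/display:none/display:inline/" \
            drawing.svg > "$ID.svg"
        # render the single visible layer to .pdf
        # (Inkscape 0.x commandline; newer versions use --export-filename)
        inkscape --export-pdf="$ID.pdf" "$ID.svg"
    done
    # combine the single pages into one multipage .pdf
    pdftk layer1.pdf layer2.pdf layer3.pdf cat output multipage.pdf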

Are you saying you prefer setup 'over' configuration?

Setup or configuration of software 'over' actually writing
software. Because for me it's often more about connecting different
applications. For example, here we have a browser-based text editor, from
which the content is automatically pulled and transformed via
text-transform tools and then rendered as a .pdf. What I find
interesting is that the scripts in between may actually be not very
stable, but they connect two stable parts. One is the Etherpad, where the
export function is taken 'as is', and the other is the final state of a
.pdf. In between, I try to have this flexible thing that just
needs to work at this moment, in my special case. I mean, certain scripts
may reach quite an amount of stability, but not necessarily. So it's very
good to have this fixed state at the end.

You mean the .pdf?

I mean the .pdf, because ... These scripts are
quite personal software, and so I don't really think about other users
besides me. For me it's a whole different subject to go to the usability
level. That's maybe also a cause for the open state of the scripts. It
would not make much sense -- if I want to have the opportunity for other
people to make use of these things -- to have black boxes. Because for
this, they are much too fragile. They can be taken over, but there is no
promise of ... convenience? [6]

And it's also important for myself, because the setups are really
tailored to a specific use case and are therefore more or less temporary. So I
need to be able to read and adapt them myself.

I know that afterwards you usually provide a description
of how the collage was made. You publish the scripts, and sketches and
intermediary outcomes. So it seems that usability lies more in how you give
access to the process than in the outcome. Or would you say that
software is the outcome?

Actually, for me the process is the more interesting part of
the work. A lot of the projects are maybe more a proof of concept
than finished pieces of software. I often reuse parts of these setups or
software pieces, so it's more a collection of 'how to do something' than
really a finished thing that's now suitable to produce this or that.

I'm just wondering, looking at your designs, if you would
like that layering, this instability, to be somehow legible in the
.pdf or the printed object?

I don't think that this instability is really legible,
because in the process there's a certain point where definitive decisions
are taken. It's also part of the concept. You make decisions, and those make
the final state of the object what it is. And if you want to get back to
the more flexible part, then you would really have to go back. So I don't
actually think that it is legible in the final output, at first sight,
that it is based on a very fluid working process. And for me that's quite
ok. It's also important for me -- because I tend not to do so -- to take a
decision at a certain point. But that's not necessarily the ultimate
decision, and therefore it's also important to keep the option open to
redefine ... 'the thing'.

What you're saying is that you can be decisive in your
design decisions because the outcome could also have been another. You could
always regenerate the .pdf with other decisions.

Yes. For example, I would regenerate the .pdf with
the same decisions; another person maybe would take different decisions.
But that's one step before the final object. For example, if we do not
talk about the .pdf, but we actually talk about the book, then
it's very clear that there are decisions that need to be taken or that
have been taken. And actually I like the feeling of convenience when
things get finished. They are done. Not configurable forever.

(laughs) That's convenient, if things get done!

For this specific book, you have made a few decisions, for example your
selection of fonts is particular.

Xavier, can you say something about the typography of Conversations?

Huuumn yep, for the typographic decisions ... in the
beginning we searched for fancy fonts, but in a way came back to using very
classic fonts, or rather one classic font. So the Junicode for the text and
the OCR-A for anything else. Because we decided to
focus on testing different ways of layout and to use the fonts as a way to
keep a certain continuity between the parts. We thought this could be more
interesting than to show that we can find a lot of beautiful, fancy
fonts.

So in the beginning, we thought about having a different
font for every speaker, but sooner or later we realised that it would be
good to have something that keeps the whole thing together. Right now,
these are the two fonts: the Junicode, which is a font for medievalists,
and the OCR-A, which is an optical character recognition font from the
early age of computer technology. So the hypothesis was that this
combination -- a very classical typeface inspired by the 16th century and
a typeface optimized for machine reading -- maybe will produce an
interesting clash of two different approaches, while at the same time
providing a continuous element throughout the book. But that still has to
be proven in the final layout.

I find it interesting that both fonts in their own way are
somehow conversational. They are both used in situations where one system
needs to talk to another.

Yeah, definitely in a way. They are both optimised for a
special usage, which, by the way, isn't our usage. One is for the
display of medieval texts, where you have to have a lot of different signs
and ligatures and ... that's the Junicode. The other one, the OCR-A, is
optimized to be legible by machines. So those are two different directions
of conversation. And they're both Free and Open Source fonts ...

And for the layout? How are the divider pages going to be
constructed?

For the divider pages, it's an application 'Built with
Processing', done by Benjamin. In a
way, it's a different approach, because it's a piece of software with an
extensive Graphical User Interface, with a lot of options. So it's different
from the very modular, connective approach. There we decided to have this
software, which is directly controlled by the controller, the person who
uses it. And again, there is this moment of definitive decision: Ok,
this is exactly how I want the title pages to look. And then they are
put in a fixed state. At the same time, the software will be part of the
repository, to be usable as a tool. So it's a very ... not a 'very
classic' ... approach. To write 'your' software for
'your' very specific use case. In a more monolithic way
...

Just to add this: in this custom markdown dialect, I decided at some point
to include a command, INCLUDEPAGES, where you can
provide a .pdf file via a URL to be included in the document. So
the .pdf may be stored anywhere, as long as it is accessible over
the internet. I found this an interesting opportunity for collaboration.
Because if somebody does not want to stick to the grid given by the LaTeX
configuration, or to this kind of working in general, this person could
create a .pdf, store it online, reference it, and the file will be
included. This can be a very disconnected way of contributing to the final
book. And that's also a thing we're now trying to test ourselves. Because
in the beginning we developed a lot of different little scripts, for
example the hotglue2svg converter. And right now we're trying to
extend this. For example, to create one interview in Scribus and include
the .pdf made with Scribus. To also test different approaches
ourselves.
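
A guess at how such a command might be wired up, using the pdfpages LaTeX package for the actual inclusion; the command name is from the conversation, everything else is an assumption:

    # handle 'INCLUDEPAGES <url>' found in the pad source
    INCLUDEPAGES () {
        URL="$1"
        FILE="included/$(basename "$URL")"
        mkdir -p included
        # fetch the remote .pdf; on failure emit nothing, so rendering goes on
        if curl -sf "$URL" -o "$FILE"; then
            # \includepdf is provided by the pdfpages package;
            # pages=- pulls in all pages of the referenced document
            printf '\\includepdf[pages=-]{%s}\n' "$FILE"
        fi
    }

pdfpages' addtotoc option would be one plausible way to get such an included document referenced in the table of contents, as described below.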

This book will be both a collage and have an overall,
predefined structure provided by the layout engine?

I'm trying to make pragmatic use of the functionalities of
LaTeX, which is used for the final compiling of the .pdf. So, for
example, ready-made .pdf files included in the final
document are also referenced in the table of contents.

Can you explain that again?

Separate .pdfs that are included in the final
document will be referenced in the table of contents. We can still make
use of the automatic generation of page numbers in the table of contents,
so there it goes together. There are certain borders: for example, since
the .pdfs are more like finished documents, indexing will
probably not work. Because even if you can extract references from the
.pdf, I haven't found a way so far to find out the page
number in a reliable way. There you also realise that you can do much
more with the plain text sources than you can do with a finished document.
But I think that's ok. In this case you wouldn't have a keyword
reference to the .pdf, while it's still in the table of contents
...

What if someone would want to use one of these interviews
for something else? How could this book become a source for another
publication?

That's also an advantage of the quite simple source format
on the Etherpad. It can easily be converted to e.g. simple markdown, just
by a little script. I found this quite important -- because at this point
we're putting quite an amount of work into the preparation of the texts --
not to have it in a format that is not parseable. I really wanted to keep
the documents transformable in an easy way. So now you could just have a
~fiveliner that will pull the text from the Etherpad and convert it to
simple markdown or to HTML.
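
Such a 'fiveliner' might look like this, reusing the hypothetical speaker marker from earlier; host and pad name are again placeholders:

    #!/bin/sh
    # pull the pad and turn speaker markers into simple markdown
    curl -s "https://pad.example.org/p/conversations/export/txt" \
        | sed 's/^SPEAKER: \(.*\)$/**\1**/' \
        > conversation.md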

Wonderful.

If you have a more or less clean source format, then in
most cases it's easy to convert it to different formats. For example, the Evan
Roth interview you provided as a ConTeXt file: with some text
manipulation, it was easy to do the transformation to our Etherpad markup.
It would be harder if the content were stored as an Open Office
document, but still feasible. .pdf is in a way the worst case,
because it's much harder to extract usable content again, depending on the
creator. So I think it's important to keep the content in a readable and
understandable source format.

Xavier, what is going to happen next?

Right now, I'm the guy who tests with Scribus and Inkscape. But I
don't know if that's the answer to your question.

I was just curious because you still have a month to work on this,
so I was wondering ... are there other things you are testing or
trying?

Yeah, I think I want to finish the hotglue2svg.sh,
I mean it's my first Bash program, I want to raise my baby.
(laughs) But right now I'm trying to find different ways of
doing the layout. The first one is the one with the big squares, the big
unicode characters and all the arrows. So it's very complicated, but it's
the attempt to find another way to express a conversation in text.

Can you say more about that?

Because in the beginning, my first try was to keep the
'life' of a conversation in the text with some things, like indentation, or
with graphic things, like the choice of the unicode characters. To see if
this can be a way to express a conversation. Because it's hard to do it with
programming stuff, so we're using GUI-based software.

It comes down a bit to the question of what you do
differently if you work with direct visual feedback. So you don't try
to reduce the content to get it through a logical structure, because
that's in a way how the markdown to LaTeX transformation does it. You
set certain rules, which may in special cases be soft rules, but you really
try to establish a logical structure, and you have a set of rules and apply
them. For me, it's also an interesting question. If you think of grid-based
graphic design, where you try to introduce a set of rules in the
beginning and then keep to them for the rest of the project, that's in a way
a very obvious case for computation, where you just apply a set of rules.
You are confronted a lot with this application of rules in daily graphic
design. And this is also a way of working you learn during your studies:
stick to certain logical or maybe visual grids. And so now the question
is: what's the difference if you do a really visual layout? Do you deal
differently with the content, does it make sense, or if you're just always
coming back to a certain grid, then you might as well do it by
computation. So that's something that we wanted to find out. What
advantage do you really gain from having a canvas-based approach
throughout the layout process?

In a way the interviews are very similar, because it's
always people speaking, but at the same time each of the conversations is
slightly different. So how is the difference between them made
legible: through the same set of rules or by making specific rules for
each of them?

If you do the layout by hand, you can take decisions that
would be much harder to translate into code. For example, how to emphasize
a certain part of the text or a speaker. You're much closer to the
interpretation of the content. You're not designing the ruleset, but you
are really working on the visual design of the content ... The point why
it's interesting to me is that, working as a designer, you quite
often get reduced to this visual design of the content; at the same time it
may make sense in a lot of cases. So it's an evaluation of these different
approaches: do you design the ruleset or do you design the final outcome?
And I think both have advantages and disadvantages.

[1] http://libregraphicsmeeting.org/2014/program/

[2] let's say hard

[3] using sed, stream editor for filtering and transforming text

[4] using Inkscape on the commandline

[5] using pdftk

[6] ... distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.