Proof of Work

Whenever a computer in the Bitcoin network wants to record transactions,
it must perform a simple but unguessable and time-consuming calculation,
then send the results to other machines on the network for
verification. It is therefore computationally (and monetarily)
expensive to record a transaction if you are not actually performing
one. This discourages abuse of the Bitcoin network.

This calculation and its output are the “proof of work”: they prove that
the computer’s user has been willing to do some work and expend some
resources in order to demonstrate their good faith.

In Bitcoin, an algorithm called SHA-256 is applied to the transaction’s
data. Give SHA-256 any data and it will output a string of characters
that cannot be used to recreate the original data but that will always
be the same for the same data. The output is a kind of identity for the
data.

Bitcoin uses SHA-256 to repeatedly make such an identity string for the
transaction data combined with a number, called the “nonce”, that it
increases by one each try. Eventually (there’s no way of predicting
precisely when, but it should take about ten minutes) the output string
will start with several zeroes. When it does, Bitcoin uses that as the
proof of work for the transaction.
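The loop can be sketched in Python. This is a simplification: real Bitcoin hashes a block header against a numeric target rather than counting the leading zeroes of a transaction digest, and the ten-minute average is maintained by adjusting that target.

```python
import hashlib

def proof_of_work(data: bytes, difficulty: int = 4) -> tuple[int, str]:
    """Increment the nonce until the SHA-256 hex digest of the data plus
    the nonce starts with `difficulty` zeroes."""
    nonce = 0
    while True:
        digest = hashlib.sha256(data + str(nonce).encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest
        nonce += 1

nonce, digest = proof_of_work(b"transaction data", difficulty=4)
```

Each extra required zero makes the search roughly sixteen times longer on average, which is how the difficulty of the work can be tuned.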

Not every image can be mistaken for a face by a face detection
algorithm; in particular, finding a face in a series of randomly
generated pixel images takes some time.

The amount of work required to do so will be greater than nothing, and
cannot be guessed precisely. We can therefore use machine pareidolia
with random images as proof of work.

Facecoin

Facecoin replaces Bitcoin’s search for leading zeros with a machine
pareidolia search for faces.

SHA-256 output is used as an 8×8 256-level greyscale pixel map, and a
face recognition algorithm is used to try to find one or more faces in
it. If no faces are found, the nonce is increased and another attempt to
find a face is made. This can take from one to several hundred tries.

When a face is found, the nonce and the face bounding rectangle are
recorded so the proof of work can be validated.
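The digest-to-image step can be sketched in Python. The exact mapping from digest to pixels is my assumption: here the 64 hex characters of the digest are scaled from 0–15 up to 0–255. The face detector itself (CCV or OpenCV in the actual project) is left out.

```python
import hashlib

def digest_to_pixmap(data: bytes, nonce: int) -> list[list[int]]:
    """Map the 64 hex characters of a SHA-256 digest onto an 8x8 grid of
    256-level greyscale pixels (each hex nibble scaled from 0-15 to 0-255)."""
    hexdigest = hashlib.sha256(data + str(nonce).encode()).hexdigest()
    values = [int(char, 16) * 17 for char in hexdigest]  # 64 pixels, 0-255
    return [values[row * 8:row * 8 + 8] for row in range(8)]

pixmap = digest_to_pixmap(b"transaction data", nonce=0)
# A Facecoin miner would upscale and blur this pixmap, run a face
# detector over it, and increment the nonce until a face is found.
```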

Why?

Bitcoin is a very interesting development in cyberculture. It’s a
repository for the hopes and fears of various ideologies, and a frontier
or dark space for the imagination and social or economic activity in a
90s Internet way. Its protocol is a communication model of existence,
identity, community and proof, with a CCRU-ish market worship at its
base. Because of all of this I think it’s worthy of and desperately
needs artistic investigation.

Artworks are proofs of aesthetic work, used as unique value identities
both in the market (art is used as an investment, signifier of status,
and symbolic resolution of lacks in free market ideology by oil
oligarchs and trust fund managers) and by organized crime (stolen art is
used as a medium of exchange by criminal gangs).

If Facecoin were widely adopted, these two value identity systems would
be trivially but critically mapped onto each other by millions of
machines cranking out imaginary portraits across the network as part of
a financial system, and vice versa.

Cryptocurrencies such as Bitcoin use a “proof of work” system to prevent abuse.

Artworks are proofs of aesthetic work.

Facecoin uses machine pareidolia as its proof of work. This is implemented by applying CCV’s JavaScript face detection algorithm to SHA-256 digests represented as greyscale pixel maps. An industrial-strength version would use OpenCV. Due to the limitations of face detection as implemented by these libraries, the digest pixel map is upscaled and blurred to produce images of the size and kind that they can find faces in.

The difficulty can be varied by altering the size and blur of the pixmap, or by only allowing particular detected face bounding rectangles to be used a set number of times.

Reviewing almost 70 artworks quickly and in depth is a challenge. With MON3Y AS AN 3RRROR | MON3Y.US, I chose the approach of describing each artwork’s notable features and then pulling out themes and commonalities at the end. Halfway through I realised that by changing each description into a standard format, I could write code to parse the descriptions and analyse them to help me find those themes and commonalities. So I did. The code is in R and it’s available here:
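The idea can be sketched in Python (the post’s actual code is in R, and the description format shown here is hypothetical): parse each standard-format description into words, then count frequencies to surface themes.

```python
from collections import Counter
import re

# Hypothetical standard-format descriptions, one per artwork.
descriptions = [
    "Subjects: money, flag. Forms: animated GIF, collage.",
    "Subjects: money, landscape. Forms: graphics, collage.",
]

# Tokenise each description and tally word frequencies across the show.
words = Counter()
for description in descriptions:
    words.update(re.findall(r"[a-z]+", description.lower()))

# The most common words point at the show's recurring themes.
themes = words.most_common(5)
```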

Flags and words join the subjects; hundred-unit notes are the most popular; looped animated GIFs, collages and graphics join the forms; and figure/ground relations are there with mentions of “background”.

No surprises there, except possibly “love”. The code will confuse “Euro” and “European”, so that’s why the US is mentioned but not Europe. Facebook and Google add corporations to the subjects. Colours are added to the formal properties: yellow, blue, white, black. Landscape joins the subjects. And works play, are direct, are classic, have style, an aesthetic, a price, are new. And I weasel about them with “possibly”.

Next let’s look at the associations between words. First, some obvious ones.

The topics are clearer with more words; these are just the first few for each one. I think this is the closest to what I want in terms of discovering what I have written about, although as I say the choice is arbitrary (or at least aesthetic rather than statistical).

Using more code from the Vasari/bloggers posts, we can plot the associations between words:
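A rough equivalent of those word associations, sketched in Python rather than the post’s R (the descriptions here are again hypothetical): count how often each pair of words appears in the same description, so that frequently co-occurring pairs stand out.

```python
from collections import Counter
from itertools import combinations
import re

# Hypothetical standard-format descriptions, one per artwork.
descriptions = [
    "Subjects: money, flag. Forms: animated GIF, collage.",
    "Subjects: money, landscape. Forms: graphics, collage.",
]

# Count how often each (alphabetically ordered) pair of words
# appears together in the same description.
cooccur = Counter()
for description in descriptions:
    tokens = sorted(set(re.findall(r"[a-z]+", description.lower())))
    for pair in combinations(tokens, 2):
        cooccur[pair] += 1

# The strongest associations are the pairs with the highest counts.
strongest = cooccur.most_common(5)
```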

Changing the parameters and outputting to PDF creates a more detailed and readable graph. It’s fun, and in usefulness it sits somewhere in between topic modelling and frequency counts.

Finally let’s see how I feel about the art with sentiment analysis:

neutral positive
     66        3
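A lexicon-based tally like the one above can be sketched in Python (the lexicon and reviews here are hypothetical; the post used an R sentiment-analysis library over the full review texts):

```python
from collections import Counter

# A deliberately tiny, hypothetical sentiment lexicon.
positive_words = {"love", "playful", "vivid"}
negative_words = {"dull", "crass"}

def classify(text: str) -> str:
    """Label a review by counting positive and negative lexicon hits."""
    words = set(text.lower().split())
    score = len(words & positive_words) - len(words & negative_words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

reviews = ["I love this piece", "A grid of banknotes", "A dull joke"]
tally = Counter(classify(review) for review in reviews)
```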

I do try to find the positive in artworks, but there was one that gave me an immediate and visceral negative reaction in the show (you can spot it if you look hard at the reviews). I’m surprised that so few count as positive. I “love” one of the pieces. Is it in the positive list?

[1] "Martin Kohout" "Marc Stumpel" "Ciro Múseres"

It’s not. But one of the ones listed does mention “love”, so I don’t know what’s happened there. Sentiment analysis has improved greatly over the last few years, but apparently not in the library I was using.

If I was going to use these techniques to help review art I’d write longer “bag of word” descriptions for each artwork, with fragments of text and individual words acting almost as tags or streams of consciousness, and I would then use topic modeling and clustering to help pull out themes. I’d prefer to use an algorithm to choose the number of topics, as I feel this is more intellectually defensible, but I like the results enough to use it without. I’m disappointed by the performance of the sentiment analysis library I used, next time I’ll try a different one.

Will there be a next time? Yes, the next time I’m reviewing a group show with more than a few artists. Producing this report has been labour intensive, but I have a library of code now and a better understanding of the issues. And I can automate report construction and revision using Knitr, which would allow me to mix Markdown text and R code without having to copy and reformat output.