In two previous posts, I went on about lenses in Clojure.
Pinholes comprised
a small library of higher order functions to formalize and simplify
the viewing and manipulation of complex nested structures.
Tinholes did essentially the same
thing, but with macros instead. In both cases, there's recursion going on,
as we burrow through layers of nesting, but macros had the advantage
of doing it all before compilation, giving core.typed a chance to
check our work.

The macro post was inexplicably popular, catapulting me to levels of
fame I never expected to achieve without consciously emulating an
early De Niro character. It even made the low 20's
on Hacker News,
where this comment
was made:

Using macros to pre-compile the lenses is clever, but

Punch-drunk on Google Analytics, I was inclined to stop reading at "clever,"
but, reminding myself that self-doubt is the mother of all virtue, I pressed on:

feels like a hack around typed-clojure instead of being
aligned to it. All of the information needed to determine a lens'
action is available at compile time even prior to expansion. Can a
van Laarhoven representation be made in typed-clojure that
recovers this information?

Interesting. (The bit further down, where I'm advised to show a "little
humility," was less interesting. Mea culpa, already, mea maxima culpa.)
Can the van Laarhoven formulation, in fact, be made to work in
typed Clojure?

TL;DR

Yes we can!

Though with less than perfect grace.

Typed Clojure is amazing and important.

But it's a work in progress.

Even when it's finished, coding styles that work well in Haskell
will still be awkward in Clojure.

And vice versa.

Either way, strong typing is both crucial and achievable.

While not originally intended as such, the latter points may constitute a feeble response
to

Continuing the tradition of awesome nomenclature, I am honored to present vanholes.

The van Laarhoven representation of lenses

If you don't know much about van Laarhoven lenses but do know a little Haskell,
I strongly recommend
this tutorial,
with which I would never try to compete.
There's also a
great talk by Simon Peyton Jones,
but, as you'll discover if you try the link, its permissions were recently
locked down, roughly coincident with my post going up,
making last Monday a banner day for the forces of ignorance regarding lenses.

The basic, mind-blowing idea of the van Laarhoven representation is that,
rather than specifying separate getter and setter functions for some piece
of data within a structure, you can write just one function with a seemingly
very weird type. In Haskell,

type Lens s a = Functor f => (a -> f a) -> s -> f s

i.e. a function that takes two arguments

first, a function from some type a to a functor over that type

second, a structure

and returns the same functor, but over the structure type.

Untyped van Laarhoven lenses

To some, this section will seem totally backwards, and probably sacrilegious, since the
representation is usually derived by reasoning about types, but exploring the
mechanics in conventional Clojure can be interesting.

The basic "trick" is that, since lenses will be written to take arbitrary functors,
we will be able to pick specific ones that twist the lens function into
the right sort of accessor function.

A really simplistic functor can be built in Clojure using protocols. Since
protocol methods are dispatched based on their first argument, we'll need
to implement a backwards p-fmap method
that takes the container-like thing
first, and then wrap the method call in an fmap function to reverse the
arguments.
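A minimal sketch of that setup; the names IFunctor, p-fmap, fmap, Const, and l-1 follow the prose here, but the definitions themselves are reconstructions rather than the post's verbatim code:

```clojure
;; Protocol methods dispatch on their first argument, so p-fmap
;; takes the container-like thing first; fmap restores the usual order.
(defprotocol IFunctor
  (p-fmap [this f]))

(defn fmap [f x] (p-fmap x f))

;; Const ignores the mapping function entirely: mapping over a
;; Const just returns it unchanged.
(defrecord Const [runConst]
  IFunctor
  (p-fmap [this f] this))

;; A lens focusing the first element of a two-element vector.
(defn l-1 [x->Fx pair]
  (fmap #(vector % (second pair)) (x->Fx (first pair))))

;; Viewing: inject the focus into Const, then pull it back out.
(defn lview [l s]
  (:runConst (l ->Const s)))
```

With these definitions, (lview l-1 [3 4]) evaluates to 3: the #(vector % (second pair)) is thrown away, because Const's p-fmap ignores it.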

If we were only going to use l-1 with the Const functor, you'd wonder
why we bothered typing out #(vector % (second pair)),
as it was destined to be ignored. Fortunately, there's another functor we can
throw at it:
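That functor is Identity, which actually applies the mapped function. A reconstruction consistent with the evaluation trace that follows (again, not the post's verbatim code):

```clojure
;; Identity applies the function to its contents.
(defrecord Identity [runIdentity]
  IFunctor
  (p-fmap [this f] (->Identity (f (:runIdentity this)))))

;; Setting: replace the focus wholesale, ignoring the old value.
(defn lset [l v s]
  (:runIdentity (l (constantly (->Identity v)) s)))
```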

(We can't use set, because of the existing clojure.core/set.)
Here, constantly has the effect of ignoring the original occupant, while
we no longer ignore the mapping function.
(lset l-1 5 [3 4]) evaluates as:

(:runIdentity (l-1 (constantly (->Identity 5)) [3 4]))

(:runIdentity (fmap #(vector % 4) ((constantly (->Identity 5)) 3)))

(:runIdentity (p-fmap (->Identity 5) #(vector % 4)))

(:runIdentity (->Identity [5 4]))

[5 4]

In fact, the set operation is usually defined in terms of something called over,
which lets you apply a function to the focal point:
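A sketch of that definition, with the name lover chosen here to dodge clashes with clojure.core (the post's own name may differ):

```clojure
;; over applies a function at the focal point.
(defn lover [l f s]
  (:runIdentity (l (comp ->Identity f) s)))

;; set is then just the special case of an argument-ignoring function.
(defn lset [l v s]
  (lover l (constantly v) s))
```

So (lover l-1 inc [3 4]) gives [4 4], while lset behaves as before.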

Composition of lenses

One nice aspect of this representation is that, the lenses being ordinary
functions, they can be composed. Let's say we have another lens, for
accessing the :foo element of a hashmap. It's the same pattern as before.
The first argument of fmap is a function that implants an element in the
structure, and the second argument is the extraction of that element, wrapped
in the input function:

(defn l:foo [x->Fx hm] (fmap #(assoc hm :foo %) (x->Fx (:foo hm))))

It's tempting to write a macro for building these things out of traditional
getters and setters
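For instance, a hypothetical deflens macro along these lines (a sketch, not the post's code):

```clojure
;; Build a van Laarhoven lens from a traditional getter (a function
;; of the structure) and setter (a function of the structure and the
;; new value).
(defmacro deflens [name getter setter]
  `(defn ~name [x->Fx# s#]
     (fmap (fn [v#] (~setter s# v#))
           (x->Fx# (~getter s#)))))

;; e.g. (deflens l:foo :foo #(assoc %1 :foo %2))
```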

The alert smartass will now be asking why, having made such a big deal about
representing the bidirectional lens in a single function, rather than as
separate functions for each direction, we're now writing convenience tools
for building the single function out of the separate functions. The riposte
to this question is that lenses, being functions, can be composed.

That is, they could be composed if they were unary rather than binary
functions, which they would be in Haskell, since everything is curried
there: (a -> f a) -> s -> f s is equivalent to
(a -> f a) -> (s -> f s). To get the same effect in Clojure, we need
to curry and uncurry explicitly, e.g.
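Explicitly, a composition helper along these lines (a sketch; the post's lcomp may differ in detail):

```clojure
;; Partially applying the inner lens to the mapping function yields
;; the unary (s -> f s) stage that the outer lens expects.
(defn lcomp [outer inner]
  (fn [x->Fx s]
    (outer (partial inner x->Fx) s)))
```

Given the earlier definitions, (lset (lcomp l:foo l-1) 9 {:foo [3 4]}) yields {:foo [9 4]}.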

so we can just do (lset (lcomp l:foo l-1) ...). One might be (or maybe was)
tempted to write more general utilities along these lines, but that would
needlessly complicate the next section of this post.

Typed van Laarhoven lenses

The original mission was to explore lenses in typed Clojure. The mission would be easier
were core.typed fully implemented, documented and tested, but it's sort of not,
especially in the mad interzone of protocols and higher kinded types.
(Important: This isn't to disparage the project in any way. It's a colossal achievement for
about 1.03 people, who
are the first to admit that it's not done yet.)

There are at least a few examples of people not quite getting functors to work. The protocol declaration is
lifted from a Google Groups response by Ambrose to one of them:
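The declaration presumably resembled the following; core.typed's annotation syntax has shifted across versions, so treat this as a sketch rather than the response's verbatim code:

```clojure
;; Annotate the protocol with a covariant type parameter, then
;; declare it as usual.
(t/ann-protocol [[a :variance :covariant]]
  IFunctor
  p-fmap (t/All [b] [(IFunctor a) [a -> b] -> (IFunctor b)]))

(defprotocol IFunctor
  (p-fmap [this f]))
```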

Foremost is that the specific implementation
of fmap will be determined at runtime by dynamic dispatch on the subtype,
rather than chosen by matching type at compile time. We cannot possibly do the latter,
because Clojure typing is completely separate from compilation. This is why we
get/have to specify variance; subtyping and therefore variance are innate to JVM
languages but not to Haskell.

Dynamic dispatch is also behind the reversal of the arguments to fmap, as noted earlier.

And by arguments we mean actual multiple arguments to a single function, rather than multiple
functions, each of one argument, returning another function of one argument: by conscious design
choice, Clojure does not automatically curry, so we have one less ->.

We don't take malicious pleasure in using the symbol f to mean both
function and functor.

Ambrose warned in the aforementioned response that it "gets messier if you want to abstract over the Functor,"
but, that being the whole point of this exercise, we soldier on. Const is not that hard
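A sketch of a typed Const; field types and names here are guesses:

```clojure
(t/ann-record [[a :variance :covariant]] Const [runConst :- t/Any])
(defrecord Const [runConst])

;; The phantom parameter changes from a to b while the value itself
;; is returned untouched -- exactly the sort of spot where the
;; checker may balk.
(t/ann const-fmap (t/All [a b] [(Const a) [a -> b] -> (Const b)]))
(defn const-fmap [this f] this)

(extend Const IFunctor {:p-fmap const-fmap})
```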

We need to define and annotate identity-fmap before it's used, but after
->Identity has been defined, which requires extending Identity explicitly, rather than inline:

(t/ann-record [[a :variance :covariant]] Identity [runIdentity :- a])
(defrecord Identity [runIdentity])
;; Now ->Identity exists
(t/ann identity-fmap
       (t/All [a b] (t/IFn [(Identity a) [a -> b] -> (Identity b)])))
(defn identity-fmap [this f]
  (->Identity (f (:runIdentity this))))
;; Now we have an fmap with a type.
(extend Identity IFunctor {:p-fmap identity-fmap})

The annotation for identity-fmap is pretty much the same as for p-fmap in the IFunctor
declaration, except specific to Identity.

This is frightening but, I believe, spurious. The :mandatory vs :optional shouldn't be important; that
just means that we weren't required to have implemented p-fmap in this extend but could have done so
later. More worryingly,

The type-checker is expecting four Anys, rather than a constrained arrangement of two types,
as if we had declared p-fmap to be
(t/IFn [(IFunctor a) [b -> c] -> (IFunctor d)]) rather than
(t/IFn [(IFunctor a) [a -> b] -> (IFunctor b)]).

Even so, the more specific case ought, I think, to be palatable to the more general one.

So, we take the batteries out of the smoke detector and go back to sleep.

The first problem is that protocols cannot, apparently, be used as type functions, and we get
an error to this effect. It's a little surprising, since we were able to use (IFunctor a)
within the definition of the IFunctor protocol itself. Trawling the mailing list,
we somehow find a posting containing a link to a
gist that defines a type function explicitly
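I can only guess at the gist's contents, but the flavor is to spell the lens type out as an explicit t/TFn, specializing the functor where abstracting over it fails; for example:

```clojure
;; A lens type specialized to the Identity functor, written as an
;; explicit type function (a reconstruction, not the gist itself).
(t/defalias IdLens
  (t/TFn [[s :variance :invariant]
          [a :variance :invariant]]
    [[a -> (Identity a)] s -> (Identity s)]))
```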

A few speeches in this vein - and evil counsels carried the day.
They undid the bag, the Winds all rushed out, and in an instant
the tempest was upon them, carrying them headlong out to sea.
They had good reason for their tears: Ithaca was vanishing
astern. As for myself, when I awoke to this, my spirit failed me
and I had half a mind to jump overboard and drown myself in
the sea rather than stay alive and quietly accept such a calamity.
However, I steeled myself to bear it, and covering my head with
my cloak I lay where I was in the ship. So the whole fleet was
driven back again to the Aeolian Isle by that accursed storm, and
in it my repentant crews.
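With that in hand, an annotated setter along these lines (a reconstruction of the sort of code the post reports, not its verbatim contents)

```clojure
;; lset, annotated against the Identity-specialized lens type.
(t/ann lset
  (t/All [s a]
    [[[a -> (Identity a)] s -> (Identity s)] a s -> s]))
(defn lset [l v s]
  (:runIdentity (l (constantly (->Identity v)) s)))
```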

typechecks! More importantly, infelicitous modifications of this do not, which suggests that,
notwithstanding compromises along the way, we can provide assurances of type safety to future
van Laarhoveners.

But was it worth the trouble?

Yes, for me. I learned a few things, and I'll probably learn more when you correct
my errors.

From a practical standpoint, I couldn't possibly recommend that anyone take this route in
real, production code today, and I'm not sure I would recommend it even after the
core.typed kinks have been worked out. The tinhole approach is, for
Clojure, more intuitive and concise, while offering no less type safety.

Quite simply, Monad and company are not natural fits for any language that
doesn't have sophisticated static typing fully integrated with its compiler. By
"fully integrated," I mean that the compiler produces different code for different types
signatures and arguments.

On the other hand, fancy macros are a natural fit for languages that are
meaningfully homoiconic. By "meaningfully homoiconic," I mean that it's
practical for normal people to write code-generating code, that such code is concise and
that it doesn't appear to be in a different language from the code it generates.

Homoiconicity does not substitute for type-checking, but, if it is innate to the language,
the type checker may not need to be, thus enabling the sinful pleasures of
dynamic typing while still allowing a stricter regimen to be enforced.

which is almost literally transcribed from
a blog post
explaining transducers with Haskell.
(Note, however, the use of ReducingFn rather than Reducer, as in the post.
The reducer is the collection-like thing, not the function with which it is reduced.
The beauty of transducers is that they can be defined independently of the reducer.)