Thursday, December 5, 2013

Really, after Unit and Id, the tuple is one of the simplest data structures around. In its base form, the tuple is of the type

data Tuple a b = (a, b)

Tuple has two fundamental functions:

fst :: (a, b) -> a

snd :: (a, b) -> b

and that's all there is to it, and that's all there is to the tuple.

(really, the above declaration should be:

data (a, b) = (a, b)

but the typeful LHS ('left hand side') declaration having the same syntax (and I say 'the same syntax,' not 'the same semantics') as the instance definition on the RHS ('right hand side') may make some people's heads hurt. That's also why I gave the list declaration as:

data list a = nil | (a, list a)

and not the standard syntax of:

data [a] = [] | (a:[a])

There is the consideration that programs are written to be read by people as well as to be executed by a machine).
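Back to fst and snd, then. A quick check with a minimal pair class (a sketch of the idea, not the article's actual Tuple library type):

```java
// a minimal pair -- a sketch, not the article's library Tuple type
public class PairDemo {
    static class Pair<A, B> {
        private final A a;
        private final B b;
        Pair(A a, B b) { this.a = a; this.b = b; }
        A fst() { return a; }   // fst :: (a, b) -> a
        B snd() { return b; }   // snd :: (a, b) -> b
    }

    public static void main(String[] args) {
        Pair<String, Integer> p = new Pair<>("key", 42);
        System.out.println(p.fst());  // key
        System.out.println(p.snd());  // 42
    }
}
```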

So: simple.

Or.

Is.

It?

(I believe this has become my trademark.)

Of course, if you have the Cartesian cross-product of two typed values, then why not three? Why not one? Why not ... 'n'?

So, of course, the tuple is any number of typed values associated through their cross products, but I like to keep things simple here (and in my code), so I'll just be working with the above-given definition of tuple. Where I need a 'triple' (heh: 'triple') ('triple' was the term used for monads up until recently), I'll define one, or I'll simply compose tuples, for, after all, any n-relation can be reduced to a set of 2-relations, e.g.:

data Triple a b c = (a, (b, c))

data Quad a b c d = (a, (b, (c, d)))

data Quinz a b c d e = ... etc

So from tuple you have triple, or quad, or quinz, or whatever.
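Composed pairs give you the bigger shapes for free. A sketch, using a Java 16+ record as a stand-in pair (these names are mine, for illustration, not library types):

```java
// triples as composed pairs: Triple a b c = (a, (b, c))
public class TripleDemo {
    record Pair<A, B>(A fst, B snd) { }  // a record plays the pair

    public static void main(String[] args) {
        Pair<Integer, Pair<Character, Boolean>> triple =
            new Pair<>(1, new Pair<>('x', true));
        System.out.println(triple.fst());        // 1    -- first of three
        System.out.println(triple.snd().fst());  // x    -- second of three
        System.out.println(triple.snd().snd());  // true -- third of three
    }
}
```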

So: tuple.

What can we do with it?

Well, everything, actually. You see, most programming languages take the tuple for granted, and call it the parameter-set of functions. Haskell does not do this. Haskell be smart. Haskell be curry(ing) the arguments to functions.

Haskell B. Curry. Geddit? No? Look it up.

So, in programming language X, where X is most likely the language of your choice, you invoke a function in the 'standard' way:

foo(bar, baz, ...)

Not so with Haskell ... I mean, not necessarily so (and not even ordinarily so).

Nor even with other Lisp-like languages. When everything's a list, you get a tremendous amount of conceptual freedom and from that freedom comes power.

But ... power? Later.

So, with Haskell, you can force the tupling of arguments to a function:

foo :: (bar, baz) -> quux

But why would you want to do that, requiring all the arguments to a function to be present, when you can curry them:

foo :: bar -> baz -> quux

And then apply the arguments individually as needed, building a partial function (the partial function itself being a (functional) value) until you finally apply to get the solution (in this example, the quux-typed value).
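Here's that in miniature, with java.util.function.Function and a made-up foo on Ints:

```java
import java.util.function.Function;

// foo :: bar -> baz -> quux, curried; everything is an Int in this toy example
public class CurryDemo {
    static Function<Integer, Function<Integer, Integer>> foo =
        bar -> baz -> bar * 10 + baz;

    public static void main(String[] args) {
        // a partial application: itself a (functional) value
        Function<Integer, Integer> partial = foo.apply(4);
        System.out.println(partial.apply(2));  // 42
    }
}
```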

So, currying: the 'anti-tuple.'

So we showed a simple, elegant use contra the tuple, but what are the uses of the tuple?

Well, like I said: plentiful, many-ful, many-fold. Manifold.

(I love writing words.) (Bear with me.)

(which means something totally different than 'bare with me.')

One of the primary uses that come to mind for the tuple, besides freely-relating two disconnected values (which, in and of itself is a big, huge win ... ever see a list of parameters to a function where there are obviously-paired values, except they aren't paired, so the code gets all mucked up treating something that isn't as if it were because it should be?), is to 'pretend' that two values are one and then to fool everybody else with this conceit.

Where does this work?

Have you ever needed to return two values from a function?

But of course you can't, because a function returns 'a,' 'one,' 'single' value. A function doesn't return two values, and you don't want to have to write a class to package those values together (because why? That's something you may have to look into yourself to answer: if two values are related, why aren't you making that relation explicit in the type system? Types make things easier, but only when you use them), so what do you do?

Return the tupled pair.

You now have functions that return two values, well-typed at that, and if they can return two, then they can return three, or four, or ... 'n.'

That's one place where tuples shine: 'Oh, I need the value from the list, but I also need the list-index where that value was.' You tuple the index with the value, and your function now returns the data you need to proceed with your work.
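A sketch of that index-and-value maneuver (the Pair record and firstGreaterThan are mine, for illustration):

```java
import java.util.List;

// tuple the list-index with the found value, so one return carries both
public class FindWithIndex {
    record Pair<A, B>(A fst, B snd) { }

    static Pair<Integer, Integer> firstGreaterThan(List<Integer> xs, int bound) {
        for (int i = 0; i < xs.size(); i++) {
            if (xs.get(i) > bound) {
                return new Pair<>(i, xs.get(i));  // the index and the value, together
            }
        }
        return null;  // nothing matched
    }

    public static void main(String[] args) {
        System.out.println(firstGreaterThan(List.of(3, 7, 12, 9), 10));
        // prints Pair[fst=2, snd=12]
    }
}
```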

But then, once you start using tuples in one place, you find yourself starting to see uses for tuples in many other places you never considered before, because you had just coded around two disparate values.

'Coding around' things is the algorithmic approach; tuple provides a data-centric solution. You ever notice how greatly an algorithm simplifies when the right data types are used?

Tuple.

Examples

Arrows

Of course, the fundamental data type of category theory, the map, can be viewed as a list of tuples (or 'pairs'), where each tuple is composed of the key for the first value and the value for the second.

So that gives us another example: 'Map.Entry<K, V>' is so obviously a 'Tuple' that I'm wondering why the Java folks just didn't name it 'Duck,' since it was quacking like one.

But then ... Arrow.

Arrow captures the entirety of the function f :: a -> b, so if you look at it slightly differently, an arrow, or morphism, or function, is the mapping of all values applicable to the function f from Category A to Category B.

So, one way to look at an arrow f is simply the set of all tuples (a, b) where the 'a's are all possible inputs to f with the 'b's being the corresponding outcomes.

But there is more to arrows than that (and, fear not, this article will not be side-tracked into an arrow-exclusive): the primary arrow operation is 'first' (arrows are fundamentally, or minimally, defined on 'arr' (which is the 'arrow-maker' function) and 'first').

What does first do?

First supposes that the input has been split from some value a into two (tupled) values (a, a). And first takes a unit function and applies it to the tuple's first value, leaving the second value unchanged.

So, if you have some value a, and a function f :: a -> b then

first f

does the following

first f (a, a) -> (b, a)

Or, actually, and more generally, this works for any tupled input value so:

first f (a, d) -> (b, d)

works as well.

Exercise: Given that f :: a -> b and first f :: (a, d) -> (b, d), write an implementation of first.
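(If you want to check your answer afterwards: here is one possible solution, sketched with java.util.function and a throwaway Pair record standing in for the article's Arrow and Tuple types.)

```java
import java.util.function.Function;

// one possible answer: first lifts f :: a -> b to (a, d) -> (b, d)
public class ArrowFirst {
    record Pair<A, B>(A fst, B snd) { }

    static <A, B, D> Function<Pair<A, D>, Pair<B, D>> first(Function<A, B> f) {
        // touch the first slot, pass the second through untouched
        return p -> new Pair<>(f.apply(p.fst()), p.snd());
    }

    public static void main(String[] args) {
        Function<Pair<Integer, String>, Pair<Integer, String>> inc =
            first(x -> x + 1);
        System.out.println(inc.apply(new Pair<>(3, "unchanged")));
        // prints Pair[fst=4, snd=unchanged]
    }
}
```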

Ordering

We mentioned before that Map.Entry<K, V> is just another name for 'Tuple<K, V>', so then, looking at it this way, either List<Tuple<K, V>> or Set<Tuple<K, V>> is another name for Map<K, V> depending on which search strategy we wish to use.

... with a significant difference: as both the K and V in the tuple are used to compute the tupled instance's identity, both those data structures permit 'duplicate' entries: a key may be associated with zero, one or more values.

And, here is where the search strategy becomes significant: List has an explicit positional ordering: first comes the first value, next comes the second value, etc. This positioning may be exploited for the inherent positional index to a tuple ... or I exploited this inherent indexing, at any rate.

Problem statement: so, we have a list of values, rows from a data table, that we wish to display in order.

Simple enough: we make the data table entity a comparable type, sort the collection, and we're done.

Or.

Are.

We?

The problem here is that ...

Well, there's always a 'the problem here is' when a model encounters the real world and when developed code hits production data.

This particular problem is that there are scenarios where an entity is represented by compound data, and so you need two rows to indicate one entity. In these cases, you have the same entity id, so what do we do? We can complicate the sort-criterium for the comparison for the entire entity because of this special case, or we can sort simply and handle sorting these two-row entities on a case-by-case basis.

But then we sort the list twice: once naturally and automatically (and we don't need to think about that sort), and then a second, manual pass for the special cases.

But how do we juxtapose two rows with the same entity id in a list? We can't change the indices on a list pell-mell, can we?

Or.

Can.

We?

Using tuples, we can indeed do just that. If we have the mostly-well-sorted list

[Ent {13, ... Nov 20, asdf},

Ent {27, ... Aug 3, aega},

Ent {27, ... null, hjg},

Ent {34, ... null, asjkl},

Ent {34, ... Apr 26, aehe},

...]

And we wish to have the 'null' date higher up on the list, as per customer requirements. One way to do that is to add this to the search criteria. But what if the search criteria are already based off of three values of the primary key (and this date is not part of the primary key, because it is nullable)? Do we wish to add this new criterium for these specialized cases?

Yes, we can do that.

Or, as we did in the project, we did another sort, manually. You see, the question comes up: how do we recognize that two rows represent one value, and, when recognized, how do we swap the row order for that one case where we wish to?

You can't say, 'oh, this list index is now less one, and the other index is now plus one.'

Not with a straight-up list, anyway; you can't(1) change the indices there. But if we make the indices explicit by tupling them with their listed data:

int index = 0;

for(Ent entity : entities) {

newList.add(OrderedTuple.pair(index++, entity));

}

And then, when we encounter our special case, we verify the order or correct it:

OrderedTuple<Integer, Ent> oldPair =

OrderedTuple.pair(-1, Ent.prototype());

for(OrderedTuple<Integer, Ent> pair: newList) {

Ent newEnt = pair.snd();

Ent oldEnt = oldPair.snd();

if(newEnt.id == oldEnt.id) {

if(newEnt.date == null) {

int pairKey = pair.fst();

pair.changeKey(pairKey - 1);

oldPair.changeKey(pairKey);

}

}

oldPair = pair;

}

// now we sort on the revised indices:

Collections.sort(newList);

// and return that sorted entity list (sans the explicit indices):

List<Ent> revisedList = new ArrayList<Ent>();

for(OrderedTuple<Integer, Ent> pair : newList) {

revisedList.add(pair.snd());

}

return revisedList;

And that's one way to skin that cat.

So, what's OrderedTuple? It's simply a mutable tuple with an ordering imposed on the keyed value.
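A sketch of what such an OrderedTuple could look like (my guess, not the actual library code): a mutable pair that compares on its Comparable first slot only:

```java
// a guess at OrderedTuple: a mutable pair ordered on its (Comparable)
// first slot -- a sketch, not the actual library class
public class OrderedTuple<K extends Comparable<K>, V>
        implements Comparable<OrderedTuple<K, V>> {
    private K key;            // mutable: changeKey() below may rewrite it
    private final V value;

    private OrderedTuple(K key, V value) {
        this.key = key;
        this.value = value;
    }

    public static <K extends Comparable<K>, V> OrderedTuple<K, V> pair(K key, V value) {
        return new OrderedTuple<>(key, value);
    }

    public K fst() { return key; }
    public V snd() { return value; }
    public void changeKey(K newKey) { key = newKey; }

    @Override public int compareTo(OrderedTuple<K, V> that) {
        return key.compareTo(that.key);  // the ordering ignores the value
    }

    public static void main(String[] args) {
        OrderedTuple<Integer, String> p = OrderedTuple.pair(1, "ent");
        p.changeKey(0);               // nudge it up the sort order
        System.out.println(p.fst());  // 0
        System.out.println(p.snd());  // ent
    }
}
```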

Tuples are a way of pairing or associating things that are not explicitly already paired. As we saw above, normally the indices of a list are implied, but they are there and are associated with the elements of the list. A tuple makes that implied association explicit. Once something is explicated then it can also be exploited in ways unimagined heretofore.

Another, novel, way I've used tupling is entanglement. Something over here, an event or an object, changes, and that change is supposed to reflect throughout the system. How do we do that?

One way is to hardcode it:

if(x changed) then change y

But this is at the algorithmic level, and things happening procedurally are notoriously hard to track.

We could, instead, make this a structure and realize entanglement as a data type.

This is what I do. I associate an observed object with an action, and that action is to change a value entangled (implicitly) with the observed object.
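In miniature, that entanglement could look like this (the null-guard in observe() is my guess at the semantics, not the library's actual code):

```java
import java.util.function.Consumer;

// entanglement as data: a tuple of an observed (possibly null) value and
// the effect to fire when that value is present -- a sketch of the idea
public class EntangleDemo {
    static class Entanglement<T> {
        private final T observed;
        private final Consumer<T> effect;
        private Entanglement(T observed, Consumer<T> effect) {
            this.observed = observed;
            this.effect = effect;
        }
        static <T> Entanglement<T> entangle(T observed, Consumer<T> effect) {
            return new Entanglement<>(observed, effect);
        }
        void observe() {
            if (observed != null) {       // an absent value fires no effect,
                effect.accept(observed);  // so nulls never clobber stored data
            }
        }
    }

    public static void main(String[] args) {
        String[] stored = { "Jan 1" };
        Entanglement.entangle("Feb 2", d -> stored[0] = d).observe();
        Entanglement.entangle((String) null, d -> stored[0] = d).observe();
        System.out.println(stored[0]);  // Feb 2 -- the null update was ignored
    }
}
```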

Well, as in the above scenario with Ent types, I was collecting a set of three of the most current dates from Ent rows, where 'most current' is defined as the last-update-date auditing column being greatest on a particular set of entity types.

The thing is that any of these three dates could be null, so if I wholesale replaced stored dates with those from the most recent Ent row, I found I would be nulling out already-established dates.

I had to update dates to the most recent non-null dates, and this occurred on a case-by-case basis.

A further complication: one of the dates has an associated duration type-code, so every time I replaced that date, I had to update the duration code, too.

Algorithmically this turned into a nightmarish mess. Three development teams were called in to code the solution, and three teams each, eventually, threw up their hands, declaring this to be an intractable problem.

I used entanglement to solve this problem.

For the associated date-duration-code, I had one entanglement:

Entanglement.entangle(incomingRow.getDateA(), new Effect<Date>() {

@Override protected void execute(Date newA) throws Failure {

stored.setDateA(newA);

int dur = incomingRow.getDurCd();

if(dur > 0) {

stored.setDurCd(dur);

}

}

}).observe();

For the stand-alone date, I used a simple entanglement:

Entanglement.entangle(incomingRow.getDateB(), new Effect<Date>() {

@Override protected void execute(Date newDateB) throws Failure {

stored.setDateB(newDateB);

}

}).observe();

Then I simply iterated over the result set of the set of entities of concern, and I was able to get the correct dates required for each entity row.

Easy, peasy!

Using the entanglement type (which, itself, is simply a tuple of an observed value and an effect), I overhauled a patchwork system (that actually wasn't working at all) of over one thousand lines of code with a fifty-line replacement that satisfied customer requirements.

My take-away here is that solving problems algorithmically leads to code bloat and complication, but when the problem is reconceptualized into data structures, the problem is simplified and modularized.

Tuple is a data type that does just that: often, in code, two values are implied to be associated, but there is no direct correlation. A tuple makes that association explicit.

κCalc: the Kappa calculus

One more point to cover on tuples.

The λ calculus can be considered to be the marriage of two dual calculi: the κ calculus, which is the calculus of composition,(2) and allows functors of the first order only, and the ζ calculus, which is the calculus of application. So, in the κ calculus, values are lifted into tupled types, or dropped to the unit type. An entire calculus exists around the manipulation of tuples, and it has its own arithmetic and everything.

You can do anything with tuples! Tuples are like Tiger Balm that way.

-----

(1) Ever hear a British-English person say the word 'can't' ...? It is to die for! But I digress, and also, in digression, have embarrassed Mr. Sh, my boss.

(2) When I read this paper on the κ calculus, I was so pleased! I had defined my morphism type to be a type that contained only the identity and the composition functions, but not application! And I had wondered if I were doing the right thing. Isn't the whole point of it all to apply a function? Isn't that what a morphism ... 'is'? And, actually, no, that's not the case at all. There do exist morphisms free of the concept of function application, and the κ calculus has arrows where elimination occurs from (de)composition, not from function application.

There exists a whole class of arrows that do no application at all. And I got that correct!

But this topic can be discussed in the article where we consider the morphism.

I suppose I'll start that article with the words: "We consider the morphism."

Hm, I wonder if the map article and the morphism article are one and the same?

Monday, November 25, 2013

This is the continuing series of articles of continuing on continuations. Really.

And State. Oh, and State. Ugh.

So, the State monad is a monad on a 'stateful-like' function.

Still with me?

So I have this beautiful implementation in Java — No, really. I do. — and, so ... how do I test that it works?

Well, you know I have to use it. I have this State monad transformer example, but I want something really, really simple, so:

> inc :: State Int Int
> inc = do x <- get
>          put (x + 1)
>          return x

But the thing is, we don't have the do-notation in Java, so we have to unwind this as a set of monadic binds, so:

> inc1 = get >>= \x -> put (x + 1) >> return x

Cool! But we don't have (>>) ('then') in Java, we have (>>=) ('bind'), so one more transformation:

> inc2 = get >>= \x -> put (x + 1) >>= \_ -> return x

AND we're done! Now, an example:

> runState (inc2 >> inc2 >> inc2) 2

and we get:

(4, 5)

as expected.
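To convince myself a Java rendering must behave the same way, here is the same computation again over a pocket-sized state type (a toy sketch, not my library's State monad):

```java
import java.util.function.Function;

// a pocket-sized State: a stateful computation is just a function s -> (a, s)
public class StateDemo {
    record Pair<A, S>(A value, S state) { }

    interface State<S, A> extends Function<S, Pair<A, S>> {
        // (>>=): run this computation, then thread the state through k's
        default <B> State<S, B> bind(Function<A, State<S, B>> k) {
            return s -> {
                Pair<A, S> p = this.apply(s);
                return k.apply(p.value()).apply(p.state());
            };
        }
    }

    static State<Integer, Integer> get() { return s -> new Pair<>(s, s); }
    static State<Integer, Void> put(int s) { return ignored -> new Pair<>(null, s); }
    static <A> State<Integer, A> unit(A a) { return s -> new Pair<>(a, s); }

    // inc2 = get >>= \x -> put (x + 1) >>= \_ -> return x
    static State<Integer, Integer> inc =
        get().bind(x -> put(x + 1).bind(ignored -> unit(x)));

    // inc2 >> inc2 >> inc2, with (>>) spelled as a bind that drops its argument
    static State<Integer, Integer> thrice = inc.bind(a -> inc).bind(b -> inc);

    public static void main(String[] args) {
        System.out.println(thrice.apply(2));  // Pair[value=4, state=5]
    }
}
```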

Shoot, do we have to write (>>) ('then') as a convenience function? We still have to take the argument and declare the type of the function, but the function itself simplifies. Let's see the Java implementation first as:

(3) Thus spake the Zen Master: "Why are you in awe of those who use call/cc and similar parlor tricks? The real miracle is that when I wish to sum, I use (+), and when I wish to 'continue' I hit the carriage return." Then he concluded his lesson with the cryptic utterance: "One bowl of rice" and departed their presence for Nirvana (he likes Soundgarden, as well).

Chapter Postlude

The problem that this chapter addresses is one I've had in my Java categories library for some time now. I've had this thing called 'MutableBoxedType<T>' with a very easy implementation:

@Effectual public class MutableBoxedType<T>

implements Functor<T>, Copointed<T> {

public MutableBoxedType(T val) {

this.value = val;

}

@Effectual public void put(T newval) {

this.value = newval;

}

@Effectual public final Effect<T> putF = new Effect<T>() {

@Override protected void execute(T obj) throws Failure {

put(obj);

}

};

// Functor implementation -----------------------------------

public <X, Y> Arrow<Functor<X>, Functor<Y>>

fmap(final Arrow<X, Y> fn) {

return new Function<Functor<X>, Functor<Y>>() {

public Functor<Y> apply(Functor<X> obj)

throws Failure {

MutableBoxedType<X> mbt =

(MutableBoxedType<X>)obj;

return new MutableBoxedType<Y>(fn.apply(mbt.extract()));

}

};

}

// Copointed implementation --------------------------------

public T extract() {

return value;

}

private T value;

}

Simple enough, we just put a box around a mutable type. The problem is all those @Effectual annotations and the very mutability-ness of the type itself. Once we make the guts of an instance mutable we lose all provability, as 'equals' and '==' have now been reduced to meaninglessness, viz:

MutableBoxedType<Integer> five = new MutableBoxedType<Integer>(5);

MutableBoxedType<Integer> oldFive = five;

five.put(12);

five.equals(oldFive); // ... is true

Yes, five and oldFive are equal, but their shapes have changed during the course of the program, and we need deep-copy semantics here to sniff that out.

The (functional) state monad doesn't 'change' ... not via mutation, anyway. What happens instead is that when the state gets updated, we have a new state. It just makes sense (provably so) and gives us our beloved 'mutation' (by all appearances) even in the functional domain.

Thursday, November 21, 2013

So, okay, we've shown how to lift continuation functions up into the monadic domain, so we could control the input to a continuation, just as we've shown how we can (very easily) control the output from such a function.

And the '(very easily)' is the rub there. If it's the output of the function that concerns us,(1) then let's simply work with the output, good or bad.

There's a pattern that addresses this, it's known as the Either data type:

> data Either a b = Left a | Right b

The Either type, itself, is not a functor, as it represents choice. By convention this falls very nicely into failure conditions where we wish to know what went wrong:

> type Answer a = Either a String

If we get back a Left a solution, we know our answer is good; but if we get back a Right String answer, then we know we had an error, and the error message is contained in the string. We can use the either function to dispatch on our called-function's output:

> either :: (a -> c) -> (b -> c) -> Either a b -> c

So, all that heavy lifting of CPS-functions we did in the last chapter to make them monadic ...?

Let's return to the drawing board, restarting from the example code base where we made our divBy function in the CPS-style for unit values:

public ANS appDivBy(final Arrow<Integer, ANS> tail, Integer by)

throws Failure {

return Continuity.divBy(basis).apply(by)

.apply(new Function<Integer, ANS>() {

public ANS apply(Integer x) throws Failure {

return app(tail, basis, x);

}

});

}

And with the Either type ...

Whoopsie, we need to define our Either type in Java, first, before we can use it! Java is particular about these kinds of things.

Funny, that.

Well, the type is very easy to declare:

interface Either<LEFT, RIGHT> {

public <ANSWER> ANSWER

either(Arrow<LEFT, ANSWER> afn, Arrow<RIGHT, ANSWER> bfn)

throws Failure;

}

The Either type gives us the data structure of choice (materializing that choice, once it's made), and the function that dispatches on that choice.
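Here's the choice-and-dispatch idea in pocket form, using plain java.util.function instead of the Arrow-and-Failure machinery (a sketch; safeDiv is a made-up example, not library code):

```java
import java.util.function.Function;

// choice as data, plus the function that dispatches on that choice
public class EitherDemo {
    interface Either<L, R> {
        <A> A either(Function<L, A> onLeft, Function<R, A> onRight);
    }

    static <L, R> Either<L, R> left(L l) {
        return new Either<L, R>() {
            public <A> A either(Function<L, A> f, Function<R, A> g) {
                return f.apply(l);   // the choice was Left: dispatch left
            }
        };
    }

    static <L, R> Either<L, R> right(R r) {
        return new Either<L, R>() {
            public <A> A either(Function<L, A> f, Function<R, A> g) {
                return g.apply(r);   // the choice was Right: dispatch right
            }
        };
    }

    // the article's convention: Left is the good answer, Right the error
    static Either<Integer, String> safeDiv(int n, int d) {
        if (d == 0) {
            return right("divide by zero");
        }
        return left(n / d);
    }

    public static void main(String[] args) {
        Function<Either<Integer, String>, String> report =
            e -> e.either(n -> "answer: " + n, msg -> "error: " + msg);
        System.out.println(report.apply(safeDiv(12, 4)));  // answer: 3
        System.out.println(report.apply(safeDiv(12, 0)));  // error: divide by zero
    }
}
```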

I'd give you the implementation for Left and Right, the two materialized options of the Either choice, but we have to do a bit of ground-work to make them happen. Or should I just try to develop the two subtypes of Either and let the basic types fall out, as what happened when I did the development work? Yes, let's do that.

There's a lot of red in the above code. Why? Well, because we already implemented all that code in the Right<?, RIGHT> type. All the red code is repeated, nearly verbatim, just replacing 'Left's for 'Right's. What's going on here? It seems like there's shared functionality here.

There is.

The thing is, are functors required to carry around some unit type? I argue: no. The intrinsic property of a functor is that it is mappable from one category to the other: fmap(). I further argue that it is not intrinsic to a functor-type to carry a value. So I said arg() is not part of the Functor protocol.(2)

And this, by definition, fits in very nicely with the whole concept of monads. Once you're lifted into the monadic domain, you stay there, and you are not allowed to extract() values from the monad. For example: try extracting a value from Maybe.NOTHING, or from [] ... you get an error. I argue even more strongly than this: arg() against a(n unspecified) monad should not compile!

And that's my heart-ache with the above code. Left, by declaration, is a Functor, but it shouldn't have to carry around all these marshalling and unmarshalling functions for a carried type. A generic type-carrying type should have all that functionality, and Left and Right should shunt or delegate all this functionality to that type-carrying type.

With this common class as a parent, both in functionality and in type-inheritance (both Left and Right are identifying types or type labels on the underlying values), both Left and Right reduce to much smaller implementations:

public class Left<LEFT, RIGHT> extends Id<LEFT>

implements Either<LEFT, RIGHT> {

public Left(LEFT value) {

super(value);

}

// Id override -----------------------

protected <X> Id<X> constructor(X val) {

return new Left<X, Unit>(val);

}

// Either override -------------------

public <ANSWER> ANSWER

either(Arrow<LEFT, ANSWER> a, Arrow<RIGHT, ANSWER> b)

throws Failure {

return a.apply(arg());

}

}

and for Right:

public class Right<LEFT, RIGHT> extends Id<RIGHT>

implements Either<LEFT, RIGHT> {

public Right(RIGHT value) {

super(value);

}

// Id override -----------------------

protected <X> Id<X> constructor(X val) {

return new Right<Unit, X>(val);

}

// Either override -------------------

public <ANSWER> ANSWER

either(Arrow<LEFT, ANSWER> a, Arrow<RIGHT, ANSWER> b)

throws Failure {

return b.apply(arg());

}

}

Much, much simpler!

You see in these implementations the constructor() function supplies our (sub-)type, so that fmap() on Id works equally well for these types, as well.

You also saw in the original implementation the following (questionable) type: '?'. That type has been replaced by the Unit type (familiar to you Haskell-functional programmers). Unit is what? One? Well, yes, there's only one Unit instance, because it represents the null or nothing value in the unit category:

public class Unit {

private Unit() { }

@Override public String toString() { return "()"; }

public static final Unit instance = new Unit();

}

Easy, peasy!

And, in fact, with just these elements, we have a system sufficient to handle normal and erroneous states (or any left or right paths) for our simple arithmetic system:

class ResultE extends Result<Either<Integer, String>> {

public ResultE(Either<Integer, String> res) {

super(res);

}

public Either<Integer, String>

apply(final Arrow<Integer, Either<Integer, String>> tail)

throws Failure {

return appE(tail, basis, 3);

}

private static Either<Integer, String>

appE(final Arrow<Integer, Either<Integer, String>> tail,

final Either<Integer, String> basis, Integer in) throws Failure {

return Continuity.g(basis).apply(in)

.apply(new Function<Integer, Either<Integer, String>>() {

public Either<Integer, String> apply(Integer x) throws Failure {

return Continuity.f(basis).apply(x).apply(tail);

}

});

}

public Either<Integer, String>

appEDivBy(final Arrow<Integer, Either<Integer, String>> tail,

Integer by)

throws Failure {

Either<Integer, String> ans = null;

try {

Continuation<Integer, Either<Integer, String>> cont =

Continuity.divBy(basis).apply(by);

ans = cont.apply(new Function<Integer,

Either<Integer, String>>() {

public Either<Integer, String> apply(Integer x)

throws Failure {

return appE(tail, basis, x);

}

});

} catch(Exception ex) {

ans = new Right<Integer, String>("Failed with error " + ex);

}

return ans;

}

public static void main(String args[]) throws Failure {

eitherCPSdivBy0();

}

private static void eitherCPSdivBy0() throws Failure {

final Either<Integer, String> basis = new Left<Integer, String>(0);

final ResultE inf = new ResultE(basis);

looper(new Callee<Either<Integer, String>>() {

public Either<Integer, String> call(Integer inni)

throws Failure {

return inf.appEDivBy(new Function<Integer,

Either<Integer, String>>() {

public Either<Integer, String> apply(Integer x)

throws Failure {

return new Left<Integer, String>(x);

}

}, inni);

}

}, "either-CPS");

}

}

Note that it is the caller that determines what is 'good' (left) and what is 'bad' (right) and what to do in either case. And notice, in the below output, how simple and obvious everything is: we either have a left/number or a right/error condition.

So, okay, what's the take-away from all this? Besides the fact that Either is easy to implement and to apply and that lifting continuations to monadic binding is hard?

Well, just that. The previous two chapters are a reinvention of the ContT, the continuation monadic transformer, where I developed a need or justification for its use and then I wrote the thing out by hand from scratch.

THEN I looked up ContT in the Haskell standard library.

The journey of Haskell programmers is reimplementing functionality of the standard libraries. Enlightenment comes when they realize they are doing just that.

But I'm actually feeling pretty good from all this exercise. I never saw the 'point' before of continuations, and I never 'got' ContT; I didn't even look at it as a thing, and now, it's ... 'Ah! A Kleisli arrow on the continuation function! Now I see!' And I didn't see the 'point' of the Either type. It was just choice. So what?

Choice is a powerful thing. It's more than just error handling, as we have shown here, but even just that is powerful enough: it's simple to implement and easy to apprehend.

Continuations. They've shown us ... well, continuations, but the control pattern also exposes opportunities to explore data types that are very simple to use. But why? Because the patterns that arise beg the simplicity that these data types supply.

That's what I like about these simple concepts: rigorously applied, they open new vistas, yes, but they also facilitate the exploration with these patterns that allow us to try something, ContT, perhaps, and if that doesn't work well, then, well, how about Either, then? Or perhaps Maybe? Or a monadic or comonadic List? An applicative functor?

A universe of space to explore, and, with these tools, a (type-)safe, fun and exciting set of ways to branch off in these explorations.

I'll see you on the other side.

Next chapter. Tuples? Coroutining with continuations? We shall see.

-----

(1) In most cases, it is so that the output of functions is what concerns us. That's a reason monads are so pervasive: they allow us to control the output of functions safely in the monadic domain.

(2) This was my personal little crisis. I just couldn't see the point of functor types not carrying a datum. I mean: what did that look like? I was stuck on the idea of functor-as-id, and id carried a value it identified, even if it were the empty/null/unit-type instance. Same for monads. Yes, monads are the triple {T, μ, η} but if they aren't carrying a datum, what's the point? Monads not carrying a value seemed to be a rather abstract (and pointless) exercise to me. After all, even Nothing has a value: Nothing.

But that's where I missed it, right? But I'm in good company: the ancient Greeks looked upon the concept of zero as anathema and heretical. 'Something' exists, and 'nothing' doesn't exist. It can't. Conceptualizing it is error.

Or so they thought, and so did I. And look what that blocking ... blocked. Math was stuck until zero was admitted, and then from zero came everything ('infinity'). Same for me. By forcing Functor and, consequently, Monad to have arg(), I was boxing Functor into having something that it mayn't, and forcing Monad into something that it isn't: you extract from comonads; to extract from a monad is an error.

Let's look at Nothing again.

Nothing :: Maybe a

Nothing has the (Maybe'd) a-type, but it doesn't carry an a-instance.

Same thing for any kind of list:

[] :: [a]

The empty list has the (listed) a-type, but it doesn't carry an a-instance. It's an error to ask to extract an a-instance from an empty list.

Same thing for any mzero (dependent) (sub) type of a monoidally-typed (or -kinded) type.

So, whereas before I would have arg() for monads and throw an error when I encountered a Nothing or [] or other mzero-value, now I'm thinking that arg() depends on the type of the value. There's no arg() for [] and none for Nothing, either, and it's now a compiler error to ask for an arg() from those values. You just can't ask for arg() from Maybe nor from list any more when they're monadic, you have to obey the math of monads now, and fmap() the monadic function against the monad itself, then join() to get the monadic result.
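That discipline, in miniature, with Java's Optional playing Maybe: map() as fmap(), and flattening the nested Optional as join():

```java
import java.util.Optional;
import java.util.function.Function;

// no get() (no arg()) is ever asked of an empty value: only map and flatten
public class NoExtraction {
    public static void main(String[] args) {
        Optional<Integer> just3 = Optional.of(3);
        Optional<Integer> nothing = Optional.empty();

        System.out.println(just3.map(x -> x + 1));    // Optional[4]
        System.out.println(nothing.map(x -> x + 1));  // Optional.empty

        Optional<Optional<Integer>> nested = Optional.of(Optional.of(5));
        System.out.println(nested.flatMap(Function.identity()));  // Optional[5]
    }
}
```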

Much cleaner, and now provably correct, because now the above doesn't yield a runtime error. When this new concept of monads is applied in programs, compiling the program successfully means it works now.

And this reconceptualization means I can now explore monads that do not carry an underlying value. I can have some monad m that itself has values m1, m2, m3, ..., mn and those (monadic) values, themselves, have their own intrinsic meaning: I don't have to ask() for their arg()ed values for the context of them: I can now use comonads for that when I need that.