This weekend, for the Boston Haskell hackathon, a bunch of people who pay more attention to homotopy type theory than I do are in town. So I've been looking at the notes for cubical type theory to catch up some (on the parts that are relevant to my interests).

I can't really claim to understand them well. Some of it is written in typing rules, but some of the notation used is unexplained, and seems to expect you to understand it via intuition about categorical models that I don't have a good understanding of. But, eventually I came to something that explained a big gap in my understanding. The question is, "if I know eq : T = U, and I have P(T), how do I get P(U), given that eq can have computational content and must apply functions at all the right places?" And the answer in cubicaltt is that some operations are defined by induction on types.

This isn't that surprising, because observational type theory did the same thing. However, a big difference occurs to me. Observational type theory went to great lengths to ensure that equality was computationally irrelevant. And this presumably ensures that the things defined by induction on types don't matter when you run your program. But the whole point of cubicaltt and the like is that equality is not only relevant, but has non-trivial content.

So the question is, can you erase types (or terms of type U if you prefer)? And the immediate answer that occurs to me is, "no." This isn't actually that novel either. It is related to the question of whether you can statically resolve all dictionary passing for type classes in Haskell. JHC, for instance, implements type classes by case analysis on (representations of) types, and so it cannot erase all of them statically. But it might not be immediately obvious that internalizing the (meta) fact that everything respects weak equivalence would have a similar effect.

Presumably the action on objects (i.e. types) is going to be as erasable as the type produced by said action, which in turn is conditional on whether it is used to act on morphisms.

This is normal. Just because you can't erase all types doesn't mean you can't erase any. JHC passes all types and runs an unused variable analysis to erase what types (and terms) can be.

However, it might be an example of where an erasure/parametricity tracking calculus would be useful for types. Typically it's redundant, because types are automatically computationally irrelevant, unless you add something like type case.

This also gives an alternate perspective on how homotopy type theory 'works.' In plain intensional type theory, the substitution operator works by doing nothing. In homotopy type theory, it takes a computationally relevant explanation of what it has to do to map between the two types. You could imagine this in a Haskell-like language as the substitution operator having a type class constraint on the type being substituted into. It just happens that the type class can be automatically derived for all valid types (and type functions).
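A minimal sketch of that idea in Haskell, using Leibniz-style equality. The names `Equal`, `subst`, `refl`, and `castWith` here are illustrative, not from any particular library; the point is that in the intensional setting the only proof is reflexivity, so substitution is computationally just the identity:

```haskell
{-# LANGUAGE RankNTypes #-}

import Data.Functor.Identity (Identity(..))

-- Leibniz-style equality: a proof that a equals b is the ability to
-- substitute a for b under any type constructor f. This is the
-- "type class constraint" on substitution, made explicit as a record.
newtype Equal a b = Equal { subst :: forall f. f a -> f b }

-- In intensional type theory the only proof is reflexivity, whose
-- computational content is just id -- substitution "does nothing."
refl :: Equal a a
refl = Equal id

-- Coercing a value along a proof: apply subst at the Identity functor.
castWith :: Equal a b -> a -> b
castWith eq = runIdentity . subst eq . Identity
```

In a homotopy-flavored theory, a value of `Equal a b` could carry genuinely non-trivial content (think of an `Equal Bool Bool` built from negation), which is exactly why it can't be erased the way these proofs can.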

Wrote down my arguments about why unsafeInterleaveIO isn't actually as bad as its reputation suggests. With digressions about referential transparency and purity, in case you want to use terminology exactly the same way I do.

One area where I'm at odds with the prevailing winds in Haskell is lazy I/O. It's often said that lazy I/O is evil, scary and confusing, and it breaks things like referential transparency. Having a soft spot for it, and not liking most of the alternatives, I end up on the opposite side when the ...

+Neil Mitchell Yeah. It's Ed's blog, and a while back he invited me to write guest posts, because I wasn't really interested in making my own. But no one ever informed the Planet Haskell folks (or whatever is necessary to display something other than Ed's name).

We have some code for generating Excel files (edit: apparently we don't; we just have generators for other stuff. You'll just have to take my word for what the Excel version looks like), and it turns a fairly declarative specification of the file into effectful procedures that generate the file; T -> U with a side effect. You can combine several of these by running them in sequence; something like:

\t -> f t >> g t

However, there's a problem (if your language implementation doesn't handle this right). In the code above, once we're in g, we're clearly done with f, and it and all the things it transitively references could be garbage collected (at least as far as we're concerned). But without the call to g eliminating the stack frame of our lambda, the reference to f is still live, and it has to stick around.

This is also a good reason for the compiler to CPS your code (or do something similar; the STG used in GHC is nearly in A-normal form for instance), so that you can handle cases like:

\t -> f t >> g t >> h t

Once we've jumped from f into the continuation {g t >> h t}, f is no longer live. But without this sort of implementation, building up structure using first-class functions and closures is expensive.
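As a sketch of the combining pattern, here is what the sequencing combinators might look like. The `Gen` type and the names `both` and `combine` are hypothetical stand-ins for the real generator code, with IORef-writing generators playing the role of file output:

```haskell
import Data.Foldable (traverse_)
import Data.IORef

-- Hypothetical: a "generator" takes a specification and performs
-- effects (in the real code, writing pieces of an output file).
type Gen t = t -> IO ()

-- Run two generators in sequence over the same input.
both :: Gen t -> Gen t -> Gen t
both f g = \t -> f t >> g t

-- Run any number of generators in sequence over the same input.
combine :: [Gen t] -> Gen t
combine gens = \t -> traverse_ ($ t) gens
```

The liveness concern from above applies directly: in `both f g`, once `g t` is entered, `f` should be dead, but that depends on the implementation not keeping the closure's frame alive across the second call.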

I think this might be a good video to show programmers who are struggling with learning Haskell, monads, category theory, etc. There are, naturally, a lot of parallels between that stuff and learning to ride a backwards bicycle after already knowing how to ride a normal one.

It also kind of illustrates +Brent Yorgey's monad tutorial fallacy. When people's brains snap over to finally understanding, they tend to think it was the last example or analogy they read that did it, when it was actually the period of repeated struggling, and everything eventually snapping into place is just how understanding works in general.

Wow, I've seen one or two of those, but I thought it was just a minor problem; then again, I assemble all my playlists in the web interface. I do wish they had a query API. I like to figure out what music works for me when, and try to round up more of the same.

In the last couple posts I've used some 'free' constructions, and not remarked too much on how they arise. In this post, I'd like to explore them more. This is going to be something of a departure from the previous posts, though, since I'm not going to worry about thinking precisely about ...

Speaking with +Edward Kmett about some operad stuff he's been doing caused me to realize a new way of seeing that all the GADT stuff in Haskell is not really dependent types.

My usual go-to example is functions. Whenever people use GADTs and indexing to fake dependent types, they think about data types. And those are amenable to faking (though it's often not pleasant) because, well, indexed data types are data. But functions are wired in, and there's really no good way (that I know of) to represent singleton subtypes of function spaces using GADTs, at least if you want it to look remotely similar to dependent types.

However, I came to the realization that it isn't really even adequate for data types, because the semantics of the type level are different from those of the value level. For instance, I wrote some stuff recently about how (finite) lists aren't the free monoid in Haskell. But Haskell's type level is expected to behave more like a total language, so finite lists are the free monoid at the type level.
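The value-level side of that mismatch is easy to demonstrate. An infinite list is a perfectly legal inhabitant of `[Int]`, and appending to it never reaches the second argument, so `(++)` doesn't behave like the free-monoid operation on finite words; the type level, being (expected to be) total, has no such values:

```haskell
-- An infinite list: a legal value of [Int] at the value level.
ones :: [Int]
ones = 1 : ones

-- The appended [2] is unreachable: ones ++ [2] is observationally
-- the same as ones, which no free monoid on finite words allows.
sample :: [Int]
sample = take 3 (ones ++ [2])
```
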

And even though we have all kinds of automatic lifting now, the semantics don't actually line up. The 'faking it' procedure is that to have 'dependent types' involving T, you lift T to the type level, create T' :: T -> *, and maybe recover T by working with ∃(t :: T). T' t. But this construction actually isn't the same as having types that depend on your original T (for Haskell), because it is actually constructing the finitary version of T. There are even certain rules for working with existentials that ensure this is the case.

The short version is that reasoning about the lifting of your data type doesn't mean you're actually reasoning about the original value level type, just about a corresponding object with an analogous definition.
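Here is a minimal version of that lifting for a hypothetical lazy Nat, to make the construction concrete. `SNat` plays the role of T' :: T -> *, and `SomeNat` is the existential ∃(t :: T). T' t; note that `toSome` can only ever produce finite naturals, whereas the original Nat also contains partial and infinite values:

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- The value-level type we want to fake dependency on.
data Nat = Z | S Nat

-- The lifting T' :: T -> *: a singleton with exactly one value-level
-- inhabitant for each type-level Nat.
data SNat :: Nat -> * where
  SZ :: SNat 'Z
  SS :: SNat n -> SNat ('S n)

-- The existential ∃(t :: T). T' t, recovering a value-level-like type.
data SomeNat where
  SomeNat :: SNat n -> SomeNat

-- Only finite naturals ever come out of this; a partial or infinite
-- Nat (like `fix S`) just makes toSome diverge, so the lifted type
-- is the finitary version of Nat, not Nat itself.
toSome :: Nat -> SomeNat
toSome Z     = SomeNat SZ
toSome (S n) = case toSome n of SomeNat m -> SomeNat (SS m)
```
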

Last time I looked at free monoids, and noticed that in Haskell lists don't really cut it. This is a consequence of laziness and general recursion. To model a language with those properties, one needs to use domains and monotone, continuous maps, rather than sets and total functions (a ...