Monday, October 20, 2008

Types: Values versus Locations

Minor thought I had this morning: I was doing some prep work for the conference I'm speaking at next week, and I noticed I was being perhaps overly pedantic about the terminology of types in a way that only matters for imperative languages.

I habitually make explicit the distinction between values and locations of a particular type. I might say that storing values of multiple types in a location of a single type is an instance of polymorphism. In describing a class, I might say that it is "an iterator over a stream of values of type T", rather than just "an iterator over a stream of T" (no, not a teapot!).
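That flavor of polymorphism - one location, values of several types - can be sketched in a few lines. (Python here, chosen purely for brevity; the point is language-independent, and the class names are made up for illustration.)

```python
# A hypothetical base class and two concrete subclasses.
class Shape:
    def area(self):
        raise NotImplementedError

class Square(Shape):
    def __init__(self, side):
        self.side = side
    def area(self):
        return self.side * self.side

class Circle(Shape):
    def __init__(self, radius):
        self.radius = radius
    def area(self):
        return 3.14159 * self.radius ** 2

# One location, conceptually of type Shape, holding values of two
# different concrete types over its lifetime:
loc: Shape = Square(2)
print(loc.area())   # 4
loc = Circle(1)
print(loc.area())   # 3.14159
```

The location `loc` keeps its identity across both assignments; only the value (and the value's type) changes.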

The distinction is important in languages that have mutable state. Locations can have their address taken, they are subject to polymorphism, and a location has an identity independent of its current value. Values, on the other hand, cannot have their address taken - at most, a value is or contains an address. A value always has a fixed type, but a location may contain values of different types if the location's type is polymorphic. Locations may be lvalues or rvalues, but values are always rvalues (unless you dereference, index or field-access them). Especially important is the fact that implementations of closures in imperative languages like C#, Delphi, Ruby, etc. have almost always opted to capture locations, not values.
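Python's closures behave the same way, which makes the location-capture semantics easy to demonstrate: each closure below closes over the variable `i` itself (the location), not the value `i` held when the lambda was created.

```python
# Closures capture the location, so every lambda sees i's final value:
funcs = [lambda: i for i in range(3)]
print([f() for f in funcs])           # [2, 2, 2]

# Capturing the *value* requires an explicit copy - here via a default
# argument, which is evaluated once at lambda-creation time:
funcs_by_value = [lambda i=i: i for i in range(3)]
print([f() for f in funcs_by_value])  # [0, 1, 2]
```

The first result surprises people precisely because they expect value capture; the surprise itself is evidence that the value/location distinction carries real semantic weight in imperative languages.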

However, consider a pure language, like Haskell. If the language doesn't have mutable state, there's no such thing as a location (at the conceptual level). If you have an iterator in such a language (perhaps modeled using a tail-call continuation design), it's redundant to say "iterator over a stream of values of T" - it's always OK to say "iterator over a stream of T" instead. And the closures might capture values or locations, as performance demands, since the semantics don't change.
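The "semantics don't change" claim can be illustrated even in an impure language, provided we write in a pure style: if a captured variable is never rebound or mutated after capture, capture-by-location and capture-by-value are observationally identical. (A sketch in Python; `make_adder` is a made-up example function.)

```python
def make_adder(n):
    # n is never mutated or rebound after this point, so whether the
    # closure captures n's location or a copy of its value is
    # unobservable - both strategies yield the same behavior.
    return lambda x: x + n

add3 = make_adder(3)
print(add3(4))  # 7
```

This is why a pure-language implementation is free to pick whichever capture strategy performs better.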

Painfully Amateur Philosophy addendum: locations are a pretty physical concept - you know the data is in there in memory somewhere - but values are more of a platonic concept, existing in some pure universe, and we can only refer to them by metaphor and convention, using specific bit patterns interpreted in precise ways. I specifically use the word metaphor in the conceptual metaphor sense: in no way are the electron levels inside the machine representing the ASCII characters of "cat" anything like the furry mammal. Rather, the bit pattern is like a pointer only the human mind can dereference, once it has been transformed into what humans have agreed to be a semantically equivalent representation on screen (which is just a different part of memory) or on paper (which is just bits streamed out over the wire).

Perhaps most people find this obvious and boring, but what interests me is the way the representational power of the bits is entirely unmagical, yet it permits meaning to be stored inside the machine. I say meaning, by which I mean whatever we humans, the only judges of what is meaningful, find to be meaningful, but no more: I do not think meaning is something inherent in objects, just in judges. What if brains had no more magic in their neurons than the circuits in the machine? (It seems entirely plausible to me, and to many programmers I imagine. And this would constrain those judges to be mere boolean functions over matter patterns.)

In such a scenario, qualia would be beyond our power to deconstruct using physical means, since the brain/machine could be evaluated "on paper", and such an evaluated brain would report the same qualia as you or I, and we would have no way to argue otherwise in a one-on-one dialogue (this is how I think of the Turing test - as a philosophical concept, not an actual benchmark, which I think would be silly and pointless). Under this assumption, no argument could prove that the machine isn't conscious without itself relying on arbitrarily chosen boolean functions which explicitly return false for non-mysterious matter patterns (e.g. those capable of "understanding"). Closely related is the problem of other minds, about which I'm on Turing's side - if we can't tell the difference, then there isn't any.

Perhaps consciousness and the experience of qualia are just what matter feels like when it's part of a causal chain? (The technical term, I understand from my Googling, is "type physicalism", though epiphenomenalism is related.)

1 comment:

Anonymous said...

A lot of effort has been put into making the pointer transparent to the programmer. Whilst this saves a lot of typing, it does raise issues of understanding what exactly is going on, and sometimes this is important to know.