Archive for software

We now have a web-based tutorial for LogiQL, the descendant of Datalog we have been developing at LogicBlox. Given my high standards, I feel we still have more work to do on the language. However, as our newly revised website points out, it is already being used successfully to solve very real business problems.

Jeff pointed me to this extremely cool and useful web application: Detexify2. At least a few of you readers have spent time skimming The Comprehensive LaTeX Symbol List looking for the incantation required to produce a specific symbol. With Detexify2 you can just scribble something that roughly looks like the symbol, and it will tell you its name and, if necessary, the LaTeX package that provides it.

The only problem is that it lets users help train the recognizer. This can also be a good thing, but I can imagine a few malicious users (or perhaps just people with very bad drawing skills) ruining it for everyone.

Given that the formalization of Scala Classic has ground to a halt, for reasons I may go into later, I spent part of today hacking on the Scala compiler itself to add support for singleton literals. Currently, Scala allows singleton types for stable identifiers. My modification allows literals to be used in singleton types. I can't promise that it will be in the forthcoming Scala 2.7.2 release, but I would like it to be.
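For context, here is the sort of singleton type stock Scala already accepted, with only stable identifiers allowed in the path (a small sketch, not part of the patch):

```scala
// Singleton types over stable identifiers: already legal in stock Scala.
val foo = "foo"
val x: foo.type = foo   // foo is a stable identifier, so foo.type is a type

// The patch additionally admits literals in singleton types,
// e.g. val y: "foo".type = "foo", which stock Scala rejects.
println(x)
```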

Overall it was far less work than I was expecting. Scala already internally supports what it calls "constant types", there is just no way to write them in Scala source code presently. Consequently, most of the effort was in extending the parser.

Given my modifications, it is now possible to write code like the following:

scala> val x : "foo".type = "foo"
x: "foo".type = foo

What I was not expecting was that out-of-the-box things like the following would work:

scala> val x : "foobar".type = "foo" + "bar"
x: "foobar".type = foobar

scala> val y : 10.type = 2 * 5
y: 10.type = 10

scala> def frob(arg : 10.type) : 6.type = arg - 4
frob: (10.type)6.type

Unfortunately the excitement soon passes when you realize all the things you can't do with singleton literals (yet). Even if we turn on the experimental dependent method type support, you can't write things like

def add(arg : Int) : (arg + 5).type = arg + 5

because these are exactly what they are called, singleton literals, not full-blown dependent types.

One cute example, based on a use suggested by Sean McDirmid, would be that some people might do something like the following with implicits:
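The following is only my hedged guess at the flavor of such a use, not Sean's actual suggestion; it is written with modern Scala 2.13 literal-type syntax (which is essentially what this feature became), and Codec, encodeAs, and the instances are hypothetical names:

```scala
// Hypothetical sketch: implicit instances indexed by singleton string
// types, so a literal type argument selects behavior at compile time.
trait Codec[F] { def encode(s: String): String }

implicit val jsonCodec: Codec["json"] =
  new Codec["json"] { def encode(s: String) = "\"" + s + "\"" }
implicit val xmlCodec: Codec["xml"] =
  new Codec["xml"] { def encode(s: String) = "<v>" + s + "</v>" }

// The singleton type argument picks the implicit instance.
def encodeAs[F](s: String)(implicit c: Codec[F]): String = c.encode(s)

println(encodeAs["json"]("hi"))
println(encodeAs["xml"]("hi"))
```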

In any event, they will hopefully become more useful as the type system continues to grow. I am also sure someone will probably come up with a clever use for them that hasn't occurred to me yet. If so, let me know.

Following on my earlier entry on modules in Scala, I'll give an encoding of Standard ML style functors here. You can get a pretty close approximation by using class constructor arguments. However, I am going to cheat a little to get the closest encoding I think is possible, by using the experimental support for dependent method types. You can enable this by running scala or scalac with the option -Xexperimental. It works okay at least some of the time, but no one currently has the time to commit to getting it into shape for general consumption.

So here is my example of how the encoding works. First, the SML version:
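The code did not survive here, so what follows is a hedged reconstruction of the kind of example described, with the SML version as comments; the names Eq and RicherEq follow the error message later in the post, but the bodies are my guess (I use eqv rather than eq to avoid colliding with AnyRef.eq):

```scala
// SML version (sketch):
//   signature EQ = sig type t  val eqv : t * t -> bool end
//   functor RicherEq(E : EQ) = struct
//     open E
//     fun neq (x, y) = not (eqv (x, y))
//   end
trait Eq { type T; def eqv(x: T, y: T): Boolean }
trait RicherEq extends Eq { def neq(x: T, y: T): Boolean = !eqv(x, y) }

// The functor becomes a method whose result type depends on its
// argument (dependent method types; -Xexperimental at the time).
def richerEq(e: Eq): RicherEq { type T = e.T } =
  new RicherEq { type T = e.T; def eqv(x: T, y: T) = e.eqv(x, y) }

val intEq = new Eq { type T = Int; def eqv(x: Int, y: Int) = x == y }
val r = richerEq(intEq)
println(r.neq(1, 2))
```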

The only problem I discovered is that it is not possible to define RicherEq in terms of Eq as we could in SML:

scala> type RicherEq = Eq { def neq(x: T, y: T): Boolean }
<console>:5: error: Parameter type in structural refinement may not refer to abstract type defined outside that same refinement
       type RicherEq = Eq { def neq(x: T, y: T): Boolean }
                                       ^
<console>:5: error: Parameter type in structural refinement may not refer to abstract type defined outside that same refinement
       type RicherEq = Eq { def neq(x: T, y: T): Boolean }
                                             ^

Why this restriction exists I don't know. In fact, this sort of refinement should work in the current version of Featherweight Scala, so perhaps it can be lifted eventually.

I still need to think about higher-order functors, and probably spend a few minutes researching existing proposals. This is probably something that cannot be easily supported in Scala if it requires allowing method invocations to appear in paths. However, offhand that only seems necessary for applicative higher-order functors; again, I definitely need to think it through.

I just saw a thread on Lambda the Ultimate where I think the expressive power of Scala in comparison to Standard ML's module system was misrepresented. I don't want to go into all of the issues at the moment, but I figured I would point out that you can get the same structural typing, opaque sealing, and even the equivalent of SML's where type clauses.

For example, consider the following SML signature:

signature Nat = sig
  type t
  val z: t
  val s: t -> t
end

This signature can be translated into Scala as:

type Nat = {
  type T
  val z: T
  def s(arg: T): T
}

It is then possible to create an implementation of this type, and opaquely seal it (hiding the definition of T). In SML:

structure nat :> Nat = struct
  type t = int
  val z = 0
  fun s n = n + 1
end

In Scala:

val nat : Nat = new {
  type T = Int
  val z = 0
  def s(arg: Int) = arg + 1
}

In many cases when programming with SML modules, it is necessary or convenient to give a module a signature that reveals the definition of an abstract type. In the above example, this can be done by adding a where type clause to the first line:

structure nat :> Nat where type t = int = struct
  ...

We can do the same thing in Scala using refinements:

val nat : Nat { type T = Int } = new {
  ...

Great, right? Well, almost. The problem is that structural types are still a bit buggy in the Scala compiler at present. So, while the above typechecks, you can't quite use it yet:

scala> nat.s(nat.z)
java.lang.NoSuchMethodException: $anon$1.s(java.lang.Object)
        at java.lang.Class.getMethod(Class.java:1581)
        at .reflMethod$Method1(<console>:7)
        at .<init>(<console>:7)
        at .<clinit>(<console>)
        at RequestResult$.<init>(<console>:3)
        at RequestResult$.<clinit>(<console>)
        at RequestResult$result(<console>)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflec...
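Until that bug is fixed, a nominal trait sidesteps the reflective dispatch entirely; this workaround sketch is mine, not from the post (NatSig is a hypothetical name, to avoid clashing with the structural Nat above):

```scala
// Workaround sketch: a trait instead of a structural type avoids the
// reflective method lookup that the structural version trips over.
trait NatSig { type T; val z: T; def s(arg: T): T }

val nat: NatSig = new NatSig {
  type T = Int
  val z = 0
  def s(arg: Int) = arg + 1
}

println(nat.s(nat.z))  // calls resolve through the trait, no reflection
```

The trade-off is that implementations must explicitly extend NatSig, whereas the structural version types any object with the right members.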

Some issues were raised about how faithful an encoding of SML functors, and of the well-known extensions for higher-order functors, one can get in Scala. Indeed, off the top of my head it is not entirely clear, so I need to think more about that before I write some examples.

I've been meaning to write this up for a while, but it seems like there has always been something else I really ought to be doing, so I expect this will be a bit more terse than I might like. Anyway, when I wrote about encoding higher-rank universal quantification in Scala back in March, I used a rather elaborate scheme involving Scala's first-class existential quantifiers. While this was the point of the exercise, surprisingly no one called me on the fact that if you just want higher-rank impredicative polymorphism in Scala, there is a much simpler encoding. Maybe it was obvious to everyone, or no one read closely enough to think of raising the issue. So today I'll explain the better way to encode them.

First we can define an infinite family of traits to represent n-ary universal quantification, much like Scala represents n-ary functions types:

trait Univ1[Bound1,Body[_]] {
  def Apply[T1<:Bound1] : Body[T1]
}

trait Univ2[Bound1,Bound2,Body[_,_]] {
  def Apply[T1<:Bound1,T2<:Bound2] : Body[T1,T2]
}

// ... and so on for N > 2

Really, the key to this encoding is the use of higher-kinded quantification to encode the binding structure of the quantifier.

Now it is possible to write some examples similar to what I gave previously, but more concisely:

object Test extends Application {
  // Renamed the polymorphic function to ident: a def and a val both
  // named id in the same object would be a double definition.
  def ident[T](x : T) = x

  type Id[T] = T => T
  val id = new Univ1[Any,Id] {
    def Apply[T <: Any] : Id[T] = ident[T] _
  }

  val idString = id.Apply[String]
  val idStringList = id.Apply[List[String]]

  println(idString("Foo"))
  println(idStringList(List("Foo", "Bar", "Baz")))

  type Double[T] = T => (T, T)
  val double = new Univ1[Any,Double] {
    def Apply[T <: Any] : Double[T] = (x : T) => (x, x)
  }

  val doubleString = double.Apply[String]
  val doubleStringList = double.Apply[List[String]]

  println(doubleString("Foo"))
  println(doubleStringList(List("Foo", "Bar", "Baz")))
}

As I mentioned previously, this example would be much improved by support for anonymous type functions in Scala. I am pretty sure Scala will eventually support them, as they would not require any deep changes in the implementation. They could be implemented simply by desugaring to a higher-kinded type alias with a fresh name, though depending on when that desugaring is performed, it could result in poor error messages. Supporting curried type functions is also quite desirable, but given my current knowledge of the internals, it seems like adding them will require some more elaborate changes.
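In the meantime, the standard workaround is the type-projection trick: name the type function inside a throwaway refinement and project it out. A sketch (Univ1 is redeclared here so the snippet is self-contained):

```scala
trait Univ1[Bound1, Body[_]] { def Apply[T1 <: Bound1]: Body[T1] }

// A "type lambda": the alias B exists only inside the refinement and is
// projected out with #, simulating an anonymous type function.
val double = new Univ1[Any, ({ type B[T] = (T, T) })#B] {
  def Apply[T <: Any]: (T, T) = null.asInstanceOf[(T, T)]
}

val pair = new Univ1[Any, ({ type B[T] = T => (T, T) })#B] {
  def Apply[T <: Any]: T => (T, T) = x => (x, x)
}

println(pair.Apply[Int](3))  // (3,3)
```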

I think Vincent Cremet was the first person to suggest this sort of encoding, and I vaguely recall reading about it on one of the Scala mailing lists, but I could not find the message after a little bit of time spent searching.

One of the things that will show up in the imminently forthcoming Scala 2.7.1 release candidate is the addition of traits for representing equivalence relations, partial orderings, and total orderings. Previously, the trait Ordered was used for representing totally ordered things:

trait Ordered[A] {
  def compare(that: A): Int
  def <  (that: A): Boolean = (this compare that) <  0
  def >  (that: A): Boolean = (this compare that) >  0
  def <= (that: A): Boolean = (this compare that) <= 0
  def >= (that: A): Boolean = (this compare that) >= 0
  def compareTo(that: A): Int = compare(that)
}

However, the Ordered trait does not itself provide a representation of a total ordering. Hence the new trait Ordering:

trait Ordering[T] extends PartialOrdering[T] {
  def compare(x: T, y: T): Int
  override def lteq(x: T, y: T): Boolean = compare(x, y) <= 0
  override def gteq(x: T, y: T): Boolean = compare(x, y) >= 0
  override def lt(x: T, y: T): Boolean = compare(x, y) < 0
  override def gt(x: T, y: T): Boolean = compare(x, y) > 0
  override def equiv(x: T, y: T): Boolean = compare(x, y) == 0
}
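The practical difference is that an Ordering is a first-class value describing how to compare, rather than an interface the elements themselves implement. For instance (byLength is a hypothetical example instance):

```scala
// An Ordering is a standalone comparison strategy, passed around as a value.
val byLength: Ordering[String] = new Ordering[String] {
  def compare(x: String, y: String): Int = x.length - y.length
}

println(byLength.lt("ab", "abc"))    // true: shorter string comes first
println(byLength.equiv("ab", "cd"))  // true: same length, so equivalent
```

This also means a single type can have many orderings, which Ordered cannot express.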

The tricky part, however, was writing a description of the properties required of something that implements the Ordering trait. When one normally thinks of a total ordering, one thinks of a relation that is anti-symmetric, transitive, and total. The problem is that Ordering is not defined in terms of a binary relation, but in terms of a binary function producing integers (compare). If the first argument is less than the second, the function returns a negative integer; if they are equal in the ordering, it returns zero; and if the second argument is less than the first, it returns a positive integer. Therefore, it is not straightforward to express these same properties. The best I could come up with was:

1. compare(x, x) == 0, for any x of type T.

2. if compare(x, y) == z and compare(y, x) == w, then Math.signum(z) == -Math.signum(w), for any x and y of type T and z and w of type Int.

3. if compare(x, y) == z and compare(y, w) == v and Math.signum(z) >= 0 and Math.signum(v) >= 0, then compare(x, w) == u and Math.signum(z + v) == Math.signum(u), for any x, y, and w of type T and z, v, and u of type Int.

Here Math.signum returns -1 if its input is negative, 0 if its input is 0, and 1 if its input is positive.

The first property is clearly reflexivity. I call the third property transitivity. I am not sure what to call the second property. I do not think a notion of totality is required, because it is assumed you will always get an integer back from compare rather than it throwing an exception or going into an infinite loop.

It would probably be a good exercise to prove that, given these properties on compare, the derived relations (lteq, gteq, lt, gt, and equiv) behave as a total order.
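Short of a proof, the stated properties can at least be spot-checked mechanically over sample values; this checking function is my sketch, not from the post:

```scala
// Spot-check the three properties for a given Ordering over sample values.
def holds[T](ord: Ordering[T], xs: List[T]): Boolean = {
  def sig(n: Int) = math.signum(n)
  // Property 1: reflexivity.
  val refl = xs.forall(x => ord.compare(x, x) == 0)
  // Property 2: swapping the arguments flips the sign.
  val anti = xs.forall(x => xs.forall(y =>
    sig(ord.compare(x, y)) == -sig(ord.compare(y, x))))
  // Property 3: transitivity, phrased via signum of compare results.
  val trans = xs.forall(x => xs.forall(y => xs.forall { w =>
    val z = ord.compare(x, y); val v = ord.compare(y, w)
    if (sig(z) >= 0 && sig(v) >= 0) sig(z + v) == sig(ord.compare(x, w))
    else true
  }))
  refl && anti && trans
}

println(holds(Ordering.Int, List(1, 2, 3, 3)))  // true
```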

I finished a draft of "Generalizing Parametricity Using Information-flow" for LMCS this past weekend. That should leave me with fewer distractions to work on something for ICFP.

My work on Featherweight Scala is moving along, sometimes more slowly than others. Other than the issues with singletons I have already discussed, one problem with the original definition of Featherweight Scala is that it did not truly define a subset of Scala proper – that is, there were many valid Featherweight Scala programs that were not valid Scala programs.

So I have been working on trying to get Featherweight Scala to be as close to a subset of Scala proper as possible without making it far too complicated to be a useful core calculus. Some of this has involved changing the definition of Featherweight Scala; some of it has involved lobbying for changes to Scala proper to make it more uniform (so far I've succeeded in extending traits to allow value declarations to be overridden).

Through the whole process I've also been spending a lot of time typing things into the Scala interpreter to figure out how Scala proper treats them. For better or worse, I've actually managed to uncover a fair number of bugs in Scala proper doing this. I've almost reached the point where I treat it as a bit of a game: can I, by thinking about the properties of the Scala type system, come up with some corner case that will exhibit an unsoundness in the language (a ClassCastException, a NoSuchMethodException, etc.), or at least crash the compiler?

Last week I came up with a pretty nice one where I used the fact that Scala should not allow inheritance from singleton types to launder types in an unsafe fashion (now fixed in trunk!). Prior to that I came up with something fun where you could trick the compiler into letting you inherit from Nothing (which is supposed to be the bottom of the subtyping hierarchy). Today I got to thinking about Scala's requirement that paths in singleton types must be stable – all components of a path must be immutable. So, for example, in

var x = 3
val y : x.type = x // ill-typed

the path x is not stable, because x is a mutable variable. However, Scala does treat lazy value declarations as stable.

var x = 3
lazy val y = x
val z : y.type = y // allowed, at present

Note that y is not evaluated until it is first needed, and in this case its evaluation involves a mutable reference. Furthermore, add into the mix that Scala also provides call-by-name method arguments (lazy values are call-by-need). So I started riffing on the idea of whether I could violate stability by changing the mutable reference between the definition of the lazy value, or a call-by-name argument, and its use. In retrospect, I am leaning at present towards the belief that there is no way this should be exploitable from a theoretical standpoint. That does not mean that the implementation is necessarily in alignment with theory, however. I did manage to hit upon a combination in my experiments that resulted in a NoSuchMethodException, so the exercise was not a complete letdown.
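For what it's worth, the memoization of lazy vals is what rescues stability in the simple case; a sketch of the experiment:

```scala
// Sketch: mutate the var after defining the lazy val but before forcing it.
var x = 3
lazy val y = x        // not evaluated yet
x = 4
val z: y.type = y     // forces y: it evaluates to 4 and is memoized
x = 5
println(y)            // still 4: later mutations cannot be observed through y
```

Once forced, y never re-reads x, so the path stays immutable from that point on; the interesting cases are the ones where typechecking and forcing interleave less predictably.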

I should point out that these things do not really reflect on the quality of the Scala language as a whole. It is a rich language that does not yet have a complete formal model, and as such, in the implementation process it can be easy to overlook some particularly perverse abuses of the language.

Writing about RSI reminded me that I had never gotten around to talking about the ThinkPad X61 tablet I purchased at the beginning of August. It is pretty solid when used as a laptop.

As far as being a tablet goes, it works reasonably well in that domain too, with some exceptions. Firstly, for now, if you want to use the tablet capabilities to their fullest, you need to run Windows Vista. The tablet is supposed to be supported under Linux, but there is really only one program available that supports handwriting recognition, Cellwriter. It looks promising, particularly because it can be trained to generate any Unicode glyph – with Vista you are limited to the system's configured language. However, I do not think it would be difficult for Gnome or KDE to catch up in this area if they put a little effort into it.

My initial solution to this was that I would just run Linux under VMWare. Except I soon found that while I can use the tablet as a mouse, VMWare will not accept the input events the handwriting recognition subsystem generates. When I filed this as a bug, they did not seem to think it was a problem.

When working with standard Windows applications, anything with an input field can accept handwriting recognition input. I almost wrote my entire defense presentation this way, but near the end I gave in and used the keyboard to do most of the last-minute tweaking.

Of course, there is the question of whether writing by hand is any easier on my wrists than typing. It is difficult to say. For one, it becomes basically impossible to use emacs, unless you can do everything from pull-down menus, because of the chording necessary to activate some functionality. And it is definitely slower than typing, and I expect it will remain so even with practice. However, part of the problem could perhaps be resolved by rethinking various applications with tablets in mind.