Anyone who is reading this: don't buy a cheap laptop. That
is, unless you enjoy sending it away for two to four weeks
every few months or so for repairs 'cause some flimsy little
part broke. I've just requested an RMA for this thing,
AGAIN, to fix all the silly (and not so silly) little
problems that have been accumulating since the LAST time it
took a little journey to the shop.  The most annoying part
is the fact that the expensive/important stuff, e.g. the CPU,
hard disk, and display, all work perfectly, while I have
trouble with the flimsy plastic door to the DVD drive and the
power port where I plug the damn thing in.  Ugh.  ARM
Computer - www.armcomputer.com - avoid them. Their customer
service sucks, too. I guess I'm glad I didn't buy it from
one of the REALLY sketchy bargain-basement places.

I'm also kind of peeved that a power hiccup shut down mir
(my desktop) in the middle of the morning, ruining a pretty
nice uptime of nearly three months. It's a really
silly thing to get annoyed at, but then again it also takes
a good fifteen to twenty minutes to fsck the three hard
disks, so it's a definite inconvenience.

Played around with metaspace a bit. Java3D is, of course,
still buggy, which makes development difficult. I wrote a
metaobject which is a rotating bar to see how well "naïve"
animation works - turns out we have a problem with messages
being handled out of order. For some stuff that's not an
issue, but for a sequence of property changes like animation
frames, this is a big problem.  I haven't come up with a
good way to handle it yet.  Sigh.  Maybe I'll go back to
hacking MIM. At least that won't require my laptop (see
above) to work on.
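One way to tackle the out-of-order message problem is to stamp each property-change message with a sequence number and buffer anything that arrives early. Here's a minimal sketch of that idea; the class and method names are my own for illustration, not anything from MOS:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Sketch: buffer messages that arrive ahead of their turn, and apply
// every message in strict sequence order.  Names here are hypothetical.
public class OrderedDelivery {
    private final Map<Long, String> pending = new HashMap<>();
    private long nextSeq = 0;
    private final Consumer<String> apply;

    public OrderedDelivery(Consumer<String> apply) {
        this.apply = apply;
    }

    // Called whenever a message arrives off the wire, in any order.
    public void receive(long seq, String payload) {
        pending.put(seq, payload);
        // Flush every consecutive message starting at the next expected one.
        while (pending.containsKey(nextSeq)) {
            apply.accept(pending.remove(nextSeq));
            nextSeq++;
        }
    }

    public static void main(String[] args) {
        List<String> applied = new ArrayList<>();
        OrderedDelivery d = new OrderedDelivery(applied::add);
        d.receive(1, "frame-b");   // arrives early, gets buffered
        d.receive(0, "frame-a");   // unblocks both, applied in order
        System.out.println(applied);
    }
}
```

The cost is a little latency when a message goes missing, but animation frames at least stop playing backwards.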

Doing Cannabis Reform Coalition (CRC) stuff too. Crap. I
need to make a web page for that. Note to self: WORK ON CRC
WEB PAGE!!!

Made a big API change to MOS. Error reporting takes place
using exceptions instead of return values. Not really the
sort of change you want to make at this stage of coding with
20,000 lines already written, but it actually only took a
couple of nights of tedious combing through compiler errors
and adding try/catch blocks around all the affected method
calls.  Whew.  Except for polymorphic typing (which
potentially will be one of MOS's most interesting features)
it's getting to be feature-complete, which is VERY good,
because that means I'll be able to actually work on making
it stable, and more interestingly be able to work on MOS
applications (MIM and Metaspace.)
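For the flavor of the change: the contrast between the two error-reporting styles looks roughly like this. The exception class and method names below are invented for illustration; the actual MOS API isn't shown here:

```java
// Sketch of the API change: error-code returns vs. exceptions.
// MosException and setProperty are hypothetical names, not real MOS API.
public class ErrorStyles {
    static class MosException extends Exception {
        MosException(String msg) { super(msg); }
    }

    // Old style: the caller must remember to check the return value,
    // and errors can be silently ignored.
    static int setPropertyOld(String name, Object value) {
        if (name == null) return -1;
        return 0;
    }

    // New style: failure can't be silently dropped; the compiler forces
    // every caller to catch or declare it -- hence the nights of combing
    // through compiler errors.
    static void setProperty(String name, Object value) throws MosException {
        if (name == null) throw new MosException("null property name");
    }
}
```

The upside of all that tedium is that every call site that used to swallow an error code now has to handle the failure explicitly.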

Whew... Site keys work. This means that most
communications (and all that an attacker would actually be
interested in) are now fully encrypted in MOS. I don't want
to have to do that again. Not that it was particularly hard
to implement (actually I was surprised at how little code it
actually involved) but it was not fun at all to debug.  Then
again, I did do the whole thing in two nights, so I suppose it
could have been much, much worse...
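For anyone curious why it's so little code: the MOS site-key protocol itself isn't spelled out here, but the general shape of symmetric encryption under a shared "site key" in the standard Java crypto API really is just a handful of lines. Everything below is a generic sketch, not the actual MOS implementation:

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

// Generic sketch of symmetric "site key" encryption with the Java
// crypto API.  NOT the actual MOS protocol -- just the general shape.
public class SiteKeyDemo {
    // Encrypt then decrypt a message under a freshly generated AES key.
    public static String roundTrip(String message) throws Exception {
        SecretKey siteKey = KeyGenerator.getInstance("AES").generateKey();

        Cipher enc = Cipher.getInstance("AES");
        enc.init(Cipher.ENCRYPT_MODE, siteKey);
        byte[] wire = enc.doFinal(message.getBytes("UTF-8"));

        Cipher dec = Cipher.getInstance("AES");
        dec.init(Cipher.DECRYPT_MODE, siteKey);
        return new String(dec.doFinal(wire), "UTF-8");
    }
}
```

The hard part, as noted, isn't the encryption calls; it's debugging the key negotiation that has to happen before any of this can run.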

So the upside is that MOS is now (probably) really,
really secure.  The downside is that all the key
negotiations that have to happen before things can talk to
other things take ages.  I don't know how well this scales, but
until MOS gets another order of magnitude faster we may be
looking at lengthy login times for complex worlds. We'll
see how things pan out, though. The security measures of
MOS are one of its most important features.

Oh, I did get the performance up though by re-writing the
message parser. MOS throughput went from 11 messages/sec to
about 150 :-) Same XML message format, but parsing overhead
is MUCH less now. Good.

Spent all day benchmarking MOS. Ouch. It's rather painful
to realize the software you've spent the last year working
on is only capable of a throughput of 11 messages/second.
The problem is primarily that XML
is very heavyweight: parsing a small message takes
about 25 milliseconds, so the current system has a maximum
theoretical throughput of only 1000/25 = 40
messages/second! That's really, really bad.

So I guess a couple possibilities present themselves. The
first would be to write my own specialized, optimized XML
parser. The other would be to create a new, binary-ish
encoding that expresses the same information in a tighter
and more easily parsed format. It's a tough call, actually,
and ironically I expect it would be pretty close to the same
amount of work.

Now that I think about it, a binary encoding would also have
the advantage of being able to transmit data like numbers
much more easily (as such things are presently kept in text
encodings and have to be converted back and forth, ouch, I
know.) Hmmm Hmmm. This is going to take some thought.
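Just to make the number-transmission point concrete: here's a rough comparison, using the standard library's DataOutputStream, of what a binary encoding buys you over text. This is only a sketch of the general technique, not a proposed MOS wire format:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Sketch: a 32-bit int in a binary encoding is always exactly 4 bytes,
// written and read with no string formatting or parsing step.
public class BinaryNumbers {
    public static byte[] encode(int value) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeInt(value);   // fixed 4 bytes, big-endian
        out.flush();
        return buf.toByteArray();
    }

    public static int decode(byte[] bytes) throws IOException {
        return new DataInputStream(new ByteArrayInputStream(bytes)).readInt();
    }
}
```

Compare that to a text encoding, where the same value costs up to ten digit characters plus an Integer.parseInt on every read; multiplied across every numeric property in every message, the conversion overhead adds up.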

Note to self: to find the whistle in the second quest, walk
THROUGH the wall into the center part of the "A" in the
second level. Not the first time I've missed that.

Strictly speaking, I didn't get jack shite done today.
Turned in the last lab for CS201 and got really far in Zelda
II but other than that... Where did all the time go? Damn.

Anyway. Was gonna write a new chat client that used a
"mario" style interface, e.g. you could jump around on
platforms like all those old NES games. It could be a
really l33t way to chat, mixing talking and interactivity in
some amusing ways :-) Then I realized it would make more
sense to hack gfxchat to do the platform stuff rather than
writing a whole new program.  THEN I realized that it was
11:00pm and I didn't feel like writing any code. So I wrote
a poem instead :-)

A tail around a corner
A toy rolls by
Memory of a purr
Black and white spots
A flash of fur

Streaking through the house
Death to ping-pong balls!
Stop!
Time for food
Time for grooming
Time for sleep

Noble, poised
This is he
Living with us
But not below us
One of us
The feline, the king.
(other poetry on my website)

nixnut -
XML is just a syntax. Yes, it is a metalanguage in the
sense that it defines how to specify real, useful languages,
but it is in the design of these languages where we need
more meta-level markup. One thing that has been rattling
around in my head recently has been how to design a language
that actually coerces users into using content-based markup
so that programs can reason about a page's contents more
intelligently than the modern complex web page that's just a
million nested tables.  The greatest boon to intelligent
agents and the Internet would be a web that computers could
extract meaning from as well as people.

Okay, I'm ranging everywhere from hypermedia and virtual
reality to intelligent agents and artificial intelligence
here, but I think this is the future, and the future is
going to be cool :)

"Architecture and Assembly language"  **growl**  A little
from column (b), a LOT from column (a). Electrical
engineering for computer science majors. Lovely. Did I
mention that this class _FAILS_ half the students in it
every semester? And that half of those are FAILING IT FOR
THE SECOND TIME?! Well, I went and talked to a professor
about it at least. Maybe the department will get a clue...
some day.

jmason, nixnut and jschauma
responded to my little rant on hypermedia. Well, the
superficial part about why HTML sucks, not the important
bits :-)

Someone mentioned that it could be done server side. Of
course you're right, the server can do ANYTHING - my example
of slashboxes or netcenter channels is just a very
sophisticated sort of server-side include. The idea here is
doing it client-side.  How can we get more intelligence into
the client?

Frames are evil, layers are proprietary, and I don't think
javascript could actually accomplish this sort of multilevel
document nesting. What I'm thinking of is beyond just
sticking one piece of a web site in another (as banner ads
do); rather what would be really cool would be for there to
be an interaction between the inner and outer documents on
various levels - visually (layout) and conceptually (outer
documents are more general, inner documents more specific,
or other sorts of information relationships.)

What I think would be really cool, is if ON YOUR HOME PC
(none of this centralized "portal" crap) you could combine
many inflowing information sources (CNN, Slashdot, Memepool,
Sluggy Freelance) into a single page that is best laid out
for YOU - personalized, managed by an intelligent agent,
most useful to you, and best of all you can screen out the
advertising :-)

I don't think the web is up to this. If HTML were a
stronger metalanguage, meaning it described the MEANING of
the data it wrapped tags around, then we'd be able to
analyze so much more from the average HTML document. Modern
HTML is a bunch of layout information with a few vestigial
tags from the times people tried to make it mean something.
<em> vs <i> anyone? Or <address>?

Whoops. Got a meeting to go to. More rants on this subject
tomorrow :-)

Dude. Objects actually go away now when they're supposed
to. This is cool. Finally I won't have to restart the
server every time something screws up :)

Talked to my friend Reed for a long time last night about
the future of hypermedia. What the web could have really
used would be an <include> tag, that let you include bits of
HTML from other documents into your own, just like you can
inline offsite images.  If you could nest documents in
documents in documents you could build an incredibly
dynamic, even sort of organic thing - more of a complex
multilevel expansive document space than a mere web page.  Here's an
example: take oh, slashdot slashboxes, or netscape netcenter
channels. These are headline services. They get a feed
from these other sites, and reformat it and integrate it as
part of their own page. Now think of this: imagine if they
simply linked back to the originating site, and that site
could then paint inside the box whatever it liked? You can
still maintain a single cohesive web page, but the browser
is actually compositing many pieces of other sites together
into a single sort of digest.

By better laying out our hypermedia information, in
hierarchical/hyperlinked structures, the browser can become
more of an intelligent agent.  Power to the end user!  The
internet right now is racing in two directions - one towards
more centralization, with portals and ASPs and backbones,
the other towards more decentralization, with gnutella and
freenet and their kin.

The problem, however, is how to balance the order (and
control) gained/afforded by centralization, vs the chaos
that is the current rather primitive crop of distributed
systems?  The web does this surprisingly well, but only by
having multiple, redundant, _centralized_ indices of the
web. Distributed centralization. Gotta love it :) But the
web is also incredibly noninteractive, and that's really
bad. When people say they use the web "as an applications
platform" they really mean they're using HTTP to download
some ActiveX or Java controls. The idea of a web page as an
application only makes sense in the context of forms and
javascript - but these are not real-time technologies. A
dynamic hypermedia system needs to be able to respond to
changes in the system as they happen, and the web is
unsuited to this.

In case you hadn't guessed, the project I'm working on (ADR) has a lot
to do with this :)

Birthday yesterday (July 29th). I'm twenty now. No longer
a hotshot teenager. Damnit. Now I really have to do
something cool to stay ahead of the next generation :)

Left sputnik (my laptop, my desktop is named mir :-) plugged
into the wrong wall socket (it's on a switch, meant for a
lamp, which means you can turn it off) all night. Woke up
to find the battery completely flat. WHOOPS!

Caught the illustrious graydon on
IRC, had a nice chat about
CORBA. I probably should have looked into it more before
implementing MOS (Meta Object System, see Amherst Distributed
Reality for the specifics.) Not that I think CORBA
would have been a perfect fit for our application, since it
has some specific feature requirements, but because it's
good to have another point of view, especially when the two
systems are conceptually similar.

Finally rounding out the lifecycle of objects though.
It
was sort of bad that you could acquire and manipulate
objects, but not get rid of them :) But things ought to get
GC'd now... I hope.

Arrrrg! I wish Java3D 1.2 for linux would come out.
The
interesting parts of our project (working virtual reality)
are at a complete standstill. It's very frustrating.

I'll keep y'all posted, 'cause when this finally comes
together
its gonna be mad cool :)
