So, Christmas has come and the New Year has gone by, as it
usually does at this time of the year. The family has been
relaxing, and my mind has been floating around, fishing for
ideas on the pond of imagination.

So, as usual, I spent some time trying to solve a
very interesting puzzle: what is nature made up
of? One thing I just can't understand is why we make
such a fuss about the number of dimensions of space. It's
the number of degrees of freedom in the model. But for goodness'
sake, a pencil has six degrees of freedom: three rotational
and three translational. My point is that there is likely
a local structure that we cannot see that has a gazillion
degrees of freedom (well, this is a free-source blog, isn't
it? :-).

It looks to me pretty premature to twist people's minds by
talking about high-dimensional space the way we do. On the
other hand, the number of degrees of freedom is important
mathematically, no doubt about that.

The next thing that just forces me to try to think on my
own is how we argue about the fundamental parameters of
space. In some science shows I've seen on the tube, the only
explanation offered is that we would not be here to analyze
it otherwise, and that's simply why. That is supposed to be
why the parameters are feasible, while values only a small
distance away would lead to an unfeasible solution.

Here is where the engineer in me just goes into rocket mode.
What!? This is not the only argument. It is just as likely
that the reason is that the laws are simpler and demand
fewer parameters. To visualize: suppose we have a
higher-dimensional model that works really well. A simpler
model with fewer parameters will then be embedded very much
like a surface in space, i.e. move a little in the wrong
direction and you are off the surface, and the setup will be
unphysical.

After a couple of weeks of hacking, fixing bugs, etc. in the
guile-unify module, the example in my previous post runs
efficiently according to the discussion in that post - not
with the hackish slow method I used previously. So it seems
to work - cool. The next step is to understand another
problem one has when trying breadth-first-like algorithms:
they can have too much computational complexity.

I've been wondering what example to use, and I think it
would be interesting to go after automatic proof generation.
I decided to play with the leanCop Prolog solver for this
application in order to understand the needs. A first
approach will be to make proof searches that postpone based
on stack usage, e.g. try to search for a simple proof
before a more complex one.

Now, there is a piece left that is needed in order to dive
into this problem: one probably needs a way to cut out
branches and trim the reasoning trees. How can this
algorithm be constructed? Well, assume a certain level L.
At some point one will hit a memory barrier K. Then one
needs to decide about cutting out a fraction F of the tree,
because of the high complexity of typical algorithms. This
can be done randomly or, e.g., according to an importance
index. So the tactic here is to bring out all the importance
numbers, sort them, and search for the level separating the
whole tree into the correct fractions. Notice that it can be
smart to add a small random variable to the index in case
there are many similar importance numbers - then one gets a
random sample. After finding the threshold, the C code can
trim the tree according to the threshold in an adaptive and
globally sound way.
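As a sketch of that thresholding step (illustrative Python, not the actual guile-unify C code; all names are mine):

```python
import random

def trim_survivors(nodes, keep_fraction, jitter=1e-9):
    """nodes: (id, importance) pairs.  Sort the importance numbers,
    find the threshold that keeps roughly `keep_fraction` of the
    tree, and return the surviving ids.  A tiny random jitter breaks
    ties, so among many equal importances we keep a random sample."""
    jittered = {nid: imp + random.uniform(0.0, jitter)
                for nid, imp in nodes}
    ranked = sorted(jittered.values(), reverse=True)
    keep = max(1, int(round(len(ranked) * keep_fraction)))
    threshold = ranked[keep - 1]
    return [nid for nid, _ in nodes if jittered[nid] >= threshold]
```

The real code would prune the redo tree in place instead of returning a list, but the threshold search is the same.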

And of course there are some holes remaining, like making
sure that Scheme variables sitting in the redo tree get
garbage collected. But this is boring, though important,
technical stuff.

There is another idea I would like to follow later. The
redo tree can be seen as a storage of a set of permutations
of a few symbols or numbers. Therefore, later on, some kind
of compression of the redo tree will be used. Typically
you have a sequence of (type pointer val type pointer val
...) where the pointer points to a Scheme variable or
immediate that will be reset to the new value. Currently we
waste space by storing (type pointer val) as three 64-bit
elements on a 64-bit platform. Type can be just one byte,
and we could devise a control byte describing the sizes of
pointer and val as offsets around stored base points, e.g.
choosing between 1-, 2-, 4- or 8-byte representations around
4 stored values. Many applications will then need just 4-5
bytes to be able to restore a state for a variable, which is
maybe a six-fold saving of memory (these are the benefits of
C). Of course one could let a control bit decide whether a
stored base value needs resetting, and use some kind of
adaptive learning to compress, but I leave that out for now.
Actually, any technique found in gzip can of course be used
if one likes. We will do it in the order of one extra scan
of the tree, not doing an actual redo, and therefore the
loss of addresses due to variable-sized atoms may not have
such a great impact. On the other hand, it is possible to
store pointers to speed up plain tree searches in the redo
tree.
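A sketch of the packed encoding (illustrative Python standing in for the C; the exact layout is my assumption of one workable format, not the module's actual one, and deltas are assumed non-negative):

```python
def width_code(delta):
    """Smallest of 1, 2, 4 or 8 bytes that holds the unsigned delta,
    plus the 2-bit code selecting that width."""
    for code, width in enumerate((1, 2, 4, 8)):
        if delta < 1 << (8 * width):
            return code, width
    raise ValueError("delta too large")

def encode(entries, ptr_base, val_base):
    """entries: (type, pointer, val) triples that would naively take
    three 64-bit words each.  Instead emit: one type byte, one
    control byte (2 bits each selecting the widths of the pointer
    and val deltas), then the deltas against the stored base points."""
    out = bytearray()
    for typ, ptr, val in entries:
        pcode, pwidth = width_code(ptr - ptr_base)
        vcode, vwidth = width_code(val - val_base)
        out.append(typ & 0xFF)
        out.append(pcode | (vcode << 2))   # control byte
        out += (ptr - ptr_base).to_bytes(pwidth, "little")
        out += (val - val_base).to_bytes(vwidth, "little")
    return bytes(out)
```

For an entry that lands close to its base points this is 4 bytes instead of 24, i.e. the six-fold saving mentioned above.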

I hope I'm not boring you here. But one of the reasons I
write this is to help me think about what to do next.
Ergo, it helps OSS ;-)

And what you note is that this is a more compact reduction
of pattern matching than doing it with the standard VM of
Guile. So the end result is that code executed on this VM
is both faster and more compact than using the standard
setup. But of course, if we would like to compile this to
the native platform, then the standard compilation to pure
Scheme probably has an edge.

Interestingly though (I'm implementing a Prolog on top of
Guile), this pattern can be generalized to the case where A,
the input, is a unifying variable. The destructuring will
look almost the same, but we need to tell the VM that we are
in a mode of destructuring a unifying variable, meaning that
if the variable is not bound we will set it to a cons and
push two new unbound variables, (car A) and (cdr A), onto
the stack.
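The destructuring rule can be sketched like this (illustrative Python, not the guile-unify VM code; `Var`, `walk` and the trail are my stand-ins for the real machinery):

```python
class Var:
    """A unifying (logic) variable: unbound until `ref` is set."""
    __slots__ = ("ref",)
    def __init__(self):
        self.ref = None

def walk(t):
    """Follow variable bindings to the current representative term."""
    while isinstance(t, Var) and t.ref is not None:
        t = t.ref
    return t

def destruct_cons(a, trail):
    """Destructure `a` against a cons cell (modelled as a 2-tuple).
    If `a` is an unbound variable, bind it to a cons of two fresh
    unbound variables and record the binding on the trail so it can
    be undone on backtracking.  Returns the (car, cdr) pair."""
    a = walk(a)
    if isinstance(a, Var):                 # unbound: invent the pair
        car, cdr = Var(), Var()
        a.ref = (car, cdr)
        trail.append(a)
        return car, cdr
    if isinstance(a, tuple) and len(a) == 2:
        return a                           # already a cons
    raise ValueError("not unifiable with a cons")
```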

Note: the intention is to make an addition to a Prolog
engine to accomplish the above.

postpone_frame(Level, Fraction) will start executing
all postpones above Level*Fraction. Then it will execute
all postpones above Level*Fraction*Fraction, and so on. You
may want to change the algorithm to your taste, but the main
idea is not to use a full sort but tasty chunks of work
that basically flow from most interesting to least
interesting. I will add a lower level as well, below which a
direct cut is taken.
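A sketch of that schedule (illustrative Python modelling the idea, not the actual implementation; scores and thunks are invented for the example):

```python
def postpone_frame(postpones, level, fraction, cut_level=0.0):
    """postpones: list of (score, thunk) pairs.  Repeatedly run every
    postponed thunk whose score is above the current threshold
    (level * fraction, then level * fraction**2, ...), so work flows
    roughly from most to least interesting without a full sort.
    Anything at or below `cut_level` is dropped outright."""
    threshold = level * fraction
    pending = list(postpones)
    while pending:
        later = []
        for score, thunk in pending:
            if score <= cut_level:
                continue                   # direct cut
            if score > threshold:
                thunk()
            else:
                later.append((score, thunk))
        if not later:
            break
        pending = later
        threshold *= fraction              # next, coarser level
```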

Actually, the current working implementation is very
costly: every postpone is visited, and if the criterion is
not met (the state is still recalculated!!!) it will
postpone again (stupid, yeah I know, but I wanted a simple
working starting point).

How cool it is to have fun and do something useful. Well,
at least on paper. My main focus for some time has been to
learn Scheme and help out that community. It's a really
nice experience - thanks for that.

Anyhow, I just entered a new project on Advogato - guile
unify - which is my latest contribution. I've been hacking
on it for some months, and feel that it has some interesting
twists. So what is it? Well, it's exploring the combination
of Scheme and Prolog.

One of the unique features of Scheme that has etched
its pattern into my brain is continuations - mainly because
it was so hard to grok that call/cc stuff when I first
encountered it. And this is a really interesting trait to
breed into this marriage between a Schemer and a Prologer.

Prolog is a combination of a variable stack, tree
search, and unifying matches. To introduce continuations,
one probably ends up needing redo trees. A path from the
root of the tree out to a leaf passes variable "set
commands" in such a way that the state of the variable stack
is restored from a blank base setup at the leaf, where a
closure is found ready to take a continuation action. By
making a tree one can save some memory and perhaps also some
time. I got this working, and that prompted me to give the
project some light.
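The root-to-leaf replay can be sketched as follows (illustrative Python; the real structure lives in C and stores its set commands far more compactly):

```python
class RedoNode:
    """A node in the redo tree.  Each node holds the 'set commands'
    (variable, value) that extend its parent's state; siblings share
    the parent's prefix, which is where the memory saving comes from."""
    def __init__(self, parent=None, sets=()):
        self.parent = parent
        self.sets = list(sets)

def restore(leaf):
    """Replay the set commands on the path root -> leaf, rebuilding
    the variable-stack state from a blank base setup."""
    path = []
    node = leaf
    while node is not None:
        path.append(node)
        node = node.parent
    state = {}
    for node in reversed(path):            # root first
        for var, val in node.sets:
            state[var] = val
    return state
```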

Now, actually I'm lying. Pure continuations will come,
but I have targeted the method at a limited kind of
continuation, e.g. a postpone command.

Consider writing a chess solver. You build up state
information in various structures as you go, and would like
to use that information to draw strategic conclusions for a
certain move. You may want to cut unstrategic moves, right!
Well, actually this may prompt the developer to do a
lot of tuning to get it correct. So an interesting idea is
to store the state of the game, save that continuation on a
list, and continue with that list if resources are
available. This approach is more adaptive and probably leads
to less tuning for the developer. Note that storing this
state for millions of continuations can put a severe strain
on memory and also raise the complexity of the continuation.
So that's why compressing all the information in a redo
tree, very much like saving a word lexicon efficiently, may
be interesting - actually, I don't know about this, I just
made it work!! And it was great fun and a challenge to do it.

Consider trying to logically prove a statement. It could be
cool to try to search for a small and elegant solution
before trying those infinite paths. So just monitor stack
usage and postpone at the points where the stack reaches
above a level, then continue to the next level if needed,
and so on. Just write the usual Prolog stuff and insert a
few commands and you will have it. Nah, not implemented yet,
but it is simple enough.
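The leveled search can be sketched like this (illustrative Python; a depth budget stands in for stack usage, and the rule format is invented for the example, not leanCop's):

```python
def prove(goal, rules, depth, limit):
    """Depth-limited backward chaining: a branch deeper than `limit`
    is treated as postponed (failed for now), so simple proofs are
    found before complex ones.  rules: (head, body-goals) pairs."""
    if depth > limit:
        return False                       # postponed to a later level
    for head, body in rules:
        if head == goal:
            if all(prove(sub, rules, depth + 1, limit) for sub in body):
                return True
    return False

def prove_leveled(goal, rules, limits=(1, 2, 4, 8)):
    """Retry with a growing budget, i.e. continue to the next level
    only when the cheaper search fails.  Returns the level that
    succeeded, or None."""
    for limit in limits:
        if prove(goal, rules, 0, limit):
            return limit
    return None
```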

Yeah, trivial, but it shows how simple this is to code. Now
it's actually easy to make a stack-size criterion, and that
will be a nice hack to try, just to see if it improves
things.

Oh, the speed is not too bad, and it takes advantage of
tail calls, prompt logic, and CPS. It turns out CPS is not
bad for speed - but it does affect it.

Now, entering true continuations is just a matter of hard
work. I will need to write a special GC for this structure,
because the data structure is tuned to be good at the
scenario above. True continuations will be a cool creature,
but a second-class citizen in my view, one that needs some
caring.

I will try to find time to talk a little more about the
internals of all this. Maybe it's all a nice sand castle,
maybe it is severely cool. Time will tell. I will continue
to hack on it. If you have any questions, or would like to
learn or help, join the guile-devel community and ask about
guile unify.

I ended the last sequence of blog posts with exploring
looping, then basically stopped and started to learn about
type theory and Prolog using Qi.

Right now I'm working with this engine to write a type
system that works pretty much like the Lisp type system,
e.g. if we can deduce a type, then use it! This type engine
will be used to compile Qi to Lisp/Clojure/Scheme/Go? or
whatever Lisp-like environment you've got.

Well, every Christmas I spend some time thinking about space,
and some ideas form; it's a fun and entertaining game. Maybe
not correct thoughts, but entertaining. Now the new year
starts, and it is back to business with computer quizzes
instead of trying to find the dream of Einstein. Anyhow, I
made a small document describing my (well, you never know,
people tend to independently walk the same paths) view of
how the world is constructed.

I promise, no more of this, until next Christmas.

----------------------------------------

We have the right to think, correct or not; in a world of
only correctness, you would simply drown in mathematics and
never learn to swim in it.

I spent this Christmas reading Simon Singh's Big Bang. And as
Simon pretty much says, I say as well: what a wonderful world.

I'm mentally affected by my education in that I constantly
ask myself whether the things presented in a popularizing
book are really true. Did he mean that? Sure, people present
things in a better light when asked about them afterwards,
and so on. There is a constant flow of such notes in my
mental left margin. Anyhow, I'm now really impressed by the
puzzle that scientists have assembled to achieve such a
solid ground for the Big Bang theory.

There are always weak spots in an argument, but the core
argument here is really solid in my view.

I like the formulation that uses the potentials
under the Lorenz gauge, if I remember everything correctly.
Then all components follow a wave equation, and there is one
linear first-order constraint that looks close to a simple
continuity equation. Now, I wanted to understand what this
actually meant and searched for some example that would
reveal it to me. And there is one telling partial solution.
You could have a solution where you make the constraint a
continuity equation. You will have a sort of "magnitude of
disturbance" field in the scalar potential, and the vector
potential will be a sort of momentum potential, e.g. the
scalar potential times a velocity field of constant speed c.
It's a really trivial and very particular solution. But you
can modify it: you can assume that if along a direction you
have the same disturbance, then you can choose any velocity
you like.
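A minimal sketch of the calculation behind this (the gauge condition is standard electromagnetism; the ansatz is my reading of the "momentum potential" idea above, not an established result). In the Lorenz gauge the first-order constraint is

```latex
\nabla \cdot \mathbf{A} + \frac{1}{c^2}\,\frac{\partial \phi}{\partial t} = 0 .
```

Substituting the ansatz A = phi v / c^2 with a constant velocity field v gives

```latex
\frac{1}{c^2}\left( \frac{\partial \phi}{\partial t}
  + \nabla \cdot ( \phi\, \mathbf{v} ) \right) = 0 ,
```

which is exactly a continuity equation for phi with flux phi v: the scalar potential is transported like a conserved density with velocity v, and |v| = c recovers the "constant speed c" case above.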

Now, in my view this captures some essential features of
electromagnetism. A constant stream of light does not depend
on the speed of the stream, and it is the information that
is constrained to the speed of light - not necessarily the
actual physical disturbance transport.

Note that if we have just one stream, the transverse
direction has to be transported at the speed of light,
which indicates plane waves.

Even if this is a simple particular solution, one would
probably be able to deduce Maxwell's equations after closing
the space using Lorentz transformations.

OK, this is just mathematical play, but from my position of
knowledge it poses a very interesting question. It's just
some speculation from a guy who is not an expert. But I
still hope that I've teased your imagination, so please
have fun, play with the mathematics, enjoy the stars, and
have a very happy hacking new year.

I'm coding on a library called BoopCore right now. The
reason is that I was thinking about how to make the code for
the new version of Qi, called Shen.

Oh, I did a small test with the Einstein riddle. I got
basically the same speed as gprolog, so the "prolog"
compiling part is not too bad.

My idea, though, is that it should be used by designers of
PL tools and by people with specific needs. As an example,
backtracking and unification can be really customized and
made fast.

The program itself is pure magic, e.g. it has some really
poor design. But this is OK for a first version. That
version has to have all the features, which are not decided
from the beginning but grow as bugs are found and new
features have to be implemented. This means I am exploring
ideas, fighting the beast to get them implemented, but
throughout the whole process enjoying it as if it were the
best wine in the world.

I have been quiet for some time here; work is calling,
family is calling, and a new version of Qi ready to be
explored was released. In the meantime I have been
studying the sequent calculus and the Qi source code to
learn how to mod it to my taste.

So perhaps my coolest hack is a type-driven macro
framework. So, a little code:

The first line says that for any kind of arguments and any
type of evaluation context, first ask for its usage, and the
return values will be stored in Usage. This will send out
the type system to track usages according to some mechanism;
this is done the first time. The next time, if not
inhibited, (ask? Usage) will be negative and the system goes
on to expand according to the function signature and the
properties of the value of Usage. In (? Usage T), T is the
type that is returned from the function in the non-ask
context, e.g. (+ (ex1 X1 Y1) (ex1 X2 Y2)) should type-check!!

It works stupidly, by type-checking repeatedly: whenever
something is asked for, a retry is done. This process can be
made much more effective by remembering the type deduction
and using this memory at nonvolatile sites in the deduction
tree, a property that somehow has to be passed up the
deduction tree. Anyway, (ask? Usage) will be true if someone
later in the deduction has inhibited it. So if an ex1 value
is used in the argument of ex2, which also asks for
information, then ex2 inhibits ex1 when it asks for
information. (To speed up this deduction process, ex1
should be marked as nonvolatile.)
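My reading of that retry scheme as a fixed-point loop (illustrative Python, definitely not the Qi implementation; `step` abstracts one round of type checking on one node and reports whether a retry is needed and whether the node is nonvolatile):

```python
def deduce_types(nodes, step, max_rounds=100):
    """Re-run the type-checking `step` over all nodes until no node
    asks for a retry, caching results for nodes marked nonvolatile
    so they are not recomputed on every round.
    step(node) -> (result, needs_retry, is_nonvolatile)."""
    cache = {}
    for _ in range(max_rounds):
        retry = False
        for node in nodes:
            if node in cache:
                continue                   # nonvolatile: remembered
            result, again, nonvolatile = step(node)
            if nonvolatile:
                cache[node] = result
            retry = retry or again
        if not retry:
            return cache
    raise RuntimeError("type deduction did not converge")
```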

This is a quite general construct, and of course the process
of macro expansion, usage-information exchange and so on can
be repeated recursively.

So the macro can use information about how the result of the
form is used later on in the code, and under what signature
the type system thinks this form will be evaluated. So there
is a message system that works in both directions in the
code graph (what signals do I get from the arguments, and in
what context, or what features of what I provide, are used).

There are weak points to this construct, but I think I now
have one big chunk that can be put to use when I go deep
into optimizations of loop macros. At least I now have a
powerful tool to do optimizations by using the computer as
much as possible and my coding fingers as little as
possible.