Anyway, it got a bit too late yesterday and I ended my last
post without checking it very well. Anyone wanting to
implement delayed variables should try to skip the case
where you cannot know for sure whether things yield or not.
I just don't see any use for it, and it can quite easily
create infinite loops. Still, the description I presented
contains some good seeds of ideas that could grow into a
checker that warns about dangerous constructs, if you do
need to implement this case as well.

How do I create those delayed variables? Well, I'm lazy.
Each delayed value is composed of a value and a list of
lambdas. When I know the value I fill it in and then execute
the lambdas, which propagate the value through the whole
delayed graph. It is expensive, of course, but doing it this
way is, well, my way of having fun. Anyway, normal stateless
functions can be blessed to use delayed variables. It works
by checking whether an argument is delayed; if it is, the
correct lambda is added to the lambda list of the incoming
delayed argument(s). This lambda just checks whether all
arguments have now materialized and, if that is the case,
calculates the function, puts the value in the returned
delayed value, and then evaluates all the lambdas in the
returned delayed value's lambda list.

Everything is not what you think it is; reality is a
retinal image. Nothing... makes you think.

What is so cool with the Internet and forums like Advogato
is that ideas, words and units of thought flow from mind to
mind, mutating, mixing, replicating, bouncing, in a very
subtle way. The communication is many-to-many, and this is
indeed important: we must be able to communicate in order to
make sure that the world won't fuck up.

Introspection is a nice, powerful feature that seems like a
hot topic these days. I try to practice it a lot, not only
in my code, but on myself. It is kind of cool to dig out the
associations and logic behind an idea.

One idea I got was to check out Scheme. There is some nice
food for thought in that community. I've been planning to
read about lazy evaluation, but here is my current view on
delayed evaluation.

If D1 and D2 are delayed, then basically f(D1,D2) : D3 is
delayed as well if any of its arguments is delayed. Now
this means that D1->D3 and D2->D3, and you see that what you
have is a directed acyclic graph (DAG) that represents your
overall delayed state, with a pointer to a function at each
node. So it is not at all a difficult concept. But what
about (if D A B), where D is delayed? Here it seems you need
to store the values of both A and B in order to be able to
use one of them later, and this is an actual difficulty. I
will need only one of A and B, but I have to get both values
in order to later retrieve the correct one. If you have
recursion in A or B you may find your way into an infinite
loop. So if A = (g C E) you need to store (g C E) and
continue so recursively. With no gotos and no internal state
in your code you will then be safe. If you have state, you
then need to store the state related to both A and B. A key
issue here is to avoid relation pollution, e.g. you do not
want to store state that is not needed to retrieve the value
later.

In my (switch-if P A B) construct, if P then A is
incremented, else B is incremented. I try to avoid using
delayed values in the P, and storing the state of A and B
may seem to solve the problem. But will it be safe? No,
there is another problem. A and B will send signals
backwards in the system about yielding values, about
finishing iteration and about skipping stuff. So depending
on the possible signals sent back, you will have both
complicated knots to untie and more trivial ones. Some of
these questions can be mitigated by simply returning values
of a special type, like a skip value. A quit value can also
be implemented.

Consider that we are under a delay, and now the delay is
activated with a skip value. This is the interesting
construct. Consider an update of internal state:

(update Var Vals Oldval)

The simplest choice of logic is that if any Val is a skip
value, Var is a skip value and equal to Oldval. A function
just returns a skip value if any of its arguments is a skip
value. This logic is then transported through the delay
graph.
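As a sketch of that logic in Python (the `SKIP` sentinel and the helper names are my invention, not the actual code):

```python
SKIP = object()   # sentinel marking "no contribution this round"

def update(vals, oldval, combine):
    """(update Var Vals Oldval): if any Val is a skip value, keep Oldval."""
    if any(v is SKIP for v in vals):
        return oldval
    return combine(vals, oldval)

def skipping(f):
    """A function simply returns SKIP if any of its arguments is SKIP."""
    def wrapper(*args):
        if any(a is SKIP for a in args):
            return SKIP
        return f(*args)
    return wrapper
```

Wrapping every function with `skipping` is exactly what transports the skip value through the delay graph: one SKIP input poisons the whole downstream computation for that round, and the state keeps its old value.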

The finish signal is a little more subtle, but basically
each object has a finish value Vf, and if object Y at an
update gets a finish value then it should fire its finish
function to update Vf, the variable representing the
finishing value. Otherwise the transportation and action of
the finish type is like skip, but with higher precedence.

The relate is just a relation that represents the transport
of the finish all the way to the current Ret, if this has
been delayed and you have to iterate a few more iterations
than needed.

Trying to handle the yield signal is delayed. No, just
kidding. You need to do the following:

(do A B C)

The logic is: take the next A; if A is a yield, stop and use
the value of A, and the next time start with A again. Else
continue with B, and so on. Now if B is delayed and might be
a yield, you need to store the state of the do, e.g. the
state of A, B, C and the internal state of the do, e.g. a
program pointer. Then, at the next update of this object, if
the delayed value has materialized you just continue with
the evaluation of the do construct and everything will be
fine; else you have to chain the actual updating and return
the delayed value.
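A Python sketch of this resumable do (the program-pointer layout, the step protocol and `counting_step` are my guesses at a minimal version, not the real machinery):

```python
YIELD = "yield"          # tag: this step wants to emit a value and pause the do

class Do:
    """(do A B C): run the steps in order; if a step yields, stop and use its
    value, and on the next update start with the same step again. If a step
    comes back delayed, park the program pointer and retry it next update."""
    DELAYED = object()

    def __init__(self, *steps):
        self.steps = steps   # each step: () -> (tag, value)
        self.pc = 0          # program pointer: the do's internal state

    def update(self):
        while self.pc < len(self.steps):
            tag, value = self.steps[self.pc]()
            if tag is Do.DELAYED:
                return Do.DELAYED        # chain the update; pc unchanged
            if tag == YIELD:
                return value             # stop; same step is tried again next time
            self.pc += 1                 # step exhausted: continue with the next
        return None                      # the whole do has finished

def counting_step(n):
    """A toy step that yields 1..n and is then exhausted."""
    state = {"i": 0}
    def step():
        if state["i"] < n:
            state["i"] += 1
            return (YIELD, state["i"])
        return ("done", None)
    return step
```

The key point is that the program pointer survives between updates, so a delayed step neither loses its place nor re-runs the steps before it.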

Can we have infinite loops? Well, at each step you will have
a tree of objects that you update in the natural order; when
you update, subtrees will collapse and new subtrees will be
created. A delayed object is returned at a certain object
and then transported to objects below it in the order.
Hence, if you keep a logic where you always fill in the
missing information at the same object that issued the
delay, you will be safe.

I didn't realize all this before I started writing. This is
why I write. Enjoy.

Many times you hear that it is good to read to your
children, and it is.

Many times you hear that it is good for your children if you
are home with them when they are small, and it is.

But I'm so tired of the focus on the children; the thing is
that it is good for _you_ to take care of your children. At
least that is my experience.

I have found that caring for my child has increased my
understanding of humans. I have found that reading and
telling stories has improved my verbal abilities a lot; my
memory is better and my creativity is well ahead of the time
before her birth. I found out that my wife can bring in a
serious amount of money to the family, and above all she is
not hitting the wall like many other mothers in my country.
I found out that I do more in less time. I found out that I
can work all the time, all day, all night, enjoying it, not
feeling stressed a bit. I guess I just found out about life.

The trick to achieve all this is UBNA or ...

Use your Brain and Not your Arse

This is a constant fight. I realized the key to success was
to try to use my brain when doing stuff and not put on my
autopilot, or what I call my arse. Things might be boring,
but if you use your brain _everything_ can become
interesting. Caring can be boring; caring can be very
interesting. Reading can be boring, but if you put your
heart into it, you will find that your mind and voice, when
playing together, are like any other cool rock band you
like, and after a while you will dig it.

(do (yield 1) (yield 2)) first yields 1 and then yields 2.
If there is no yield in the other argument, that argument
will not be updated, but now we have another yield
statement. How would you solve that?

I just consider the evaluation of the yields as parallel
statements, meaning that the above construct will give

(+ 1 3)
(+ 2 4)

The whole construct is a little esoteric, but you may want
to have the possibility to first yield all the elements of
the first argument and then all the elements of the second
argument, or any combination you can dream up. This is a
sequencing pattern that I have not considered, but it's
interesting to make a note of it in my head. Implementing a
tool to allow all possibilities is not that hard, but
whether it is worth it, I don't know.

I check out the TeXmacs project regularly. I think it has
such potential, and there are some very interesting ideas
floating around in that project.

What are the optimizations I've been thinking about? Well,
my method, although slow, should give a taste of how it
works, if it works, and shed light on interesting patterns,
not new, but interesting. So I'm just looking to find the
elements and how to combine them. This means that I do a
tremendous amount of function referencing and consing! Just
inlining all the small functions, and maybe some tools
around that, will perhaps be enough (the consing is due to
me not using multiple-value-bind), maybe not. Other loop
macro frameworks that are interesting and fun to check out
usually seem to flatten the stuff, and this could be a good
pattern to mimic for success. I have tools that manage the
closured internal variable state, so flattening the variable
space might succeed. Now, flattening everything and using
gotos might be a bit too clumsy, and much of that can
perhaps be done by the Lisp compiler; it might be wiser to
concentrate on some kind of loop-unrolling helper framework,
to find fast paths and cut out unlikely paths to functions.
I don't know. All I know is that while I'm coding with
functions and closures, I keep in a parallel universe in my
head a version of the code that is flattened. One note: some
patterns can function on the stack, but not all can; if
stack variables are faster you may want to think about how
to make as much use of the stack as possible and minimize
the need for the heap.

This might end up as a quantum dot of importance, but I
don't care; at worst it will result in a document with cool
patterns of computing, and that's good enough for me.

Well, now I need to finish the slow slurry singularities of
black magic wormholes between I and I + 1.

I should remind myself to quote Newton. But I don't. Why?
Well, that's implicit. The thing is, every good thought I
have can probably be found in a scientific archive somewhere
or in some project X out there, and my whole experience is
based on others' efforts. That's how an open system works,
and that's probably a key fact explaining the technological
success we have had over the last centuries. So I'm blowing
in the wind, and we are all blowing the wind.

What I'm heading at is my clambda stuff. This will basically
be C morphed into a more lispy syntax. That is, there should
be a mapping between C syntax and clambda, and once you have
managed to grasp that mapping, the translation will be
simple, meaning that you can actually use your C experience.
Now, why this exercise? The main reason is that I want to
build my own object system!! ... Just kidding. When I coded
in C, I recognized that stuff I wanted, which is not that
hard to do in Lisp, is difficult in C due to the inferior
preprocessor. That's it: I want to add a lispy preprocessor
to C, C++, Java, C#, ...

I have chosen the Qi flavor of Lisp because, well, I think
it is the best base to start from, and I wanted to learn it.

Now, to mentally sell this stuff and have fun, I would need
to implement extensions to C that make a difference, e.g.
with some batteries included. OK, here are some ideas:

1. Meta object system. He he, I'm not kidding here; look at
Lisp: they have actually abstracted out the action of
dealing with objects into a tool that constructs object
systems. This is a neat idea, and why not bring it over to
the C family so that everybody can easily make their own
home-brew object system? (Python seems to have a similar
system as well, and you may find more examples.)

2. Loop macro facilities. This is what I have been
discussing and working on for some time now.

3. Static type checking. Tools exist today, but I would like
to have better meta facilities to explore new inroads and
versions of it, and better integration.

4. Custom types, deduction of types. Qi has an advanced type
system; it would be cool if we could use some of that.

5. Structs vs. file systems. Structs in C are much like a
file system without the tools of the file system. Well,
using C++ we do have the potential of a file system, but it
would just be a cool idea to tighten these two metaphors
together, to see what it brings while still keeping it
lightweight. The idea is that we work with the struct
incrementally, like in a shell: we consider files, we
consider directories, we don't consider content; we just
manage the content structures and the metadata of the
structures, and at some point the struct is set in stone.
E.g. this is meta-stuff to define data structures, not to
manage the content.

6. Different views of code. If you want to study very
difficult code, because the problem is difficult, the
notation system is very important for your success. Long
variable names and whitespace are not well suited for this.
Hard stuff just demands that the patterns inside it can be
recognized and matched by your experience. For these use
cases I need some tools to design the notation systems. My
idea here is to use a simple notational macro facility and a
tool to lay out cheat-sheet information, well aligned in a
column to the right of the code. E.g. I do not want to use
tooltip technology; I want to lay out the translation
information, well visible, in a column to the right.
Basically a macro is associated with its usage: its usage
for the first time, its usage after not being used for x
lines of code, and so on. The idea is that the definitions
of these macros should always be visible according to some
logic.

7. Transfer of meta information. True openness will allow
users to tag their information and let these tags transfer
through the system, to basically do more with your data.
Think of associating each byte with meta information, and
having tools to define how all basic operations handle the
flow of that information. This framework can be used if you
want to track your content, for security, and so on. It is
worthless for doing DRM, because the user is in control of
it, and this is why it is interesting.

He he, in 10 years I will be finished with this and it will
rule the world.... No, of course that will not happen, but
it is so fun blowing in the wind, and what is important is
progress, and that, my friend, is all of us blowing the
wind.

I'm using Emacs, but it has its drawbacks. When editing this
blog I tend to accidentally hit Ctrl-w. If there is only one
open tab in Firefox, bam!, Firefox quits and I cannot find
the text in the history. I looked around for a fix, looked
at Firemacs, but decided to wait for a new upcoming plugin
that will let you use Emacs directly instead, e.g.
EmbeddedEditor. Meanwhile I will make sure to have one extra
tab open, or just paste in the text.

Oh well, here is some loopy cranky stuff. Consider:

(for X in (li L)
(for Y in .X
(sum Y)))

Can you see two possible interpretations of sum?

Here is one:

(for X in (li L) (A)
(for Y in .X (B)
(A sum Y)))

E.g. it yields the sum of all Y in the list of lists. And of
course this one will yield the sum of the elements of the
last list in the list of lists L:

(for X in (li L) (A)
(for Y in .X (B)
(B sum Y)))

(we could have skipped B here)
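In Python terms, the two readings of sum correspond to where the accumulator lives. This is my translation of the intent, not the construct's implementation:

```python
def sum_in_outer_scope(L):
    """(A sum Y): one accumulator in the outer scope A -> sum of all Y."""
    acc = 0
    for X in L:
        for Y in X:
            acc += Y
    return acc

def sum_in_inner_scope(L):
    """(B sum Y): the accumulator lives in the inner scope B, so it is
    re-created for each X; what survives is the sum of the last inner list."""
    acc = 0
    for X in L:
        acc = 0          # fresh accumulator per inner scope
        for Y in X:
            acc += Y
    return acc
```

For L = [[1, 2], [3, 4]] the outer-scope reading gives 10 and the inner-scope reading gives 7, which is exactly the ambiguity the scope labels A and B are there to resolve.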
Permutations could be expressed as

(for X in (li L) (A)
(for Y in .X (B)
(coll ((A X) B coll Y)))

This is a complex construct, but to decipher it we could
have used the notation

(X B coll Y) == (update in B (coll.X Y))

E.g., coll.X is multiple collectors parametrized by X and
defined in X's defining scope, A. Each coll.X is updated in
scope B by the value Y.

Let's put in a mental overdrive, let's talk like a spook,
and walk the talk.

Consider walking in a tree. Personally I'm very fond of
doing that by recursion; these two models fit so nicely
together. When walking, and you are at function G, the stack
trace could be represented by, for example, "FHFFHHGGGHG".
Do you see where I'm heading with this observation?
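A small Python sketch of what I mean (the tree shape and labeling scheme here are mine): record the label of each scope you enter, and the walk's path becomes a string you can pattern-match.

```python
def walk(tree, trace, label_of):
    """Recursively walk a nested-list tree, appending one label per scope entered."""
    trace.append(label_of(tree))
    for child in (tree[1:] if isinstance(tree, list) else []):
        walk(child, trace, label_of)
    return trace

# label each node by its head symbol; a matcher is then just a string pattern
tree = ["F", ["H", ["G"], ["G"]], ["F", ["G"]]]
trace = "".join(walk(tree, [], lambda t: t[0]))
```

Here `trace` comes out as "FHGGFG", and any sequence matcher, a regexp like "HG+" for instance, can now be applied to the recursion's history.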

Permutations can also be made by using (Matcher Var), e.g.
parametrize by Var, defined in the scope defined by the
Matcher. See, using abstractions makes us trekkers in a
larger universe than we set out to explore in the beginning.
You can combine any of the current knowledge about how to do
matching on sequences and use that as a Matcher; you can use
Perl regexps or whatever. The simple, unoptimized
implementation is to just use scopes in the stack as objects
that you can duck-type in these constructs, define a
mechanism to label scopes, and then just use pattern
matchers and some simplifying notation for the most obvious
and common cases.

A matched object has a start, and that start is what you
would like to use to define the needed scope and associated
variables. Another possibility for matcher definitions is to
start at the current scope and move backwards. One should
also consider cases with multiple matches of the same
pattern and some way to handle this, like doing stuff in
multiple channels, meaning that secondary collectors and
accumulators get distributed just like in matrix algebra.

I'm maybe too kind to people; cranky people have a tendency
to want my company (temporarily), and you know what, that's
cool.

The thing is, I listen to the people I meet and I always try
to deliver food for thought and play along in dialogs with
my character of the moment, simply because it makes a
difference.

You see, I've got this idea that if I (at a reasonable rate)
answer someone a little cranky, who usually is quite
intelligent but by some kind of accident or sickness went
out on a limb, they will become a little less cranky, a
little less sad, a little less dangerous.

And you know what, some of these dialogs become so cool that
I keep them as funny/dear/cool memories.

I have piled up quite a lot of the loop stuff; time to
actually make it work. The stuff has grown and I need to
start considering what the code should look like when using
the machinery. I want to remove the dot in .+, .* and so on
in the default mental model, and instead use some kind of
mark if you want to use the original function.

1. V is updated at the locations where it is defined in the
loop, but values are collected wherever they are present, if
they are in a "return value position". This should be the
default in order to get loops of loops correct.

2. V is updated whenever it is present in the evaluation
paths. If using these variables, be sure to mark them with
something like ?V? (the question is whether they are
useful), but it is a consistent mental model.

3. V has to be explicitly updated (so we should do a .V or
something to mark them as special). These variables have
their uses, but it is not that common.

It is interesting to note that function invocations have to
be manipulated for cases like

(f (sum V) H K)

If the function has a switch, like an if or case statement
(switching the next-value evaluation between branches), you
will get the wrong behavior, so again you would like to
translate this to

(.let ((A (sum V))) (f A H K))
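The translation itself is easy to sketch as a source-to-source rewrite over s-expressions represented as nested Python lists (the `gensym` naming and the `acc_ops` list are my choices for illustration):

```python
import itertools

_counter = itertools.count()

def gensym():
    """Fresh variable name for a hoisted binding."""
    return f"A{next(_counter)}"

def hoist_accumulators(expr, acc_ops=("sum",)):
    """Rewrite (f (sum V) H K) into (.let ((A (sum V))) (f A H K)) so the
    accumulator is stepped exactly once, outside any conditional inside f."""
    if not isinstance(expr, list):
        return expr
    bindings, new_args = [], []
    for arg in expr:
        if isinstance(arg, list) and arg and arg[0] in acc_ops:
            name = gensym()
            bindings.append([name, arg])   # (A (sum V))
            new_args.append(name)          # replace the call site with A
        else:
            new_args.append(hoist_accumulators(arg, acc_ops))
    if bindings:
        return [".let", bindings, new_args]
    return new_args
```

Because the accumulator expression is bound before `f` is entered, a switch inside `f` can no longer cause it to be stepped zero or several times.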

As you all have noticed, the code I present here is not 100%
working; actually it is more or less dysfunctional, but
that's OK; I usually spend the night correcting the
mistakes. I consider it more important for my development to
enter stuff into this diary than to keep it absolutely 100%
correct. The thing is that it is a nice exercise to try to
formulate the abstractions floating in my head with words,
and it's a good creative stimulus.

I'm not happy with the parser examples, but I have decided
to postpone perfecting that development until I have a
working copy of all the current ideas, have cleaned it up,
and released it under some project name. The code will be
working but not especially useful, because its main purpose
is to be a functional specification from which you can
deduce unit tests. Yes! The specification documentation is
code!

Today I am considering the idea of uniqueness of identities
and mixing identities. Basically the problem is that for a
set of objects I want to assign an identity, or a set of
identities, so that constructing identities of combinations
of the objects is fast enough and also gives enough
randomness to be used as keys. It is a good exercise to test
out the generator framework, and as always, working gives
inspiration.

So the first question you should ask is: do I need to
guarantee uniqueness? Can I assume that collisions will be
negligible? Sometimes clashes are rare, and a cheap extra
check at the end of the analysis can be used to verify its
correctness; in the minuscule number of clash cases a
mitigating strategy can be used.

Hmm, assume that we have objects X1, X2, ... and identities
I1, I2, ... What I'm wondering is whether one can use random
double-precision numbers in [0.1, 0.9] as identities. Then
the identity of the combination [X1, X2] could be built by
simply using (R * I1 + (1-R) * I2), with 0.1 < R < 0.9 a
random double. This is rather fast to compute, should mix
well, and a key should be possible to extract from this
operation. Does anyone know about the properties of such an
approach?
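It is easy to experiment with this empirically. A Python sketch, where I assume R is drawn once and then fixed for the mixer (whether the float mix is collision-safe enough is exactly the open question):

```python
import random

def make_mixer(rng):
    r = rng.uniform(0.1, 0.9)           # R drawn once, then fixed
    def mix(i1, i2):
        # identity of the combination [X1, X2] from identities I1, I2
        return r * i1 + (1.0 - r) * i2  # convex combination stays in [0.1, 0.9]
    return mix

rng = random.Random(42)
ids = [rng.uniform(0.1, 0.9) for _ in range(200)]   # identities I1, I2, ...
mix = make_mixer(rng)
combos = {mix(a, b) for a in ids for b in ids}
```

Counting `200 * 200 - len(combos)` gives an empirical collision rate for the scheme; a real study would also have to worry about repeated mixing narrowing the value range.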

Here is another approach, where clashes do not happen for
correct choices of p and q, and which also has the potential
to mix well:

q**(k1*k2*k3...*km) mod p

If you are going to mix X:s and Y:s (X x Y), you could
generate identities q**i mod p for X and q**(k1*i) mod p for
Y, and just multiply the identities modulo p to get new
identities. This could mix well; let's happily hack some
generators for this...
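A quick Python sketch of the modular scheme (p, q and the channel sizes here are toy values I picked; p is prime and q generates a large enough subgroup):

```python
# Identities for channel X: q**i mod p, i = 1..k1
# Identities for channel Y: q**(k1*j) mod p, j = 1..k2
# Combined identity of (Xi, Yj): the product mod p, i.e. q**(i + k1*j) mod p.
# The exponents i + k1*j are all distinct for i in 1..k1, so no clashes.
p, q = 101, 3          # toy primes; p must exceed the number of combinations
k1, k2 = 4, 5          # number of X:s and number of Y:s

x_ids = [pow(q, i, p) for i in range(1, k1 + 1)]
y_ids = [pow(q, k1 * j, p) for j in range(1, k2 + 1)]
pair_ids = {(xi * yj) % p for xi in x_ids for yj in y_ids}
```

With these values all 20 combined identities come out distinct, because multiplying identities modulo p is the same as adding the exponents, and the exponent encoding (i + k1*j) is injective.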

The input K is a list that describes the number of objects
in each channel (the number of X:s, the number of Y:s, and
so on). N is simply the total number of combinations. P will
be a prime larger than N, chosen according to some method
(like taking the first one). Q is just another prime number
(we assume that P >> 100 here). R will be q**1, q**k1,
q**(k1*k2), ..., not including N. The 1.. in mul tells the
transformer to start from 1, e.g. 1, k1, k1*k2, etc. powmod
is just a function to quickly evaluate q**k mod p by
repeated squaring (q**k = q**(k/2) * q**(k/2)).
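powmod itself, spelled out in Python, is the usual square-and-multiply trick (Python's built-in three-argument pow(q, k, p) already does this; the sketch is just the idea made explicit):

```python
def powmod(q, k, p):
    """Compute q**k mod p by repeated squaring: q**k = (q**(k//2))**2 * q**(k % 2)."""
    result = 1
    base = q % p
    while k > 0:
        if k & 1:                 # odd exponent: pull out one factor of the base
            result = (result * base) % p
        base = (base * base) % p  # square the base, halve the exponent
        k >>= 1
    return result
```

This runs in O(log k) multiplications instead of k, which is what makes stepping the identity generators cheap.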

The slet is a let construct that allows the value of N to be
popped and pushed, e.g. internal state. N is just
initialized to a copy of R. mk-g will generate the generator
(this is a poor man's object system, but it's prototyping so
I don't care very much, and the functionality is similar).
Now, the list should be a list of lambdas, according to: 1.
get the value, e.g. the identity; here we have a selector
that picks the correct identity channel (the identity for X
or for Y). 2. find the next value; this just updates
q^(i*k) -> q^((i+1)*k) mod p. Sig is a macro that mainly
introduces code so that we only take the next value once in
each iteration. The third lambda is the final value, and the
fourth is the construct to make sure the updating is
correct; the rest is basically memory management. Not a
perfect interface, but you never get it right if you do not
try.

What I learned from my work with this was that it's nice to
have lambdas that generate generators, e.g.

P is the prime number used in the modulo code; now /. is the
lambda construct in Qi, and ./. is a similar lambda
construct for generators. So G will be a generator and Fxy
will be a generator-generating lambda that captures G in its
closure. I coded ./. by using something like,