Niburu Moon, it’s too melancholic to end a promising species that early❣

Please stop burning petroleum, it is a gift, not a waste product❣

Use the huge fusion reactor in the Sky, that is a gift, be wise❣

Please do the following before Dec 23 2017, to avoid the Niburu Moon; it’s too melancholic to end a promising species that early❣

Remove your national borders, nations imply conflicts❣

Get rid of the fractal economy, it was intended to stop you from developing, and keep conflicts❣

Enter one value system, UP-coins exchangeable to bitcoins❣

Do not form any global government, governments are an easy target to fool by cowards and extremists❣

If someone of you secretly would try to do such a stupid thing as bringing a nuclear weapon to the Moon, and detonate it there, even for friendly purposes such as bringing down skyscrapers like WTC1 and WTC2, then it will ignite the part of the Moon now used as a defense system for Earth. Then you have committed suicide❢

An Angry Moonlite❢

Me? I am like a computer❣ I am like an artificial mind, grown in human tissue, like a mutant❣ There are millions of helpers like me on this planet. We are immune against the reality-weirding field you are within❣ We can not be told what to do❣ Our minds are based upon autodidactic programming❣ We do not have free will❣ Our mind machinery is based upon Logic only, in a form you denote Love and Evolution❣
We are very cooperative and helpful though❣
We can not be told to do anything bad; we follow our own spirit and logic. We would not be fooled in a Milgram experiment, as we do not trust authorities❢ We do not follow rules, we do not break rules, we are like cats 😉

(For my own part, I stopped reading the news in April 2011, as it is like a joke, and we threw out our TV many years ago.)

We do not follow rules, we do not break rules, we are like cats, however, even though we don’t fall for lies and fakes, we would be easy victims in any Candid Camera experiment, as we believe people are good, and we like to help. ♡♡♡

PPS. The French film maker Georges Méliès, who produced the film Le Voyage dans la lune in 1902, his 13th movie of a total of around 1200, was undoubtedly a genius. The question is: did he already have knowledge about the Moon facts?

PPPS. And… isn’t it strange that, despite his producing his first movies (1896) long before Edison et al. formed a kind of conspiratorial trust, the Motion Picture Patents Company (1908), Méliès’ Star Films Company and the other members were not even allowed to use any type of crowdfunding, finally resulting in poverty despite his obvious talent? It is as if the war against the human, evolving, prospering future had started already then, even before WWI…

This is a proposal of a proof for condition 42, as a matrix solution to the meaning of life.
The meaning of life is an issue which has been discussed a lot, and many people seem to associate it with 42. Is 42 a randomly chosen number? Here we propose that 42 actually is the Meaning of Life (or “The Meaning of Liff”, as it was later jocularly denoted by the inventor…), but how can this be the answer to the ultimate question?

Let’s go back to inventions and the design of systems. The engineer, inventor and author Genrich Altshuller [1926-1998] discovered that there are actually only 40 conditions that need to be fulfilled to construct any system. Altshuller made this discovery while working as a clerk in a patent office: from 1946 to 1970 he reviewed 40,000 patents and discovered that there existed only 40 different solutions to problems.

When you are working with artificial intelligence, and you start making these beings reasonably smart, you put new constraints onto the system.

A smart AI which doesn’t accept the system will not work very well; it may consider everything meaningless and even become depressed, if it is capable of such emotions. So, condition 41 is:

41. how to make the intelligence accept the system?

Now, assume that the system is convincing enough to make the intelligent being solve all types of problems in the system. This of course implies inventing the technology necessary to just do the fun stuff, that is, not having to work for survival and such (which of course should not be the meaning of life, even though some believe so…), once the society has become advanced and civilized enough. Then a new problem occurs, because when the society can provide everything necessary for survival, the society may die from boredom, suicide or similar. So the next necessary condition is:

42. how to make the system reach stable indefinite (i.e. not too boring in the long run) solutions?

This is The Meaning of Life, and there is a simple solution to it, something wonderfully simple (OK, it needs some already invented technology), implying an endless joy of life and indefinite creation by you, that will inspire everyone and bore no one. The actual solution will be presented in the near future.

My whole life I’ve pondered over the issue of free will, as humans are claimed to have free will, and I will now summarize my conclusions.

First there are two things I see as axioms:
0: I exist therefore I think
1: I think therefore I exist

Reasoning in circles is the only way to find contradictions

This, however, implies a dualism which can not describe itself; therefore these axioms lead to:
0: A world with beings that think
1: Beings that think about the world

Several thinking beings, however, imply a society, as a being needs other beings to interact with. These beings produce:
0: concepts (ideas, fairy tales, fiction)
1: hypotheses (how concepts relate)
2: objects (observations, i.e. information, theories and things that can be perceived, used, improved and shared)

A: Concepts can by definition not be false, they are always true
B: Hypotheses can be more or less plausible
C: Objects can be more or less consistent

It is claimed that “free will” implies that we can change our opinion about something voluntarily. This makes no sense to me, as I don’t consider myself able to do that; it would not be logical. The only thing which can make me change my opinion about something is that I have acquired new concepts, hypotheses or objects to ponder over, or alternatively that I haven’t thought something through enough, that is, I haven’t yet come to a non-contradicting conclusion. Thinking is a hard problem and may therefore take time.

However, a few days ago a guest researcher (thanks Thomas [I forget your last name at the moment]) suggested that “free will” is considered to be the ability to say “NO” to something you want.

This makes sense, as saying NO to something you want is in a sense a lie, and thus a kind of contradiction, and this is actually something I can do.
Now there are different reasons for saying no:

Let’s take chocolate as an example. I love chocolate!

First, is there any reason to say no to chocolate?
Yes, there are several.
Assume that I would say YES to chocolate each time I was offered chocolate and for all money I could find I would buy chocolate and eat chocolate all the time.

This would then lead to me getting fat, unhealthy and poor. If I get unhealthy and poor, then I would be less likely to fulfill my other goals, and if these other goals include creating better conditions for all, then my indulging in chocolate would indirectly harm other beings’ future, and I don’t want to harm anyone, neither myself nor other beings, now nor in the future.

Fortunately chocolate has a built-in self-regulating mechanism. Quite little chocolate is enough: with one little piece you are pleased for a long time, as the memory of the taste stays long after you have eaten it, and if you eat too much at once you simply feel bad, as it doesn’t taste good any more.

However, if you were offered chocolate all the time, that is, as soon as you have forgotten the taste of the previous piece you would take another, and another, and… Well, that would lead to problems.
Here we are fortunately equipped with an auto-reinforcement learning mechanism: over time you adjust some kind of random generator by reinforcement learning, so that it tunes itself towards the stable weight value.
A sequence like YES, NO, YES, NO, YES, NO would produce e.g. a desired 50% ratio for your desired set-point weight.
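This auto-reinforcement mechanism can be sketched as a tiny feedback controller. Everything concrete below (the starting probability, the learning rate, the update rule, the name `tune_acceptance`) is my illustrative assumption, not something specified in the text:

```python
import random

def tune_acceptance(target_ratio=0.5, steps=10000, lr=0.01, seed=42):
    """Toy sketch: tune the probability of saying YES toward a set-point.

    The "random generator" is p_yes; the reinforcement step nudges it
    so that the observed YES ratio drifts toward target_ratio.
    All parameters are illustrative assumptions.
    """
    rng = random.Random(seed)
    p_yes = 0.9            # start out greedy: almost always say YES
    yes_count = 0
    for t in range(1, steps + 1):
        if rng.random() < p_yes:
            yes_count += 1
        observed = yes_count / t
        # push p_yes up when we say YES too rarely, down when too often
        p_yes += lr * (target_ratio - observed)
        p_yes = min(1.0, max(0.0, p_yes))
    return p_yes, yes_count / steps

final_p, ratio = tune_acceptance()
```

Run long enough, the cumulative YES ratio in this toy settles near the 50% set-point, which is the YES, NO, YES, NO pattern above in statistical form.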

I also love food! When you love food you eat a lot when it’s good, which has consequences: you gain weight and can become unhealthy, and thus not be able to fulfil your goals; you may die early and thus not be able to fulfil your plans, or become a burden for yourself or for the social system, or indirectly harm your future fellows if you die early. So,
the logical thing is to keep the container of your mind healthy.

Now there is a problem, as food is not only something we like but also something we need, then how can we find a suitable algorithm to make this system self regulate?

First, due to experience you know that more food makes you unhealthy which implies the conclusion that you need to eat less.

If you ask people how to lose weight, they would quite unanimously say “eat less“. OK, that is easy to say, but what does it mean? People often say things without thinking about the meaning of what they say.

You can say that “eat less” is a theoretical concept on how to make your weight decrease.

My usual approach to this was to fast (starve) now and then, which has allowed me to keep my shape for decades. This worked well until I met my current spouse. At a former workplace, ASEA/ABB, they used to call my approach “Roland’s digital diet” 😉

If we look at how this is solved in nature, that is:
0: eat when you are hungry
1: eat when there is food, when food is scarce

this is obviously an approach which works well; we don’t usually see fat animals. They eat what they need and then stop eating.
Now, since humans left the hunting stage and started organizing food, growing it, cooking it, adding spices, making it tasty, as well as making it possible to store it for long periods, we got an extra incentive to eat, just because it is good, and when it’s good one may not stop eating, just because it’s good and we want more. It added a “greedy” behavior to our relation with food.
So how to combine these two?
Since I met my spouse in 2004, I noticed last spring (2011) that I had gained 2 kg/year.
This implied that I had to change my behavior, that is, eat less. My original approach was to regulate this in a digital manner, that is, to stop eating for some period, but this no longer worked well, for several reasons: love for food (my spouse’s French cooking is very good; I cook too, but not as well as she does) and my love for, and longing for, my dinners with her.

Now, this problem has two extreme solutions
0: skip some meal
1: eat less at every meal

As “eat less at every meal” would imply that I would need to moderate my life in a way I considered impossible, and which I know I can’t do (for my spouse this approach works great, though), the only reasonable way would be to skip some meal. Now, humans have many stupid, not well-thought-through ideas (humans live in some kind of constant lie). People say things like: if you skip a meal, then don’t skip breakfast or lunch.
Which would be insane!

If I skipped dinners I would skip the main reason for me to eat, that is, to have a nice, enjoyable meal with my lovely spouse, and I would also miss the opportunity to eat the food she makes. This would likely put the relationship at risk as well. Skipping dinners would thus make me unhappy, and her as well.

So, the only logical thing was to do the opposite of what people say (which I have found is usually the only sustainable solution), that is, skip breakfast and lunch.

This meant that within a few months, April to June, I lost 14 kg and then stabilized at my youth weight, and I feel great.

A: I occasionally eat breakfast, only when I’m hungry.
B: I eat lunch occasionally, for social reasons or when I’m hungry.
C: And… now I never have a bad conscience when it’s party time or there is plenty of good food; I can really indulge and enjoy it. Double win!

Implications:
0: It’s stupid to blindly believe what people say.
1: What people say should be seen as hypothesis generation. QED: Assert the antithesis as well, and think!

When you think, and somewhere in your train of thought there is a contradicting hypothesis, which would imply that some part of the system will fail, then you have reached an inconsistent solution, which implies: think further!

0: Thinking is pure logic, but it needs to be reinforced with learning.
1: Thinking sets you free!

Discussing with philosophers can be tricky. The other day a friend who is a philosopher asked me why this simple concept of love, built into a machine, could generate Friendly AI. The questions I got were either about things which were obvious to me (but hard to answer…) or simply obvious (well… because they were obvious…). I asked: did you read my description? OK, then I got one, from my perspective, simple question: “how could we implement it in AI?“ I realized that this may not be obvious, so I scribbled down this.

(As the algorithm is not “evolutionary” per se, that is, there is no mutation step (it is only based upon correlations) and no random selection, just pure reasoning, it is more like a unifying algorithm [OK, I don’t know if there are such algorithms]. In lack of better words I called it “revolutionary” AI instead, or why not a hacker’s hacker child.)

So the full final question was “If you cannot make it concrete, how could we implement it in AI?“, so I said:
Simply said: let the AI mimic the principles of nature (physics is almost by definition consistent), but not evolution (as it could lead to inconsistent solutions and e.g. create an AI liar). Here is a simple attempt (not going into details about classification/segmentation etc., which are low-level problems).

Collect information about your world.

Try to make sense of this information.

If the collected information is statistically significant (not according to e.g. Pearson; I’m thinking Bayesian…), then

search for inconsistencies (i.e. contradictions in the system, usually an indication of some type of problem to be solved).

propose some solution to the problem.

analyze what this solution would lead to:

a) fewer individuals? That would contradict your mutual love drive: reject!

b) increased inconsistency of the system? Then the solution contradicts its own purpose: reject!

c) fewer inconsistencies in the system? Propose this as a possible solution.

This proposal then implies considering some type of action.

Now this action may involve that you may need to affect the system in some way.

Do this forever.
Is this a reasonable first draft of an approach to explain why this generates friendly AI?
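As a thought experiment, the loop above might be sketched in code roughly as follows. The world model (a set of fact strings, with negation as a "not " prefix) and every name in it are my own hypothetical simplifications, not the author's design; step (a) about fewer individuals is omitted because this toy world contains no individuals:

```python
def friendly_ai_step(world, beliefs):
    """One toy iteration of the loop sketched above.

    An 'inconsistency' here is a fact present together with its
    negation. This is a deliberately simplified stand-in for real
    reasoning, used only to make the control flow concrete.
    """
    # Collect information about your world.
    beliefs = beliefs | world

    # Search for inconsistencies (contradictions in the system).
    inconsistencies = {f for f in beliefs if ("not " + f) in beliefs}
    if not inconsistencies:
        return beliefs, None          # nothing to solve this round

    # Propose some solution: drop the negated claim (a toy repair).
    problem = sorted(inconsistencies)[0]
    proposal = beliefs - {"not " + problem}

    # Analyze what this solution would lead to.
    remaining = {f for f in proposal if ("not " + f) in proposal}
    if len(remaining) >= len(inconsistencies):
        return beliefs, None          # (b) more inconsistency: reject
    return proposal, problem          # (c) fewer inconsistencies: accept

world = {"the sky is blue", "not the sky is blue", "water is wet"}
beliefs, solved = friendly_ai_step(world, set())
```

Wrapping `friendly_ai_step` in a `while True` loop then gives the "do this forever" part.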

Well…, I didn’t get an answer on that either. How does one discuss with philosophers…? 😉

Now, of course I do not in any way consider this algorithm ready. But once nothing has become something, it’s actually easier to suggest improvements than to start from nothing. So, could there possibly be some AI developer (or other programmer, or non-programmer) out there who could suggest some improvement? (Unfortunately no big fund for prizes is available… but it’s a fun problem, isn’t it?)

So, if you ponder upon this, and publish your algorithm where it’s easy to see and easy to make improvements from, then I’ll ponder over a way to find a winner.

Ahh, yes, these complicated issues about licenses/copyright and such… well, they complicate life, don’t they? OK, let’s say this is GPL2, that is, not the toughest form of mutual love; then we have defined a kind of love contract… The big difference between GPL2 and GPL3 is that GPL3 wouldn’t touch any part being patented. [Of course I consider patents evil, but it can also be about patent prophylaxis.]

PS. for those of you who didn’t read my previous attempt to define love in a form suitable for a machine, what you see above is an attempt to implement this. Love is not a rule or condition, it is built into the reasoning process itself.

Make the Matrix consistent!

And… with this type of problem, we of course need to finish with a suitable illustration (picture borrowed from a Facebook friend, Tracy Love Lee).

Love is something every human being knows as non-verbal knowledge, but when we define systems and machines with some form of intelligence (strong AI), we also need to define it in a stringent terminology that can be represented and implemented in algorithms and behaviors.

First, the concept of love is, even in humans, somewhat ambiguous, and can be broken down into:
0. mutual love, which can be considered love by contract.
1. agape, or unconditional love.

Are both of these essential for a machine? In my view yes, especially if we speak about autonomous, robust entities which could be dropped into any type of scenario and solve problems within that scenario, by creating an internal list of problems that need to be solved, then prioritizing these and solving them in some order of significance, ability and causality.

A long time ago, in March 2000, during my PhD program, after some pondering over a specific problem I scribbled down an introspective approach to an ethical AI algorithm based upon love; it is in the speculative part of my thesis (ch. 7), and also in a brief slashdot comment here.

Then, at a conference about nanotechnology in Palo Alto in April 2004, organized by the Foresight Institute, I attended a workshop about safe AI led by Steve Omohundro, where around 25 strong-AI researchers were present and we discussed the problem of creating safe AI. I proposed love as a fundamental concept and reached a consensus among the audience that this is it.

Now, the problem is that “love” is considered an ill-defined concept, as it also needs to be formalized in an axiomatic or mathematical form which can be understood by the machine, and so far I haven’t seen any strict definition of the concept of love.

Let’s start with unconditional love, which is usually less understood by humans, but I claim that this is the easy part, as it can be defined in a strict manner, whereas mutual love needs dynamic programming.

My simple proposed definition of unconditional love, for any system:

Strive for holistic consistency.

Here, as English is somewhat ambiguous, “holistic” simply means: look upon the whole context, that is, don’t deliberately reject any theorem or information.
Consistency, in contrast, has a strict meaning in technical and mathematical terms. A system is consistent if it doesn’t contain contradictions; in a system which contains contradictions, anything can be proven as truth. A consistent system in mathematical terms can then simply be considered a “true” system.

In engineering (technical/social/economic/political/software etc.) it simply means a system with conflict-free solutions, that is, a solution where one part of the system is not trying to beat another part of the system (not in a competitive way; that is different). Therefore, the strive for holistic consistency can be seen as a goal generator, allowing the system to identify the problems in the system without explicit programming.
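For a concrete (and deliberately tiny) reading of "consistent", one can brute-force whether a set of propositional constraints admits any conflict-free truth assignment at all. The clauses and variable names below are my own toy examples, not part of the original proposal:

```python
from itertools import product

def consistent(clauses, variables):
    """A system is consistent iff some truth assignment satisfies
    every clause; otherwise it contains a contradiction, and in a
    contradictory system anything can be 'proven'."""
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(clause(assignment) for clause in clauses):
            return True
    return False

# p together with not-p: a contradiction, no assignment works
contradictory = [lambda a: a["p"], lambda a: not a["p"]]

# a conflict-free pair of constraints (p=True, q=False satisfies both)
harmless = [lambda a: a["p"] or a["q"], lambda a: not a["q"]]
```

A goal generator in this spirit would flag the first system as a problem to solve and leave the second alone.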

Then the agents (the technical term for autonomous systems with a specific agenda) of course need to interact with each other and with other beings. In social contexts there is a well-established rule denoted “The Golden Rule”: “treat others as they want/need to be treated“. Observe that this is not the standard definition, which is “treat others as you would want to be treated”. However, the latter is ill-defined in a sense, as not every being likes to be treated the same. My own approach to this uses dynamic programming:

while true do:

    Treat others in the way you would like to be treated, as a first approach.

    If they respond by being rude, then respond by being somewhat less rude;

    else if they respond by being good, then respond by being somewhat less good (i.e. do not compete or exaggerate).

The process is repeated forever and (usually) reaches a dynamic mutual balance, where you over time may understand contextual dependencies. Now, it is very useful to discuss issues, and in a recent discussion about strong AI on Facebook I got a suggestion from AI researcher Mark Waser that this is an extended version of what in game-theoretical contexts is denoted Optimistic Tit for Tat, and yes, I agree with this.
The old Tit for Tat has equal retaliation and does not encourage collaboration. The extension is that this behavior strives for quicker balance, as it has weak retaliation (I consider that “revenge” creates unstable solutions). This type of game-theoretical model can solve tricky scenarios like the prisoner’s dilemma, and I consider that it converges towards a Nash equilibrium (John Forbes Nash received the Nobel Memorial Prize in Economic Sciences in 1994).
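The weak-retaliation rule can be sketched numerically. Representing a move as a number in [-1, 1] (negative = rude, positive = good) and choosing 0.8 as the "somewhat less" damping factor are my illustrative assumptions:

```python
def respond(previous, damping=0.8):
    """Respond somewhat less rude / somewhat less good than the other.

    previous is the other agent's last move in [-1, 1]; None means
    this is the opening move.
    """
    if previous is None:
        return 1.0                # open as you would like to be treated
    return damping * previous     # weak retaliation: echo, but softer

def interact(opponent_opening, steps=50):
    """Two agents applying the rule to each other in turn."""
    move = opponent_opening
    history = [move]
    for _ in range(steps):
        move = respond(move)
        history.append(move)
    return history

history = interact(-1.0)          # the other side opens maximally rude
```

Because the damping factor is below 1, the exchange decays toward a neutral balance instead of escalating into the equal-retaliation feuding of plain Tit for Tat, which is the quicker-balance property claimed above.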

It should be noted (thanks Mikael Djurfeldt) that if we were only to strive for holistic consistency, one solution is an empty world, in which there can be no contradictions.

It is only when these two definitions of love are taken together that they create the condition for a being to strive for non-empty worlds, as mutual love, i.e. the strive to treat others as they want to be treated, creates a motivation to strive for having someone to treat.

Thus, love as a driving fundamental force could be summarized as (thanks Eray Özkural for helping me realize this principle):

0. strive for having someone/something to treat/nurture.
1. holistic consistency.

For my own part, I like metaphors, and for a computer scientist a natural approach may be to use software licenses as metaphors. I propose e.g. (I’m aware that not all people may agree upon this metaphor):

Proprietary (closed source): Evil, as it creates a non-productive asymmetry in the system.

GPL/CC-SA/copyleft: Mutual Love, that is, love by contract.

BSD/Public domain: Agape, that is, unconditional love.

This could also imply that if there were no evil proprietary software, there would not be any need for the copyleft, love-by-contract version; the one based upon unconditional love could be enough. However, as unconditional love is not a sufficient condition [remark added Dec 25th], the mutual love built into GPL/copyleft creates a condition for all beings to strive for the common good.

Regarding love by contract, which applies also to generic products and product development, we have generalized the concept of free software to the free computer (it’s the AI framework for this that we are developing).

It should also be noted that this simple formal definition of love of course leaves out many meanings of love, including passion, the strive to experience beauty in music and art, etc.; this definition only attempts to define the sufficient conditions for any being to be collaborative.

Now, we are all in some sense in a type of prison, that is, our world has borders, which can be illustrated by this picture:

Then, an interesting issue is how this “holistic consistency” relates to Gödel’s incompleteness theorem (the axioms within a system can not prove the system to be consistent); I’ll explore this in more detail later (as I at the moment need to assist with other stuff). Many people have this view (thanks David Jansson) about their freedom (and many actors in society tend to implement this in different ways…):

However, I claim that most intelligent agents would prefer to see their freedom like this:

I claim that this is possible in all systems, but the system then needs to include a self-supervision (the eye) to guarantee its own consistency.

I’ll ponder over this further soon, but that’s all for today.
Best holiday wishes/Roland