Month: January 2013

This is an old magnetic drum used as a lawn roller by my dad in his garden. The drum is a remnant of a BESM-6 (or BESM-4?) mainframe computer from Soviet times, somewhere around the 1970s. The capacity of the drum was 192 Kb. The jeep in the background should give a sense of the scale of the thing :). Hail Moore’s law!

Keynote talk by Margaret Boden: Creativity and AGI

Three types of creativity:

Combinatorial creativity: unfamiliar combinations of familiar ideas. There is an element of surprise in this.

Exploratory creativity asks what the limits of the currently accepted style are.

Transformational creativity: starting with a previously accepted style, you come up with ideas which fit the same sphere. Transformational creativity negates / changes at least one of the previously accepted dimensions – so it goes beyond what is known conceptually.
Examples: cubists in art; Kekulé and the benzene molecule.
Transformational creativity can be achieved using evolutionary algorithms, but then it depends on what kind of mutations you allow to happen.

Exploratory creativity is somewhat easier: if we can define a style of thinking and create a generative system based on such a definition, this might enable an exploratory kind of creativity in AGI.

Boden does not think computer creativity will happen in our lifetime, though all 3 styles of creativity have already been modelled to some extent.

Ontological creativity? – inventing new concepts

Of course the whole talk is based on human cultural values…

Questions: There is a thing called “ontological creativity”, which was not even mentioned in the talk. Answer: it is the same as transformational creativity;
Comment: machines should be members of our moral community. In this way they will have both responsibilities and rights;

Patterns emerge, some persist and guide the emergence of further patterns. This is very close to what humans demonstrate.

“Emes”: the emergence of memes;
Create systems that allow for the “edge of chaos” effect.
Computational compassion and ethical A.I. :). AI needs to have imagination to have wisdom and compassion, etc.

Panel discussion

symbolic and sub-symbolic knowledge should be represented in a single representation;

Where is the common collection of concepts / criteria, etc. which should go into an architecture?
The answer is that the complexity of the field requires experimenting and trying out different things, learning from each other, etc. Only by exploring different paths can the field find something important.

Panel Session

Q: How do you debug these systems? How can you debug interaction effects, which are more or less impossible to debug:

A: Tools to dig inside the brain of the system;

A (Ben Goertzel): Unit tests; combination of research and software engineering; lots of unit tests; some parts of the system are written in Scheme and Python; much harder to identify bugs when modules start to interact;

A: Sigma is written in Lisp;

Q: Is it just about programming languages? If you choose C++, doesn’t it constrain what you can do?

A: Cognitive architectures define the language.

A: Ben does not believe in message passing algorithms;

Q: What is a difference between architecture and language (as a toolbox);

A: OpenCog was supposed to be a platform. But what happened is that it became a platform for themselves; He started from almost philosophical theory of the mind;

A: Sigma is a statistical relational language;

Q: If both Sigma and OpenCog are successful, what will be the difference between minds of these systems?

A: Ben: OpenCog can implement many different minds itself. People are not as broad as the OpenCog architecture. Humans are constrained by biological goals, while OpenCog is not.

A: If people are optimal adaptation to the environment, then both systems should end up similar if they grow in the same environment;

Q: How much are the architectures based on experimental data vs. philosophical considerations?

A: ACT-R was very much based on cognitive science. Sigma focuses on functionality, but cognitive science is in the back of the mind. Philosophy has nothing to offer;

A: Ben – started from philosophy, but neuroscience and experimental psychology are more important (though not there yet), at least for now. You have to integrate a lot of stuff and put in a lot of intuition in order to get something. Mathematical theory of Artificial General Intelligence: AGI is very largely about computational efficiency. So the basic question is how to do it efficiently, because in theory, with infinite capacities, it is possible.

Session 3: Universal Intelligence and its Formal Approximations

Joel Veness. On Ensemble Techniques for AIXI Approximations

Switching / Tracking
– at time 1 model 1 is OK, but at time 100 model 2 is better. What method could switch between them automatically?
– you find a sequence of models which predicts well (not one model, but a sequence);
– you can actually compute this even though the space of model sequences is exponential;

Convex mixtures
– convex combinations of model predictions; much more efficient than weighting;
– minimize instantaneous losses at each point in time, e.g. using the gradient;
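A toy sketch of the convex-mixture idea (my own illustration, not the algorithm from the talk): two fixed experts report probabilities for the next bit, and an exponentiated-gradient step on the instantaneous log loss shifts weight toward the better predictor.

```python
import math

def eg_update(weights, expert_probs, outcome, lr=0.5):
    """One exponentiated-gradient step on the instantaneous log loss
    of the convex mixture of expert predictions."""
    # probability each expert assigned to the observed outcome
    probs = [p if outcome == 1 else 1.0 - p for p in expert_probs]
    mix = sum(w * p for w, p in zip(weights, probs))
    # gradient of -log(mix) w.r.t. weight i is -probs[i] / mix
    new = [w * math.exp(lr * p / mix) for w, p in zip(weights, probs)]
    z = sum(new)
    return [w / z for w in new]

# Two toy experts: one says P(bit=1)=0.9, the other P(bit=1)=0.1.
experts = [0.9, 0.1]
weights = [0.5, 0.5]
for bit in [1, 1, 1, 1, 1]:   # a stream that favours the first expert
    weights = eg_update(weights, experts, bit)
```

After a few observations the mixture weight concentrates on the expert that keeps predicting well, which is the point of minimizing instantaneous loss online.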

Peter Sunehag. Optimistic AIXI

General Reinforcement Learning – takes a history of actions and rewards and returns the next action;
Asymptotic optimality:
– AIXI is rational;
– AIXI is optimal on average;
– but asymptotic optimality is not guaranteed;

Optimism and optimality
The way they achieve asymptotic optimality is Optimism;

1) Read Access to source code of the agent (to the environment)
– In this case environment can discriminate among agents
– Simpleton gambit – presses the agent to destroy its own source code;
2) Read/Write access to the source code.
– the agent can be destroyed by the environment;
– agents are maximizing utility and also try to survive, which means optimize their source code;
3) Read/write access to memory

Consequences:
– choosing actions randomly is sometimes better than any infinite, deterministic computation.
– memory can be doubted (the neuroanalyzer problem);
– what is the probability of the memory state;

5) Space-time embedded agents;
– oracle merged with environment;
– the agent’s hardware belongs to the environment;
— environment executes the agent’s code;
— environment determines the meaning of the source code (meaning can change);
— environment determines computation time and all computational constraints;
— considers indirect impact of computation; there is no way for the agent to put more complexity into the environment (consistent with the second law of thermodynamics);

Before running the environment, you know which part of the sequence is the agent;
After running the environment, there is no explicit agent–environment separation;

Questions of the agent in this space-time embedded agents:
– what defines the identity of the agent for T>0;
– what happens when I die
– am I living in a simulation;
– where do I come from?

Laurent Orseau. Space-time embedded intelligence

– The utility function is not part of the environment and not part of the agent; so the utility function is external;
– so, simple, unified framework; closer to reality;
– consequences:
— multi-agent environments are natural;

– any practical narrow method is an approximation of AGI in some sense.
– AGI is not practical (? he did not say this but I guess this is the idea)
– they want to bridge the gap between these two extremes;

1 extreme: universal general intelligence (universal turing machine)
– unbiased AGI cannot be practical and efficient;
– for any UTM and input output history, another UTM can also be found with the same conditional Kolmogorov complexity;
– Verbal notion of representation: a virtual machine as a representation;

Panel Session

Q: Moving the agent into the environment will complicate things. A: Yes, but this is how things are. So maybe we will need to do this.
Q: Why do we need this universality? A: Intelligence is also a universal machine, but not with respect to computation – with respect to producing computational algorithms;

Session 4. Conceptual and Contextual Issues

Javier Insa Cabrera. On measuring social intelligence: experiments on competition and cooperation

Lempel-Ziv approximation:
– complexity of the environment does not affect results;
– complexity of other agents makes the environment more difficult.
– so social complexity is more important than complexity of the “pure” environment;
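As a rough illustration of the kind of complexity measure involved (my own sketch; the paper may use a different LZ variant), the classic Lempel-Ziv 1976 phrase count, a crude computable stand-in for Kolmogorov complexity:

```python
def lz76_complexity(s):
    """Count phrases in the LZ76 parsing of s: each new phrase is the
    shortest prefix of the remainder not already seen as a substring
    of the preceding text (overlap allowed)."""
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        # extend the phrase while it still occurs in the earlier text
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1
        i += l
    return c
```

A constant string parses into very few phrases, while a patternless one needs many, matching the intuition that more complex other agents make the environment harder.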

The notion of intelligence is based on two illusions:
– Animal part = mobility, perception and reactivity;
– Human part = being able to talk to your system = learning by being told;
The goal is to put these pieces together;

Analogy to the Turing machine:
— at the core is a simple state machine;
— but if you add the tape, the behaviour becomes much more interesting;

Innate mechanisms:
— Segmentation (division of the world into spatial regions).
— Comparison
— Actions
— Time
Language interpretation glues all these things;

ELI: A Fetch-and-carry robot;
– uses speech, language, and vision to learn objects and actions, but not from the lowest level;
– save learning for terms not knowable a priori.
RoboEarth – repository of useful information;

Omohundro’s “basic AI drives”
Bostrom’s “instrumental goals”
He calls them “actions”, not “goals”.
The idea is that the AI acts according to its utility function and does not take actions which reduce utility according to this function.
Define the utility function as an average of human utility values.

Bill Hibbard. Decision Support for Safe AI Design

A system for visualising runs of agents to see whether they are safe. So I guess his proposal is first to run the AI in a simulated environment, and only if it behaves safely in the simulated world, put it into reality.

Ensemble of simulations
Vis5D visualization;
The greatest danger with nuclear weapons is the human element. The same is true of AI and AGI.

Panel Session

ELI: Natural language is translated to RDF triples, checked against the database; the system then reasons about whether the action is good for the patient.
Uses a Kinect camera (100 USD?).

Session 5: Cognitive Architectures and Models C

Serge Thill. On the functional contributions of emotion mechanisms to (artificial) cognition and intelligence.

Leslie Smith. Perceptual time, perceptual reality and general intelligence

Hard question – nature of the neural construction of reality;
– Perceptual reality is different from physical reality;
Understanding these differences may help to understand AGI and intelligence in general
Dunne 1925: attention is never really confined to a mathematical instant; it covers a slightly larger field.

Perceptual events are not physical instants, and they overlap. So they are not totally ordered.
Leaky integrate-and-fire neurons
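A minimal leaky integrate-and-fire simulation (my own sketch; the parameter values are arbitrary): the membrane potential leaks toward the input current and fires when it crosses a threshold, so spike timing, not just event order, carries information.

```python
def lif_spike_times(input_current, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Leaky integrate-and-fire: dv/dt = (I - v) / tau,
    spike and reset when v crosses the threshold."""
    v, spikes = 0.0, []
    for t, i_t in enumerate(input_current):
        v += dt * (i_t - v) / tau
        if v >= v_thresh:
            spikes.append(t)   # record the spike time
            v = v_reset
    return spikes

strong = lif_spike_times([1.5] * 100)   # supra-threshold drive -> regular spikes
weak = lif_spike_times([0.5] * 100)     # sub-threshold drive -> no spikes
```

With a constant drive above threshold the neuron fires periodically; below threshold the potential saturates and never spikes, which is the integration behaviour the talk relies on.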

Time has always mattered in AI/AGI;
– except in toy problems like vector classification;
– but sometimes it’s presented as a simple ordering of events;

– Percepts arise from sensory surface very rapidly;
– preprocessing takes place on the sensory surface;
– signals to the cortex are already preprocessed;
– percepts are cortical;

You need to group the sensory inputs into some sort of percepts. How do you group signals?
– One option is contiguity (in any sense);
– This includes some preprocessing;

Temporal contiguity: integration;
– Grouping sensory signals by segmenting sensory surface
– Do this in asynchronous way;
– in a spike-based way, also using beta and gamma oscillations;
– In AGI probably this would be different, but AGI needs that anyway;

Abdel-Fattah. Creativity, Cognitive mechanism and Logic

What is the cognitive capability that makes human cognition unique in comparison to animal cognition and artificial systems?
– A: creativity may be the answer;
– creativity can be found in analogy making and something else;
– there are at least two important mechanisms:
— analogy making – first stage;
— concept blending – second stage, after analogy making;

Knud Thomsen. Stupidity and the Ouroboros model

– an agent is stupid if it unwittingly acts against its own interests;
– stupidity is a label put on by others;
– stupid people look at the minor details while not seeing the more important ones;
– intelligence is a label that humans grant to other rational agents; the definition is the opposite of stupidity.

Basic features are stored in Schemata;
Consumption analysis highlights slots of the activated schemata and directs attention to the most urgent issues;
– pattern matching and constraint satisfaction;
– it can be understood as an extension of production systems;

Clever is one who applies understanding as widely as possible, chooses tools, and accepts help from friends.

Summary
– there is no absolute stupidity or intelligence;
– both are labels and depend on the context;
– this is why we have dozens of definitions;

Claes Strannegard. Transparent neural networks: Integrating Concepts

– Can we build a general and monolithic neural network model that can do both symbolic and sub-symbolic processing?
– Transparent neural graph is a labeled graph;
– There are labels on nodes and labels on connections;
– connections are also labelled with probabilities;

An organism is a sequence of TNNs;
– it can develop using development rules;
– formation / update;
– Hebb rule;
– Ebbinghaus rule (use it or lose it);
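A toy sketch of how such development rules might look (entirely my own illustration, not Strannegård's actual formalism): edge probabilities on a labelled graph, strengthened by Hebbian co-activation and decayed Ebbinghaus-style when unused.

```python
class TransparentNN:
    """Toy transparent net: a labelled graph whose edges carry
    probabilities, strengthened by co-activation and decayed otherwise."""
    def __init__(self):
        self.edges = {}   # (src_label, dst_label) -> probability

    def observe(self, src, dst, hebb=0.1, decay=0.02):
        # Hebb rule: strengthen the co-activated connection
        p = self.edges.get((src, dst), 0.0)
        self.edges[(src, dst)] = p + hebb * (1.0 - p)
        # Ebbinghaus-style forgetting for every other connection
        for e in self.edges:
            if e != (src, dst):
                self.edges[e] *= 1.0 - decay

net = TransparentNN()
for _ in range(10):
    net.observe("bell", "food")   # frequently co-activated pair
net.observe("bell", "light")      # seen only once
```

The frequently reinforced edge ends up much stronger than the one-off association, and since nodes and edges are explicitly labelled, the learned structure stays inspectable (the "transparent" part).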

Implementations in Haskell and C#.

W. Skaba. Binary Space Partitioning as Intrinsic Reward.

The AGINAO project.

A cognitive agent; robot-embedded control program, self-programming, dynamic and open-ended architecture, real-time operation in a natural environment.
Tabula rasa – epigenetic architecture, nothing is known a priori;
Basic building blocks are small pieces of code (small machines). Each building block has many inputs and one output, which are just strings of integers;

Predefined building blocks are atomic sensors and atomic actuators.
From these atomic blocks, a program is generated.
The model constructs its own dataflow.
The question is how to evaluate basic building blocks.

Intrinsic motivation and intrinsic reward
– intrinsic motivation – agent is doing that just for fun;
– intrinsic reward;
– there are different methods of intrinsic motivation (Schmidhuber?);

Software implementation:
– there is always a conditional jump and discrimination between negative and positive examples;
– exploration = adding a new action;
– exploitation = execution of an existing action;

Panel Session

Q: Creativity among animals – they are creative. To what extent would the notion of creativity apply in other domains?
A: AGI should be creative. On the other hand we have smart things which are not creative;
In general animals are not creative, but there are certain special-case examples of creativity;
Everything depends on the notion of creativity. Can you yourself decide what is creative and what is not? At least you have to be able to explain why whatever you created is creative.
Q: About the last two presentations:
A: What was presented was a simplification. Actually, there are three models; imaginary prediction has two parts: prediction into the future and prediction into the past (!!!).
A: Another approach is to avoid loops. The cure: loops are ok, but they should be separated by time.
Q: Does anybody implement chemical emotions? A: People do a lot of research; Peter Something in London builds models of serotonin / dopamine. So yes, there are people doing that.
Q: A marker of significance. We are not only interested in the colour and taste of the apple, but also significance of the apple.

Modelling actions in verbs;
Predicting the sensory-motor consequences of our action and the action of others!

What is the connection between perception and action?
Actions can be defined by physical constraints, including external physical constraints (mass, gravity, etc.).
Also actions can be described by mental states;
When we understand actions we want to understand what intentions lie behind them. We watch people and derive intentions from their behaviour.

Kinematic patterns. You can define them with just several points and from the movement of these points we can almost infer what emotions are related to these patterns.

Force can be expressed as a vector (derived somewhere from kinematics).
Kinematic variables: acceleration, change in direction, etc.

Two-vector model of events

Abram Demski. Logical Prior Probabilities.

They generate logical theories by pulling sentences at random.
The idea, I guess, is that they check whether a statement contradicts prior statements or not.
Similar to inductive logic programming.
Add enough facts until you can answer the query; after that, all facts you add have to be consistent with previous ones. So once the theory validates the query, it cannot become invalid.
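A toy version of this monotone construction (my own sketch, restricted to propositional literals rather than full first-order sentences): draw random literals and keep each one only if it is consistent with everything accepted so far.

```python
import random

def sample_theory(num_vars=5, num_draws=30, seed=0):
    """Draw random literals (var, sign); accept each one only if its
    negation has not already been accepted, so the growing theory
    stays consistent by construction."""
    rng = random.Random(seed)
    theory = set()
    for _ in range(num_draws):
        var = rng.randrange(num_vars)
        sign = rng.choice([True, False])
        if (var, not sign) not in theory:   # would it contradict?
            theory.add((var, sign))
    return theory

theory = sample_theory()
```

Because acceptance never removes anything and each addition is consistency-checked, a validated theory indeed cannot later become invalid in this toy setting.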

Bounded approximation process, related to bounded rationality.

Keith McGreggor. Fractal Analogies for General Intelligence.

Fractals:
The world seems to exhibit repeated, similar patterns – fractals. Similarity is occurring at different scales.
What is the fractal formula for the real world images?
Collage theorem;
– Fractal representations – series of codes.
– Memory: a prior percept, fractally reminding;
Similarity
Odd One out – a novelty problem
Interplay between observer and observed.
similarity and analogy making are the core of intelligence;
fractal representations allow analogy making.
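To make the "fractal code" idea concrete (my own illustration, not McGreggor's representation): the Sierpinski triangle is fully described by a three-map IFS code, each map contracting by 1/2 toward one vertex, rendered here by the chaos game.

```python
import random

def chaos_game(n_points=20000, seed=1):
    """Render the Sierpinski triangle from its IFS code: three affine
    maps, each halving the distance to one fixed vertex."""
    rng = random.Random(seed)
    vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
    x, y = 0.3, 0.3
    points = []
    for _ in range(n_points):
        vx, vy = rng.choice(vertices)          # pick one map at random
        x, y = (x + vx) / 2.0, (y + vy) / 2.0  # apply the contraction
        points.append((x, y))
    return points

points = chaos_game()
```

The whole self-similar figure is encoded by just three transforms, which is the sense in which a fractal representation is a short "series of codes" for an image.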

Tutorial sessions

Aaron Sloman. Meta-morphogenesis: How a planet can produce Minds, Mathematics and Music.

All systems require cooperation between them (for social, biological and socio-technical systems);
In any cooperative system there can be parasites. Every element can pursue a non-cooperative strategy.
Prisoner’s dilemma.
Too many parasites ruin the system.
– Security is how we induce cooperation;
– Cooperation induces trust;

Societal pressures – mechanisms which society uses to get individuals to conform to social norms:
1) moral (“stealing is wrong” thing);
— innate moral capacity (?);
2) reputation: this has to do with how others respond to our actions; we get praised for good behaviour and slapped for bad behaviour. Social consequences; a very big deal for humans;
— we are the only species that can transfer reputation information. Other species can recognize individuals, but they cannot transfer reputation information.
— religion: just the belief that someone may be watching makes people more moral;
This is a primitive societal-pressure tool-kit. The problem is that these mechanisms do not scale very well. Dunbar’s number: above 150, a lot of these mechanisms start failing and groups cannot maintain security;
3) institutional societal pressure. We codify our rules about theft and then delegate enforcement to the police. It’s much cheaper to penalize defectors than to reward cooperators (too many cooperators).

4) security systems – any artificial mechanisms designed to enforce cooperation or prevent defection.
In the real world all four societal pressure systems work together, they never work separately.
Which one is more important depends on context. Society will use these pressures to find the optimal level: too many defectors is too damaging, and too few defectors is too expensive. So society balances the pressures to come up with the right mix.

eBay’s security system is reputation-based.

Every moment an individual has to make a decision: should I cheat or should I not? :))))

There are multiple competing interests, different aspects of a person, etc. So not so easy when you go into real situations from the simple prisoner’s dilemma model.

Very often laws go against the rules of society.
Different aspects (pressure types) have different scaling possibilities.
Morals are related to groups (our language, our country, our planet). There are also universal morals. He says we are the only species that has this.

How technology is changing things and how can we get ahead of it?
– Technology is about scaling; more people, increased complexity, intensity, frequency, distance, artificial persons.
– Technology upsets the balance between cooperators and defectors;
– In response society has to rebalance itself (copyright);
– Social norms also change, and the notion of copyright has changed;

A sort of iterative process with feedback loops, with stability as the goal. But he is not sure that this is the case, because attackers have advantages. First, attackers are more agile.
Examples with internet cameras and internet crime: it took years for the police to understand how it works.
Syria: the government is using internet to fight the protesters and shutting it down when it seems that it will favour the opposition.
Those who have power will get more power through technology. The question is: how slow is too slow?
The gap tends to be larger during fast social change and fast technological change (these are related).
We have seen this during the Enlightenment.
Agile security, reactive security, or something like this: you cannot get ahead of the bad guys, but you can react fast.
Reactive security means sacrificing some individuals for group interests.

No matter how much societal pressure you deploy, there will always be defectors. Law of diminishing returns.
Security is a tax we pay not to get a benefit, but to prevent a problem.
Society needs defectors. Groups benefit from the fact that some people do not follow social norms, because this brings change to the system. So a system that allows for defection is very valuable for society.

QA: There is a difference between perceived security and actual security.
Q: In the case of AI security you cannot have a single defector. This is an edge case. No conventional method can work with infinite risk and near-zero probability of it happening.
Disruption is a noise. Usually noise does not kill the system.
He believes that humans have the capacity to get more moral. The speed of light. The pace of change is outpacing our ability to integrate it.
Defection comes from autonomous self-interested units, not just from autonomous units.
Sociopaths do not benefit from cooperation, so they are natural defectors.
We do not know where the next innovation will come from, so you cannot predict it.
– Surveillance;
– Censorship;
– Propaganda;
How do you enable the good part of that and prevent the bad side now?
The book: Evgeny Morozov, The Net Delusion: The Dark Side of Internet Freedom.

If artificial agents are very different from people, then people will not care about them and they will not care about humans. Can we do anything about that with hard security? All security systems have a safe, a locking mechanism, and a key. So every security system will have exceptions. The more judgement, the more useful the system is.
Security vs. usability problem.

Intro to wireheading;
– a technique of implanting electrodes into pleasure centres;
– Q: will machines be subject to this type of behaviour?
– Humans are surely subject to it.
– Machines: Eurisko

– direct stimulation: a machine can push a reward button directly;
– a machine may try to optimize its reward, taking more and more computational resources for that;

Potential Solutions
– inaccessible reward function: separate the source code of the reward function from the rest of the code, so that the system is not able to modify it;
– resetting the reward function (to a default setting);
– Revulsion;
– Utility indifference; put AI in the state of indifference to an event;
– External control. This works on humans (drug control, etc.). Mindplex machines – multiple connected minds.

Evolutionary competition between machines;
Learning proper reward functions, but the risk is that they learn something that we do not want;
Utility function bound to the actual world;

Rational and self-aware optimizers will choose not to wirehead; for some reason he thinks that this is the main danger (??)

Argument: rationality in the real world is not perfect (perfect rationality is impossible in the real world).
Small errors will add up and make big errors happen (?)
Gandhi and the pill;

Temporal influence on reward (depends on the time horizon)
General goal fulfillment
Common common sense – a utility function that will satisfy everyone.
– The question is whether this is possible (probably not);
– How will the system interpret human orders (literally, or with some sort of interpretation)?

Conclusions:
– even smart systems may start to corrupt their reward channels;
– link between mental problems in humans and this kind of behaviour in machines;
– security: someone can take over the system; what if they reprogram it and put it back?

András Kornai. Bounding the Impact of AGI

Two kinds of AGIs: animated vs. automated AGI;
– Automated AGI will just follow the utility function;
– Animated AGI is an agent; it will have its own goals;

Hardware cannot do it;
– tolerable rate of existential threat (very small, as I understand);
– very high precision is needed; http://kornai.com;
– must be done with software;

Reliability of physics combined with math.
There are some things which you simply cannot do in mathematics (you cannot escape some theorems).
The idea is that you can build the same kind of boundaries for AI;

From PPAs to PCG;
Sketch of the argument;
– I intend to do X voluntarily for some purpose E;
– E is good (by my definition);
– my freedom and well-being are generically necessary conditions;
– my freedom is necessarily good;
– I have a claim right to my F&WB;
– other PPAs have a claim to their freedom;
– so all other purposes are valid.

He wants to apply logic to this argument, not just philosophical argumentation;

QA: How do you get from step 4 to step 5? Answer: this cannot be explained in 5 minutes, because this argument takes 60 pages in the original proof.
The book: Derick Something, The Dialectical Necessity of Morality.

Living on the Brink of the Singularity
– AGI is likely to emerge gradually and unevenly over a period of years;
– Funding will go primarily to projects that offer…

Pacemakers
– the problem is not the reliability of the technology; the problem is integrating it with other functions of the body. What is going to happen is higher integration.
– merging-with-machines argument

How to cut off feedback loops;
– risk control departments;
– separate software will be developed for risk control and breaking feedback loops;
– risk control slows things down;

Surveillance
– Big Brother;
– protecting legal rights and privacy;
– most of the surveillance is being done via mobile phones;

Participatory Sensing: teaching groups how to use data;
– servers at home which aggregate data;

Self-driving automobiles were developed without an overarching theory, and now they are better than humans.
Book: Eden Medina. Cybernetic revolutionaries.
Synco, a science fiction novel published in Chile in 2009.
Heinz Dietrich El socialismo;
Irrational response of humans to technology (based on fear?).
The iPad has an app which does almost the same as that crazy Chilean cybernetic system for economy monitoring and decision making;
AGI will evolve by incorporating different technologies step by step (is this his idea?).
Another approach would be to do what FHI is doing – doing philosophical arguments.
He’s sceptical about rules and proofs which can be implanted into AGIs to prevent them from doing wrong. This will develop incrementally.

BG: development of AI will be similar to development in political economy (from experience).

Ben Goertzel. GOLEM.

What kind of architecture could you build if you had insanely large computational resources?
OpenCog is a system that can use the computational resources available now.
He has taken specific architectural steps to get common-sense morals / ethics. He talks about “raising” a young OpenCog.

How do you make a system that reprograms itself in a non-trivial way and also maintains its original goals? This does not guarantee a safe system in the real world, because we do not know the real world / universe.

The goal is a system that is:
– much more generally intelligent than people;
– reasonably beneficial;
– unlikely to be horribly harmful;

Assumptions
– a capability for radical self-improvement is the most plausible way to do this;

“Steadfast” system
– it is not able to give up its initial goals;
– if it does, it stops functioning;

How do you create a steadfast AGI that is superhumanly intelligent and self-modifying?
To what extent and by what methods must we specify what “beneficial” means in order to do the above?
You cannot do that in predicate logic.
Maybe you can specify the goal with examples and natural language…

GOLEM.
Low-level control code tests all rewritings of the operating program of the system, ensuring that each change is beneficial with respect to the original goals.

Goal evaluator;
Historical repository;
Operating program;
Searcher;
Memory manager;
Tester – uses historical backtesting;
Every part of the system can be optimized except the goal evaluator;
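A toy sketch of that low-level control loop (my own illustration; GOLEM itself is far more elaborate): a candidate rewrite of the operating program is accepted only if it backtests at least as well against the historical repository, so every accepted change is beneficial with respect to the fixed goal.

```python
import random

def golem_step(program, evaluate, propose, history, rng):
    """One conservative self-rewrite: accept a candidate rewrite of
    the operating program only if it backtests at least as well on
    the stored history; otherwise keep the current program."""
    candidate = propose(program, rng)
    if evaluate(candidate, history) >= evaluate(program, history):
        return candidate   # change judged beneficial w.r.t. the goal
    return program

# Toy instantiation: the "program" is a single number and the goal
# evaluator scores how well it matches the historical data.
history = [4.2] * 10
evaluate = lambda prog, hist: -sum((prog - h) ** 2 for h in hist)
propose = lambda prog, rng: prog + rng.uniform(-1.0, 1.0)

rng = random.Random(0)
program = 0.0
for _ in range(200):
    program = golem_step(program, evaluate, propose, history, rng)
```

Note the evaluation function itself is never touched by the rewrites, mirroring the one component that is off-limits to self-optimization.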

Conservative:
– no changes to the hardware or low level control code;
– if changes are made, then stop functioning;

GOLEM paper is online.
Why should GOLEM be steadfast?
Preserving the architecture is among the goals of the system; in that case the machine should not change the goal.
External threat still exists (aliens from Pluto).

How to make GOLEM Smarter and Less Conservative
– acquire massive computing power;
– tweak a reasonable, inference based AGI architecture.

How do you define the beneficialness of GOLEM?
– it is not possible to put that in a formal way;
– enumerate all examples, but this is difficult; he expects an answer from the philosophical community;
– e.g. formulating the procedure;
Options:
— coherent extrapolated volition?
— democratically?
— coherent blended volition?

It is questionable whether it is possible to build such system;

Conclusion: he is an optimist, but this is only intuitively grounded, not formally or anything close to that.
People are doing the same thing GOLEM would do, but they are doing it very badly. So the idea (this is mine) is that you have to create superintelligence prior to intelligence.

Alexey Potapov. Universal Empathy and Ethical Bias

Complex value systems:
– the problem is that it is impossible to define value systems a priori.

Paper: “Complex value systems in Friendly AI” Yudkowsky 2011

Classical opinion:
– the reward function must necessarily be fixed;
– without rewards there could be no values, and the only purpose of estimating values is to achieve more reward.
– Is this true? No.
His approach is to have some sort of values based on which the reward function may be changed, if I understood this correctly. This does not seem to solve the problem, actually.

Multi-agent environment:
– He wants the agent to extract values automatically from the environment (humans and other agents, I suppose);

Special Session: AGI and Neuroscience

Yamakawa. Hippocampal formation something.

Fujitsu Ltd. The goal is to construct new computational technologies to enable AGI, inspired by neuroscience.
Singularity Impact Factor

Autonomous frame generation is key for AGI. Intelligence is based on inferences; frames are the source of inferences.
A frame is composed of a set of variables, and each variable has values.
For narrow AI, frames can be constructed by humans, but an AGI should be able to generate them itself.
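A minimal sketch of a frame as described (my own toy structure; the names `Frame` and `shared_with` are made up): a named set of variables with values, plus a crude matching step in the spirit of variable assimilation.

```python
class Frame:
    """A frame: a named set of variables, each holding a value."""
    def __init__(self, name, **variables):
        self.name = name
        self.variables = dict(variables)

    def shared_with(self, other):
        """Variables whose values agree in both frames -- a crude
        matching step in the spirit of variable assimilation."""
        return {v for v, val in self.variables.items()
                if other.variables.get(v) == val}

eat = Frame("eat", agent="dog", obj="bone", place="yard")
play = Frame("play", agent="dog", obj="ball", place="yard")
shared = eat.shared_with(play)
```

The matched variables (`agent`, `place`) are the kind of relational overlap from which a new, merged frame could be generated.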

Variable Assimilation as frame generation
– variables from both frames are matched and something happens;
– brains intrinsically contain sequential data frames;
– the human brain is thought to generate new frames autonomously to realize high-level cognition;
– many animals also have this kind of ability;
– why not focus on the brain region to get hints for FG?

The neocortex stores frames and activates them, but it cannot perform global VA all by itself;
HCF supports FG as relational indices
– Hippocampal formation for FG;
– HCF associates combinations of stimuli, rather than individual signals, with the meaning;

The question:
– Does a brain simulation have experience?
– How similar is it to human experience?
– The answer isn't empirically determined yet;
– consider scientifically plausible explanations;
Izhikevich and Edelman 2008.

Pan-experientialism:
– claim: experience is a basic capability of every physical resource;
– likely to be part of the causal picture of thought;
– only physical resources organized in the right way are intelligent / conscious;
– Can a glob of plasma be experiencing? It can have some sort of proto-experience / pan-experience;

Evil Alien thought experiment:
– an evil alien shuffles all your neural connections randomly;
– the argument: the physical resources will still be functional, so there will be some experiences, but they will not be consciousness;
– the concept of consciousness is quite independent from experience (or vice versa);

Neural code hypothesis:
– experience is determined by the neural code (neural spikes);
– codes evolve differently in different individuals;
– therefore experience should vary very much across brains / individuals;
– so what will happen if we change the spike trains?
— chemical transmission, EM fields, computation / data: this sort of supports the hypothesis that experience would change;

Diana Decra. The Connectome, WBE and AGI

Paper: Lichtman and Denk. The big and the small: challenges of imaging the brain's circuits
Problems in neuroscience:
– complexity (65 billion neurons, more than 7000 synaptic connections per neuron, etc.);
– imaging electrical and chemical activity; non-linear summation;
– neurons extend over vast volumes; mapping neurons can be very difficult;
– the detailed structure cannot be resolved by traditional light microscopy – this is not a problem any more;
– the need for dense or saturated reconstruction; we need a running movie, not just a 3D picture, because a picture will not show the function;
— many projects running in Connectome communities;
The goal (of neuroscience) is to connect structure and function.

Implications for AGI:
– gathering connectome data;
– they want to model the connectome;
– with the full connectome it would be easy to implement this in silicon (at the atomic scale);

For radical improvement you need not only to reconstruct the connectome, but also to understand the principles;

Cognitive neuroscience;

Randal Koene. Toward Tractable AGI.
Neurolink startup.

Brain-like AGI tries to use nature's knowledge about intelligence;

– Representations and Models
— models = representations;
— you can break nature into pieces which are not entirely independent;
— the question is how these pieces communicate;

Simplification of an intractable system into a collection of system identification (SI) problems:
– SI of observable + internal signals is intractable if the black box is the whole brain;
– instead: many communicating black boxes with accessible I/O;
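A minimal sketch of the decomposition idea (my own illustration under strong assumptions: the two cascaded linear "boxes", the observability of the signal between them, and the least-squares fit are all hypothetical, not Koene's method). Each small box is identified separately from its own input/output record:

```python
# Hypothetical sketch: instead of identifying one intractable black box,
# identify several small communicating boxes from their accessible I/O.
# Here each box is a noise-free linear map fit by ordinary least squares.
import numpy as np

rng = np.random.default_rng(0)

def identify(inputs, outputs):
    """Least-squares fit of a linear model: outputs ~ inputs @ W."""
    W, *_ = np.linalg.lstsq(inputs, outputs, rcond=None)
    return W

# Ground-truth system: two cascaded boxes, y = B(A(x)).
A_true = np.array([[1.0, 0.5], [0.0, 2.0]])
B_true = np.array([[0.3], [1.0]])

x = rng.normal(size=(200, 2))  # stimuli into box A
h = x @ A_true                 # signal between the boxes (assumed observable)
y = h @ B_true                 # output of box B

# Identify each box separately from its own I/O record.
A_hat = identify(x, h)
B_hat = identify(h, y)
print(np.allclose(A_hat, A_true), np.allclose(B_hat, B_true))
```

The design point this illustrates: identifying two small boxes is easy precisely because the signal between them is accessible; with the brain as one closed black box, the same fit would be intractable.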

Tools for structural decomposition:
– open the system and look at the morphology (stacks of EM images);
– data from structure:
— SI for compartments;
— 3D shape;
— invisible parameters?

Recording dynamic properties of the system. For this you have to pick reference points. The resolution of the reference points is important.
– problem specific criteria, not method specific;
– molecular ticker tape by DNA amplification;

Challenges:
– care about the signals; are we looking at the right signals? Electromagnetic fields do matter in the brain;
– what is sufficient data? when do we know that we have enough?
Virtual systems: NETMORPH; netmorph.org
Nemaload
C. Elegans (Dalrymple)
Retina (Briggman)
Berger
Memory from a piece of neural tissue (Seung)

Discussion
– a good gauge of problems – proof of concept;
– SI is not new – many fields can contribute;

From WBE to substrate-independent minds (SIM);
SIM is the notion that the brain is a machine.
carboncopies.org
2045.com

fields:
– computational neuroscience;
– cognitive neuroscience;

Roadmap to AGI

Ben Goertzel
– AGI roadmap workshop 2009;
– The problem is that people cannot agree on the roadmap, but rather are trying to do everything on their own.
There is some agreement about the final goal;
but there is no agreement about how / where / with what to start (language, robots, whatever);
Another problem: you can construct some sort of AGI test for human-level intelligence, but how do you construct a test which measures intermediate progress (25% of human-level AGI)?
Conclusion: we are not going to come up with any consensus.

David & Ben are working on a low-cost robot for the AI community to play with.

Paper: Mapping the landscape of AI;

Joscha Bach
– convergence is a function of funding 🙂
– to get funding you need benchmarks
– developmental perspective – we do not need adults;

David Hanson
– AGI community is an evolutionary ecology;
– so the thing which is needed is infrastructure for this ecology, and then you will (may) have a Cambrian explosion [of AGI research results].
– we have to aim not just at human-level intelligence, but at the intelligence of the best of us (geniuses).

M. Brundage. Limitations and Risks of Machine Ethics.

Stuart Armstrong. How we're predicting AGI

– more or less the same as is written in his blog post on LessWrong.
Conclusions:
– our own opinions are not reliable;
– philosophy has some things to say;
– proposal: increase your uncertainty;
– proposal: decompose your prediction as much as possible;
– do not rely on your gut feeling;

foresight – what may happen in the future;
forecast – what will happen;
prediction – is not something well defined;
foresight is now seen as a valid way to look into the future;
Delphi analysis – the methodological basis of doing foresight (it seems that he is following this).

Motivations, the area of IT/AI foresight:
– the more realistic a foresight project is, the more chances it has (I guess he is talking about European projects);
– AI does not seem to be very realistic;

Development of complex system models (a retrospective):
– there is a long history of building complex system models;
– in the 1930s, Forrester and models with thousands of equations;

Research objectives:
– to elaborate an ICT/IS model suitable for forecasts, scenarios and recommendations;
– there was another one, but I did not catch it…

IS & IT modelling:
– separate models for the major components of the information society;
— it is much more useful to adopt a simple model with accurate parameters than a complex model with many parameters which cannot be estimated;
– specific models

– methodology;
– technologies and models;
– Foresight support system;

EC: National cohesion strategy

– The foresight process is based on an ontological knowledge base, intelligent autonomous web crawlers and analytics.
– Then produced with analytical machines;
– Recommendations were used by stakeholders in the industry;

A large part of the talk is dedicated to the ability of the simulations to run at different speeds, and the implications of this for the social structure of the world.

Varieties of lives: the simulation can choose to remember only what is pleasant, not what is painful (even if statistically more time was painful).

Carl Shulman. Can unsecure WBEs create secure ones?

– you can look at WBEs from outside or from inside;
– in a human lifetime, WBE will be a very short period, because WBEs will run in subjectively shorter times;
– can a human government exert effective control over a territory where:
— time runs 1,000 to 1,000,000 times faster?
– you will need proxies which can operate at much higher speeds in these worlds;
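The scale of that speedup claim is easy to make concrete with back-of-the-envelope arithmetic (my own, not the talk's):

```python
# At a 1,000x to 1,000,000x emulation speedup, how much wall-clock time
# does one subjective year inside the WBE world take?
SECONDS_PER_YEAR = 365 * 24 * 3600  # 31,536,000 seconds

for speedup in (1_000, 1_000_000):
    wall_seconds = SECONDS_PER_YEAR / speedup
    print(f"{speedup:>9,}x: one subjective year = "
          f"{wall_seconds:,.0f} wall-clock seconds")
```

At the high end, a subjective year passes in about 32 seconds of wall-clock time, which is why a human government (or any human proxy) acting on human timescales could not meaningfully govern such a territory.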

The question – do we want an uncontrolled WBE release?
Why not leave this to future generations? (it would be silly to ask Neanderthals to figure out the problems of industrial society)
What could humans know better than WBE humans?

*** Essentially analyzes (as well as the previous talk) a scenario explained in Cory Doctorow’s “Rapture of the Nerds” ***

AGI follows WBE rapidly:
– the idea is that if you create a WBE which can be run at much greater speeds, then you can create AGI very fast in physical time;

A non-competitive WBE period can avoid an AGI arms race;
WBEs and humans will diverge socially, probably because of differences in speed.

Ozakur:
1) What kind of social and political changes would be acceptable for both machines and humans?

1. Autonomous systems:
– a system is autonomous if it takes actions for goals which are not completely specified by the creator;
– the system can surprise the designer;
– there is pressure toward autonomous systems in time-critical applications; it goes in that direction (cites US military reports etc.);