A fluid-filled tip will not see the external triphasic spikes of vertebrate axons above the noise level.

The metal probe is the most useful.

A Pt electrode in CSF behaves like a capacitor at low voltage across a broad frequency range. CSF contains compounds that retard oxidation; the impedance is more resistive in physiological saline.

The noise voltage generated by a metal electrode is best specified by an equivalent noise resistance at room temperature: E_rms(noise) = sqrt(4 k T R_n Δf), where R_n should equal the real part of the electrode impedance at the same frequency.
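As a sanity check on the Johnson-noise formula, a quick numeric sketch (the 1 MΩ resistance and 10 kHz bandwidth are illustrative values I picked, not figures from the paper):

```python
import math

# Johnson (thermal) noise: E_rms = sqrt(4 k T R_n * delta_f)
k = 1.380649e-23   # Boltzmann constant, J/K
T = 293.0          # room temperature, K
R_n = 1e6          # equivalent noise resistance, ohms (illustrative 1 Mohm tip)
delta_f = 1e4      # recording bandwidth, Hz (illustrative 10 kHz)

E_rms = math.sqrt(4 * k * T * R_n * delta_f)
print(f"{E_rms * 1e6:.1f} uV rms")  # -> 12.7 uV rms
```

On the order of ten microvolts rms — comparable to the external spikes being recorded, which is why the noise resistance matters.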

Much of the problem is electrochemistry: solid AgCl diffuses away from an electrode tip with great speed and can hardly be formed continuously with an imposed current. Silver forms extremely stable complexes with organic molecules having attached amino and sulfhydryl groups, which occur in abundance where the electrode damages the tissue. Finally, the reduction-oxidation potential of axoplasm is low enough to reduce methylene blue, which places it below hydrogen: AgCl and HgCl are reduced.

The external current of a nerve fiber is the second (spatial) derivative of the traveling spike: the familiar triphasic transient.
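To see why the second derivative is triphasic, a toy sketch (the Gaussian spike shape is my idealization, not from the paper): differentiating a single-peaked traveling spike twice yields a waveform with two zero crossings, i.e. three phases.

```python
import numpy as np

# Model the traveling spike as a Gaussian of membrane potential along the fiber;
# the external current is proportional to its second spatial derivative.
x = np.linspace(-5, 5, 1001)
spike = np.exp(-x**2)                          # idealized intracellular spike
ext = np.gradient(np.gradient(spike, x), x)    # discrete second derivative

# Count sign changes of the external current (ignoring near-zero samples):
signs = np.sign(ext[np.abs(ext) > 1e-6])
crossings = np.count_nonzero(np.diff(signs))
print(crossings)  # -> 2 zero crossings, i.e. a triphasic (+, -, +) transient
```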

Svaetichin [1] and Dowben and Rose [3] plated with platinum black, which increases the effective surface area.

Very quickly it burns onto itself a shell of very adherent material, which keeps it from intimate contact with the surrounding tissue.

We found that if we add gelatin to the chloroplatinic acid bath from which we plate the Pt, the ball is not only made adherent to the tip but is, in a sense, prepoisoned and does not burn a shell into itself.

He thinks about things in a slightly different way: he separates what I call solutions and objective functions into "post-" and "pre-representational levels" (respectively).

The thesis focuses on post-representational search/optimization, not pre-representational (though I believe both should meet in the middle - e.g. pre-representational levels / objective functions tuned iteratively during post-representational solution creation. This is what a human would do!)

The primary difficulty in competent program evolution is the intense non-decomposability of programs: every variable, constant, and branch affects the execution of every other little bit.

Competent program creation is possible - humans create programs significantly shorter than lookup tables - hence it should be possible to make a program that does the same job.

One solution to the problem is representation - formulate program creation as a set of 'knobs' that can be twiddled (here he means both gradient-descent partial-derivative optimization and simplex or heuristic one-dimensional probabilistic search, of which there are many good algorithms).
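A toy sketch of the knob idea (my own illustration, not MOSES itself): fix a program template whose free choices are discrete knobs, then let the search twiddle knobs instead of raw syntax. Here the hypothetical template is `op(maybe-negated a, maybe-negated b)` and the target behavior is XOR.

```python
import itertools

# Knobs: which operator, and whether to negate each input.
OPS = {"and": lambda a, b: a and b,
       "or":  lambda a, b: a or b,
       "neq": lambda a, b: a != b}

def program(op_name, negate_a, negate_b):
    """Template instance: op(a XOR negate_a, b XOR negate_b)."""
    op = OPS[op_name]
    return lambda a, b: op(a != negate_a, b != negate_b)

def score(prog):
    """Fitness: number of input cases matching XOR (max 4)."""
    cases = [(a, b) for a in (False, True) for b in (False, True)]
    return sum(prog(a, b) == (a != b) for a, b in cases)

# The knob space is tiny, so exhaustive twiddling stands in for hill-climbing.
best = max(itertools.product(OPS, (False, True), (False, True)),
           key=lambda knobs: score(program(*knobs)))
print(best, score(program(*best)))  # a perfect-scoring knob setting, score 4
```

The point is that the search never touches program text; it moves through a small, semantically structured space defined by the template.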

The representation step above "explicitly addresses the underlying (semantic) structure of program space independently of the search for any kind of modularity or problem decomposition."

In MOSES, optimization does not operate directly on program space, but rather on subspaces defined by the representation-building process. These subspaces may be considered as being defined by templates assigning values to some of the underlying dimensions (e.g., they restrict the size and shape of any resulting trees).

In chapter 3 he examines the properties of the boolean programming space, which is claimed to be a good model of larger/more complicated programming spaces in that:

Simpler functions are much more heavily sampled - e.g. he generated 1e6 samples of 100-term boolean functions, then reduced them to minimal form using standard operators. The vast majority of the resulting minimum-length (compressed) functions were simple: tautologies or functions of a few terms.
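A rough, scaled-down sketch of this experiment (my own toy, not his procedure: truth tables over 3 variables stand in for his reduction-to-minimal-form step, and the formula grammar and sizes are arbitrary choices):

```python
import random
from collections import Counter

random.seed(0)
N_VARS, N_SAMPLES = 3, 10000

def random_formula(depth=4):
    """Random and/or/not formula tree over N_VARS variables."""
    if depth == 0 or random.random() < 0.3:
        return ("var", random.randrange(N_VARS))
    op = random.choice(["and", "or", "not"])
    if op == "not":
        return ("not", random_formula(depth - 1))
    return (op, random_formula(depth - 1), random_formula(depth - 1))

def evaluate(f, env):
    if f[0] == "var": return env[f[1]]
    if f[0] == "not": return not evaluate(f[1], env)
    a, b = evaluate(f[1], env), evaluate(f[2], env)
    return a and b if f[0] == "and" else a or b

def truth_table(f):
    """The formula's behavior: its output on all 2^N_VARS inputs."""
    return tuple(evaluate(f, [bool(i >> v & 1) for v in range(N_VARS)])
                 for i in range(2 ** N_VARS))

counts = Counter(truth_table(random_formula()) for _ in range(N_SAMPLES))
top = counts.most_common(5)
print(len(counts), "distinct behaviors;",
      sum(c for _, c in top) / N_SAMPLES, "of sample mass in the top 5")
```

The distribution comes out heavily skewed: constants and single-variable behaviors soak up most of the sample mass, consistent with his point that simple functions are vastly over-represented.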

A corollary is that simply increasing syntactic sample length is insufficient for increasing program behavioral complexity / variety.

Actually, as random program length increases, the percentage with interesting behaviors decreases due to the structure of the minimum length function distribution.

Also tests random perturbations to large boolean formulae (variable replacement/removal, operator swapping) - ~90% of these do nothing.

These randomly perturbed programs show a similar structure to the above: most of them behave very similarly to their neighbors; only a few have unique behaviors. Makes sense.
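A toy reconstruction of the perturbation test (my own sketch: and/or trees only, operator swaps only - he also tested variable replacement/removal - and much smaller formulae than his):

```python
import random

random.seed(1)
N_VARS = 5

def rand_tree(depth):
    """Random and/or formula tree; lists so nodes can be mutated in place."""
    if depth == 0:
        return ["var", random.randrange(N_VARS)]
    return [random.choice(["and", "or"]), rand_tree(depth - 1), rand_tree(depth - 1)]

def ev(f, env):
    if f[0] == "var": return env[f[1]]
    a, b = ev(f[1], env), ev(f[2], env)
    return a and b if f[0] == "and" else a or b

def table(f):
    return tuple(ev(f, [bool(i >> v & 1) for v in range(N_VARS)])
                 for i in range(2 ** N_VARS))

def swap_random_op(f):
    """Flip and<->or at one randomly chosen internal node."""
    nodes = []
    def walk(n):
        if n[0] != "var":
            nodes.append(n)
            walk(n[1]); walk(n[2])
    walk(f)
    tgt = random.choice(nodes)
    tgt[0] = "or" if tgt[0] == "and" else "and"

unchanged, TRIALS = 0, 300
for _ in range(TRIALS):
    f = rand_tree(depth=6)       # 63 internal nodes
    before = table(f)
    swap_random_op(f)
    unchanged += table(f) == before
print(unchanged / TRIALS)  # a large fraction of swaps are semantically inert
```

Most swaps land in subtrees that are masked (absorbed) by the rest of the formula, so the truth table never changes - the same neutrality he reports at larger scale.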

Run the other way: "syntactic space of large programs is nearly uniform with respect to semantic distance." Semantically similar (boolean) programs are not grouped together.

The results seem somewhat of a let-down: the program does not scale to even moderately large problem spaces. No loops, only functions with conditional evaluation - Jacques Pitrat's results are far more impressive. {815}

Seems that, still, there were a lot of meta-knobs to tweak in each implementation. Perhaps this is always the case?

My thought: perhaps you can run the optimization not on program representations, but rather program codepaths. He claims that one problem is that behavior is loosely or at worst chaotically related to program structure - which is true - hence optimization on the program itself is very difficult. This is why Moshe runs optimization on the 'knobs' of a representational structure.

http://www.dana.org/news/cerebrum/detail.aspx?id=3066 -- great article, with a well-thought-out, delicate treatment of the ethical/moral/legal issues created by the interaction between the biological roots of violence (or knowledge thereof) and legal/social systems. He posits that there must be a continuum between rational free will and irrational, impulsive violent behavior, with people biased toward both by genetics, development, traumatic head injury, and substance abuse (among others).