Friday, December 30, 2016

My experience of research was in the 1980s, when I was delving into the mysteries of intelligence, via AI. The process model was this:

Take your most acute/insightful intuition

Find a simple-enough formal model/example

Analyse/program the model

Show some surprising/interesting results

Write/publish your paper.

AI at the time presented a superfluity of interesting problems - truly it was bliss to be a researcher. Only later did I realise that the most profound questions ('What evolutionary pressures selected for natural, human intelligence?') led into a conceptual swamp, where the right questions could not even be identified, let alone solved.

In physics, subject to centuries of theoretical and mathematical development, the situation facing the new postgraduate researcher is overwhelming and intimidating. As Bob Henderson observes,

"What we’d created is called a “toy model”: an exact solution to an approximate version of an actual problem. This, I learned, is what becomes of a colossal conundrum like quantum gravity after 70-plus years of failed attempts to solve it. All the frontal attacks and obvious ideas have been tried. Every imaginable path has hit a dead end. Therefore researchers retreat, set up camp, and start building tools to help them attempt more indirect routes. Toy models are such tools. Rajeev’s river almost certainly didn’t run to The Grail. The hope was that some side stream of one of its many tributaries (Virasoro, Yamabe …) would.

"Actually that was my hope, not Rajeev’s. Rajeev, I believe, just liked doing the math. The thing was a puzzle he could solve, so he solved it. For him that was enough.

"I, of course, had my sights set on bigger game."

Bob Henderson recounts an insightful, often tragic tale of hope and disillusion. Beautifully written to boot. Give it a try.

Thursday, December 29, 2016

It's well-known that the "spasm wars" of the early twenty-first century succeeded in 'bombing mankind back to the stone age', to use a quote popular at the time. As a consequence, records of even earlier times are now almost completely lost.

Recently however, archaeologists from the Institute for Deep Time unearthed a staggering artefact from the city once known as Wells in England's south-west. After much technical effort it has shed new light on the mediaeval period in that country, as well as posing new questions.

The hoard consisted of a number of primitive disks which encoded multimedia reconstructions of the early middle-ages. The language, customs and apparel appear authentic, although the inhabitants of that early era had confused ideas about their own geography.

The trove covers a span of several years, vividly documenting a catastrophic collapse of the state followed by civil war and external invasion.

One major question has puzzled researchers. Towards the end of the record there are indications of order finally being restored and that a new age of peace and prosperity may be around the corner.

But one question remains unresolved: did Daenerys Targaryen ever reclaim the Iron Throne?

"If you want to get stronger, some kind of resistance training is the way to go. So when you walk into the gym, should you start loading plates onto a barbell or should you just walk up to the first easy-to-use machine you spot? Let's compare both approaches.

The space of possible Noughts and Crosses board positions is bounded above by nine factorial, 362,880. It will be less than that because many games terminate before filling all nine squares. There are also symmetries which compact the search space.

"When considering only the state of the board, and after taking into account board symmetries (i.e. rotations and reflections), there are only 138 terminal board positions. Assuming that X makes the first move every time:

91 distinct positions are won by (X)

44 distinct positions are won by (O)

3 distinct positions are drawn (also known as a cat's game)."

Still, it's a pretty large look-up table.

So my first thought was that you should create the entire tree of all possible Noughts and Crosses games which could then be transformed into a look-up table. I guess at this point, the distinction between the table and a set of rules is rather in the noise.
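Before hand-crafting any rules, it's worth confirming those numbers by brute force. A Python sketch (mine, not from the quoted source) that enumerates every distinct game sequence:

```python
# Enumerate every possible Noughts and Crosses game and tally how each ends.
# This counts game *sequences*, not the symmetry-reduced positions quoted above.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),     # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),     # columns
         (0, 4, 8), (2, 4, 6)]                # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def count_games(board, player):
    w = winner(board)
    if w is not None:
        return {w: 1}
    if all(sq is not None for sq in board):
        return {'draw': 1}
    totals = {}
    for i in range(9):
        if board[i] is None:
            board[i] = player
            for k, v in count_games(board, 'O' if player == 'X' else 'X').items():
                totals[k] = totals.get(k, 0) + v
            board[i] = None
    return totals

print(count_games([None] * 9, 'X'))   # 255,168 complete games in all
```

Those 255,168 game sequences collapse to the handful of distinct terminal positions in the quote once board symmetries are folded in, which is what makes a look-up table feasible at all.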

An obvious thought is to chunk: to create higher-level features such as "in the centre", or "at a corner", or "a cross/nought at three corners with an empty space between each of them along an edge".

This is how people think, and it exploits both symmetries and the logic of the gameplay. It also sounds complicated: a viable game strategy needs many more such features.

---

I was thinking about this as the owls hooted and the odd snuffling thing perambulated around our house. I came to two conclusions:

- The AI system should examine each board position (as it is about to play) and check two things:

Is there a winning next move for me?

If not, is there an opponent-winning next move which I can block?

This seems to require a two-ply lookahead.

You wouldn't get perfect play with such a shallow search, but the evaluation function just described is good, and I think you would get at least competence.
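That win-then-block rule is tiny to implement. A Python sketch of exactly that evaluation (squares numbered 0-8, None for empty; the function names are mine):

```python
# Play a winning move if one exists; otherwise block the opponent's
# winning move; otherwise fall back to the first free square.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winning_move(board, player):
    """Return a square completing a line for `player`, or None."""
    for a, b, c in LINES:
        line = [board[a], board[b], board[c]]
        if line.count(player) == 2 and line.count(None) == 1:
            return (a, b, c)[line.index(None)]
    return None

def choose_move(board, me, opponent):
    move = winning_move(board, me)            # 1. can I win right now?
    if move is None:
        move = winning_move(board, opponent)  # 2. if not, must I block?
    if move is None:
        move = next(i for i, sq in enumerate(board) if sq is None)
    return move
```

The fall-back choice is where competence ends and perfect play would need more: fork creation, centre and corner preferences, and so on.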

I also think it would be interesting to implement a full minimax search algorithm - just for the experience of doing so. That would give you perfect gameplay.
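For comparison, a minimal minimax in negamax form (a sketch rather than a polished implementation; scores are +1/0/-1 from the point of view of the player to move):

```python
# Full-depth minimax (negamax form). Exhaustive, so it plays perfectly --
# and confirms that Noughts and Crosses is a draw under best play.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) for the player about to move."""
    w = winner(board)
    if w is not None:
        # A completed line belongs to whoever moved last.
        return (1 if w == player else -1), None
    moves = [i for i, sq in enumerate(board) if sq is None]
    if not moves:
        return 0, None                      # board full: a draw
    other = 'O' if player == 'X' else 'X'
    best_score, best_move = -2, None
    for m in moves:
        board[m] = player
        score = -minimax(board, other)[0]   # opponent's best is our worst
        board[m] = None
        if score > best_score:
            best_score, best_move = score, m
    return best_score, best_move
```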

Noughts and Crosses is a pretty trivial game, but the principles involved in playing it are also implicated in very sophisticated games. So it's a kind of laboratory test bed.

---

My thinking is therefore this.

Start by writing some Noughts and Crosses players in Common Lisp as a learning experience. (It's been years since I programmed in Lisp and I've forgotten the functions and their syntax - in any event, I was using Franz Lisp).

I'm getting a tiny bit bored with the Propositional Calculus, which only lets you solve toy problems, but it's a good place to study search/inference strategies like set-of-support and hyper-resolution without the complexities of unification.

I can just about type a function into the Editor pane, save it, and then download it into the Listener pane for execution. Everything else is a complete mystery.

Is it any wonder that I just went to make myself a cup of coffee .. and that I am now typing this?

---

Update (the following day, Christmas Eve): It wasn't so bad. The manual basically just walks you through the on-screen menus. I only need the Editor and the Listener at this stage. The Stepper is very clear, and looks to be useful.

Thursday, December 22, 2016

Here are the pages - from Chapter 13 of "Logic: Form and Function" - where J. A. Robinson documents and describes his hyper-resolution theorem prover in Lisp. You will find the Lisp interestingly archaic.

My apologies for the poor quality - it was done in a rush. Nevertheless, the material is quite readable. It's published as the PDF link below.

Tuesday, December 20, 2016

Who wants a chatbot experience like navigating a call centre voice response tree?

"Creating interfaces for people to interact with machine learning will be a challenge. At the moment, since the technology is so new, one of the only patterns we know is what Siri showed us on the iPhone.

"Most of the conversations with Siri tend to become an elaborate back and forth. Talk to the device, and it will ask whatever questions it needs to ask to get you what you want. But what that interface turns into is a slightly more usable command-line interface. And while Bash has its adherents even today, Windows won. Conversational interfaces will not be able to replace the rich visual interactions that we’ve become used to over the past decades."

Alex Moore's company, Boomerang, designed their system to streamline the interaction model.

"Our best predictions came from throwing pure neural networks at the problem. The difficulty with that is that the output of those neural networks is complete gibberish to humans. The neural networks provide a matrix of weights that are applied to different parameters that the algorithms determine on their own. If you look at that matrix of weights, there is no way to make sense out of it. You just feed new data into it, let the machine make its calculations, and report a result.

"By contrast, when we used classifiers from the decision tree family of algorithms, we were able to have a data scientist manually review the outputs of the machine learning algorithms for common combinations. We were able to manually translate the machine learning output in those frequent cases into something understandable to people. Rather than put a conversational agent between the analysis and the user, we traded off a slight bit of accuracy for a dramatic decrease in the amount of time users have to spend interacting with the product to receive the value."

We're in the hype phase. Soon we'll get the backlash and maybe a few years out, something which actually works.

Monday, December 19, 2016

They could be babies, those little homunculi, pushed around in kiddie-karts by Google staff. You can't see the high-bandwidth comms, the data links ferrying every perception and mewling cry to DeepMind's server farms.

"She specialises in interviewing candidates for sales positions. Although relatively new to the job, Matlda still knows more than those she grills and uses her arsenal of 76 questions with ruthless efficiency to assess a job hopeful’s skills and professional expertise.

"She starts her 25-minute sessions slowly, reading her interviewee’s emotions and putting them at ease by, for example, playing the music she knows they listen to at home.

Matlda is a 30cm tall robot designed to shortlist job applicants and interview them.

...

“When you are doing face to face interviews and you have 10 candidates, if you liked candidate number four, by the time candidate number seven arrives the decision is already made unless they are something exceptional,” says Prof Khosla. “Matlda gives candidates a fair go.”

" .. a robotic interviewer “monitors [the candidate’s] facial expression”, says Prof Khosla. “Is there any physiological response? Do they seem impassioned when they are relating their experience? That is very important information [to glean from] a face to face interview — it tells an interviewer they relate more intrinsically to that area.”

"The robot compares what it has learnt with the traits of successful employees at the company that is doing the hiring. This is particularly helpful in sales, where turnover is high. “It’s all based on how the emotional responses and cognitive responses benchmark against their most successful candidates in their own culture,” says Prof Khosla."

It seems to work, at least with some people.

"Yu Ang Tan, a software engineer who engaged in a trial interview with Matlda, found the process uplifting. “I feel that a Matlda-enabled interview process gives me the feeling of a more standardised interview process.

“At the same time, having the process conducted by a robot with a cute appearance encouraged me to open up to it in ways I would otherwise hold back with a human interviewer.”

---

This is so obviously a gimmick that I wouldn't normally have wasted your time with the piece. But job interviews have some salience in our household at the moment.

Alex has a panel after Christmas and is currently prepping by studying Scala, a souped-up descendant of Java that I hadn't heard of before. He has bought the 800-page book.

"Please note that the LispWorks Personal Edition, distributed free of charge, has the following intentional limitations:

There is a heap size limit which, if exceeded, causes the image to exit. A warning is provided when the limit is approached.

There is a time limit of 5 hours for each session, after which LispWorks Personal exits, possibly without saving your work or performing cleanups such as removing temporary files. You are warned after 4 hours of use.

The functions save-image, deliver, and load-all-patches are not available.

Initialization files are not loaded.

Layered products that are part of LispWorks Professional and Enterprise Editions (CLIM, KnowledgeWorks, Common SQL and LispWorks ORB) are not included."

But it's well-supported and they have to make money. As with all language/development systems there's that steep initial learning curve. This afternoon it's PDF manual time - if I can summon up the energy.

---

In other news, I'm expecting two tons of bark chippings for the front garden to be delivered this afternoon. To be spread on the anti-weed mats Clare already laid out.

I'm also expecting Clare herself at some point, bearing the Christmas wreath she's meant to have learned how to make at the Bishop's Palace lunchtime class (just finished at time of writing), plus the prizes she won at the Labour Party Christmas raffle last Saturday in Wells Town Hall which need picking up on the Glastonbury Road: a bit of a stroll from the Palace.

"If they're too heavy, give me a ring," I said. "I'll pick you up in the car."

That would be the car which I won't be able to get out of the drive on account of two enormous bags of bark chippings blocking the bottom.

Later: my fears prove groundless as the first bag is placed carefully on the mat.

Thursday, December 15, 2016

John Alan Robinson is a semi-forgotten hero of artificial intelligence (he died four months ago). He discovered the resolution rule (and rediscovered Herbrand's concept of unification) which made automated theorem-proving a computational reality. And he wrote a book which was a classic on how to implement a resolution theorem prover.

His 1979 book, Logic: Form and Function, is out of print and now only available second-hand. It's sunk into obscurity, as you can see from Amazon's terrible image above.

After some efforts, I have managed to order a copy and will do my best to write a review (Google reports no accessible reviews on the Internet). Here is some background on Robinson (Wikipedia).

"Robinson was born in Halifax, Yorkshire, England in 1930 and left for the United States in 1952 with a classics degree from Cambridge University. He studied philosophy at the University of Oregon before moving to Princeton University where he received his PhD in philosophy in 1956. He then worked at Du Pont as an operations research analyst, where he learned programming and taught himself mathematics.

"He moved to Rice University in 1961, spending his summers as a visiting researcher at the Argonne National Laboratory's Applied Mathematics Division. He moved to Syracuse University as Distinguished Professor of Logic and Computer Science in 1967 and became professor emeritus in 1993.

"It was at Argonne that Robinson became interested in automated theorem proving and developed unification and the resolution principle. Resolution and unification have since been incorporated in many automated theorem-proving systems and are the basis for the inference mechanisms used in logic programming and the programming language Prolog."

---

Automated theorem proving (ATP) is the engine of classical AI in the way that neural nets are the engine of contemporary AI.

Deep Learning, which makes everything into a classification problem, has superseded brittle, classical AI not least because high-level features emerge from the weight-training process under some optimisation criteria, rather than being hand-crafted by the programmer.

Every time I turn on the BBC or ITV news I get another concerned presenter gravely describing the latest terrible suffering inflicted by the savage, blistering air and ground assault on Mosul... I mean, Aleppo.

From Patrick Cockburn, The Independent:

"The Iraqi army, backed by US-led airstrikes, is trying to capture east Mosul at the same time as the Syrian army and its Shia paramilitary allies are fighting their way into east Aleppo. An estimated 300 civilians have been killed in Aleppo by government artillery and bombing in the last fortnight, and in Mosul there are reportedly some 600 civilian dead over a month.

"Despite these similarities, the reporting by the international media of these two sieges is radically different.

"In Mosul, civilian loss of life is blamed on Isis, with its indiscriminate use of mortars and suicide bombers, while the Iraqi army and their air support are largely given a free pass. Isis is accused of preventing civilians from leaving the city so they can be used as human shields.

"Contrast this with Western media descriptions of the inhuman savagery of President Assad’s forces indiscriminately slaughtering civilians regardless of whether they stay or try to flee. The UN chief of humanitarian affairs, Stephen O’Brien, suggested this week that the rebels in east Aleppo were stopping civilians departing – but unlike Mosul, the issue gets little coverage.

How to code a resolution theorem prover for the propositional calculus in Lisp (page 232).

---

Resolution is one way to Prove Theorems In Propositional Calculus

Theorem Proving is a highly developed field that requires considerable mathematical sophistication to understand fully. Still, if we limit ourselves to proofs in the propositional calculus, we can build a simple theorem prover. It will be based on the so-called resolution principle (which can be extended to deal with the more difficult problems of the predicate calculus). The key to this theorem prover, again, is a matcher.

A theorem in the propositional calculus, for our purpose, consists of a premise and a conclusion, both expressed in LISP-like prefix notation. The following sets up a typical premise and conclusion:

These are examples of conjunctive normal form since only atoms appear inside NOTs, only NOTs or atoms appear inside ORs, and only ORs appear inside ANDs. Each of the OR lists is called a clause. Each atom or atom with a NOT is called a literal.

Our theorem prover will use the assumption that all inputs are in conjunctive normal form because it can be shown that anything written in terms of ANDs, ORs, and NOTs can be placed in equivalent conjunctive normal form.

■ To prove a theorem, it must be shown that the premise implies the conclusion. For a premise to imply a conclusion, it must be that any combination of T and NIL values for the atoms in the premise that make the premise evaluate to T also makes the conclusion evaluate to T. Or said another way, to prove a theorem, it must be shown that any combination of literal values that makes the premise true also makes the conclusion true.

Clearly, theorems can be proved by simply trying all ways to bind the atoms to T and NIL, checking that no combination causes the premise to evaluate to T and the conclusion to NIL. Resolution offers an alternative. We first list the steps involved to give a general feel for what is involved, and then we will explain why they work.
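That exhaustive truth-table idea is itself only a few lines. A Python sketch (tuples standing in for the book's Lisp prefix notation, which isn't reproduced here):

```python
# Prove premise -> conclusion by checking every assignment of truth values:
# the theorem holds iff no assignment makes the premise true and the
# conclusion false.
from itertools import product

def evaluate(expr, env):
    """Evaluate a prefix expression such as ('and', ('or', 'P'), ...)."""
    if isinstance(expr, str):
        return env[expr]
    op, *args = expr
    if op == 'not':
        return not evaluate(args[0], env)
    if op == 'and':
        return all(evaluate(a, env) for a in args)
    if op == 'or':
        return any(evaluate(a, env) for a in args)
    raise ValueError(op)

def atoms(expr):
    """Collect the atom names appearing in an expression."""
    if isinstance(expr, str):
        return {expr}
    return set().union(*(atoms(a) for a in expr[1:]))

def implies(premise, conclusion):
    names = sorted(atoms(premise) | atoms(conclusion))
    for values in product([True, False], repeat=len(names)):
        env = dict(zip(names, values))
        if evaluate(premise, env) and not evaluate(conclusion, env):
            return False
    return True
```

It's exponential in the number of atoms, of course, which is exactly the cost that resolution tries to avoid paying up front.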

■ Step 1: Negate the conclusion, put it into conjunctive normal form, and combine it with the premise.

■ Step 2: Search for two clauses in which the same atom appears naked in one and negated in the other. Form a new clause by combining everything in the two clauses except for the naked atom and its negation. The two clauses involved are said to resolve. The new clause is called a resolvent. Combine the resolvent with the other clauses.

For the example, the first and the third clauses resolve, producing a resolvent. That is, (OR P) and (OR Q (NOT P)) yield (OR Q). Combining this with the rest of the clauses yields this:

Note that two auxiliary functions are needed. INVERT returns a bare atom if the argument is in the form of (NOT <atom>). It wraps its argument in a NOT if the argument is an atom. COMBINE gets rid of the match-causing atom and its negation and makes sure there are no repeated elements in the resulting clause:

I haven't yet tried to copy and paste this code into a Lisp system so there may be the odd parenthesis missing. After checking it out, my next step would be to remove all those confusing loops and write it in a recursive fashion. Also experiment with different control strategies, such as set-of-support.

This may be a lot shorter but (as is often the case with Prolog programs) its operation is fairly opaque to me. In Lisp you see what you're doing; in Prolog you have to imagine execution through a complex, sprawling and invisible search tree.
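Since the scanned Lisp pages aren't reproduced above, here is my own Python re-sketch of the resolution loop: `invert` plays the role of INVERT, and set operations stand in for COMBINE. This is the standard refutation idea, not Robinson's or Winston's actual code:

```python
# Propositional resolution by refutation: clauses are frozensets of
# literals; a literal is an atom string or ('not', atom). Deriving the
# empty clause from premise + negated conclusion proves the theorem.

def invert(literal):
    """P -> (not P) and (not P) -> P, like the book's INVERT."""
    return literal[1] if isinstance(literal, tuple) else ('not', literal)

def resolve(c1, c2):
    """Yield every resolvent of two clauses (the set union does COMBINE's
    job, removing repeated literals for free)."""
    for lit in c1:
        if invert(lit) in c2:
            yield (c1 - {lit}) | (c2 - {invert(lit)})

def prove(clauses):
    """Saturate under resolution; True means unsatisfiable, i.e. proved."""
    clauses = set(map(frozenset, clauses))
    while True:
        new = set()
        for c1 in clauses:
            for c2 in clauses:
                for resolvent in resolve(c1, c2):
                    if not resolvent:      # the empty clause: contradiction
                        return True
                    new.add(resolvent)
        if new <= clauses:                 # nothing fresh: give up
            return False
        clauses |= new
```

Set-of-support would simply restrict one of the two loops to clauses descended from the negated conclusion.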

Tuesday, December 13, 2016

"Jigsaw — formerly known as Google Ideas — says it intends to spot and remove digital harassment with an automated program called Conversation AI.

[...]

Conversation AI is an offshoot of one of the most successful of Google’s “moonshot” projects, Google Brain. It has helped revolutionize the field of machine learning through large-scale neural networks, and given Google advantages such as software that is more skillful than humans at recognizing images.

But Conversation AI won’t be able to defeat online abuse. Though Jigsaw’s stated goal is to “fight the rise of online mobs,” the program itself is a far more modest—and therefore more plausible—project. Conversation AI will primarily streamline the community moderation that is today performed by humans. So even if it is unable to neutralize the worst behavior online, it might foster more and better discourse on some sites.

Jigsaw is starting Conversation AI at the New York Times, where it will be rolled out in a few months to help the company manage its online comments. Human moderators currently review nearly every comment published on the site. Right now, Conversation AI is reading 18 million of them, learning to detect each individual category of comments that get rejected—insubstantial, off-topic, spam, incoherent, inflammatory, obscene, attack on commenter, attack on author, attack on publisher.

The Times’s goal is not necessarily to reduce abuse in its comments, a problem it already considers under control. Instead, it hopes to reduce the human moderators’ workload. “We don’t ever expect to have a system that’s fully automated,” Erica Greene, engineering manager of the New York Times community team, told me. Times community editor Bassey Etim estimates that somewhere between 50 and 80 percent of comments could eventually be auto-moderated, freeing up employees to devote their efforts to creating more compelling content from the paper’s comment sections.

The New York Times site poses very different problems from the real-time free-for-all of Twitter and Reddit. And given the limitations of machine learning — as it exists today — Conversation AI cannot possibly fight abuse in the Internet’s wide-open spaces. For all the dazzling achievements of machine learning, it still hasn’t cracked human language, where patterns like the ones it can find in Go or images prove diabolically elusive.

The linguistic problem in abuse detection is context. Conversation AI’s comment analysis doesn’t model the entire flow of a discussion; it matches individual comments against learned models of what constitute good or bad comments. For example, comments on the New York Times site might be deemed acceptable if they tend to include common words, phrases, and other features.

But Greene says Google’s system frequently flagged comments on articles about Donald Trump as abusive because they quoted him using words that would get a comment rejected if they came from a reader. For these sorts of articles, the Times will simply turn off automatic moderation."

Let's be clear: this is a good idea. The machine learning system is replacing grunt work currently carried out by hordes of bored human moderators. Hard cases are referred up by the AI. Few people want their reading experience polluted by foul-mouthed, ignorant rants. Moderation is enforced on this publishing outlet too! We all use automated spam filters.

And yet ...

We sometimes forget the downside of prosociality. All this politeness, agreeableness and consensus-seeking must lead eventually to a bland, stifling conformity. When the system is silted up - captured by vested interests spouting the empty language of universality - the only progress is through breaking the rules.

I checked with my mentor, Karl M., and he confirmed that the result is often rather impolite.

---

Anyway, what's done is done. I can only conclude this post with a satiric observation:

Monday, December 12, 2016

"Machine learning is essentially a form of applied statistics with increased emphasis on the use of computers to statistically estimate complicated functions and a decreased emphasis on proving confidence intervals around these functions; we therefore present the two central approaches to statistics: frequentist estimators and Bayesian inference.

"Most machine learning algorithms can be divided into the categories of supervised learning and unsupervised learning; we describe these categories and give some examples of simple learning algorithms from each category.

"Most deep learning algorithms are based on an optimization algorithm called stochastic gradient descent."

I've now skim-read the PDF version. It's plainly an engineering text - plenty of detail for practitioners. As a consequence, it's hard to see the wood for the trees. Machine Learning systems are currently super-sophisticated classifiers under some carefully chosen optimisation criteria (least squares estimation is one of the simplest).

However, many problems can be put into this form.
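For instance, the stochastic gradient descent mentioned in that last quote, applied to the simplest criterion of all (least squares on a one-variable linear fit), is just this (a toy sketch on synthetic data, not from the book):

```python
# Stochastic gradient descent on a least-squares criterion: fit y = w*x + b
# to synthetic data generated from y = 2x + 1.
import random

random.seed(0)
data = [(i / 10, 2.0 * (i / 10) + 1.0) for i in range(50)]

w, b, lr = 0.0, 0.0, 0.01
for step in range(20000):
    x, y = random.choice(data)   # one random sample per step: "stochastic"
    err = (w * x + b) - y        # prediction error on that sample
    w -= lr * 2 * err * x        # gradient of the squared error w.r.t. w
    b -= lr * 2 * err            # gradient of the squared error w.r.t. b

print(round(w, 2), round(b, 2))  # converges towards 2.0 and 1.0
```

Deep learning replaces the two-parameter line with millions of weights and the squared error with fancier losses, but the update rule is recognisably this one.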

I'm still thinking that the only way we're going to understand higher brain functions is by building models in an experimental way. The philosophers in their armchairs have conspicuously failed to deliver on 'the hard problem'.

The key to this is the ability to build and connect enough artificial neuron-type components (Intel has a new chip). A new slogan is called for: 'In the neural capacity lies the consciousness'.

That, and some clever architecture and design ideas - we seem to have no shortage of smart people flocking into this new discipline.

Schrödinger's advice to his physics doctoral candidates: 'Learn more maths!'.

""Anyone grieving for a loved one will soon be able to talk to them using artificial intelligence technology pioneered by a woman who lost her best friend.

"Eugenia Kuyda, co-founder of Luka, a US technology company, has used the latest in AI systems to create a virtual version of Roman Mazurenko, who was killed in a road accident last year, days before his 33rd birthday.

"She fed more than 8,000 lines of text messages sent by Mr Mazurenko into a Google programme that allows people to create their own “chatbots” — computer programmes designed to simulate conversation with human users.

"The chatbot responds to questions in language that mimics Mr Mazurenko’s speech patterns so that she and other friends can “talk” to him as part of the grieving process.

"Ms Kuyda, 29, said that any doubts she had about the technology were allayed when the bot responded to her first questions in a tone that sounded like her friend."

The point of all this is that someone else, interacting with my chatbot-Replika, will think they're talking to "me". But I'm a different person depending on to whom I'm talking. Not everyone gets my flippant sense of humour so occasionally I have to act serious.

That's going to be a tough one.

Then I thought: it's unlikely that Replika (the company) will build a personal back-story, a contextual knowledge-base to power inference and make "my" replies more intelligent.

So why not do it myself?

I could simply copy all those Q&A texts into a .txt file and process them via a program like Eliza plus an automated theorem-prover (ATP).

The easiest way to do this would be in Prolog. The Eliza part isn't so hard: I already implemented a simple version using "The Art of Prolog" by Sterling and Shapiro. Writing an ATP in Prolog would also be interesting (and is also covered in the book).

The strategy is this. Start with the natural language input from those forty texts per day, specifically the questions being asked of me. We can't do inference on natural language so we have to translate it into a formal language with inference rules. The simplest is first-order predicate calculus: the Prolog clause variant would work.

We need the additional knowledge base which I would hand-craft (based on my own text responses) plus some rules to guide conversational inference - getting from an input to producing an appropriate output. All this could be coded as Prolog rules. It's similar to writing a meta-interpreter for Prolog in Prolog, covered in the book.

Finally I would translate the clausal form of the output into natural language as "my" response. A better response than any that a superficial Eliza-chatbot-like architecture could give.
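The Eliza part really is mostly pattern matching. A throwaway Python illustration of the matching idea (the actual plan above is Prolog, à la Sterling and Shapiro; the rules here are invented):

```python
# Eliza-style dialogue: match the input against patterns with capture
# groups and splice the captured words into a canned response template.
import re

RULES = [
    (r'.*\bi am (.+)', "How long have you been {0}?"),
    (r'.*\bi think (.+)', "Why do you think {0}?"),
]

def respond(sentence):
    s = sentence.lower().strip('.!?')
    for pattern, template in RULES:
        m = re.match(pattern, s)
        if m:
            return template.format(*m.groups())
    return "Tell me more."    # default when nothing matches
```

The knowledge-base-plus-inference plan is exactly what this lacks: the matcher shuffles the user's own words around without any model of what they mean.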

You may think, like me, that this is already sounding like a big project.

---

So then I think. No-one but me could ever use this system. (Which kind of undermines the point).

It would have to be embedded in something like SWI-Prolog which would need to be launched first and would only work on my own machine. Getting it to work consistently and robustly across the Internet is most likely impossible.

Perhaps this should be implemented in Javascript so that my expert-system-Replika hybrid could be loaded into a client web page for anyone who wanted to have a conversation with "me"?

Friday, December 09, 2016

"Based on molecular orbital theory, He2 should not exist, and a chemical bond cannot form between the atoms. However, the van der Waals force exists between helium atoms as shown by the existence of liquid helium, and at a certain range of distances between atoms the attraction exceeds the repulsion. So a molecule composed of two helium atoms bound by the van der Waals force can exist. The existence of this molecule was proposed as early as 1930.

He2 is the largest known molecule of two atoms when in its ground state, due to its extremely long bond length. The He2 molecule has a large separation distance between the atoms of about 5,200 picometres. This is the largest for a diatomic molecule without ro-vibronic excitation. The binding energy is only about 1.3 mK, 10⁻⁷ eV or 1.1×10⁻⁵ kcal/mol. The bond is 5,000 times weaker than the covalent bond in the hydrogen molecule."

This news release in my Google Now feed intrigued me: the helium molecule is very, very big.

"Helium atoms are loners. Only if they are cooled down to an extremely low temperature do they form a very weakly bound molecule. In so doing, they can keep a tremendous distance from each other thanks to the quantum-mechanical tunnel effect. As atomic physicists in Frankfurt have now been able to confirm, over 75 percent of the time they are so far apart that their bond can be explained only by the quantum-mechanical tunnel effect.

The binding energy in the helium molecule amounts to only about a billionth of the binding energy in everyday molecules such as oxygen or nitrogen. In addition, the molecule is so huge that small viruses or soot particles could fly between the atoms. This is due, physicists explain, to the quantum-mechanical "tunnel effect."

They use a potential well to illustrate the bond in a conventional molecule. The atoms cannot move further away from each other than the "walls" of this well. However, in quantum mechanics the atoms can tunnel into the walls. "It's as if two people each dig a tunnel on their own side with no exit," explains Professor Reinhard Dörner of the Institute of Nuclear Physics at Goethe University Frankfurt."

The He2 molecule has four electrons, so you might think that the helium nuclei would be held together even more strongly than the protons in H2. However, the He2 molecule is unknown under normal conditions of temperature and pressure: at room temperature, helium gas contains only helium atoms.

We need to consider how the electrons occupy the available molecular orbitals. As with H2, the two orbitals of lowest energy are 1σg and 1σu. In He2, two electrons fill the bonding 1σg orbital, and the other two fill the antibonding 1σu orbital. The two electrons in the antibonding orbital do not help to bind the molecule together; on the contrary, they practically cancel out the effect of the electrons in the bonding orbital. Stable molecules generally have more electrons in bonding orbitals than in antibonding orbitals.
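In the usual molecular-orbital bookkeeping (standard textbook arithmetic, not from the excerpt), that cancellation is exactly a zero bond order:

```latex
\text{bond order} \;=\; \tfrac{1}{2}\,\bigl(n_{\mathrm{bonding}} - n_{\mathrm{antibonding}}\bigr)
\;=\; \tfrac{1}{2}\,(2 - 2) \;=\; 0
```

which is why no covalent He–He bond forms and only the van der Waals attraction is left to do the work.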

Accurate calculations indicate that there is a very shallow minimum in the energy curve of He2 at a radius of R_e = 3×10⁻¹⁰ metres, with a dissociation energy of D_e = 0.0009 eV. This very small dissociation energy is close to the energy of the lowest vibrational state, so detection of He2 molecules requires very low temperatures, and has only been achieved for a beam of helium atoms cooled to 10⁻³ K.
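It's worth checking the arithmetic on why such a cold beam is needed. Comparing the thermal energy k_B·T with the well depth quoted above (standard value of Boltzmann's constant; the 0.0009 eV figure is from the text):

```python
# Why a 10^-3 K beam matters: compare thermal energy k_B*T with the
# tiny He2 well depth quoted above.
k_B = 8.617e-5          # Boltzmann constant, eV/K
D_e = 0.0009            # He2 well depth from the text, eV

T_bind = D_e / k_B      # temperature at which kT equals the well depth
kT_beam = k_B * 1e-3    # thermal energy at the 1 mK beam temperature, eV

print(f"kT = D_e at T ~ {T_bind:.0f} K")
print(f"kT at 1 mK ~ {kT_beam:.1e} eV, some {D_e / kT_beam:.0f}x below D_e")
```

So even at 10 K thermal collisions would tear the dimer apart; at a millikelvin the thermal energy is four orders of magnitude below the well depth.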

Because the energy curve has a very shallow minimum, the molecule samples a range of interatomic distances that are far from the equilibrium value.

Because the energy curve is asymmetric, the average separation of the two nuclei is much greater than the equilibrium separation, and has been estimated to be about 50×10⁻¹⁰ m."
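The point about asymmetry can be demonstrated numerically. Here is a sketch of my own (not from the textbook): solve the 1-D Schrödinger equation by finite differences for a Morse potential, a standard asymmetric well standing in for the real He-He curve, and confirm that the ground-state mean separation lies to the right of the minimum. Parameters and units are arbitrary (ħ = m = 1).

```python
# Sketch: in an asymmetric well the ground-state mean position exceeds
# the equilibrium position. Morse potential, finite-difference Hamiltonian.
import numpy as np

D, a, x_e = 5.0, 1.0, 0.0              # well depth, width parameter, minimum
x = np.linspace(-3.0, 20.0, 800)
dx = x[1] - x[0]
V = D * (1.0 - np.exp(-a * (x - x_e)))**2 - D   # Morse well, bottom at -D

# Finite-difference Hamiltonian H = -(1/2) d^2/dx^2 + V
off = -0.5 / dx**2 * np.ones(len(x) - 1)
H = np.diag(1.0 / dx**2 + V) + np.diag(off, 1) + np.diag(off, -1)
E, psi = np.linalg.eigh(H)             # eigenvalues in ascending order

ground = psi[:, 0]
prob = ground**2 / np.sum(ground**2 * dx)       # normalised |psi|^2
mean_x = np.sum(x * prob * dx)

print(f"E0 = {E[0]:.3f}  (well bottom at {-D})")
print(f"<x> = {mean_x:.3f}  vs equilibrium x_e = {x_e}")
# The shallow outer wall lets the wavefunction leak outwards, so <x> > x_e.
```

For He2 the well is so shallow that this leakage is extreme, which is how the average separation ends up at ~50 Å against an equilibrium value of ~3 Å.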

---

To get us up to speed, consider the hydrogen molecule ion: two protons and one electron. The electron wavefunction is decisive in this molecule. Below are the relevant diagrams for the hydrogen molecule ion using the Born-Oppenheimer approximation and LCAO trial functions for the molecular wave function (p. 148).

"The trial function is taken to be a linear combination of atomic orbitals centred on each of the nuclei that form the molecule. This method is known as the linear combination of atomic orbitals, frequently abbreviated to LCAO. The resulting one-electron eigenfunctions for the molecule are called molecular orbitals. We shall now apply the LCAO method to the electronic ground state of the hydrogen molecule ion."
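The LCAO recipe for H2+ can be carried through in closed form. The sketch below is mine, not the book's: it uses the standard textbook expressions (atomic units) for the overlap, Coulomb and exchange integrals between two 1s orbitals, forms the bonding-orbital energy E+ = (H_aa + H_ab)/(1 + S), adds the nuclear repulsion 1/R, and finds the minimum of the resulting curve.

```python
# LCAO sketch for the H2+ ground state (atomic units: hartree, bohr).
# S, J, K are the standard closed-form integrals for two 1s orbitals.
import numpy as np

def lcao_energy(R):
    S = np.exp(-R) * (1 + R + R**2 / 3)        # overlap <a|b>
    J = (1 - (1 + R) * np.exp(-2 * R)) / R     # Coulomb  <a|1/r_b|a>
    K = (1 + R) * np.exp(-R)                   # exchange <a|1/r_a|b>
    H_aa = -0.5 - J                            # -1/2 is the 1s energy
    H_ab = -0.5 * S - K
    return (H_aa + H_ab) / (1 + S) + 1 / R     # bonding MO + nuclear repulsion

R = np.linspace(1.0, 8.0, 1401)
E = lcao_energy(R)
i = np.argmin(E)
print(f"LCAO minimum: R_e ~ {R[i]:.2f} bohr, E ~ {E[i]:.4f} hartree")
# Textbook LCAO answer: R_e ~ 2.5 bohr, ~1.8 eV below H + p at -0.5 hartree
# (the exact values are 2.00 bohr and 2.79 eV, so LCAO is crude but bound).
```

Crude as it is, the trial function gets the qualitative physics right: a bonding orbital whose energy dips below that of a separated hydrogen atom and proton.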

---

---

So now we're ready to look at the corresponding energy/probability-density vs separation graph for the helium dimer (from here). The weak binding requires that we consider the wave-function of the two helium nuclei.

The weakness of the helium-helium van der Waals potential lets the particle probability density leak far into the classically forbidden region (i.e., tunneling). Although the classical turning point is located at 13.6 Å, the overall wavefunction extends to more than 200 Å: the size of fullerenes, the diameter of DNA, and even small viruses.

50 Å (Angstrom units) = 5,000 picometres.
I suspect that R measures the distance from the nucleus-nucleus midpoint (in spherical coordinates), while the numbers quoted in the caption above are the nucleus-nucleus separation distances (2R). Ψ is the two-nuclei wave-function. Given the enormous nucleus-nucleus separation, the electrons will be quite tightly bound to their respective nuclei.

"Quantum tunneling is a ubiquitous phenomenon in nature and crucial for many technological applications. It allows quantum particles to reach regions in space which are energetically not accessible according to classical mechanics.

"In this tunneling region the particle density is known to decay exponentially. This behavior is universal across all energy scales from MeV in nuclear physics, to eV in molecules and solids, and to neV in optical lattices. For bound matter the fraction of the probability density distribution in this classically forbidden region is usually small.

"For shallow short range potentials this can change dramatically: upon decreasing the potential depth excited states are expelled one after the other as they become unbound.

"A further decrease of the potential depth affects the ground state as well, as more and more of its wavefunction expands into the tunneling region. Consequently, at the threshold (i.e. in the limit of vanishing binding energy) the size of the quantum system expands to infinity.

"For short range potentials this expansion is accompanied by the fact that the system becomes less classical and more quantum-like. Systems existing near that threshold (and therefore being dominated by the tunneling part of their wave-function) are called quantum halo states.

"One of the most extreme examples of such a quantum halo state can be found in the realm of atomic physics: the helium dimer (He2)."
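The exponential decay mentioned above has a characteristic length 1/κ = ħ/√(2μE_b), where μ is the reduced mass and E_b the binding energy. For He2 I'll plug in a binding energy of ~1.5×10⁻⁷ eV, an assumed order-of-magnitude figure of roughly the size reported for the dimer; the sketch is mine, not from the paper.

```python
# Decay length of the tunneling tail: psi ~ exp(-kappa * r), with
# kappa = sqrt(2 * mu * E_b) / hbar. E_b here is an assumed,
# order-of-magnitude figure for the He2 binding energy.
import math

hbar = 1.0546e-34           # J*s
u = 1.6605e-27              # atomic mass unit, kg
mu = 4.0026 * u / 2         # reduced mass of two He-4 atoms, kg
E_b = 1.5e-7 * 1.602e-19    # assumed binding energy (~150 neV), in J

kappa = math.sqrt(2 * mu * E_b) / hbar
decay_length = 1 / kappa
print(f"1/kappa ~ {decay_length * 1e10:.0f} angstrom")
```

A decay length of tens of ångströms: with the tail falling off that slowly, it is no surprise the wavefunction remains measurable out beyond 200 Å.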

James McGill is an everyteenboy action hero. The plotting is absurd and the tempo unrelenting. The writing is better than Lee Child: page-turning. Back to more serious stuff soon.

---

Moving animals, on living in the moment.

“One of your most ancient writers, a historian named Herodotus, tells of a thief who was to be executed. As he was taken away he made a bargain with the king: in one year he would teach the king's favorite horse to sing hymns. The other prisoners watched the thief singing to the horse and laughed. "You will not succeed," they told him. "No one can."

To which the thief replied, "I have a year, and who knows what might happen in that time. The king might die. The horse might die. I might die. And perhaps the horse will learn to sing.”

Wednesday, December 07, 2016

Clare likes to check out Stourhead in early December, sifting ideas for Christmas decorations. After the House inspection, lunch in the Spread Eagle (pub on the estate) and then a walk around the lake.

I did a small video where, cued by Clare's earlier talk of coots, I spectacularly misidentified a bunch of ducks. I believe we might be having roast coot for Christmas lunch.*

Here are some pictures. Note especially the waterfall and water-wheel you can't really see in the video.

Low Winter Sun

The Waterfall and Water-Wheel

Across the lake

This was actually the best shot of the two of us ..

Just burnishing that hipster look ...

After we had eaten, I had a scare that I'd lost my mobile phone. Thank God for long PIN sequences and remote wipe! It had slipped out of my coat pocket and lodged between cushions. I was out of communication for, I don't know, at least ten minutes!

Better to return to the soft-headed but so-romantic leftism of my youth.

It turns out that left-wing Christmas cards are hard to find. There's nothing useful on Amazon, and Google isn't much better. I draw the line at those incompetent authoritarians Stalin and Mao, and have little time for the Fidelistas.

I looked, but the Fourth International (which increasingly espouses only trendy identity politics) isn't doing cards this (or any) year.